
Searching for fairness – five ways to avoid bias in data-led decision-making

Dr. Paul Barth, Global Head of Data Literacy, Qlik



Yes or no? Left or right? Now or later? We make hundreds of decisions every day and though we may believe we are rational creatures, unconscious bias and misleading data can steer us in the wrong direction.

Even with troves of data at our fingertips, bias continues to play a prominent role in decision-making. As we consider our options, we have a “gut feel” for the right choice. With little information, we “jump to conclusions.” In a group setting, we don’t want to “rock the boat.” These are all decision-making shortcuts honed by years of personal experience – rules hard-wired in our brains that allow us to decide and act quickly. Often these rules lead us in the right direction, but not always.

Data is a critical tool for countering bias. Reviewing sales data can tell us if our new product features are as popular as we expected. We can spot biased hiring practices by analyzing the diversity of our employees. But bias can infiltrate even data-driven decision-making. For example, if our workforce analysis looks only at current employees, we are excluding an important population: candidates who chose not to join us, employees who have left, and, increasingly, candidates whom an algorithm filtered out of consideration. This data skew, known as survivor bias, can paint a very misleading picture and lead to bad decisions.
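To make that pitfall concrete, here is a minimal sketch in Python using pandas, with entirely made-up, illustrative numbers (the names, scores, and categories are hypothetical, not real workforce data): an analysis restricted to current employees looks far rosier than one that also counts the people who left or never joined.

```python
import pandas as pd

# Hypothetical data (illustrative numbers only): everyone who interacted with
# the organization, not just the people still employed.
people = pd.DataFrame({
    "status": ["current"] * 6 + ["left"] * 3 + ["declined_offer"] * 3,
    "experience_score": [8, 7, 9, 8, 7, 8,   # current employees
                         3, 4, 2,            # employees who left
                         4, 3, 5],           # candidates who chose not to join
})

# Survivor-biased view: only the people still with us.
survivors_only = people.loc[people["status"] == "current", "experience_score"].mean()

# Fuller picture: include leavers and candidates who declined.
everyone = people["experience_score"].mean()

print(f"Current employees only: {survivors_only:.1f}")    # ~7.8 -- looks healthy
print(f"Everyone who interacted with us: {everyone:.1f}")  # ~5.7 -- a different story
```

The point is not the specific numbers but the framing: the question “how satisfied are our people?” gets a very different answer depending on whether the people who are no longer in the dataset are counted at all.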

Cognitive biases can lead us to poor decisions even when good data is available. Confirmation bias leads us to look only at data that supports our assumptions. Anchoring bias encourages us to stick with our initial conclusion, even if data emerges that refutes it. And availability bias favors familiar, top-of-mind options over better, lesser-known alternatives. These biases are unconscious, and overcoming them requires us to explicitly challenge ourselves and our assumptions.

At QlikWorld, I hosted a fascinating session with Dr. Hannah Fry and Yassmin Abdel-Magied, during which we covered several issues around data bias and the role of data literacy in helping us avoid it. Here are my five key takeaways from the conversation:

1) It’s impossible to completely eradicate bias – Dr. Fry explained that every system contains some bias and none can be perfectly objective or fair. Therefore, we shouldn’t focus on removing bias, but rather on recognizing which biases are prevalent. With this awareness we can identify where bias may cause harm or lead to ill-informed decisions, and bring in additional data and perspectives to fill the gaps.

2) Technology will inherit our biases – As we automate data-driven systems with algorithms, we need to avoid embedding our biases in them. If our hiring practices have been biased, AI will learn those preferences from our data (a simple illustration follows this list). Before we feed data into these models, it’s key to build a complete picture of our objective. What information is missing that would lead to a better decision? Who are we not considering that we should be?

3) Combine the unique strengths of humans and machines – There are countless tasks where machines outperform humans, and the list continues to grow. So, do we cut out human involvement altogether? Using examples from aviation and energy, the panelists described instances where automated processes had to be overridden by expert humans to avoid disaster. Increasingly, business processes are human-machine collaborations, where automated steps are governed by human judgment. In addition to well-designed algorithms, humans need to be trained to understand data so that when obvious biases appear, they can step in confidently. Start with human capabilities and design the technology to make up for our shortcomings, not the other way around.

4) Diverse teams can reduce the risk of bias – A like-minded team of familiar colleagues is often able to come to a decision quickly – but not always for the best. We discussed why more diverse teams make better decisions. In these teams, individuals bring different, often unfamiliar, perspectives to the decision. While this can be uncomfortable, it challenges the assumptions and shortcuts used by peers with a common background. Working as part of a diverse team – whether in gender, ethnicity, or thought – makes us more rigorous and receptive to new ideas. That discomfort you feel equals growth, and importantly, mitigates the risk of bias. Hiring people with a range of perspectives and giving them a voice can help businesses progress here.

5) Upskilling in data literacy is crucial – As data is increasingly democratized, all business roles are going to use more data, not less, which comes with the responsibility to use it in an unbiased way. In this context, training employees to navigate these challenges is imperative to making informed decisions. Most data analysis is common sense, and the tools and terminology can be learned quickly. To avoid bias, we need to ensure that partial information does not go unchallenged simply because people lack the confidence to question it. If we live in an organization or society where we can’t have factual and robust conversations about the decisions that affect us all because we don’t feel equipped to do so, we will most certainly fall victim to our biases.
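Returning to point 2 above, here is a minimal sketch of how an algorithm inherits bias from historical decisions. The data is entirely synthetic and the model is a generic logistic regression, not any particular vendor’s system: past hiring decisions penalized one group, and even when the group label itself is excluded, a correlated proxy feature lets the model reproduce the same disparity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical historical hiring data (entirely synthetic).
# "group" is a protected attribute; past decisions penalized group 1
# even at the same qualification level.
n = 5000
group = rng.integers(0, 2, size=n)
qualification = rng.normal(0, 1, size=n)
hired = (qualification - 1.0 * group + rng.normal(0, 0.5, size=n)) > 0

# Train on those historical decisions. The group label is excluded as a
# feature, but a correlated proxy (think postcode or hobby signal) sneaks in.
proxy = group + rng.normal(0, 0.3, size=n)
X = np.column_stack([qualification, proxy])
model = LogisticRegression().fit(X, hired)

# Equally qualified candidates, differing only in the proxy value typical of
# each group: the model reproduces the historical disparity.
test_qual = np.zeros(2)
test_proxy = np.array([0.0, 1.0])
scores = model.predict_proba(np.column_stack([test_qual, test_proxy]))[:, 1]
print(f"Predicted hire probability, group 0 proxy: {scores[0]:.2f}")
print(f"Predicted hire probability, group 1 proxy: {scores[1]:.2f}")
```

Nothing in the model is malicious; it simply learns the pattern it is shown. That is why auditing the training data and asking who is missing from it matters as much as tuning the algorithm.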

You can view the full conversation, along with a range of other sessions from QlikWorld, on demand here: Best of QlikWorld Online 2021 | Qlik.