The issue of bias in artificial intelligence is not going away any time soon. Bias is a tricky term in general, and psychologists have written long treatises trying to explain what it is and how it works.

The current discussion around bias in AI, however, is a little off the mark, largely because it declares that the objective is to remove AI bias altogether. This glosses over two salient facts: one, that there are many types of bias, some good, some bad, depending on your point of view; and two, that bias exists in two separate elements of AI, the algorithm and the training data, but in neither case does it automatically produce an unfair outcome.

Expanding the data pool

While it is true, as Iterate.ai's Shomron Jacobs explained to VentureBeat recently, that great care should be taken to weed out bias in the training data, the actual algorithm will often produce better results if bias (the right kind of bias) is programmed into it. For instance, if the data fed into a skin cancer screening AI were to come from white men only, it would likely give inaccurate results for people with darker skin and for women. The solution is to expand the data pool in both size and diversity so the system functions on a broader spectrum of patients. In this way, we have removed bias from the training data.
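
To make this concrete, here is a minimal sketch of the kind of pre-training audit that can surface such a skew; the record fields and the 10% threshold are hypothetical, not a clinical standard:

```python
from collections import Counter

def audit_representation(records, group_key, min_share=0.10):
    """Flag any demographic group whose share of the training data
    falls below min_share. The threshold is illustrative only."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {group: count / total
            for group, count in counts.items()
            if count / total < min_share}

# Hypothetical skin-lesion dataset skewed toward lighter skin tones.
records = [{"skin_tone": "light"}] * 95 + [{"skin_tone": "dark"}] * 5
print(audit_representation(records, "skin_tone"))  # {'dark': 0.05}
```

An audit like this only tells you where the pool is thin; the fix the article describes is to go out and collect more diverse data, not to resample what you already have.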

But let’s compare this to the algorithm itself. A fully unbiased algorithm will reach a conclusion based on only one criterion, with no other outside influences allowed. In the case of, say, a college admissions screener, that one criterion might be academic performance. But this is bound to skew results toward the wealthy and privileged and away from the poor and disadvantaged. By bringing other factors into the algorithm, essentially increasing its bias toward considerations other than academics, the AI ends up accounting for the bias that exists in the real world. So when it comes to the way algorithms are developed, the goal should be to increase bias (again, the right kind of bias) rather than eliminate it.
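
A toy sketch of that contrast, with made-up applicants and illustrative weights (not a recommended admissions policy):

```python
def score_single(applicant):
    """The 'fully unbiased' scorer: academic performance only."""
    return applicant["gpa"] / 4.0

def score_contextual(applicant):
    """A deliberately biased scorer: academics weighted alongside
    context. The weights are illustrative, not a real policy."""
    return (0.6 * applicant["gpa"] / 4.0
            + 0.3 * applicant["adversity_index"]
            + 0.1 * applicant["activities_score"])

# Hypothetical applicants; adversity and activities normalized to 0-1.
privileged    = {"gpa": 3.8, "adversity_index": 0.1, "activities_score": 0.5}
disadvantaged = {"gpa": 3.2, "adversity_index": 0.9, "activities_score": 0.5}

# Single criterion: the privileged applicant always wins.
print(f"{score_single(privileged):.2f} vs {score_single(disadvantaged):.2f}")
# 0.95 vs 0.80

# With context weighted in, the ranking can even flip.
print(f"{score_contextual(privileged):.2f} vs {score_contextual(disadvantaged):.2f}")
# 0.65 vs 0.80
```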

Rather than saying we strive for unbiased AI, it would be clearer to focus on developing AI that is fair. In a recent interview with Harvard Magazine, Meredith Broussard (author of Artificial Unintelligence: How Computers Misunderstand the World) points out the distinction between “mathematical fairness” and “social fairness,” asserting that technology is not necessarily the best way to produce the latter. We have reached a point where hidden algorithms make a wealth of decisions, many of them personal and private, and with the computing industry having been dominated by white men since its inception, those algorithms are undoubtedly biased in that direction. Unbiased AI will simply ignore this fact, while a properly biased AI will account for it and attempt to right the scales.
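
One common formalization of the “mathematical fairness” side of that distinction is demographic parity, the idea that positive-outcome rates should be similar across groups. A minimal sketch with hypothetical decisions shows how easily the metric is computed, and how little it says on its own about social fairness:

```python
def demographic_parity_gap(outcomes, groups):
    """Positive-outcome rate per group and the largest gap between
    groups. A small gap satisfies this mathematical criterion, but
    that alone does not guarantee social fairness."""
    rates = {}
    for g in sorted(set(groups)):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical approve/deny decisions (1 = approve) for two groups.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(outcomes, groups))
# (0.5, {'a': 0.75, 'b': 0.25})
```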

Intentional bias

We should also be careful not to stamp out bias in the training data completely, says Dr. Min Sun, chief AI scientist at Appier. If, for example, you are training an AI to predict buying sentiment for one market segment, you don’t want to feed it data from another segment. Providing only the relevant data will produce better results from the model’s earliest iterations and ultimately maximize its return. And, of course, the user will know that the model was trained with deliberately biased data and can interpret the results in the right context.
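
A minimal sketch of that kind of intentional filtering, with hypothetical purchase records; the provenance record is an assumption about good practice, not a description of Appier’s tooling:

```python
def segment_training_data(records, segment_key, target_segment):
    """Keep only records from the market segment the model will serve
    (an intentional, documented bias), and record the restriction so
    users can interpret the model's output in the right context."""
    kept = [r for r in records if r[segment_key] == target_segment]
    provenance = {
        "segment": target_segment,
        "kept": len(kept),
        "dropped": len(records) - len(kept),
    }
    return kept, provenance

# Hypothetical purchase records spanning two segments.
records = [
    {"segment": "apparel", "spend": 120},
    {"segment": "electronics", "spend": 640},
    {"segment": "apparel", "spend": 75},
]
train_set, provenance = segment_training_data(records, "segment", "apparel")
print(provenance)  # {'segment': 'apparel', 'kept': 2, 'dropped': 1}
```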

This last point is key, because only by understanding algorithmic bias and incorporating it correctly can we build the trust in AI that is so vital to its acceptance. A recent report by PwC pointed out that most biases tend to creep into AI unintentionally, both in the coding of the algorithm and in the selection of training data. This means organizations must actively counter such bias by fostering diversity in the workforce, training employees to spot biases (including their own), and constantly monitoring the output of AI processes to ensure that the results are fair.
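
As a sketch of what that ongoing monitoring might look like in practice (the 0.2 alert threshold and the batch data are hypothetical):

```python
def monitor_batch(decisions, threshold=0.2):
    """Compare positive-outcome rates across groups in one batch of
    model output and alert when the gap exceeds a threshold. The
    threshold is illustrative; a real policy needs domain review."""
    by_group = {}
    for group, outcome in decisions:
        by_group.setdefault(group, []).append(outcome)
    rates = {g: sum(v) / len(v) for g, v in by_group.items()}
    gap = max(rates.values()) - min(rates.values())
    if gap > threshold:
        print(f"ALERT: outcome gap {gap:.2f} across groups {rates}")
    return gap

# Hypothetical batch of (group, decision) pairs from a live model.
batch = [("a", 1), ("a", 1), ("a", 0), ("b", 0), ("b", 0), ("b", 1)]
monitor_batch(batch)  # gap of roughly 0.33 exceeds 0.2, prints an alert
```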

Anatole France once wrote, “The law, in its majestic equality, forbids rich and poor alike to sleep under bridges, to beg in the streets, and to steal their bread.” In other words, without bias toward the plight of the poor, justice is not and cannot be fair.

The same holds true for unbiased AI. Without the ability to account for the bias that exists all around us, it will never provide equal service to all. And even then, we must avoid the temptation to think we will achieve a state of perfect fairness from AI. It will be an eternal struggle in which even success will be hotly debated, in part because of the biases we all carry.
