Our dilemma: Do we “adjust” the neural networks we’re creating to make them more fair in an unfair world, or address bias and prejudice in real life?

We’ve all heard Elon Musk speak with foreboding about the danger artificial intelligence (AI) poses, something he says could even spark a third world war.

But let’s put aside for a moment Musk’s claims about the threat of human extinction and look instead at the present-day risk AI poses.

This risk, already commonplace in the technology business, is bias in the learning process of artificial neural networks.

This notion of bias may not be as alarming as that of “killer” artificial intelligence, something Hollywood has conditioned us to fear. But a growing body of evidence suggests that AI systems have learned biases against racial minorities and women.

The proof? Consider the racial discrimination faced by people of color who apply for loans. One reason may be that financial institutions apply machine-learning algorithms to the data they collect about applicants, searching for patterns that determine whether a borrower is a good or bad credit risk.
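To see how such a model can discriminate without ever being shown an applicant’s race, consider a minimal sketch. Everything here is hypothetical and synthetic: the zip codes, groups and approval rule are placeholders, not real lending data. The point is that a feature the model does use (zip code) can act as a proxy for one it does not (race).

```python
# Hypothetical sketch: a credit rule that never looks at race can still
# discriminate when a feature it does use (zip code) correlates with race.
# All data below is synthetic and purely illustrative.

applicants = [
    # (zip_code, group) -- group is used only afterwards, to audit outcomes
    ("10001", "A"), ("10001", "A"), ("10001", "A"), ("10001", "B"),
    ("60629", "B"), ("60629", "B"), ("60629", "B"), ("60629", "A"),
]

# A naive rule learned from historical data: approve anyone from a
# zip code that was profitable for the lender in the past.
approved_zips = {"10001"}

def approve(zip_code: str) -> bool:
    # The rule sees only the zip code, never the group label.
    return zip_code in approved_zips

def approval_rate(group: str) -> float:
    # Audit step: compare outcomes across groups after the fact.
    zips = [z for z, g in applicants if g == group]
    return sum(approve(z) for z in zips) / len(zips)

print(approval_rate("A"))  # 3 of 4 approved -> 0.75
print(approval_rate("B"))  # 1 of 4 approved -> 0.25
```

Because group membership is unevenly distributed across zip codes, the “race-blind” rule still approves group A at three times the rate of group B.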

Think, too, about those AI-powered advertisements that portray the best jobs being performed by men, not women. Research by Carnegie Mellon University showed that in certain settings, Google online ads promising applicants help getting jobs paying more than $200,000 were shown to significantly fewer women than men.

That raised questions about the fairness of targeting ads online.

And how about Amazon’s refusal to offer same-day delivery in certain zip codes whose populations were predominantly black?

Examples like these suggest that human biases around race and gender have been transferred by AI professionals into machine intelligence. The result is that AI systems are being trained to reflect the opinions, prejudices and assumptions of their creators, particularly in the fields of lending and finance.

Because of these biases, experts are already striving to build a greater degree of fairness into AI. “Fairness” in this context means finding representations in the data that accurately reflect the real world, so that model predictions do not discriminate on the basis of race, gender or ethnicity.
