Do what I say, not what I do

Anyone who has been a parent knows the challenge of teaching by example. The same challenge turns out to be central to developing Artificial Intelligence applications. For an AI system to learn, it must be supplied with large quantities of training data, from which it extracts the patterns it works with. If that training data is biased with respect to gender or race, the system learns those patterns, encodes them into its model, and perpetuates them.

For example, Reuters reported that an AI recruiting tool under development at Amazon had to be scrapped because it systematically favored male candidates. It turned out that the system had been trained on ten years of resume data, and the applicants over those ten years were mostly men.
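A toy sketch can make the mechanism concrete. The code below uses entirely hypothetical data (not Amazon's) and assumes scikit-learn: it trains a simple classifier on synthetic hiring records in which men were historically favored, then shows that two applicants with identical skill but different gender receive very different scores.

```python
# Minimal sketch (hypothetical data, not Amazon's). A classifier trained
# on skewed historical hiring decisions learns to reproduce the skew.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

skill = rng.normal(size=n)            # an applicant skill score
gender = rng.integers(0, 2, size=n)   # 1 = male, 0 = female

# Historical outcomes: skill matters, but being male adds a large bonus.
hired = (skill + 1.5 * gender + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)

# Identical skill, different gender -> very different predicted chances.
print(model.predict_proba([[0.5, 1]])[0, 1])  # male applicant
print(model.predict_proba([[0.5, 0]])[0, 1])  # female applicant
```

In the real case, Reuters reported that the model penalized proxies for gender, such as the word "women's" appearing on a resume, even where gender itself was not an explicit feature. That is why rebalancing the training data matters more than simply hiding the sensitive column.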

In another example, ProPublica, a non-profit news organization, reported that AI-powered forecasting software used by courts to predict recidivism skewed its risk scores against Black defendants. This became apparent when researchers compared the scores against actual re-offense data for white and Black offenders. These examples illustrate how important it is to train AI systems properly as we become more and more dependent on them for decision making.

So what can be done about bias in AI training?

* Strive for gender and racial diversity among the human workers who build and train AI systems

As Microsoft researcher Hanna Wallach put it: “The more diversity we have in machine learning, the better job we will do in creating products that don’t discriminate.”

* Continue to work on developing “Explainable AI”

One of the difficulties in identifying bias in AI projects stems from the nature of how AI works: it is often hard to trace the path of learning an AI system takes. By designing AI systems that leave an explainable trail of their learning, we can more easily track how bias crept into a system and eliminate it at the source, as the sketch below illustrates.
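What an explainable trail looks like depends on the system, but for simple interpretable models the learned weights themselves can serve as one. Here is a minimal sketch, again with hypothetical features and data and assuming scikit-learn: a large weight on a feature that acts as a gender proxy is an immediate red flag an auditor can act on.

```python
# Toy sketch: with an interpretable model, the learned weights form an
# "explainable trail" -- a heavy weight on a gender-proxy feature (here a
# hypothetical "womens_college" flag) exposes where bias crept in.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000
features = ["years_experience", "skill_score", "womens_college"]

X = np.column_stack([
    rng.normal(5, 2, size=n),        # years_experience
    rng.normal(size=n),              # skill_score
    rng.integers(0, 2, size=n),      # womens_college (gender proxy)
])
# Biased historical outcomes: the proxy is penalized regardless of skill.
y = (X[:, 1] - 2.0 * X[:, 2] + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(X, y)
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:>18}: {coef:+.2f}")
# A strongly negative weight on womens_college reveals the encoded bias.
```

Deep models do not expose their reasoning this directly, which is exactly why explainability techniques for them remain an active research area.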

* Continue to develop AGI specifically encoded with our version of ethics

Perhaps the holy grail of bias-free AI is the development of Artificial General Intelligence (AGI) that can think and reason about ethics and bias for us. Such systems could help us supervise the training of other systems and identify bias directly. In order to build successful AGI systems, we need to encode them with ethics and benevolent goals from their infancy, which turns the spotlight back on us as humans for philosophical contemplation: What ethics are we interested in? What benevolent goals should we encode?

* Establish watchdog groups and processes that can help check that our AI systems are working fairly

News organizations have done well at uncovering bias in some of our AI projects, but for the long term we may need dedicated watchdog groups, with established policies and procedures, to monitor our AI systems for bias; one simple automated check such a group might run is sketched below.
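As an illustration only, the sketch below (hypothetical audit data) compares selection rates across groups and flags large gaps for human review. The 0.8 threshold borrows the "four-fifths rule" heuristic from US employment law; a real watchdog process would choose metrics and thresholds to fit its domain.

```python
# Toy sketch of an automated fairness check a watchdog process might run:
# compare favorable-decision rates across groups and flag large gaps.
import numpy as np

def selection_rate_ratio(decisions: np.ndarray, groups: np.ndarray) -> float:
    """Ratio of the lowest group selection rate to the highest."""
    rates = [decisions[groups == g].mean() for g in np.unique(groups)]
    return min(rates) / max(rates)

# Hypothetical audit data: 1 = favorable decision.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups    = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

ratio = selection_rate_ratio(decisions, groups)
print(f"selection-rate ratio: {ratio:.2f}")
if ratio < 0.8:  # below the four-fifths heuristic
    print("WARNING: possible disparate impact; escalate for human review")
```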

Moving forward

Overall, recognizing the importance of bias-free AI is the first step toward adjusting our systems, and our human teams, to reflect our modern ideals. By acknowledging the areas where we need to improve, we can begin planning and taking concrete steps toward new goals. Time will tell how far we progress toward the bias-free world we want to see.