Amazon inadvertently built itself a sexist recruitment assistant in a failed experiment that demonstrated why artificial intelligence does not necessarily lead to artificial wisdom.
The company set out to create a virtual hiring tool that would sift thousands of job applications far more efficiently than people can. Unfortunately, the AI algorithm taught itself to discriminate against women because it was trained on historical data in which many more men had applied for, and been hired into, jobs.
The new system began to penalise CVs that included the word “women’s”, as in “women’s chess club captain”. It downgraded applications sent by graduates of two all-female universities and prioritised applications that featured verbs more commonly found in male engineers’ CVs, such as “executed” and “captured”.
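The mechanism is easy to reproduce in miniature. The toy script below (not Amazon's system; the handful of "historical" CVs and the naive log-odds scorer are illustrative assumptions) shows how a bag-of-words model trained on hiring outcomes that skew male ends up assigning negative weight to a term like "women's" and positive weight to verbs such as "executed" and "captured":

```python
# Toy illustration only: a naive word scorer trained on skewed
# historical hiring outcomes picks up gendered terms as signal.
from collections import Counter
import math

# Hypothetical past CVs paired with hire/reject outcomes.
# The skew (gendered terms appearing mostly in rejections) is the
# assumption that drives the effect.
history = [
    ("executed project captured requirements", True),
    ("executed migration captured metrics", True),
    ("led team executed rollout", True),
    ("women's chess club captain led team", False),
    ("women's society president organised events", False),
    ("organised events led outreach", True),
]

hired, rejected = Counter(), Counter()
for text, was_hired in history:
    (hired if was_hired else rejected).update(text.split())

def word_weight(word, smoothing=1.0):
    """Smoothed log-odds of a word appearing in hired vs rejected CVs."""
    return math.log((hired[word] + smoothing) / (rejected[word] + smoothing))

def score(cv):
    """Sum of per-word weights: the 'ranking' a naive model would assign."""
    return sum(word_weight(w) for w in cv.split())

print(score("executed captured"))      # positive: verbs seen in hired CVs
print(score("women's chess captain"))  # negative: terms seen in rejections
```

The model has no concept of gender; it simply rewards whatever correlates with past hires, which is exactly how the historical imbalance resurfaces as a penalty on "women's".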
The company tinkered with the software to remove this bias but was unable to guarantee that no other discriminatory sorting procedures had crept in. Members of the team working on the project, which was disbanded last year, said that Amazon recruiters looked at the recommendations generated during the trial but never relied exclusively on those rankings. The tool was “never used by Amazon recruiters to evaluate candidates”, the company said.
Amazon began building computer programmes to review job applicants’ CVs in 2014. “Everyone wanted this holy grail,” one source told Reuters. “They literally wanted it to be an engine where I’m going to give you 100 résumés, it will spit out the top five, and we’ll hire those.” However, not only did the AI teach itself gender bias, it also often recommended unqualified candidates for inappropriate jobs.
Nihar Shah, who teaches machine learning at Carnegie Mellon University, said: “How to ensure that the algorithm is fair, how to make sure the algorithm is really interpretable and explainable — that’s still quite far off.”