Artificial Intelligence in Hiring: Friend or Foe?
Report by Lara Fleck, Nicholas Rounding, and Pelin Özgül
As Artificial Intelligence (AI) is deployed in an ever wider range of business settings, investigating its impact becomes imperative. In this second policy brief, the researchers of ai:conomics examine the application of AI to hiring. As a powerful predictive technology, modern AI is well suited to hiring decisions. However, as with many other applications of AI, its deployment in hiring raises important questions and offers lessons that extend beyond the hiring process. This policy brief discusses the impact of AI on discrimination, showing that under certain circumstances AI can lead to a more diverse and better-performing pool of candidates thanks to its more objective decision-making. On the flip side, this brief also shows that AI can exacerbate worker discrimination by replicating human biases and producing unfair outcomes. It further demonstrates the importance of people's perceptions of algorithmic decision-making in hiring, showing that these perceptions may exert a stronger influence than the objective question of whether an AI is more or less biased than a human. These perceptions therefore become a critical ingredient in making informed choices about algorithm design and policy, highlighting the importance of transparency in ensuring that safe and trustworthy AI practices are in place.