The Failures of AI in HR: A Strong Case for AI Audits
https://fair-lab.com/stories/the-failures-of-ai-in-hr-a-strong-case-for-ai-audits/
Sun, 10 Sep 2023

The increasing use of AI across industries has heightened concerns about fairness, particularly in automated decision-making systems. In this article, we discuss why AI pipelines should be audited and give examples of AI bias in recruitment tools, applicant screening, and job advertising. Auditing an AI pipeline ensures that biases are not perpetuated and that discrimination against particular groups can be uncovered and corrected.

One of the most significant issues with AI in recruitment is gender bias. In 2018, Amazon abandoned its AI-driven recruitment tool after discovering that the algorithm was biased against female candidates. Trained on resumes submitted to the company over a ten-year period, the model learned to prefer male candidates and penalized resumes that mentioned women or women's colleges.

Another example of AI bias arises in applicant screening, specifically in facial recognition. A 2018 study by researchers at MIT and Stanford found that commercial facial-analysis algorithms were markedly less accurate at recognizing the faces of darker-skinned individuals and of women. When similar systems are embedded in applicant-screening tools, those error rates can translate into discriminatory hiring outcomes.

Finally, AI can also perpetuate bias in job advertising. A 2021 study published in the journal PLoS ONE found that an AI-driven job advertising platform exhibited gender bias in its ad targeting: the algorithm was more likely to show ads for high-paying positions to men, while women were more likely to be shown ads for lower-paying jobs.

These three examples show why AI systems should be independently audited before they are deployed. With the audits certifAI offers, we can identify biases and discrimination in models, advise on how to mitigate them, and certify AI pipelines to fulfill legal obligations.
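Audits of this kind often begin with simple disparity metrics computed from a pipeline's decision logs. As a minimal sketch (not certifAI's actual methodology; the `adverse_impact_ratio` helper and the toy data are our own illustration), a screening log can be checked against the EEOC "four-fifths" rule of thumb, under which a selection-rate ratio below 0.8 between groups is a red flag for disparate impact:

```python
from collections import Counter

def adverse_impact_ratio(decisions):
    """Compute per-group selection rates and the adverse-impact ratio.

    `decisions` is a list of (group, hired) pairs, where `hired` is a bool.
    Returns (rates, ratio): the selection rate per group, and the ratio of
    the lowest rate to the highest. Under the EEOC "four-fifths" rule of
    thumb, a ratio below 0.8 warrants closer review.
    """
    totals, hires = Counter(), Counter()
    for group, hired in decisions:
        totals[group] += 1
        hires[group] += int(hired)
    rates = {g: hires[g] / totals[g] for g in totals}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Toy screening log: 100 applicants per group, 60 vs. 40 selected.
log = [("A", i < 60) for i in range(100)] + [("B", i < 40) for i in range(100)]
rates, ratio = adverse_impact_ratio(log)
print(rates)            # {'A': 0.6, 'B': 0.4}
print(round(ratio, 2))  # 0.67 -- below 0.8, so the pipeline warrants review
```

A check like this is only a starting point: a real audit also examines the training data, proxy variables (such as college names standing in for gender), and error rates per group, not just selection rates.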

If you are unsure about your AI pipeline, feel free to reach out to us. Together we can find a pragmatic solution with minimal impact on your workflow while ensuring your models do not exhibit unwanted biases. We believe AI tools should promote diversity and inclusion rather than hinder them.

In conclusion, auditing AI pipelines is essential to ensure they do not perpetuate bias and discrimination. The examples above highlight the need for audits of recruitment tools, applicant screening, and job advertising. As we continue to rely on AI, we must hold these systems accountable and ensure they promote fairness and inclusivity.
