Another AI Risk?

A study conducted at the University of Washington reached the following conclusion: "Human recruiters largely adopt the biases of the artificial intelligence tools they use when selecting job candidates." It turns out that large language models (LLMs), such as Gemini or ChatGPT, tend to exhibit bias related to disability, race, and even gender. From the study:

In a new University of Washington study, 528 people worked with simulated LLMs to pick candidates for 16 different jobs, from computer systems analyst to nurse practitioner to housekeeper. The researchers simulated different levels of racial biases in LLM recommendations for resumes from equally qualified white, Black, Hispanic and Asian men.

The bottom line is that the human evaluators tended to accept the AI's verdict unless its bias was egregious, and even then they went along with the biased recommendation up to 90 percent of the time. It is therefore incumbent upon employers to ensure that their entire selection process is free of illegal bias, regardless of the method used.
