Artificial Intelligence (AI) and Machine Learning (ML) are transforming the way many organizations do business.

Companies are using AI-enhanced chatbots to respond more quickly to customer queries. E-commerce sites are using AI for recommendation engines, dynamic pricing, and predictive analysis. More than 111 million Americans use intelligent voice assistants monthly. Artificial Intelligence, it seems, is turning up everywhere.

Artificial Intelligence in Recruitment

AI in recruiting has been hailed as a better way to find talent and cull through resumes. Customized algorithms identify candidates for jobs based on criteria gleaned from previous top performers. Artificial Intelligence in recruitment can also scan online resources, such as social media, to surface passive candidates who are qualified and might be open to a change of employment.

AI in recruitment has also been billed as a way to eliminate unconscious bias, the discrimination candidates face because of their gender, race, age, or history. After all, the AI follows a logic path that is supposed to eliminate bias. The more data it gathers, the more logical connections it sees, and the smarter it is supposed to get.

Unfortunately, it’s not that easy. Just ask the folks at Apple and Goldman Sachs, who are fighting charges that the AI behind approvals for Apple’s new credit card gives men dramatically higher credit limits than women, even when they share assets.

Unintended Consequences

In hiring, the data being used can lead to unintended consequences.

Amazon has built an incredible e-commerce business based on AI. Yet when it tried to implement AI in recruiting, the company found out how tricky it can be. Amazon’s computer models were trained to find compatible traits and patterns in applications submitted over the previous decade. Because the tech industry has traditionally employed more men than women, more of that data came from men. Guess what happened? According to Reuters, Amazon’s AI recruitment software “taught” itself that male candidates were preferable. Although the online retailer says it fixed the problem, it scrapped the program completely over fears the AI would find other discriminatory pathways.

When comparing candidates to top performers or core competencies, companies must make sure the data used to fuel the AI is not itself biased. This is especially important in organizations that have not had diverse workforces or that have decreasing diversity in their top ranks.
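One simple first step is to measure how skewed the historical data actually is before feeding it to a model. The sketch below is a minimal illustration, not a vendor tool; the `records` structure and the `"gender"` field are hypothetical names chosen for the example.

```python
from collections import Counter

def demographic_balance(records, field):
    """Return each group's share of a historical training set.

    `records` is a list of dicts describing past hires; `field` is a
    self-reported demographic attribute. Both are illustrative names.
    """
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# A training set drawn mostly from male past hires, like Amazon's
past_hires = [{"gender": "M"}] * 8 + [{"gender": "F"}] * 2
shares = demographic_balance(past_hires, "gender")
print(shares)  # {'M': 0.8, 'F': 0.2} -- a model trained on this skew can learn it
```

A model trained on a set this lopsided has every statistical incentive to treat the majority group's traits as the pattern of success, which is exactly the failure mode Amazon hit.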

Facial Recognition & Emotional Analysis

One of the latest trends in AI in recruitment is the use of what’s called “emotional analysis.” When candidates complete video job interviews, AI analyzes their facial expressions, and the results are incorporated into the algorithm. IBM, Unilever, Dunkin’ Donuts, and other companies are using the tech. Unilever claims the new approach increased ethnic diversity in its candidate pool, but critics are skeptical.

A Wake Forest study tested several emotion-detection systems on the market and found that the algorithms assigned negative emotions to dark-skinned people more often than to white people. Research by the M.I.T. Media Lab demonstrated significant problems with facial recognition algorithms developed by Microsoft and IBM, which identified white men more accurately than darker-skinned women.

Recently, the state of Illinois placed new limits on the use of facial recognition technology in interviews. The Artificial Intelligence Video Interview Act goes into effect on January 1, 2020.

Outsourcing Artificial Intelligence in Recruiting

Many organizations use third-party companies to handle AI in recruitment. Critics ask whether the AI is eliminating bias or perpetuating patterns rooted in historical bias. Because these systems are proprietary, companies can’t independently validate the algorithms.

Monitoring For Bias When Using AI in Recruitment

Forty-three percent of recruiters say the key benefit of artificial intelligence in recruitment is its potential to stop human bias.

It’s crucial, however, that HR professionals and Talent Acquisition teams continuously monitor for unintentional bias or “algorithmic bias.”  Best practices dictate regular audits and reviews.

  • Analyze outcomes for any signs of race, age, or gender bias
  • Pay close attention to historical data that may exhibit a lack of diversity
  • Use highly-diverse data sets and screen for selection bias
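The first audit step above, analyzing outcomes for signs of bias, can be sketched with the EEOC’s well-known “four-fifths” guideline: adverse impact is flagged when any group’s selection rate falls below 80% of the highest group’s rate. This is an illustrative sketch, not a legal compliance tool; the group labels and data are hypothetical.

```python
def selection_rates(outcomes):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals, selected = {}, {}
    for group, was_selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(outcomes):
    """Return True per group if its selection rate is at least 80% of
    the highest group's rate (the EEOC four-fifths guideline)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top >= 0.8 for g, rate in rates.items()}

# Hypothetical screening outcomes: (group, passed_screen)
outcomes = ([("A", True)] * 6 + [("A", False)] * 4 +
            [("B", True)] * 3 + [("B", False)] * 7)
print(four_fifths_check(outcomes))  # {'A': True, 'B': False}
```

Here group B passes the screen at a 30% rate against group A’s 60%, well under the four-fifths threshold, which is the kind of pattern a regular audit should surface for investigation.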

AI in recruiting will likely play a more significant role in the future. It can take the bias out of hiring, but it’s only as good as the data it analyzes. The algorithm can help you create a diverse and qualified workforce, or it can learn bad habits that prevent that from happening.

Valid Predictors of Success

Cognitive ability and intelligence testing are valid predictors of job success.  Behavioral assessments and skills testing can help you find diverse and qualified candidates.

Learn more about what eSkill can offer you to improve your hiring.  Request a demo today.

Adina Miron
