Can you use AI ethically in hiring?

Written by Dalia Gulca
Published on November 25, 2025

A majority of hiring managers use AI but lack ethical training. Additionally, AI-powered ATS resume screeners, AI interview bots, and lack of AI disclosure are rubbing some candidates the wrong way. Here’s the breakdown of the major ways AI can be used ethically (without alienating potential hires), and what constitutes unethical (or even illegal) use.

  • AI in hiring raises major ethical and legal concerns. Early examples like one company’s facial analysis feature and Amazon’s biased recruiting algorithm show how poorly designed AI systems can discriminate based on gender or disability, leading to damage to both the candidate pool and a company’s reputation.
  • AI is a helpful tool for hiring teams, but many employees lack proper guidance. Around 94% of hiring managers who use AI rely on it for key decisions like hiring, firing, and promotions — yet fewer than one-third have received formal training on ethical AI use, and some even let tools make final decisions without human input.
  • States are stepping in with new regulations. Cities and states like New York City, Illinois, and Colorado either now require, or will soon require, employers to disclose AI use, conduct bias audits, and gain applicant consent before using automated tools — reinforcing that ethical AI use isn’t just best practice, it’s becoming law.

A recent study from the University of Washington found that hiring managers are “perfectly willing to accept” AI’s biases when it comes to letting LLMs make hiring decisions. 

Seem unethical to you? That’s only the tip of the iceberg. We’ve already lived through years of unethical AI implementation in hiring — whether intentional or not — even before LLMs became commonplace. But an increase in inadvertent biased outcomes is to be expected when recruiters are trying anything (and fast!) to wade through record numbers of applications.

But the risks of moving fast — of adopting AI tools that haven’t undergone independent bias audits, and of not understanding bias or how to use AI to support ethical hiring decisions in the first place — are real, and they directly impact actual people. They can also invite legal challenges in a few states and cities (which we’ll dive into more fully later).

Consider this your (informal) crash course on how AI is being used — and what’s ethical and unethical — in hiring, for the vast majority of you who’ve never had formal or informal training in how to do so. We promise you’ll learn something! 

A blast to the AI past

Before we get into the present-day ethical risks of AI in recruiting, let’s go back in time a bit to revisit some older examples of biased or unethical AI-assisted hiring practices — back when sophisticated hiring algorithms existed here and there, and not everywhere.

Back in the years before 2021, a vendor of AI interviewing tools (we’re not saying who…) faced (a pun that will make sense shortly) backlash for its facial (here it is) analysis features.

As a result, the well-known video interview and video assessment vendor announced in January of 2021 that they were removing the “facial analysis” component from their video interview screening tests.

For what it’s worth, facial analysis is not a good predictor of job performance — and additionally brings up concerns around ADA accessibility and cultural differences, another reason the vendor removed the feature.

But rather than throwing the baby out with the bathwater, the company continued analyzing transcripts of video interviews, though not the visual content of the videos themselves. Even so, transcription errors and other software issues can still arise — again, disproportionately affecting those with accents or with speech-related impediments or disabilities.

Here’s another example of AI tools inadvertently enabling biased hiring choices: albeit, one that probably began with good intentions. Or at least intentions of efficiency.

Amazon’s in-house recruiting engine, built and used in the years leading up to 2018, used machine learning to comb applications to the company from the preceding decade in order to make recommendations for new hires. Unfortunately, given that the majority of the applications from those years came from men (a reflection of the tech industry’s overarching skew toward male workers), the tool inadvertently became biased against women and downgraded applications featuring the names of women’s colleges, clubs, or sports leagues.

Eventually, Amazon’s hiring team realized the tool was biased, and while the team attempted to engineer the bias out of the tool, it ultimately scrapped the engine in 2018, running out of hope for the AI bot.

AI bias in 2025

In 2021, some AI interview tools that tested for personality didn’t seem to work all that well — such as giving a high score in English proficiency for an interviewee who spoke only German.

Even in 2025 (at least in Australia), AI-interviewing tools are still not properly transcribing the monologues of non-native English speakers or those with a disability affecting their speech — and therefore not rating these applicants highly.

In that research, 12 of the 18 human resources professionals studied had used AI recruitment tools to help with hiring; resume analysis was the most common type of tool, followed by video interviewing.

The researcher, Dr. Natalie Sheard, said there was little to no transparency into the AI interview systems used by hiring teams — for candidates, recruiters, or employers. As a result, it was hard to say whether the tools were selecting the most deserving or qualified candidates.

Using AI haphazardly to decide promotions and layoffs can have consequences. For example, in 2022, the federal merit protection commissioner in Australia revealed that 11 promotion decisions at Services Australia (a government agency) had been overturned. The agency had outsourced the process to a recruitment specialist that used AI to automate selection (including psychometric testing, questionnaires, and self-recorded interviews) — and the results, as you can guess from the fact they were overturned, were less than satisfactory.

When it comes to people’s jobs and hiring decisions, it seems hiring managers are following the maxim of asking for forgiveness, rather than permission, a bit too closely. Moving fast and breaking things may not work as well when you’re breaking the hiring process — and affecting real lives, at that.

Despite clear evidence that AI can perpetuate bias, even inadvertently, hiring managers still rely on AI tools to make hiring decisions. And when hiring teams aren’t transparent about their AI use, that nondisclosure only exacerbates the issue.

Yet it’s clear that the benefits to hiring managers in terms of time savings make the use of AI inevitable. So if AI tools are here to stay (and they are), hiring managers need to make sure they’re using them the right way.

AI makes hiring faster

Ethical, shmethical. The point is, AI can make hiring processes a heck of a lot faster.

According to Resume Now’s AI & Hiring Trends 2025 report, 91% of employers use AI to streamline “everything from screening resumes to scheduling interviews.”

The survey, which had 900 US hiring professionals as respondents, found that AI was speeding up time-to-hire and also improving decision-making across all fronts. 

Here’s a real-world example: Ava Cado, Chipotle’s AI recruiting tool. Ava Cado, an AI which answers candidate questions and schedules interviews on behalf of managers, has sped up the hiring process at Chipotle. Although the tool doesn’t make hiring decisions or conduct candidate interviews, it’s helped streamline recruiting and saved countless hours for Chipotle’s hiring teams.

There’s a fine line between using AI to speed up the process — like with Ava Cado here — and using it in ways that can actually harm hiring.

Hiring teams make haste despite persistent LLM bias

Bias is an ongoing issue when it comes to black-box AI tools, especially when considering hiring outcomes. Consider that LLMs suggest women ask for lower salaries in job interviews. Or that a University of Washington study from 2024, which analyzed three different LLM tools, found that the tools preferred white- and male-associated names over all other races and demographics included in the study.

Or consider the case of Derek Mobley, who sued Workday in 2023 after he applied to over 100 positions and never heard back, alleging that the company’s hiring algorithm is biased and that his resume was screened out because of his race, age, and disability status.

While Workday says Mobley’s claims have no basis, and that its software matches keywords on resumes with job qualifications loaded by its customers, there is still little insight into how ATS screeners make hiring decisions — either because the vendors of these tools don’t disclose how decisions are made or because of the “black box” nature of LLMs and sophisticated machine learning technologies (that make it hard to “engineer out” bias, too). 

Even when vendors claim their AI tools don’t harbor bias, candidates and hiring teams have no real way to verify those claims without proper outside audits, reporting, or insight into how the proprietary systems work.

Employees & candidates want less AI involvement in HR decisions

Let’s look at the other side of the equation. For their part, 41% of employees prefer less AI involvement in HR decisions, according to a study from Paychex.

In one Pew survey conducted in 2023, two-thirds of US adults said they wouldn’t want to apply for a job if the employers used AI to help make hiring decisions — and that view was even more pronounced among women (Maybe it has to do with AI suggesting they take lower salaries or not prioritizing their resumes? Who knows.).

More than 70% of those surveyed also opposed allowing AI to make a final hiring decision, and over 40% opposed using AI to review job applications.

Hiring managers admit to using AI for hiring decisions

And yet, according to one poll, workers believe AI may run hiring processes completely by the end of 2026. Combine that with managers replacing entry-level employees with AI, and it all begins to sound a bit…scary? Hasty? 

If we’re considering bias and the ethics of it all, that fear has solid grounds.

The vast majority of hiring managers (94%) who use AI tools at work say they’ve used it to make decisions on hiring, firing, and other people decisions — including determining raises (78%), promotions (77%), layoffs (66%), and terminations (64%), according to a report from Resume Builder.

The majority (over 7 in 10) additionally believe that AI makes fair and unbiased decisions about employees. And yet, less than a third of those using AI to make these types of decisions say they’ve received formal training on how to use AI to make decisions ethically. That’s even more concerning when you consider that over 20% of those using AI frequently let their tool make final decisions without human input.

However, almost all managers said they’re willing to step in if they disagree with an AI-based recommendation. But perhaps that’s easier said than done.

In a 2025 study from the University of Washington (that we referenced at the beginning of this piece — here it is again), hiring managers were asked to use AI tools (that were engineered to have varying degrees of bias) in order to make hiring decisions. 

The only time hiring managers actually overrode decisions was when there was an extreme, obvious level of bias — otherwise, at moderate or small amounts of bias, the recruiters allowed the decisions to stand.

Automation bias is the human tendency to over-rely on automated systems or AI decision aids, even when contradictory information or personal judgment suggests otherwise. It poses serious risks, especially in high-stakes environments like healthcare, aviation, and hiring. Because people often view AI-generated outputs as more objective, they may trust them more than conflicting judgments from non-automated sources. In hiring, this becomes particularly problematic: if an AI system exhibits discriminatory patterns, adding human decision-makers doesn’t necessarily counteract that bias and may even entrench it, since humans bring their own biases to the process.

New York City’s hiring-AI law complicates this further by including a disclosure exemption for firms that use AI systems alongside human reviewers, leaving it up to companies to determine whether their systems qualify. This loophole not only risks weakening the law but also raises the possibility that some of the most harmful discriminatory practices may go unreported.

The responsibility toward ethical hiring

At the end of the day, AI tools reflect the data they’re trained on. That data — like all of humanity — can be flawed and biased.

Organizations, therefore, have a responsibility to implement AI ethically — to make the right hires, maintain trust among their employees, and protect company culture.

A 2024 study from Gallup found that “77% of adults do not trust businesses much (44%) or at all (33%) to use AI responsibly. Additionally, nearly seven in 10 of those who are extremely knowledgeable about AI have little to no trust in businesses to use AI responsibly.”

However, the Gallup study also looked at steps employers could take to alleviate concerns. “When asked to choose from a list of actions that businesses can take to most reduce concerns about AI, Americans most frequently say companies should be transparent about how AI is being used in business practices (57%). No other strategy was chosen by more than 34% of respondents.”

It’s not only the responsible path to take. Sometimes, it’s the legally obligatory path to take. 

Consider NYC’s laws on using AI in hiring. The NYC law comes with two main requirements: employers have to audit any AI hiring/promotion decision tools before using them — and they must notify job candidates or employees at least ten business days before such tools are used.

Some AI hiring tools, like HeyMilo.ai, make it a point to prioritize ethical requirements, putting their independent AI audits on the internet for all to see.

Laws surrounding using AI ethically in hiring

Overall, the laws regulating AI in the US (beyond just in hiring) are fragmented, and just getting off the ground.

Some states have already passed laws that govern how AI tools can be used in hiring. Other states and cities are in the process of doing so. Many of these laws take effect in 2026, including some of the ones listed below.

Here’s what the current landscape looks like.

NYC

New York City was the first major jurisdiction to implement a dedicated framework for AI use in hiring. Local Law 144 (LL144), originally passed in 2021 and enforced beginning July 2023, requires employers using “automated employment decision tools” in hiring or promotion to provide public notice, conduct annual independent bias audits, and post the results publicly. 

Importantly, the law applies only to hiring and promotion decisions — not termination, compensation, or other HR processes — yet for those covered decisions, its requirements are stringent. Employers must notify candidates at least ten business days in advance when an automated tool is used, explain the characteristics being evaluated, and ensure that the system undergoes regular testing for race- and gender-based disparate impact.
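
To make the disparate-impact requirement concrete, here is a minimal Python sketch of the kind of math an LL144-style bias audit revolves around: the selection rate for each demographic category, and each category’s impact ratio against the most-selected group. The group labels and counts below are invented for illustration; a real audit involves far more (intersectional categories, sample-size rules, and an independent auditor).

```python
from collections import Counter

# Hypothetical screening outcomes: (demographic_category, was_advanced_by_the_tool)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

advanced = Counter(group for group, ok in outcomes if ok)
totals = Counter(group for group, _ in outcomes)

# Selection rate = share of each group the tool advanced
rates = {group: advanced[group] / totals[group] for group in totals}
top_rate = max(rates.values())

for group, rate in sorted(rates.items()):
    impact_ratio = rate / top_rate  # 1.0 means parity with the most-selected group
    # The familiar "four-fifths rule" treats ratios under 0.8 as a potential red flag
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} [{flag}]")
```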

Early research shows compliance has been uneven; in a sample of nearly 400 employers believed to be using automated tools, only a small fraction had posted legally required audit results or notices. Nonetheless, LL144 set the tone for later, broader regulation.

Illinois

Illinois followed with one of the most expansive statewide approaches to AI in employment. The state had already been an early mover with the Artificial Intelligence Video Interview Act, which requires employers to inform applicants when AI is used to evaluate video interviews, explain how the technology works, obtain applicant consent, and delete recordings upon request. 

In 2024, Illinois expanded its regulatory scope with HB 3773, which amends the Illinois Human Rights Act to cover AI-influenced employment decisions more broadly. Taking effect January 1, 2026, the law prohibits employers from using AI in ways that result in discrimination — directly or indirectly — including through proxies such as ZIP codes or educational backgrounds. It also requires employers to notify both applicants and employees when AI systems are used in screening, hiring, promotion, or other employment-related decisions.

The Illinois framework is notable for its unusually broad definition of AI, explicitly including generative AI, which means a wide variety of tools fall under its requirements.

Colorado

Colorado has enacted what many consider the most comprehensive and stringent statewide law to date. The Colorado Artificial Intelligence Act (SB 205), taking effect February 1, 2026, regulates the use of “high-risk” AI systems used in consequential decisions, including employment. 

The act places obligations not only on companies that build AI systems, but also on companies that deploy them. Employers using AI in hiring or promotions must implement full AI governance and risk-management programs, conduct impact assessments, provide clear and timely notice when AI tools influence employment decisions, and offer explanations when an AI-assisted decision results in an adverse outcome. 

The law also mandates human oversight, meaning employers cannot rely solely on algorithmic outputs for hiring decisions. While small employers are exempt from some requirements, any organization with more than 50 employees using AI-driven employment tools must prepare for a significantly higher compliance burden.


The old guard: ATS screeners

ATS screeners have long been hiring’s first defense against a deluge of applications to open job roles — going back to the ‘90s. These tools worked by filtering applicants based on certain criteria — like whether they had a college degree — or by keywords on their resume, such as matching job titles, specific software proficiencies, or industry-standard certifications.
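
To show just how blunt that keyword approach can be, here is a toy sketch (not any particular vendor’s logic; the required keywords and resume text are made up). A candidate with clearly relevant experience still fails the screen because the exact phrases never appear.

```python
# Toy keyword screener: every required phrase must appear verbatim in the resume text.
REQUIRED_KEYWORDS = {"project management", "sql", "bachelor"}

def passes_screen(resume_text: str) -> bool:
    text = resume_text.lower()
    return all(keyword in text for keyword in REQUIRED_KEYWORDS)

resume = """
Led cross-functional delivery of a data warehouse migration;
hands-on PostgreSQL administration and Python ETL pipelines.
B.S. in Statistics.
"""

# Fails: "B.S." never matches "bachelor", and the literal phrase "project management"
# never appears, even though the experience described is plainly relevant.
print(passes_screen(resume))  # False
```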

An article from the Wall Street Journal also called out resume screeners for throwing out resumes from qualified candidates on the basis of resume gaps or unlikely keywords.

In 2019, one report stated that three-quarters of all resumes never made it in front of human eyes. The CNBC article that cited that stat offers tips for making it past ATS resume screeners — such as improving resume formatting, customizing for keywords included in job descriptions, and circumventing screeners by reaching out to an actual human at the company.

And now, rather than searching only by criteria or keywords, recruiting teams are additionally using LLM tools to “read” resumes and take a skills-based, holistic approach rather than a keyword-only approach.

And candidates who assume potential employers are using ChatGPT to score resumes are trying to dupe the system in a different way — by including secret prompts in white text on the document, meant for any LLM that may be used to rank it.

No matter whether it’s the traditional keyword-based ATS platform or a newer, LLM-based one, candidates are still trying to beat the hiring managers at their own game — and bias or undue preference persists, whether due to formatting quirks and keyword frequency or built-in LLM bias.

Here’s how to use AI ethically in hiring

So how do you approach ethics in hiring when the tools themselves can be just as biased as human recruiters? When used correctly, these tools give hiring managers huge time savings and candidate insights, and may even help limit bias — such as by anonymizing applicant names — but to get the good results and not the bad ones, AI tools need to be used the right way.
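
One small, concrete version of that anonymization idea, offered as a sketch rather than a complete solution (real redaction would need a proper named-entity model; the regexes and names here are illustrative only): strip the applicant’s name and a few obvious demographic proxies from application materials before any screener, human or AI, reads them.

```python
import re

def redact(resume_text: str, applicant_name: str) -> str:
    """Crudely scrub the applicant's name and a few demographic proxies."""
    text = resume_text.replace(applicant_name, "[CANDIDATE]")
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)   # email addresses
    text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[PHONE]", text)     # phone numbers
    text = re.sub(r"\b(19|20)\d{2}\b", "[YEAR]", text)           # years (an age proxy)
    return text

print(redact("Jane Doe, jane.doe@example.com, graduated 2009", "Jane Doe"))
# [CANDIDATE], [EMAIL], graduated [YEAR]
```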

Do NOT: use AI to make the final hiring decision

Perhaps this is too obvious, and basically a cliche at this point. 

Hiring AI without a “human in the loop” not only allows for biased results, but also undermines a process that should, at minimum, involve a human.

And even when a human is in the loop, if that human treats AI as more reliable than it is, the result can still be inadvertent bias.

Consider the studies we’ve mentioned where LLMs were shown to be biased, and where recruiters relied on these tools regardless. Not only is it important to keep a human in the loop — that human should also be aware of AI biases, just as they should be aware of their own cognitive biases.

Do: Disclose AI use to candidates

This would be a no-brainer as well, if it weren’t for headlines like “When Your Job Interviewer Isn’t Human” — a Time article describing a situation in which a woman had no idea she would be talking to an AI, not a person, for a job interview.

It’s not the only headline like that, either. There’s the New York Times article where a woman describes having a phone interview with an AI bot when she was expecting a human interviewer.

Transparency is a top sticking point. In one study, 65% of HR professionals said employers should disclose their use of AI to employees and candidates.

Candidates aren’t particularly fond of AI interviewers to begin with — even though the process can be efficient. And when you don’t disclose to candidates they’ll be speaking to an AI avatar ahead of time, you risk alienating competent candidates, who may hang up on you (like the interviewed guests in the previous two articles), and in some cases, you end up running afoul of the law.

Yes, disclosure surrounding AI interviewing tools can be legally mandated: take Illinois’s state law requiring disclosure for AI interview tools, or NYC’s for that matter.

Do: Audit your AI tools

Some companies, like HeyMilo.ai, are already doing this, with an AI assurance dashboard that’s publicly available.

And again, often what’s unethical is also illegal — some jurisdictions require companies to audit their AI hiring tools yearly. New York City, as mentioned, is one of them.

Unfortunately, the wiggle room within the NYC law means that, in one study from Cornell University, fewer than 20 employers out of nearly 400 had actually completed a bias audit. The companies that did also made it somewhat difficult for applicants to access those audits.

Fortunately for hiring teams, they don’t have to audit these AI tools themselves — unless the tools are proprietary, of course. Most reputable AI vendors have their systems independently audited and publish the results publicly, so users can gauge the risk of adverse impact or unwanted outcomes in hiring.

Regardless, it’s still best practice — especially if you want to attract qualified candidates and avoid adverse impact.

Do: Use AI in tools other than resume screeners and interviews (like skills tests!)

Resume screeners and AI interviews are the most common uses of AI in hiring, but other helpful avenues exist. Take skills testing: you can layer AI tools on top of skills tests for a more efficient, more accurate screening process. For example:

  • AI-assisted assessment creation: An employer uploads a job description, and AI surfaces validated content that matches the role. By matching assessment content to each position, employers can quickly and accurately test skills that are essential to job success.
  • Interactive AI simulations: How can you ensure a hiring manager knows how to ethically use AI? Assess how well they can use AI. With AI Simulations, you can assess skills like communication, problem solving, language proficiency, and more.
  • AI Response Evaluation: Many pre-hire assessments contain open-response questions. Instead of grading each by hand, leverage AI to summarize and grade responses at scale. This methodology can transform subjective evaluation into consistent, bias-free scoring based on pre-determined criteria (which can be reviewed and overridden by a hiring manager if necessary). A minimal sketch of this pattern follows the list below.
  • AI-Powered Proctoring: AI can analyze video and audio recordings to ensure test-takers are taking assessments honestly. Suspicious behavior does not automatically fail anyone; rather, it’s flagged for manual review by the hiring team (sensing a trend here?)
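
Here is the promised sketch of that response-evaluation pattern: score each open answer against pre-determined rubric criteria, then route anything borderline or low to a human for review and possible override. The rubric, thresholds, and llm_rubric_score helper are hypothetical placeholders, not a real vendor API.

```python
from dataclasses import dataclass

# Hypothetical rubric: criterion name -> what the model is asked to judge
RUBRIC = {
    "addresses_question": "Does the answer respond to what was asked?",
    "specific_example": "Does it include a concrete, relevant example?",
    "clarity": "Is the answer clearly organized?",
}

@dataclass
class Evaluation:
    scores: dict[str, float]   # criterion -> score on a 0-5 scale
    needs_human_review: bool

def llm_rubric_score(answer: str, question: str) -> float:
    """Placeholder for whatever scoring call your assessment platform provides."""
    raise NotImplementedError

def evaluate(answer: str) -> Evaluation:
    scores = {name: llm_rubric_score(answer, prompt) for name, prompt in RUBRIC.items()}
    # Never auto-reject: anything borderline or low goes to a hiring manager,
    # who can accept, adjust, or override the AI's scores.
    borderline = any(2.0 <= s <= 3.0 for s in scores.values())
    low = min(scores.values()) < 2.0
    return Evaluation(scores=scores, needs_human_review=borderline or low)
```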

So what’s the verdict? Can you use AI ethically in hiring?

Of course — like we’ve covered — there are numerous positives to using AI in hiring. As hiring managers are faced with record numbers of applications and applicants themselves use AI to create application materials, it’s the new normal for hiring teams to use AI-powered screeners to filter applications.

Even though these tools still encounter many issues surrounding bias and improperly filtering out qualified candidates, strides are also being taken to independently research AI interviewing tools for fairness. 

And when hiring managers take the time to disclose their use of AI tools to applicants, when vendors independently audit their tools (like HeyMilo.ai), and when teams train their employees on ethical AI usage, educate them on AI’s risks and biases, and always keep a human in the loop, AI gets a little closer to becoming an ethical choice, and not just the faster, easier (and seemingly only) choice.

When in doubt, the important differentiator is to use AI as a powerful assistant, not as a decision maker. 

AI in hiring is still rolling along. Let’s make sure it’s actually a benefit that surfaces the most qualified candidates, rather than a potential liability.
