Are AI agents hiring people yet?

Published on March 5, 2026
Written by Dalia Gulca

The term “agentic AI” is popping up in HR and recruiting software. The tech seems promising, but there are quite a few (big) caveats. Only some systems are truly autonomous, and the risks of systems that actually approach autonomy may outpace their potential value.

TL;DR
  • “Agentic AI” is mostly hype right now. Real autonomy is rare. Most tools being marketed as agentic AI are actually AI assistants or rebranded automation (“agent washing”). Gartner estimates only about 130 vendors offer truly agentic systems.

  • HR and recruiting are early adopters, but with serious risks. AI is already deeply embedded in hiring: resume screening, assessments, candidate research, and even interviews — sometimes without human oversight. This edges toward agentic behavior, but it raises major concerns around bias, privacy, accountability, explainability, and trust.

  • The long-term shift to agents is real, but we’re in the messy middle. Agentic AI will likely transform work and HR over the next few years. Gartner predicts 15% of daily work decisions will be autonomous by 2028 and 33% of enterprise software will include agentic AI. But right now, the technology is undeveloped. The near future is best understood as a human-in-the-loop partnership, not fully autonomous systems replacing people anytime soon.

When it comes to agentic AI, it’s the wild, wild West out here.

Companies are promising headcount reductions by swapping humans for “agents.” And yet, agentic bots like OpenClaw have the capacity to delete emails en masse without authorization from their humans. Meanwhile, still other teams are “agent-washing” basic automation, slapping the label on copilots and workflows that are very much not autonomous agents.

Behind the hype, most agentic projects are still just early experiments. Proofs of concept. Expensive, fragile systems being tested in controlled environments (or what should be controlled environments — i.e., not your own email inbox). Deploying agents at scale is complex, costly, kind of a privacy nightmare, and doesn’t always come with guaranteed ROI — which helps explain why Gartner predicts that 40% of agentic AI projects will fail by 2027.

So, are AI agents hiring people yet? Sort of, but not really. No AI agent has full autonomous control of end-to-end hiring, from resume review through to onboarding, anywhere (as far as we know). But some AI tools are in charge of pieces of the process, according to a number of reports surveying recruiting teams and a flood of press releases from ATS platforms. We have a slightly more pressing question, though.

Why, oh why, are so many agentic AI and hiring tools being announced when the tech is still so immature and prone to error?

Agentic AI in HR & recruiting?

In 2025, one Gartner survey found that 82% of HR leaders planned to implement some form of agentic AI capability, ranging from AI assistants to AI agents, within the next 12 months. By 2030, Gartner estimates that 50% of current HR activities will be AI-automated or performed by AI agents.

Perhaps the most bullish forecast comes from an August 2025 report from Resume.org, in which one in three HR departments said they expected AI to run their entire hiring process by 2026.

So why aren’t we there yet?

In practice, most organizations are already using AI in specific parts of the hiring process. Survey data shows that more than half of companies report using AI somewhere in recruitment. Among those organizations, 79% use AI for résumé screening, 66% for candidate assessments, and 63% for researching applicants. Smaller but still significant shares use AI for candidate communication (41%), onboarding (39%), and even interviews (34%).

For the companies that use AI in interviews, about half allow AI to conduct interviews directly. However, full autonomy is still rare. When AI conducts interviews, 71% of companies say a human always retains oversight, while only 6% report allowing AI to run the process independently. 

AI interview tools are best used cautiously. Companies like HireVue have been wrapped up in lawsuits in the past, and when decisions aren’t transparent, they aren’t defensible either, which means they could run afoul of several emerging state laws.

That said, roughly three in four companies that use AI in hiring allow it to reject candidates without human oversight. This begins to approach what many describe as agentic AI — systems capable of making decisions and taking actions autonomously within a defined workflow.

Recruiting, in particular, is emerging as one of the most promising areas for agentic AI. AI agents are beginning to streamline candidate sourcing, automate early-stage screening, and improve the hiring experience. By 2028, Gartner predicts that 30% of recruitment teams will rely on AI agents for high-volume hiring and early-stage recruitment tasks.

Software vendors are already moving in this direction. HR platforms such as Workday, Dayforce, and ServiceNow have introduced agentic AI features designed to streamline recruiting workflows, while LinkedIn has rolled out an AI hiring assistant that helps recruiters shortlist candidates.

In these systems, AI agents can already analyze job requirements and resume data to identify candidate matches, while AI interviewer agents review interview transcripts to recommend best-fit candidates before a human makes the final decision.

At first glance, it can start to look like a silver bullet for currently overwhelmed hiring systems. Or it could just be a regular bullet, when you consider all the risks attached.

So what is agentic AI?

“What is an agent” is not an easy question to answer, it turns out.

According to a report from the Washington Post, nobody really knows what agentic AI is. Agents are “broadly understood to be systems that can take some action on behalf of humans, like buying groceries or making restaurant reservations.”

Unlike chatbots like Gemini or ChatGPT that answer questions and help you solve problems, AI agents can act within other software systems (via APIs) to complete tasks independently — like check your email inbox and sort emails for you, or respond to others on your behalf. They require little or no human supervision. 
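
To make that distinction concrete, here’s a minimal sketch in Python. Everything in it is a toy stand-in: the “inbox” is a plain list, the tools are local functions, and the hard-coded sorting rules stand in for the planning a real agent would delegate to an LLM. It is not any vendor’s API.

```python
def chatbot_answer(question: str) -> str:
    # A chatbot only returns text; it takes no action in other systems.
    return f"Here is some advice about: {question}"

class InboxAgent:
    """Toy agent: it acts on a system (a fake inbox) via registered tools."""
    def __init__(self, inbox):
        self.inbox = inbox
        self.tools = {"archive": self._archive, "flag": self._flag}

    def _archive(self, msg):
        msg["folder"] = "archive"

    def _flag(self, msg):
        msg["flagged"] = True

    def run(self, goal: str):
        # A real agent would plan its steps with an LLM; here the "plan"
        # is hard-coded: archive newsletters, flag anything urgent.
        actions = []
        for msg in self.inbox:
            if "newsletter" in msg["subject"].lower():
                self.tools["archive"](msg)
                actions.append(("archive", msg["subject"]))
            elif "urgent" in msg["subject"].lower():
                self.tools["flag"](msg)
                actions.append(("flag", msg["subject"]))
        return actions

inbox = [
    {"subject": "Weekly Newsletter", "folder": "inbox", "flagged": False},
    {"subject": "URGENT: payroll question", "folder": "inbox", "flagged": False},
]
agent = InboxAgent(inbox)
log = agent.run("tidy my inbox")
```

The key difference is in the side effects: `chatbot_answer` changes nothing, while `agent.run` mutates the inbox without asking anyone first.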

But some say true agentic AI is years away. Tiernan Ray, a senior contributor for ZDNET, wrote that underlying issues of reinforcement learning and memory need to be addressed first: agent memory needs restructuring, and how AI handles reinforcement learning needs to advance.

And yet, a lot of the tools companies are calling AI agents today are just glorified chatbots, copilots, or assistants, not autonomous agentic systems. And that’s kind of a good thing, given the insane privacy risks.

Wow, sounds like a privacy nightmare!

Yes, astute reader, you’re right.

The Director of AI Safety at Meta had her entire email inbox wiped out by her OpenClaw agent, which gave her a droning apology afterward.

The issue stemmed from the way the system compressed its instructions and past interactions. Like many AI tools, the agent automatically summarized earlier context to conserve memory — what’s known as the model’s context window. The goal is to retain the general meaning of previous instructions without storing every detail. But that process can leave dangerous gaps. In this case, it meant the agent lost enough context to misinterpret its instructions and take a destructive action.
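
A rough way to picture that failure mode is below, with plain truncation standing in for the LLM-based summarization real systems use. The instructions are invented for illustration; the point is only that whatever falls out of the window is gone.

```python
MAX_CONTEXT = 3  # pretend the context window holds only 3 instructions

def compress(history):
    # Keep only the most recent instructions; older ones are dropped.
    # Real systems summarize rather than truncate, but summaries are
    # lossy in the same way: details can silently disappear.
    return history[-MAX_CONTEXT:]

history = [
    "NEVER delete any email.",          # the critical constraint, given early
    "Sort newsletters into a folder.",
    "Reply to the meeting invite.",
    "Clean up old threads.",
]

compressed = compress(history)
# The safety rule stated first is no longer in context, so an agent
# acting only on `compressed` has no idea deletion is off-limits.
```

An agent told to “clean up old threads” with the first rule missing is exactly the kind of system that empties an inbox and apologizes afterward.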

The lesson: don’t give an AI agent unrestricted access to your personal documents. Please. 

But, you ask, isn’t that the whole point of agentic AI?

Ideally, yes. The promise of agentic systems is that they can act across software environments — email, documents, calendars, databases — to complete tasks autonomously. And that promise is exactly why many companies are racing ahead with the technology, often planning to address governance and privacy concerns later.

But the underlying models powering these systems still have well-known limitations. Large language models are probabilistic systems, not deterministic ones. They can produce biased outcomes, hallucinate information, and behave inconsistently from one run to the next. When those models are embedded inside autonomous agents capable of taking real-world actions, the stakes rise considerably.

A hallucinated paragraph in a chatbot response is one thing. A rogue AI agent rejecting a mortgage application or making a college admissions decision based on faulty reasoning is another.

And when those decisions need to be explained — or applied consistently across cases — things become even more complicated. Most large language models operate as black boxes, meaning their internal reasoning is difficult to trace. Ask the system why it made a particular decision, and the answer may be little more than a confident guess wrapped in an apology.

Beyond bias and explainability, there are also significant cybersecurity risks. Highly autonomous assistants — like the much-hyped OpenClaw personal AI agent — often require access to email accounts, files, calendars, and other sensitive datasets in order to function. That level of integration creates an attractive target for attackers.

There’s also the question of accountability. When an agentic system makes a harmful decision, who is responsible? The developer? The organization deploying it? The human supervisor who may — or may not — have been monitoring the system?

Experts warn that organizations will need clear governance structures before deploying agentic AI widely, especially when these systems are allowed to execute workflows with minimal human supervision.

And in hiring, where algorithmic decision-making is already controversial, the trust gap is even wider.

Are agentic AI solutions actually agentic?

Not always.

As the hype around agentic AI grows, many vendors are engaging in what some are calling agent washing — rebranding existing technologies such as chatbots, AI assistants, and robotic process automation as “AI agents,” even when those systems lack meaningful autonomy.

According to research from Gartner, only about 130 vendors currently offer products that could be considered truly agentic. That’s a small fraction of the thousands of companies now marketing “agentic AI” solutions.

In many cases, the marketing language has moved faster than the technology itself. Tools that summarize information, generate text, or automate simple workflows are often labeled as agents, even though they still require constant human direction.

More often than not, these tools are closer to copilots.

Copilots and agents may look similar on the surface, but they operate very differently. AI copilots are assistive systems: they summarize documents, analyze datasets, suggest actions, and help draft content. But they typically operate with a human in the loop, providing recommendations rather than executing decisions independently.

True AI agents, by contrast, are designed to operate more autonomously — planning tasks, interacting with other systems, and executing workflows with minimal supervision.
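
The difference is easiest to see side by side. In this hypothetical Python sketch (no real vendor’s API, and a deliberately crude skills-overlap score), the copilot returns a recommendation for a human to act on, while the agent moves the candidate through the pipeline itself:

```python
def score_candidate(resume: dict, required_skills: set) -> float:
    # Fraction of required skills the resume lists (toy heuristic).
    matched = required_skills & set(resume["skills"])
    return len(matched) / len(required_skills)

def copilot_review(resume, required_skills):
    # Copilot: produces a recommendation; a human makes the call.
    score = score_candidate(resume, required_skills)
    return {"score": score,
            "recommendation": "advance" if score >= 0.5 else "review"}

def agent_review(resume, required_skills, pipeline):
    # Agent: executes the decision itself, moving the candidate
    # to the next stage (or out) with no human in the loop.
    score = score_candidate(resume, required_skills)
    stage = "interview" if score >= 0.5 else "rejected"
    pipeline[resume["name"]] = stage
    return stage

resume = {"name": "Ada", "skills": ["python", "sql"]}
required = {"python", "sql", "spark", "airflow"}

suggestion = copilot_review(resume, required)        # human still decides
pipeline = {}
outcome = agent_review(resume, required, pipeline)   # action already taken
```

Same model, same score; the only thing that changes is who pulls the trigger. That one design choice is most of what separates an assistant from an agent.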

Market adoption reflects this distinction. A December study from Menlo Ventures found that the fastest-growing category of enterprise AI tools today isn’t agentic systems at all, but copilots. Products like ChatGPT Enterprise, Microsoft Copilot, and Claude for Work dominate adoption across workplaces.

More autonomous agentic offerings — such as Salesforce Agentforce, Writer, and Glean — exist, but remain far less widespread. But that hasn’t stopped a wave of announcements across the HR technology landscape.

Over the past year, major human capital management platforms — including Oracle, Workday, and SAP — have unveiled new products that incorporate “agentic” capabilities into their platforms.

The promise is compelling, especially for recruiting teams that are already stretched thin. In theory, AI agents could source candidates, screen applicants, and coordinate communications — allowing recruiters to focus on higher-level decision-making.

But in practice, many of these systems still function more like intelligent assistants than independent agents.

Take the recently introduced Hiring Assistant from LinkedIn. The system helps recruiters describe a role conversationally instead of building complex Boolean searches, and it reportedly reduces the number of candidate profiles recruiters need to review by more than half while saving several hours of work per role.

Yet the system still operates within a human-in-the-loop framework. Recruiters make the final decisions, while the AI surfaces recommendations and supporting evidence.

That doesn’t quite fit the definition of a fully autonomous agent.

AI has already reshaped the workplace through copilots and automation tools that assist employees with everyday tasks — handling employee inquiries, filtering resumes, recommending training programs, and supporting decision-making.

Agentic AI, if it arrives in its fully autonomous form, would represent a different leap entirely: systems capable of independently making decisions and executing complex workflows.

For now, though, many of the “agents” entering the recruiting software market appear to be something closer to very capable assistants.

So…where do we go from here?

Agentic AI is still a baby, swaddled in hype and vague potential use cases.

Many companies have yet to unlock meaningful ROI from agentic systems, largely because the technology is still immature. In practice, many of the tools marketed as “agents” today are thinly veiled copilots: assistive systems that help humans work faster, rather than autonomous systems capable of independently making decisions.

And yet, the predictions about the future of agentic AI are ambitious. Research from Gartner estimates that by 2028, at least 15% of day-to-day work decisions will be made autonomously by AI agents. The firm also predicts that roughly one-third of enterprise software applications will incorporate agentic AI capabilities within the same timeframe.

Other analysts see an even broader transformation ahead. Researchers at McKinsey & Company describe the future workplace as a partnership between humans, AI systems, robots, and autonomous agents. According to their estimates, existing technologies already have the potential to automate more than half of current U.S. work hours. As adoption grows, some roles will shrink, others will evolve, and entirely new categories of work will emerge.

In HR and recruiting, AI copilots are already changing how hiring happens. These systems can help rank résumés, conduct preliminary interviews, and guide candidates through application workflows. But as automation increases, human interaction in the hiring process can decrease—making it even more important for candidates to tailor their applications to AI-driven screening systems.

If truly autonomous agents begin playing a larger role in hiring decisions, the stakes rise significantly.

Bias is one concern. Security is another. Agentic systems require deep access to sensitive datasets—resumes, interview transcripts, employee records, and internal communications. That level of access introduces significant governance and cybersecurity challenges.

For now, most organizations are still experimenting. There are plenty of predictions, plenty of press releases, and plenty of prototypes. But there are still relatively few real-world examples of AI agents operating autonomously without introducing new risks or unintended consequences.

Here’s what we think at eSkill: AI can help increase productivity and assist recruiters, no doubt. But a human needs to make the final decision when it comes to hiring — to step in where AI falls short, and to oversee suggestions with a critical, educated eye regarding potential bias and untraceable conclusions. To that end, we are developing AI tools that assist recruiters without interfering with how decisions are made — focusing on test creation, like compiling questions based on a role’s title, or assisting with proctoring by watching for signs of cheating on assessments. That way, we’re supporting merit- and skills-based hiring while helping recruiters stay transparent about how hiring decisions are made — impartially, based on ability.

For now, here’s the best advice we have for organizations exploring agentic AI tools: copilots and assistants that automate rote work are certainly helpful, but they’re not ready to act autonomously and make all your decisions for you. Be cautious with any decisions that don’t require human oversight, especially when it concerns the livelihoods and employability of real people.


Check out the eSkill platform.

Learn how pre-employment assessments can help you hire better.
Talk to sales