How to use structured interviews to assess candidates

Written By
Dalia Gulca
Published on
August 15, 2025
Blog

Whether the interview is in-person or remote, a panel or single interviewer, recruiters should prioritize asking candidates the same questions, and assessing them with the same rubric.

  • Big companies like Google and government bodies like the US Office of Personnel Management and Department of Veterans Affairs have all adopted structured interviews — standardized, job-related, behavioral and situational questions scored with rubrics — in their hiring process.

  • Meta-analyses (Sackett 2022; Schmidt & Hunter 1998) show structured interviews nearly double the predictive validity of unstructured ones, reduce bias, improve candidate satisfaction, and save interviewer time.

  • Semi-structured and AI-driven interviews can offer efficiency and consistency but vary in predictive power, candidate experience, and bias risks.

Remember Google’s infamous hiring brain teasers from the 2000s and early 2010s? If you’ve read our blog on the best and worst hiring predictors, you may recall they predict absolutely nothing about job performance.

Regardless, we’ve got another one for you — because they’re fun, and because we’ve got a point to make.

Here goes:

There are three lightbulbs in a room and three switches outside. You can only enter the room once. How do you determine which switch controls which bulb?

If you don’t already know the answer, take a few minutes to think about it. 

Done? Ok. Here’s the answer: Supposedly, you’re meant to flip the first switch and leave it on for a few minutes. Then you flip the switch off, then flip the next switch on and immediately enter the room. The first switch leads to the bulb that is warm. The second switch leads to the bulb that is currently lit. And the third switch, the bulb that’s off.

If that solution didn’t occur to you, a newsflash is in order: incandescent lightbulbs, like hiring riddles themselves, are outdated and scarcely found in the real world anymore.

In fact, the two were phased out within about a year of each other.

Google stopped using brain-teaser interview questions around 2013, while the US began implementing phase-out rules for incandescent bulbs in 2012. 

Laszlo Bock, then Google’s SVP of People Operations, called them (the riddles, not the lightbulbs) “a complete waste of time” in a New York Times article.

On its re:Work website, Google now explicitly says to “avoid brainteasers” during the interview process and instead encourages hiring managers to embrace structured interviews.

What is a structured interview?

A structured interview is a job interview in which all candidates are asked the same predetermined, job-related questions and evaluated using the same standardized scoring criteria.

In other words, every candidate is asked the same question in the same way, and graded on the same scoring system. It evens the interview playing field and makes the process more reliable and valid.

(Psst…if you’re interested in adding structured interviews to your hiring process, you can download our structured interview guide + template here.)

Unlike unstructured interviews — which, despite being used more often, have about half the predictive value for job outcomes and a far higher risk of adverse impact — structured interviews are the same for everybody, more predictive, and typically shorter.

What we can all learn from Google

Research by Google’s People Analytics team to find the best hiring interview style led the company to shift toward structured interviews, implementing behavioral and situational questions that were graded using standardized rubrics.

Laszlo Bock, who blasted riddles for being a waste of time in 2013, gave another interview to Wired in 2015 explaining the efficiency of structured interviewing.

“We looked at tens of thousands of interviews, and what we found is that structured interviews — where you ask the same question of every candidate and evaluate them the same way — are the best predictors of success,” said Bock.

In the years since, Google’s own research has continued to confirm the validity of structured interviews.

Structured interviews were better at indicating who would do well on the job. Interviewers, too, came out happier. Google says they save an average of 40 minutes per interview using the structured interview process.

The process made candidates happier, too: candidate satisfaction scores ticked up for those interviewed with the structured format. Even rejected candidates who had a structured interview were 35% happier than those who didn’t.

Google has fully embraced structured interviews across roles — a far cry from their mid-2000s riddle-slinging. 

And in 2015, Laszlo Bock published Work Rules!, a book on work culture with a couple of chapters on hiring and structured interviews (which we’ll be referencing later on, by the way).

Super cool, Google and Laszlo!

Why structured interviews?

It’s not just Google that embraces the structured interview approach to hiring. The US Office of Personnel Management also recommends structured interviews when hiring for government roles and beyond. So does the US Department of Veterans Affairs.

Structured interviews are also the top predictor of job success, according to Paul R. Sackett’s 2022 meta-analysis of hiring predictors. Earlier meta-analyses, including Schmidt & Hunter’s 1998 analysis, likewise ranked structured interviews among the strongest predictors.

There’s also a growing body of research showing that structured interviews reduce bias and increase hiring diversity.

Compare that to unstructured interviews, which score poorly across meta-analyses. When you don’t standardize questions or scoring, interviewers are apt to lean on biases and cognitive pitfalls to rank candidates.

In many studies, structured interviews have been found to be twice as effective as traditional, unstructured interviews.

How to run a structured interview

Running a structured interview takes significant upfront work, and a lot more than just asking candidates questions pulled from your own head and assessing them on vibes alone: you’ll need to standardize questions with the help of SMEs, create scoring rubrics, and train interviewers.

The OPM has a few great resources on how to implement structured interviews. The US Department of State also has a great archived guide.

Here’s the gist on how it’s done:

Ask all candidates the same core questions

Every candidate should be asked the same set of relevant questions, in the same order. 

That typically means involving SMEs within your company to come up with a list of the KSAOs (knowledge, skills, abilities, and other characteristics) essential to the role, drafting questions that target them, and then getting SME feedback on each question’s relevance.

Questions typically stick to one of two types — behavioral questions, which ask about past examples of experience, and situational questions, which are hypothetical questions about future situations the candidate might encounter. But you can also ask background and job knowledge questions. We’ll break this down further later on.

The STAR Method

The STAR method, which stands for Situation, Task, Action, and Result, is a structured approach to answering and evaluating behavioral interview questions. For interviewees, it serves as a clear framework for telling focused, compelling stories about past experiences:

  • Situation: provide enough context for the story.

  • Task: define what you were responsible for.

  • Action: explain your specific contributions.

  • Result: highlight measurable or meaningful outcomes.

For interviewers, the STAR method offers a consistent way to guide candidates toward complete, relevant answers and to assess them against job-related competencies without relying on vague or subjective impressions. By keeping both parties focused on concrete examples, the STAR method reduces ambiguity, helps reveal patterns of behavior, and makes the evaluation process more fair and evidence-based.

Score answers using a rubric

Use a rubric or scorecard that defines what a strong, average, and weak answer looks like for each question. This keeps evaluations objective and consistent across interviewers. 

Google offers a template — and so do we.
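To make the roll-up concrete, here’s a minimal sketch of how rubric scores might become a single candidate score. (The questions, the 1–5 scale, and the simple averaging are illustrative assumptions on our part, not part of Google’s or eSkill’s actual templates.)

```python
from statistics import mean

# Illustrative only: a 1-5 scale where the rubric defines what a
# weak (1), average (3), and strong (5) answer looks like.
def candidate_score(ratings):
    """Average each question across interviewers, then average
    across questions for one overall score."""
    per_question = [mean(scores) for scores in ratings.values()]
    return round(mean(per_question), 2)

# Two panelists scoring three standardized questions:
ratings = {
    "Tell me about a time you resolved a team conflict.": [4, 5],
    "How would you prioritize two conflicting deadlines?": [3, 4],
    "What certifications do you hold for this role?": [5, 4],
}
print(candidate_score(ratings))  # → 4.17
```

Because every candidate answers the same questions and is scored on the same rubric, these numbers are directly comparable across candidates, which is exactly the apples-to-apples comparison an unstructured interview can’t offer.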

Train your interviewers

Those asking questions should learn how to stick to the script, probe appropriately, and score consistently.

Probe questions are lists of follow-up questions, typically written beforehand, that an interviewer may ask to guide an interviewee toward a complete, satisfactory answer. We recommend coming up with both standard questions and probe follow-ups together with your interviewers and SMEs.

Combine interview results with other data

Interviews are stronger when paired with tools like skills tests, work samples or job trials, and reference checks.

The four interview question types

While most resources on structured interviews and a lot of the I/O literature point to two types of interview questions — past behavioral and situational — research has identified two additional types that are often used in practice but don’t receive a lot of attention otherwise — background and job knowledge questions. 

The four types are explicitly identified by Michael Campion, David Palmer, and James Campion in their 1997 review of structured interviews.

But how useful are all these question types? We’ll get into it right here.

In a 2019 paper, psychologist Christopher Hartwell examines the reliability and validity of all four question types, pointing out that previous studies show strong validity for past behavioral and situational questions.

In his own study, Hartwell gathered data from 303 adult job applicants hired as employees of a state government agency in the southern United States. Applicants were hired for a wide variety of jobs, but a majority fell into one of the following categories: maintenance/skilled labor, engineering, or administration. 

With that research in mind, let’s get into the four question types.

Past behavioral questions

“Tell me about a time you had to work closely with someone whose personality was very different from yours.”

Typically considered the best type of question to ask in an interview, behavioral interview questions ask candidates about their past behavior in situations relevant to the current job. They are based on the idea that past behavior in similar situations is the best predictor of future behavior.

They have high predictive validity among the question types — and as a result, they’re the primary types of questions interviewers ask. In Hartwell’s study, the answers to these questions were also good at indirectly predicting turnover.

Situational questions

“Imagine you're working on a team project and one team member consistently misses deadlines, putting the entire project's success at risk. What would you do?”

Situational interview questions are hypothetical, future-oriented questions that ask candidates how they would behave in hypothetical, realistic work situations. 

They also have high predictive validity. In Hartwell’s study, past behavioral and situational questions had roughly equal — and the highest — predictive validity scores.

Background questions

“What certifications or training do you have related to this position?”

A background question in an interview asks about a candidate’s past experiences, education, or qualifications to understand their history and how it relates to the job.

Background questions had a predictive validity score of 0.16, putting them behind past behavioral and situational questions in Hartwell’s study but still indicating some predictive validity. They were also found to indirectly relate to turnover.

Job knowledge questions

“What’s the difference between accrual and cash accounting?”

A job knowledge question in an interview tests what a candidate already knows about the work, tools, or subject matter needed for the job.

Job knowledge questions basically ask the type of questions you would see on a test.

What’s surprising, however, is that job knowledge questions did not significantly relate to job performance or turnover in Hartwell’s study. The outcome gives clues into the purpose of interviews and where job knowledge might fit best in the hiring process. 

After all, according to previous meta-analyses, work sample tests and job knowledge tests are relatively high predictors of hiring outcomes.

Hartwell notes: “Results indicate that all question types except job knowledge questions had validity in predicting subsequent job performance ratings. The insignificant result for job knowledge questions was surprising, as knowledge has been shown to be a top predictor of performance (Schmidt & Hunter, 1998).”

“However, the test of predictive validity of job knowledge was approaching marginal significance, and, as noted in the limitations below, our test was likely conservative due to the nature of our job performance measure.”

While this was a single study with a limited sample, it suggests that job knowledge questions are better predictors when asked through a test than in an interview. Maybe it’s best to leave the job knowledge questions for the tests. Luckily, with eSkill, you can assess the hard knowledge underlying success in a role through testing before proceeding to a behavior-focused interview.

The research also suggests that behavioral interview questions are much better predictors of performance than behavioral tests.

According to current studies and the data available:

Behavioral interview questions win over behavioral tests.

Job knowledge tests win over job knowledge interview questions.

Using this information can help you create an adaptive and useful hiring strategy — hard skills tests coupled with behavioral interviews.


Why don’t hiring managers use structured interviews more often?

Despite their proven validity, hiring managers don’t rely on structured interviews as often as they should.

While Google found that structured interviews decreased interview time, the upfront investment of creating rubrics and determining questions might seem like — and is — a lot of work.

One study decided to put the question to a group of Dutch managers.

Seventy-nine of them read short scenarios about two hiring approaches (one a highly structured interview built from Campion et al.’s structure elements, the other unstructured) and shared both their intentions and what they actually did in practice.

The results? Managers consistently leaned toward unstructured interviews. Their intentions were shaped most by how they personally felt about each method and by the perceived norms in their workplace.

Here’s the kicker: only pro–unstructured intentions actually matched real-world behavior. Even when managers said they liked structured interviews, that didn’t translate into using more standardized elements. The researchers concluded that the biggest roadblocks aren’t about know-how — they’re about beliefs and local culture. If organizations want more structured interviewing, they’ll need to shift attitudes and social expectations first.

The risks of using unstructured interviews

Yep, plenty of downsides here — from decreased diversity and higher risk of bias to costly bad hires that hurt team performance and morale.

Decreased diversity

When interviews lack structure, decisions are often shaped by gut instinct instead of job-related evidence. This leaves the door wide open for unconscious bias, which can unintentionally filter out qualified candidates from underrepresented backgrounds. Over time, this narrows the diversity of perspectives, skills, and experiences within your team — the very things that drive innovation and problem-solving.

Biases and cognitive pitfalls

Unstructured interviews tend to magnify human biases. Because there’s no standardized framework, each interviewer is left to rely on subjective impressions, often without realizing it. This creates fertile ground for cognitive pitfalls that distort how candidates are assessed.

Among the many biases that could be present are:

  • Confirmation Bias: Interviewers form an early impression and then unconsciously seek evidence to confirm it.

  • Halo and Horn Effects: The halo effect is when a positive impression in one area (like appearance or charisma) leads us to assume someone excels in other areas, while the horn effect is the opposite — one negative trait colors our perception of their overall abilities.

  • Similarity/Affinity Bias: Interviewers rate people more favorably if they share background, hobbies, education, or personality traits.

  • First Impression Bias: Snap judgments in the first 30 seconds can disproportionately shape the rest of the interview—even if unrelated to job performance.

  • Recency Bias: Candidates interviewed later in the process are more memorable, so they’re rated more favorably (or unfavorably) than earlier ones.

  • Contrast Effect: A mediocre candidate might seem great if they follow a weak one, or worse if they follow a strong one.

  • Anchoring Bias: Early ratings or information (like resume GPA or past employer) overly influence evaluation of answers.

  • Overconfidence Bias: Interviewers believe they can “read” people well, even though research shows intuition is a poor predictor of job performance.

Bad hires

The biggest cost of unstructured interviews is the hiring mistakes they produce. A candidate who “feels right” may turn out to lack the skills, work habits, or cultural fit the role demands. 

Onboarding, training, and eventual turnover costs add up quickly. Meanwhile, when a bad fit for the role is hired, existing team members often have to pick up the slack, which can lead to burnout and reduced morale. 

Over time, repeated bad hires can erode trust in leadership and stall organizational growth.

How to handle interview training

If you want to get the full benefit of structured interviews, you can’t just hand interviewers a question list and hope for the best. They need training on the why, the how, and the discipline it takes to stay consistent. That includes understanding the interview format, scoring rubrics, and common biases to watch out for.

One of the key training points is deciding whether to use panel interviews or individual interviews — and knowing the strengths and limitations of each.

Panel interviews

When done right, panel interviews can edge out individual interviews in predictive validity. With several evaluators, one person’s bias is less likely to dominate the decision. The key is structure: standardized questions, a shared scoring rubric, and clear guidance on who’s assessing what.

But panel interviews still require careful training. Without coordination, you risk interviewers talking over each other, drifting off-script, or letting stronger personalities dominate the scoring discussion.

Individual interviews

Individual structured interviews can still be highly effective — especially when the interviewer is well-trained and disciplined about sticking to the format. But they’re more vulnerable to one person’s biases, mood, or subjective impressions.

However, they still work. They’re easier to coordinate and schedule than panels. And they can be more comfortable for candidates, especially if panels feel intimidating.

They also still benefit from high predictive validity if the interviewer uses a structured format with standardized questions and clear scoring criteria.

What about semi-structured interviews?

The risk of a fully structured interview is that it doesn’t leave room for follow-up questions or spur-of-the-moment insights.

Campion’s 1997 study emphasized that using structured techniques — even partially — like standardizing questions or rating scales, increases interview reliability and predictive validity.

It may be safe to assume that semi-structured interviews fall somewhere between structured and unstructured interviews in terms of validity.

Indeed, many interviews used in practice are semi-structured (following a guide but not rigidly standardized), and their validity is “adequate” – typically significantly above zero – but still falls short of the higher predictive power demonstrated by rigorously structured interviews.

But even fully structured interviews don’t need to be robotic. You can have introductions, inform the candidate on the type of interview you’re conducting, and explain you’ll be taking notes — and leave room for your more typical interview conversation once the structured interview wraps up.

Should I incorporate one-way interviews or AI interviewing?

Modern hiring processes increasingly use one-way video interviews (also called asynchronous interviews) and AI-based interviews alongside or in place of traditional interviews. 

In a one-way interview, candidates record answers to preset questions on video, without a live interviewer present. These responses are later evaluated – either by human reviewers or by algorithms – at the employer’s convenience. 

AI-based interviewing tools take this a step further by using algorithms (often machine learning) to analyze interview responses. 

One-way interviews

One-way video interviews (OVIs) have gained popularity for their efficiency: recruiters can send candidates a link to record answers, and review videos on their own schedule. 

They are used across industries — from retail and hospitality hiring hourly workers, to technology and finance firms screening professional candidates— particularly in the early stages of selection where applicant pools are large. 

Researchers generally consider asynchronous interviews a variant of structured interviewing, just delivered via technology. The same factors that make structured interviews effective (consistent questions, evaluation criteria, a focus on job-related competencies) can be applied in one-way formats.

It’s worth noting that one-way interviews are typically highly structured: every candidate receives the same predefined questions, often with time limits for responses, and no opportunity for follow-ups or clarifications. This standardization can enhance reliability and fairness across applicants. 

However, the lack of an interactive interviewer means candidates cannot be probed for more detail or guided to elaborate on an interesting response. This may limit the depth of information collected. In traditional interviews, a skilled interviewer can ask follow-up questions to dig deeper into a candidate’s answer; in a one-way format, that dynamic is absent.

In practical use, many employers treat asynchronous video interviews as an initial screening tool rather than a sole basis for hiring. Top performers in the one-way interview often advance to a second stage (such as a live interview). In that context, even a moderate validity in the screening stage can improve overall hiring outcomes by filtering in stronger candidates.

A consistent finding is that applicants tend to react less favorably to asynchronous interviews compared to live interviews. Many candidates feel talking to a camera is impersonal or even “dehumanizing,” and they miss the two-way interaction. Qualitative studies and surveys have reported that applicants find one-way video interviews awkward and stressful – for instance, having to record answers while watching themselves on screen and racing a countdown timer can be unnerving.

AI interviews

To date, independent, peer-reviewed evidence of direct predictive validity is still limited. Vendors often report success stories, but academic scrutiny is just catching up. 

One rigorous study by Hickman et al. (2022) provides some of the first validity evidence for AI video interviews. They developed automated video interview personality assessments (AVI-PAs) – algorithms trained to evaluate Big Five personality traits from video responses.

In a series of samples, they had participants complete structured interview questions on video; the videos were scored by machine learning models, and those scores were compared to established measures and outcomes. The results showed moderately good reliability and construct validity for certain traits.

In student samples, the AI-generated personality scores correlated in expected ways with academic performance – for example, the AI’s conscientiousness rating was positively correlated with the student’s GPA and standardized test scores. 

These findings indicate that an AI can approximately assess certain candidate qualities that matter for performance.

However, it’s crucial to emphasize that most AI interview systems have not yet publicly demonstrated criterion-related validity against job performance metrics. In the Hickman et al. study, the criterion was academic success, not workplace performance, and the setting was a mock interview rather than actual hiring.

No need for facial analysis 

In 2020, under public pressure and following an independent algorithm audit, HireVue announced it would discontinue facial expression analysis in its AI scoring. The reason given was telling: the facial analysis component did not add predictive value beyond what the verbal and audio analysis already provided. In other words, analyzing a candidate’s facial movements and micro-expressions during a video interview had no incremental validity for predicting job performance, so it was removed to simplify the model.

This incident underscores a broader point: some of the more “sci-fi” elements of AI interviewing (like reading body language via computer vision) may not actually improve the accuracy of hiring decisions. What seems to matter most are the content of what candidates say and certain vocal indicators – essentially, the same substantive information a human might use, distilled via algorithm.

AI-based hiring tools have faced significant criticism and scrutiny from academics, regulators, and advocacy groups. A primary concern is algorithmic bias. If the AI’s training data reflect human biases (for example, if it learned from historical interview ratings or employee performance data that carry bias), the algorithm may perpetuate or even amplify those biases.

There is fear that AI could unfairly screen out certain demographics – for instance, people with accents, speech disfluencies, or communication styles that differ from the “norm” might be penalized by a machine analysis that doesn’t account for diversity. 

Ensuring AI fairness is a hot topic: some jurisdictions in the U.S. have enacted or proposed laws requiring bias audits for AI hiring tools. For example, Illinois’ Artificial Intelligence Video Interview Act (effective 2020) mandates informing candidates when AI is used and auditing for racial bias in outcomes. The EU’s AI rules likewise treat hiring algorithms as high-risk, demanding rigorous validation and transparency. These moves stem from the recognition that AI validity must include fairness, not just predictive accuracy.

Another criticism is the lack of transparency in AI interview scoring. Candidates often do not know why an AI rejected them, and employers themselves might not fully understand the model’s decisions if it’s a proprietary black box. This opaqueness can undermine trust and makes it hard to contest or improve the decision process. 

Pairing structured interviews with skills tests

Even Laszlo Bock points out in his book that “research shows that combinations of assessment techniques are better than any single technique. For example, a test of general cognitive ability (predicts 26 percent of performance), when combined with an assessment of conscientiousness (10 percent), is better able to predict who will be successful in a job (36 percent).”
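A side note on the arithmetic: Bock’s “percent of performance” figures look like squared validity coefficients. Assuming he is drawing on the validities reported by Schmidt & Hunter (1998) — an inference on our part, since the book passage doesn’t spell it out — the numbers line up like this:

```latex
% Schmidt & Hunter (1998): r = .51 for general cognitive ability,
% r = .31 for conscientiousness, r = .60 for the two combined.
% Variance in performance explained = r^2:
r_{\text{GMA}}^2 = (0.51)^2 \approx 0.26, \quad
r_{\text{consc.}}^2 = (0.31)^2 \approx 0.10, \quad
r_{\text{combined}}^2 = (0.60)^2 = 0.36
```

Note that the combined 36 percent comes from the composite validity of the two predictors used together, not from adding 26 and 10; that the sum happens to land near the same value is a quirk of these particular numbers.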

When you combine the top predictors of job success, you get a more in-depth, revealing understanding of a candidate.

And when you’re dealing with a high influx of candidates, a test might help you find the candidates who shine.

Using eSkill’s testing platform and one-way interviews can streamline your hiring process while increasing its validity. Maybe your hiring process could look a bit like this: use a one-way interview or phone call for an initial screening, then follow up with a pre-hire test to confirm the essential skills for the role are there. From there, whittle down your candidate pool, invite the remaining applicants to structured interviews, and make your final decision.

How to incorporate structured interviews into your hiring

This is your lightbulb moment. Specifically, your LED-lightbulb moment.

Incorporating structured interviews takes some upfront work — and no HR professional can do it alone. It’s a process that requires the help of SMEs, buy-in from your team, and every interviewer in the company on board. But it’s a worthwhile process.

We have a free resource on structured interviews to help you out! It combines insights from Google, the OPM, the Department of Veterans Affairs, and I/O psychology research. You can download it here.

For a hiring process that embraces both candidate fairness and high validity, consider pairing structured interviews, one-way interviews, and hard-skills tests at various stages of the hiring funnel to find qualified candidates quickly and effectively. 

