Candidates are mass-applying with AI-generated resumes that all sound the same, while recruiters rely on keyword filters that reward gaming over genuine fit. The result is a hiring system where nobody can tell real signal from noise. Claire McTaggart explains how the incentives broke, why AI made it worse before it could make it better, and how deep research and reasoning will finally break the cycle.
The talent acquisition industry has managed to achieve something remarkable: we've created a system where candidates, recruiters, and hiring managers are all worse off than they were a decade ago. Candidates send out hundreds of applications and hear nothing. Recruiters are buried under thousands of low-intent applications per role. The resumes that make it through screening are either keyword-stuffed to the point of meaninglessness or, increasingly, completely fabricated.
Nobody planned this. There's no villain here. What we're living through is the predictable outcome of rational actors responding to a series of well-intentioned changes that, when combined, created a set of perverse incentives. And then AI showed up and poured gasoline on the fire.
But here's what makes this moment interesting: the same technology that's currently exacerbating the crisis is also the only thing capable of fixing it. The fundamental problem is that we've lost signal in an ocean of noise, and it turns out that's exactly the kind of problem these models are starting to get genuinely good at solving.
Think back to what applying for a job used to look like. You'd go to a company's career page, fill out an application that asked you to manually enter your entire work history, answer screening questions that repeated information already on your resume, upload your resume anyway, maybe write a cover letter. The process took 30 minutes and required you to answer the same questions in 10 different ways. It was tedious, it wasted applicants' time, and nobody particularly enjoyed it. But that friction served a purpose: it was a credible signal of interest. If someone went through that much effort, you could reasonably believe they actually wanted the job.
Then LinkedIn introduced one-click apply in 2011, followed by Indeed in 2014. Everything was optimized for candidate experience and volume. Companies wanted more applicants, better top-of-funnel numbers, more optionality. They launched employer branding campaigns designed to appeal to everyone. They wrote job descriptions that explicitly told people "don't worry if you don't meet all these requirements, apply anyway!" The goal was to cast the widest possible net.
It worked. Application volumes went through the roof. A role that used to get 30 applications now got 300. Which created an immediate problem: a recruiter managing 15 open requisitions physically cannot review 300 resumes per role with any real care. They needed shortcuts.
The available shortcut was keyword filtering. Does this resume contain "Python"? Does it have "B2B SaaS"? Does it mention "MBA"? If not, automatic reject. It was crude, but when you're drowning in applications, crude beats nothing.
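To make the crudeness concrete, here's a minimal sketch of what this style of screening reduces to. The keyword list and pass/fail logic are illustrative, not any particular ATS's implementation:

```python
# A toy version of ATS keyword screening: reject any resume that
# doesn't contain every required term. The keyword list and pass/fail
# logic are illustrative, but the core mechanism really is this crude.

REQUIRED_KEYWORDS = {"python", "b2b saas", "mba"}

def passes_keyword_screen(resume_text: str) -> bool:
    text = resume_text.lower()
    return all(keyword in text for keyword in REQUIRED_KEYWORDS)

# A strong candidate who phrased things differently is auto-rejected.
print(passes_keyword_screen("Senior engineer: Python, B2B SaaS, MBA"))  # True
print(passes_keyword_screen("Built ML systems at scale for 8 years"))   # False
```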
Here's where things get interesting. Candidates are remarkably ingenious. They figured out pretty quickly that the filter was looking for specific keywords. So the optimal strategy became obvious: take the job description, extract every relevant term, and stuff your resume with those exact words. Some people got creative about it, adding keywords in white text at size 3 font so the ATS would parse them but humans wouldn't see them.
And because your odds of getting through the filter on any given application had dropped to something like 1 in 300, you had to play a volume game. Apply to 200 jobs, maybe you hear back from 5. Tools emerged to automate this. A candidate could send out 500 applications in a week without breaking a sweat.
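The back-of-the-envelope math makes the incentive obvious. Using the rough 1-in-300 odds above (illustrative numbers only):

```python
# Back-of-the-envelope math on the volume game, using the rough
# 1-in-300 pass rate mentioned above. Illustrative numbers only.

p = 1 / 300  # chance any single application clears the filter

def chance_of_a_callback(applications: int) -> float:
    """Probability that at least one application gets through."""
    return 1 - (1 - p) ** applications

for n in (10, 50, 200, 500):
    print(f"{n} applications -> {chance_of_a_callback(n):.0%}")

# 10 applications -> 3%
# 50 applications -> 15%
# 200 applications -> 49%
# 500 applications -> 81%
```

At those odds, mass-applying isn't irrational; it's the only strategy that works.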
Which, of course, drove application volumes even higher. Which made recruiters lean even harder on keyword filters. Which made candidates more aggressive about gaming those filters. Each turn of the crank made the system worse for everyone involved.
This is what economists call a bad equilibrium: not that everyone's happy, but that nobody can unilaterally improve their position by changing behavior. Candidates can't stop mass-applying because then they'll never get interviews. Recruiters can't stop using keyword filters because they'll drown in the volume. Companies can't add friction back because they've spent years optimizing for candidate experience and low-friction applications.
We're stuck.
When ChatGPT launched in late 2022, there was a lot of optimistic talk about how AI was going to solve recruiting. It didn't. At least not immediately.
What it did do was give candidates dramatically more powerful tools to generate noise. You didn't need to manually keyword-stuff anymore. You could paste in a job description and your resume, and the model would rewrite your entire work history to perfectly match what the company was asking for. It would generate a personalized cover letter that sounded enthusiastic and thoughtful. You could do this hundreds of times, for hundreds of jobs, in an afternoon.
The "GPT resume" became ubiquitous almost overnight. And here's the fascinating part: they're all identical. Not just similar in tone, but actually structurally the same. Every single one follows the same pattern. "Built a widget that increased revenue by 25%." "Led cross-functional teams to deliver key initiatives." "Drove strategic alignment across stakeholders." It's like they were all written by the same slightly manic management consultant who learned English from reading too many LinkedIn thought leadership posts.
Which means the one thing resumes are supposed to do—help you differentiate between candidates—they now completely fail at. When I talk to heads of talent acquisition, this is the first thing they bring up. "Every resume looks exactly the same. I can't tell anyone apart."
The noise problem, which was already severe, became exponentially worse. And layered on top of that, you started seeing genuinely fraudulent applications at industrial scale. People using AI to generate completely fake work histories. Fake references. Deepfake technology showing up to video interviews.
We decided to run an experiment at SquarePeg. We posted a real role—a remote engineering position—and deliberately interviewed the fraudulent applicants to understand what we were dealing with. Our product manager ended up talking to something like 400 of them over a few months.
The patterns were almost comically obvious once you knew what to look for. They'd sail through the technical assessment because someone else was taking it for them, but then you'd ask a basic ice-breaker question like "what part of Miami do you live in?" and they'd pause for a second and say "the left-hand side." You'd ask about local restaurants and watch them frantically Googling in real-time. Sometimes entirely different people would show up for different stages of the interview process.
This isn't some niche problem affecting a handful of companies. We hosted a panel on fraud recently, and all three TA professionals on stage had hired fraudulent candidates. Some were dealing with active FBI investigations. The chief people officer at one of our customers is in the same situation.
AI has exacerbated an already difficult problem. But the technology is also incredibly well-suited to breaking this vicious cycle.
The thing about large language models is that they're genuinely getting good at two specific capabilities that happen to be exactly what recruiting needs: research and reasoning.
Research, in this context, means something pretty specific. When a candidate lists five companies on their resume, a human recruiter is looking at the company names, the job titles, maybe the dates. That's about all they have time for when they're reviewing hundreds of applications. But AI can go much, much deeper.
What was that company actually doing during the time this person worked there? Did it double in size or shrink by half? Did they raise a major funding round? Did the engineering org grow or get cut? What products did they ship? Who were their competitors? What was the broader market context?
At SquarePeg, we pull in this kind of contextual data for every single applicant. Not scraped GitHub profiles or inferred skills. Company research. We're not just matching keywords against a resume. We're building a picture of what someone actually experienced in their career. Because here's the thing: when every resume claims "drove growth initiatives in a fast-paced startup environment," knowing whether that startup actually grew, whether it was well-funded, whether the candidate's specific department was expanding or contracting—that context changes everything.
A human can't do this research for 1,000 applicants per role. It would take weeks. AI can do it in seconds.
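As a hypothetical illustration of what that research produces, the record per employer might look something like this. The field names and the `enrich_company` helper are invented for the sketch, not SquarePeg's actual pipeline or any real API:

```python
# Hypothetical shape of the "company context" record this research
# produces for each employer on a resume. The fields and the
# enrich_company() helper are invented for this sketch, not
# SquarePeg's actual pipeline or any real API.

from dataclasses import dataclass, field

@dataclass
class CompanyContext:
    name: str
    tenure: tuple[str, str]                 # candidate's dates, e.g. ("2019-03", "2022-06")
    headcount_change: float | None = None   # +1.0 = doubled, -0.5 = halved
    funding_events: list[str] = field(default_factory=list)
    products_shipped: list[str] = field(default_factory=list)
    eng_org_trend: str | None = None        # "growing", "flat", or "shrinking"

def enrich_company(name: str, start: str, end: str) -> CompanyContext:
    """Stand-in for a research step that would query news, funding
    databases, hiring data, and product announcements for the window."""
    ...

# "Drove growth initiatives at a fast-paced startup" reads very
# differently when headcount_change is -0.5 and funding_events is empty.
```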
The second capability is reasoning, which I'm using loosely here. These models are getting surprisingly good at evaluating credibility, detecting inconsistencies, understanding nuance in ways that go beyond simple pattern matching.
If every resume is keyword-stuffed, the model can look past that surface layer and ask: does this person's description of their work actually align with what their company was doing at the time? Are the timelines coherent? Do the claimed accomplishments make sense given the company's stage and the candidate's role? Is there any external evidence that corroborates what they're saying?
This is a completely different kind of filtering than "does this resume contain the word Python." It's closer to what a really experienced recruiter does when they're reading between the lines and picking up on signals that don't quite add up. Except it can happen at scale, for every applicant, not just the 15% who made it past the keyword screen.
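To make one of those checks concrete, here's a minimal sketch of a timeline-coherence pass, with invented role data. A real system would run many such checks and treat hits as flags for review, not verdicts:

```python
# One concrete example of such a check: is the resume's timeline
# internally coherent? A minimal sketch with invented data; a real
# system would run many checks and treat hits as flags, not verdicts.

from datetime import date

def find_timeline_issues(roles: list[dict]) -> list[str]:
    """roles: [{"company": str, "start": date, "end": date}, ...]"""
    issues = []
    ordered = sorted(roles, key=lambda r: r["start"])
    for role in ordered:
        if role["end"] < role["start"]:
            issues.append(f"{role['company']}: ends before it starts")
    for a, b in zip(ordered, ordered[1:]):
        if b["start"] < a["end"]:
            issues.append(f"{a['company']} and {b['company']} overlap")
    return issues

roles = [
    {"company": "Acme", "start": date(2019, 1, 1), "end": date(2022, 6, 1)},
    {"company": "Globex", "start": date(2021, 1, 1), "end": date(2023, 1, 1)},
]
print(find_timeline_issues(roles))  # ['Acme and Globex overlap']
```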
Right now, the dominant strategy for candidates is to make their resume as generic and keyword-optimized as possible. That's what gets through the filter. But as more employers adopt tools that do actual research and reasoning—not just us, there are others working on this problem—that incentive structure is going to flip completely.
If you write "built a widget that increased revenue 25%," you're now indistinguishable from 500 other people who wrote exactly the same thing. But if you provide actual context—what you built, why it was technically interesting, what constraints you were working under, what you learned—that differentiation becomes valuable again.
AI screening tools will start rewarding authenticity and penalizing sameness. Not because we've programmed them to care about authenticity in some abstract moral sense, but because models that can do deep research can actually verify claims and surface what makes someone different.
This probably means we're moving toward a world where candidates include more information, not less. We're already seeing resumes get longer. People are sharing their technical blog posts, conference talks they've given, open source projects they've contributed to. The message is: here's the public record of my actual work, use all of it to understand what I'm capable of.
And crucially, AI can actually engage with all of that data. A human recruiter cannot read through someone's entire GitHub commit history, but a model can analyze it and tell you "this person made substantial contributions to three different open-source projects dealing with distributed systems problems." A recruiter can't read 50 technical blog posts, but a model can assess whether they demonstrate genuine deep understanding or just surface-level familiarity with buzzwords.
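As a rough sketch of the data-gathering half of that, GitHub's public REST API makes it straightforward to assemble a snapshot a model can then analyze. Unauthenticated calls are rate-limited, and the model-side summarization step is deliberately elided here:

```python
# Rough sketch of the data-gathering half: assemble a public GitHub
# snapshot that a model can then analyze. Uses GitHub's public REST
# API (unauthenticated calls are rate-limited); the model-side
# summarization step is deliberately elided.

import requests

def public_github_snapshot(username: str) -> dict:
    url = f"https://api.github.com/users/{username}/repos"
    repos = requests.get(url, params={"per_page": 100}, timeout=10).json()
    by_stars = sorted(repos, key=lambda r: r["stargazers_count"], reverse=True)
    return {
        "repo_count": len(repos),
        "languages": sorted({r["language"] for r in repos if r["language"]}),
        "most_starred": [r["full_name"] for r in by_stars[:3]],
    }

# snapshot = public_github_snapshot("some-candidate")
# ...then feed the snapshot, plus commit history and READMEs, to a
# model for the kind of assessment described above.
```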
The fundamental shift here is from information scarcity to information abundance. For decades, the resume had to be one page because that's all a human could process. Everyone competed to compress their entire career into this tiny, standardized format. Now the constraint isn't human attention—it's synthesis. Can you make sense of a huge amount of disparate information and extract what matters?
That's a problem these models are actually quite good at.
The blatantly fraudulent cases, the North Korean operatives and the deepfakes showing up to video interviews, are actually the tractable part of this problem. Bad actors leave distinctive signals. The email was created last week. The phone number isn't registered in the country they claim to live in. They have zero LinkedIn connections at any company they supposedly worked for. Their resume claims experience with tools that didn't exist yet during the time period in question.
We've built detection systems for this at SquarePeg, and so have others. It's cat-and-mouse—they develop new tactics, we build new detection—but the fundamentally adversarial cases are spottable. We're seeing fraud rates as high as 30% for certain types of remote roles (data analyst and software engineer are the worst), and you can catch the vast majority of it if you're looking for the right signals.
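A toy version of that kind of signal-based detection looks something like this. The signals come from this article; the weights and threshold are invented for illustration and are not SquarePeg's actual model:

```python
# A toy risk score over the signals described above. The signals come
# from this article; the weights and threshold are invented for
# illustration and are not SquarePeg's actual detection model.

FRAUD_SIGNALS = {
    "email_created_within_30_days": 0.30,
    "phone_country_mismatch": 0.25,
    "zero_linkedin_connections_at_claimed_employers": 0.25,
    "claims_tool_before_its_release": 0.35,
}

def fraud_risk(observed: set[str]) -> float:
    # Capped sum of weights; a real system would calibrate carefully.
    return min(1.0, sum(FRAUD_SIGNALS[s] for s in observed))

applicant = {"phone_country_mismatch",
             "zero_linkedin_connections_at_claimed_employers"}
score = fraud_risk(applicant)
print(f"risk={score:.2f}", "-> manual review" if score >= 0.5 else "-> pass")
# risk=0.50 -> manual review
```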
What's harder, and honestly more interesting, is the gray area. Not bad actors trying to commit fraud, but desperate, real candidates who've embellished to the point where it crosses over into dishonesty.
Someone who was a "Director of DEI" reads the political winds, sees that's becoming a liability, and quietly updates their resume to say "Director of AI Operations." Someone who worked at a company that built SaaS products suddenly claims on their resume that they were doing AI/ML work, even though the company didn't actually ship an AI product until after they left. Someone inflates their team size by a factor of three, or claims credit for revenue impact they didn't really drive, or exaggerates their level of responsibility.
This has always happened. People have been embellishing resumes since resumes were invented. But it's easier now, the volume is higher, and the tools that help you generate plausible-sounding lies are dramatically more accessible.
This is where the research capability becomes critical. If someone claims they worked on AI products, you can actually check: what products did that company ship during their tenure? What were the engineering blog posts from that period talking about? What were they actively hiring for? The public record usually tells you what's real and what isn't.
The goal here isn't to be punitive or to play gotcha. The goal is to understand what's actually true so you can surface candidates who bring genuinely relevant experience, rather than people who've just gotten very good at SEO for resumes.
Some of our customers handle this really thoughtfully. When SquarePeg detects heavy keyword overlap, or someone applying to 15 different roles at the company within 60 seconds, they'll send a gentle rejection that says something like: "We noticed you've applied to many roles very quickly. If you're genuinely interested in this specific position, we'd love to see a more tailored application that gives us real context about why you're a fit. Feel free to reapply, and we guarantee our tools will give it a full review."
It's a way of signaling: we can see what you're doing, this approach isn't going to work here, but we're not slamming the door if you're willing to engage authentically.
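The velocity half of that detection is simple to sketch. Here's a minimal version using the "15 roles in 60 seconds" thresholds mentioned above, purely as an illustration:

```python
# Minimal sketch of the velocity signal described above: many
# applications from one candidate inside a short window. The "15 roles
# in 60 seconds" thresholds come from the text; everything else is
# invented for illustration.

from datetime import datetime, timedelta

def is_mass_applying(applied_at: list[datetime],
                     max_roles: int = 15,
                     window: timedelta = timedelta(seconds=60)) -> bool:
    ts = sorted(applied_at)
    for i in range(len(ts)):
        j = i
        while j < len(ts) and ts[j] - ts[i] <= window:
            j += 1
        if j - i >= max_roles:
            return True
    return False

start = datetime(2025, 1, 1, 9, 0, 0)
burst = [start + timedelta(seconds=3 * k) for k in range(20)]  # 20 applies in ~1 min
print(is_mass_applying(burst))  # True
```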
There's a paradox here that's worth sitting with for a minute. As AI gets better at the research and analysis parts of recruiting, the human judgment component becomes more important, not less.
If AI can review 1,000 resumes, pull in contextual data on every company each candidate has worked for, assess credibility and fit, and give you a ranked, reasoned list of the 20 most promising people—then your job as a recruiter fundamentally changes. You're not spending your day doing bulk resume review. You're not scheduling 50 phone screens with people who were never actually qualified. You're not fighting with your ATS or manually updating spreadsheets.
You're doing the thing that actually matters: exercising judgment on candidates who've already cleared a much more sophisticated bar.
That phone screen becomes crucial. The in-person interview becomes crucial. The coding assessment or work sample becomes crucial. Because now you're evaluating people who've been surfaced as genuinely relevant based on deep analysis, and your job is to assess all the things AI fundamentally cannot: how they communicate, how they think through problems in real-time, whether their values align with the team, what they're like to work with.
This is what recruiting used to be before we buried everyone in process work. It's the craft part of the job. Understanding what makes someone likely to succeed in this specific role, at this specific company, at this specific moment in time.
Every TA leader I talk to says some version of the same thing: "I didn't get into this field to spend my day scheduling interviews and reviewing documents. I got into it because I'm good at identifying talent and building relationships with people." AI should be giving you that job back. Not by replacing you, but by clearing away all the noise so you can focus entirely on signal.
When people see how powerful these models are getting, there's a natural temptation to think: why don't we just build this internally? We've got engineers, we've got access to the same APIs everyone else does, how hard could it be?
The answer is: much, much harder than it looks.
We learned this the painful way at SquarePeg. When we started, we were trying to be an all-in-one platform. Sourcing, screening, analytics, messaging, even light ATS functionality. We were building 25 different things. And what we discovered is that it's nearly impossible to do 25 things at an extremely high level of quality. Something always suffers.
So we made a bet that's shaped everything about our roadmap since: we would go all-in on being absolutely the best at one very specific problem. Given a large pool of applicants, how do you identify who's qualified, who's real, and why they're qualified? That's it. That's the whole company.
That focus has let us build things that would be brutally difficult for a company to replicate internally. Deep research pipelines that pull in time-series data on company growth, product launches, funding events, news coverage. Fraud detection that evaluates dozens of signals across identity verification, behavioral patterns, historical consistency. Scoring systems that weight authenticity and contextual fit rather than keyword density.
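To show the shape of that last idea, here's a hypothetical scoring function that weights contextual fit and authenticity above keyword overlap. The features and weights are made up for illustration, not our production model:

```python
# Illustrative only: a scoring shape that weights contextual fit and
# authenticity above keyword overlap. Features and weights are
# hypothetical, not SquarePeg's production model.

WEIGHTS = {
    "contextual_fit": 0.45,    # did their companies face similar problems?
    "authenticity": 0.35,      # do claims line up with the public record?
    "keyword_overlap": 0.20,   # still a signal, just no longer the signal
}

def candidate_score(features: dict[str, float]) -> float:
    """Each feature is a value in [0, 1]."""
    return sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)

keyword_stuffed = {"contextual_fit": 0.2, "authenticity": 0.3, "keyword_overlap": 1.0}
genuine_fit = {"contextual_fit": 0.9, "authenticity": 0.9, "keyword_overlap": 0.4}
print(round(candidate_score(keyword_stuffed), 3))  # 0.395
print(round(candidate_score(genuine_fit), 3))      # 0.8
```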
And even with that singular focus, it's hard. These models are improving fast, but they're also unpredictable. Prompt engineering at scale is still more art than science. Integration with existing systems is always messier than the architecture diagrams suggest. Edge cases multiply. You think you've handled something, and then a customer comes back with a scenario you never imagined.
The companies that are going to win here aren't necessarily the ones building everything in-house. They're the ones that figure out how to orchestrate best-of-breed point solutions—systems of intelligence that plug seamlessly into your system of record. You get the power of highly specialized tools without the nightmare of logging into 17 different applications with 17 different passwords and 17 different user interfaces.
We've been moving hard toward embedded solutions and API-first architecture for exactly this reason. Talent teams don't want another tool. They want the functionality, surfaced in the place where they already do their work. If you're living in Greenhouse or Ashby or whatever ATS you use, that's where the intelligence should show up.
The future I think we're heading toward looks pretty different from what we have now.
Instead of candidates maintaining a single static resume that gets slightly tweaked for each application, they maintain a rich, public record of their actual work. Product contributions, technical writing, conference talks, side projects they built on Lovable or Replit, design portfolios, anything that demonstrates what they're capable of and what they care about.
Instead of companies posting a generic job description and hoping the right people apply during a seven-day window, they're transparent about their goals, their challenges, their growth trajectory. Not just "we need a Python developer" but real context about what problems they're trying to solve and why.
AI sits in the middle as a matchmaker. It understands the company's context deeply—we're scaling our data infrastructure to handle 10x growth, we just hired a new VP of Engineering who's rebuilding the team, our biggest technical challenge is migrating off a legacy system while maintaining uptime for enterprise customers. And it understands the candidate's context deeply—worked at three different companies that went through hypergrowth phases, built data pipelines that successfully handled major scale transitions, has written extensively about exactly these types of database migration challenges.
The match isn't based on keyword overlap. It's based on actual relevance. Did this person solve problems that are similar to what we're facing? Do their career choices suggest they'd be energized by what we're building? Is there credible evidence they can do the work?
We're not there just yet. We're still living in a world of resumes and job descriptions and application deadlines and seven-day windows. But the direction is clear, and we're starting to see early versions of it.
At SquarePeg, we can already take an employer's context (not just the job description, but what we know about their company, their growth stage, their competitive position, their technical challenges) and use that to evaluate not just current applicants, but everyone who's ever applied to the company historically. We can do broader market research on people who might be relevant but haven't applied yet.
You start to move away from "who happened to apply during the week after we posted this job" toward "who in the entire market has experience that would be genuinely valuable to us right now, and what's their level of interest in what we're building?"
That's a completely different model. And it's only viable because AI can do research and reasoning at a scale that humans simply can't.
So where does this leave us?
AI absolutely made hiring worse in the short term. It amplified noise, made fraud easier, turned resumes into indistinguishable slop generated by the same handful of prompts. That's real, and it's not over yet.
But the same technology is also the only realistic path out of the hole we've dug. The core problem we're facing is a catastrophic loss of signal in an overwhelming amount of noise. And that's precisely the kind of problem these models are increasingly good at solving.
The vicious cycle we've been trapped in—more applications leading to cruder filters leading to more gaming leading to more applications—only breaks when we can move beyond keyword matching toward genuine understanding. When we can reward authenticity instead of optimization. When we can do deep research at scale instead of seven-second resume skims.
We're in the messy middle of this transition right now. Adoption is still early, workflows haven't caught up, tech stacks are bloated with tools that don't talk to each other. But the fundamental capabilities are there and they're improving fast.
If I'm right about where this is headed, recruiting in 2026 is going to look and feel different than it has in a long time. Candidates will compete on what makes them genuinely unique rather than how well they can stuff keywords. Recruiters will spend their time on judgment and relationship-building rather than administrative process work. The system will reward signal instead of noise.
There's a pattern in technology where the first version of something makes the problem worse before it makes it better. Word processors initially made documents messier because everyone suddenly had 50 font choices. Email created inbox overload before we figured out filters and search. Social media amplified misinformation before we developed better detection tools.
We're in that first phase with AI and recruiting. The noise is worse than ever. But we're also closer than we've ever been to actually solving the underlying problem. That's not speculation—it's starting to happen. The tools exist. The capabilities are real. Now it's just a matter of adoption, integration, and letting the incentives realign.
Claire McTaggart is the Founder and CEO of SquarePeg, which provides AI-native applicant screening, scoring, and fraud detection for high-volume recruiting teams. SquarePeg works embedded in existing ATS platforms to help companies find signal in the noise of modern hiring.
Experience SquarePeg live and see how we streamline recruiting, rank top talent, and save you hours every hiring cycle.
