How to Interview Software Engineers in 2026 (Yes, They Should Use AI)
The popular take right now is that technical interviews are broken. Whiteboarding is theater. Take-homes are meaningless because candidates just feed the prompt to Claude and submit whatever comes back.
I think that's wrong. The formats are fine. What's broken is the bar.
Here's how I'd run interviews today.
Whiteboarding Still Works
When you watch someone work through a problem at a whiteboard, you're not testing whether they can recall the syntax for a binary search. You're seeing how they think under mild pressure. Do they ask clarifying questions? Do they structure the problem before jumping in? Do they talk through their reasoning, or do they just stare at the board and hope inspiration arrives?
Those signals are more valuable today than they were five years ago. The job has changed. You're no longer hiring someone to translate a ticket into working syntax. You're hiring someone to direct AI agents that can produce code at ten times the speed of a human. That person needs to reason about what's being produced, spot architectural problems, and explain why one approach beats another. A whiteboard conversation tells you whether they can do that.
I'd keep asking about data structure tradeoffs and system design. I'd keep asking "what happens to this under load?" and "what's the memory complexity here?" These aren't trivia questions. They're a proxy for whether the person can evaluate AI-generated output at all. Someone who can't explain why a hash map is a better fit than a sorted array here isn't going to catch it when an AI picks the wrong one.
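To make the hash-map-versus-sorted-array point concrete, here's a toy Python sketch of the tradeoff I'd expect a candidate to articulate. The names and numbers are mine, not from any particular interview:

```python
import bisect

n = 100_000
sorted_arr = list(range(n))   # sorted array: O(log n) membership via binary search
lookup_set = set(sorted_arr)  # hash set: O(1) average-case membership

def in_sorted(x):
    """Binary-search membership check against the sorted array."""
    i = bisect.bisect_left(sorted_arr, x)
    return i < len(sorted_arr) and sorted_arr[i] == x

def in_hash(x):
    """Constant-time (average case) membership check against the hash set."""
    return x in lookup_set

# Workloads dominated by point lookups favor the hash set; workloads that
# need ordered traversal or range queries ("everything between a and b")
# favor the sorted array. That's the judgment call, not the syntax.
assert in_sorted(99_999) and in_hash(99_999)
assert not in_sorted(n) and not in_hash(n)
```

A candidate who can narrate the comment in that block, unprompted, is someone who will notice when an AI reaches for the wrong structure.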
The interview should feel like a technical conversation, not an exam. Two people working through a problem together, pushing back on each other, exploring tradeoffs. That's the format. The whiteboard is just the surface.
Take-Homes Should Be 10x Harder
Take-home exercises are still a good tool. And candidates should absolutely use AI. Banning AI tools at this point is like banning autocomplete. These are the tools of the job.
The mistake is giving the same take-home you gave in 2021. A weekend project that a competent developer could finish in three hours. A candidate who's good at prompting can generate something passable in forty minutes and submit it. You've learned almost nothing.
The fix is simple: make the take-home hard enough that you can't get a good result without using AI effectively. A task that would take a strong engineer a full weekend, and maybe an extra evening. Complex enough to force real architecture decisions. Broad enough in scope to have edge cases worth thinking about. Room to do something genuinely impressive.
Then evaluate the output on two dimensions.
Two Dimensions to Assess
The first is the engineering quality of the code. This is what you've always evaluated, and it still matters:
- What architecture decisions did they make, and can they justify them?
- Is the code clean, or is it AI slop? (Verbosity, over-abstraction, and unnecessary error handling everywhere are the obvious tells.)
- Does it handle the edge cases, or did they stop at the happy path?
- Would you want to maintain this six months from now?
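To illustrate the "slop" tell, here's a deliberately exaggerated sketch. `NumberFilterService` is a made-up name, and no AI produces exactly this, but the shape will be familiar to anyone who has reviewed generated code:

```python
# The over-abstracted shape: a wrapper class for a one-liner, defensive
# try/except around code that cannot fail, and silent error swallowing.
class NumberFilterService:
    def __init__(self, numbers):
        self.numbers = numbers

    def execute(self):
        try:
            result = []
            for number in self.numbers:
                if number is not None and number > 0:
                    result.append(number)
            return sum(result)
        except Exception:
            return 0  # hides real bugs instead of surfacing them

# The same logic, written by someone who understands the problem:
def sum_positive(numbers):
    return sum(n for n in numbers if n > 0)

assert NumberFilterService([3, -1, 2]).execute() == sum_positive([3, -1, 2]) == 5
```

Both versions pass the same tests. Only one of them would you want to maintain six months from now.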
The AI makes this harder to evaluate, not easier, because passable-looking code is cheaper to produce than ever. You need to dig. Ask them to walk you through the trickiest part. Ask why they structured the data model that way. Ask what they'd change if the requirements added a new constraint.
The second dimension is how they used AI. This is the new one, and I think it's where you learn the most:
- What tools did they use? Did they set up automation, or did they paste prompts into a chat window all day?
- Did they plan the architecture first and then implement, or did they just prompt and accept whatever came back?
- How did they verify the output? Did they write tests? Run the code under realistic conditions?
- Can they tell you what the AI got wrong, and how they fixed it?
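As a concrete sketch of what "verify the output" means, imagine the AI produced a small `slugify` helper (hypothetical, and far smaller than a real take-home). Verification is deliberate edge-case probing, not eyeballing:

```python
import re

# Suppose this is what the AI generated:
def slugify(text):
    """Lowercase, collapse non-alphanumeric runs to '-', trim the ends."""
    text = text.strip().lower()
    text = re.sub(r"[^a-z0-9]+", "-", text)
    return text.strip("-")

# The candidate's job is to probe where it might be wrong, not to admire it:
assert slugify("Hello, World!") == "hello-world"
assert slugify("  spaces  ") == "spaces"
assert slugify("---") == ""            # degenerate input: nothing survives
assert slugify("Déjà vu") == "d-j-vu"  # non-ASCII silently dropped -- acceptable?
```

That last assertion is the interesting one: the code "works," but whether dropping accented characters is correct is a product decision the AI quietly made. Catching that is the skill.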
The candidates who just prompted their way through will show it. The code will be generic. The architecture will be textbook. They won't be able to explain the tradeoffs because there weren't any real decisions. The candidates who treated AI as a serious engineering tool will have a story: here's what I planned, here's what I delegated, here's where the AI produced something I didn't like and here's what I changed.
That story is the signal.
What Makes a Great Engineer in 2026
Hiring has always been about predicting future value. The question is whether your current evaluation actually tests for what makes someone valuable now.
Here's what I'm looking for.
Reading code critically. When 80% of the code in your codebase is machine-written, the most important skill is the ability to look at that code and see what's wrong with it. The architectural smell. The missing edge case. The security assumption that only holds in the happy path. An engineer who can't do this is going to approve a lot of bad code.
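Here's a tiny illustration of the kind of thing I mean, using a made-up file-upload handler. The unsafe version would sail through a casual review:

```python
import os

BASE_DIR = "/srv/uploads"  # hypothetical upload root for this sketch

def unsafe_path(filename):
    # Happy-path assumption: filename is a simple name like "report.pdf".
    # "../../etc/passwd" walks straight out of BASE_DIR.
    return os.path.join(BASE_DIR, filename)

def safe_path(filename):
    # Resolve the path first, then verify it stayed inside the base directory.
    path = os.path.realpath(os.path.join(BASE_DIR, filename))
    if not path.startswith(BASE_DIR + os.sep):
        raise ValueError("path escapes base directory")
    return path

assert unsafe_path("../../etc/passwd") == "/srv/uploads/../../etc/passwd"
assert safe_path("report.pdf") == "/srv/uploads/report.pdf"
```

The unsafe version is syntactically clean, well-named, and wrong in a way that only shows up when an attacker supplies the input. That's exactly the review skill machine-written codebases demand.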
Thinking in systems. Not "does this function work?" but "how does this interact with the rest of the system over time, under load, when the requirements shift six months from now?" This is harder to test than syntax, but you can surface it in a whiteboard conversation by asking a follow-up question after they've solved the immediate problem. "What breaks first as this scales?"
Product judgment. Can they look at a feature and say "this solves the wrong problem" or "this is technically correct but users are going to hate it"? This matters more as AI makes individual feature implementation cheaper. The bottleneck moves to deciding what to build. Engineers who can engage with that question are significantly more valuable than engineers who wait for someone else to answer it.
Directing AI well. Can they scope a task clearly enough that an AI agent can execute it? Can they break work into pieces, verify the output, and course-correct when the agent goes sideways? This is a real skill and it's not the same as "can they use Cursor."
The AI-Native Junior
Junior hiring dried up fast after AI tools went mainstream. New grad hiring dropped roughly 50% from pre-pandemic levels by 2024, and two-thirds of enterprises reduced entry-level positions. The logic seemed obvious: if AI tools produce output at a junior level, why pay a junior to do it?
Then things shifted. OpenAI and Anthropic started hiring juniors. Shopify kept around 1,000 interns a year. GitHub, Cloudflare, Netflix, and Amazon reversed their freezes. The GitHub CEO summed up the reasoning: "young people who are coming out of college, they're like AI native."
The AI-native junior is a real category. Not someone who needs six months to become productive. Someone who has used AI tools from day one, who thinks in terms of directing agents rather than typing code, and who can contribute quickly under good technical mentorship.
That last part is important. The model that works is a small number of senior engineers who own architecture and quality standards, paired with AI-native juniors who execute under that guidance. Think film crew, not factory floor.
That said, the knowledge gap is real. Addy Osmani described AI-generated junior output as "house of cards code": it looks complete, it passes review, and then it collapses when production throws something unexpected at it. The foundational knowledge that used to come from struggling through hard bugs is just absent.
The companies hiring juniors well are investing in the right kind of training alongside the tool fluency. They're pairing juniors with senior mentors, insisting on understanding the code rather than just producing it, and giving them problems with real complexity. The ones getting it wrong are treating juniors as prompt-typists and then wondering why the codebase is getting worse.
If you're interviewing juniors today, I'd add one question: ask them to explain something the AI built. Not walk through it. Explain it. Why this data structure? What does this function actually do? Why didn't you use a simpler approach here? If they can answer those questions, they're the kind of junior who's going to compound. If they can't, they're a prompt-typist regardless of what the take-home looks like.
The Mid-Level Squeeze
There's a dynamic in the hiring market right now that doesn't get discussed enough, and it hits engineers with three to seven years of experience hardest.
AI-native juniors with strong senior mentorship can produce output that rivals a mid-level engineer working alone. And seniors bring architectural judgment that AI amplifies rather than replaces. The mid-level engineer with comfortable, stagnant skills is getting squeezed from both sides.
"I know React" or "I've used Kubernetes" is no longer a moat. Not because those skills are worthless, but because the supply of people who can use AI to produce acceptable React or acceptable Kubernetes configuration has grown enormously. Commodity skills stay commodity.
The path through is to go deeper and broader at the same time. Understand why eventual consistency makes sense for this system but not that one. Understand the data model well enough to design it from scratch. Understand what the user actually needs well enough to push back on the spec. Develop the director mindset: running parallel AI agents, scoping work for autonomous execution, verifying output systematically.
The mid-level opportunity most people miss is product judgment. You've been close to the code long enough to see implementation tradeoffs. You've been around long enough to have seen the difference between a feature that shipped well and one that didn't. That's context you can build on. Develop the habit of asking "should we build this?" before "how do we build this?" and you start operating as a tech lead, not just a developer.
AI speeds up whatever you already are. If you stopped growing two years ago, it's going to expose that gap. If you're actively building judgment and fundamentals, it makes you significantly more valuable.
Advice for Candidates
If you're on the other side of the table, here's what I'd focus on.
Build something real with AI tools and then document the process, not just the result. What did you plan before you opened Cursor? What did you delegate? What did the AI produce that you rejected, and why? What would you change if the scope expanded? The decision-making is what the hiring manager wants to see, not just a working demo.
The best version of this is building something that earns real money from real users. Even a dollar. It forces you through the entire loop: product thinking, architecture, implementation, deployment, iteration based on actual feedback. That's a far stronger signal than a polished side project that never left localhost.
Be honest about where you are on the AI adoption curve. Steve Yegge described a six-level scale ranging from "no AI tools at all" up to "running several parallel agents." If you're at level one or two, that's worth addressing. Yegge put it bluntly: "You're going to get fired and you're one of the best engineers I know." You don't need to be at the top of the scale. But you should be moving up it.
When you talk about your work in interviews, talk about the "why." Why this architecture and not the simpler one? Why did this tradeoff make sense? Why does this edge case matter more than that one? "What" is cheap now. Any AI tool can generate what. "Why" is what shows judgment, and judgment is what gets you hired.
Only 3% of developers report high trust in AI output, according to Stack Overflow's 2025 Developer Survey. Companies are desperate for engineers who can close that gap. If you can show that you read AI-generated code critically, spot what's wrong with it, and produce something better, you're demonstrating exactly the skill that's hardest to find right now.
I'm writing a book called "How to Be a Great Software Engineer in the Age of AI" that covers these ideas in depth, including the director mindset, the AI-native junior model, and how to build the kind of judgment that compounds over a career.
Recommended Resources
- Stack Overflow Developer Survey 2025, 3% trust figure and broader developer attitudes toward AI tools
- DORA 2025 Report, AI's effect on software delivery performance, including the quality trust gap
- Steve Yegge's posts on the AI adoption scale, worth finding through his GitHub blog
- Addy Osmani's writing on AI-assisted engineering and "house of cards code"