Hic Sunt Dracones

tomo@hicsuntdra.co

Why is Tech Interviewing Broken?

Why are technical interviews seen as broken these days? (Photo by SEO Galaxy on Unsplash)

Being an exceptional developer and interviewing well for that same job are two different skills.

It’s a disconnect that frustrates everyone. The best engineers might never make it past the first round, while smooth talkers who’ve memorized algorithm patterns sail through.

The question isn’t whether technical interviewing is broken; it’s what we can do to fix it.

The Root of the Problem

Technical interviews are broken because they test the wrong things in the wrong ways. Consider the typical experience: candidates spend weeks preparing to solve problems that bear no resemblance to their future work, all while operating under constraints that would never exist in a real-world environment.

The relevance gap is striking. Front-end developers face questions about red-black trees. DevOps engineers are challenged with dynamic programming. These topics have little bearing on day-to-day effectiveness.

We’ve also stripped away real-world context. Programming happens within teams, with documentation, existing codebases, and the ability to research and iterate. Remove these elements, and you create a fake environment that reveals more about performance under bizarre conditions than it does about actual engineering capability.

The stress factor exacerbates the situation. Companies claim they want to see how candidates operate under pressure, but whiteboard anxiety bears no resemblance to debugging production issues or meeting deadlines. This forced stress causes talented engineers to underperform, distorting their actual abilities.

Finally, there’s inadequate preparation on both sides. While candidates spend months practicing, interviewers often wing it without clear evaluation criteria or calibration with their team, creating inconsistent and unfair assessments.

The Whiteboard Interview Paradox

The pros and cons of the whiteboard interview (Photo by Kaleidico on Unsplash)

The whiteboard interview remains both the most hated and the most common format, a contradiction that reveals something important about technical hiring.

The criticism is valid. When did you last implement a breadth-first search from memory, without documentation, while someone watched over your shoulder? This artificial environment tests memorization and performance anxiety rather than problem-solving ability.
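To make the point concrete, here is roughly what such a question asks for: a textbook breadth-first search, a few lines that are trivial to look up but awkward to reproduce cold at a whiteboard. (This sketch is purely illustrative and not drawn from any particular interview.)

```python
from collections import deque

def bfs_order(graph, start):
    """Return nodes in breadth-first order from `start`.

    `graph` is an adjacency mapping: node -> list of neighbors.
    """
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return order

# A small example graph.
graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(bfs_order(graph, "a"))  # ['a', 'b', 'c', 'd']
```

With documentation and an editor at hand, this is a ten-minute task; recalling the queue-and-visited-set pattern from memory while being watched is a different exercise entirely, and that gap is exactly the criticism.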

These interviews also eliminate collaboration—a core aspect of software development. They tell us nothing about a candidate’s ability to work with others, incorporate feedback, or contribute to team dynamics.

Yet whiteboard interviews persist for practical reasons. They offer scalability and consistency that many other formats lack. For companies conducting hundreds of interviews, standardized formats that can be easily administered provide enormous operational benefits. The standardization also helps reduce bias by ensuring that every candidate faces similar challenges and is evaluated against consistent criteria.

The key insight? Whiteboard interviews aren’t inherently evil—they’re just insufficient on their own. Used in conjunction with pair programming and technical discussions, they can provide valuable insights. The problem is when they become the sole determinant of a candidate’s fate.

The Take-Home Project Dilemma

Take-home projects seemed like the perfect solution. Candidates can demonstrate skills in a comfortable environment, using familiar tools, with time to think. They can showcase not just coding ability but also attention to detail, testing practices, and documentation skills.

They excel at evaluating fundamental individual contributor skills—how candidates approach problems, structure solutions, and consider maintainability. But they create new problems.

The time investment is unsustainable. What companies call “four hours” often becomes an entire weekend. For candidates interviewing with multiple companies, this becomes an impossible burden that raises ethical questions about unpaid work.

There are also fairness concerns. Did the candidate have a quiet weekend to focus on their work? How much help did they receive? These unknowns make a fair comparison difficult.

Success with take-home projects requires careful scoping and honest time estimates. Some companies now offer compensation, acknowledging the value of candidates’ time and effort.

AI: The New Reality We Can’t Ignore

AI assistants, such as Claude and ChatGPT, have become integral to modern development. Pretending they don’t exist during interviews creates yet another artificial constraint that distances interviews from actual work.

Instead of viewing AI as a threat, we should see it as an opportunity to evolve our assessments. The real question isn’t whether candidates can solve problems without AI; it’s how effectively they solve problems with it.

Watching candidates interact with AI reveals crucial insights. Do they blindly implement suggestions or carefully evaluate output? Can they recognize when AI has proposed a suboptimal solution? How well do they decompose complex problems into AI-solvable components?

Even with AI assistance, time constraints continue to be a significant factor. Integrating AI-generated code, debugging issues, and ensuring the overall approach makes sense still requires genuine technical skill. By incorporating AI into interviews, we assess the skills that actually matter in today’s development environment.

What Should the Ideal Technical Interview Look Like?

The use of take-home projects in technical interviews (Photo by Jainath Ponnala on Unsplash)

The ideal technical interview doesn’t exist, but the ideal technical interview process does. It assesses four key characteristics of a top software engineer:

  1. Technical and coding skills,
  2. Understanding the engineering context,
  3. Soft skills (communication, collaboration, and empathy),
  4. Adaptability.

Here’s how to assess them:

Start with real-world problems. Ask candidates to fix an actual bug from your codebase or implement a simplified version of a feature you’ve built. This shows how they navigate real code, understand patterns, and make practical trade-offs.

Include pair programming. Code alongside candidates to see not just their technical skills but their communication style, receptiveness to feedback, and collaborative approach. This mirrors actual development work far better than solo performances.

Have technical discussions. Explore past projects, technical decisions, and lessons learned. Ask candidates to explain complex systems they’ve built and trade-offs they’ve made. These conversations reveal how they think, not just what they know.

Don’t skip behavioral assessment. Understanding how candidates have handled conflict, learned from failure, or driven technical decisions provides crucial context for their technical abilities. These stories often reveal more about potential impact than any coding exercise.

Embrace AI as part of the process. Let candidates use the tools they’d actually use on the job. Evaluate how they leverage these tools, not whether they can work without them.

Respect candidates’ time. If you use take-home projects, scope them accurately. Consider compensation. Make the interview process itself a positive experience that reflects your engineering culture.

Fixing What’s Broken

Fixing technical interviews means accepting that a perfect evaluation is impossible. Instead, focus on gathering diverse signals that collectively paint a picture of a candidate’s potential.

Your interview process reveals your values. Are you looking for individuals who perform under pressure or those who thoughtfully solve real-world problems? Do you value memorization or learning ability? Competition or collaboration?

The companies that adapt most successfully will be those that remain focused on the fundamental question: what makes someone successful in this specific role, at this particular company, at this exact time?

The technical interview process may be broken, but it’s not beyond repair. By acknowledging its flaws and thoughtfully designing alternatives, we can create assessments that are not just more effective but more humane. In doing so, we build not just better teams, but a better industry.

🐉 March 31, 2025