We Stopped Asking Leetcode
Feb 18, 2026
Mahdin M Zahere
We used to ask leetcode in interviews. We stopped. Here's what we do instead.
I get why leetcode became the standard. It's objective, it's scalable, and it gives you a number you can compare across candidates. The problem is that it tests one thing — your ability to solve leetcode problems. It doesn't tell you if someone can debug a production issue at 2 AM, design a clean API, or ship a feature end-to-end without hand-holding.
We had engineers who crushed our coding rounds and then struggled to scope a real feature. We had candidates who bombed the algorithm question but had shipped products used by thousands of people. The signal-to-noise ratio was garbage.
So we killed it.
What we do instead
Every engineering candidate at Surface Labs gets a real-world project. Not a toy problem. Not a timed puzzle. A task that mirrors actual work they'd do in the first month on the job.
For a backend role, it might be: "Here's a simplified version of our lead routing system. A lead comes in with these attributes. Design the routing logic, handle edge cases, and write the tests." It takes 3–4 hours. We pay candidates for their time. And we review the project the way we'd review a PR — looking at design decisions, trade-offs, code quality, and how they handle ambiguity.
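For concreteness, here's the rough shape of a submission we might expect. This is a minimal sketch with invented attribute names and routing rules, not our actual system — the point is that it exercises design, edge cases, and tests rather than algorithm recall:

```python
from dataclasses import dataclass

# Hypothetical lead model — field names are invented for illustration.
@dataclass
class Lead:
    region: str
    company_size: int
    source: str  # e.g. "demo_request", "pricing_page"

def route_lead(lead: Lead) -> str:
    """Return the queue a lead should be routed to."""
    # Edge case: a missing or unknown region falls back to a general queue.
    if not lead.region:
        return "general"
    # High-intent sources from large companies go to enterprise reps.
    if lead.source == "demo_request" and lead.company_size >= 500:
        return f"enterprise-{lead.region}"
    # Everything else routes by region.
    return f"smb-{lead.region}"

# The kind of tests we'd want to see alongside the logic:
assert route_lead(Lead("emea", 1200, "demo_request")) == "enterprise-emea"
assert route_lead(Lead("us", 40, "pricing_page")) == "smb-us"
assert route_lead(Lead("", 40, "pricing_page")) == "general"
```

A reviewer reads this the way they'd read a PR: is the fallback behavior deliberate, are the thresholds named and justified, do the tests cover the ambiguous cases?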
For a frontend role, it might be: "Build a small feature in our form builder. Here's the design, here's the API spec. Ship it." We're looking at how they structure components, how they think about state management, and whether the final product actually works — not whether they memorized the optimal solution to "number of islands."
For more senior roles, we do a system design session — but with our actual architecture, not a hypothetical "design Twitter." We walk through real problems we're solving and see how they think about the trade-offs we face every day.
Why this works better
Three reasons.
It's fair. Leetcode rewards people who've grinded hundreds of practice problems. Our approach rewards people who've shipped real software. We'd rather hire someone who spent their weekends building a side project than someone who spent them on LeetCode Premium.
It predicts performance. The tasks we give candidates are close enough to real work that performance on the task is a reasonable predictor of performance on the job. We've been running this process for over a year, and the correlation between project quality and on-the-job performance is significantly better than what we saw with leetcode.
It respects the candidate's time — and ours. The candidate gets to show what they're actually good at. We get to evaluate what actually matters. Nobody walks out feeling like they got rejected because they couldn't implement Dijkstra's algorithm from memory.
The controversial take
If your interview process looks nothing like the actual job, you're filtering for the wrong people. You're building a team of great interviewers, not great engineers.
I've talked to senior engineers at top companies who can't pass their own company's coding screen. Think about what that means. The process they use to hire would reject them. That's broken.
We're not claiming our process is perfect. It takes more effort to run. It's harder to standardize. And it requires the interviewers to actually engage with the candidate's work instead of checking boxes on a rubric.
But we're hiring engineers who ship. And that's what matters.
What's the worst interview question you've ever been asked?


