What Google Looks for in Coding Interview Candidates
What Google looks for in coding interview candidates: the 4-criterion rubric, where you're losing points, and how to train each skill.
- The four criteria Google's interview rubric actually scores
- Why verification is the hardest criterion and how to train it
- Where most candidates lose points without realizing it
- Which coding patterns Google tests more than other companies
You probably think you know what Google looks for in a coding interview: correct code, written fast. That's one of four criteria. Google interviewers score problem decomposition, coding clarity, verification, and communication. Writing a clean solution without explaining the reasoning routinely scores lower than narrating a slightly imperfect solution while proving each step correct.
That gap between "solved it" and "demonstrated the reasoning" changes how you should prepare.
What Google looks for in a coding interview
Google's interview rubric evaluates four criteria: problem decomposition, coding clarity, verification, and communication. The rubric isn't secret. Google's hiring documentation references these criteria publicly. Yet preparation still gravitates toward "solve the problem" as the only metric. Preparation platforms are optimised for problem solving, not for the other three criteria, so you end up training what the platform measures, not what the interviewer measures. Each criterion tests something different.
- Problem decomposition: the ability to break a problem into the right subproblems before writing any code. Google interviewers watch for whether you identify the core constraint, separate it from the noise, and articulate your plan before touching the keyboard. Jumping straight to code, even if the code is correct, signals weak decomposition.
- Coding clarity: what most people think the whole interview is about. Write readable code with meaningful variable names and handle edge cases. This criterion matters, but it's one of four.
- Verification: the hardest one to score well on. It means proving your solution is correct at each step, not after the fact. Trace variable state through edge cases mentally. Identify where the algorithm could break and explain why it doesn't. Don't wait for the interviewer to ask "what about an empty array?" Address it before they need to.
- Communication: means explaining why you're making each decision as you go. "I'm using a hash map here because I need O(1) lookup for the complement" counts. "I'm writing a for loop" doesn't.
The hardest criterion
Verification is the hardest criterion to train because standard practice environments don't require it. On LeetCode, you write code, hit Run, and the test cases tell you whether it works. That builds a habit of external verification. Google interviews require internal verification: proving correctness through reasoning before the code executes. It's easy to underestimate how much this affects your score.
Take the Punctual Arrival Speed problem, a Google-tagged problem on Codeintuition that uses minimum predicate search. Given a list of distances and a fixed number of hours, find the minimum speed at which you can arrive on time, rounding each segment's travel time up to the next integer.
You'll probably recognise this as a binary search problem. Necessary, but not sufficient for a strong Google score. Verification means walking the interviewer through why binary search on the answer space works here.
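Here's a minimal Python sketch of that solution. The 10**7 cap on speed is an assumption; in a real interview you'd take the bound from the stated constraints.

```python
import math

def min_speed_on_time(dist, hour):
    def on_time(speed):
        # Every segment except the last rounds up to a whole hour.
        total = sum(math.ceil(d / speed) for d in dist[:-1])
        return total + dist[-1] / speed <= hour

    lo, hi = 1, 10 ** 7  # assumed cap; take the real bound from the constraints
    if not on_time(hi):
        return -1  # even the fastest allowed speed arrives late
    while lo < hi:
        mid = (lo + hi) // 2
        if on_time(mid):
            hi = mid  # mid works, so the boundary is at mid or below
        else:
            lo = mid + 1  # mid fails, so every slower speed fails too
    return lo
```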
The four verification steps that earn the score:
1. Identify the monotonic predicate: "If speed S works, every speed greater than S also works." That's the invariant that makes binary search valid on this answer space.
2. Trace the boundary condition: at speed 1, total time equals the sum of all distances. At the maximum distance value, each segment takes at most 1 hour.
3. Explain the discard logic at the midpoint: "If total time exceeds the limit, every speed at or below this midpoint also fails, so we discard the left half."
4. Address the edge case: the last segment doesn't require rounding up, which changes the ceiling calculation for that segment specifically.
Done aloud while coding, this covers decomposition (identified the predicate structure), verification (proved the invariant and traced boundaries), and communication (narrated each decision). The code is fifteen lines. The reasoning around it is what earns the score.
“Binary search finds the answer. Proving why binary search is valid here is what Google scores.”
Where most candidates lose points
The four criteria create four distinct failure modes. You'll typically stumble on one or two, not all four.
- Silent solving: You understand the problem, write the correct solution, but don't explain your reasoning. The interviewer sees correct code but can't evaluate your thought process. This fails communication and partially fails decomposition, because the interviewer can't tell whether you identified the right subproblem or just pattern-matched from memory.
- Skipping decomposition: You jump to code within the first two minutes. Even if the code is correct, the interviewer didn't see you break the problem down. For harder problems, you missed the chance to show the most valuable skill: identifying what kind of problem this is before deciding how to solve it.
- No edge case verification: You write a solution that handles the main case but don't trace what happens with empty inputs, single element arrays, or boundary values. On predicate search problems, forgetting to check the ceiling division edge case on the last segment comes up often.
- Pattern recall without reasoning: You recognise the pattern from practice and apply it correctly, but when the interviewer asks "why does this work?" you can't articulate the underlying invariant. You applied a known solution to a recognised shape without demonstrating that you understand why it's correct. That's the gap between near transfer and the kind of reasoning Google specifically tests.
How the 45-minute format shapes your approach
Google gives you one problem in 45 minutes. That sounds generous until you realise the rubric expects you to spend roughly half that time not writing code.
A strong 45-minute session typically breaks down like this:
- Minutes 0-8: Read the problem, ask clarifying questions, and restate the constraints in your own words. This is where decomposition happens. If you skip it, you've already lost points on one criterion before you've typed anything.
- Minutes 8-15: Talk through your approach. Name the pattern you're considering, explain why it fits, and describe the data structures you'll use. Don't write a single line yet. The interviewer is scoring your reasoning right now.
- Minutes 15-35: Code the solution while narrating. Each decision gets a sentence. "I'm initialising lo at 1 because speed can't be zero." "I'm using ceiling division here because partial hours round up for every segment except the last." This is where coding clarity and communication overlap.
- Minutes 35-45: Trace through at least two test cases manually. One normal case, one edge case. Walk the interviewer through variable state at each step. This is pure verification, and it's where most candidates run out of time because they spent too long coding in silence.
The candidates who struggle aren't the ones who can't solve the problem. They're the ones who solve it in 20 minutes of silent coding and then have nothing structured to show for the remaining 25. Google's format rewards a slower, narrated approach over a fast, quiet one.
What coding patterns Google looks for specifically
Google's pattern emphasis is distinct from Amazon, Meta, and other top-tier companies. Based on problem tagging data across 450+ company-tagged problems, Google leans heavily on predicate search, a pattern where you binary search on the answer space rather than on a sorted array.
Problems like Punctual Arrival Speed, Trip Completion Frenzy, and Minimum Shipping Capacity all carry Google tags. They require a different mental model than classic binary search. Instead of searching for an element, you're searching for the boundary where a condition flips from false to true.
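Stripped of problem-specific details, most of these reduce to one reusable loop. A sketch, with a hypothetical first_true helper:

```python
def first_true(lo, hi, pred):
    # Smallest value in [lo, hi] where pred flips from False to True.
    # Assumes pred is monotonic over the range: F, F, ..., F, T, T, ..., T.
    while lo < hi:
        mid = (lo + hi) // 2
        if pred(mid):
            hi = mid      # boundary is at mid or to its left
        else:
            lo = mid + 1  # boundary is strictly to the right
    return lo
```

Each of the problems above supplies a different pred; the loop never changes.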
Google also tests backtracking (8 company tags across problems like N-Queens and Sudoku), counting through hash tables (9 tags), and sliding window variants (6-7 tags for both fixed and variable). But predicate search is where Google diverges most from other companies. Amazon tests broader pattern diversity. Meta tests more graph and backtracking problems. Google disproportionately rewards the ability to reason about search spaces.
If you're specifically targeting Google, spending time on predicate search patterns (both minimum and maximum variants) has higher ROI than adding more generic medium problems to your count.
Building the skills Google actually evaluates
Three of the four dimensions (decomposition, verification, communication) are process skills. You can't learn them by solving more problems. You learn them by practising the process around problem solving.
Decomposition improves when you train pattern identification explicitly. Before solving any problem, you should be able to name the pattern it belongs to and explain why. If your preparation doesn't include identification training, decomposition stays implicit. The identification lesson for minimum predicate search on Codeintuition, for instance, teaches the structural triggers that signal "this is an answer space binary search" before you attempt any problems.
Verification improves when you practise tracing variable state mentally, step by step, before running code. Hold a small program in your head and step through it as if you were the computer. The 500+ visual walkthroughs on Codeintuition trace every variable at every step. They train you to eventually do the same thing without the visuals.
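As a concrete drill, here's the kind of trace to rehearse aloud, using the on_time check from the sketch earlier on a made-up input:

```python
# Tracing on_time(speed=3) with dist = [1, 3, 2] and hour = 2.7:
#   segment 0: ceil(1 / 3) = 1   -> total = 1
#   segment 1: ceil(3 / 3) = 1   -> total = 2
#   segment 2: 2 / 3 ~ 0.667     -> total ~ 2.667 <= 2.7, so speed 3 is on time
# At speed 2: 1 + 2 + 1.0 = 4 > 2.7, so speed 2 fails.
# Monotonicity then gives the answer: 3 is the minimum on-time speed.
```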
What good verification sounds like in practice
Knowing you should verify isn't the same as knowing how. Here's what strong verification actually sounds like during an interview, using a simple example: checking whether a HashMap solution handles duplicate keys correctly.
You wouldn't just say "I think this handles duplicates." You'd say something like: "If the input contains [3, 3, 7] and I'm looking for a target of 6, my map stores index 0 for key 3. When I reach index 1, I check the map and find key 3 at index 0. Since 0 != 1, that's a valid pair. But if the target were 10, I'd need key 3 to pair with 7, so I'd overwrite key 3's index to 1 before moving on. The overwrite doesn't cause a problem because I've already checked the earlier index."
That's thirty seconds of talking. It covers the normal case, the duplicate case, and the overwrite edge case. The interviewer now knows you understand why the solution handles duplicates, not just that it does.
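For reference, the code being traced is the standard check-then-insert two-sum. A sketch, with the overwrite behaviour called out:

```python
def two_sum(nums, target):
    seen = {}  # value -> most recent index where it appeared
    for i, x in enumerate(nums):
        if target - x in seen:
            return [seen[target - x], i]
        seen[x] = i  # duplicates overwrite; safe, the earlier index was already checked
    return []

print(two_sum([3, 3, 7], 6))   # [0, 1]: the duplicate 3s pair with each other
print(two_sum([3, 3, 7], 10))  # [1, 2]: the overwritten 3 at index 1 pairs with 7
```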
You can practise this without an interviewer. After solving any problem, close the IDE and explain your solution to an empty room. If you can't walk through variable state for two test cases from memory, you don't understand the solution well enough for Google's verification standard. The goal isn't to memorise the code. It's to understand the invariants well enough that tracing them feels natural.
Communication is the one most people neglect entirely. It improves when you practise solving under realistic constraints. Google gives you 45 minutes, no hints, and no problem title. If you've only ever practised in environments where the problem category is visible and you can retry indefinitely, the interview conditions create a mismatch. Codeintuition's Interview Mode hides the problem name, limits your execution attempts, and enforces time pressure by difficulty, which forces narration because you can't rely on trial and error.
For the complete picture on FAANG interview preparation, including how these four criteria fit into a broader preparation roadmap, see our FAANG coding interview preparation guide.
The Searching course covers all five search patterns from first principles, including the predicate search variants Google specifically emphasises. The free Arrays and Singly Linked List courses let you experience the same three phase teaching model (understand the invariant, identify the triggers, apply under pressure) on foundational patterns before moving to the search patterns Google prioritizes. Start there and see if learning why a pattern works before applying it changes how you approach the four criteria Google actually scores.
That Punctual Arrival Speed walkthrough above was fifteen lines of code. The reasoning around it (identifying the monotonic predicate, tracing boundary conditions, explaining the discard logic, catching the ceiling division edge case) covered all four criteria on Google's rubric. That's what training the rubric, not just the code, produces. The solution isn't the hard part. Proving it's correct while you build it is.
Train the full rubric, not just the code
Codeintuition teaches pattern identification and verification through visual walkthroughs before you solve a single problem. See what training all four criteria feels like. Permanently FREE