Two Sum Deconstructed: More Than an Array Problem
The two sum problem teaches complement search, hash maps, and tradeoffs. Trace the derivation step by step.
- Why brute force Two Sum is slow, and what specifically causes the `O(n²)` blowup
- How complement search turns a nested loop into a single pass
- How to trace the hash map solution frame by frame
- Why the Two Sum pattern generalises to Three Sum, Four Sum, and subarray problems
What makes the two sum problem the most frequently asked coding interview question across Google, Amazon, Meta, and practically every company that tests DSA?
The difficulty is low, but the solution, done properly, demonstrates three ideas interviewers want to see you reason about: complement search, hash table acceleration, and the time space tradeoff. It's tempting to memorise the two line hash map answer and move on. That's the wrong takeaway. The real value is understanding why it works.
The O(n) solution replaces a nested search with a hash map lookup by recognising that the needed value is always deterministic. Understanding this complement insight matters more than memorising the code.

What the two sum problem actually asks
The two sum problem gives you an array of integers and a target. You need to find two elements that add up to the target and return their indices.
The brute force approach is straightforward. For every element, you scan the rest of the array looking for its partner. You end up with two nested loops where every element checks every other element, producing O(n²) comparisons.
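A minimal sketch of the brute force (function and variable names here are illustrative, not from a specific reference implementation):

```python
def two_sum_brute_force(nums, target):
    """Check every pair (i, j) with i < j: O(n^2) comparisons in the worst case."""
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):  # linear search for nums[i]'s partner
            if nums[i] + nums[j] == target:
                return [i, j]
    return []  # no valid pair
```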
The inner loop is the bottleneck. For each element at position i, you're asking whether any element after it adds up to the target. That question requires scanning up to n-1 remaining elements. Multiply that across n starting positions and you get n × n comparisons in the worst case.
Take a concrete case. With nums = [2, 7, 11, 15] and target = 9, the brute force checks (2,7), then (2,11), then (2,15), then (7,11), then (7,15), then (11,15). It finds (2,7) on the first check and returns. But imagine the pair sits at the very end of a 10,000-element array. The algorithm grinds through nearly 50 million comparisons before finding it. That's what happens when you don't know what you're looking for.
Most explanations skip something here. The inner loop isn't doing anything sophisticated. It's performing a linear search for a specific value. And linear search is exactly the operation that hash tables exist to eliminate.
The complement insight behind the two sum problem
Before touching any data structure, there's a conceptual shift that makes the O(n) solution possible.
When you're at element nums[i], you don't need to search for "some element that works." You know exactly what you need. The value target - nums[i] is the complement. It's deterministic. There's no ambiguity, no range to check, no condition to evaluate. One subtraction gives you the exact value you're looking for.
Two Sum's O(n) solution works by storing each visited element in a hash map and checking whether the complement (target minus the current element) already exists. This converts a nested loop into a single pass. The tradeoff is O(n) extra space for the hash map.
This reframing changes the problem entirely. Instead of "find a pair that sums to target," the question becomes "for each element, check whether its complement has already appeared." The first framing requires comparing pairs. The second requires a single lookup per element.
“The complement is deterministic. You don't search for a match. You compute it.”
From O(n²) to O(n) with a hash map
Once you know the complement, the remaining question is simple. How do you check whether a value already exists in O(1)? That's what a hash map gives you. You trade O(n) space for O(1) lookups. The solution, traced step by step:
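A minimal sketch of the single-pass hash map solution (names are illustrative):

```python
def two_sum(nums, target):
    """Single pass: check for the complement before storing the current element."""
    seen = {}  # value -> index where it appeared
    for i, num in enumerate(nums):
        complement = target - num  # the exact value we need, computed, not searched
        if complement in seen:     # check first...
            return [seen[complement], i]
        seen[num] = i              # ...then store
    return []  # no valid pair
```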
Walk through it with nums = [2, 7, 11, 15] and target = 9.
| Step | Element | Complement | Map before lookup | Found? | Action |
|------|---------|------------|-------------------|--------|--------|
| 0 | 2 | 7 | {} | No | Store {2: 0} |
| 1 | 7 | 2 | {2: 0} | Yes, at index 0 | Return [0, 1] |
At step 1, you compute complement = 9 - 7 = 2. You check whether 2 exists in the map. It does, stored at index 0. Done in one pass through the array, with O(n) time and O(n) space.
Notice what didn't happen. The array wasn't sorted (which would lose the original indices). No two pointers were needed (which requires sorted input). There was no backward scan at all. A single forward pass handled everything because the hash map remembered what came before.
Common mistakes when implementing two sum
The hash map solution is short, but there are a few traps that catch people in interviews more often than you'd expect.
- Inserting before checking: If you add `nums[i]` to the map and then check whether `complement` exists, an element can match with itself. Consider `nums = [3, 2, 4]` with `target = 6`. If you store `3` first and then look for `6 - 3 = 3`, you'll find the element you just stored. The fix is simple: always check the map before inserting.
- Returning values, not indices: The problem asks for positions in the original array, not the elements themselves. It's an easy thing to mix up when you're writing quickly under time pressure. Using `enumerate()` in Python, or tracking the index manually in other languages, keeps you on the right side of this.
- Mishandling duplicates: If the input is `nums = [3, 3]` and `target = 6`, the solution still works correctly because you check for the complement before storing the current element. At index 0, you look for `3` in an empty map, don't find it, and store `{3: 0}`. At index 1, you look for `3` again, find it at index 0, and return `[0, 1]`. The order of operations handles duplicates naturally, but only if you don't reverse it.
- Assuming one solution: The standard LeetCode version guarantees a unique answer, but in interviews the problem might not come with that guarantee. You might be asked to return all valid pairs, or to handle the case where no pair exists. Clarifying constraints before writing code is part of what interviewers evaluate. Asking "can I assume exactly one valid pair?" takes five seconds and prevents a rewrite halfway through your solution.
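The self-match bug from the first mistake is easy to reproduce. Here is a deliberately broken sketch (illustrative names) that inserts before checking:

```python
def two_sum_buggy(nums, target):
    """WRONG: stores the current element before the complement check,
    so an element can pair with itself."""
    seen = {}
    for i, num in enumerate(nums):
        seen[num] = i                 # store first: this is the bug
        complement = target - num
        if complement in seen:        # may find the element just stored
            return [seen[complement], i]
    return []
```

With `nums = [3, 2, 4]` and `target = 6`, this returns `[0, 0]`: the `3` at index 0 pairs with itself. Swapping the two steps (check, then store) fixes it.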
Why this pattern generalises
The complement insight extends well beyond the two sum problem. The same reasoning appears whenever you need to find elements that satisfy a relationship to a known target.
Three Sum reduces to Two Sum. You fix one element, compute the remaining target, and run Two Sum (or two pointers on the sorted remainder) on what's left. The jump from Two Sum to Three Sum isn't a new algorithm. If you understand why complement search works, Three Sum's structure becomes visible before you read the solution.
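One way to sketch that reduction (an illustrative version that returns unique value triples rather than indices, using a hash-based Two Sum on the remainder):

```python
def three_sum(nums, target):
    """Fix one element, then run a complement-based Two Sum on the rest.
    Returns the set of unique value triples that sum to target."""
    nums = sorted(nums)  # sorting makes deduplication of triples easy
    triples = set()
    for i in range(len(nums) - 2):
        remaining = target - nums[i]       # the reduced Two Sum target
        seen = set()
        for num in nums[i + 1:]:
            if remaining - num in seen:    # complement check, exactly as in Two Sum
                triples.add((nums[i], remaining - num, num))
            seen.add(num)
    return triples
```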
The same pattern appears in Subarray Sum Equals K, where you're looking for a prefix sum complement rather than an element complement. The data is different, but the reasoning is the same. You know what you need, you store what you've seen, and you check on each step. The prefix sum version stores running sums instead of individual elements, but the logic is identical. Compute the target difference and check the map.
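A sketch of the prefix sum version (illustrative names, counting subarrays that sum to `k`):

```python
from collections import defaultdict

def subarray_sum(nums, k):
    """Count subarrays summing to k via prefix sum complements.
    A subarray ending at the current position sums to k exactly when
    some earlier prefix equals prefix_sum - k, so we look that value up."""
    counts = defaultdict(int)
    counts[0] = 1              # the empty prefix
    prefix = 0
    total = 0
    for num in nums:
        prefix += num
        total += counts[prefix - k]   # complement lookup, as in Two Sum
        counts[prefix] += 1           # store what we've seen
    return total
```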
The complement idea stretches further than you'd expect. Even in problems that don't resemble Two Sum on the surface, the core question is often the same. Can you compute what you need instead of searching for it? That question alone eliminates entire categories of brute force approaches. It shows up in binary search on answer space, in prefix sum difference queries, and in DP state transitions where the optimal substructure depends on a computed remainder. The implementation varies, but the mental move stays the same.
This is also why Two Sum is a better learning problem than most people give it credit for. It's easy to dismiss because the code is short. But the reasoning behind it, the shift from "search for something" to "compute what you need and look it up," is a skill that compounds. You'll get the most out of Two Sum by spending time understanding the derivation, not by moving on to the next problem as fast as possible.
The tradeoff interviewers want you to explain
Interviewers don't just want the O(n) solution. They want you to articulate why you chose it over the brute force, and what you gave up.
The hash map solution trades space for time. In most interview contexts, that's the right tradeoff. Memory is cheap and n is large enough that O(n²) becomes too slow. But there are cases where O(1) space matters, particularly in embedded systems or memory constrained environments. Naming those cases shows the interviewer you're reasoning about the tradeoff, not just reciting the faster solution.
There's also the sorted array variant. If the input is already sorted (or you're allowed to sort it), the two pointer approach gives you O(n) time with O(1) space. Two pointers start at opposite ends and move inward based on whether the current sum is too large or too small. That gets you the best of both dimensions, but it only works when ordering is preserved or indices don't matter.
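The two pointer variant can be sketched like this (illustrative names; assumes `nums` is sorted):

```python
def two_sum_sorted(nums, target):
    """Two pointers on a sorted array: O(n) time, O(1) extra space."""
    left, right = 0, len(nums) - 1
    while left < right:
        current = nums[left] + nums[right]
        if current == target:
            return [left, right]
        if current < target:
            left += 1    # sum too small: move to a larger value
        else:
            right -= 1   # sum too large: move to a smaller value
    return []  # no valid pair
```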
Presenting the hash map solution and stopping is the default. Describing the two pointer variant and explaining when each applies demonstrates something interviewers notice. You don't need to implement both variants. You need to know why both exist and what constraints make one preferable.
Codeintuition's Arrays course teaches the two pointer pattern through its understanding lesson before you attempt any problems. The complement insight, the two pointer variant, and the hash map acceleration are all taught as derivations, not as solutions to memorise.
Edge cases and follow-up variants
Interviewers often use Two Sum as a launching point for harder questions. Knowing the common follow-ups ahead of time lets you answer them without scrambling.
- Sorted input: If the array is already sorted, the interviewer expects you to mention the two pointer approach without being prompted. Starting one pointer at the beginning and one at the end, you move the left pointer right when the sum is too small and the right pointer left when the sum is too large. This gives `O(n)` time with `O(1)` space, which is strictly better than the hash map version when sorting is free.
- Return all pairs: The standard problem guarantees one answer. But if there are multiple valid pairs, you'll need to collect results instead of returning immediately. The hash map approach adapts well here. Instead of storing a single index per value, store a list of indices. Then for each complement match, generate all valid `(i, j)` combinations. The time complexity stays `O(n)` for the scan, though the output size depends on how many pairs exist.
- Large negatives: Two Sum works identically with negative values. The complement `target - nums[i]` produces the correct value regardless of sign. This trips people up more in mental walkthroughs than in code, because negative arithmetic under pressure leads to sign errors. If you're tracing through an example during an interview, pick numbers that make the arithmetic obvious. `[−1, 5, 3]` with `target = 2` is easier to trace than `[−47, 23, −8]` with `target = −55`.
- Streaming input: A less common but interesting variant asks you to handle elements arriving one at a time, where you can't revisit previous elements. The hash map solution already handles this naturally because it processes elements in a single forward pass. You don't need the full array upfront. Each element is checked against the map and then stored, which means the algorithm works on streams without modification. That property isn't obvious until you think about it, and mentioning it in an interview demonstrates that you understand the solution at a structural level rather than just a syntactic one.
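The all-pairs variant described above can be sketched as follows (illustrative names; stores a list of indices per value instead of a single index):

```python
from collections import defaultdict

def two_sum_all_pairs(nums, target):
    """Collect every index pair (i, j), i < j, with nums[i] + nums[j] == target."""
    seen = defaultdict(list)   # value -> all indices where it appeared so far
    pairs = []
    for j, num in enumerate(nums):
        for i in seen[target - num]:   # every earlier occurrence of the complement
            pairs.append((i, j))
        seen[num].append(j)            # store after checking, as before
    return pairs
```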
Why the solution works
Two Sum is where interview preparation usually starts. It's also where thinking about why the solution works usually stops. The complement insight, hash table acceleration, and the time space tradeoff are reasoning tools that apply across hundreds of problems. Once you can derive the Two Sum solution from scratch, without recalling the code, you've built a skill that transfers to problems you haven't seen yet. That transfer is what matters. Memorising 200 solutions without the ability to derive a new one leaves you less prepared than internalising 20 patterns and applying them under pressure.
If you want to build this kind of first principles reasoning systematically, Codeintuition's learning path covers the full progression from arrays through dynamic programming. For a deeper look at how these patterns connect, see our guide on how to master DSA.
The next time you see an unfamiliar problem that mentions "find a pair" or "subarray with target sum," you won't need to remember a solution. You'll compute the complement and check the map.
Learn why patterns work, not just how to code them
The complement insight from Two Sum is one derivation in a full progression from arrays through dynamic programming. Start with the free Arrays course and build reasoning that transfers.
The brute force runs in O(n²) time with O(1) space. If the array is sorted (or you're allowed to sort it), the two pointer approach achieves O(n) time with O(1) space. The hash map approach works on unsorted input and gives O(n) time with O(n) space, and in interviews it's usually expected because the problem asks for original indices, which sorting would destroy. Both are valid, and explaining when each applies shows deeper understanding than producing either one alone.