Memoization vs Tabulation: When Each Wins
Memoization vs tabulation compared with a coin change walkthrough. See exactly when top down beats bottom up and how to choose in interviews.
In this article:
- How memoization and tabulation solve DP from opposite directions
- A frame by frame Coin Change walkthrough for both approaches
- When memoization feels natural versus when tabulation wins
- How tabulation enables space optimization with rolling arrays
You know that engineer who rewrites every recursive DP solution as a bottom up table? They're not being more rigorous. They're doing unnecessary work on half their problems. Memoization and tabulation decompose problems in opposite directions, and the direction you pick changes how naturally the solution comes together.
Two directions, same destination
Memoization stores results of recursive subproblems as they're encountered, working from the target down to base cases. Tabulation fills a table iteratively from base cases up to the target.
Both exploit the same property: overlapping subproblems. A problem with overlapping subproblems recomputes the same smaller problems multiple times during a naive recursive solution. Memoization and tabulation both eliminate that redundancy. They just do it from opposite ends.
With memoization, you start at the problem you actually want to solve. You recurse into smaller subproblems, and the first time you compute each one, you store the result. Every subsequent call to the same subproblem returns the cached value instead of recomputing it. The recursion itself defines which subproblems get solved and in what order.
With tabulation, you start at the smallest subproblems (the base cases) and systematically build up to the target. You fill a table where each entry depends only on entries you've already computed. No recursion. The iteration order is explicit, and you control it directly.
That difference isn't cosmetic. It changes how you think about the problem while you're building the solution.
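The two directions are easiest to see side by side on a toy problem like Fibonacci before getting into coin change. This is an illustrative sketch (the function names fib_memo and fib_tab are mine, not from the article):

```python
from functools import lru_cache

# Top down: start at n, recurse toward the base cases, cache on the way.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom up: start at the base cases, iterate up to n.
def fib_tab(n):
    if n < 2:
        return n
    dp = [0] * (n + 1)
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

print(fib_memo(10), fib_tab(10))  # → 55 55
```

Same recurrence, same answers; one lets the recursion pick the order, the other spells the order out as a loop.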
Coin change - Top down
Take the coin change problem. Given coins of denominations [1, 3, 4] and a target amount of 6, find the minimum number of coins needed.
With memoization, you start at the target and ask "What's the fewest coins to make 6?" That breaks into three recursive calls, one for each coin denomination. minCoins(6) calls minCoins(5), minCoins(3), and minCoins(2). The recursive tree looks like this.
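The article's original code block isn't shown here, so this is a minimal top down sketch matching the walkthrough (the name min_coins mirrors the minCoins calls in the trace; the exact signature is illustrative):

```python
from functools import lru_cache

def min_coins(coins, amount):
    """Top down: recurse from the target; cache each subproblem on first computation."""
    @lru_cache(maxsize=None)
    def helper(remaining):
        if remaining == 0:
            return 0                 # base case: amount 0 needs 0 coins
        if remaining < 0:
            return float('inf')      # invalid branch (coin larger than remaining)
        # Try each denomination; cached results short-circuit repeat work.
        return min(helper(remaining - c) + 1 for c in coins)

    best = helper(amount)
    return best if best != float('inf') else -1

print(min_coins((1, 3, 4), 6))  # → 2
```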
Trace the execution frame by frame.
1. minCoins(6) branches into minCoins(5), minCoins(3), and minCoins(2).
2. minCoins(5) branches into minCoins(4), minCoins(2), and minCoins(1).
3. minCoins(4) branches into minCoins(3), minCoins(1), and minCoins(0), which hits the base case and returns 0.
4. minCoins(3) branches into minCoins(2), minCoins(0), and minCoins(-1), which is invalid.
5. minCoins(2) branches into minCoins(1), minCoins(-1), and minCoins(-2); the last two are invalid.
6. minCoins(1) calls minCoins(0), hits the base case, returns 0. Result is 1 coin. Cached.
7. minCoins(2) now resolves with min(1 + 1) = 2 coins. Cached.
8. minCoins(3) resolves with min(2 + 1, 0 + 1) = 1 coin. Cached.
9. Back up to minCoins(4), which tries minCoins(3) and gets a cache hit returning 1, tries minCoins(1) and gets another cache hit returning 1, and minCoins(0) returns the base case. Result is min(1 + 1, 1 + 1, 0 + 1) = 1 coin. Cached.
10. minCoins(5) resolves using cached values for 4, 2, and 1 with min(1 + 1, 2 + 1, 1 + 1) = 2 coins. Cached.
11. minCoins(6) resolves with min(2 + 1, 1 + 1, 2 + 1) = 2 coins (two coins of denomination 3).
The recursion explores only the subproblems that the target actually needs. Steps 9 and 10 show the payoff. minCoins(3) and minCoins(1) return instantly from the cache instead of recomputing their entire subtrees.
“With memoization, you don't decide which subproblems to solve. The recursion figures that out on its own.”
Coin change - Bottom up
Same problem, same coins [1, 3, 4], same target of 6. But now you start from the bottom.
With tabulation, you create a table dp[0..6] where dp[i] represents the minimum coins needed to make amount i. You fill it left to right, starting from what you already know.
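Again, the original code block isn't included, so here's a minimal bottom up sketch of the same idea (the name min_coins_tab is illustrative):

```python
def min_coins_tab(coins, amount):
    """Bottom up: dp[i] = minimum coins to make amount i, filled left to right."""
    INF = float('inf')
    dp = [0] + [INF] * amount        # dp[0] = 0 is the base case
    for i in range(1, amount + 1):
        for c in coins:
            # Coin c fits if c <= i; candidate answer builds on dp[i - c].
            if c <= i and dp[i - c] + 1 < dp[i]:
                dp[i] = dp[i - c] + 1
    return dp[amount] if dp[amount] != INF else -1

print(min_coins_tab([1, 3, 4], 6))  # → 2
```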
The table fills like this.
1. dp[0] = 0 (base case).
2. For dp[1], only coin 1 fits: dp[0] + 1 = 1. Result is 1.
3. For dp[2], coin 1 gives dp[1] + 1 = 2. Coins 3 and 4 don't fit. Result is 2.
4. For dp[3], coin 1 gives dp[2] + 1 = 3. Coin 3 gives dp[0] + 1 = 1. Coin 4 doesn't fit. Result is 1.
5. For dp[4], coin 1 gives dp[3] + 1 = 2. Coin 3 gives dp[1] + 1 = 2. Coin 4 gives dp[0] + 1 = 1. Result is 1.
6. For dp[5], coin 1 gives dp[4] + 1 = 2. Coin 3 gives dp[2] + 1 = 3. Coin 4 gives dp[1] + 1 = 2. Result is 2.
7. For dp[6], coin 1 gives dp[5] + 1 = 3. Coin 3 gives dp[3] + 1 = 2. Coin 4 gives dp[2] + 1 = 3. Result is 2.
Answer is 2 coins. Same result, no recursion. But look at what happened. Every subproblem from 0 to 6 got computed, in order, whether the target needed all of them or not. For this small example, that's trivial. For problems with sparse dependency graphs where the target only touches a fraction of possible states, you might compute 5,000 subproblems when memoization would've touched 50.
Both approaches share the same time complexity: O(amount × coins) for coin change. The difference is in how the computation unfolds, not in what it computes.
Where each approach feels natural
The better question isn't "which is faster?" It's which direction makes the problem easier to think about.
Memoization tends to feel right when the recursive decomposition is obvious from reading the problem. If you can write the recurrence relation directly from the problem statement, memoization lets you translate that recurrence into code almost verbatim. It also shines when not all subproblems are needed. Problems with large state spaces where only a fraction of states are reachable from the target benefit from memoization's lazy evaluation. You don't compute what you don't need.
When the state space is complex or multidimensional (subproblems indexed by tuples like (position, remaining_capacity, items_used)), defining the iteration order for tabulation requires real thought. Memoization sidesteps that entirely because the recursion handles ordering for you.
Tabulation tends to feel right when subproblems have a clear, linear ordering. When you can enumerate states from smallest to largest and each state depends only on previously computed states, tabulation's loop structure maps directly to the problem. It's also the better starting point if you need space optimization later. Rolling arrays (keeping only the previous row of a 2D table) are straightforward with tabulation because you control the iteration. With memoization, the cache holds all computed states until the recursion unwinds.
The performance difference between the two is usually negligible for the same problem. Tabulation avoids recursion overhead (stack frames), and memoization avoids computing unnecessary subproblems. On problems where both feel natural, either works. The real distinction is which one lets you build the solution with less friction.
Recursive solutions do carry some stack overflow risk in theory. For typical interview problem sizes (n < 10,000), it rarely matters. But the theoretical difference exists, even if it won't change your approach on interview day.
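The recursion limit point is easy to demonstrate in Python (an illustrative snippet; the depth values are arbitrary):

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(10000)   # CPython's default (~1000) would raise RecursionError below

@lru_cache(maxsize=None)
def depth(n):
    # Deliberately linear recursion: one stack frame per value of n.
    return 0 if n == 0 else depth(n - 1) + 1

print(depth(3000))  # → 3000; at the default limit this call would crash
```

A tabulated version of the same computation is a plain loop and never touches the stack limit.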
Three questions that pick the direction for you
When you're staring at a new DP problem, you don't need a flowchart. Three questions get you to the right approach in under a minute.
1. Recurrence clarity: Can you write the recurrence from the problem statement without thinking about iteration order? If yes, start with memoization. The recurrence is the code. You'll define your recursive function, add a cache, and move on. Coin change, longest common subsequence, and edit distance all fit this pattern.
2. State space density: Does every subproblem from 0 to n need solving, or does the target only touch a sparse subset of the state space? Dense problems lean toward tabulation. You're filling the table regardless, so you might as well use a clean loop. Sparse problems lean toward memoization. You won't waste time on states that never contribute to the answer.
3. Space optimization: Will the interviewer ask about space? If you suspect a follow up about reducing memory, tabulation gives you a head start. Rolling arrays are mechanical once you've got the iterative version. With memoization, you'd need to restructure entirely.
Those three questions cover roughly 90% of interview DP problems. The remaining 10% are tree DP or graph DP where memoization wins by default because defining a clean iteration order over an irregular structure isn't worth the effort.
Common mistakes when choosing between memoization and tabulation
- Assuming tabulation is always "better": Engineers who've been told "iterative is faster than recursive" apply that rule to everything. For DP, the time complexity is identical. Tabulation's edge is space optimization potential, not raw speed. If you don't need space optimization, pick whichever is clearer.
- Missing base cases: With memoization, base cases are natural return statements in the recursion. With tabulation, forgetting to initialize dp[0] (or whatever the base case is) means your entire table builds on float('inf') or 0 incorrectly. The bug is silent because the code runs without errors.
- Mechanical conversion: Not every memoized solution translates cleanly. When the state space is irregular (a graph based DP where states aren't enumerable in a simple loop), forcing tabulation creates convoluted iteration logic that's harder to debug and harder to verify. If the recursive version is clear, keep it.
- Ignoring state space size: Memoization caches every state it encounters. If the state space is enormous but only a small subset is reachable, that's efficient. If the state space is small and dense (like coin change with amount = 10,000), the cache and the table end up the same size, and tabulation's simpler memory access pattern gives it a slight practical edge.
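The missing base case mistake is worth seeing concretely (an illustrative snippet, not from the article): drop dp[0] = 0 and every entry silently builds on infinity.

```python
def min_coins_buggy(coins, amount):
    INF = float('inf')
    dp = [INF] * (amount + 1)        # bug: dp[0] is never set to 0
    for i in range(1, amount + 1):
        for c in coins:
            if c <= i:
                dp[i] = min(dp[i], dp[i - c] + 1)
    return dp[amount]

print(min_coins_buggy([1, 3, 4], 6))  # → inf, and no error is ever raised
```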
What rolling array optimization actually looks like
The "space optimization" advantage of tabulation gets mentioned often but rarely shown concretely.
Take a 2D DP problem where dp[i][j] depends only on dp[i-1][j] and dp[i-1][j-1]. The 0/1 knapsack is the classic example. A naive tabulation builds the full n * W table. But since row i only reads from row i-1, you can keep just two rows and swap them after each item.
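A two row sketch of the idea (illustrative 0/1 knapsack code, not the article's exact implementation): the row for item i reads only from row i-1, so two rows suffice.

```python
def knapsack(weights, values, capacity):
    """0/1 knapsack keeping two rows instead of the full n x W table."""
    prev = [0] * (capacity + 1)      # row i-1
    curr = [0] * (capacity + 1)      # row i
    for w, v in zip(weights, values):
        for cap in range(capacity + 1):
            curr[cap] = prev[cap]                              # skip this item
            if w <= cap:
                curr[cap] = max(curr[cap], prev[cap - w] + v)  # or take it
        prev, curr = curr, prev      # swap rows after each item
    return prev[capacity]

print(knapsack([2, 3, 4], [3, 4, 5], 5))  # → 7 (take items with weights 2 and 3)
```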
That's O(W) space instead of O(n * W). For 1,000 items and capacity 10,000, you're storing 10,000 values instead of 10,000,000. The optimization is mechanical once you've identified which rows the current computation reads from. And it's only possible because tabulation gives you explicit control over iteration order.
With memoization, the cache holds everything until the recursion fully unwinds. You can't drop earlier states mid-computation without changing the approach entirely.
Where to go from here
Memoization and tabulation are two implementations of the same core idea, which Richard Bellman originally described as dynamic programming in the 1950s. The deeper skill is recognizing when a problem has overlapping subproblems in the first place. If you're still building that identification instinct, our guide on how to identify DP problems walks through the three structural indicators that signal a problem needs dynamic programming. For more detail, see our dynamic programming interview guide.
The formal decision framework for choosing top down vs bottom up (including space optimization with rolling arrays) is a separate question with its own analysis.
Codeintuition's Dynamic Programming course teaches both memoization and tabulation across 38 problems and 5 pattern families, with each pattern following the understand, identify, apply sequence. The coin change lesson walks through both directions with visual state tracking at every step, so you can see exactly how the cache and the table evolve frame by frame.
If you haven't worked through the 15 coding interview patterns yet, DP patterns make more sense after you've seen how pattern identification works in simpler contexts. The free Arrays and Singly Linked List courses cover sliding window and two pointer patterns with the same derivation first approach used in the DP course. Understanding why a window expands and contracts builds the same reasoning skill you'll need to understand why a DP recurrence transitions between states.
At some point you'll open a new DP problem and already know which direction to build from before writing a single line. When that happens, the memoization vs tabulation debate stops being a question and starts being a choice you make based on the problem in front of you.
See both DP directions traced frame by frame
Codeintuition's Dynamic Programming course teaches memoization and tabulation across 38 problems with visual state tracking at every step. Know which direction to build from before writing a line of code. Try it FREE on two full courses.
Rolling arrays can take space from O(n) to O(1), or from O(n*m) to O(n). Memoization can't match that, because the cache holds all computed states until the recursion unwinds. If the interviewer asks you to optimize space, that's your cue to switch to tabulation.
For typical interview sizes (n under 10,000), stack overflow from memoization is rare. But it's a real concern for production code or problems with very deep recursion (n over 100,000). Python's default recursion limit is 1,000, which you'd need to increase. Java and C++ handle deeper stacks by default. If stack depth is a concern, tabulation eliminates it entirely.