Memoization vs Tabulation
Memoization vs tabulation compared with a coin change walkthrough. See exactly when top-down beats bottom-up and how to choose in interviews.
What you will learn
How memoization and tabulation solve DP from opposite directions
A frame-by-frame Coin Change walkthrough for both approaches
When memoization feels natural versus when tabulation wins
How tabulation enables space optimization with rolling arrays
You know that engineer who rewrites every recursive DP solution as a bottom-up table? They're not being more rigorous. They're doing unnecessary work on half their problems. Memoization and tabulation decompose problems in opposite directions, and the direction you pick changes how naturally the solution comes together.
Two Directions, Same Destination
Memoization stores results of recursive subproblems as they're encountered, working from the target down to base cases. Tabulation fills a table iteratively from base cases up to the target.
Both exploit the same property: overlapping subproblems. A problem with overlapping subproblems recomputes the same smaller problems multiple times during a naive recursive solution. Memoization and tabulation both eliminate that redundancy. They just do it from opposite ends.
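To see that redundancy concretely, here's a quick sketch (using Fibonacci, the classic toy case; the counting helper is my illustration, not from this article) that counts how often naive recursion recomputes a single subproblem:

```python
def fib_calls(n, counter=None):
    """Naive Fibonacci, counting how many times each subproblem is computed."""
    if counter is None:
        counter = {}
    counter[n] = counter.get(n, 0) + 1
    if n < 2:
        return n, counter
    a, _ = fib_calls(n - 1, counter)
    b, _ = fib_calls(n - 2, counter)
    return a + b, counter

value, counter = fib_calls(10)
print(value)       # 55
print(counter[2])  # fib(2) is computed 34 times by the naive recursion
```

One cached result would replace 33 of those 34 computations, which is exactly the redundancy both memoization and tabulation eliminate.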
With memoization, you start at the problem you actually want to solve. You recurse into smaller subproblems, and the first time you compute each one, you store the result. Every subsequent call to the same subproblem returns the cached value instead of recomputing it. The recursion itself defines which subproblems get solved and in what order.
With tabulation, you start at the smallest subproblems (the base cases) and systematically build up to the target. You fill a table where each entry depends only on entries you've already computed. No recursion. The iteration order is explicit, and you control it directly.
That difference isn't cosmetic. It changes how you think about the problem while you're building the solution.
Coin Change Top Down
Take the coin change problem. Given coins of denominations [1, 3, 4] and a target amount of 6, find the minimum number of coins needed.
With memoization, you start at the target and ask "What's the fewest coins to make 6?" That breaks into three recursive calls, one for each coin denomination: minCoins(6) calls minCoins(5), minCoins(3), and minCoins(2).
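A memoized implementation might look like the following (a sketch; the function names and the -1 return for unreachable amounts are my choices, not necessarily the original lesson's code):

```python
from functools import lru_cache

def min_coins(amount, coins=(1, 3, 4)):
    """Minimum coins to make `amount`, top-down with memoization."""
    @lru_cache(maxsize=None)
    def solve(remaining):
        if remaining == 0:
            return 0             # base case: amount 0 needs no coins
        if remaining < 0:
            return float('inf')  # invalid branch: this coin overshoots
        # Try each denomination; each amount is computed once, then cached.
        return min(solve(remaining - c) + 1 for c in coins)

    result = solve(amount)
    return result if result != float('inf') else -1

print(min_coins(6))  # 2
```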
Trace the execution frame by frame.
1. minCoins(6) branches into minCoins(5), minCoins(3), minCoins(2)
2. minCoins(5) branches into minCoins(4), minCoins(2), minCoins(1)
3. minCoins(4) branches into minCoins(3), minCoins(1), and minCoins(0), which hits the base case and returns 0
4. minCoins(3) branches into minCoins(2), minCoins(0), and minCoins(-1), which is invalid
5. minCoins(2) branches into minCoins(1), minCoins(-1), and minCoins(-2); the last two are invalid
6. minCoins(1) calls minCoins(0), hits the base case, and returns 0. Result is 1 coin. Cached.
7. minCoins(2) now resolves with min(1 + 1) = 2 coins. Cached.
8. minCoins(3) resolves with min(2 + 1, 0 + 1) = 1 coin. Cached.
9. Back up to minCoins(4), which tries minCoins(3) and gets a cache hit returning 1, tries minCoins(1) and gets another cache hit returning 1, and minCoins(0) returns the base case. Result is min(1 + 1, 1 + 1, 0 + 1) = 1 coin. Cached.
10. minCoins(5) resolves using cached values for 4, 2, and 1 with min(1 + 1, 2 + 1, 1 + 1) = 2 coins. Cached.
11. minCoins(6) resolves with min(2 + 1, 1 + 1, 2 + 1) = 2 coins (two coins of denomination 3).
The recursion explores only the subproblems that the target actually needs. Steps 9 and 10 show the payoff. minCoins(3) and minCoins(1) return instantly from the cache instead of recomputing their entire subtrees.
“With memoization, you don't decide which subproblems to solve. The recursion figures that out on its own.”
Coin Change Bottom Up
Same problem, same coins [1, 3, 4], same target of 6. But now you start from the bottom.
With tabulation, you create a table dp[0..6] where dp[i] represents the minimum coins needed to make amount i. You fill it left to right, starting from what you already know.
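A tabulated version might look like this (again a sketch with my own naming; the original lesson's code isn't reproduced here):

```python
def min_coins_tab(amount, coins=(1, 3, 4)):
    """Minimum coins to make `amount`, bottom-up with tabulation."""
    INF = float('inf')
    dp = [INF] * (amount + 1)
    dp[0] = 0                       # base case: zero coins make amount 0
    for i in range(1, amount + 1):  # fill the table left to right
        for c in coins:
            if c <= i and dp[i - c] + 1 < dp[i]:
                dp[i] = dp[i - c] + 1
    return dp[amount] if dp[amount] != INF else -1

print(min_coins_tab(6))  # 2
```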
The table fills like this.
1. dp[0] = 0 (base case).
2. For dp[1], only coin 1 fits: dp[0] + 1 = 1. Result is 1.
3. For dp[2], coin 1 gives dp[1] + 1 = 2. Coins 3 and 4 don't fit. Result is 2.
4. For dp[3], coin 1 gives dp[2] + 1 = 3. Coin 3 gives dp[0] + 1 = 1. Coin 4 doesn't fit. Result is 1.
5. For dp[4], coin 1 gives dp[3] + 1 = 2. Coin 3 gives dp[1] + 1 = 2. Coin 4 gives dp[0] + 1 = 1. Result is 1.
6. For dp[5], coin 1 gives dp[4] + 1 = 2. Coin 3 gives dp[2] + 1 = 3. Coin 4 gives dp[1] + 1 = 2. Result is 2.
7. For dp[6], coin 1 gives dp[5] + 1 = 3. Coin 3 gives dp[3] + 1 = 2. Coin 4 gives dp[2] + 1 = 3. Result is 2.
Answer is 2 coins. Same result, no recursion. But look at what happened. Every subproblem from 0 to 6 got computed, in order, whether the target needed all of them or not. For this small example, that's trivial. For problems with sparse dependency graphs where the target only touches a fraction of possible states, you might compute 5,000 subproblems when memoization would've touched 50.
Where Each Approach Feels Natural
The better question isn't "which is faster?" It's which direction makes the problem easier to think about.
Memoization tends to feel right when the recursive decomposition is obvious from reading the problem. If you can write the recurrence relation directly from the problem statement, memoization lets you translate that recurrence into code almost verbatim. It also shines when not all subproblems are needed. Problems with large state spaces where only a fraction of states are reachable from the target benefit from memoization's lazy evaluation. You don't compute what you don't need.
When the state space is complex or multidimensional (subproblems indexed by tuples like (position, remaining_capacity, items_used)), defining the iteration order for tabulation requires real thought. Memoization sidesteps that entirely because the recursion handles ordering for you.
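As a sketch of that tuple-state situation (a toy 0/1 knapsack; the names and setup are my illustration, not from the article), memoization lets the recursion handle the (index, remaining_capacity) ordering automatically:

```python
from functools import lru_cache

def knapsack(weights, values, capacity):
    """Max value for a 0/1 knapsack, memoized over (index, capacity) states."""
    @lru_cache(maxsize=None)
    def solve(i, remaining):
        if i == len(weights):           # no items left to consider
            return 0
        best = solve(i + 1, remaining)  # option 1: skip item i
        if weights[i] <= remaining:     # option 2: take item i if it fits
            best = max(best, values[i] + solve(i + 1, remaining - weights[i]))
        return best

    return solve(0, capacity)

print(knapsack((2, 3, 4), (3, 4, 5), 5))  # 7 (take items 0 and 1)
```

Writing the equivalent nested loops requires deciding whether to iterate capacity ascending or descending; here, no such decision exists.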
Tabulation tends to feel right when subproblems have a clear, linear ordering. When you can enumerate states from smallest to largest and each state depends only on previously computed states, tabulation's loop structure maps directly to the problem. It's also the better starting point if you need space optimization later. Rolling arrays (keeping only the previous row of a 2D table) are straightforward with tabulation because you control the iteration. With memoization, the cache holds all computed states until the recursion unwinds.
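Here's the rolling-array idea in miniature (counting right/down paths in a grid, a hypothetical example chosen for brevity): a 2D table where each row depends only on the row above collapses to a single array.

```python
def grid_paths(rows, cols):
    """Count right/down paths across a rows x cols grid, keeping one row."""
    prev = [1] * cols              # first row: one path to each cell
    for _ in range(1, rows):
        curr = [1] + [0] * (cols - 1)
        for j in range(1, cols):
            curr[j] = curr[j - 1] + prev[j]  # paths from left + from above
        prev = curr                # roll: discard the older row
    return prev[-1]

print(grid_paths(3, 3))  # 6
```

Memory drops from O(rows * cols) to O(cols), which is only possible because the loop order makes explicit which row is still needed.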
The performance difference between the two is usually negligible for the same problem. Tabulation avoids recursion overhead (stack frames), and memoization avoids computing unnecessary subproblems. On problems where both feel natural, either works. The real distinction is which one lets you build the solution with less friction.
Recursive solutions do carry some stack overflow risk. Python, for instance, caps recursion depth at 1,000 frames by default, so a memoized coin change with an amount in the thousands needs sys.setrecursionlimit or a switch to tabulation. For typical interview problem sizes it rarely matters, but the difference exists, even if it won't change your approach on interview day.
Common Mistakes When Choosing Between Memoization and Tabulation
- Assuming tabulation is always "better": Engineers who've been told "iterative is faster than recursive" apply that rule to everything. For DP, the time complexity is identical. Tabulation's edge is space optimization potential, not raw speed. If you don't need space optimization, pick whichever is clearer.
- Forgetting the base cases in tabulation: With memoization, base cases are natural return statements in the recursion. With tabulation, forgetting to initialize dp[0] (or whatever the base case is) means your entire table builds on float('inf') or 0 incorrectly. The bug is silent because the code runs without errors.
- Converting memoization to tabulation mechanically: Not every memoized solution translates cleanly. When the state space is irregular (a graph-based DP where states aren't enumerable in a simple loop), forcing tabulation creates convoluted iteration logic that's harder to debug and harder to verify. If the recursive version is clear, keep it.
- Caching without checking the state space size: Memoization caches every state it encounters. If the state space is enormous but only a small subset is reachable, that's efficient. If the state space is small and dense (like coin change with amount = 10,000), the cache and the table end up the same size, and tabulation's simpler memory access pattern gives it a slight practical edge.
Going Deeper
Memoization and tabulation are two implementations of the same core idea, which Richard Bellman originally described as dynamic programming in the 1950s. The deeper skill is recognizing when a problem has overlapping subproblems in the first place. If you're still building that identification instinct, our guide on how to identify DP problems walks through the three structural indicators that signal a problem needs dynamic programming. For more detail, see our dynamic programming interview guide.
For the formal decision framework on when to choose top-down vs bottom-up (including space optimization with rolling arrays), that's a separate question with its own analysis.
Codeintuition's Dynamic Programming course teaches both memoization and tabulation across 38 problems and 5 pattern families, with each pattern following the understand-identify-apply sequence. The coin change lesson walks through both directions with visual state tracking at every step, so you can see exactly how the cache and the table evolve frame by frame.
If you haven't worked through the 15 coding interview patterns yet, DP patterns make more sense after you've seen how pattern identification works in simpler contexts. The free Arrays and Singly Linked List courses cover sliding window and two-pointer patterns with the same derivation-first approach used in the DP course. Understanding why a window expands and contracts builds the same reasoning skill you'll need to understand why a DP recurrence transitions between states.
At some point you'll open a new DP problem and already know which direction to build from before writing a single line. When that happens, the memoization vs tabulation debate stops being a question and starts being a choice you make based on the problem in front of you.
Do you want to master data structures?
Try our data structures learning path, made of highly visual and interactive courses. Get hands-on experience by solving real problems in a structured manner. All the resources you'd ever need, in one place, for FREE.