Memoization vs Tabulation

Memoization vs tabulation compared with a coin change walkthrough. See exactly when top-down beats bottom-up and how to choose in interviews.

15 minutes
Medium
Intermediate

What you will learn

How memoization and tabulation solve DP from opposite directions

A frame-by-frame Coin Change walkthrough for both approaches

When memoization feels natural versus when tabulation wins

How tabulation enables space optimization with rolling arrays

You know that engineer who rewrites every recursive DP solution as a bottom-up table? They're not being more rigorous. They're doing unnecessary work on half their problems. Memoization and tabulation decompose problems in opposite directions, and the direction you pick changes how naturally the solution comes together.

TL;DR
Memoization works top-down from the target, caching recursive results. Tabulation works bottom-up from base cases, filling a table. The right choice depends on the problem's structure, not on a blanket performance rule.

Two Directions, Same Destination

Memoization stores results of recursive subproblems as they're encountered, working from the target down to base cases. Tabulation fills a table iteratively from base cases up to the target.

Both exploit the same property: overlapping subproblems. A problem with overlapping subproblems recomputes the same smaller problems multiple times during a naive recursive solution. Memoization and tabulation both eliminate that redundancy. They just do it from opposite ends.

With memoization, you start at the problem you actually want to solve. You recurse into smaller subproblems, and the first time you compute each one, you store the result. Every subsequent call to the same subproblem returns the cached value instead of recomputing it. The recursion itself defines which subproblems get solved and in what order.

With tabulation, you start at the smallest subproblems (the base cases) and systematically build up to the target. You fill a table where each entry depends only on entries you've already computed. No recursion. The iteration order is explicit, and you control it directly.

That difference isn't cosmetic. It changes how you think about the problem while you're building the solution.

Coin Change Top Down

Take the coin change problem. Given coins of denominations [1, 3, 4] and a target amount of 6, find the minimum number of coins needed.

With memoization, you start at the target and ask "What's the fewest coins to make 6?" That breaks into three recursive calls, one for each coin denomination: minCoins(6) calls minCoins(5), minCoins(3), and minCoins(2).
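In Python, a minimal memoized sketch might look like this (a sketch with illustrative names, not the lesson's exact code):

```python
def min_coins(coins, amount):
    cache = {}  # subproblem -> fewest coins, filled as the recursion encounters states

    def solve(remaining):
        if remaining == 0:
            return 0                 # base case: zero coins make amount 0
        if remaining < 0:
            return float('inf')      # invalid branch: overshot the amount
        if remaining in cache:
            return cache[remaining]  # cache hit: no recomputation
        # First visit: try every coin, keep the best, and store the result
        cache[remaining] = min(solve(remaining - c) + 1 for c in coins)
        return cache[remaining]

    result = solve(amount)
    return result if result != float('inf') else -1

print(min_coins([1, 3, 4], 6))  # → 2
```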

Trace the execution frame by frame.

  1. minCoins(6) branches into minCoins(5), minCoins(3), and minCoins(2).
  2. minCoins(5) branches into minCoins(4), minCoins(2), and minCoins(1).
  3. minCoins(4) branches into minCoins(3), minCoins(1), and minCoins(0), which hits the base case and returns 0.
  4. minCoins(3) branches into minCoins(2), minCoins(0), and minCoins(-1), which is invalid.
  5. minCoins(2) branches into minCoins(1), plus minCoins(-1) and minCoins(-2), both invalid.
  6. minCoins(1) calls minCoins(0), hits the base case, and returns 0. Result is 1 coin. Cached.
  7. minCoins(2) now resolves with min(1+1) = 2 coins. Cached.
  8. minCoins(3) resolves with min(2+1, 0+1) = 1 coin. Cached.
  9. Back up to minCoins(4): its first branch minCoins(3) just returned 1, minCoins(1) is a cache hit returning 1, and minCoins(0) is the base case. Result is min(1+1, 1+1, 0+1) = 1 coin. Cached.
  10. minCoins(5) resolves using cached values for 4, 2, and 1 with min(1+1, 2+1, 1+1) = 2 coins. Cached.
  11. minCoins(6) resolves with min(2+1, 1+1, 2+1) = 2 coins (two coins of denomination 3).

The recursion explores only the subproblems that the target actually needs. Steps 9 and 10 show the payoff. minCoins(3) and minCoins(1) return instantly from the cache instead of recomputing their entire subtrees.

“With memoization, you don't decide which subproblems to solve. The recursion figures that out on its own.”
On top-down DP

Coin Change Bottom Up

Same problem, same coins [1, 3, 4], same target of 6. But now you start from the bottom.

With tabulation, you create a table dp[0..6] where dp[i] represents the minimum coins needed to make amount i. You fill it left to right, starting from what you already know.

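A minimal bottom-up sketch in Python (again, names are illustrative):

```python
def min_coins_tab(coins, amount):
    """Bottom-up: fill dp[0..amount] left to right from the base case."""
    INF = float('inf')
    dp = [INF] * (amount + 1)
    dp[0] = 0  # base case: zero coins make amount 0
    for i in range(1, amount + 1):
        for c in coins:
            # Coin c contributes only if it fits and improves dp[i]
            if c <= i and dp[i - c] + 1 < dp[i]:
                dp[i] = dp[i - c] + 1
    return dp[amount] if dp[amount] != INF else -1

print(min_coins_tab([1, 3, 4], 6))  # → 2
```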

The table fills like this.

  1. dp[0] = 0 (base case).
  2. For dp[1], only coin 1 fits. dp[0] + 1 = 1. Result is 1.
  3. For dp[2], coin 1 gives dp[1] + 1 = 2. Coins 3 and 4 don't fit. Result is 2.
  4. For dp[3], coin 1 gives dp[2] + 1 = 3. Coin 3 gives dp[0] + 1 = 1. Coin 4 doesn't fit. Result is 1.
  5. For dp[4], coin 1 gives dp[3] + 1 = 2. Coin 3 gives dp[1] + 1 = 2. Coin 4 gives dp[0] + 1 = 1. Result is 1.
  6. For dp[5], coin 1 gives dp[4] + 1 = 2. Coin 3 gives dp[2] + 1 = 3. Coin 4 gives dp[1] + 1 = 2. Result is 2.
  7. For dp[6], coin 1 gives dp[5] + 1 = 3. Coin 3 gives dp[3] + 1 = 2. Coin 4 gives dp[2] + 1 = 3. Result is 2.

Answer is 2 coins. Same result, no recursion. But look at what happened. Every subproblem from 0 to 6 got computed, in order, whether the target needed all of them or not. For this small example, that's trivial. For problems with sparse dependency graphs where the target only touches a fraction of possible states, you might compute 5,000 subproblems when memoization would've touched 50.

ℹ️ Info
Both implementations produce identical results with the same time complexity, O(amount × coins) for coin change. The difference is in how the computation unfolds, not in what it computes.

Where Each Approach Feels Natural

The better question isn't "which is faster?" It's which direction makes the problem easier to think about.

Memoization tends to feel right when the recursive decomposition is obvious from reading the problem. If you can write the recurrence relation directly from the problem statement, memoization lets you translate that recurrence into code almost verbatim. It also shines when not all subproblems are needed. Problems with large state spaces where only a fraction of states are reachable from the target benefit from memoization's lazy evaluation. You don't compute what you don't need.

When the state space is complex or multidimensional (subproblems indexed by tuples like (position, remaining_capacity, items_used)), defining the iteration order for tabulation requires real thought. Memoization sidesteps that entirely because the recursion handles ordering for you.
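For example, a hypothetical knapsack-style problem with a (position, capacity) state memoizes cleanly with functools.lru_cache, with no iteration order to design (the weights and values below are made up for illustration):

```python
from functools import lru_cache

# Hypothetical item data for a 0/1 knapsack-style problem
weights = [2, 3, 4]
values = [3, 4, 5]

@lru_cache(maxsize=None)
def best(pos, capacity):
    """Best total value using items from index pos onward."""
    if pos == len(weights):
        return 0  # no items left
    # Option 1: skip item pos
    result = best(pos + 1, capacity)
    # Option 2: take item pos, if it fits
    if weights[pos] <= capacity:
        result = max(result, values[pos] + best(pos + 1, capacity - weights[pos]))
    return result

print(best(0, 5))  # → 7 (items of weight 2 and 3)
```

The recursion visits only reachable (pos, capacity) pairs; there's no loop nesting to get right.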

Tabulation tends to feel right when subproblems have a clear, linear ordering. When you can enumerate states from smallest to largest and each state depends only on previously computed states, tabulation's loop structure maps directly to the problem. It's also the better starting point if you need space optimization later. Rolling arrays (keeping only the previous row of a 2D table) are straightforward with tabulation because you control the iteration. With memoization, the cache holds all computed states until the recursion unwinds.
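As a sketch of the rolling-array idea, the knapsack-style table above (an assumed example, not from the lesson) collapses from a 2D table to a single row because each item pass reads only the previous row:

```python
def knapsack_rolling(weights, values, capacity):
    """O(capacity) space instead of O(items * capacity)."""
    dp = [0] * (capacity + 1)  # dp[c] = best value achievable at capacity c
    for w, v in zip(weights, values):
        # Iterate capacity downward so dp[c - w] still holds the
        # previous item's row when we read it.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

print(knapsack_rolling([2, 3, 4], [3, 4, 5], 5))  # → 7
```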

The performance difference between the two is usually negligible for the same problem. Tabulation avoids recursion overhead (stack frames), and memoization avoids computing unnecessary subproblems. On problems where both feel natural, either works. The real distinction is which one lets you build the solution with less friction.

Recursive solutions do carry some stack overflow risk in theory. For typical interview problem sizes (n < 10,000), it rarely matters. But the theoretical difference exists, even if it won't change your approach on interview day.
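A quick sketch of the depth issue in Python (the limit value here is arbitrary):

```python
import sys

def depth(n):
    """Recursion whose depth grows linearly with n."""
    return 0 if n == 0 else 1 + depth(n - 1)

# Python's default recursion limit (~1000) would raise RecursionError here,
# so deep top-down DP usually needs the limit raised.
sys.setrecursionlimit(30_000)
print(depth(10_000))  # → 10000
```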

Common Mistakes When Choosing Between Memoization and Tabulation

  • Assuming tabulation is always "better": Engineers who've been told "iterative is faster than recursive" apply that rule to everything. For DP, the time complexity is identical. Tabulation's edge is space optimization potential, not raw speed. If you don't need space optimization, pick whichever is clearer.
  • Forgetting the base cases in tabulation: With memoization, base cases are natural return statements in the recursion. With tabulation, forgetting to initialize dp[0] (or whatever the base case is) means your entire table builds on float('inf') or 0 incorrectly. The bug is silent because the code runs without errors.
  • Converting memoization to tabulation mechanically: Not every memoized solution translates cleanly. When the state space is irregular (a graph-based DP where states aren't enumerable in a simple loop), forcing tabulation creates convoluted iteration logic that's harder to debug and harder to verify. If the recursive version is clear, keep it.
  • Caching without checking the state space size: Memoization caches every state it encounters. If the state space is enormous but only a small subset is reachable, that's efficient. If the state space is small and dense (like coin change with amount = 10,000), the cache and the table end up the same size, and tabulation's simpler memory access pattern gives it a slight practical edge.
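The silent base-case bug from the second bullet can be demonstrated with the coin change table from earlier:

```python
def min_coins_buggy(coins, amount):
    INF = float('inf')
    dp = [INF] * (amount + 1)
    # BUG: dp[0] = 0 is missing, so every entry builds on infinity
    for i in range(1, amount + 1):
        for c in coins:
            if c <= i:
                dp[i] = min(dp[i], dp[i - c] + 1)
    return dp[amount]

print(min_coins_buggy([1, 3, 4], 6))  # → inf, not 2; the code runs without errors
```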
⚠️ Warning
Don't rewrite a working memoized solution as tabulation just because "bottom-up is better." That rule of thumb doesn't account for state space complexity, code clarity, or whether space optimization is actually needed.

Going Deeper

Memoization and tabulation are two implementations of the same core idea, which Richard Bellman originally described as dynamic programming in the 1950s. The deeper skill is recognizing when a problem has overlapping subproblems in the first place. If you're still building that identification instinct, our guide on how to identify DP problems walks through the three structural indicators that signal a problem needs dynamic programming. For more detail, see our dynamic programming interview guide.

For the formal decision framework on when to choose top-down vs bottom-up (including space optimization with rolling arrays), that's a separate question with its own analysis.

Codeintuition's Dynamic Programming course teaches both memoization and tabulation across 38 problems and 5 pattern families, with each pattern following the understand-identify-apply sequence. The coin change lesson walks through both directions with visual state tracking at every step, so you can see exactly how the cache and the table evolve frame by frame.

If you haven't worked through the 15 coding interview patterns yet, DP patterns make more sense after you've seen how pattern identification works in simpler contexts. The free Arrays and Singly Linked List courses cover sliding window and two-pointer patterns with the same derivation-first approach used in the DP course. Understanding why a window expands and contracts builds the same reasoning skill you'll need to understand why a DP recurrence transitions between states.

At some point you'll open a new DP problem and already know which direction to build from before writing a single line. When that happens, the memoization vs tabulation debate stops being a question and starts being a choice you make based on the problem in front of you.

Do you want to master data structures?

Try our data structures learning path, made of highly visual and interactive courses. Get hands-on experience by solving real problems in a structured manner. All the resources you'll ever need, in one place, for FREE.

FAQ

Is one approach faster than the other?
They have the same time complexity for any given problem. Memoization has slight recursion overhead from stack frames, while tabulation computes all subproblems even if some aren't needed. For interview-sized inputs, the difference is negligible, so choose whichever lets you write the solution more clearly under time pressure.

Can every memoized solution be converted to tabulation?
In theory, yes. In practice, the conversion isn't always straightforward. Problems with complex or irregular state spaces (like graph-based DP or tree DP) produce convoluted iteration logic when forced into tabulation. If the recursive decomposition is natural, memoization is often the cleaner implementation. Forcing bottom-up on a problem that thinks top-down adds complexity without benefit.

Why do interviewers ask for both implementations?
It tests whether you understand the underlying structure, not just one implementation. Doing both demonstrates that you see the same problem from two angles, which signals deeper understanding than pattern matching alone.

When does tabulation have a concrete advantage?
Tabulation has a concrete advantage when space optimization is possible. If the current state depends only on the previous row or the previous few states, you can reduce space from O(n) to O(1) or from O(n*m) to O(n) using a rolling array. Memoization can't do this because the cache holds all computed states. If the interviewer asks you to optimize space, that's your cue to switch to tabulation.

Is stack overflow a real risk with memoization?
For typical interview problem sizes (n under 10,000), stack overflow from memoization is rare. But it's a real concern for production code or problems with very deep recursion (n over 100,000). Python's default recursion limit is 1,000, which you'd need to increase. Java and C++ handle deeper stacks by default. If stack depth is a concern, tabulation eliminates it entirely.