DP time complexity
Learn the universal formula for DP time complexity (states × transition cost) and space reduction with rolling arrays across three worked examples.
- The universal formula that covers every DP time complexity derivation
- How to count states and transition costs for 1D and 2D problems
- How rolling arrays reduce DP space complexity by an entire dimension
- Common mistakes that lead to wrong complexity claims in interviews
What's the DP time complexity of your solution? If that question makes you pause longer than the problem itself took to solve, you've got company. Most engineers can write a DP solution but can't explain why it runs in the time it does.
Every DP problem follows the same formula. Once you see it, DP time complexity stops being something you guess and starts being something you derive in seconds.
The universal formula for DP complexity
DP time complexity equals the number of distinct states multiplied by the cost of computing each state transition. Space complexity equals the number of states stored simultaneously. These two formulas work for every DP problem you'll encounter in an interview.
That's it. Climbing Stairs, Longest Common Subsequence, Matrix Chain Multiplication: they all follow the same two steps:
1. Count the distinct states: How many unique subproblems can exist? This comes directly from your state definition. If your state is dp[i], you have n states. If your state is dp[i][j], you have n × m states.
2. Determine the cost per transition: For each state, how much work does the recurrence do? If dp[i] = dp[i-1] + dp[i-2], that's O(1) per transition. If dp[i] = min(dp[j] + cost) for all j < i, that's O(n) per transition.
Multiply them together, and that's your time complexity.
Why does this always work? Because memoization guarantees each state is computed exactly once. Tabulation fills each cell exactly once. Either way, the total work is (states computed) × (work per state). This follows directly from optimal substructure, the property that makes DP work in the first place.
Deriving DP time complexity for 1D problems
Start with the simplest case, Climbing Stairs, where you can take 1 or 2 steps at a time and need to count the ways to reach step n.
The state is dp[i] = number of ways to reach step i, giving n states (one per step from 1 to n). The recurrence is dp[i] = dp[i-1] + dp[i-2], which reads two previously computed values and adds them. That's O(1) work per transition, so the total is n states × O(1) = O(n).
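That derivation maps directly onto code. A minimal tabulation sketch (the function name and the base-case convention dp[0] = dp[1] = 1 are illustrative choices):

```python
def climb_stairs(n):
    """Number of distinct ways to reach step n taking 1 or 2 steps."""
    if n < 2:
        return 1
    dp = [0] * (n + 1)
    dp[0] = dp[1] = 1              # base cases: one way to reach steps 0 and 1
    for i in range(2, n + 1):      # n states, each filled exactly once
        dp[i] = dp[i - 1] + dp[i - 2]  # O(1) work per transition
    return dp[n]
```

One loop over n states, constant work inside: the O(n) total is visible in the structure itself.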
Now consider Coin Change, where given coins of denominations [1, 3, 4], you need to find the minimum coins to make amount n.
The state is dp[i] = minimum coins for amount i, giving n + 1 states (amounts 0 through n). The recurrence is dp[i] = min(dp[i - coin] + 1) for each coin denomination. Each transition checks k coins, where k is the number of denominations. That's O(k) per transition, so the total is n states × O(k) = O(n × k).
Notice what changed. The state count stayed the same order (linear in n), but the transition cost jumped from O(1) to O(k). Most engineers correctly identify the O(n) state count for Coin Change but forget to multiply by the transition cost. They claim O(n), when the correct answer is O(n × k).
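A minimal tabulated sketch of this solution (the helper name coin_change and the infinity sentinel are illustrative choices):

```python
def coin_change(coins, n):
    """Minimum number of coins to make amount n, or -1 if impossible."""
    INF = float("inf")
    dp = [0] + [INF] * n           # dp[i] = min coins for amount i
    for i in range(1, n + 1):      # n states
        for c in coins:            # O(k) work per state
            if c <= i and dp[i - c] + 1 < dp[i]:
                dp[i] = dp[i - c] + 1
    return dp[n] if dp[n] != INF else -1
```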
You can see it directly in the nested loop structure. The outer loop iterates over n states, and the inner loop does k work per state, giving O(n × k) total.
Deriving DP time complexity for 2D problems
The same formula works in higher dimensions. Longest Common Subsequence (LCS) takes two strings of length m and n and finds the length of their longest common subsequence.
Here, dp[i][j] = LCS length for the first i characters of string A and the first j characters of string B. That gives (m + 1) × (n + 1) states. If A[i] == B[j], then dp[i][j] = dp[i-1][j-1] + 1. Otherwise, dp[i][j] = max(dp[i-1][j], dp[i][j-1]). Either branch does O(1) work per transition, so the total is m × n states × O(1) = O(m × n).
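A minimal tabulated sketch (the name lcs is illustrative):

```python
def lcs(a, b):
    """Length of the longest common subsequence of strings a and b."""
    m, n = len(a), len(b)
    # (m + 1) x (n + 1) states, each filled exactly once
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1            # O(1) transition
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])  # O(1) transition
    return dp[m][n]
```

Two nested loops over the state grid, constant work per cell: O(m × n) falls straight out of the formula.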
But not every 2D DP has O(1) transitions. Take Edit Distance with a twist, where instead of single-character operations, you allow replacing a substring of length up to L. Each transition now needs to check L possible replacement lengths, making the cost per transition O(L) instead of O(1). The total becomes O(m × n × L).
You'll notice that textbooks and tutorials tend to show 2D examples where the transition is always O(1). That leaves readers assuming it's the norm. It isn't. The transition cost is problem-specific, and getting it wrong is probably the most common DP time complexity mistake in interviews.
“The state count tells you how many subproblems exist. The transition cost tells you how much work each one requires. Miss either piece and the complexity is wrong.”
Space complexity and the rolling array technique
Space complexity follows the same state-counting logic. The default space for a DP solution equals the number of states stored at any point during execution.
For 1D problems like Coin Change, the table has n + 1 entries. Space is O(n). For 2D problems like LCS, the table has (m + 1) × (n + 1) entries. Space is O(m × n).
You can often do better, though. If the current row of a 2D table depends only on the previous row, you don't need the full table. Two rows are enough: the current one and the previous one. This is the rolling array technique, and it drops space by an entire dimension.
For LCS, dp[i][j] depends on dp[i-1][j-1], dp[i-1][j], and dp[i][j-1]. All dependencies are in the current row or the row directly above, so two rows are enough.
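A two-row sketch of that compression (lcs_rolling is an illustrative name; only prev and curr are kept in memory):

```python
def lcs_rolling(a, b):
    """LCS length using two rows instead of the full (m+1) x (n+1) table."""
    m, n = len(a), len(b)
    prev = [0] * (n + 1)           # row i-1
    for i in range(1, m + 1):
        curr = [0] * (n + 1)       # row i, rebuilt each iteration
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                curr[j] = prev[j - 1] + 1
            else:
                curr[j] = max(prev[j], curr[j - 1])
        prev = curr                # row i-1 is no longer needed; discard it
    return prev[n]
```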
Space dropped from O(m × n) to O(n). Time stayed at O(m × n). The rolling array doesn't change the number of states you compute. It changes how many you keep in memory at once.
The 0/1 Knapsack problem uses the same trick. The standard solution has dp[i][w] where i is the item index and w is the remaining capacity. Space is O(n × W). Since each row depends only on the previous row, the rolling array reduces space to O(W).
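A sketch of that reduction (knapsack is an illustrative name; iterating capacity downward is what keeps each item used at most once with a single row):

```python
def knapsack(weights, values, W):
    """Max value of items fitting in capacity W, each item used at most once."""
    dp = [0] * (W + 1)             # one row of W + 1 entries replaces n x W
    for wt, val in zip(weights, values):
        # Go downward so dp[w - wt] still holds the previous item's row.
        for w in range(W, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[W]
```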
Not every space reduction is a rolling array. Some 1D problems can reduce from O(n) to O(1). Fibonacci only needs the previous two values, so dp[i] = dp[i-1] + dp[i-2] runs in O(1) space with two variables. Same idea: figure out which states the current computation actually reads, and throw away everything else.
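The Fibonacci case in two variables (a minimal sketch):

```python
def fib(n):
    """n-th Fibonacci number in O(n) time and O(1) space."""
    # Only the two most recent states are ever read, so two variables
    # replace the whole dp array.
    prev, curr = 0, 1
    for _ in range(n):
        prev, curr = curr, prev + curr
    return prev
```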
Common mistakes in DP complexity analysis
- Confusing recursion tree size with memoized state count: The naive recursive Fibonacci has O(2^n) calls. Memoized Fibonacci has O(n) unique states. The recursion tree is not the complexity when memoization is active. After caching, each state is computed once. The total is states × transition cost.
- Forgetting the transition cost: Claiming Coin Change is O(n) when it's O(n × k). The state count alone is never the full answer. Always ask yourself, "What does each state do to compute its value?"
- Double-counting with tabulation: Some engineers see the nested loops and multiply all loop bounds together. For Coin Change, the outer loop is n and the inner is k, giving O(n × k). Correct. But for problems with variable inner loop bounds (like LIS, where the inner loop runs over j < i), the total isn't n × n. It's the sum of 1 through n, which equals n(n+1)/2 and is still O(n²), just through a different accounting path. The formula still works: n states × O(n) average transition = O(n²).
- Claiming space is O(n) when it's O(n × k): If your memoization cache stores states indexed by (amount, last_coin_used), the space is O(n × k), not O(n). Count the dimensions of your state, not just the first one.
- Applying rolling arrays incorrectly: If dp[i][j] depends on dp[i-2][j] (two rows back, not one), a two-row rolling array drops the row you still need. Trace the dependency arrows before compressing.
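The variable inner loop bound in LIS is easiest to see in code; a minimal O(n²) sketch (lis is an illustrative name):

```python
def lis(nums):
    """Length of the longest strictly increasing subsequence."""
    n = len(nums)
    dp = [1] * n                   # dp[i] = LIS length ending at index i
    for i in range(n):             # n states
        for j in range(i):         # variable bound: i comparisons, not n
            if nums[j] < nums[i]:
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp, default=0)
```

The inner loop does 0 + 1 + ... + (n - 1) = n(n-1)/2 total comparisons, which is O(n²), matching n states × O(n) average transition.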
Where complexity fits in the DP workflow
Complexity analysis is one piece of the derivation process. You also need to define the right state, construct the recurrence, and choose between memoization and tabulation.
Our guide on memoization vs tabulation covers the implementation decision, including when space optimization is the deciding factor. For the full picture on dynamic programming, including all five pattern families and the four-step derivation process, see our dynamic programming interview guide.
If you're still building confidence on general complexity analysis (loops, recursion trees, amortized analysis), our time and space complexity interview guide walks through the derivation approach that applies to all code, not just DP.
Codeintuition's Dynamic Programming course teaches all 5 DP pattern families across 38 problems, with complexity derivation built into every lesson. You don't memorize that Knapsack is O(n × W). You derive it from the state definition and the recurrence, the same way you'd need to in an interview. The free Arrays and Singly Linked List courses use the same derivation-first model on foundational patterns, with 63 lessons and 85 problems available at no cost, permanently (premium unlocks the full library at $79.99/year).
The next time an interviewer asks "what's the time complexity?", you won't need to pause. Count the states, identify the transition cost, multiply. One sentence. And because you derived it instead of memorizing it, the answer works even on a problem you've never seen before.
Do you want to master data structures?
Try our data structures learning path, made of highly visual and interactive courses. Get hands-on experience by solving real problems in a structured manner. All the resources you'd ever need, in one place, for FREE.