Dynamic Programming Interview
Ace your dynamic programming interview with a four-step derivation framework. Learn every major DP pattern, identification triggers, and common traps.
What you will learn
Why DP has the highest interview failure rate and how to fix it
The two conditions that make a problem dynamic programming
The four-step derivation process for any DP problem
When to use top down memoization vs bottom up tabulation
Identification triggers for the most common DP pattern families
Common DP mistakes and how to avoid each one
If you've ever walked into a dynamic programming interview and frozen on an unfamiliar problem, the gap isn't intelligence or effort. It's method. The engineer who memorised 40 DP solutions passed every practice session but failed the actual interview. The engineer who could derive a recurrence from scratch for a problem she'd never seen passed the Google screen on her first attempt. Same topic, same difficulty level, opposite outcomes.
DP shows up in roughly 20-25% of hard problems at FAANG companies and has the highest skip rate of any topic in engineer preparation. Most guides respond by listing more problems to solve. This one covers the systematic derivation process that makes those problems solvable, whether you've seen them before or not.
Why Dynamic Programming Has the Highest Interview Failure Rate
Dynamic programming is a systematic method for solving optimization problems where the solution depends on smaller overlapping subproblems. It works by defining a state space, constructing a recurrence relation, and building the solution either top down with memoization or bottom up with tabulation. That's the textbook version. In practice, the topic breaks people.
Most engineers who struggle with DP aren't struggling because it's inherently harder than graphs or trees. They're struggling because the way DP is typically taught skips the one skill that actually matters in interviews: derivation.
Standard DP preparation follows a predictable pattern. You open a problem on LeetCode, can't solve it, and read the editorial. It shows a dp array, a recurrence relation, and some base cases. After studying the recurrence until you can reproduce it, you move on to the next problem. Forty problems later, you can reproduce solutions for problems you've already seen. Then you sit in an interview, see an unfamiliar DP problem, and freeze. The recurrence you need isn't one you've memorised, and you don't have the skill to build one from scratch.
The issue isn't memory. Memorising a recurrence relation and deriving a recurrence relation are completely different cognitive skills. Memorisation gets you through practice. Derivation gets you through interviews. FAANG companies test derivation because they want to see whether you can handle problems you haven't practised, and recall-based preparation breaks down the moment the problem is novel.
Some DP problems genuinely require more mathematical insight than a repeatable derivation process. Game theory DP, boolean parenthesization, and certain combinatorial counting problems have an irreducible creative element. But 80%+ of DP problems that appear in coding interviews follow a derivable path, and if you can work that path consistently, you can handle the vast majority of what you'll face.
If you've felt that DP is fundamentally different from the rest of DSA, you're partially right. It feels impossible when the derivation skill is missing. Once you learn to build recurrences from state definitions, DP becomes as predictable as any other pattern family. The difficulty reputation comes from a teaching gap, not from the topic itself. Closing that gap doesn't require solving hundreds of problems. It requires practising the derivation on 35-40 well-chosen problems until the four-step process feels automatic.
The Two Conditions That Make a Problem Dynamic Programming
Every DP problem satisfies exactly two conditions. If either is missing, DP isn't the right choice.
1. Optimal substructure
Optimal substructure means the optimal solution to the full problem can be built from optimal solutions to its subproblems. You break the problem into smaller pieces, solve each piece optimally, and combine them into the overall answer. This property is what makes the recursive decomposition valid. Not every problem has it. Longest path in a general graph, for example, doesn't have optimal substructure because combining two locally longest paths doesn't produce the globally longest path.
2. Overlapping subproblems
Overlapping subproblems means the same subproblems get solved multiple times in the naive recursive version. This is what separates DP from divide-and-conquer. Merge sort has optimal substructure because you merge sorted halves, but each subproblem is unique, so there's nothing to cache. Fibonacci has both conditions. Computing fib(5) requires fib(4) and fib(3). But computing fib(4) also requires fib(3). That repeated computation is where caching (memoization) or precomputation (tabulation) creates the speedup.
The simplest way to check for both conditions in practice: write the brute-force recursive solution first. If the recursion tree shows the same function calls appearing in multiple branches, you have overlapping subproblems. If the problem asks for an optimal value (minimum cost, maximum profit, number of ways), you likely have optimal substructure.
Fibonacci makes both conditions concrete with minimal complexity, which is why it's the standard introductory DP example even though it rarely appears in interviews directly. Its value as an example isn't the problem itself but the clarity with which it demonstrates both conditions and the transformation from naive recursion to cached solution.
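A minimal sketch of both versions, with an explicit call counter and cache so the redundancy is visible (the helper names are my own):

```python
def fib_naive(n: int, counter: list) -> int:
    """Brute-force recursion: the same subproblems are recomputed repeatedly."""
    counter[0] += 1
    if n < 2:
        return n
    return fib_naive(n - 1, counter) + fib_naive(n - 2, counter)


def fib_memo(n: int, cache: dict) -> int:
    """Memoized version: each unique subproblem is computed exactly once."""
    if n in cache:
        return cache[n]
    if n < 2:
        cache[n] = n
    else:
        cache[n] = fib_memo(n - 1, cache) + fib_memo(n - 2, cache)
    return cache[n]


calls = [0]
fib_naive(5, calls)   # counter ends at 15 calls
cache = {}
fib_memo(5, cache)    # cache ends with 6 entries: fib(0) through fib(5)
```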
The naive version of fib(5) makes 15 function calls, and most of them are redundant. The memoized version computes just 6 unique subproblems, fib(0) through fib(5), each exactly once. That's a toy example, but the principle scales directly to every DP problem you'll encounter in interviews. The brute-force recursion defines the structure. The memoization eliminates the redundancy. For real interview problems with thousands of subproblems, the difference between exponential and polynomial time is the difference between a solution that times out and one that passes.
Recognising whether a problem has both conditions is itself a trainable skill. How to tell whether an unfamiliar problem requires DP covers the three-indicator test (optimal value, subproblem dependencies, overlapping computation) that catches the vast majority of DP problems before you write a single line of code.
The Four-Step Derivation Process
This is the core of the guide and the reason most engineers fail at dynamic programming interviews. Every solvable DP problem follows the same four-step derivation. Learn the sequence once, and you can construct the solution for problems you've never seen.
There's a concept in learning science called desirable difficulty. Working through the derivation yourself, even when it feels slow and uncertain, builds the construction skill that recall-based practice never develops. The struggle is the training.
Let's walk through all four steps on a real dynamic programming interview problem, Edit Distance.
Given two strings word1 and word2, find the minimum number of operations (insert, delete, replace) to convert word1 into word2.
1. Define the state space
In step 1, you're transforming one string into another, and the transformation operates on prefixes. At any point, you've handled some prefix of word1 and matched it to some prefix of word2. So the state needs two dimensions representing how far you've processed into each string. Define dp[i][j] as the minimum operations to convert the first i characters of word1 into the first j characters of word2. That single sentence is the entire foundation. Everything else follows from it.
2. Construct the recurrence
In step 2, consider the last characters of both prefixes. If word1[i-1] equals word2[j-1], the characters already match, and no operation is needed. The answer is dp[i-1][j-1]. If they don't match, you have three choices. Replace word1[i-1] with word2[j-1], costing dp[i-1][j-1] + 1. Delete word1[i-1], costing dp[i-1][j] + 1. Or insert word2[j-1] into word1, costing dp[i][j-1] + 1. Take the minimum of all three.
3. Handle base cases
In step 3, converting an empty string to a string of length j requires j insertions. Converting a string of length i to empty requires i deletions. So dp[0][j] = j and dp[i][0] = i. These base cases are direct consequences of the state definition.
4. Choose the direction
In step 4, the recurrence depends on dp[i-1][j-1], dp[i-1][j], and dp[i][j-1], all with smaller indices. Bottom up tabulation fills the table left-to-right, top-to-bottom, and every dependency is already computed when you need it.
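The four steps translate directly into code. One possible bottom up implementation of the derivation above:

```python
def edit_distance(word1: str, word2: str) -> int:
    m, n = len(word1), len(word2)
    # Step 1: dp[i][j] = minimum operations to convert the first i
    # characters of word1 into the first j characters of word2.
    dp = [[0] * (n + 1) for _ in range(m + 1)]

    # Step 3: base cases follow directly from the state definition.
    for i in range(m + 1):
        dp[i][0] = i          # i deletions to reach the empty string
    for j in range(n + 1):
        dp[0][j] = j          # j insertions from the empty string

    # Step 4: left-to-right, top-to-bottom, so every dependency exists.
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if word1[i - 1] == word2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]            # characters match
            else:
                # Step 2: take the cheapest of the three operations.
                dp[i][j] = 1 + min(dp[i - 1][j - 1],   # replace
                                   dp[i - 1][j],       # delete
                                   dp[i][j - 1])       # insert
    return dp[m][n]


edit_distance("kitten", "sitting")  # 3
```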
Trace it with word1 = "kitten" and word2 = "sitting" to see the derivation produce a concrete result. Row 0 fills in as [0,1,2,3,4,5,6,7] because converting an empty string to "sitting" requires 7 insertions. Column 0 fills in as [0,1,2,3,4,5,6] because converting "kitten" to an empty string requires 6 deletions. Cell dp[1][1] compares 'k' versus 's', which differ, so it becomes 1 + min(dp[0][0], dp[0][1], dp[1][0]) = 1 + min(0,1,1) = 1. Every other cell follows the same recurrence, and you can verify any cell by checking its three dependencies. That traceability, being able to justify every value in the table, is exactly what an interviewer wants to see.
The derivation is identical for every DP problem. The content of each step changes (different state definitions, different recurrences), but the process stays the same. On Codeintuition, the Dynamic Programming course teaches each pattern through this exact derivation before any problem-solving begins. You don't see the recurrence first; you build it yourself.
“The difference between memorising a recurrence and deriving one is the difference between recognising a problem you've seen and solving one you haven't.”
Top Down vs Bottom Up: When Each Wins
Both produce the same result for any given problem, and the trade-offs are practical rather than theoretical.
1. Top down (memoization)
Top down (memoization) starts from the original problem and recurses into subproblems, caching results along the way. You write the recursive solution first, then add a cache dictionary. The advantage is that you only solve subproblems that are actually needed. For problems with sparse dependency graphs where most of the state space is never visited, top down avoids unnecessary computation entirely.
2. Bottom up (tabulation)
Bottom up (tabulation) builds the solution from the smallest subproblems upward, filling a table iteratively. No recursion overhead, no stack overflow risk on deep inputs, and you can often optimise space by keeping only the previous row instead of the full table.
The same problem solved both ways. Take Coin Change, where you're given coins of certain denominations and need the minimum number to make a target amount.
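A top down memoization sketch for Coin Change, using Python's lru_cache (the function names are my own):

```python
from functools import lru_cache


def coin_change_top_down(coins: list, amount: int) -> int:
    """Recurse from the target amount, caching one result per amount."""
    @lru_cache(maxsize=None)
    def solve(rem):
        if rem == 0:
            return 0
        if rem < 0:
            return float("inf")   # unreachable branch
        # Try every coin as the last coin used.
        return min(solve(rem - c) + 1 for c in coins)

    result = solve(amount)
    return -1 if result == float("inf") else result


coin_change_top_down([1, 2, 5], 11)  # 3  (5 + 5 + 1)
```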
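A bottom up tabulation sketch for the same Coin Change problem, filling the table from amount 0 upward (the function name is my own):

```python
def coin_change_bottom_up(coins: list, amount: int) -> int:
    """dp[a] = minimum number of coins needed to make amount a."""
    INF = float("inf")
    dp = [0] + [INF] * amount   # base case: amount 0 needs 0 coins
    for a in range(1, amount + 1):
        for c in coins:
            # Using coin c last means dp[a - c] + 1 more coin.
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return -1 if dp[amount] == INF else dp[amount]


coin_change_bottom_up([1, 2, 5], 11)  # 3
```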
Which should you use in interviews? Depends on the problem. Top down is often easier to write correctly because the recursive logic mirrors the problem statement directly. Bottom up is often easier to debug because you can inspect the table cell by cell. Most interviewers will happily accept either version.
There's an ongoing debate in competitive programming circles about whether top down is systematically slower due to recursion overhead and hash table lookups. In practice, the difference is negligible for interview-sized inputs. The skill that matters is being able to implement both for the same problem and articulate why you chose one over the other.
For a deeper comparison with additional examples, Memoization vs Tabulation covers the trade-offs in detail.
Dynamic Programming Patterns for Coding Interviews
DP isn't a single technique. It's a family of patterns, each with distinct state definitions, recurrence shapes, and identification triggers. The four-step derivation applies to all of them. But recognising which pattern fits a given problem is where most of the difficulty actually lives.
The identification triggers matter as much as the patterns themselves. Three of the most common interview patterns have distinctive signatures in problem statements.
1. Coin Change
Coin Change family. The problem asks for a minimum count or number of ways to reach a target value, and you have a set of denominations or step sizes to choose from. The "target sum with choices" shape is the signature. Variations include climbing stairs (step sizes 1 and 2) and making change with limited coin supply.
2. LCS and Edit Distance
LCS and Edit Distance family. The problem involves two strings (or two sequences) and asks about their shared or differed elements. "Longest common" or "minimum operations to transform" in the problem statement almost always means two-dimensional DP where dp[i][j] represents the answer for prefixes of both inputs. This family includes longest common substring, shortest common supersequence, and wildcard pattern matching.
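To illustrate the two-string state shape, a minimal LCS length sketch where dp[i][j] covers prefixes of both inputs (the function name is my own):

```python
def lcs_length(s: str, t: str) -> int:
    # dp[i][j] = length of the longest common subsequence of s[:i] and t[:j]
    m, n = len(s), len(t)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s[i - 1] == t[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # extend the common subsequence
            else:
                # Drop the last character of one string or the other.
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]


lcs_length("abcde", "ace")  # 3  ("ace")
```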
3. Knapsack
Knapsack family. Items have weights and values, and you're given a capacity constraint. The binary choice (include or exclude each item) is the distinctive feature. The subset sum variant drops the "value" dimension entirely and just asks whether a target sum is reachable. Partition-equal-sum and minimum-partition-difference are Knapsack problems in disguise.
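A minimal subset sum sketch showing the include-or-exclude choice with a 1D boolean table (the function name is my own):

```python
def can_partition_to(nums: list, target: int) -> bool:
    # reachable[s] = True if some subset of nums sums exactly to s
    reachable = [False] * (target + 1)
    reachable[0] = True   # the empty subset sums to 0
    for x in nums:
        # Iterate downward so each number is used at most once.
        for s in range(target, x - 1, -1):
            if reachable[s - x]:
                reachable[s] = True
    return reachable[target]


can_partition_to([3, 34, 4, 12, 5, 2], 9)  # True (4 + 5)
```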
Each pattern follows the same four-step derivation. The state definition changes between patterns (1D array for linear problems, 2D for string alignment, boolean for subset sum), but the process of defining the state, building the recurrence from the choices available at each state, setting base cases, and choosing direction works identically every time.
Codeintuition's learning path teaches every pattern in the grid above with the identification lesson before any problem-solving begins. The DP course alone covers 38 problems across 5 pattern families, and each one starts with the trigger recognition step.
Beyond the core ten patterns, several less common DP categories show up in harder interviews. Boolean parenthesization asks you to count the ways to parenthesise a boolean expression to produce a target value, which requires interval DP where the state tracks a substring of the expression. Matrix chain multiplication is another interval DP variant where you're optimising the order of operations rather than the operations themselves. Word break determines whether a string can be segmented into dictionary words, sitting at the intersection of DP and string processing.
The 15 patterns that span all of DSA, not just DP, show how DP patterns connect to recursion and backtracking. In many cases, the brute-force version of a DP problem is backtracking, and recognising overlapping subproblems is what converts the exponential backtracking solution to a polynomial DP solution. That recognition, spotting the repeated work hiding inside a recursive tree, is one of the most valuable skills you can build for dynamic programming interviews.
Common DP Mistakes and the Mechanism Behind Each
These are the specific failure modes that cause engineers to struggle with dynamic programming interviews. Each one includes the cause so you can spot the trap before falling into it.
- Skipping the state definition: You jump straight to writing code without clearly defining what dp[i] represents. The recurrence feels wrong because you're computing something you haven't formally defined. Fix: write the English sentence "dp[i] represents the minimum/maximum/count of X for the first i elements" before any code. If you can't state it clearly, you haven't understood the problem yet.
- Memorising recurrences instead of deriving them: You can reproduce the Coin Change recurrence from memory, but you can't construct the Edit Distance recurrence from scratch. The skill gap shows up the moment the problem is unfamiliar. For every DP problem you practise, start from the state definition and build the recurrence step by step. Looking up the recurrence doesn't count as practice.
- Misaligned base cases: The recurrence is correct but the output is off by one because the base cases don't match the state definition. This happens when dp[0] means "empty input" in one problem and "first element" in another. Base cases must follow directly from your state definition. Trace the first two or three values by hand before continuing.
- Wrong dependency direction: You write a bottom up solution but fill the table in the wrong order, reading from cells that haven't been computed yet. Before writing the loops, trace the dependency arrows. If dp[i][j] depends on dp[i-1][j-1], you fill left-to-right, top-to-bottom. Getting this wrong produces silent errors that are extremely hard to debug.
- Treating DP as a black box: You run the code, get the right answer, and move on without tracing why each cell has its value. In interviews, you'll be asked to explain your reasoning. If you can't walk through the table and justify each entry, the interviewer loses confidence regardless of whether the code passes.
- Optimising space too early: You attempt the O(n) space reduction before verifying the O(n²) solution is correct. The optimisation changes which cells you can access, and debugging a wrong answer in the compressed version is much harder than in the full table. Get correctness first with the full table, verify by tracing a few cells, then apply the space optimisation.
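As an illustration of that last point, once a full-table Edit Distance solution is verified, the rolling-row space reduction might look like this (one possible sketch, assuming the recurrence only reads the previous row):

```python
def edit_distance_optimised(word1: str, word2: str) -> int:
    """Keep only the previous row: O(n) space instead of O(m*n)."""
    m, n = len(word1), len(word2)
    prev = list(range(n + 1))          # row 0: dp[0][j] = j
    for i in range(1, m + 1):
        curr = [i] + [0] * n           # column 0: dp[i][0] = i
        for j in range(1, n + 1):
            if word1[i - 1] == word2[j - 1]:
                curr[j] = prev[j - 1]
            else:
                curr[j] = 1 + min(prev[j - 1],   # replace
                                  prev[j],       # delete
                                  curr[j - 1])   # insert
        prev = curr
    return prev[n]


edit_distance_optimised("kitten", "sitting")  # 3
```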
How to Know When You're Ready for Dynamic Programming Interviews
Readiness for DP comes down to one thing: whether you can execute the four-step derivation on an unfamiliar problem under time pressure. Problem count alone doesn't get you there.
The pattern across 200,000+ submissions on Codeintuition is consistent: engineers who master the derivation, not just the solutions, pass at rates well above the industry average. Derivation fluency predicts readiness far better than raw problem count.
The most efficient path to DP readiness: work through 5-8 problems from each major pattern family, deriving each solution from scratch rather than looking up the recurrence. Start with the most common interview patterns (Coin Change, LCS, Knapsack) and expand to LIS, Grid DP, and Edit Distance once the derivation feels automatic for the first three.
- ✓ Given an unfamiliar DP problem, you can define the state space within 3 minutes by reading the constraints
- ✓ You can construct the recurrence relation by reasoning about choices at each state, not by recalling a memorised formula
- ✓ You can identify the base cases that are consistent with your state definition and verify them by tracing small inputs by hand
- ✓ You can implement both top down and bottom up for the same problem and explain the trade-offs to an interviewer
- ✓ You can trace through a small DP table by hand, explaining what each cell means and why it holds its value
- ✓ Under a 30-minute timer with no hints, you can solve a medium-difficulty DP problem you haven't seen before
- ✓ You can recognise which DP pattern a problem belongs to from the constraints alone, matching target-sum problems to Coin Change, two-string problems to LCS or Edit Distance, and weight-capacity problems to Knapsack
If every item feels solid, you're ready for the DP portion of a FAANG interview. If some feel shaky, the gaps are specific and each one is trainable with focused practice on the right pattern family.
Codeintuition's Dynamic Programming course covers 38 problems across 5 pattern families (Coin Change, LCS, LIS, Knapsack, Grid DP), with the four-step derivation taught explicitly before each pattern's problems begin. Premium includes timed assessments that simulate real interview pressure at $79.99/year. The free Arrays and Singly Linked List courses let you experience the same derivation-first teaching model on foundational patterns, so you can test whether the approach fits before committing to the full DP sequence.
You open an unfamiliar problem. The description mentions "minimum operations" and two input strings. You don't recognise the specific problem. But you define dp[i][j] as the minimum operations for the first i characters of one string and first j characters of the other. Your recurrence follows from the three possible operations, and the base cases are obvious from the state definition. The timer shows 22 minutes remaining, and you haven't started typing code yet. That's fine. The derivation is done and the implementation is mechanical from here.
Do you want to master data structures?
Try our data structures learning path, made of highly visual and interactive courses. Get hands-on experience by solving real problems in a structured manner. All the resources you'll ever need in one place, for FREE.