How to Identify DP Problems (3 Part Test)
Learn how to identify DP problems before seeing the solution. A 3 part test for optimal value, substructure, and overlapping subproblems.
- The three indicators that tell you a problem needs dynamic programming
- How to apply the 3 part test to Coin Change and Subset Sum
- The difference between DP problems and greedy or plain recursion
- How overlapping subproblems separate DP from regular recursion
Figuring out that a problem is DP usually happens backwards. You try brute force, hit a time limit, read the editorial, and discover it was dynamic programming all along. That's recognition after the fact. It won't help in an interview where you can't check the editorial.
There's a better way. Three indicators tell you whether a problem needs dynamic programming, and you can check all three before writing any code.
How to identify DP problems using three indicators
A problem needs dynamic programming when three things are true. It asks for an optimal value over a defined space, decisions at each step constrain future decisions (optimal substructure), and the same subproblems recur across different decision paths (overlapping subproblems). This isn't a vague heuristic. Run any problem through these three checks, and you'll know whether DP applies before you start coding.
Indicator 1: The problem asks for an optimal value
DP problems ask you to minimise, maximise, or count something over a defined space. "Find the minimum number of coins." "Count the number of ways to reach the target." "Find the longest increasing subsequence." The trigger word isn't always "minimum" or "maximum." Sometimes it's "count" or "number of ways." But you're always looking for the best outcome across a set of choices.
If the problem asks you to enumerate all results rather than find the optimal one, that's backtracking territory, not DP.
Indicator 2: Optimal substructure
The optimal solution to the full problem can be built from optimal solutions to its subproblems. The decision you make at step k affects what's available at step k+1, and the best overall result comes from making the best choice at every level.
Take Coin Change. You want the minimum coins to make amount 11 using denominations [1, 5, 6]. If you pick a coin of value 5 first, the remaining subproblem is "minimum coins to make 6." The overall answer depends on the optimal answer to that smaller problem. That dependency chain, where each decision creates a smaller version of the same problem, is optimal substructure.
Indicator 3: Overlapping subproblems
Different decision paths lead to the same subproblem. In Coin Change with amount 11 and coins [1, 5, 6], the subproblem "minimum coins for amount 5" gets computed whether you start with a coin of 6 (leaving 5) or with five coins of 1 (also reaching 5 through a different path). Without memoisation, you'd recompute that subproblem every time you hit it.
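The recomputation is easy to observe directly. Here's a minimal sketch (the `min_coins_naive` helper is illustrative, not from a library) that counts how often the naive recursion visits each amount:

```python
from collections import Counter

calls = Counter()

def min_coins_naive(amount, coins=(1, 5, 6)):
    """Naive recursion with no caching: repeated amounts get recomputed."""
    calls[amount] += 1
    if amount == 0:
        return 0
    best = float("inf")
    for c in coins:
        if c <= amount:
            best = min(best, 1 + min_coins_naive(amount - c, coins))
    return best

min_coins_naive(11)
# The subproblem "amount 5" is reached through several decision paths,
# so without memoisation it is solved more than once.
print(calls[5] > 1)  # True
```

If `calls[5]` were always 1, there would be nothing to cache and DP would buy you nothing.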
This is what separates DP from plain recursion. A purely recursive problem like generating all permutations doesn't revisit the same subproblem. Each recursive call operates on a unique state. When subproblems do repeat, DP stores results and reuses them.
Applying the test to real problems
The three part test only matters if you can use it cold, under time pressure, on problems you haven't seen before. These three examples look different on the surface but all pass the same test.
Example 1: Coin change
"Given coins [1, 5, 6] and a target amount of 11, find the minimum number of coins needed." "Find the minimum number of coins" is a minimisation objective. Check passes immediately. For optimal substructure, choosing a coin of value c reduces the problem to "minimum coins for amount 11 - c," and the optimal answer for the full amount depends on optimal answers for smaller amounts. The overlap is clear too: amount 5 appears as a subproblem whether you subtract 6 from 11 or subtract five 1s. Multiple paths converge on the same remaining amount. All three pass. DP.
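A top-down sketch of the memoised solution, using `functools.lru_cache` to do the caching (one reasonable implementation, not the only one):

```python
from functools import lru_cache

def min_coins(amount, coins=(1, 5, 6)):
    """Minimum number of coins needed to make `amount`, or -1 if impossible."""
    @lru_cache(maxsize=None)
    def solve(remaining):
        if remaining == 0:
            return 0
        best = float("inf")
        for c in coins:
            if c <= remaining:
                # optimal substructure: best answer for `remaining`
                # builds on the best answer for `remaining - c`
                best = min(best, 1 + solve(remaining - c))
        return best

    result = solve(amount)
    return result if result != float("inf") else -1

print(min_coins(11))  # 2 (6 + 5)
```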
Example 2: Longest increasing subsequence
"Given the array [10, 9, 2, 5, 3, 7, 101, 18], find the length of the longest strictly increasing subsequence." "Find the length of the longest" is a maximisation objective. The substructure is there too: the longest increasing subsequence ending at index i depends on the longest increasing subsequences ending at earlier indices j where arr[j] < arr[i]. Each position builds on the best result from earlier positions. The overlap shows up when you compute LIS ending at index 5 (value 7). You reference the LIS at indices 2, 3, and 4. But index 3's LIS already referenced index 2, so that subproblem gets reused by multiple later computations. Three for three. DP applies.
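That recurrence translates directly into an O(n²) tabulation, where dp[i] stores the LIS length ending at index i:

```python
def lis_length(arr):
    """Length of the longest strictly increasing subsequence of arr."""
    if not arr:
        return 0
    dp = [1] * len(arr)  # dp[i]: LIS length ending exactly at index i
    for i in range(1, len(arr)):
        for j in range(i):
            if arr[j] < arr[i]:
                # reuse the already-computed optimum ending at j
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp)

print(lis_length([10, 9, 2, 5, 3, 7, 101, 18]))  # 4 (e.g. 2, 3, 7, 101)
```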
Example 3: Subset sum
"Given the set {3, 34, 4, 12, 5, 2} and a target sum of 9, determine whether any subset sums to the target." This one's trickier because the problem doesn't ask for a minimum or maximum. It asks whether a valid combination exists. But the underlying DP machinery is the same. At each element, you decide to include or exclude it, and you're searching for a combination that hits a target. Count based and existence based problems share the same DP structure as optimisation problems.
The objective "can I achieve exactly 9?" is a feasibility variant of the optimisation pattern, so the optimal value check still applies. If you include element 3, the remaining problem is "does any subset of the remaining elements sum to 6?" Each decision produces a smaller instance of the same problem. And different inclusion/exclusion sequences can arrive at the same remaining sum with the same remaining elements, so the subproblem (remaining_sum=6, starting_index=2) might be reached through multiple paths. All three checks pass. DP works here.
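One way to sketch that state, (index, remaining_sum), with memoisation:

```python
from functools import lru_cache

def subset_sum(nums, target):
    """True if any subset of nums sums exactly to target."""
    nums = tuple(nums)  # hashable, fixed order

    @lru_cache(maxsize=None)
    def solve(i, remaining):
        if remaining == 0:
            return True
        if i == len(nums) or remaining < 0:
            return False
        # include nums[i] or exclude it; different include/exclude
        # sequences can converge on the same (i, remaining) state
        return solve(i + 1, remaining - nums[i]) or solve(i + 1, remaining)

    return solve(0, target)

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # True (4 + 5)
```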
“The test isn't 'does this feel like DP?' Run three checks, and the answer becomes a matter of evidence, not intuition.”
What DP is not
Two categories of problems look like they need DP but don't. Knowing the boundary saves you from reaching for DP when something simpler works.
Greedy problems: They have optimal substructure but don't have meaningful overlapping subproblems. The locally optimal choice at each step happens to produce the globally optimal result, so you never need to explore multiple decision paths. Making change with US coin denominations (25, 10, 5, 1) is the classic example. Always taking the largest coin that fits gives you the minimum coins. That works because of the specific denominations, though. Change the coins to [1, 5, 6] and greedy falls apart. For amount 10, greedy gives 6+1+1+1+1 = 5 coins when 5+5 = 2 coins is better. The full decision guide for DP vs greedy is a separate topic.
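That failure is easy to reproduce. A hypothetical `greedy_change` helper (illustrative, not from any library) shows both cases:

```python
def greedy_change(amount, coins):
    """Always take as many of the largest remaining coin as fit."""
    count = 0
    for c in sorted(coins, reverse=True):
        take, amount = divmod(amount, c)
        count += take
    return count if amount == 0 else None

# Canonical US-style denominations: greedy happens to be optimal.
print(greedy_change(10, [25, 10, 5, 1]))  # 1 (one dime)

# Denominations [1, 5, 6]: greedy picks 6+1+1+1+1 = 5 coins,
# but the true optimum is 5+5 = 2 coins.
print(greedy_change(10, [1, 5, 6]))  # 5
```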
Plain recursion without memoisation: It is appropriate when the problem doesn't revisit subproblems. Generating all permutations of a string involves recursion, but each recursive call processes a unique remaining set. No subproblem appears twice through different paths. There's nothing to cache, so you don't need a DP table.
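For contrast, here is a plain recursive permutation generator. Every call receives a unique remaining string, so no subproblem repeats and a cache would never hit:

```python
def permutations(s):
    """All permutations of a string of distinct characters.

    Each recursive call operates on a unique remaining set, so there
    are no overlapping subproblems and nothing worth memoising.
    """
    if not s:
        return [""]
    result = []
    for i, ch in enumerate(s):
        for rest in permutations(s[:i] + s[i + 1:]):
            result.append(ch + rest)
    return result

print(permutations("abc"))
# ['abc', 'acb', 'bac', 'bca', 'cab', 'cba']
```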
Reaching for DP when greedy or plain recursion works adds unnecessary complexity, both in your code and in the time you spend setting up state transitions.
When only two indicators are present
Sometimes you'll run the three part test and get a split result. Two checks pass clearly, but the third is ambiguous.
That's actually useful information. The missing indicator tells you which alternative method to consider.
- No overlapping subproblems: This points toward greedy or divide and conquer. Merge sort has optimal substructure (sorting the full array depends on sorting halves) and an optimisation objective (produce a sorted output). But the subproblems don't overlap because each recursive call operates on a different segment of the array. No caching needed, no DP table.
- No clear optimal value: You'll hit this with constraint satisfaction problems. "Can you partition this array into two subsets with equal sum?" doesn't ask for a minimum or maximum on the surface. But as you saw with the Subset Sum example, feasibility problems are a variant of the optimisation pattern. If the problem asks "does a valid combination exist?" and you can frame it as "is the count of valid combinations greater than zero?", the optimal value indicator still holds. Don't dismiss DP just because the word "minimum" doesn't appear in the problem statement.
- Questionable substructure: This is the rarest case, and it usually means you've misidentified the state. If subproblems recur but the optimal solution to a subproblem doesn't contribute to the optimal overall solution, your state definition is probably wrong. Redefine what dp[i] represents. In most cases, fixing the state definition reveals the substructure that was always there.
Here's a concrete example of that rethinking process. You're given a grid and asked for the minimum cost path from the top left corner to the bottom right. You check: optimal value (minimum cost), yes. Optimal substructure (the best path to cell (i, j) depends on the best paths to (i-1, j) and (i, j-1)), yes. Overlapping subproblems? If you solve it recursively, the path to (2, 3) gets computed from both (2, 2) and (1, 3), which themselves share earlier cells. All three pass. But if you'd initially modelled it as a graph shortest path problem, the substructure might have looked unclear because graph formulations don't always expose the recurrence cleanly. Same problem, two framings, different results from the test. The indicators were there the whole time. You just needed the right state definition to see them.
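A memoised sketch of that grid formulation, assuming movement is restricted to right and down:

```python
from functools import lru_cache

def min_path_cost(grid):
    """Minimum cost path from top-left to bottom-right of a cost grid."""
    rows, cols = len(grid), len(grid[0])

    @lru_cache(maxsize=None)
    def solve(i, j):
        if i == 0 and j == 0:
            return grid[0][0]
        if i < 0 or j < 0:
            return float("inf")  # off the grid
        # best path to (i, j) extends the best path to (i-1, j) or (i, j-1)
        return grid[i][j] + min(solve(i - 1, j), solve(i, j - 1))

    return solve(rows - 1, cols - 1)

print(min_path_cost([[1, 3, 1],
                     [1, 5, 1],
                     [4, 2, 1]]))  # 7 (1 → 3 → 1 → 1 → 1)
```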
The three part test isn't pass/fail on a single reading. When you get a split result, treat the missing indicator as a diagnostic. It tells you either which alternative to pursue or which part of your formulation needs rethinking.
Training the identification instinct
Knowing the three indicators intellectually isn't the same as spotting them under pressure. The gap between "I understand the test" and "I can apply it in 30 seconds on an unfamiliar problem" is real, and most platforms try to close that gap with volume alone.
Solve enough DP problems and you'll eventually develop a feel for it. That works, but it's slow. You can solve 50 DP problems and still not spot an unfamiliar one because you learned to recognise specific problems, not the underlying indicators.
Codeintuition's Dynamic Programming course takes a different approach. Before any problem solving begins, the identification phase trains you to recognise the triggers that point to DP. You don't start with "here's how to solve Coin Change." You start with "here's how to tell that a problem is a Coin Change type problem." Interviews don't label problems by pattern. You have to identify the method before you can apply it.
DP needs this more than most patterns. With sliding window, the triggers are fairly obvious: "contiguous range" plus an optimisation constraint. With DP, a problem might look like greedy, graph search, or backtracking until you notice the overlapping subproblems hiding underneath. That ambiguity is exactly why explicit identification training matters. Across 200,000+ submissions on the platform, completing the identification lessons before attempting problems consistently reduces time spent chasing wrong approaches.
Where to go from here
Identifying that a problem needs DP is step one. You still need to define the state, derive the recurrence relation, and decide between memoisation and tabulation.
Our memoization vs tabulation guide covers the implementation decision. For the complete picture on dynamic programming, including pattern derivation and all five DP pattern families, see our dynamic programming interview guide.
The identification first method described here is how every pattern module on Codeintuition works, including the Dynamic Programming course with 38 problems across 5 pattern families. Before solving any DP problem, you train on the three indicators that tell you DP applies. The free Arrays and Singly Linked List courses let you experience that same triggers first approach on simpler patterns like sliding window and two pointers. Start there and see if the three part test clicks on your next unfamiliar problem.
Go back to the Subset Sum example above. The problem didn't ask for a minimum or maximum. It asked whether a valid combination exists. The instinct would be to reach for backtracking. But the three part test caught the overlapping subproblems hiding underneath the existence question. That's the difference. You stop guessing which method fits and start checking for the structural indicators that tell you. Three questions, thirty seconds, and the problem type is settled before you write a line of code.
Want to identify DP problems before reading the editorial?
Codeintuition's Dynamic Programming course trains the three part identification test before any problem solving begins. Try the same approach on simpler patterns with the FREE Arrays course