DP vs Greedy Algorithm: The One Property That Decides
The dp vs greedy algorithm decision comes down to one property. Three worked examples show when greedy works and when you need DP.
- The greedy choice property and why it determines the right approach
- How to prove greedy correctness using a swap argument
- Why coin change with non-standard denominations breaks greedy
- A counterexample test for deciding DP vs greedy on new problems
The dp vs greedy algorithm question trips up even experienced engineers. Two of them face the same optimization problem. One writes a greedy solution in 8 minutes, gets the optimal answer, and moves on. The other writes a greedy solution in 8 minutes, gets a wrong answer, and spends 20 minutes debugging before realizing the problem needs DP. Same speed, same experience level. The only variable was knowing which method the problem actually required.
What separates greedy from DP
A greedy algorithm picks the locally optimal choice at each step, banking on local optima adding up to a global optimum. Dynamic programming considers all subproblems because local choices don't guarantee the global best. The deciding factor is whether the greedy choice property holds for that specific problem.
The two are easy to confuse because both require optimal substructure, meaning the optimal solution contains optimal solutions to its subproblems. Activity selection has it. Coin change has it. The 0/1 knapsack has it. All three share this property, but only one of the three admits a greedy solution. That confusion costs real interview time.
Optimal substructure tells you the problem can be decomposed. It doesn't tell you how to solve the decomposed pieces. Greedy commits to one choice at each step and never reconsiders. DP explores every choice and combines the best results. Greedy is faster when it's correct, and just wrong when it isn't.
Both techniques are usually learned in isolation, and the decision between them never gets practiced. So it becomes a guess, and guessing under interview pressure is expensive.
When greedy works: Activity selection
You're given n activities, each with a start time and an end time. Find the maximum number of non-overlapping activities you can attend. The greedy method is straightforward: sort by end time, always pick the activity that finishes earliest, and skip anything that overlaps with your last selection.
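A minimal sketch of that greedy loop (the function and variable names here are illustrative, not from a specific library):

```python
def max_activities(intervals):
    """Return the maximum number of non-overlapping activities."""
    # Greedy choice: sort by end time so the earliest-finishing
    # activity is always considered first.
    intervals = sorted(intervals, key=lambda iv: iv[1])
    count = 0
    last_end = float("-inf")
    for start, end in intervals:
        if start >= last_end:  # no overlap with the last selection
            count += 1
            last_end = end
    return count

print(max_activities([(1, 4), (3, 5), (0, 6), (5, 7),
                      (3, 9), (5, 9), (6, 10), (8, 11)]))  # 3
```

The sort dominates the running time; the scan afterward is a single linear pass.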
This runs in O(n log n) and gives the optimal result. But why is it correct?
Because picking the earliest finishing activity leaves the maximum remaining time for future selections. No other first pick can do better. You can prove this with a swap argument: take any optimal solution that doesn't choose the earliest finishing activity first. Swap that first pick for the earliest finishing one. The new solution is at least as good, because the earlier finish time can only free up more room for subsequent picks, never less.
That's the greedy choice property. The locally best decision (earliest finish) is always globally safe.
Greedy proofs show up less in interview prep than they should. Most resources skip straight from "greedy works here" to the implementation. But when you're facing an unfamiliar problem and wondering whether greedy applies, the swap argument is what turns "seems right" into "provably correct." Interviewers notice when you can explain why a solution works, not just recite the code.
When greedy breaks: Coin change
Take Coin Change with denominations [1, 3, 4] and target amount 6.
Greedy picks the largest denomination first. It takes 4, then 1, then 1, for a total of 3 coins.
The optimal answer is 3 + 3, just 2 coins.
The greedy choice property doesn't hold here. Picking the largest available coin blocks the globally optimal combination. The locally best choice (taking 4) prevents you from ever reaching the two-3s path.
With standard denominations like [1, 5, 10, 25], greedy happens to work because larger coins are designed to dominate smaller ones. Every quarter replaces at least two dimes, every dime replaces at least two nickels. That's a property of the currency system, not of the coin change problem itself. Change the denominations and greedy breaks.
That's the counterexample test. You don't need a formal proof that greedy fails. You need one input where it produces a suboptimal result. Finding that input takes 30 seconds with small numbers.
DP solves this by considering every denomination at every amount, building the answer from the bottom up.
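A bottom-up sketch of that DP, assuming illustrative names (`min_coins` is not a standard library function):

```python
def min_coins(coins, amount):
    """Fewest coins summing to amount, or -1 if impossible."""
    INF = float("inf")
    # dp[a] = fewest coins needed to make amount a
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:  # consider every denomination at every amount
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else -1

print(min_coins([1, 3, 4], 6))  # 2, via 3 + 3
```

On the `[1, 3, 4]` example above, the table at amount 6 sees both the greedy path (4 + 1 + 1) and the 3 + 3 path, and keeps the shorter one.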
DP never commits until it's checked all alternatives. Slower, but correct when greedy isn't.
The dp vs greedy algorithm boundary: Knapsack
The knapsack family shows exactly where the boundary falls, because two versions of the same problem need different algorithms.
- Fractional knapsack: Allows you to take fractions of items, each with a weight and a value, up to a capacity limit. Greedy works. Sort by value-to-weight ratio, take items greedily until capacity fills. The greedy choice property holds because the highest-ratio fraction is always the best use of remaining capacity.
- 0/1 knapsack: Same problem, but items are indivisible. You either take the whole item or leave it. Greedy by ratio fails. Consider items with weights [1, 3, 4, 5], values [1, 4, 5, 7], and capacity 7. Greedy picks the item with ratio 1.4 (weight 5, value 7), then the item with ratio 1.0 (weight 1, value 1). Total value: 8. But picking the items with weights 3 and 4 gives total value 9. Greedy missed the better combination because it committed to the heaviest high-ratio item first, using up 5 of the 7 available units and leaving room only for the least valuable item.
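The gap is easy to see in code. A sketch of both strategies on the example above, with illustrative function names:

```python
def greedy_by_ratio(weights, values, capacity):
    """Take whole items in descending value-to-weight order."""
    items = sorted(zip(weights, values),
                   key=lambda wv: wv[1] / wv[0], reverse=True)
    total = 0
    for w, v in items:
        if w <= capacity:  # commit to the item; never reconsider
            capacity -= w
            total += v
    return total

def knapsack_01(weights, values, capacity):
    """0/1 knapsack DP: dp[c] = best value with capacity c."""
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # Iterate capacity downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

ws, vs = [1, 3, 4, 5], [1, 4, 5, 7]
print(greedy_by_ratio(ws, vs, 7))  # 8
print(knapsack_01(ws, vs, 7))      # 9
```

Same inputs, one-point gap: greedy commits to the weight-5 item and can never back out of it, while the DP table keeps the weight-3 plus weight-4 combination alive.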
“Optimal substructure tells you a problem can be decomposed. It doesn't tell you whether to explore all paths or commit to one. That's the dp vs greedy distinction.”
Same optimal substructure. Two versions, two correct algorithms. The only difference is whether you can divide items. When you can, the locally best ratio is globally safe. When you can't, committing early blocks a better combination.
Both problems look identical on the surface, and that's the whole point. The boundary between greedy and DP isn't always about different problems. Sometimes a single constraint change flips the correct algorithm. In an interview, that constraint is what to look for first.
Problems that trick you into greedy
Some problems look like they should accept a greedy solution because the locally optimal choice feels correct on the first few inputs you try. Three patterns come up repeatedly.
Longest increasing subsequence
You're given an array and need the longest strictly increasing subsequence. A natural greedy instinct is to always extend the current subsequence with the smallest available next element. That strategy produces a long subsequence, but not always the longest. Consider [3, 1, 8, 2, 5]. Any greedy chain that commits to 3 as the first element can reach at most length 2, such as [3, 5] or [3, 8]. Starting from 1 instead allows [1, 2, 5], length 3. The problem requires exploring multiple subsequence paths, and committing to one chain early closes off better options. DP tracks dp[i] as the length of the longest increasing subsequence ending at index i, running in O(n^2), or O(n log n) with patience sorting.
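The O(n^2) version of that DP is only a few lines; a sketch with illustrative names:

```python
def lis_length(nums):
    """Length of the longest strictly increasing subsequence."""
    # dp[i] = length of the longest increasing subsequence ending at i
    dp = [1] * len(nums)
    for i in range(len(nums)):
        for j in range(i):  # try every earlier element as predecessor
            if nums[j] < nums[i]:
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp, default=0)

print(lis_length([3, 1, 8, 2, 5]))  # 3, via [1, 2, 5]
```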
Minimum path sum in a grid
Given an m x n grid of non-negative integers, find the path from top-left to bottom-right that minimizes the total sum, moving only right or down. Greedy says: at each cell, move toward the neighbor with the smaller value. On most grids, that looks reasonable. But a cheap neighbor can lead into an expensive corridor. Consider a 3x3 grid where moving right first hits low values but funnels you into a column of 9s, while moving down first hits a 3 but opens a path of 1s. Greedy can't see past the immediate step. DP fills the grid bottom-up with dp[i][j] = grid[i][j] + min(dp[i+1][j], dp[i][j+1]), and the O(m * n) cost is negligible for interview-sized inputs.
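A sketch of that bottom-up fill, using the recurrence above (the 3x3 grid here is one concrete instance of the "cheap neighbor, expensive corridor" trap, chosen for illustration):

```python
def min_path_sum(grid):
    """Minimum path sum from top-left to bottom-right, moving right/down."""
    m, n = len(grid), len(grid[0])
    INF = float("inf")
    dp = [[INF] * n for _ in range(m)]
    # Fill from the bottom-right corner back to the start.
    for i in range(m - 1, -1, -1):
        for j in range(n - 1, -1, -1):
            if i == m - 1 and j == n - 1:
                dp[i][j] = grid[i][j]
            else:
                below = dp[i + 1][j] if i + 1 < m else INF
                right = dp[i][j + 1] if j + 1 < n else INF
                dp[i][j] = grid[i][j] + min(below, right)
    return dp[0][0]

grid = [[1, 2, 9],
        [3, 9, 9],
        [1, 1, 1]]
print(min_path_sum(grid))  # 7, down the left column then across
```

Greedy at the top-left cell prefers the 2 over the 3 and walks into the 9s; the DP sees the whole left-column path costing 7.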
Edit distance
Given two strings, find the minimum number of insertions, deletions, and substitutions to transform one into the other. Greedy would match characters greedily from left to right, substituting or inserting when there's a mismatch. That works when the strings are similar, but a single early mismatch can cascade into unnecessary operations. The string pair "intention" and "execution" needs 5 operations optimally. Greedy strategies that commit to the first apparent fix routinely overshoot. DP builds a 2D table where dp[i][j] represents the edit distance between the first i characters of one string and the first j characters of the other.
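A tabulation sketch of that 2D table (illustrative names, standard Wagner-Fischer-style recurrence):

```python
def edit_distance(a, b):
    """Minimum insertions, deletions, and substitutions to turn a into b."""
    m, n = len(a), len(b)
    # dp[i][j] = edit distance between a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # delete all i characters
    for j in range(n + 1):
        dp[0][j] = j  # insert all j characters
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]  # characters match, no cost
            else:
                dp[i][j] = 1 + min(dp[i - 1][j - 1],  # substitute
                                   dp[i - 1][j],      # delete from a
                                   dp[i][j - 1])      # insert into a
    return dp[m][n]

print(edit_distance("intention", "execution"))  # 5
```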
In all three cases, the counterexample test catches the failure quickly. You don't need to memorize which problems are greedy and which need DP. You need to test your greedy hypothesis on a small input before committing.
The dp vs greedy algorithm test for interviews
When you hit an optimization problem you haven't seen, run this sequence instead of guessing.
1. Check for optimal substructure: Can the optimal solution be built from optimal solutions to subproblems? If not, neither greedy nor DP applies.
2. Try greedy: Identify the most natural "locally best" choice. For activity selection, that's earliest end time. For coin change, that's the largest denomination.
3. Hunt for a counterexample: Construct a small input (size 3-5) where the greedy choice produces a suboptimal result. One counterexample is enough.
4. If greedy survives, sketch the swap argument: Show that replacing the greedy choice with any alternative can't improve the result. Two sentences is usually enough.
5. If a counterexample exists, switch to DP: Define the state, write the recurrence, decide between memoization and tabulation based on subproblem dependencies.
The dp vs greedy algorithm decision flow
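For small inputs, the counterexample hunt can even be brute-forced. A sketch for coin change, with illustrative helper names, that compares greedy against an exhaustive DP baseline across a range of amounts:

```python
def greedy_coins(coins, amount):
    """Largest-denomination-first greedy; may be suboptimal."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count if amount == 0 else None

def optimal_coins(coins, amount):
    """Exhaustive DP baseline: fewest coins, or None if impossible."""
    INF = float("inf")
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a:
                dp[a] = min(dp[a], dp[a - c] + 1)
    return dp[amount] if dp[amount] != INF else None

def find_counterexample(coins, max_amount=30):
    """First amount where greedy and optimal disagree, if any."""
    for amount in range(1, max_amount + 1):
        g, o = greedy_coins(coins, amount), optimal_coins(coins, amount)
        if g != o:
            return amount, g, o
    return None

print(find_counterexample([1, 3, 4]))       # (6, 3, 2): greedy uses 3 coins, optimal 2
print(find_counterexample([1, 5, 10, 25]))  # None: US coins are greedy-safe here
```

In an interview you'd run this search in your head on amounts up to 10 or so, but the structure is the same: one disagreement is enough to rule out greedy.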
The norm is to skip this test entirely and default to pattern matching: trying to recall what worked on a similar-looking problem. That works fine when the problem is familiar, but falls apart on anything new. The counterexample test works even when you've never seen the problem before, because it doesn't depend on recognition. It depends on reasoning.
Codeintuition's learning path trains this decision layer directly. Before you attempt DP problems like the 0/1 knapsack, identification lessons walk through the structural signals that tell you DP applies, including when to rule out greedy first. The foundational courses (63 lessons, 85 problems) are free and cover the identification first method on patterns where the greedy-vs-DP decision comes up naturally. The full learning path with the complete DP course is $79.99/year.
What you give up choosing DP over greedy
When greedy is correct, it's almost always faster. Activity selection runs in O(n log n) with greedy. A DP formulation of the same problem takes O(n^2). Fractional knapsack is O(n log n) greedy versus O(n * W) DP for the 0/1 variant, where W is the capacity.
That speed gap matters in interviews for two reasons. First, greedy solutions are shorter to write. Fewer lines means fewer bugs and less debugging time. Second, interviewers sometimes ask follow-up questions about whether a faster approach exists. If you jump straight to DP on a problem where greedy works, you've left performance on the table and missed a chance to demonstrate deeper understanding.
But the reverse mistake is worse. A greedy solution that produces the wrong answer doesn't get partial credit. It gets a failed test case and a panicked rewrite with 15 minutes left. The time you "saved" by avoiding DP evaporates the moment your greedy approach breaks on an edge case.
The practical rule: try greedy first, test it with a counterexample, and fall back to DP only when you've confirmed greedy doesn't work. You'll use greedy on maybe 20-30% of optimization problems you encounter. The rest genuinely need DP. Knowing which category you're in before you start coding is the entire skill.
Beyond the DP or greedy decision
The dp vs greedy algorithm decision feeds into a larger set of algorithmic choices. For the complete DP foundation, from recurrence construction to state optimization, see the complete guide to dynamic programming for interviews.
If the identification step is where you struggle most, how to identify DP problems covers a three part test that catches the problems that get misclassified most often. The same reasoning that helps you spot the greedy choice property applies to other algorithmic decisions too, and how algorithmic intuition develops covers why.
Next time you face an unfamiliar optimization problem, don't guess. Run the counterexample test with a few small inputs. Thirty seconds of careful testing tells you whether to commit greedily or build the DP table, and that's the difference between 8 minutes on the right solution and 28 minutes discovering you picked the wrong one.
Ready to stop guessing between DP and greedy?
Train the counterexample test and swap argument on real optimization problems. Build the decision skill that saves 20 minutes in every interview, for FREE