DP vs greedy algorithm

The DP vs greedy decision comes down to one property. Three worked examples show when greedy works and when you need DP.

10 minutes
Intermediate
What you will learn

The greedy choice property and why it determines the right approach

How to prove greedy correctness using a swap argument

Why coin change with non-standard denominations breaks greedy

A counterexample test for deciding DP vs greedy on new problems

The DP vs greedy question trips up even experienced engineers. Two of them face the same optimization problem. One writes a greedy solution in 8 minutes, gets the optimal answer, and moves on. The other writes a greedy solution in 8 minutes, gets a wrong answer, and spends 20 minutes debugging before realizing the problem needs DP. Same speed, same experience level. The only variable was knowing which method the problem actually required.

TL;DR
One property separates greedy problems from DP problems: the greedy choice property. If the locally optimal choice at each step always leads to a globally optimal solution, greedy works. If it doesn't, you need DP. The test: find a single counterexample.

What separates greedy from DP

A greedy algorithm picks the locally optimal choice at each step, banking on local optima adding up to a global optimum. Dynamic programming considers all subproblems because local choices don't guarantee the global best. The deciding factor is whether the greedy choice property holds for that specific problem.

That confusion costs real interview time.

They're easy to confuse because both require optimal substructure, meaning the optimal solution contains optimal solutions to its subproblems. Activity selection has it. Coin change has it. The 0/1 knapsack has it. All three share this property, but only one of them admits a greedy solution.

Optimal substructure tells you the problem can be decomposed. It doesn't tell you how to solve the decomposed pieces. Greedy commits to one choice at each step and never reconsiders. DP explores every choice and combines the best results. Greedy is faster when it's correct, and just wrong when it isn't.

Most engineers learn both techniques in isolation and never practice the decision between them. So they guess, and guessing under interview pressure is expensive.

Important
Optimal substructure is necessary for both greedy and DP. It doesn't tell you which to use. The greedy choice property is the additional condition that separates them.

When greedy works: Activity selection

You're given n activities, each with a start time and end time. Find the maximum number of non-overlapping activities you can attend.

The greedy method is straightforward. Sort by end time, always pick the activity that finishes earliest, skip anything that overlaps with your last selection.

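A minimal Python sketch of this strategy (the function and variable names here are my own, not from the original):

```python
def max_activities(activities):
    """Greedy activity selection: sort by end time, then take every
    activity that starts no earlier than the last selected finish."""
    count = 0
    last_end = float("-inf")
    for start, end in sorted(activities, key=lambda a: a[1]):
        if start >= last_end:  # doesn't overlap the last selection
            count += 1
            last_end = end
    return count
```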

This runs in O(n log n) and gives the optimal result. But why is it correct?

Because picking the earliest-finishing activity leaves the maximum remaining time for future selections. No other first pick can do better. You can prove this with a swap argument: take any optimal solution that doesn't choose the earliest-finishing activity first. Swap that first pick for the earliest-finishing one. The new solution is at least as good, because the earlier finish time can only free up more room for subsequent picks, never less.

That's the greedy choice property. The locally best decision (earliest finish) is always globally safe.

Greedy proofs show up less in interview prep than they should. Most resources skip straight from "greedy works here" to the implementation. But when you're facing an unfamiliar problem and wondering whether greedy applies, the swap argument is what turns "seems right" into "provably correct." Interviewers notice when you can explain why a solution works, not just recite the code.

When greedy breaks: coin change

Take coin change with denominations [1, 3, 4] and target amount 6.

Greedy picks the largest denomination first. It takes 4, then 1, then 1, for a total of 3 coins.

The optimal answer is 3 + 3, just 2 coins.

The greedy choice property doesn't hold here. Picking the largest available coin blocks the globally optimal combination. The locally best choice (taking 4) prevents you from ever reaching the two-3s path.
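You can watch the failure happen in a few lines of Python (a throwaway sketch; the helper name is mine):

```python
def greedy_coins(coins, target):
    """Repeatedly take as many of the largest denomination as fit."""
    count = 0
    for coin in sorted(coins, reverse=True):
        taken, target = divmod(target, coin)
        count += taken
    return count
```

For coins [1, 3, 4] and target 6 this takes one 4 and two 1s, one coin more than the optimal 3 + 3.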

With standard denominations like [1, 5, 10, 25], greedy happens to work because larger coins are designed to dominate smaller ones. Every quarter replaces at least two dimes, every dime replaces at least two nickels. That's a property of the currency system, not of the coin change problem itself. Change the denominations and greedy breaks.

That's the counterexample test. You don't need a formal proof that greedy fails. You need one input where it produces a suboptimal result. Finding that input takes 30 seconds with small numbers.

DP solves this by considering every denomination at every amount, building the answer from the bottom up.

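A bottom-up version might look like this (a sketch under the article's setup; names are my own):

```python
def min_coins(coins, target):
    """dp[a] holds the fewest coins needed to make amount a."""
    INF = float("inf")
    dp = [0] + [INF] * target
    for amount in range(1, target + 1):
        for coin in coins:
            if coin <= amount:
                dp[amount] = min(dp[amount], dp[amount - coin] + 1)
    return dp[target] if dp[target] != INF else -1
```

For denominations [1, 3, 4] and target 6 this returns 2, the 3 + 3 answer that greedy misses.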

DP never commits until it's checked all alternatives. Slower, but correct when greedy isn't.

The DP vs greedy boundary: knapsack

The knapsack family shows exactly where the boundary falls, because two versions of the same problem need different algorithms.

Fractional knapsack: You can take fractions of items, each with a weight and a value, up to a capacity limit. Greedy works. Sort by value-to-weight ratio and take items greedily until capacity fills. The greedy choice property holds because the highest-ratio fraction is always the best use of remaining capacity.
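One way to write the fractional version (a hedged sketch; the item representation and names are my own choices):

```python
def fractional_knapsack(items, capacity):
    """items: list of (weight, value) pairs. Greedy by value-to-weight ratio."""
    total_value = 0.0
    for weight, value in sorted(items, key=lambda wv: wv[1] / wv[0], reverse=True):
        take = min(weight, capacity)  # the whole item, or the fraction that fits
        total_value += value * take / weight
        capacity -= take
        if capacity == 0:
            break
    return total_value
```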

0/1 knapsack: The same problem, but items are indivisible. You either take the whole item or leave it. Greedy by ratio fails. Consider items with weights [1, 3, 4, 5], values [1, 4, 5, 7], and capacity 7. Greedy picks the item with ratio 1.4 (weight 5, value 7), then the item with ratio 1.0 (weight 1, value 1). Total value: 8. But picking the items with weights 3 and 4 gives total value 9. Greedy missed the better combination because it committed to the heaviest high-ratio item first, using up 5 of the 7 available units and leaving room for only the least valuable item.
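The DP that recovers the value-9 combination can be sketched with a one-dimensional tabulation (names are my own):

```python
def knapsack_01(weights, values, capacity):
    """dp[c] = best value achievable within capacity c."""
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # scan capacity downward so each item is counted at most once
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]
```

On the example above, knapsack_01([1, 3, 4, 5], [1, 4, 5, 7], 7) returns 9.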

“Optimal substructure tells you a problem can be decomposed. It doesn't tell you whether to explore all paths or commit to one. That's the dp vs greedy distinction.”
The knapsack boundary

Same optimal substructure. Two versions, two correct algorithms. The only difference is whether you can divide items. When you can, the locally best ratio is globally safe. When you can't, committing early blocks a better combination.

Both problems look identical on the surface, and that's the whole point. The boundary between greedy and DP isn't always about different problems. Sometimes a single constraint change flips the correct algorithm. In an interview, that constraint is what to look for first.

The DP vs greedy test for interviews

When you hit an optimization problem you haven't seen, run this sequence instead of guessing.

  1. Check for optimal substructure: Can the optimal solution be built from optimal solutions to subproblems? If not, neither greedy nor DP applies.
  2. Try greedy: Identify the most natural "locally best" choice. For activity selection, that's earliest end time. For coin change, that's the largest denomination.
  3. Hunt for a counterexample: Construct a small input (size 3-5) where the greedy choice produces a suboptimal result. One counterexample is enough.
  4. If greedy survives, sketch the swap argument: Show that replacing the greedy choice with any alternative can't improve the result. Two sentences is usually enough.
  5. If a counterexample exists, switch to DP: Define the state, write the recurrence, decide between memoization and tabulation based on subproblem dependencies.

The DP vs greedy decision flow
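The counterexample hunt can even be automated for tiny inputs. A hypothetical search over small coin-change instances (all helper names are mine) rediscovers the [1, 3, 4] case:

```python
from itertools import combinations

def greedy_count(coins, target):
    """Largest-denomination-first coin count."""
    count = 0
    for coin in sorted(coins, reverse=True):
        taken, target = divmod(target, coin)
        count += taken
    return count

def dp_count(coins, target):
    """Bottom-up minimum coin count."""
    INF = float("inf")
    dp = [0] + [INF] * target
    for amount in range(1, target + 1):
        for coin in coins:
            if coin <= amount:
                dp[amount] = min(dp[amount], dp[amount - coin] + 1)
    return dp[target]

def find_counterexample(max_coin=6, max_target=12):
    """Scan small denomination sets (all containing 1, so every amount
    is reachable) for a target where greedy is suboptimal."""
    for pair in combinations(range(2, max_coin + 1), 2):
        coins = (1,) + pair
        for target in range(1, max_target + 1):
            if greedy_count(coins, target) != dp_count(coins, target):
                return coins, target
    return None
```

Running find_counterexample() turns up coins (1, 3, 4) with target 6, the same case used earlier in the article.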

💡 Tip
The counterexample test takes 30 seconds with small inputs. If you can't find one in 2-3 attempts, greedy is likely correct. But sketch the swap argument before committing.

Most engineers skip this entirely and default to pattern matching, trying to recall what worked on a similar-looking problem. That works fine when the problem is familiar, but falls apart on anything new. The counterexample test works even when you've never seen the problem before, because it doesn't depend on recognition. It depends on reasoning.

Codeintuition's learning path trains this decision layer directly. Before you attempt DP problems like the 0/1 knapsack, identification lessons walk through the structural signals that tell you DP applies, including when to rule out greedy first. 63 lessons and 85 problems covering foundational patterns are free. The full learning path with the complete DP course is $79.99/year.

Beyond the DP or greedy decision

The DP vs greedy decision feeds into a larger set of algorithmic choices. For the complete DP foundation, from recurrence construction to state optimization, see the complete guide to dynamic programming for interviews.

If the identification step is where you struggle most, how to identify DP problems covers a three-part test that catches problems most engineers misclassify. The same reasoning that helps you spot the greedy choice property applies to other algorithmic decisions too, and how algorithmic intuition develops covers why.

Next time you face an unfamiliar optimization problem, don't guess. Run the counterexample test with a few small inputs. Thirty seconds of careful testing tells you whether to commit greedily or build the DP table, and that's the difference between 8 minutes on the right solution and 28 minutes discovering you picked the wrong one.


Frequently asked questions

How do you decide whether a new problem needs DP or greedy?
Try to find a counterexample where the locally optimal choice doesn't produce the global optimum. Use small inputs (size 3-5). If you find one, the problem needs DP. If greedy survives, confirm with a swap argument.

Can the same problem be solved by both greedy and DP?
Yes, if the greedy choice property holds. Activity selection works with both, but greedy runs in O(n log n) while a DP formulation takes O(n²). When greedy is provably correct, prefer it.

Why does greedy fail for coin change with non-standard denominations?
Standard currency denominations like [1, 5, 10, 25] have a dominance property where larger coins are always efficient multiples of smaller ones. With arbitrary denominations like [1, 3, 4], this structure breaks. Greedy picks 4 for a target of 6, producing 4+1+1 (three coins), while the DP solution finds 3+3 (two coins). One non-standard denomination set is enough to break greedy entirely.

Why isn't optimal substructure enough to choose an algorithm?
Optimal substructure means the optimal solution to the full problem contains optimal solutions to its subproblems. Both greedy and DP require it. Coin change has optimal substructure, activity selection has it, and 0/1 knapsack has it, but each needs a different algorithm. The property tells you decomposition is possible, not how to handle the decomposed pieces. That second question, whether to commit to one choice or explore all of them, is answered by the greedy choice property. If the locally optimal choice is provably safe at every step, greedy works. If even one counterexample shows it isn't, you need DP.

Is a single counterexample enough to justify DP in an interview?
For ruling out greedy, a single counterexample is definitive. Interviewers value this because it shows systematic reasoning rather than guessing. For confirming greedy works, follow up with a brief swap argument explaining why no alternative choice could improve the result. You don't need a full mathematical proof. Three sentences of reasoning is the standard interviewers look for.