Originally published on LeetCopilot Blog
You're faced with an optimization problem. "Find the minimum number of coins to make this amount."
You think: "Should I try greedy or dynamic programming?"
You try greedy—it gives the wrong answer for some test cases. You switch to DP—it works but takes forever to code. You wonder if there's a faster way you're missing.
This is the Greedy-DP Confusion: Both handle optimization problems. Both can give optimal answers. But they apply to different problem structures, and using the wrong one wastes time or produces incorrect solutions.
This guide teaches you how to systematically identify whether a problem needs greedy or DP—before you start coding the wrong approach.
TL;DR
- Greedy makes locally optimal choices without looking back. Works when local optimum leads to global optimum. Simple and fast, but only correct for specific problem structures.
- Dynamic programming solves overlapping subproblems and combines results. Guarantees optimal answer but is more complex and often slower.
- Key difference: Greedy commits to choices immediately. DP considers all options and chooses the best combination.
- When greedy works: Independent choices, matroid structure, or provable greedy choice property.
- When DP works: Overlapping subproblems + optimal substructure, where choices affect future choices.
- Beginner strategy: Try to identify if greedy works first. If you find a counterexample, switch to DP.
Beginner-Friendly Explanations
What Greedy Actually Means
A greedy algorithm makes the locally optimal choice at each step, hoping these choices lead to a globally optimal solution.
Analogy: Ordering food one dish at a time, always picking whatever looks best right now. You don't plan the whole meal—you just pick the best available option each moment.
Characteristics:
- Makes one choice per step
- Never reconsiders previous choices
- Fast: usually O(n) or O(n log n)
- Simple to implement
- Not always correct
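These characteristics show up clearly in interval scheduling: sort by end time, commit to each compatible interval, never look back. A minimal sketch (the function name and signature are my own, for illustration):

```typescript
// Greedy sketch: select the maximum number of non-overlapping intervals.
// Sorting by end time and committing to each compatible interval is the
// classic greedy strategy; intervals are [start, end] pairs.
function maxNonOverlapping(intervals: number[][]): number {
  const sorted = [...intervals].sort((a, b) => a[1] - b[1]); // earliest end first
  let count = 0;
  let lastEnd = -Infinity;
  for (const [start, end] of sorted) {
    if (start >= lastEnd) { // compatible with everything chosen so far
      count++;
      lastEnd = end;        // commit; never reconsidered
    }
  }
  return count;
}
```

Note the shape: one pass, one choice per interval, O(n log n) total from the sort.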
What Dynamic Programming Actually Means
Dynamic programming breaks a problem into overlapping subproblems, solves each once, stores the results, and builds up the final answer by combining subproblem solutions.
Analogy: Planning a meal by considering all possible dish combinations, evaluating each complete meal plan, and picking the optimal one. You might store "best appetizer choice" to reuse while evaluating different main courses.
Characteristics:
- Considers all options (usually through recursion or iteration)
- Stores results to avoid re-computation (memoization or tabulation)
- Slower: often O(n²) or O(n × target)
- More complex to implement
- Guarantees optimal answer (if applicable)
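The "stores results to avoid re-computation" point is easiest to see with Fibonacci, the canonical overlapping-subproblems example. A minimal memoization sketch:

```typescript
// DP sketch: memoized Fibonacci. Each fib(k) is computed once, cached,
// and reused, turning exponential recursion into linear time.
function fib(n: number, memo: Map<number, number> = new Map()): number {
  if (n <= 1) return n;            // base cases: fib(0) = 0, fib(1) = 1
  const cached = memo.get(n);
  if (cached !== undefined) return cached;
  const result = fib(n - 1, memo) + fib(n - 2, memo);
  memo.set(n, result);             // store so overlapping calls reuse it
  return result;
}
```

Without the `memo` map, fib(n - 2) would be recomputed inside every fib(n - 1) call, which is exactly the overlap DP exists to eliminate.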
The Core Difference
Greedy: "Make the best choice now, move on."
DP: "What if I tried all options and picked the best combination?"
Greedy commits immediately. DP explores exhaustively (but efficiently).
Step-by-Step Learning Guidance
Step 1: Identify Problem Type
First, confirm it's an optimization problem. Look for:
- "Find minimum/maximum..."
- "Find optimal..."
- "Find the longest/shortest..."
If it's not optimization (e.g., "count all ways"), it's not classic greedy vs. DP. (Though DP handles counting too.)
Step 2: Look for Greedy Signals
Greedy might work if:
Signal 1: Independent Choices
Each choice doesn't affect what choices are available later (beyond removing the chosen item).
Example: Activity Selection—choosing event A doesn't change which other events are available (beyond time conflicts).
Signal 2: Sortable Priority
There's a clear way to rank options such that always picking the "best" by that ranking produces optimal results.
Example: Fractional Knapsack—always picking highest value-per-weight item first.
Signal 3: Local Optimum = Global Optimum
Can you prove that the best local choice is always part of the best global solution?
If yes → Greedy works.
If you can construct a counterexample → Greedy fails.
Step 3: Look for DP Signals
DP is likely needed if:
Signal 1: Overlapping Subproblems
The same smaller version of the problem is solved repeatedly.
Example: Fibonacci—calculating fib(5) requires fib(4) and fib(3), and fib(4) requires fib(3) again.
Signal 2: Optimal Substructure
The optimal solution contains optimal solutions to subproblems.
Example: Shortest Path—the shortest path from A to C through B contains the shortest path from A to B.
Signal 3: Choices Affect Future Options
Current decisions change what's available or optimal later.
Example: 0/1 Knapsack—taking item A changes the remaining capacity, affecting which items you can still take.
Step 4: The Counterexample Test
If unsure whether greedy works:
- Identify the greedy strategy
- Try to construct an input where greedy gives wrong answer
- If you find one → greedy fails, use DP
- If you can't find one → greedy might work (but prove it or test thoroughly)
Example: Coin Change with coins = [1, 3, 4], target = 6
Greedy strategy: Always take largest coin possible.
- Take 4 → remaining 2
- Take 1, 1 → total coins: 3
Optimal: Take 3, 3 → total coins: 2
Counterexample found → greedy fails → use DP.
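For small inputs, this counterexample hunt can even be automated: compare the greedy count against an exhaustive DP answer and report the first amount where they disagree. A rough sketch (the helper `findCounterexample` and its signature are my own):

```typescript
// Hunt for coin-change counterexamples: returns the first amount (up to
// maxAmount) where largest-coin-first greedy disagrees with the exact DP
// minimum, or null if greedy matches everywhere in that range.
function findCounterexample(coins: number[], maxAmount: number): number | null {
  const sorted = [...coins].sort((a, b) => b - a);
  const greedy = (amount: number): number => {
    let count = 0, remaining = amount;
    for (const coin of sorted) {
      count += Math.floor(remaining / coin);
      remaining %= coin;
    }
    return remaining === 0 ? count : Infinity; // greedy couldn't make the amount
  };
  // Exact minimum via bottom-up DP.
  const dp = Array(maxAmount + 1).fill(Infinity);
  dp[0] = 0;
  for (let i = 1; i <= maxAmount; i++) {
    for (const coin of coins) {
      if (coin <= i) dp[i] = Math.min(dp[i], dp[i - coin] + 1);
    }
  }
  for (let amount = 1; amount <= maxAmount; amount++) {
    if (greedy(amount) !== dp[amount]) return amount; // greedy is suboptimal here
  }
  return null;
}
```

Running it on coins = [1, 3, 4] flags amount 6 immediately, while standard US denominations pass cleanly.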
Common Problem Patterns
Greedy Works
| Problem Type | Why Greedy Works |
|---|---|
| Activity Selection | Earliest end time maximizes remaining time |
| Fractional Knapsack | Can always take "best" items partially |
| Huffman Coding | Merging smallest frequencies first is always optimal |
| Jump Game II (min jumps) | Always maximize reach from each position |
| Assign Cookies | Match smallest cookie to smallest child |
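To make one table entry concrete, here is a minimal sketch of the Assign Cookies greedy: sort both arrays, then satisfy each child with the smallest cookie that fits.

```typescript
// Greedy sketch for Assign Cookies: sort children's greed factors and
// cookie sizes ascending, then match the smallest sufficient cookie to
// the least greedy unsatisfied child.
function maxContentChildren(greed: number[], cookies: number[]): number {
  const g = [...greed].sort((a, b) => a - b);
  const s = [...cookies].sort((a, b) => a - b);
  let child = 0;
  for (let cookie = 0; cookie < s.length && child < g.length; cookie++) {
    if (s[cookie] >= g[child]) child++; // this cookie satisfies the current child
  }
  return child; // number of content children
}
```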
DP Required
| Problem Type | Why DP Required |
|---|---|
| 0/1 Knapsack | Taking an item fully affects what else fits |
| Coin Change (min coins) | Arbitrary denominations break the greedy choice property |
| Longest Common Subsequence | Matching a character now affects future matches |
| Edit Distance | Each edit affects remaining transformation cost |
| Unique Paths with Obstacles | Blocked cells affect which routes remain |
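For contrast with the greedy table, here is a compact 0/1 Knapsack DP. This is the standard space-optimized formulation: the downward capacity loop is what prevents reusing an item.

```typescript
// DP sketch for 0/1 Knapsack: dp[w] = best value achievable with capacity w.
// Iterating capacity downward ensures each item is taken at most once
// (an upward loop would turn this into the unbounded knapsack).
function knapsack01(weights: number[], values: number[], capacity: number): number {
  const dp = Array(capacity + 1).fill(0);
  for (let i = 0; i < weights.length; i++) {
    for (let w = capacity; w >= weights[i]; w--) {
      dp[w] = Math.max(dp[w], dp[w - weights[i]] + values[i]);
    }
  }
  return dp[capacity];
}
```

No greedy ordering of items works here: taking an item fully changes the remaining capacity, so every take/skip combination has to be considered.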
Practical Preparation Strategies
Strategy 1: Build a Greedy/DP Intuition Map
Create a reference sheet:
GREEDY KEYWORDS:
- "Intervals" + "maximum non-overlapping" → Usually greedy
- "Fractional" allowed → Greedy (value density)
- "Jump" + "minimum steps" → Often greedy (max reach)
DP KEYWORDS:
- "Count ways" → DP (overlapping paths)
- "Minimum cost" + "choices affect each other" → DP
- "0/1" anything → DP (take or not take)
- "Subsequence" (non-contiguous) → DP
Strategy 2: Practice Both Approaches on Same Problem
When learning, try to solve greedy-solvable problems with DP too (and vice versa). This reveals:
- Why greedy is faster when it works
- Why DP is necessary when greedy fails
- The structural differences
Strategy 3: Learn Greedy Proof Techniques
For interviews, you may need to justify why greedy works. Common proof patterns:
- Exchange argument: Any solution can be improved by swapping to greedy choice
- Stays ahead: Greedy solution is always at least as good at each step
- Induction: If greedy works for k items, adding the (k+1)th greedy choice is still optimal
Strategy 4: Master DP Patterns First
DP appears more frequently in interviews. Master these DP patterns:
- 1D array (Climbing Stairs, House Robber)
- 2D array (Unique Paths, Edit Distance)
- Subsequence (LCS, LIS)
- Knapsack (0/1, unbounded)
Then learn greedy as special cases where DP shortcuts are valid.
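As a taste of the 1D pattern, House Robber reduces to two rolling variables. A minimal sketch:

```typescript
// 1D DP sketch for House Robber: at each house, either skip it (keep the
// previous best) or rob it (best two houses back + current value).
// Adjacent houses can't both be robbed.
function rob(nums: number[]): number {
  let prevPrev = 0; // best total up to house i - 2
  let prev = 0;     // best total up to house i - 1
  for (const num of nums) {
    const current = Math.max(prev, prevPrev + num);
    prevPrev = prev;
    prev = current;
  }
  return prev;
}
```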
Strategy 5: Use Guided Hints for Decision Making
When stuck between greedy and DP, tools like LeetCopilot can help you think through whether a greedy approach works by walking through the counterexample logic—without revealing the full solution.
Example Code: Same Problem, Both Approaches
Problem: Coin Change—find minimum coins to make target amount.
Greedy Approach (Fails for General Case)
```typescript
function coinChangeGreedy(coins: number[], amount: number): number {
  // Sort descending to try largest coins first
  coins.sort((a, b) => b - a);
  let count = 0;
  let remaining = amount;
  for (const coin of coins) {
    count += Math.floor(remaining / coin);
    remaining %= coin;
  }
  return remaining === 0 ? count : -1;
}

// Works for coins = [25, 10, 5, 1], amount = 41 → 4 coins
// Fails for coins = [1, 3, 4], amount = 6 → returns 3, optimal is 2
```
Dynamic Programming Approach (Correct)
```typescript
function coinChangeDP(coins: number[], amount: number): number {
  // dp[i] = minimum coins to make amount i
  const dp = Array(amount + 1).fill(Infinity);
  dp[0] = 0; // 0 coins needed for amount 0
  for (let i = 1; i <= amount; i++) {
    for (const coin of coins) {
      if (coin <= i && dp[i - coin] + 1 < dp[i]) {
        dp[i] = dp[i - coin] + 1;
      }
    }
  }
  return dp[amount] === Infinity ? -1 : dp[amount];
}

// Works for all cases: coins = [1, 3, 4], amount = 6 → returns 2
```
Why DP works here: Taking a coin affects what remaining amount you need, creating overlapping subproblems (dp[6] depends on dp[5], dp[3], dp[2], which share computations).
Common Mistakes to Avoid
Mistake 1: Assuming Greedy Always Works for "Minimize"
"Minimize coins" sounds like greedy, but coin change usually requires DP.
Fix: Always try to find a counterexample before committing to greedy.
Mistake 2: Using DP When Greedy Works
DP is more complex. If greedy works, you're overcomplicating.
Fix: Check for greedy signals first. Only use DP if greedy fails.
Mistake 3: Not Recognizing Hybrid Problems
Some problems combine greedy within DP (or DP within greedy).
Example: Meeting Rooms II—sort by start time (a greedy idea), then track ongoing meetings with a min-heap of end times (stateful bookkeeping beyond a single greedy pass).
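If a min-heap feels heavy, a common heap-free variant of Meeting Rooms II sorts start and end times separately and counts simultaneous meetings. A sketch, assuming intervals are `[start, end]` pairs:

```typescript
// Heap-free sketch for Meeting Rooms II: sort starts and ends separately.
// Each start that occurs before the next unmatched end opens a new room;
// otherwise a meeting has finished and its room is reused.
function minMeetingRooms(intervals: number[][]): number {
  const starts = intervals.map(i => i[0]).sort((a, b) => a - b);
  const ends = intervals.map(i => i[1]).sort((a, b) => a - b);
  let rooms = 0;
  let e = 0; // index of the next meeting to finish
  for (const start of starts) {
    if (start < ends[e]) {
      rooms++; // overlaps an open meeting: need another room
    } else {
      e++;     // a meeting ended first; reuse its room
    }
  }
  return rooms;
}
```

The sort is the greedy ingredient; the `rooms`/`e` counters are the extra state that a pure one-choice-per-step greedy wouldn't carry.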
Mistake 4: Confusing "Optimal Substructure" with "Greedy Works"
Both greedy and DP require optimal substructure. The difference is whether overlapping subproblems exist and whether choices are independent.
Mistake 5: Forgetting State Design for DP
DP requires careful state definition. "dp[i]" needs precise meaning.
Fix: Before coding DP, write out:
- What dp[i] represents
- Base cases
- Recurrence relation
FAQ
Can a problem be solved by both greedy and DP?
Yes. When greedy works, DP also works (just more complex). But DP doesn't guarantee a greedy solution exists. If you find a greedy solution, prefer it for simplicity.
How important is proving greedy correctness in interviews?
Interviewers may ask "How do you know greedy works here?" Have a brief justification ready. You don't need formal proof, but explaining the exchange argument or "always stays ahead" reasoning shows strong understanding.
What if I can't figure out which to use during an interview?
Start with brute force, then optimize. Brute force often resembles DP structure. If you notice repeated computations, add memoization. If greedy seems obvious, try it and check edge cases.
Are there problems where neither greedy nor DP works?
Yes—some optimization problems need other techniques (binary search on answer, mathematical formulas, graph algorithms). But for most LeetCode optimization problems, it's one of these two.
Which should I learn first?
DP—it's more universally applicable and appears more in interviews. Learn greedy as special cases where DP shortcuts are valid.
Conclusion
Greedy and dynamic programming both solve optimization problems, but they apply to different structures.
Greedy works when local optimal choices lead to global optimal—when choices are independent and there's a clear priority for selection. It's faster and simpler but only correct for specific problems.
DP works when you have overlapping subproblems and optimal substructure—when choices affect future options and you need to consider all combinations. It's more complex but guarantees correctness.
The decision framework: Look for greedy signals first (sortable priority, independent choices). Try to find a counterexample. If you can't, greedy might work. If you can, use DP.
In practice, master DP first—it's more common in interviews. Learn greedy as efficient shortcuts for specific problem types. With practice, you'll develop intuition for which approach fits each problem.
Optimization problems don't have to be confusing. The key is recognizing the structural signals that indicate which approach applies—and knowing the counterexample test to validate your choice before committing to an implementation.
If you're looking for an AI assistant to help you master LeetCode patterns and prepare for coding interviews, check out LeetCopilot.