Backtracking algorithm explained
Backtracking algorithm explained through state space trees, pruning, and a full N-Queens walkthrough. Covers all 3 pattern types with decision triggers.
- What backtracking is and how it differs from brute force
- The three backtracking patterns and when each applies
- How pruning eliminates invalid branches in N-Queens step by step
- When to reach for backtracking vs DP, greedy, or exhaustive search
Most engineers describe backtracking as "try everything and undo what doesn't work." That's not wrong, but it's like describing binary search as "check the middle": technically accurate but completely useless for solving problems you haven't seen before. Explaining the backtracking algorithm at the right depth gives you a skeleton that applies to every problem in the category.
The engineers who struggle with backtracking in interviews aren't missing the concept. They're missing the mechanism. Backtracking isn't vague trial-and-error. It's an algorithm with a specific shape, and once you see that shape, every backtracking problem follows the same skeleton.
Backtracking algorithm from ground up
Backtracking builds solutions one decision at a time, walking a state space tree with depth-first search. When a partial solution violates the problem's constraints, the algorithm abandons that branch and backtracks to try the next option. That pruning is what separates backtracking from exhaustive search.
The difference matters more than it sounds. Exhaustive search generates every possible combination and checks each one after the fact. Backtracking generates partial combinations and abandons them the moment they can't lead anywhere valid. On N-Queens, exhaustive search evaluates all n^n board configurations (one queen per row, any of the n columns for each). Backtracking evaluates a fraction of that because it never places a queen in a row where a conflict already exists.
The underlying model is a state space tree. Each node is a partial solution. Each edge is a decision: place a queen, include an element, extend a path. The root is the empty solution. Leaves are either complete valid solutions or dead ends where a constraint was violated.
Every backtracking algorithm does three things:
1. Choose an option to extend the current partial solution.
2. Check whether the extended solution violates any constraints.
3. Undo the choice if the branch is dead (backtrack to the parent node).
That choose-check-undo loop, applied recursively on a state space tree, is the entire algorithm. The variation between problems isn't the loop itself. It's what constitutes a "choice," what the constraints are, and when you can prune.
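The loop can be sketched directly in code. This is a minimal sketch, not a canonical implementation: the `options`, `violates`, and `is_complete` hooks are hypothetical names for the three problem-specific pieces, wired up here to generate permutations as a demonstration.

```python
def backtrack(state, results, options, violates, is_complete):
    """Generic choose-check-undo skeleton; only the three hooks vary per problem."""
    if is_complete(state):
        results.append(list(state))
        return
    for option in options(state):              # choose
        if violates(state, option):            # check
            continue                           # prune this branch
        state.append(option)
        backtrack(state, results, options, violates, is_complete)
        state.pop()                            # undo

# Example: permutations of [1, 2, 3], expressed through the hooks.
items = [1, 2, 3]
perms = []
backtrack([], perms,
          options=lambda s: items,
          violates=lambda s, o: o in s,        # constraint: no element reused
          is_complete=lambda s: len(s) == len(items))
```

Swapping the three lambdas is all it takes to turn this into subsets, combinations, or a constraint search; the skeleton itself never changes.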
For small inputs, this distinction is academic. If you're generating all subsets of a 10-element array, pruning saves negligible time. The advantage compounds on problems with tight constraints and large search spaces, which happens to be exactly what shows up in interviews.
The three backtracking patterns
Backtracking problems don't all look the same, but they fall into three pattern types based on the tree you're exploring and how aggressively you can prune. Codeintuition's Backtracking course teaches each as a distinct pattern with its own identification triggers.
1. Unconditional enumeration
Generates all possible arrangements without any filtering constraint: all subsets, all case transformations, all phone number letter combinations. The tree branches at every position, and every leaf is a valid output. There's no pruning because there are no constraints to violate. You're exploring the entire tree.
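A subsets generator makes the "no pruning" point concrete. This sketch branches on include/exclude at every index; every path reaches a leaf, and every leaf is emitted.

```python
def subsets(nums):
    results, path = [], []

    def backtrack(i):
        if i == len(nums):            # every leaf is a valid output
            results.append(path[:])
            return
        backtrack(i + 1)              # choice 1: exclude nums[i]
        path.append(nums[i])          # choice 2: include nums[i]
        backtrack(i + 1)
        path.pop()                    # undo

    backtrack(0)
    return results
```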
How do you spot it? The problem asks you to produce every valid arrangement of some input, and "valid" just means "complete," not "satisfying additional rules."
2. Conditional enumeration
Adds constraints that prune branches during generation. Problems like generating all valid parentheses combinations, finding all combinations that sum to a target, or producing all permutations of distinct elements fall here. The tree has the same branching shape as unconditional enumeration, but now some branches die early because a partial solution already violates the constraint.
Take Generate Parentheses for n=3. You're building strings of length 6 using open and close brackets. At each position, you could place either character, but two constraints prune the tree. You can't place a close bracket unless the count of open brackets exceeds the count of closed ones, and you can't place an open bracket if you've already used all n. These constraints eliminate invalid branches before reaching the leaf, which is the entire value of backtracking over brute force.
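Those two constraints translate into two guarded branches. A sketch of one common way to write it:

```python
def generate_parentheses(n):
    results, path = [], []

    def backtrack(opened, closed):
        if len(path) == 2 * n:                 # leaf: a complete valid string
            results.append("".join(path))
            return
        if opened < n:                         # constraint: at most n opens
            path.append("(")
            backtrack(opened + 1, closed)
            path.pop()                         # undo
        if closed < opened:                    # constraint: never close more than opened
            path.append(")")
            backtrack(opened, closed + 1)
            path.pop()                         # undo

    backtrack(0, 0)
    return results
```

The pruning is invisible here precisely because it never happens: the guards prevent invalid branches from being entered at all, which is even cheaper than entering and abandoning them.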
3. Backtracking search
Looks for one or more configurations that satisfy a global constraint, not just local ones. N-Queens, maze pathfinding, and Sudoku are all backtracking search problems. The pruning is heaviest here because each placement creates constraints that propagate to future decisions. Placing a queen on row 1, column 2 eliminates column 2, both diagonals, and row 1 from all future placements.
You'll recognise backtracking search when the problem asks you to find a valid configuration (not enumerate all arrangements), and each decision constrains future decisions.
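A maze pathfinder shows the search flavour in miniature. This is a sketch with a hypothetical grid encoding (0 = open, 1 = wall); it returns the first valid path from top-left to bottom-right rather than enumerating all of them, and each visited cell constrains every future move.

```python
def find_path(maze):
    n, m = len(maze), len(maze[0])
    path = []

    def backtrack(r, c):
        # check: out of bounds, wall, or already on the current path
        if not (0 <= r < n and 0 <= c < m) or maze[r][c] == 1 or (r, c) in path:
            return False
        path.append((r, c))                   # choose
        if (r, c) == (n - 1, m - 1):
            return True                       # found one valid configuration; stop
        for dr, dc in ((1, 0), (0, 1), (-1, 0), (0, -1)):
            if backtrack(r + dr, c + dc):
                return True
        path.pop()                            # undo: this cell leads nowhere
        return False

    return path if backtrack(0, 0) else None
```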
Backtracking algorithm on N-Queens
Let's trace through N-Queens and watch the algorithm work. The problem asks you to place n queens on an n×n chessboard so that no two queens attack each other (same row, column, or diagonal).
The state space tree has n levels (one per row). At each level, you choose a column for that row's queen. The branching factor starts at n and shrinks as constraints eliminate columns.
Partial trace for n=4, showing where pruning kicks in:
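An instrumented sketch (the log format is my own) records every placement, prune, and undo for n=4 and stops at the first solution, so you can watch the branches die:

```python
def trace_n_queens(n=4):
    board, log = [], []   # board[r] = column of the queen placed in row r

    def is_valid(row, col):
        # Conflict if any earlier queen shares the column or a diagonal.
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(board))

    def backtrack(row):
        if row == n:
            log.append(f"solution: {board}")
            return True                        # stop at the first solution
        for col in range(n):
            if not is_valid(row, col):
                log.append(f"prune  row {row}, col {col}")
                continue
            log.append(f"place  row {row}, col {col}")
            board.append(col)
            if backtrack(row + 1):
                return True
            board.pop()
            log.append(f"undo   row {row}, col {col}")
        return False

    backtrack(0)
    return log
```

Printing `"\n".join(trace_n_queens())` shows the dead ends under row 0, column 0 before the first solution turns up under column 1.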
See what happened? Row 0, column 0 leads to a dead end after exploring 4 column options at row 1 and every sub-branch. Without pruning, you'd evaluate all 4^4 = 256 placements. With pruning, the algorithm tried about 20 nodes before finding the first solution. Early constraint checking made the difference. The implementation follows the choose-check-undo skeleton:
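Here is one way to write it; a sketch rather than the only possible form:

```python
def solve_n_queens(n):
    solutions, board = [], []   # board[r] = column of the queen in row r

    def is_valid(row, col):
        # Conflict if any earlier queen shares the column or a diagonal.
        for r, c in enumerate(board):
            if c == col or abs(c - col) == row - r:
                return False
        return True

    def backtrack(row):
        if row == n:                    # base case: every row has a queen
            solutions.append(board[:])
            return
        for col in range(n):            # choose a column for this row
            if is_valid(row, col):      # check: prune before recursing
                board.append(col)
                backtrack(row + 1)
                board.pop()             # undo

    backtrack(0)
    return solutions
```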
All three pieces are right there. board.append(col) is the choose step, is_valid(row, col) is the constraint check that prunes, and board.pop() is the undo step. Every backtracking problem you'll encounter in interviews has this same structure, even when the domain looks completely different.
“The skeleton is always the same: choose, check constraints, recurse, undo. The only thing that changes between problems is what you're choosing and what violates the constraints.”
When to reach for backtracking
Backtracking isn't always the right approach. Three signals in a problem statement tell you it applies.
The problem asks for all valid configurations or arrangements. "Generate all," "find all," "list every" are triggers for enumeration patterns. If it asks for a count only, DP is probably more efficient.
Each decision constrains future decisions. If placing an element in position A eliminates options for position B, you're building a constrained search tree. That's backtracking territory.
There's no greedy property or optimal substructure. Greedy works when locally optimal choices produce globally optimal results. DP works when the problem has overlapping subproblems. Backtracking is for problems where you genuinely need to explore multiple branches because no shortcut exists.
The boundary between backtracking and DP is blurrier than most textbooks admit. Target Sum Combinations can use backtracking (enumerate all subsets that sum to target) or DP (count the number of ways to reach target). Backtracking gives you the actual combinations. DP gives you the count. The interview question determines which you need.
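To make the contrast concrete, here's a sketch of the backtracking side, which returns the combinations themselves (assuming non-negative numbers, so overshooting the target can be pruned immediately):

```python
def target_sum_combinations(nums, target):
    results, path = [], []

    def backtrack(i, remaining):
        if remaining == 0:                    # a subset that hits the target exactly
            results.append(path[:])
            return
        if i == len(nums) or remaining < 0:   # prune: exhausted input or overshot
            return
        path.append(nums[i])                  # choose: include nums[i]
        backtrack(i + 1, remaining - nums[i])
        path.pop()                            # undo
        backtrack(i + 1, remaining)           # other branch: exclude nums[i]

    backtrack(0, target)
    return results
```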
One prerequisite: the four recursion patterns (head, tail, multiple, multidimensional) are what backtracking builds on. Backtracking is recursion with a choose-check-undo wrapper. If recursion still feels unstable, solidify that first.
Common mistakes in interviews
Five mechanical errors account for most backtracking bugs in interviews.
- Forgetting to undo state: You modify a visited array or append to a path, but you don't reverse the change after the recursive call returns. Now later branches see corrupted state from earlier branches. This one is sneaky because your code won't crash; it'll just produce wrong results.
- Pruning too late: Checking constraints after reaching a leaf wastes the entire point of backtracking. Check constraints before recursing into a child node. The earlier you prune, the fewer nodes you explore.
- Pruning too aggressively: If your N-Queens validator accidentally checks row conflicts (which can't happen since you place one queen per row), you won't get wrong answers, but you'll waste computation. In other problems, over-pruning can miss valid solutions entirely.
- Confusing the base case: In enumeration, the base case is reaching a complete arrangement. In search, it's finding a valid configuration. Mix these up and you get either incomplete results or infinite recursion.
- Misidentifying the pattern type: Applying search-level constraint checking to an unconditional enumeration problem adds unnecessary complexity. Applying no constraint checking to a search problem produces brute force, not backtracking.
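The first mistake is worth seeing in code. This sketch deliberately omits the undo step in a subsets generator: the tree shape is unchanged, so it still returns the right number of results, but later branches inherit elements from earlier ones and the contents are garbage.

```python
def subsets_buggy(nums):
    results, path = [], []

    def backtrack(i):
        if i == len(nums):
            results.append(path[:])
            return
        backtrack(i + 1)          # exclude nums[i]
        path.append(nums[i])      # include nums[i]
        backtrack(i + 1)
        # BUG: missing path.pop() -- later branches see corrupted state

    backtrack(0)
    return results
```

Run it on `[1, 2, 3]` and you'll find nonsense like `[3, 2, 3]` among the eight results, while real subsets like `[1]` never appear.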
The bridge to dynamic programming
Backtracking connects to dynamic programming in a way that trips up a lot of people.
When a backtracking solution has overlapping subproblems (the same partial state gets explored multiple times through different decision paths), adding memoization transforms it into top-down DP. That transition from backtracking to memoized recursion to tabulation is a progression you'll use constantly in interview preparation.
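A sketch of that transition on the counting version of Target Sum (subset-style, assuming non-negative integers): the recursion is unchanged, and `functools.lru_cache` on the `(index, remaining)` state is the only addition.

```python
from functools import lru_cache

def count_target_sum(nums, target):
    nums = tuple(nums)

    @lru_cache(maxsize=None)            # memoize on (i, remaining): now top-down DP
    def ways(i, remaining):
        if i == len(nums):
            return 1 if remaining == 0 else 0
        # branch 1: skip nums[i]; branch 2: take nums[i]
        return ways(i + 1, remaining) + ways(i + 1, remaining - nums[i])

    return ways(0, target)
```

Without the decorator this is plain backtracking over the decision tree; with it, each `(i, remaining)` state is computed once, which is exactly the overlapping-subproblems payoff.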
Codeintuition's Backtracking course covers all three patterns with the identification triggers taught before problems begin. You learn to recognise which pattern type a problem needs from the problem statement alone, not from the problem title or category tag. The free Arrays course uses the same pattern-first teaching model on foundational topics, so you can test the approach before reaching the backtracking material.
Six months ago, a Generate Parentheses problem would've sent you searching for the solution. Now you read the problem statement, notice the constraint (open count must exceed close count at every step), recognise it as conditional enumeration, and write the choose-check-undo skeleton with the constraint as your pruning function. The problem didn't get easier, but your reasoning got more precise.
Do you want to master data structures?
Try our data structures learning path, made of highly visual and interactive courses. Get hands-on experience by solving real problems in a structured manner. All the resources you'll ever need in one place, for FREE.