DP on trees and graphs
Master DP on trees and graphs with the three-part postorder framework. Trace tree diameter step by step and learn when graph DP applies.
- Why tree DP is recursive DP with the tree as the recurrence structure
- The three-part framework: leaves, internal nodes, root
- How to trace tree diameter using postorder state propagation
- When and how DP extends from trees to graphs through DAGs
You can solve coin change, longest common subsequence, and memoization vs tabulation problems without much trouble. Then you hit DP on trees and graphs for the first time, and the confidence disappears.
That reaction is more common than you'd think, and it's also misleading. Tree DP isn't a new category of dynamic programming. It's the same recurrence-based reasoning you already know, running on a different data structure.
What DP on trees and graphs actually means
Tree DP is recursive dynamic programming where the tree structure itself defines the subproblems. Each node's answer depends on the answers computed at its children. The state gets computed bottom-up through postorder traversal: process the children first, then combine their results at the parent.
Most explanations jump straight to code without connecting tree DP to the DP you already know. That's the part worth pausing on. Standard 1D DP moves a recurrence across an array index. 2D grid DP moves it across row and column coordinates. Tree DP moves the recurrence across the tree's edges. The mechanics are identical. Only the shape of the data changes.
Every tree DP problem shares three parts:
1. Base case at the leaves: A leaf node has no children, so its state is defined directly. For tree diameter, a null child contributes height 0, so a leaf returns height 1. For maximum path sum, a leaf contributes its own value.
2. State combination at internal nodes: Each internal node receives the computed states from its children and combines them. The combination logic is the recurrence relation. This is where the actual DP thinking happens.
3. Answer at the root (or accumulated globally): Sometimes the final answer is what gets computed at the root. Other times, like with tree diameter, the answer is the maximum across all nodes, tracked in a variable outside the recursion.
That third point catches people off guard. Not every tree DP problem has its answer sitting at the root. Some track a global optimum that gets updated at every node during traversal.
The recurrence at each node is usually simple. Height is 1 + max(left, right). Subtree sum is node.value + left_sum + right_sum. The hard part of tree DP rarely lives in the recurrence itself. It's figuring out what the state should represent at each node, what each node needs to return to its parent, and whether the answer lives at the root or gets tracked separately.
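That shape can be sketched directly. The `TreeNode` class below is a hypothetical node definition (not from the article); the postorder subtree-sum function shows all three parts of the framework:

```python
class TreeNode:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def subtree_sum(node):
    # Base case: an empty subtree contributes 0.
    if node is None:
        return 0
    # Postorder: resolve both children before combining at the parent.
    left_sum = subtree_sum(node.left)
    right_sum = subtree_sum(node.right)
    # The recurrence: node.value + left_sum + right_sum.
    return node.value + left_sum + right_sum

root = TreeNode(1, TreeNode(2, TreeNode(4), TreeNode(5)), TreeNode(3))
print(subtree_sum(root))  # 15
```

Here the answer really does sit at the root, so no global tracking is needed.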
Tracing tree diameter step by step
Tree diameter is the right problem to learn this on. It combines both answer types: a per-node computation (height) and a global optimum (diameter). Working through it node by node shows how tree DP actually operates.
Given a binary tree, find the length of the longest path between any two nodes. The path doesn't need to pass through the root.
At every node, two things hold at once.
- The height from this node downward equals 1 plus the maximum height of its children. This is what gets returned upward to the parent.
- The diameter candidate through this node equals the sum of its left child's height and right child's height. This gets compared against the global maximum.
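A minimal implementation of that two-stream pattern (the `TreeNode` definition is a hypothetical stand-in; diameter is measured in edges):

```python
class TreeNode:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def diameter(root):
    best = 0  # global optimum, tracked outside the recursion

    def height(node):
        nonlocal best
        # Base case: a null child contributes height 0.
        if node is None:
            return 0
        left_h = height(node.left)
        right_h = height(node.right)
        # Diameter candidate through this node: left height + right height.
        best = max(best, left_h + right_h)
        # Return the height upward, which is what the parent needs.
        return 1 + max(left_h, right_h)

    height(root)
    return best
```

The function returns height to its caller but updates `best` as a side effect: two parallel streams of information.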
Trace this on a simple tree:
```
    1
   / \
  2   3
 / \
4   5
```

Postorder visits nodes 4, 5, 2, 3, 1. Start at node 4 (a leaf), where left_h = 0 and right_h = 0, giving a diameter candidate of 0 and returning height 1. The same happens at node 5, returning height 1. At node 2, left_h = 1 and right_h = 1 produce a diameter candidate of 2, and the node returns height 2. Leaf node 3 returns height 1. Finally, node 1 gets left_h = 2 and right_h = 1, producing the largest diameter candidate of 3 and returning height 3.
The maximum diameter across all nodes is 3. The path runs 4, 2, 1, 3 (or 5, 2, 1, 3).
Notice what happened during that trace. The function returns one thing (height) but tracks another (diameter). This dual-purpose pattern shows up constantly in tree DP. The value you return to the parent isn't always the value you're trying to find.
That trace, stepping through each node in postorder and watching what gets computed versus what gets returned, is the mental model that makes tree DP problems click. You can't debug tree DP by reading the code linearly. You have to follow the recursion stack, tracking two parallel streams of information. One stream flows upward to parents. The other gets compared against the global answer. This kind of mental dry-running is the skill that separates engineers who can derive tree DP solutions from engineers who need to look them up.
> The value you return to the parent isn't always the value you're optimizing. That distinction is where tree DP clicks.
When DP extends to graphs
Trees are clean targets for DP because they're acyclic. Every node has exactly one path from the root, and postorder gives you a natural bottom-up ordering where children are always processed before parents. There are no circular dependencies to deal with.
Graphs break that guarantee. A cycle means node A's answer might depend on B, which depends on C, which depends back on A. The recurrence can't be evaluated.
But acyclic graphs still work. If the graph is a directed acyclic graph (DAG), you can apply DP by processing nodes in topological order. Topological sort gives you the same guarantee postorder gives on trees: every dependency is resolved before you need it.
This is exactly how shortest path DP works on DAGs. You process nodes in topological order and relax outgoing edges using already-computed distances. You don't need a priority queue or Dijkstra. The topological ordering guarantees that when you process a node, every node it depends on has already been finalized. Same guarantee postorder provides on trees, just generalized to arbitrary DAGs.
What about graphs with cycles? You can't directly apply DP to a cyclic graph without transforming it first. Some problems convert the graph into a DAG by defining a state space that's inherently acyclic, like adding a visited bitmask to the state. Others use strongly connected component decomposition to contract cycles into single nodes, producing a DAG of components. This territory gets deep fast, and most interview problems sidestep it by giving you a tree or a DAG to begin with.
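As one illustration of the bitmask idea, the sketch below finds the longest simple path in a small general graph. The graph may contain cycles, but the state (current node, set of visited nodes) can never repeat, so the state space itself forms a DAG (function names are hypothetical):

```python
from functools import lru_cache

def longest_simple_path(n, edges):
    """Longest path (counted in edges) visiting no node twice."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    @lru_cache(maxsize=None)
    def best_from(node, visited):
        # visited is a bitmask of nodes already on the path, so the
        # (node, visited) state space is acyclic even if the graph isn't.
        result = 0
        for nxt in adj[node]:
            if not visited & (1 << nxt):
                result = max(result, 1 + best_from(nxt, visited | (1 << nxt)))
        return result

    return max(best_from(start, 1 << start) for start in range(n))

# A triangle (0-1-2-0) plus a tail edge 2-3: longest simple path is 0-1-2-3.
print(longest_simple_path(4, [(0, 1), (1, 2), (2, 0), (2, 3)]))  # 3
```

The cost is the exponential state space (n * 2^n states), which is why this only works for small graphs.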
If a problem hands you a general graph and asks for a DP-style optimal answer, look for the DAG hiding inside the constraints before reaching for a recurrence.
Common mistakes in DP on trees and graphs
- Confusing traversal order: Tree DP almost always requires postorder. You need children's results before you can compute the parent's state. Using preorder means you're trying to compute a node's answer before its children have been processed. If your tree DP solution produces wrong results, check the traversal order first.
- Returning the wrong value: In tree diameter, the function returns height but the answer is diameter. Beginners often try to return the diameter directly from the recursive function, which breaks the recurrence. Return whatever the parent node needs to compute its own state. Track the global answer separately.
- Forgetting the base case: Null nodes should return 0 for height-based problems rather than -1 or some sentinel value. Getting the base case wrong shifts every computation by a constant, and your answers end up off by one at every node.
- Treating each tree DP problem as a separate thing to memorize: Tree diameter and maximum path sum aren't different "types" of tree DP. They're both postorder traversal with different state definitions. Once you understand the three-part framework (what state does each node compute, what does it return, what does it track globally), you can derive the solution for new tree DP problems without having seen them before.
Codeintuition's Binary Tree course teaches the postorder traversal patterns that tree DP builds on. The stateful postorder identification lesson trains the skill of recognizing when a problem requires tracking both a return value and a global state, which is the exact dual-purpose pattern tree diameter uses.
Where tree DP and standard DP meet
Tree DP connects two skills that are usually taught separately: tree traversal and dynamic programming. If postorder feels solid but DP recurrences still feel shaky, the memoization vs tabulation guide walks through how recursive and iterative DP relate. For the bigger picture on dynamic programming, including pattern derivation and recurrence construction, see the complete DP guide.
If graphs are where traversal order gets confusing, the BFS vs DFS comparison covers when each applies and why topological sort matters for DAGs.
Codeintuition's learning path sequences binary trees before dynamic programming for a reason. The postorder patterns you learn in the tree course become the foundation for tree DP problems in the Dynamic Programming course. Tree DP doesn't require new theory. It requires fluency with the traversal mechanics the tree course builds.
Six months from now, you won't remember the specific code for tree diameter. But you'll remember the framework. Leaves define the base case, internal nodes combine children's states, and the answer either sits at the root or gets tracked globally. That mental model applies to every tree DP problem you'll encounter. The first time you look at an unfamiliar one and see the postorder recurrence without being told, you'll know it's working.
Do you want to master data structures?
Try our data structures learning path, made of highly visual and interactive courses. Get hands-on experience by solving real problems in a structured manner. All the resources you would ever need, in one place, for FREE.