Binary search tree explained
Binary search tree explained through its core invariant. Trace search, insert, and delete frame-by-frame to build real interview confidence.
- The single invariant that defines every binary search tree
- How search and insertion follow from one traversal rule
- Why deletion requires three cases and how inorder successor works
- When BSTs degrade to O(n) and what balanced trees fix
Most explanations teach BST operations as separate procedures. You memorise the search algorithm, then the insert algorithm, then three deletion cases. That's backwards. A binary search tree explained from its core invariant is one traversal with a different ending. Once you see the single rule that drives all of them, the data structure stops being a collection of steps to remember.
The one rule that makes a binary search tree work
A binary search tree is a binary tree where every node's left subtree contains only values smaller than the node, and every right subtree contains only values larger. This ordering invariant makes search, insertion, and deletion possible in O(log n) time on balanced trees.
That's the entire data structure. One constraint, nothing else.
The invariant gives you something arrays and linked lists don't. You can eliminate half the remaining nodes at every step. When you're looking for a value, you compare it to the current node. If it's smaller, you go left. If it's larger, you go right. You never need to check the other side. Same principle as binary search on sorted arrays, except the tree structure makes insertion and deletion efficient too. No element shifting required.
Search and insertion in a BST: The invariant in action
Both operations start the same way. You begin at the root and traverse down, going left when your target is smaller than the current node and right when it's larger.
Search ends when you find the target or hit a null pointer (the value isn't in the tree). Insertion ends when you hit a null pointer and create a new leaf node there.
The traversal logic is identical in both cases, which is the whole point. The only difference is what happens at the terminal step.
Walk through it on a BST with nodes [8, 3, 10, 1, 6, 14, 4].
```
        8
       / \
      3   10
     / \    \
    1   6    14
       /
      4
```

Searching for 6 starts at 8. Because 6 < 8, go left to 3. Then 6 > 3, so go right to 6. The value matches after three comparisons across three levels, and you never touched the right side of the tree.
Inserting 7 follows the same traversal, starting at 8. Because 7 < 8, go left to 3. Then 7 > 3, so go right to 6. Then 7 > 6, so go right again and hit a null pointer. Create node 7 as the right child of 6. The invariant still holds at every node.
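The shared traversal is easy to see in code. Here is a minimal sketch in Python (the article names no language, so Python is an assumption, and `Node`, `insert`, and `search` are our own illustrative names):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def insert(root, value):
    # Walk down until a null pointer, then attach a new leaf there.
    if root is None:
        return Node(value)
    if value < root.value:
        root.left = insert(root.left, value)
    elif value > root.value:
        root.right = insert(root.right, value)
    return root  # equal values are ignored in this sketch

def search(root, target):
    # Same traversal as insert; it ends at a match or a null pointer.
    while root is not None and root.value != target:
        root = root.left if target < root.value else root.right
    return root is not None

root = None
for v in [8, 3, 10, 1, 6, 14, 4]:
    root = insert(root, v)

print(search(root, 6))  # True: 8 -> 3 -> 6, three comparisons
print(search(root, 7))  # False: the traversal hits a null pointer
root = insert(root, 7)
print(root.left.right.right.value)  # 7, now the right child of 6
```

The only divergence between the two functions is the terminal step: `search` reports the null pointer, `insert` replaces it with a new leaf.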
“Search and insertion aren't two operations. They're one traversal with a different ending.”
Insertion always creates a leaf. It doesn't restructure the existing tree. That's what makes it simple, and it's also what eventually makes the tree fragile.
Deletion in a BST: Three cases, one principle
Deletion is where BSTs get tricky, because removing a node can violate the invariant. There are three cases, but they all follow from one question: how do you fill the gap left by the removed node while keeping left < node < right intact?
1. Deleting a leaf
You remove it directly and nothing else changes, because the node had no children to displace.
2. Deleting a node with one child
Replace the node with its only child. The child's subtree already satisfies the invariant relative to the deleted node's parent, so the tree stays valid. This works because a one-child node is essentially a pass-through point in the tree: removing it and promoting the child is like removing a redundant link in a chain.
3. Deleting a node with two children
This is the case that trips people up. You can't just promote one child, because the other child's subtree would be orphaned or misplaced. The solution is to find the inorder successor, the smallest node in the right subtree. Copy its value into the node you're deleting, then delete the inorder successor itself. The successor is guaranteed to have at most one child (it's the leftmost node in a subtree), so its deletion falls into Case 1 or Case 2.
Trace through deleting node 3 from the tree above.
```
Step 1: Find node 3. It has two children (1 and 6). This is Case 3.

Step 2: Find the inorder successor of 3.
        Go right to 6, then left to 4.
        Node 4 is the smallest value in the right subtree.

Step 3: Copy value 4 into node 3's position.

Step 4: Delete the original node 4 from its old location.
        Node 4 was a leaf (Case 1). Remove it.

Result:

        8
       / \
      4   10
     / \    \
    1   6    14
```

The invariant holds at every node after the operation: 1 < 4 < 6 and 4 < 8 < 10. The tree is still a valid BST, and node 4 sits exactly where the ordering requires it.
Performance of a BST: When balance breaks
Every BST operation takes O(h) time, where h is the height of the tree. For a balanced tree with n nodes, h is O(log n). That's the promise, and it holds as long as the tree stays roughly balanced.
But insert the values 1, 2, 3, 4, 5 in that order, and you get this:
```
1
 \
  2
   \
    3
     \
      4
       \
        5
```

That's a linked list. Every operation now takes O(n) because you have to walk the entire chain. The ordering invariant is technically satisfied (every left subtree is empty, every right subtree has larger values), but the balance that gives BSTs their speed advantage is completely gone.
The counterintuitive part: sorted input, which feels like the most organised data, produces the worst possible BST. Random input tends to produce a roughly balanced tree with height close to log n. Interviewers test this gap specifically. If you say "BST operations are O(log n)" without mentioning the balance requirement, you'll lose points.
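You can watch the degradation happen by measuring height directly. A quick illustrative sketch in Python (`height` and the 100-value experiment are our own):

```python
import random

class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def insert(root, value):
    if root is None:
        return Node(value)
    if value < root.value:
        root.left = insert(root.left, value)
    else:
        root.right = insert(root.right, value)
    return root

def height(root):
    if root is None:
        return 0
    return 1 + max(height(root.left), height(root.right))

# Sorted input: every insert goes right, producing a chain.
chain = None
for v in range(1, 101):
    chain = insert(chain, v)
print(height(chain))  # 100 -- one node per level, O(n) operations

# The same 100 values in random order stay far closer to log2(100) ~ 6.6.
values = list(range(1, 101))
random.seed(0)  # fixed seed so the run is repeatable
random.shuffle(values)
shuffled = None
for v in values:
    shuffled = insert(shuffled, v)
print(height(shuffled))  # much smaller than 100
```

Same values, same code, wildly different heights. Only the insertion order changed.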
The fix is self-balancing BSTs like AVL trees and Red-Black trees. They add rotation operations that restructure the tree when an insertion or deletion throws it out of balance, guaranteeing O(log n) height regardless of input order.
Common BST interview mistakes
Most BST interview failures aren't about forgetting the algorithm. They come from misunderstanding the invariant or skipping edge cases the invariant would've caught. Candidates who can recite the steps but can't explain why each step preserves the ordering get stuck the moment the problem deviates from the textbook version.
- Confusing BST ordering with heap ordering: A BST orders left-right (left subtree < node < right subtree). A heap orders parent-child (parent > children in a max-heap). They're different invariants, and mixing them up leads to wrong traversal logic.
- Checking only the immediate children: The invariant isn't "left child < node < right child." It's "every node in the left subtree is smaller." A value can be a valid child of its parent but violate the constraint relative to a grandparent. This is why BST validator problems require passing upper and lower bounds down through the recursion.
- Freezing on two-child deletion: Engineers who handle Case 1 and Case 2 confidently often stall when both children exist. If you know the inorder successor technique, you've reduced Case 3 to an already-solved case. Practice this specific path until it's automatic.
- Assuming balance is guaranteed: This is the most common conceptual mistake. BSTs aren't inherently balanced. You need a self-balancing variant (AVL, Red-Black) or random insertion order to get logarithmic performance. Always state the worst case when discussing BST time complexity in an interview.
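The bounds-passing idea behind BST validation can be sketched in a few lines of Python (an assumed language; the tree that fools a children-only check is our own example):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def is_valid_bst(root, low=float("-inf"), high=float("inf")):
    # Every node must fall strictly inside the (low, high) window
    # inherited from ALL of its ancestors, not just its parent.
    if root is None:
        return True
    if not (low < root.value < high):
        return False
    return (is_valid_bst(root.left, low, root.value) and
            is_valid_bst(root.right, root.value, high))

# 9 is a legal right child of 3, but violates the bound set by 8.
bad = Node(8, Node(3, None, Node(9)), Node(10))
good = Node(8, Node(3, None, Node(6)), Node(10))
print(is_valid_bst(bad))   # False
print(is_valid_bst(good))  # True
```

A validator that only compares each node to its immediate children would accept `bad`, which is exactly the grandparent trap described above.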
Two directions from here
BSTs connect to two larger topics. On one side, the tree structure and its recursive operations feed into everything in binary trees: traversal patterns (preorder, inorder, postorder), BFS vs DFS trade-offs, and path-based problems. On the other, the ordering invariant connects to searching and sorting, where the same "eliminate half" principle drives time complexity analysis across algorithms.
Inorder traversal on a BST gives you sorted output, with the invariant doing the work. That sorted traversal pattern is behind BST validation, conversion to sorted arrays, and conversion to doubly linked lists. The Binary Search Tree course on Codeintuition covers all 4 BST patterns across 36 lessons and 41 problems, with each operation traced variable-by-variable through 76 illustrations.
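The sorted-output property of inorder traversal is quick to verify in code (a Python sketch with our own illustrative helpers):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def insert(root, value):
    if root is None:
        return Node(value)
    if value < root.value:
        root.left = insert(root.left, value)
    else:
        root.right = insert(root.right, value)
    return root

def inorder(root):
    # Left subtree, node, right subtree -- the invariant makes this sorted.
    return inorder(root.left) + [root.value] + inorder(root.right) if root else []

root = None
for v in [8, 3, 10, 1, 6, 14, 4]:
    root = insert(root, v)
print(inorder(root))  # [1, 3, 4, 6, 8, 10, 14]
```

No comparisons, no sorting step: the ordering invariant did all the work at insertion time.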
Codeintuition's learning path places BSTs after binary trees and before heaps. That ordering is deliberate. Recursive patterns you build in binary trees transfer directly into BSTs, and the heap's parent-child invariant makes more sense once you've internalised the BST's left-right invariant. You can start with the free Arrays course (63 lessons, 85 problems, 15 patterns across the free tier, permanently free) and build toward BSTs at your own pace.
Memorising three deletion cases gets you through one problem. Deriving them from the ordering invariant means you won't need to memorise them at all. Start with the invariant.
Do you want to master data structures?
Try our data structures learning path, made of highly visual and interactive courses. Get hands-on experience by solving real problems in a structured manner. All the resources you'll ever need, in one place, for FREE.