Graph algorithms coding interview

Graph algorithms coding interview guide covering the 5 core invariants from DFS to Dijkstra, with step-by-step traced walkthroughs.

20 minutes
Advanced
What you will learn

Why graph problems feel harder and how the representation reduction fixes that

The 5 core graph algorithm invariants and what each solves

How to trace BFS and DFS on grids and adjacency lists step by step

When to use topological sort, Dijkstra, and connected components

Which graph algorithms do you actually need for a coding interview? Most engineers preparing for a graph algorithms coding interview can list them fine. BFS, DFS, Dijkstra, maybe topological sort. But listing algorithms and knowing when to apply them are two different skills. The engineer who freezes on a graph problem during an interview usually knows the algorithms. What they don't have is a method for reducing the problem to a graph in the first place.

This guide builds that method. You'll learn the invariant behind each core graph algorithm, trace them on real problems, and walk away with a decision tree that makes algorithm selection mechanical.

TL;DR
Graph problems feel hard because the graph often isn't given to you. Once you learn to see nodes (states) and edges (transitions) in any problem, the algorithm choice becomes a decision tree. This guide covers the 5 invariants behind DFS, BFS, topological sort, Dijkstra, and connected components, with traced walkthroughs you can apply to unseen problems.

Why graph algorithms feel different

Graph algorithms are methods for traversing, searching, or computing paths through nodes connected by edges. In coding interviews, they test whether you can reduce an unfamiliar problem to a graph structure and apply the right traversal or shortest-path algorithm under time pressure.

What makes them different from every other topic isn't the algorithms themselves but the problems.

An array problem hands you an array. A tree problem hands you a tree. But a graph problem might hand you a grid of 0s and 1s, a list of courses with prerequisites, a dictionary of words that differ by one letter, or a set of airline routes with ticket prices. Before you can apply any algorithm, you have to see the graph.

This is representation ambiguity. The algorithm isn't the hard part. Seeing the structure is.

Most prep resources skip this step entirely. They jump straight into how BFS works without teaching you how to recognize that a grid traversal problem is a BFS problem, or that a word ladder is a shortest-path problem on an implicit graph. You end up memorizing algorithm implementations without building the skill that actually matters in an interview, which is reducing the problem to a graph.

We see this across 10,000+ engineers on the platform. The ones who struggle with graph problems longest aren't missing algorithm knowledge. They're missing the reduction skill. But reduction follows a learnable pattern. Once you see it, graph problems stop being a special category and become the same apply-the-right-pattern exercise as everything else.

The reduction that turns every graph problem mechanical

Every graph problem, no matter how it's dressed up, comes down to two things. Nodes are states, and edges are transitions between states.

That sounds abstract until you apply it. Take a grid of 0s and 1s where you need to find the minimum number of steps from the top-left corner to the bottom-right corner, moving only through 1s.

The graph is hiding in the grid. Each cell with a value of 1 is a node. Two nodes share an edge if their cells are adjacent (up, down, left, right). The problem asks for minimum steps, which means shortest path in an unweighted graph. That tells you the algorithm is BFS.

Now take a different kind of problem. You're given a list of courses and their prerequisites, and you need to find a valid order to take them all.

Each course is a node. Each prerequisite relationship is a directed edge from the prerequisite to the dependent course. The problem asks for a valid dependency ordering, which tells you the algorithm is topological sort.

The reduction pattern holds even for problems that don't mention graphs at all. A word ladder problem (transform "hit" to "cog" one letter at a time) becomes a graph where each word is a node and edges connect words differing by one letter, so shortest transformation means BFS. An island counting problem on a grid has land cells as nodes and adjacent land cells as edges, so counting separate groups becomes connected components via DFS or BFS. A cheapest flights problem with at most K stops is a weighted graph where cities are nodes and flights are edges, pointing to modified Dijkstra or Bellman-Ford.

Once you see every problem as states (nodes) connected by transitions (edges), the algorithm choice follows from what you need. Exploring all paths means DFS. Shortest unweighted distance means BFS. Ordering dependencies means topological sort. Shortest weighted distance means Dijkstra.

Important
The reduction step is the skill that separates engineers who can solve graph problems from engineers who've memorized graph algorithms. Practice the reduction separately from the algorithm.

DFS and what its invariant actually solves

DFS has one rule. Explore as deep as possible along each branch before backtracking. Pick a neighbor, go as deep as you can, then come back and try the next neighbor.

That rule gives you three things in interviews. DFS answers connectivity questions by exhaustively visiting every reachable node from the starting point, so you can tell whether node B is reachable from node A. It enables path enumeration because trying every branch before backtracking naturally generates all paths from A to B. And it catches cycles when combined with three-state tracking (unvisited, in-progress, completed). If you encounter a neighbor that's currently in-progress (still on the call stack), you've found a cycle.

The Island Count problem is a clean example. You're given a grid of land (1) and water (0) cells and need to count the number of separate islands.

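
The traced approach can be sketched in Python. This is a minimal illustration rather than a canonical implementation; the `count_islands` name and the visited-set bookkeeping are assumptions:

```python
def count_islands(grid):
    """Count connected groups of 1s, using 4-directional adjacency."""
    if not grid:
        return 0
    rows, cols = len(grid), len(grid[0])
    visited = set()

    def dfs(r, c):
        # Stop at grid edges, water cells, or already-visited cells.
        if r < 0 or r >= rows or c < 0 or c >= cols:
            return
        if grid[r][c] == 0 or (r, c) in visited:
            return
        visited.add((r, c))
        dfs(r + 1, c)
        dfs(r - 1, c)
        dfs(r, c + 1)
        dfs(r, c - 1)

    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and (r, c) not in visited:
                count += 1  # a new unvisited land cell means a new island
                dfs(r, c)
    return count
```

On the 3x3 grid traced below, this returns 2.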

Trace it on a small grid.

```
1 1 0
0 1 0
0 0 1
```

You start at (0,0), which is land, so you increment the island count to 1 and launch DFS. The traversal marks (0,0) as visited, moves to (0,1) and marks it visited, then moves to (1,1) and marks it visited. No more unvisited land neighbors remain, so DFS backtracks. When the scan continues and reaches (2,2), it's unvisited land, so the count increments to 2 and DFS marks it visited. The final result is 2 islands.

The DFS here isn't doing anything clever. It's walking through every connected land cell and marking it visited. The insight is that each time you start a new DFS from an unvisited land cell, you've found a new island.

For pure connectivity problems like this, DFS and BFS are interchangeable. Both visit every reachable node. The choice only matters when you need shortest path (BFS) or need to explore all paths exhaustively (DFS). For counting connected components, pick whichever you're more comfortable tracing mentally. In an interview, you won't get penalized for choosing either traversal when the problem only requires connectivity.

Codeintuition's Graph course teaches DFS as a pattern with explicit identification triggers. The understanding lesson for the DFS pattern breaks down the invariant frame by frame, tracing visited state at every step, before you attempt a single problem. That's the difference between knowing what DFS does and being able to trace it mentally under pressure. For a deeper comparison of when to reach for each traversal, see our guide on BFS vs DFS.

BFS and when shortest distance matters

BFS flips the order: explore all neighbors at the current depth before moving to the next depth. You process nodes level by level, so the first time you reach a node, you've found the shortest path to it. Think about why: if every edge has equal weight, the first path to reach any node must be the shortest one, because BFS doesn't skip ahead. It exhausts every node at distance 1 before looking at distance 2, every node at distance 2 before distance 3, and so on. That guarantee only holds for unweighted graphs. The moment edge weights vary, BFS can't guarantee shortest path, and you need Dijkstra.

In interviews, BFS is the right pick whenever a problem asks for minimum steps, fewest moves, or shortest transformation on an unweighted structure. The trigger is two conditions appearing together: the graph is unweighted, and you need a minimum distance.

Consider the Minimum Steps in a Grid problem. You're given a grid of 0s and 1s and need to find the minimum number of steps from the top-left to the bottom-right, moving through 1s only.

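
A hedged Python sketch of the grid BFS (the `min_steps` name, and the convention of counting the start cell as step 1 to match the trace, are illustrative choices):

```python
from collections import deque

def min_steps(grid):
    """Shortest path from top-left to bottom-right through 1s; -1 if unreachable."""
    if not grid or grid[0][0] == 0:
        return -1
    rows, cols = len(grid), len(grid[0])
    queue = deque([(0, 0, 1)])  # (row, col, distance); the start cell counts as step 1
    visited = {(0, 0)}          # mark visited on enqueue, not on dequeue
    while queue:
        r, c, dist = queue.popleft()
        if (r, c) == (rows - 1, cols - 1):
            return dist  # first arrival at the target is the shortest path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 1 and (nr, nc) not in visited):
                visited.add((nr, nc))
                queue.append((nr, nc, dist + 1))
    return -1
```

On the 3x3 grid traced below, this returns 5.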

Trace it on a 3x3 grid.

```
1 0 1
1 1 0
0 1 1
```

The queue starts with (0,0) at distance 1. Processing it reveals one land neighbor, (1,0), which enters the queue at distance 2. The neighbor (0,1) is water, so it's skipped.

Processing (1,0) adds (1,1) at distance 3. Then (1,1) adds (2,1) at distance 4. Then (2,1) adds (2,2) at distance 5. When (2,2) comes off the queue, it's the target, so the algorithm returns 5.

BFS explored outward one layer at a time. It didn't find the target until distance 5 because there's no shorter path through the grid. If you'd used DFS, you might have found the target faster on some paths, but you wouldn't know it was the shortest without exhausting all possibilities.

The queue is what makes BFS work. Swap it for a stack and you get DFS, going depth-first down a single branch. Keep the queue and you spread wide across all neighbors. That's the entire difference. But the consequence, guaranteed shortest unweighted path, is why BFS is the right choice for this class of problems.

Codeintuition's BFS identification lesson teaches you to recognize BFS problems before you start coding. When you see an unweighted graph combined with a minimum distance question, you've found a BFS problem.

Topological sort and ordering dependencies

Topological sort answers one question about directed acyclic graphs (DAGs): what ordering of the nodes makes every edge go from an earlier node to a later one?

The topological ordering invariant says to process a node only after all its predecessors have been processed. This only works on DAGs. If the graph has a cycle, no valid topological order exists, because you'd need A before B and B before A simultaneously.

You'll see this in interviews whenever a problem involves task scheduling, course prerequisites, build dependencies, or compilation order.

The Course Schedule problem is the classic example. You're given a number of courses and a list of prerequisite pairs, and you need to find one valid order to complete all courses.

The standard method uses in-degree counting. Compute the in-degree (number of incoming edges) for every node, then add all nodes with in-degree 0 to a queue, since those have no prerequisites and can be taken first. Process each node from the queue by adding it to the result and reducing the in-degree of each neighbor by 1. Whenever a neighbor's in-degree drops to 0, add it to the queue. If the result contains all nodes, you have a valid order. If not, the graph has a cycle.

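
The in-degree method (Kahn's algorithm) can be sketched as follows. The `course_order` name and the `(course, prereq)` pair convention are assumptions:

```python
from collections import deque

def course_order(num_courses, prerequisites):
    """Kahn's algorithm; prerequisites are (course, prereq) pairs. Returns [] on a cycle."""
    adj = [[] for _ in range(num_courses)]
    in_degree = [0] * num_courses
    for course, prereq in prerequisites:
        adj[prereq].append(course)  # directed edge prereq -> course
        in_degree[course] += 1

    # Start with every course that has no prerequisites.
    queue = deque(i for i in range(num_courses) if in_degree[i] == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in adj[node]:
            in_degree[nxt] -= 1
            if in_degree[nxt] == 0:
                queue.append(nxt)

    # A partial result means some in-degrees never reached 0: a cycle.
    return order if len(order) == num_courses else []
```

With the 4-course example traced below, it returns [0, 1, 2, 3].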

Trace it with 4 courses and prerequisites [(1,0), (2,0), (3,1), (3,2)]. Course 0 is a prerequisite for courses 1 and 2, and both 1 and 2 are prerequisites for course 3.

Initial in-degrees are [0, 1, 1, 2]. Course 0 has in-degree 0, so it enters the queue first. Processing course 0 adds it to the result and reduces the in-degree of courses 1 and 2 to [0, 0, 0, 2]. Both now have in-degree 0, so both enter the queue. Processing course 1 reduces course 3's in-degree to 1, and processing course 2 reduces it to 0. Course 3 now enters the queue with no remaining prerequisites, and processing it completes the result as [0, 1, 2, 3]. All 4 courses are present, so this is a valid ordering.

The time complexity is O(V + E), same as BFS and DFS. You visit every node once and process every edge once.

Something that catches engineers off guard is that topological sort doesn't produce a unique answer. Any DAG can have multiple valid orderings, and [0, 2, 1, 3] would also work in the example above. This BFS-based approach (Kahn's algorithm) produces one valid ordering, and a DFS-based approach produces a different one, but both satisfy the dependency constraints.

For a deeper walkthrough of topological sort, including cycle detection and the DFS-based variant, see .

Dijkstra and why greedy works for shortest weighted paths

Dijkstra's algorithm solves single-source shortest path on graphs with non-negative edge weights. Its invariant is greedy settlement, meaning you always process the unvisited node with the smallest known distance, and once a node is settled, its distance is final.

Why does greedy work? If all edge weights are non-negative, any path through an unsettled node can only get longer. The moment you settle a node, no future path through other nodes could possibly be shorter. That's why the non-negative constraint matters. A single negative edge could make a settled node's distance wrong, and the whole algorithm falls apart.

The algorithm has four steps. Initialize the source distance to 0 and every other node's distance to infinity. Use a min-heap to always process the node with the smallest distance. For each neighbor, check if going through the current node gives a shorter distance, and if so, update the neighbor and add it to the heap. Once a node is popped from the heap, its distance is settled.

This runs in O((V + E) log V) with a binary heap.

Walk through a small example. You have 4 nodes (A, B, C, D) with weighted edges A→B (1), A→C (4), B→C (2), B→D (6), C→D (3). You want the shortest path from A to every other node.

Start with distances [A=0, B=∞, C=∞, D=∞]. Pop A (distance 0) and update B to 1 and C to 4. Pop B (distance 1). Going through B, C becomes min(4, 1+2) = 3, and D becomes 1+6 = 7. Pop C (distance 3). Going through C, D becomes min(7, 3+3) = 6. Pop D (distance 6) and all nodes are settled. Shortest paths from A are B=1, C=3, D=6.

Notice that when C was popped at distance 3, the earlier distance of 4 (from going directly A→C) was already replaced by the shorter path A→B→C. That's greedy settlement in action. Once C is settled at distance 3, no future path through unsettled nodes can improve it, because all remaining edges have non-negative weights.
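The trace above maps onto a short heap-based sketch. The adjacency-dict input format and the `dijkstra` name are illustrative, not a canonical implementation:

```python
import heapq

def dijkstra(adj, source):
    """adj maps node -> list of (neighbor, weight); returns shortest distances."""
    dist = {source: 0}
    heap = [(0, source)]
    settled = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in settled:
            continue  # stale heap entry; this node was already settled at a smaller distance
        settled.add(node)
        for nbr, weight in adj.get(node, []):
            nd = d + weight
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd  # found a shorter path through the current node
                heapq.heappush(heap, (nd, nbr))
    return dist
```

Running it on the A–D graph above yields distances B=1, C=3, D=6.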

Dijkstra published this in 1959, and the algorithm hasn't changed since. The invariant was right the first time.

What about negative edge weights? Dijkstra's greedy settlement breaks because a negative edge could invalidate an already-settled distance. For that case, you need Bellman-Ford, which relaxes all edges V-1 times at O(V * E). Interview problems almost always specify non-negative weights when they expect Dijkstra. If the problem mentions negative weights or asks you to detect negative cycles, that's a Bellman-Ford signal. Most interview graph problems use non-negative weights.

For a complete walkthrough of Dijkstra with priority queue implementation and edge cases, see .

Connected components and bipartite detection

Two graph concepts show up frequently in interviews but don't always get dedicated treatment in prep resources.

Connected components answer a simple question. How many separate groups exist in a graph? Run DFS or BFS from any unvisited node, mark everything reachable, increment a counter. The Island Count problem from the DFS section is exactly this. Each island is a connected component.

The more interesting variant uses Union-Find (Disjoint Set Union). Instead of traversing, you process edges one at a time and merge sets of connected nodes. Union-Find with path compression and union by rank runs in nearly O(1) per operation (amortized), making it faster than repeated DFS for dynamic connectivity problems where edges arrive over time. For the full breakdown, see .
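As a rough illustration of that idea, here is one way Union-Find with path compression and union by rank might look. The `count_components` name and the edge-list input format are assumptions:

```python
def count_components(n, edges):
    """Count connected components among nodes 0..n-1 using Union-Find."""
    parent = list(range(n))
    rank = [0] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression (halving)
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra == rb:
            return False  # already in the same set
        if rank[ra] < rank[rb]:
            ra, rb = rb, ra
        parent[rb] = ra  # attach the shorter tree under the taller one
        if rank[ra] == rank[rb]:
            rank[ra] += 1
        return True

    components = n
    for a, b in edges:
        if union(a, b):
            components -= 1  # each successful merge removes one group
    return components
```

For example, `count_components(5, [(0, 1), (1, 2), (3, 4)])` returns 2.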

Bipartite detection asks whether you can color every node with one of two colors such that no two adjacent nodes share the same color. The invariant is the two-coloring rule. Assign a color to the starting node, assign the opposite color to all its neighbors, and propagate outward. If you ever try to assign a color to a node that already has the same color as its neighbor, the graph isn't bipartite.

Interview problems about grouping, partitioning, and conflict detection are often bipartite detection in disguise. "Can you divide students into two groups such that no two students who dislike each other are in the same group?" That's two-coloring.

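
One possible two-coloring sketch in Python (it assumes every node appears as a key in the adjacency dict; the `is_bipartite` name is illustrative):

```python
from collections import deque

def is_bipartite(adj):
    """adj maps each node -> list of neighbors; handles disconnected graphs."""
    color = {}
    for start in adj:
        if start in color:
            continue  # this component was already colored
        color[start] = 0
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for nbr in adj[node]:
                if nbr not in color:
                    color[nbr] = 1 - color[node]  # assign the opposite color
                    queue.append(nbr)
                elif color[nbr] == color[node]:
                    return False  # two adjacent nodes share a color: not bipartite
    return True
```

A 4-cycle returns True; a triangle returns False, since its odd cycle forces a color clash.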

Both connected components and bipartite detection run in O(V + E) and use the same BFS/DFS traversal skeleton. The difference is what you track during traversal. For components, you track a visited set. For bipartiteness, you track a color assignment per node.

What graph interview rounds actually test

Five patterns cover the territory you need for interviews. Codeintuition's learning path teaches each as a separate module with explicit pattern identification training before you attempt problems.

Graph patterns taught from first principles
Depth-First Search
Explore connectivity, enumerate paths, detect cycles in directed and undirected graphs
Connected Components
Count and measure disconnected regions using DFS, BFS, or Union-Find
Two Coloring
Detect bipartiteness by assigning alternating colors during traversal
BFS Applications
Find shortest unweighted paths, minimum steps, and nearest distances
Dijkstra Applications
Compute minimum cost paths on weighted graphs with non-negative edges

Which companies test these hardest? Based on company tags across 450+ problems on the platform, Google, Amazon, Meta, and Uber all test graph algorithms heavily. Google's emphasis on predicate search (binary search on the answer space) extends to graph problems, where minimum cost path and cheapest flights combine Dijkstra with additional constraints. Amazon tests the broadest range of graph patterns, from connected components on grids to topological sort for dependency problems. Meta and Uber both lean on BFS grid variants.

None of these companies test graph algorithms in isolation. They test whether you can see the graph in a problem that doesn't look like a graph problem. The word ladder problem doesn't say "graph." The course scheduling problem doesn't say "topological sort." The interview tests whether you can make those connections yourself.

Common mistakes with graph algorithms (and the mechanism behind each)

These are the mistakes that show up most across 200,000+ problem submissions on the platform.

The most common one is not marking nodes as visited before adding them to the queue. This causes the same node to enter the queue multiple times, producing duplicate processing and wrong results. The fix is to mark visited when you add to the queue, not when you process from the queue.
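
To make the fix concrete, here is a minimal sketch (the `bfs_layers` helper is hypothetical) that marks nodes visited at enqueue time:

```python
from collections import deque

def bfs_layers(adj, start):
    """Visit all reachable nodes; each node enters the queue at most once."""
    visited = {start}  # correct: mark when enqueued, not when dequeued
    queue = deque([start])
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nbr in adj[node]:
            if nbr not in visited:  # checking here prevents duplicate queue entries
                visited.add(nbr)
                queue.append(nbr)
    return order
```

Moving the `visited.add` to the dequeue step would let shared neighbors enter the queue once per parent, which is exactly the duplicate-processing bug described above.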

Using DFS when the problem asks for shortest path is another frequent one. DFS finds a path, not the shortest one. If minimum distance matters, you want BFS for unweighted graphs and Dijkstra for weighted ones. A related mistake is applying Dijkstra to a graph with negative edge weights, which breaks greedy settlement because a settled node's distance could be wrong.

Topological sort only works on directed acyclic graphs. If the graph has a cycle, no valid ordering exists. Kahn's algorithm naturally detects this (the result won't contain all nodes if a cycle exists).

Then there's confusing directed and undirected graphs. Using the wrong model changes which algorithms apply and how adjacency lists get built. Similarly, not handling disconnected graphs trips people up. If you run DFS or BFS from a single starting node, you'll only visit nodes in that connected component. Problems requiring full coverage need traversals launched from every unvisited node.

How to know when you're ready for graph interviews

Graph readiness isn't about the number of problems you've solved. It's about whether you can perform the reduction and choose the algorithm without hints.

"Ready" looks like this in practice. You open a problem you haven't seen before. The description mentions a grid, or connections between objects, or dependencies. Instead of freezing, you identify the nodes, identify the edges, determine whether the graph is weighted or unweighted, directed or undirected. The algorithm choice follows from that analysis.

Before learning the reduction method, every graph problem feels unique and unpredictable. You read the problem, recognize it's "something with graphs," and don't know where to start. After, you recognize that a word ladder is BFS on an implicit graph, that a course schedule is topological sort, that minimum cost means Dijkstra. The problems aren't easier, but you can see them clearly.

If you can do all of the following without looking anything up, you're ready for graph algorithms in your coding interview.

This Describes You
  • Given a problem description, you can identify what the nodes and edges are without hints
  • You can trace DFS and BFS by hand on a 5-6 node graph while tracking visited state at each step
  • You can explain why BFS guarantees shortest path in unweighted graphs and DFS doesn't, and you can implement topological sort using in-degree counting
  • You can trace Dijkstra's algorithm on a weighted graph, settling one node per iteration, and you know when it fails (negative weights)
  • You can identify whether a problem needs DFS, BFS, topological sort, or Dijkstra within 2 minutes of reading the description

If you're not there yet, start with the fundamentals. Codeintuition's Graph course covers all 5 patterns from first principles, with 289 illustrations tracing every algorithm frame by frame. The free tier gives you the Arrays and Singly Linked List courses (63 lessons, 85 problems, 15 patterns, permanently free, no time limit). Premium unlocks all 16 courses at $79.99/year. If graphs aren't your starting point, those foundation courses build the traversal and pointer skills that make graph algorithms click when you get there.

The first time you reduce a problem to nodes and edges without reading the hint, you'll know the method is working.

Do you want to master data structures?

Try our data structures learning path, made of highly visual and interactive courses. Get hands-on experience by solving real problems in a structured manner. All the resources you'll ever need, in one place, for free.

How many graph algorithms do you need for a coding interview?

Five cover the vast majority of interview problems: DFS, BFS, topological sort, Dijkstra, and connected components (via DFS/BFS or Union-Find). Bellman-Ford and Floyd-Warshall are worth understanding conceptually but show up far less often, so focus your practice on the core five.

Are graph problems harder than dynamic programming problems?

They feel harder initially because of representation ambiguity, but they're more predictable once you learn the reduction. Graph problems fall into fewer categories than DP problems, and the decision tree for choosing the right algorithm is more mechanical. DP requires constructing a recurrence relation from scratch each time. Graph algorithms require recognizing nodes and edges, then applying a known traversal or shortest-path method.

How do you recognize a graph problem that doesn't mention graphs?

Look for relationships between entities. If the problem involves connections (friends, routes, dependencies), positions on a grid, or transformations between states, it's likely a graph problem. Ask yourself whether you can define nodes (distinct states) and edges (transitions between states). If yes, you have a graph, even if the problem never uses the word. Word ladders, course scheduling, and island counting are all graph problems in disguise.

When should you use BFS versus Dijkstra?

BFS finds shortest paths in unweighted graphs by processing nodes level by level, which guarantees the first arrival is the shortest. Dijkstra handles weighted graphs with non-negative edges by always settling the node with the smallest known distance. If all edge weights happen to be equal, Dijkstra reduces to BFS but carries priority queue overhead you don't need. The simple rule is BFS for unweighted graphs, Dijkstra for weighted graphs. If edge weights can be negative, you need Bellman-Ford instead, because Dijkstra's greedy settlement assumes no settled distance can be improved later.

Should you learn graphs or dynamic programming first?

Graphs first, if you're following a structured path. Graph traversals build naturally on recursion and queue concepts from earlier data structures, so the jump feels manageable. Dynamic programming requires different reasoning (optimal substructure, overlapping subproblems) that's harder to develop without recursive thinking as a foundation. Getting comfortable with graphs builds the traversal instincts that make DP's recursive decomposition feel more natural. Codeintuition's learning path follows this order, placing the Graph course before Dynamic Programming. That said, if you're already comfortable with recursion and memoization, you can tackle DP first without any issues.