Arrays vs linked lists
Understand the arrays vs linked lists difference by tracing it to one root cause: memory layout. Know which to pick for any coding interview problem.
- Why memory layout is the single root cause of every tradeoff
- When arrays outperform linked lists and why cache locality matters
- When linked lists win and what overhead they carry
- How to choose between them in a coding interview
Most engineers memorize the complexity table. Arrays give O(1) access, linked lists give O(1) insertion. That's the answer on every flashcard deck and in every textbook. But the arrays vs linked lists difference goes deeper than a table. When an interviewer asks you to justify your choice on a problem you've never seen, memorized rows won't help.
It comes down to one thing: how they sit in memory. Everything else follows from that.
What arrays and linked lists actually are
Arrays store elements in contiguous memory, giving O(1) access by index but O(n) insertion at arbitrary positions. Linked lists store elements in scattered nodes connected by pointers, giving O(1) insertion at known positions but O(n) access. The right choice depends on which operation your problem performs most.
An array is a block of contiguous memory. When you create an array of 5 integers, the system allocates 5 consecutive memory slots. Element 0 sits at the base address, element 1 sits at base + (1 × element size), element 2 at base + (2 × element size), and so on.
A linked list is a chain of nodes, where each node holds a value and a pointer to the next node. These nodes can live anywhere in memory. Node 0 might be at address 1000, node 1 at address 7500, node 2 at address 3200. The only way to find node 2 is to start at node 0 and follow the pointers.
One is contiguous, the other is scattered. Every performance difference between the two traces back to this single fact.
Why memory layout is the key difference
It all goes back to memory layout. Contiguous memory makes arrays fast at access because the system can calculate the exact memory address of any element with simple arithmetic. Want element 4? It's at base_address + (4 × element_size). One multiplication, one addition. Doesn't matter if the array has 10 elements or 10 million. The cost is the same.
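That address arithmetic can be sketched in a few lines of Python. The base address and element size below are illustrative values, not real memory addresses:

```python
def element_address(base_address: int, index: int, element_size: int = 8) -> int:
    # One multiplication, one addition -- O(1) no matter how long the array is
    return base_address + index * element_size

print(element_address(1000, 4))  # 1032: element 4 of an 8-byte-element array at base 1000
```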
But contiguous memory also makes arrays slow at insertion. If you want to insert a value at index 2 in a 5-element array, every element from index 2 onward needs to shift one position right. That's 3 elements moving. In a million-element array, inserting at index 0 means shifting all million elements.
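The shifting cost can be emulated on top of a Python list. This is a sketch of what the memory model forces, not how CPython actually implements `list.insert`:

```python
def insert_at(arr: list, index: int, value) -> None:
    """Insert value at index, shifting later elements right -- O(n - index)."""
    arr.append(None)                     # grow by one slot
    for i in range(len(arr) - 1, index, -1):
        arr[i] = arr[i - 1]              # each element from `index` onward moves once
    arr[index] = value

nums = [10, 20, 30, 40, 50]
insert_at(nums, 2, 99)
print(nums)  # [10, 20, 99, 30, 40, 50]
```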
A linked list handles this differently. If you already have a pointer to the node before your insertion point, you create a new node, point it to the next node, and update the previous node's pointer. Two pointer operations, no shifting. The rest of the list doesn't move.
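A minimal sketch of that splice, using a hypothetical `Node` class (value plus `next` pointer):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def insert_after(node: Node, value) -> None:
    # Two pointer operations; no other node in the list moves
    node.next = Node(value, node.next)

head = Node(1, Node(2, Node(3)))
insert_after(head.next, 99)      # splice 99 in after the node holding 2

values = []
n = head
while n:
    values.append(n.value)
    n = n.next
print(values)  # [1, 2, 99, 3]
```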
“One is a row of chairs bolted to the floor. Moving everyone down to make room takes effort. The other is a chain where you can unhook a link and splice in a new one.”
Notice the qualifier: "if you already have a pointer to the node." Without one, you have to traverse from the head to find it. That traversal costs O(n). The insertion itself is O(1), but finding the insertion point can cost O(n).
Most explanations stop at the table, but the table is incomplete without this context: which operation does your specific problem perform most frequently? That's the real question.
When arrays win (and why cache locality matters)
Arrays dominate when your problem involves:
- Random access by index: Any problem where you need to jump to a specific position directly. Binary search, for example, requires accessing the middle element repeatedly. That's O(1) per access in an array. In a linked list, finding the middle means traversing half the list every time, which breaks binary search entirely.
- Iteration from start to end: When you're scanning every element, arrays are faster than their Big O suggests. Both arrays and linked lists are technically O(n) for a full traversal, but arrays are significantly faster in practice because of cache locality.
- Space efficiency: Arrays store just the data. Linked list nodes carry overhead: each node stores a value plus a pointer (or two, for doubly linked lists). For small data types like integers, the pointer can double the memory usage per element.
This last part matters more than the complexity table in real systems, and most DSA content glosses over it. When your CPU reads element 0 of an array, it doesn't fetch just that one value. It loads an entire cache line (typically 64 bytes) into the CPU cache. Because array elements are contiguous, elements 1 through 15 are probably already in that cache line. Accessing them costs nanoseconds instead of the 100+ nanoseconds for a main memory fetch.
Linked list nodes are scattered across memory. Each pointer dereference is a potential cache miss, forcing the CPU to go back to main memory. For a million-element traversal, the gap between "mostly cache hits" and "mostly cache misses" can be 10x or more in wall-clock time.
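The random-access point above is exactly what binary search depends on: each step jumps straight to a middle index, which only an array can do in O(1). A minimal sketch:

```python
def binary_search(arr: list, target) -> int:
    # Each iteration jumps directly to the middle index -- O(1) array access,
    # O(log n) iterations. A linked list can't jump, so this breaks down there.
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([2, 5, 8, 12, 16, 23], 12))  # 3
```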
When linked lists win (and when they don't)
Linked lists have a genuine advantage in a narrow set of situations:
- Frequent insertion or deletion at known positions: If your algorithm maintains a pointer to the position where changes happen, linked lists perform these operations in O(1). LRU Cache is the textbook example. It maintains pointers to both the head and tail, and moves nodes to the front on every access. Doing this in an array would mean shifting elements on every operation.
- Dynamic size with no reallocation: Arrays need to resize when they run out of space. Dynamic arrays (like Python's list) handle this automatically by doubling capacity, but the resize operation itself copies every element. Linked lists grow one node at a time. No copying, no capacity planning.
- Pointer-based manipulation: Reversing a segment, splitting a list into two, or merging two sorted sequences. These are pointer operations that don't require moving data. The Singly Linked List course covers seven distinct patterns built on this property: reversal, sliding window traversal, fast and slow pointers, split, merge, and reorder.
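Reversal is the simplest of these pointer patterns. A sketch with a minimal `Node` class: every node is relinked, but no element data is copied or moved:

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def reverse(head: Node) -> Node:
    # Walk the list once, flipping each node's pointer -- O(n) time, O(1) space
    prev = None
    while head:
        head.next, prev, head = prev, head, head.next
    return prev

head = reverse(Node(1, Node(2, Node(3))))
values = []
while head:
    values.append(head.value)
    head = head.next
print(values)  # [3, 2, 1]
```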
But linked lists lose more often than beginners expect. If you need to "insert at index k" and you don't already have a pointer to position k, you're traversing first (O(n)) and then inserting (O(1)). The total cost is O(n), same as an array insertion. The linked list advantage only shows up when your algorithm design gives you direct access to the node.
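That two-phase cost is easy to see in code. A sketch assuming a minimal `Node` class and k ≥ 1:

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def insert_at_index(head: Node, k: int, value) -> None:
    # Phase 1: O(k) walk to the node before position k (assumes k >= 1)
    node = head
    for _ in range(k - 1):
        node = node.next
    # Phase 2: O(1) relink -- but the walk already dominated the total cost
    node.next = Node(value, node.next)

head = Node(1, Node(2, Node(3)))
insert_at_index(head, 2, 99)     # 99 ends up at index 2

values = []
n = head
while n:
    values.append(n.value)
    n = n.next
print(values)  # [1, 2, 99, 3]
```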
| Operation | Array | Linked list |
| --- | --- | --- |
| Access by index | O(1) | O(n) |
| Search (unsorted) | O(n) | O(n) |
| Insert at beginning | O(n) | O(1) |
| Insert at end | O(1) amortized | O(1) with tail pointer |
| Insert at known position | O(n) | O(1) |
| Delete at known position | O(n) | O(1) |
| Memory per element | Data only | Data + pointer(s) |
| Cache performance | Excellent | Poor |
| Binary search | O(log n) | Not feasible |
The difference in a coding interview
When an interviewer asks you to choose between an array and a linked list, they aren't testing whether you've memorized the table above. They want to see if you can reason about which operations your solution will perform most frequently. The decision breaks down like this:
- ✓ Random access needed (jumping to index i): use an array
- ✓ Sequential iteration where performance matters: use an array (cache locality)
- ✓ Binary search or divide-and-conquer by position: use an array
- ✓ Frequent insert or delete at positions you already hold pointers to: use a linked list
- ✓ Reversing, splitting, or merging segments by relinking: use a linked list
- ✓ Both access and reordering needed (e.g., LRU Cache): combine a hash map with a linked list
That last one is worth expanding on. Many interview problems don't use either option in isolation. Consider LRU Cache, tested at 19 companies including Google, Amazon, and Meta. It combines a hash map with a doubly linked list. The hash map provides O(1) access to any node. The linked list provides O(1) insertion and deletion at the front and back. Neither alone solves the problem.
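A compact sketch of that combination using Python's `OrderedDict`, which CPython itself implements as a hash map plus a doubly linked list, so `move_to_end` and `popitem` are the O(1) node operations described above:

```python
from collections import OrderedDict

class LRUCache:
    """Sketch of LRU Cache: hash map for O(1) lookup, linked list for O(1) reorder."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()        # insertion order doubles as recency order

    def get(self, key):
        if key not in self.data:
            return -1
        self.data.move_to_end(key)       # O(1) relink: mark as most recently used
        return self.data[key]

    def put(self, key, value) -> None:
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # O(1) eviction of least recently used

cache = LRUCache(2)
cache.put(1, 1)
cache.put(2, 2)
cache.get(1)         # touch key 1, so key 2 is now least recently used
cache.put(3, 3)      # evicts key 2
print(cache.get(2))  # -1
print(cache.get(1))  # 1
```

In an interview you would typically build the doubly linked list by hand, but the structure of the solution is the same: neither the hash map nor the list alone solves the problem.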
Patterns you'll use with arrays (two pointers, sliding window, interval merging) and patterns you'll use with linked lists (reversal, fast and slow pointers, merge) are fundamentally different. The Arrays course covers 8 patterns built on contiguous memory. Linked list courses cover 7 patterns built on pointer manipulation. Understanding why each pattern belongs where it does, not just that it does, is what separates memorizing solutions from reasoning through new problems.
Beyond the difference
The arrays vs linked lists difference is the first decision in a longer chain. Once you understand why memory layout determines performance, the same reasoning applies to stacks (array-based vs linked list-based), queues (circular array vs linked list), trees (array representation vs node-based), and graphs (adjacency matrix vs adjacency list). BFS vs DFS, for example, involves choosing between a queue and a stack. Both choices trace back to the same memory tradeoffs.
For an ordered progression through all of these, Codeintuition's learning path starts with arrays and linked lists (both free, permanently) and builds each topic on what you've already covered. Together, the Arrays and Singly Linked List courses cover 63 lessons, 85 problems, and 15 patterns for free. The full 16-course path is $79.99/year.
The first time you pick the right option before reading the hints, and your solution runs in optimal time because the memory layout matches your access pattern, the complexity table stops being something you memorize. It becomes something you can derive.
Do you want to master data structures?
Try our data structures learning path, made of highly visual, interactive courses. Get hands-on experience by solving real problems in a structured manner. All the resources you'll ever need, in one place, for FREE.