Stacks are one of the most important and widely used data structures in computer science, and understanding them clearly will make many algorithms and real-world software tasks feel simpler and more organized. In very technical but friendly terms, a stack is a constrained Abstract Data Type (ADT) that stores elements in a strictly ordered sequence where insertions and deletions occur only at one distinguished end called the top, and the access discipline follows a Last-In-First-Out policy. In practice, this means the most recently inserted element is the first one available for removal or inspection. If this feels abstract, picture a pile of plates: you always add a new plate on the top and you always take the next plate from the top. This strict order gives stacks predictable behavior, which is why they are used for things like processing function calls, handling browser navigation history, and checking balanced symbols in code. This article introduces stacks in super simple English while keeping the definitions technically accurate, shows how to use and implement them, explains common operations and exceptions like underflow and overflow, compares array and linked list implementations, and walks through step-by-step examples and interview-style problems. If you want only the essentials quickly, jump to “The Crust” section, but if you stay, you’ll learn how stacks work from the ground up with clarity and confidence.
Stack – Introduction
The Crust: Read This If You’re In a Hurry
A stack is an ADT where all inserts and deletes happen at one end called the top, and it enforces LIFO (Last-In-First-Out) order: the last item pushed is the first item popped. The main operations are push (insert at top), pop (remove from top), and top/peek (read the top without removing). Errors include underflow (popping or peeking when empty) and overflow (pushing when a fixed array is full). Stacks are used for function calls and recursion, undo/redo in editors, browser back navigation, expression evaluation (infix-to-postfix and postfix evaluation), balancing brackets, and as helpers in many algorithms. Implementations include: (1) simple array (fast O(1) ops but fixed size), (2) dynamic array with repeated doubling (average O(1) push with occasional resizing; this is amortized O(1)), and (3) linked list (grows and shrinks gracefully with O(1) operations, but has pointer overhead). In interviews, you’ll see tasks like balancing symbols, reversing a stack using only stack ops, simulating a queue with two stacks, or packing multiple stacks into one array. If you remember “one end, LIFO, push/pop/top, underflow/overflow, and three standard implementations,” you already hold the core mental model. The details below expand each idea with steps, examples, code, tables, and comparisons so you can apply stacks confidently in code and in problem solving.
- Core idea: One-end access, LIFO order.
- Key ops: Push, Pop, Top/Peek, Size, IsEmpty, IsFull.
- Errors: Underflow on empty pop/top; overflow on full push (fixed arrays).
- Implementations: Simple array, dynamic array (doubling), linked list.
- Use cases: Function calls/recursion, undo/redo, browser back, bracket matching, expression evaluation.
What Is a Stack?
Formally, a stack is an ordered list and Abstract Data Type (ADT) that restricts insertion and deletion to a single endpoint called the top, enforcing a LIFO (Last-In-First-Out) or equivalently FILO (First-In-Last-Out) access policy. Practically, the stack maintains a sequence of elements such that the most recently pushed element is exactly the element returned by the next pop, and the top operation returns the same element without removal. This constrained access model makes stacks predictable and efficient because each operation touches only one end. If you visualize a cafeteria plate dispenser, each fresh plate is added on the top, and whenever you need a plate you take the one on top. This simple behavior is powerful in computing: it naturally models nested work such as function calls, recursive evaluations, and nested structures like parentheses in expressions. In all implementations, we keep a notion of the current top (an index for arrays or a pointer for linked lists). When the stack has no elements it is empty; pushing adds one element to the top; popping removes exactly one element, revealing the previous top. The deterministic, one-end rule is the heart of a stack and it is the reason why algorithms built on stacks are both elegant and fast.
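To make the LIFO rule concrete, here is a quick sketch using java.util.ArrayDeque, the class the Java documentation recommends for stack usage (the plate names are purely illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class LifoDemo {
    public static void main(String[] args) {
        Deque<String> plates = new ArrayDeque<>();
        plates.push("plate 1"); // first in
        plates.push("plate 2");
        plates.push("plate 3"); // last in, now on top
        System.out.println(plates.peek()); // plate 3 (top, not removed)
        System.out.println(plates.pop());  // plate 3 -- last in, first out
        System.out.println(plates.pop());  // plate 2
        System.out.println(plates.pop());  // plate 1
    }
}
```

Notice that the pop order is the exact reverse of the push order, which is the whole contract of a stack.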
How Stacks Are Used (Intuition)
Think of a developer working on a long project when a manager gives a new urgent task, and then a phone call interrupts both. The developer mentally “pushes” the current work onto a pending tray and switches to the higher priority task, and then pushes that task to answer the phone. When the call ends, the developer “pops” the top of the pending tray to resume the last interrupted task, and later pops again to get back to the original project. This mirrors the LIFO behavior of a stack, where the most recently paused task is the first to be resumed. Computers use the same logic for function calls: when a function is called, the current point of execution and local variables are pushed onto a call stack. If that function calls another function (or itself recursively), another frame is pushed. When each function finishes, its frame is popped, automatically restoring the previous state. This same structure helps us manage browser navigation (Back button pops the most recent page), processing undo operations in a text editor (each action is pushed and popped to revert), checking matching brackets in code, and evaluating expressions. The stack’s predictable one-end access is what makes all these workflows safe, simple, and very fast.
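As a rough sketch of this idea (not how the JVM literally manages frames), the factorial below replaces the implicit call stack with an explicit one, so you can watch "frames" being pushed on the way down and popped on the way back up; the class and method names are made up for illustration:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class CallStackDemo {
    // Iterative factorial that makes the implicit call stack explicit:
    // each pushed value plays the role of a paused frame's argument.
    static long factorial(int n) {
        Deque<Integer> frames = new ArrayDeque<>();
        while (n > 1) {
            frames.push(n--);           // "calling" factorial(n - 1): push current frame
        }
        long result = 1;                // base case reached
        while (!frames.isEmpty()) {
            result *= frames.pop();     // "returning": pop resumes the paused frame
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // 120
    }
}
```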
Stack ADT: Operations and Exceptions
As an Abstract Data Type, the stack defines a set of operations with precise behavior and error conditions, independent of the internal representation. For an integer stack, the main operations are: Push(int data) to insert at the top; int Pop() to remove and return the most recently inserted element; and auxiliary operations like int Top() (return the last inserted element without removing it), int Size() (number of stored elements), boolean IsEmpty() (whether the stack has zero elements), and in fixed-size implementations boolean IsFull() (whether no further push is possible). Two important exceptions must be considered: underflow occurs when calling Pop or Top on an empty stack; overflow occurs when calling Push on a completely full fixed-size array stack. In robust code, these conditions are treated as thrown exceptions or error states that must be handled by the caller. The elegance of a stack ADT is that clients do not need to know whether it uses arrays or linked lists; the semantics and guarantees remain the same. This separation of interface from implementation enables us to analyze timing and memory guarantees, pick the right implementation for the job, and write safer programs that clearly signal misuse via well-defined exceptions.
- Push(int data): Insert element at the top.
- int Pop(): Remove and return the top element; throws underflow if empty.
- int Top(): Return the top element without removing; throws underflow if empty.
- int Size(): Return the count of elements.
- boolean IsEmpty(): Return true if no elements are stored.
- boolean IsFull(): Return true if a fixed array stack cannot accept more pushes.
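The operations above can be captured as a small Java interface that separates the contract from any particular representation; the name IntStack is an assumption for illustration, and each implementation later in this article could be adapted to satisfy it:

```java
// A minimal contract for the integer stack ADT described above.
// Array, dynamic array, and linked list implementations can all satisfy it.
public interface IntStack {
    void push(int data);   // insert at top; fixed-size stacks may throw on overflow
    int pop();             // remove and return top; throws on underflow
    int top();             // return top without removing; throws on underflow
    int size();            // number of stored elements
    boolean isEmpty();     // true when no elements are stored
    default boolean isFull() { return false; } // only fixed-size stacks override this
}
```

Clients programmed against this interface never need to know which implementation they are using, which is exactly the ADT separation the section describes.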
Implementations of Stack
There are three popular ways to implement a stack, each with different trade-offs in capacity management, memory usage, and operation cost. The simplest is the array-based implementation, where we store elements in a contiguous array, grow the top from left to right, and track the top index; this is very fast but has a fixed maximum size and can throw overflow on push when full. A refinement is the dynamic array implementation, which uses a resizable array that doubles its capacity when full; this makes average push time amortized O(1) by spreading the occasional expensive copy over many cheap pushes. The third is the linked list implementation, where each element is a node and we insert and delete at the head; this grows and shrinks gracefully with no fixed capacity, and all core operations remain O(1), though each element stores a reference, adding small overhead. Choosing between them depends on constraints: if you need a strict cap and simple memory footprint, fixed arrays are fine; if you want to handle unknown growth efficiently, dynamic arrays are ideal; if you value constant-time growth without copying and can afford per-node overhead, linked lists shine. Below, we detail each with code, complexity tables, and their pros and cons.
Simple Array Implementation
An array-based stack stores items in a preallocated array and uses an integer top index to mark the position of the last inserted element. The stack starts empty with top = -1. To push, we first check if top equals capacity - 1; if yes, we have an overflow condition and should throw an exception. Otherwise, we increment top and write the new value at data[top]. To pop, we check if top == -1; if yes, it is underflow; otherwise, we read data[top] and then decrement top. Top/peek simply returns data[top] after checking for empty state. Because every operation touches only the end and uses direct indexing, the time for Push, Pop, Top, Size, and IsEmpty is O(1). The main limitation is fixed capacity: once the array is full, you cannot push more without re-architecting or switching to a dynamic approach. This method is memory-compact and predictable, making it great for embedded contexts or when you know an upper bound on size. Below is a clear Java implementation demonstrating these ideas, followed by a complexity table and a list of limitations you should keep in mind when choosing this approach.
public class ArrayStack {
private final int[] data;
private int top; // index of the top element, -1 means empty
public ArrayStack(int capacity) {
if (capacity <= 0) throw new IllegalArgumentException("Capacity must be positive");
this.data = new int[capacity];
this.top = -1;
}
public void push(int value) {
if (isFull()) throw new IllegalStateException("Overflow: stack is full");
data[++top] = value;
}
public int pop() {
if (isEmpty()) throw new IllegalStateException("Underflow: stack is empty");
return data[top--];
}
public int top() {
if (isEmpty()) throw new IllegalStateException("Underflow: stack is empty");
return data[top];
}
public int size() {
return top + 1;
}
public boolean isEmpty() {
return top == -1;
}
public boolean isFull() {
return top == data.length - 1;
}
}
| Operation | Time Complexity | Space Complexity (for n pushes) |
|---|---|---|
| Push | O(1) | O(n) |
| Pop | O(1) | O(n) |
| Top/Peek | O(1) | O(n) |
| IsEmpty / IsFull / Size | O(1) | O(n) |
| DeleteStack (free array) | O(1) | O(1) additional |
- Advantages: Very fast constant-time operations; compact memory layout; minimal overhead.
- Disadvantages: Fixed maximum size; pushing on full stack throws overflow; no graceful growth.
- Best for: Known size bounds, embedded systems, predictable workloads.
Dynamic Array (Resizable) Implementation
The dynamic array stack builds on the simple array by enabling automatic growth using the repeated doubling strategy: when the internal array becomes full on a push, the stack allocates a new array of twice the capacity and copies existing elements over, then proceeds with the push. This makes the cost of resizing occasional and predictable: even though a resize costs O(n) copies, it happens only when capacity is reached, and over a sequence of n pushes the total work is linear, yielding amortized O(1) time per push. Many implementations also shrink the array when usage drops (for example, when size falls to 1/4 of capacity) to avoid memory waste; shrinking policies vary and should avoid oscillation. Underflow and top checks are the same as any stack. This approach removes the hard cap on growth (bounded by available memory) and keeps operations very fast on average. The code below demonstrates a clear Java implementation that doubles capacity on growth and halves it once the stack falls to one-quarter full, and the table summarizes complexities. Remember that while average push is O(1), a push that triggers resizing is temporarily more expensive; this is normal and amortized analysis ensures the long-run average remains constant time.
public class DynamicArrayStack {
private int[] data;
private int top; // -1 means empty
public DynamicArrayStack() {
this.data = new int[1];
this.top = -1;
}
public void push(int value) {
if (top == data.length - 1) {
resize(data.length * 2); // repeated doubling
}
data[++top] = value;
}
public int pop() {
if (isEmpty()) throw new IllegalStateException("Underflow: stack is empty");
int v = data[top--];
// Optional shrink to save memory (avoid thrashing in real systems)
if (top + 1 > 0 && top + 1 == data.length / 4) {
resize(Math.max(1, data.length / 2));
}
return v;
}
public int top() {
if (isEmpty()) throw new IllegalStateException("Underflow: stack is empty");
return data[top];
}
public int size() {
return top + 1;
}
public boolean isEmpty() {
return top == -1;
}
private void resize(int newCapacity) {
int[] nd = new int[newCapacity];
System.arraycopy(data, 0, nd, 0, top + 1);
data = nd;
}
}
| Operation | Time Complexity | Space Complexity (for n pushes) |
|---|---|---|
| Push | O(1) amortized | O(n) |
| Pop | O(1) | O(n) |
| Top/Peek | O(1) | O(n) |
| IsEmpty / Size | O(1) | O(n) |
| Resizing events | O(n) per resize (infrequent) | O(n) |
- Advantages: No fixed cap; average-case constant-time push; simple code; great practical performance.
- Disadvantages: Occasional expensive resize; a resize momentarily holds both the old and new arrays in memory; repeated doubling can over-allocate (up to roughly twice the needed space) in tightly memory-constrained environments.
- Best for: General-purpose stacks where size varies and performance smoothness over many operations matters.
Linked List Implementation
In the linked list stack, each element is a node containing a value and a reference to the next node, and the stack’s top is the list’s head. Push creates a new node and links it in front of the current head in O(1). Pop removes the head node in O(1), after checking for underflow. Top/peek returns the head’s value in O(1). This approach naturally grows and shrinks without a predefined capacity, and there is no expensive copying as in array resizing. However, every node stores an extra reference, increasing memory overhead compared to a compact array, and in non-garbage-collected environments, deleting the whole stack requires traversing all nodes to free them (often counted as O(n) delete). In Java, setting head to null is O(1) but the spirit of the classic analysis considers explicit freeing cost. The implementation below is straightforward and keeps the constant-time guarantees for the core operations. The table summarizes complexities as commonly taught, and the bullet list clarifies when and why linked lists may be preferable to arrays.
public class LinkedListStack {
private static class Node {
int val;
Node next;
Node(int v, Node n) { this.val = v; this.next = n; }
}
private Node head; // top of stack
private int size;
public void push(int value) {
head = new Node(value, head);
size++;
}
public int pop() {
if (isEmpty()) throw new IllegalStateException("Underflow: stack is empty");
int v = head.val;
head = head.next;
size--;
return v;
}
public int top() {
if (isEmpty()) throw new IllegalStateException("Underflow: stack is empty");
return head.val;
}
public int size() {
return size;
}
public boolean isEmpty() {
return head == null;
}
// In languages without GC, freeing nodes is O(n)
public void deleteStack() {
while (head != null) {
Node tmp = head;
head = head.next;
tmp.next = null; // help GC
}
size = 0;
}
}
| Operation | Time Complexity | Space Complexity (for n pushes) |
|---|---|---|
| Push | O(1) | O(n) |
| Pop | O(1) | O(n) |
| Top/Peek | O(1) | O(n) |
| IsEmpty / Size | O(1) | O(n) |
| DeleteStack (free nodes) | O(n) | O(1) additional |
- Advantages: Grows/shrinks without copying; no fixed capacity; every op is constant-time.
- Disadvantages: Per-node pointer overhead; non-contiguous memory; delete-all may be O(n).
- Best for: Highly dynamic sizes or when you want to avoid amortized spikes from resizing.
Comparing Implementations
Choosing the “best” stack implementation depends on the context of your application: expected size, memory limits, and sensitivity to occasional latency spikes. A simple array stack gives the smallest overhead and consistently fast O(1) operations until the capacity is reached, at which point pushes fail with overflow. The dynamic array stack avoids fixed limits by resizing with repeated doubling, making push amortized O(1), and a sequence of n operations from empty costs O(n) overall; however, some pushes will be briefly expensive during resizing and can momentarily allocate more memory. The linked list stack guarantees O(1) per operation and grows gracefully without copying but pays a small memory tax per node and may be less cache-friendly. If you compare incremental resizing by +1 versus doubling, incremental resizing costs O(n^2) total over n pushes (amortized O(n) per push), which is poor; doubling achieves O(n) total (amortized O(1) per push) and is preferred. In practice, dynamic arrays are an excellent default, arrays are great when size is known, and linked lists are appropriate when growth patterns are unpredictable and you want to avoid resizing overheads. The bullets below summarize the trade-offs clearly.
- Array (fixed): Constant-time ops, minimal overhead; cannot grow; overflow when full.
- Dynamic array (doubling): Amortized O(1) push; occasional O(n) resize; good all-rounder.
- Linked list: True O(1) ops and graceful growth; extra references; potential O(n) delete-all.
- Incremental vs doubling: Incremental growth is O(n^2) total; doubling is O(n) total for n pushes.
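The gap between incremental and doubling growth becomes vivid if you count element copies directly. This small sketch (class name assumed) tallies the copies performed by n pushes under each policy, counting only copy work, not allocations:

```java
public class GrowthCost {
    // Total element copies for n pushes when capacity doubles on overflow.
    static long copiesDoubling(int n) {
        long copies = 0;
        int capacity = 1;
        for (int size = 1; size <= n; size++) {
            if (size > capacity) {      // full: double capacity, copy existing elements
                copies += capacity;
                capacity *= 2;
            }
        }
        return copies;
    }

    // Total element copies for n pushes when capacity grows by +1 on overflow.
    static long copiesIncremental(int n) {
        long copies = 0;
        int capacity = 1;
        for (int size = 1; size <= n; size++) {
            if (size > capacity) {      // full: grow by one, copy everything
                copies += capacity;
                capacity += 1;
            }
        }
        return copies;
    }

    public static void main(String[] args) {
        System.out.println(copiesDoubling(1024));    // 1023   -> O(n) total
        System.out.println(copiesIncremental(1024)); // 523776 -> O(n^2) total
    }
}
```

For 1024 pushes, doubling costs 1023 copies in total (1 + 2 + 4 + … + 512), while growing by one costs 523,776 copies (1 + 2 + … + 1023), which is the quadratic blow-up described above.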
Step-by-Step: How Push and Pop Work
Understanding the mechanics of push and pop makes stacks feel natural. In an array stack, the top index starts at -1 for an empty stack. Each push increments top and writes at that slot. Each pop reads at the current top and then decrements it. In a dynamic array stack, a push first checks whether top == capacity – 1; if so, it doubles capacity by allocating a new array and copying all elements (this is the rare expensive step), then proceeds to push in O(1). In a linked list stack, push creates a new node whose next pointer is the current head, and updates head to the new node; pop resets head to head.next and returns the value. All versions check for underflow before pop/top and may check for overflow in fixed arrays before push. These step-by-step rules guarantee LIFO order at all times. The sequence below shows the exact actions you would perform in code and what happens to memory and indices/pointers at each step so you can trace them confidently during debugging or interviews.
- Array Push: If top == capacity - 1, throw overflow; else set top = top + 1; write data[top] = value.
- Array Pop: If top == -1, throw underflow; else read value = data[top]; set top = top - 1; return value.
- Dynamic Array Push (when full): Allocate array of size 2 × capacity; copy 0..top; replace reference; then push normally (top++, write).
- Linked List Push: Node node = new Node(value, head); head = node; size++.
- Linked List Pop: If head == null, throw underflow; else value = head.val; head = head.next; size--; return value.
- Top/Peek: If empty, underflow; else return element at top (array) or head.val (linked list) without modifying structure.
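The array steps above can be traced directly with a raw array and a top index; this sketch prints the index after each operation so you can follow exactly what changes in memory:

```java
public class PushPopTrace {
    public static void main(String[] args) {
        int[] data = new int[4]; // fixed-capacity array stack
        int top = -1;            // -1 means empty

        // Array Push: check overflow, then increment top and write.
        for (int value : new int[]{10, 20, 30}) {
            if (top == data.length - 1) throw new IllegalStateException("Overflow");
            data[++top] = value;
            System.out.println("pushed " + value + ", top index = " + top);
        }

        // Array Pop: check underflow, read data[top], then decrement top.
        while (top != -1) {
            int value = data[top--];
            System.out.println("popped " + value + ", top index = " + top);
        }
    }
}
```

The trace shows 10, 20, 30 pushed with top moving 0 → 1 → 2, then 30, 20, 10 popped with top moving back 1 → 0 → -1: strict LIFO at every step.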
Common Applications of Stacks
Stacks provide a disciplined way to process nested and undoable activities, which is why they appear across compilers, interpreters, user interfaces, and algorithmic toolkits. Directly, they enable balancing of symbols such as parentheses, brackets, and braces in code; perform infix-to-postfix conversion and postfix evaluation of arithmetic expressions; implement function calls and recursion with activation records on a call stack; compute spans in stock market problems; represent page-visited history in browsers (Back button); manage undo/redo sequences in editors; and match tags in HTML/XML. Indirectly, stacks act as auxiliary structures in algorithms like tree traversals, and they are components inside other data structures, for example when simulating queues using two stacks. The common pattern is nested scope or reversible action: whenever you temporarily move forward and plan to retrace steps in the reverse order, a stack is the simplest and safest model. Because operations are O(1), they scale well, and because the LIFO order is strict, the logic stays easy to reason about. The bullet points below collect typical stack applications you are likely to implement or encounter, each mapping directly to the push/pop/top operations and their guarantees.
- Direct applications: Balancing of symbols; infix-to-postfix conversion; evaluation of postfix; function calls and recursion; span problems; browser back navigation; undo/redo; matching HTML/XML tags.
- Indirect applications: Helper structure in tree/graph traversals; component within other data structures; simulating queues with two stacks.
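As one concrete direct application, here is a sketch of postfix evaluation for space-separated integer tokens (the class name and tokenization scheme are assumptions for illustration, not a standard API). Operands are pushed; each operator pops its two operands and pushes the result:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class PostfixEvaluator {
    // Evaluate a space-separated postfix expression, e.g. "3 4 + 2 *".
    public static int evaluate(String expr) {
        Deque<Integer> stack = new ArrayDeque<>();
        for (String token : expr.trim().split("\\s+")) {
            switch (token) {
                case "+": case "-": case "*": case "/": {
                    int right = stack.pop(); // popped in reverse order (LIFO)
                    int left = stack.pop();
                    stack.push(apply(token, left, right));
                    break;
                }
                default:
                    stack.push(Integer.parseInt(token)); // operand
            }
        }
        return stack.pop(); // a well-formed expression leaves exactly one value
    }

    private static int apply(String op, int a, int b) {
        switch (op) {
            case "+": return a + b;
            case "-": return a - b;
            case "*": return a * b;
            default:  return a / b;
        }
    }

    public static void main(String[] args) {
        System.out.println(evaluate("3 4 + 2 *")); // (3 + 4) * 2 = 14
    }
}
```

Each token is processed once with O(1) stack work, so evaluation runs in O(n) time with O(n) worst-case stack space.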
Classic Interview Problems (With Time/Space)
Interview problems around stacks test whether you can model nested or reversible processes and leverage O(1) push/pop efficiently. A fundamental task is checking balanced symbols, where you scan an expression, push opening symbols, and pop/verify when you see closing symbols; this runs in O(n) with O(n) space in the worst case. Another common one is detecting whether a string is a palindrome using a stack by pushing the first half and comparing while scanning the second half; this is O(n) time and O(n) space. Reversing a stack using only push and pop can be done via recursion: pop all elements, then insert at bottom during the stack-unwinding phase; this costs O(n^2) time and O(n) stack space. You may also be asked to implement a queue using two stacks or a stack using two queues. More advanced variants include packing two stacks into one array by letting one grow from the left and the other from the right (both O(1) per op until they meet), and implementing three or m stacks in one array by shifting sections to make room (push can be O(n) when shifting). The steps and complexities below will help you articulate both correctness and performance in a crisp, structured way.
| Problem | Approach Summary | Time Complexity | Space Complexity |
|---|---|---|---|
| Balanced symbols | Push opens; on close, check matching top and pop; empty at end means balanced | O(n) | O(n) |
| Palindrome check | Push first half; compare second half with stack top; pop as you match | O(n) | O(n) |
| Reverse a stack with recursion | Pop all; on recursion unwind, insert each element at bottom | O(n^2) | O(n) |
| Queue with two stacks | in-stack for enqueue; out-stack for dequeue; transfer when out empty | Amortized O(1) per op | O(n) |
| Two stacks in one array | One grows left→right; other right→left; meet means full | O(1) per push/pop | O(1) extra |
| Three/m stacks in one array | Partition and shift middle/neighbor stacks when needed | Push can be O(n) | O(1) extra |
- Palindrome via stack (steps): Traverse until middle; push elements. If odd length, skip middle. Continue traversal, and for each element compare with stack top; if equal, pop and continue; else, not a palindrome. End with stack empty to succeed.
- Reverse stack (steps): Recursively pop until stack is empty. On return, insert popped element at bottom by popping all current elements, pushing target at bottom, then re-pushing those popped elements.
- Two stacks in one array (steps): Maintain leftTop starting at -1, rightTop starting at n. Push left: data[++leftTop] = x if leftTop + 1 != rightTop. Push right: data[--rightTop] = y if rightTop - 1 != leftTop. Pop from each side by moving its respective top pointer.
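The two-stacks-in-one-array steps can be sketched as follows (class name assumed): both stacks share one array, grow toward each other, and the structure is full only when the tops meet.

```java
public class TwoStacksInOneArray {
    private final int[] data;
    private int leftTop;   // top index of stack 1 (grows left -> right)
    private int rightTop;  // top index of stack 2 (grows right -> left)

    public TwoStacksInOneArray(int capacity) {
        data = new int[capacity];
        leftTop = -1;
        rightTop = capacity;
    }

    public void pushLeft(int x) {
        if (leftTop + 1 == rightTop) throw new IllegalStateException("Overflow: stacks met");
        data[++leftTop] = x;
    }

    public void pushRight(int x) {
        if (rightTop - 1 == leftTop) throw new IllegalStateException("Overflow: stacks met");
        data[--rightTop] = x;
    }

    public int popLeft() {
        if (leftTop == -1) throw new IllegalStateException("Underflow: left stack empty");
        return data[leftTop--];
    }

    public int popRight() {
        if (rightTop == data.length) throw new IllegalStateException("Underflow: right stack empty");
        return data[rightTop++];
    }
}
```

Every operation is O(1) with O(1) extra space, and neither stack wastes capacity the other could use, which is the whole point of sharing one array.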
Example: Balanced Brackets Using a Stack (Java)
Let’s combine the concepts into a practical example that checks whether an expression has balanced brackets using a stack. The algorithm scans the string left to right, pushing opening symbols like ‘(’, ‘[’, and ‘{’ onto the stack. When it encounters a closing symbol, it checks whether the stack is empty (underflow would mean an extra closing symbol), then compares the top of the stack for the matching opening counterpart; if it matches, pop and continue, otherwise the expression is unbalanced. At the end, if and only if the stack is empty, the expression is balanced. This algorithm maps directly to the LIFO nature of nested pairs: the most recent unmatched opening must be the first to match a closing. The solution below uses Java and a character stack implemented via java.util.ArrayDeque (which is backed by a resizable array); you can swap in any of the stack implementations shown earlier if you prefer. Following the code, the bulleted list explains the control flow step-by-step so you can trace sample inputs like “{[(a+b)*c]+d}” and see exactly how pushes and pops enforce correctness in linear time with clear, local reasoning at every character.
import java.util.ArrayDeque;
import java.util.Deque;
public class BracketBalancer {
private static boolean isOpen(char c) {
return c == '(' || c == '[' || c == '{';
}
private static boolean isMatch(char open, char close) {
return (open == '(' && close == ')') ||
(open == '[' && close == ']') ||
(open == '{' && close == '}');
}
public static boolean isBalanced(String s) {
Deque<Character> stack = new ArrayDeque<>();
for (char c : s.toCharArray()) {
if (isOpen(c)) {
stack.push(c); // push opening
} else if (c == ')' || c == ']' || c == '}') {
if (stack.isEmpty()) return false; // underflow
char top = stack.pop();
if (!isMatch(top, c)) return false;
}
}
return stack.isEmpty(); // balanced if no unmatched opens remain
}
public static void main(String[] args) {
System.out.println(isBalanced("{[(a+b)*c]+d}")); // true
System.out.println(isBalanced("([)]")); // false
System.out.println(isBalanced("(((())))")); // true
}
}
- Initialize an empty stack. This is our memory of “currently open and unmatched” symbols.
- For each character: if it is an opening symbol, push it to mark that we are expecting a corresponding closing symbol later.
- If it is a closing symbol, first check underflow: an empty stack means there is no prior opening to match, so return false.
- If not empty, pop the top opening symbol and test whether it pairs with the current closing symbol using a simple matching function.
- If the pair matches, proceed; if not, the nesting order is broken (e.g., “[)” ), so return false immediately.
- After scanning all characters, return true only if the stack is empty, meaning every opening symbol was matched and popped in LIFO order.
| Algorithm | Time Complexity | Space Complexity |
|---|---|---|
| Balanced brackets using a stack | O(n) | O(n) |
Summary and Next Steps
Stacks formalize the idea of “do the most recent thing first,” and that one rule, enforced by LIFO, makes them remarkably powerful. You learned the precise ADT definition and the behavior of push, pop, and top, including how underflow and overflow are handled in robust code. You saw three standard implementations—simple array, dynamic array with repeated doubling, and linked list—each with code, complexity tables, and trade-offs, plus a clear comparison including why doubling beats incremental resizing via amortized analysis. We also connected stacks to real software scenarios such as function calls, undo/redo systems, browser histories, and compilers, and walked through classic interview problems with step-by-step logic and complexity tables. From here, a great next step is to code each implementation yourself, add thorough unit tests covering edge cases (empty stack pops, full array pushes), and practice problems like converting infix to postfix and evaluating postfix expressions. As you solve more problems, watch for the LIFO pattern—recent work undone first—and reach for a stack with confidence. Mastery of stacks will also prepare you for queues, deques, recursion strategies, and traversals that repeatedly lean on this elegant one-end data structure.