Stacks are a basic topic in data structures. A data structure is a way to store data and to work with data. We build programs on these ideas. This article gives you a clean start. It uses very simple words. It keeps ideas clear. It also stays correct in a technical way. You will learn what a stack is. You will learn how it works. You will see words like push, pop, top, LIFO, overflow, and underflow. Each word has a short meaning and an easy example. You will see how a stack helps in real life tasks. You will learn the main operations. You will see how to build a stack using arrays and using linked lists. An array is a block of items in order. A linked list is a chain of nodes. You will see the speed of each operation. This speed measure is called time complexity. You will also see how much memory we need. This is called space complexity. We show these in tables. We also give code in Java. We explain the code step by step. We show common uses like balancing brackets, undo action, and browser back button. We also show tricks like two stacks in one array. Read the short “crust” section if you want only the key points. If you want deep learning, read each section in order, and follow the steps and examples.
Stack – Introduction
Crust of the Article (Short Summary)
A stack is an ordered list where we add and remove items only at one end. This end is called the top. The rule is Last-In, First-Out (LIFO). The last item you put in is the first item you take out. Putting an item on the top is called a push. Taking an item from the top is called a pop. Looking at the top item without taking it is called top or peek. When you try to pop on an empty stack, it is an underflow error. When you try to push on a full stack (in a fixed array), it is an overflow error. A stack can be made with a simple array, a dynamic array with doubling (grow the array when full), or a linked list. All basic operations run in constant time O(1). With dynamic arrays, push is O(1) on average. Uses include symbol balancing, infix to postfix, postfix evaluation, function calls and recursion, browser back history, undo in editors, HTML/XML tag match, and many more. Arrays are simple and fast but need resize at times. Linked lists grow and shrink smoothly but use extra space for links. You can even store two or three stacks in one array by growing them toward each other. You can reverse a stack by recursion. You can simulate a queue with two stacks and a stack with two queues. This is the core idea: one door to enter and leave, and the newest item gets served first.
What Is a Stack? Clear Meaning and Core Rule
A stack is a very simple data structure. A data structure is a format to store data and to do actions on that data. A stack keeps items in order. Here, order means the time order in which items arrive. A stack allows insertion and deletion at only one end. That end is named the top. Insertion means adding a new item. Deletion means removing an item. The rule of a stack is LIFO (Last In, First Out). This means the last inserted item is the first removed item. Another name is FILO (First In, Last Out). The act of adding an item to the top is called a push. The act of removing the top item is called a pop. If we try to pop when the stack has no items, that is an underflow. If we use a fixed size array and we try to push when it is full, that is an overflow. These are exceptions. An exception is an error state raised by an operation that cannot be done. Think of a stack like a pile of plates in a cafeteria. Clean plates go on the top. When you need a plate, you take the top one. The first plate you put at the bottom is used last. This picture is simple. It matches the exact rule of a stack and helps you remember it well.
Real-Life Analogy: Plates and Office Tasks
Think about a pile of plates. You add a plate to the top. You always take the top plate first. This is a stack. Now think about office work. A developer works on a long task. The manager gives a new urgent task. The developer puts the long task aside. This is like a push of the long task into a pending stack, and then a push of the urgent task on top. The phone rings. The call is highest priority. The developer pushes the current task into the pending tray. After the call ends, the developer pops the last task from the pending tray and continues. If another call comes, the same thing happens. Finally, when all urgent tasks are done, the developer pops the long task and works on it again. In simple words, the newest task gets done first. Old tasks wait under newer tasks. This is the LIFO rule in daily life. These analogies make the rule easy to feel. They also show why a stack is natural for nested work. One thing starts, then another thing interrupts, and we must return to the last thing we paused. The stack remembers that order for us and makes the return steps very simple and safe.
Stack ADT: Operations and Exact Meanings
An ADT is an Abstract Data Type. It means we define what operations we can do and what each operation means. We do not fix how to store the data inside at this stage. For a stack of integers, the main operations are: Push(int data) inserts data on the top. int Pop() removes and returns the last inserted item on the top. The helper operations are: int Top() returns the top item without removing it. int Size() returns how many items are in the stack. int IsEmptyStack() tells if the stack is empty. int IsFullStack() tells if a fixed array stack is full. Exceptions can happen. Calling pop or top when the stack is empty throws an exception. Pushing into a full fixed array stack throws an exception. These rules define correct use. They also make testing easy. A good mental model is simple: the top is the only door. Items enter and leave at this door. The stack size goes up by one on push. It goes down by one on pop. The top moves up and down as items come and go. This simple rule gives us strong, predictable behavior in programs.
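The ADT can also be written down as a small Java interface. This is only a sketch of the operations listed above; the name IntStack and the boolean return types for the empty and full checks are choices made for this example, not part of any standard library or of the ADT itself.
public interface IntStack {
    void push(int data);   // insert data on the top
    int pop();             // remove and return the top item; underflow if empty
    int top();             // read the top item without removing it; underflow if empty
    int size();            // how many items are in the stack
    boolean isEmpty();     // true when the stack holds no items
    boolean isFull();      // true when a fixed-capacity stack cannot accept a push
}
Any of the implementations later in this article could implement this interface without changing their behavior.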
Time and Space: Performance at a Glance
| Operation | Meaning | Typical Time Complexity | Typical Space Complexity |
|---|---|---|---|
| Push | Add item at the top | O(1) per operation (amortized O(1) with dynamic arrays) | O(1) extra per item; O(n) total after n pushes |
| Pop | Remove top item | O(1) | O(1) |
| Top/Peek | Read top item without removing | O(1) | O(1) |
| Size | Count items | O(1) | O(1) |
| IsEmpty | Check if zero items | O(1) | O(1) |
| IsFull | Check if fixed array is full | O(1) | O(1) |
Implementations: Array, Dynamic Array, and Linked List
There are three common ways to build a stack. A simple array stack uses a fixed size array. We add items from left to right. We keep a variable top that points to the index of the most recent item. Push increases top and writes the item. Pop reads the item at top and then decreases top. If top is -1, the stack is empty. If top is at the last index and we push, it is overflow. This design is very fast. Each basic action is O(1). But the maximum size must be fixed, and it cannot change. A dynamic array stack starts like a simple array but can grow. When the array is full, we create a new array of double size and copy items. This is called repeated doubling. Each single push is O(1) on average, because we pay the copy cost rarely. This “average over many operations” is called amortized O(1). A linked list stack stores nodes. Each node holds a value and a link to the next node. Push inserts at the head. Pop removes the head. There is no overflow as long as memory is available. It grows and shrinks smoothly. Each basic action is O(1). But each node needs extra memory for the link. Each operation also touches memory for the node and the link, which adds small overhead. All three methods are correct and fast. Pick the one that matches your limits and growth needs.
Simple Array Stack: Complexity Table
| Operation | Time Complexity | Space Complexity (after n pushes) |
|---|---|---|
| CreateStack | O(1) | O(n) capacity reserved |
| Push | O(1) | O(1) extra |
| Pop | O(1) | O(1) |
| Top | O(1) | O(1) |
| Size | O(1) | O(1) |
| IsEmpty | O(1) | O(1) |
| IsFull | O(1) | O(1) |
| DeleteStack | O(1) | Frees O(n) capacity |
Dynamic Array Stack: Complexity Table
| Operation | Time Complexity | Space Complexity (after n pushes) |
|---|---|---|
| CreateStack | O(1) | O(1) initial |
| Push | O(1) amortized | O(n) |
| Pop | O(1) | O(1) |
| Top | O(1) | O(1) |
| IsEmpty | O(1) | O(1) |
| IsFull | O(1) | O(1) |
| DeleteStack | O(1) | Frees O(n) |
Linked List Stack: Complexity Table
| Operation | Time Complexity | Space Complexity (after n pushes) |
|---|---|---|
| CreateStack | O(1) | O(1) |
| Push | O(1) | O(1) per node; O(n) total |
| Pop | O(1) | O(1) |
| Top | O(1) | O(1) |
| IsEmpty | O(1) | O(1) |
| DeleteStack | O(n) | Frees O(n) |
Amortized Analysis: Incremental vs Doubling Strategy
When we grow an array by one slot each time it becomes full, we must copy many items often. This is called the incremental strategy. It makes push very slow on average. After n pushes, total time is O(n²). The average per push is O(n). When we double the array size each time it becomes full, we copy items rarely. This is the doubling strategy. After n pushes, total time is O(n). The average per push is O(1). The word amortized means we take the total time for many operations and divide by the number of operations to get an average cost per operation. Doubling does the heavy work only a few times at sizes 1, 2, 4, 8, and so on. The number of doublings is about log n. The total number of copied items is near 2n. So the average cost is constant. This is why dynamic array stacks are fast in practice. But note that too many doublings in a tight memory system may cause a memory overflow exception. Pick sizes with care. In cases of strict memory limits, a linked list stack can be a safe choice.
| Growth Strategy | Total Time for n Pushes | Amortized Time per Push |
|---|---|---|
| Increment by 1 when full | O(n²) | O(n) |
| Double size when full | O(n) | O(1) |
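A small simulation makes this difference concrete. The sketch below only counts how many item copies each growth strategy performs over n pushes; the class and method names (GrowthCost, incrementalCopies, doublingCopies) are invented for this illustration and are not part of the stack code in this article.
public class GrowthCost {
    // Count item copies when the array grows by 1 slot each time it is full.
    static long incrementalCopies(int n) {
        long copies = 0;
        int capacity = 1, size = 0;
        for (int i = 0; i < n; i++) {
            if (size == capacity) { copies += size; capacity += 1; } // copy everything, add one slot
            size++;
        }
        return copies; // grows roughly like n * n / 2
    }

    // Count item copies when the array doubles each time it is full.
    static long doublingCopies(int n) {
        long copies = 0;
        int capacity = 1, size = 0;
        for (int i = 0; i < n; i++) {
            if (size == capacity) { copies += size; capacity *= 2; } // copy everything, double
            size++;
        }
        return copies; // stays below 2n
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        System.out.println("incremental: " + incrementalCopies(n)); // about 5 * 10^11 copies
        System.out.println("doubling:    " + doublingCopies(n));    // about 10^6 copies, under 2n
    }
}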
Java Code: Array-Based Stack (Fixed Capacity)
This version uses a fixed size array. It is simple and very fast. We use an integer top index to track the current top. When top == -1, the stack is empty. When top == capacity - 1, the stack is full. Push writes at ++top. Pop reads at top--. Top reads at top. We throw a runtime exception on underflow and overflow. An exception is a signal that the requested action cannot be done. This code is good when you know the maximum size in advance. It has O(1) time for all basic operations. It needs O(n) space for n items. It is a clean fit for many small tasks. Read the steps after the code to see how each method works in simple terms and in exact order. This helps you map the words to the lines. It also helps you debug when you build your own version.
public class ArrayStack {
    private final int[] a;   // fixed-size storage
    private int top;         // index of top element, -1 means empty

    public ArrayStack(int capacity) {
        if (capacity <= 0) throw new IllegalArgumentException("capacity must be > 0");
        this.a = new int[capacity];
        this.top = -1;
    }

    public void push(int data) {
        if (isFull()) throw new RuntimeException("Overflow: stack is full");
        a[++top] = data;          // move top up, then write
    }

    public int pop() {
        if (isEmpty()) throw new RuntimeException("Underflow: stack is empty");
        return a[top--];          // read the top, then move top down
    }

    public int top() {
        if (isEmpty()) throw new RuntimeException("Underflow: stack is empty");
        return a[top];            // read without removing
    }

    public int size() {
        return top + 1;
    }

    public boolean isEmpty() {
        return top == -1;
    }

    public boolean isFull() {
        return top == a.length - 1;
    }
}
Step-by-Step: How Each Method Works
- CreateStack: set top = -1. This marks the stack as empty.
- Push:
- Check if top is at last index. If yes, throw overflow exception.
- Increase top by 1.
- Write data at a[top].
- Pop:
- Check if top == -1. If yes, throw underflow exception.
- Read value at a[top].
- Decrease top by 1.
- Return the read value.
- Top/Peek:
- Check if top == -1. If yes, throw underflow exception.
- Return a[top] without changing top.
- Size: return top + 1.
- IsEmpty: return true if top == -1.
- IsFull: return true if top == a.length - 1.
Java Code: Dynamic Array Stack (Doubling)
This version grows when full. On push, if the array is full, we allocate a new array of double size. Then we copy old items to the new array. Then we write the new item. This makes single pushes fast on average. The average time per push is O(1). The total space after n pushes is O(n). We keep the same simple top logic. We do not shrink in this simple code. Shrink is optional and can cause extra copies. If memory is tight, you can add shrink on pop when usage is small. But be careful to keep amortized O(1) time. The steps below show how a push decides to resize. They also show how the copy happens. Read them to see the flow on a full push versus a normal push. This helps you reason about cost and behavior under load.
public class DynamicArrayStack {
    private int[] a;
    private int top;   // index of top element, -1 means empty

    public DynamicArrayStack() {
        this.a = new int[1]; // start small
        this.top = -1;
    }

    public void push(int data) {
        if (top == a.length - 1) {                  // full: double the capacity
            int[] b = new int[a.length * 2];
            System.arraycopy(a, 0, b, 0, a.length); // copy old items into the bigger array
            a = b;
        }
        a[++top] = data;
    }

    public int pop() {
        if (isEmpty()) throw new RuntimeException("Underflow: stack is empty");
        return a[top--];
    }

    public int top() {
        if (isEmpty()) throw new RuntimeException("Underflow: stack is empty");
        return a[top];
    }

    public int size() {
        return top + 1;
    }

    public boolean isEmpty() {
        return top == -1;
    }
}
Step-by-Step: Push with Doubling
- Check if top is at last index.
- If full:
- Make new array b with size 2 × old size.
- Copy all items from a to b.
- Set a = b.
- Increase top by 1.
- Write data to a[top].
- Return. The amortized cost is O(1). The copy cost is rare.
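Shrinking on pop, mentioned earlier, can look like the sketch below. The class name ShrinkingArrayStack is invented for this example, and the rule used here (halve the array only when it is one-quarter full) is a common choice that keeps push and pop at amortized O(1); shrinking as soon as the array is half full can cause repeated grow/shrink copies. This is an optional variant, not part of the DynamicArrayStack above.
public class ShrinkingArrayStack {
    private int[] a = new int[1];
    private int top = -1;            // index of top element, -1 means empty

    public void push(int data) {
        if (top == a.length - 1) resize(a.length * 2);   // grow when full
        a[++top] = data;
    }

    public int pop() {
        if (top == -1) throw new RuntimeException("Underflow: stack is empty");
        int val = a[top--];
        if (a.length > 1 && top + 1 <= a.length / 4) {
            resize(a.length / 2);                        // shrink only at one-quarter full
        }
        return val;
    }

    private void resize(int newCapacity) {
        int[] b = new int[newCapacity];
        System.arraycopy(a, 0, b, 0, top + 1);           // copy only the live items
        a = b;
    }
}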
Java Code: Linked List Stack
This version uses nodes. A node stores a value and a link to the next node. The stack keeps a pointer to the head node. Push creates a new node and links it in front of the head. Pop removes the head and moves head to head.next. Top reads head.data. This design does not need resizing. It grows and shrinks with the number of items. Each operation is O(1). Deleting the whole stack is O(n) if we free the nodes one by one; in Java, dropping the head reference lets the garbage collector reclaim them. The size method in this version walks the whole list, so it is O(n); keeping a counter field would make it O(1). The code below uses a small inner class Node. The steps after the code describe push and pop in simple order. This makes the idea of pointer changes clear and safe in your mind.
public class LinkedListStack {
    private static class Node {
        int data;
        Node next;
        Node(int d, Node n) { data = d; next = n; }
    }

    private Node head; // top of stack

    public LinkedListStack() {
        head = null;
    }

    public void push(int data) {
        head = new Node(data, head);   // new node points at the old head
    }

    public int pop() {
        if (isEmpty()) throw new RuntimeException("Underflow: stack is empty");
        int val = head.data;
        head = head.next;              // unlink the old head
        return val;
    }

    public int top() {
        if (isEmpty()) throw new RuntimeException("Underflow: stack is empty");
        return head.data;
    }

    public boolean isEmpty() {
        return head == null;
    }

    public int size() {                // O(n) walk; a counter field would make this O(1)
        int c = 0;
        for (Node cur = head; cur != null; cur = cur.next) c++;
        return c;
    }
}
Step-by-Step: Pointer Changes on Push/Pop
- Push:
- Create a new node with given data.
- Set newNode.next to current head.
- Set head to newNode.
- Pop:
- Check if head is null. If yes, underflow.
- Read value at head.data.
- Move head to head.next.
- Return the value.
- Top:
- Check if head is null. If yes, underflow.
- Return head.data.
Applications: Where Stacks Shine
Stacks appear in many tasks. A task is a job that a program must do. A stack is perfect for nested and reversed order work. It keeps track of the last thing you started. It lets you finish it first, then go back. This is common in math, in code, and in user apps. Below are direct and indirect uses. Read each point with care. See how the LIFO rule solves the order problem. Picture how push and pop manage the flow. This is how the plate and office analogies map to real code and tools you use every day.
- Balancing of symbols: Check if brackets like (), {}, [] match and are in the right order.
- Infix to postfix conversion: Convert normal math form (A + B) to postfix (A B +).
- Postfix evaluation: Compute value of postfix expressions with a stack of numbers (a small sketch follows this list).
- Function calls and recursion: The call stack tracks active functions and local data.
- Finding spans: Stock span problem uses a stack to find daily spans fast.
- Browser Back button: Pages visited form a stack. Back pops the last page.
- Undo in editors: Each edit pushes a state. Undo pops and restores the last state.
- Matching tags in HTML/XML: Open tags push, close tags must match top and pop.
- Auxiliary structure in algorithms: Help in tree traversals and many graph steps.
- As component of other structures: You can simulate queues and more with stacks.
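As an illustration of the postfix item above, here is a small sketch that evaluates postfix expressions made of single digits, such as "23+4*", which means (2 + 3) * 4. It uses the DynamicArrayStack from this article; the class name PostfixEval is invented here, and handling of multi-digit numbers, spaces, and bad input is left out to keep the sketch short.
public class PostfixEval {
    // Evaluate a postfix expression made of single digits and + - * /.
    // Example: "23+4*" evaluates to 20, because (2 + 3) * 4 = 20.
    public static int evaluate(String postfix) {
        DynamicArrayStack st = new DynamicArrayStack();
        for (int i = 0; i < postfix.length(); i++) {
            char c = postfix.charAt(i);
            if (c >= '0' && c <= '9') {
                st.push(c - '0');              // operand: push its numeric value
            } else {
                int right = st.pop();          // newest operand is the right side
                int left = st.pop();
                switch (c) {
                    case '+': st.push(left + right); break;
                    case '-': st.push(left - right); break;
                    case '*': st.push(left * right); break;
                    case '/': st.push(left / right); break;
                    default: throw new IllegalArgumentException("unknown symbol: " + c);
                }
            }
        }
        return st.pop();                       // the single remaining value is the answer
    }

    public static void main(String[] args) {
        System.out.println(evaluate("23+4*")); // prints 20
    }
}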
Algorithm Example: Check Balanced Symbols
This algorithm checks if an expression has balanced brackets. The word balanced means every open symbol has a matching close symbol and in the right order. We use a stack of characters. We push each open symbol. When we see a close symbol, we check the stack top. If it matches the right open symbol, we pop. If not, it is not balanced. At the end, if the stack is empty, it is balanced. If not empty, it is not balanced. The steps are clear and short. They follow the LIFO rule. The most recent open must close first. The time complexity is O(n). The space complexity is O(n) in the worst case, where n is the length of the string. This fits our simple model and shows how stacks solve nested structure checks in a clean way with small code.
public class Balancer {
    public static boolean isBalanced(String s) {
        DynamicArrayStack st = new DynamicArrayStack(); // stack of ints used to hold chars
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c == '(' || c == '{' || c == '[') {
                st.push(c);                             // open symbol: remember it
            } else if (c == ')' || c == '}' || c == ']') {
                if (st.isEmpty()) return false;         // close symbol with nothing open
                char open = (char) st.pop();
                if (!matches(open, c)) return false;    // wrong pair
            } // ignore other chars
        }
        return st.isEmpty();                            // leftovers mean unclosed symbols
    }

    private static boolean matches(char open, char close) {
        return (open == '(' && close == ')') ||
               (open == '{' && close == '}') ||
               (open == '[' && close == ']');
    }
}
Step-by-Step: Balanced Symbols
- Start with an empty stack.
- Scan the string from left to right.
- When you see an open bracket, push it.
- When you see a close bracket:
- If the stack is empty, return false.
- Pop the top open bracket.
- Check if open and close form a pair. If not, return false.
- After the scan, if the stack is empty, return true. Else, return false.
| Property | Value |
|---|---|
| Time Complexity | O(n) |
| Space Complexity | O(n) |
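A quick run of the checker, assuming the Balancer and DynamicArrayStack classes above are compiled together; the class name BalancerDemo is only for this example.
public class BalancerDemo {
    public static void main(String[] args) {
        System.out.println(Balancer.isBalanced("a[b{c(d)e}f]g")); // true: every open has a matching close
        System.out.println(Balancer.isBalanced("(]"));            // false: wrong pair
        System.out.println(Balancer.isBalanced("((("));           // false: unclosed opens remain
    }
}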
Reverse a Stack Using Only Push and Pop
We can reverse all items in a stack using recursion. Recursion means a function calls itself to solve smaller parts of the same task. The idea is simple. First, pop all items until the stack is empty. Then, while returning from each call, insert the item at the bottom of the stack. This requires a helper method that takes the top items off, puts the saved item at the bottom, then restores the items. This keeps using only push and pop. The time complexity is O(n²). This is because each insertion at bottom can take O(n), and we do it n times. The space complexity is O(n) for the recursion call stack. This method is a classic. It is a clean way to practice stack thinking and recursive flow control.
- Pop all elements using recursion until the stack is empty.
- On the way back up, insert each saved element at the bottom by moving items out and back.
- Stop when all saved elements are placed in bottom-first order.
| Property | Value |
|---|---|
| Time Complexity | O(n²) |
| Space Complexity | O(n) (recursion stack) |
Two Stacks in One Array
We can store two stacks inside one array. We do this to save space. The trick is to grow them toward each other. We start one stack from the left end. We start the other stack from the right end. The left stack pushes by moving right. The right stack pushes by moving left. An overflow happens only when both stacks meet. The time complexity of push and pop for both stacks is O(1). The space overhead is O(1). This is a neat pattern for tight memory spaces. It avoids unused gaps and uses the full array well. The steps below show the exact push behavior for each side. The same plan applies to pop and top. It is short, clear, and easy to code with two top indices in one shared array.
- Keep two indices: leftTop starts at -1; rightTop starts at n.
- Push left stack:
- Check if leftTop + 1 == rightTop. If yes, overflow.
- Increase leftTop and write the value.
- Push right stack:
- Check if rightTop - 1 == leftTop. If yes, overflow.
- Decrease rightTop and write the value.
- Pop left stack: return a[leftTop--] if not empty.
- Pop right stack: return a[rightTop++] if not empty.
| Operation | Time Complexity | Space Complexity |
|---|---|---|
| Push (either stack) | O(1) | O(1) |
| Pop (either stack) | O(1) | O(1) |
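A sketch of the shared-array idea in Java. The class and method names are choices made for this example; the important parts are the two top indices and the single collision check.
public class TwoStacksInOneArray {
    private final int[] a;
    private int leftTop;    // top index of the left stack
    private int rightTop;   // top index of the right stack

    public TwoStacksInOneArray(int capacity) {
        a = new int[capacity];
        leftTop = -1;              // left stack grows to the right
        rightTop = capacity;       // right stack grows to the left
    }

    public void pushLeft(int data) {
        if (leftTop + 1 == rightTop) throw new RuntimeException("Overflow: stacks have met");
        a[++leftTop] = data;
    }

    public void pushRight(int data) {
        if (rightTop - 1 == leftTop) throw new RuntimeException("Overflow: stacks have met");
        a[--rightTop] = data;
    }

    public int popLeft() {
        if (leftTop == -1) throw new RuntimeException("Underflow: left stack is empty");
        return a[leftTop--];
    }

    public int popRight() {
        if (rightTop == a.length) throw new RuntimeException("Underflow: right stack is empty");
        return a[rightTop++];
    }
}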
Three or More Stacks in One Array
We can also keep three stacks, or even m stacks, in one array. One approach for three stacks is to fix the left stack on the left, the right stack on the right, and let the middle stack start in the center. When a push would collide with the middle stack, we try to shift the middle stack up or down to make room. The time to push can become O(n) in the worst case due to shifts. A balanced approach for the middle stack is to push to alternating sides (left, right, left, right) to keep it centered. For m stacks, we can divide the array into m segments. Each stack grows inside its segment. If a stack is full, we can try to shift neighbors if there is room. The worst-case time remains O(n) for a push that needs many shifts. The space overhead is O(1) for indices and base arrays. These tricks are useful when memory must be one block, and we must fit many stacks in one place.
| Scenario | Push Time Complexity | Pop Time Complexity | Space Complexity |
|---|---|---|---|
| 3 stacks with shifting middle | O(n) worst-case (on shifts) | O(1) | O(1) extra |
| m stacks with segment shifts | O(n) worst-case (on shifts) | O(1) | O(1) extra |
Array vs Linked List: Pros and Cons
Picking an implementation is about trade-offs. A trade-off is a choice between pros and cons. An array stack has very low per-operation cost. Push and pop are O(1). It is cache-friendly and simple. But a fixed array can overflow. A dynamic array fixes that with doubling. It still has O(1) amortized pushes but sometimes must copy. A linked list stack grows and shrinks smoothly with no resizing and no overflow until memory is out. Each operation is O(1), but each node needs extra memory for links. It can be slower due to pointer use and non-contiguous memory. If your max size is known and small, a fixed array is great. If the size grows unknown and large, a dynamic array or a linked list is better. If you need frequent create and delete of many small stacks, linked lists can be simpler to manage.
- Array Implementation
- Advantages: constant-time operations, simple code, great locality, small overhead.
- Disadvantages: fixed capacity unless resizing, occasional expensive doubling step, possible memory overflow on too many doublings.
- Linked List Implementation
- Advantages: grows and shrinks gracefully, no overflow until memory ends, constant-time operations.
- Disadvantages: extra space for links, pointer overhead, less cache-friendly.
Example: Using a Stack
Here is a small example that shows how to use push, pop, and top. We will push three numbers. Then we read the top. Then we pop all numbers and print them. The output order will be reverse of the push order because of the LIFO rule. This example helps you see the live flow. It also helps you test your own stack. Try to change the capacity in the array stack and see overflow errors. Try to pop from an empty stack and see underflow errors. These simple checks ensure your stack is correct and safe in a larger program.
public class StackDemo {
    public static void main(String[] args) {
        ArrayStack st = new ArrayStack(5);
        st.push(10);
        st.push(20);
        st.push(30);
        System.out.println("Top: " + st.top()); // 30
        while (!st.isEmpty()) {
            System.out.println(st.pop());
        }
        // Output:
        // 30
        // 20
        // 10
    }
}
Step-by-Step: What Happens
- Push 10: top moves from -1 to 0; a[0] = 10.
- Push 20: top moves to 1; a[1] = 20.
- Push 30: top moves to 2; a[2] = 30.
- Top: reads a[2] = 30; top stays 2.
- Pop: returns a[2] = 30; top moves to 1.
- Pop: returns a[1] = 20; top moves to 0.
- Pop: returns a[0] = 10; top moves to -1 (empty).
Glossary of New Words
- Data structure: a way to store and manage data.
- Stack: an ordered list with one end called top for both insert and delete.
- Top: the active end of the stack where push and pop happen.
- Push: add an item to the top.
- Pop: remove and return the top item.
- Top/Peek: look at the top item without removing it.
- LIFO: Last-In, First-Out order rule.
- Underflow: error when popping or peeking an empty stack.
- Overflow: error when pushing onto a full fixed array stack.
- ADT (Abstract Data Type): a set of operations and meanings without fixing inner storage.
- Array: a fixed-size block of items in order, accessed by index.
- Dynamic array: an array that grows by allocating a larger array and copying items.
- Doubling: resizing by making capacity twice the old capacity.
- Linked list: a chain of nodes where each node links to the next node.
- Time complexity: a measure of how running time grows with input size.
- Space complexity: a measure of how memory use grows with input size.
- Amortized: average cost per operation over a sequence of operations.
- Recursion: a function that calls itself to solve smaller parts of a problem.