When dealing with large trees, performing path queries (like finding the maximum value on a path between two nodes) or path updates can be slow. A naive traversal takes $O(N)$ per operation, and while Segment Trees handle range queries on arrays efficiently, they don't directly apply to paths in trees.
Heavy-Light Decomposition (HLD) is a powerful technique that partitions the edges of a tree into "heavy" and "light" edges, allowing us to decompose any path into $O(\log N)$ contiguous segments.
## The Core Concept: Heavy vs. Light Edges
For each non-leaf node, we identify its "heavy" child—the child with the largest subtree. The edge to this child is a heavy edge, and all other edges to children are light edges.
- Heavy Path: A maximal chain of heavy edges. Every node belongs to exactly one heavy path (possibly a path of just itself).
- Decomposition: Any path from the root to a leaf contains at most $O(\log N)$ light edges, because stepping down a light edge at least halves the size of the remaining subtree.
- Linearization: By visiting heavy children first in a DFS, we can map the tree nodes to a linear array where each heavy path is a contiguous segment, so a Segment Tree over that array can answer path queries.
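To make the linearization concrete, here is a minimal, self-contained sketch (separate from the full `HLD` class below, with illustrative names) that computes subtree sizes, picks each node's heavy child, and runs a heavy-first DFS on a small hardcoded tree:

```java
import java.util.*;

// Minimal sketch: heavy-child selection and heavy-first linearization
// on a fixed 7-node tree. Names here are illustrative, not a fixed API.
public class HeavyFirstDemo {
    static List<List<Integer>> adj = new ArrayList<>();
    static int[] size, heavy, pos;
    static int curPos = 0;

    // Compute subtree sizes; heavy[u] becomes u's largest-subtree child.
    static int dfsSize(int u, int p) {
        size[u] = 1;
        int best = -1;
        for (int v : adj.get(u)) {
            if (v == p) continue;
            int s = dfsSize(v, u);
            size[u] += s;
            if (s > best) { best = s; heavy[u] = v; }
        }
        return size[u];
    }

    // Assign positions, descending into the heavy child first so each
    // heavy path occupies a contiguous block of positions.
    static void dfsOrder(int u, int p) {
        pos[u] = curPos++;
        if (heavy[u] != -1) dfsOrder(heavy[u], u);
        for (int v : adj.get(u))
            if (v != p && v != heavy[u]) dfsOrder(v, u);
    }

    public static void main(String[] args) {
        int n = 7;
        for (int i = 0; i < n; i++) adj.add(new ArrayList<>());
        // Root 0 has children 1 and 2; node 1 has children 3, 4, 5;
        // node 2 has child 6.
        int[][] edges = {{0,1},{0,2},{1,3},{1,4},{1,5},{2,6}};
        for (int[] e : edges) { adj.get(e[0]).add(e[1]); adj.get(e[1]).add(e[0]); }
        size = new int[n]; heavy = new int[n]; pos = new int[n];
        Arrays.fill(heavy, -1);
        dfsSize(0, -1);
        dfsOrder(0, -1);
        // Node 1's subtree (size 4) beats node 2's (size 2), so edge 0-1 is heavy.
        System.out.println("heavy child of 0: " + heavy[0]);  // 1
        System.out.println("positions: " + Arrays.toString(pos));
    }
}
```

The printed positions show the heavy path 0 → 1 → 3 occupying positions 0, 1, 2, while each light child (4, 5, 2) starts a new contiguous block.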
## Heavy-Light Decomposition Implementation in Java
```java
import java.util.*;

public class HLD {
    private int n, curPos;
    private List<Integer>[] adj;
    private int[] parent, depth, heavy, head, pos, size;

    public HLD(int n, List<Integer>[] adj) {
        this.n = n;
        this.adj = adj;
        this.parent = new int[n];
        this.depth = new int[n];
        this.heavy = new int[n];   // heavy[u] = heavy child of u, or -1
        this.head = new int[n];    // head[u] = top node of u's heavy path
        this.pos = new int[n];     // pos[u] = index of u in the linearized array
        this.size = new int[n];
        Arrays.fill(heavy, -1);
        dfsSize(0, -1, 0);
        dfsDecompose(0, 0);
    }

    // First pass: compute subtree sizes, parents, and depths, and pick
    // each node's heavy child (the child with the largest subtree).
    private int dfsSize(int u, int p, int d) {
        size[u] = 1;
        parent[u] = p;
        depth[u] = d;
        int maxSubtreeSize = -1;
        for (int v : adj[u]) {
            if (v != p) {
                int subtreeSize = dfsSize(v, u, d + 1);
                size[u] += subtreeSize;
                if (subtreeSize > maxSubtreeSize) {
                    maxSubtreeSize = subtreeSize;
                    heavy[u] = v;
                }
            }
        }
        return size[u];
    }

    // Second pass: assign positions, visiting the heavy child first so each
    // heavy path is contiguous; each light child heads a new path.
    private void dfsDecompose(int u, int h) {
        head[u] = h;
        pos[u] = curPos++;
        if (heavy[u] != -1) {
            dfsDecompose(heavy[u], h);
        }
        for (int v : adj[u]) {
            if (v != parent[u] && v != heavy[u]) {
                dfsDecompose(v, v);
            }
        }
    }

    // Walk u and v up toward a common heavy path, querying one contiguous
    // segment per iteration; the loop runs at most O(log N) times.
    public int queryPath(int u, int v) {
        int res = 0;
        while (head[u] != head[v]) {
            if (depth[head[u]] > depth[head[v]]) {
                int temp = u; u = v; v = temp;  // keep v on the deeper chain
            }
            // Query segment tree from pos[head[v]] to pos[v]
            // res = combine(res, segmentTree.query(pos[head[v]], pos[v]));
            v = parent[head[v]];
        }
        if (depth[u] > depth[v]) {
            int temp = u; u = v; v = temp;
        }
        // Query segment tree from pos[u] to pos[v]
        // res = combine(res, segmentTree.query(pos[u], pos[v]));
        return res;
    }
}
```
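The `queryPath` method above leaves the range structure abstract. One possible backing structure, sketched here with illustrative names (`MaxSegmentTree`, `update`, `query` are assumptions, not a fixed API), is an iterative segment tree supporting point updates and inclusive-range maximum queries over the linearized positions:

```java
// Minimal sketch of a range structure that could back queryPath:
// an iterative max segment tree with inclusive query bounds, matching
// the segmentTree.query(l, r) calls sketched in the comments above.
public class MaxSegmentTree {
    private final int n;
    private final int[] tree;  // tree[i + n] holds the value at position i

    public MaxSegmentTree(int n) {
        this.n = n;
        this.tree = new int[2 * n];
    }

    // Set the value at linearized position i (e.g. pos[u] for node u),
    // then recompute the maxima on the path to the root.
    public void update(int i, int value) {
        i += n;
        tree[i] = value;
        for (i /= 2; i >= 1; i /= 2)
            tree[i] = Math.max(tree[2 * i], tree[2 * i + 1]);
    }

    // Maximum over the inclusive range [l, r].
    public int query(int l, int r) {
        int res = Integer.MIN_VALUE;
        for (l += n, r += n + 1; l < r; l /= 2, r /= 2) {
            if ((l & 1) == 1) res = Math.max(res, tree[l++]);
            if ((r & 1) == 1) res = Math.max(res, tree[--r]);
        }
        return res;
    }

    public static void main(String[] args) {
        MaxSegmentTree st = new MaxSegmentTree(7);
        int[] values = {5, 1, 4, 9, 2, 8, 3};
        for (int i = 0; i < values.length; i++) st.update(i, values[i]);
        System.out.println(st.query(1, 4));  // max of {1, 4, 9, 2} -> 9
        System.out.println(st.query(0, 6));  // max over all positions -> 9
    }
}
```

For path maximum queries, `combine` in `queryPath` would simply be `Math.max`, and `res` should start from `Integer.MIN_VALUE` rather than 0 if values can be negative.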
## Why use HLD?
| Feature | Heavy-Light Decomposition | Naive DFS/BFS |
|---|---|---|
| Path Query | $O(\log^2 N)$ (with Segment Tree) | $O(N)$ |
| Path Update | $O(\log^2 N)$ (with Segment Tree) | $O(N)$ |
| Implementation Complexity | High | Low |
| Best For | Dynamic path queries on large trees | Static or small trees |
## Real-World Applications
- Network Routing: Finding the bottleneck capacity on a path in a hierarchical network.
- Game Development: Managing properties in complex scene graphs or transformation trees.
- Competitive Programming: A staple for advanced tree-related problems involving range queries.
## Summary
Heavy-Light Decomposition is a sophisticated tool that bridges the gap between tree structures and linear range query data structures. By carefully partitioning the tree, it allows us to handle complex path operations with logarithmic efficiency. While the implementation is involved, the performance gains on large datasets are transformative.
