DSA

Heavy-Light Decomposition in Java: Optimizing Tree Queries

Master Heavy-Light Decomposition (HLD) in Java. Learn how to decompose a tree into paths to perform efficient path queries and updates in O(log² N) time.

Sachin Sarawgi·April 21, 2026·3 min read
#dsa#java#tree#hld#heavy-light decomposition#interview preparation#algorithms

When dealing with large trees, performing path queries (like finding the maximum value on a path between two nodes) or path updates can be slow. A naive traversal takes $O(N)$ time per query, and while Segment Trees answer range queries on arrays efficiently, they don't directly apply to trees.

Heavy-Light Decomposition (HLD) is a powerful technique that partitions the edges of a tree into "heavy" and "light" edges, allowing us to decompose any path into $O(\log N)$ contiguous segments. Each segment maps to a contiguous range of an array, so a Segment Tree can answer each piece in $O(\log N)$, giving $O(\log^2 N)$ per path query overall.

The Core Concept: Heavy vs. Light Edges

For each non-leaf node, we identify its "heavy" child—the child with the largest subtree. The edge to this child is a heavy edge, and all other edges to children are light edges.

  1. Heavy Path: A maximal chain of consecutive heavy edges; every node lies on exactly one heavy path.
  2. Decomposition: Any root-to-node path crosses at most $O(\log N)$ light edges, because a light child's subtree is at most half the size of its parent's, so the subtree size halves at every light edge.
  3. Linearization: By visiting heavy children first in a DFS, we can map the tree nodes to a linear array where each heavy path is a contiguous segment.
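Before the full implementation, the heavy-child selection on its own can be sketched in a few lines. The tree shape and node numbering below are made-up for illustration:

```java
import java.util.*;

public class HeavyChildDemo {
    // Returns the heavy child of `root`: the child whose subtree
    // contains the most nodes (-1 if root is a leaf).
    public static int heavyChild(List<Integer>[] adj, int root, int parent) {
        int best = -1, bestSize = -1;
        for (int v : adj[root]) {
            if (v == parent) continue;
            int s = subtreeSize(adj, v, root);
            if (s > bestSize) { bestSize = s; best = v; }
        }
        return best;
    }

    // Counts the nodes in the subtree rooted at u (parent p excluded).
    public static int subtreeSize(List<Integer>[] adj, int u, int p) {
        int s = 1;
        for (int v : adj[u]) if (v != p) s += subtreeSize(adj, v, u);
        return s;
    }

    public static void main(String[] args) {
        // Tree rooted at 0:  0 - 1 - {3, 4},  0 - 2
        // Subtree of node 1 has 3 nodes, subtree of node 2 has 1 node.
        int n = 5;
        List<Integer>[] adj = new List[n];
        for (int i = 0; i < n; i++) adj[i] = new ArrayList<>();
        int[][] edges = {{0, 1}, {0, 2}, {1, 3}, {1, 4}};
        for (int[] e : edges) { adj[e[0]].add(e[1]); adj[e[1]].add(e[0]); }
        System.out.println(heavyChild(adj, 0, -1)); // prints 1: edge 0-1 is heavy
    }
}
```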

Heavy-Light Decomposition Implementation in Java

import java.util.*;

public class HLD {
    private int n, curPos;
    private List<Integer>[] adj;
    private int[] parent, depth, heavy, head, pos, size;

    public HLD(int n, List<Integer>[] adj) {
        this.n = n;
        this.adj = adj;
        this.parent = new int[n];
        this.depth = new int[n];
        this.heavy = new int[n];
        this.head = new int[n];
        this.pos = new int[n];
        this.size = new int[n];
        Arrays.fill(heavy, -1);
        
        dfsSize(0, -1, 0);
        dfsDecompose(0, 0);
    }

    private int dfsSize(int u, int p, int d) {
        size[u] = 1;
        parent[u] = p;
        depth[u] = d;
        int maxSubtreeSize = -1;
        for (int v : adj[u]) {
            if (v != p) {
                int subtreeSize = dfsSize(v, u, d + 1);
                size[u] += subtreeSize;
                if (subtreeSize > maxSubtreeSize) {
                    maxSubtreeSize = subtreeSize;
                    heavy[u] = v;
                }
            }
        }
        return size[u];
    }

    private void dfsDecompose(int u, int h) {
        head[u] = h;
        pos[u] = curPos++;
        // Visit the heavy child first so that each heavy path occupies
        // a contiguous block of positions in the linearized array.
        if (heavy[u] != -1) {
            dfsDecompose(heavy[u], h);
        }
        for (int v : adj[u]) {
            if (v != parent[u] && v != heavy[u]) {
                dfsDecompose(v, v); // each light child starts a new heavy path
            }
        }
    }

    public int queryPath(int u, int v) {
        int res = 0;
        // Jump chain by chain until both endpoints share a heavy path.
        while (head[u] != head[v]) {
            // Always lift the endpoint whose chain head is deeper.
            if (depth[head[u]] > depth[head[v]]) {
                int temp = u; u = v; v = temp;
            }
            // Query segment tree from pos[head[v]] to pos[v]
            // res = combine(res, segmentTree.query(pos[head[v]], pos[v]));
            v = parent[head[v]];
        }
        // u and v now lie on the same heavy path; query the final stretch.
        if (depth[u] > depth[v]) {
            int temp = u; u = v; v = temp;
        }
        // Query segment tree from pos[u] to pos[v]
        // res = combine(res, segmentTree.query(pos[u], pos[v]));
        return res;
    }
}
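The commented placeholders in queryPath assume a segment tree built over pos[]. To make that concrete, here is a self-contained sketch pairing the same decomposition with a max segment tree, so a path query returns the largest node value on the path. The class name HldPathMax, the value array, and the example tree are my own illustrative choices, not part of the original code:

```java
import java.util.*;

// Sketch: HLD linearization backed by a max segment tree, so
// queryPathMax(u, v) returns the largest node value on the u-v path.
public class HldPathMax {
    private final int n;
    private final List<Integer>[] adj;
    private final int[] parent, depth, heavy, head, pos, size;
    private final long[] seg; // 1-based max segment tree over pos[]
    private int curPos = 0;

    public HldPathMax(List<Integer>[] adj, long[] value) {
        this.n = adj.length;
        this.adj = adj;
        parent = new int[n]; depth = new int[n]; heavy = new int[n];
        head = new int[n]; pos = new int[n]; size = new int[n];
        Arrays.fill(heavy, -1);
        dfsSize(0, -1, 0);
        dfsDecompose(0, 0);
        seg = new long[4 * n];
        Arrays.fill(seg, Long.MIN_VALUE);
        for (int v = 0; v < n; v++) update(1, 0, n - 1, pos[v], value[v]);
    }

    private int dfsSize(int u, int p, int d) {
        size[u] = 1; parent[u] = p; depth[u] = d;
        int maxSub = -1;
        for (int v : adj[u]) if (v != p) {
            int s = dfsSize(v, u, d + 1);
            size[u] += s;
            if (s > maxSub) { maxSub = s; heavy[u] = v; }
        }
        return size[u];
    }

    private void dfsDecompose(int u, int h) {
        head[u] = h; pos[u] = curPos++;
        if (heavy[u] != -1) dfsDecompose(heavy[u], h); // heavy path stays contiguous
        for (int v : adj[u])
            if (v != parent[u] && v != heavy[u]) dfsDecompose(v, v);
    }

    private void update(int node, int lo, int hi, int i, long val) {
        if (lo == hi) { seg[node] = val; return; }
        int mid = (lo + hi) >>> 1;
        if (i <= mid) update(2 * node, lo, mid, i, val);
        else update(2 * node + 1, mid + 1, hi, i, val);
        seg[node] = Math.max(seg[2 * node], seg[2 * node + 1]);
    }

    private long query(int node, int lo, int hi, int l, int r) {
        if (r < lo || hi < l) return Long.MIN_VALUE;
        if (l <= lo && hi <= r) return seg[node];
        int mid = (lo + hi) >>> 1;
        return Math.max(query(2 * node, lo, mid, l, r),
                        query(2 * node + 1, mid + 1, hi, l, r));
    }

    public long queryPathMax(int u, int v) {
        long res = Long.MIN_VALUE;
        while (head[u] != head[v]) {
            if (depth[head[u]] > depth[head[v]]) { int t = u; u = v; v = t; }
            res = Math.max(res, query(1, 0, n - 1, pos[head[v]], pos[v]));
            v = parent[head[v]];
        }
        if (depth[u] > depth[v]) { int t = u; u = v; v = t; }
        res = Math.max(res, query(1, 0, n - 1, pos[u], pos[v]));
        return res;
    }
}
```

Each iteration of the while loop issues one segment-tree query over a contiguous chain segment; with $O(\log N)$ chain jumps at $O(\log N)$ per query, this is exactly the $O(\log^2 N)$ bound quoted above.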

Why use HLD?

| Feature | Heavy-Light Decomposition | Naive DFS/BFS |
| --- | --- | --- |
| Path query | $O(\log^2 N)$ (with Segment Tree) | $O(N)$ |
| Path update | $O(\log^2 N)$ (with Segment Tree) | $O(N)$ |
| Implementation complexity | High | Low |
| Best for | Dynamic path queries on large trees | Static or small trees |

Real-World Applications

  1. Network Routing: Finding the bottleneck capacity on a path in a hierarchical network.
  2. Game Development: Managing properties in complex scene graphs or transformation trees.
  3. Competitive Programming: A staple for advanced tree-related problems involving range queries.

Summary

Heavy-Light Decomposition is a sophisticated tool that bridges the gap between tree structures and linear range query data structures. By carefully partitioning the tree, it allows us to handle complex path operations with logarithmic efficiency. While the implementation is involved, the performance gains on large datasets are transformative.

Written by Sachin Sarawgi

Engineering Manager and backend engineer with 10+ years building distributed systems across fintech, enterprise SaaS, and startups. CodeSprintPro is where I write practical guides on system design, Java, Kafka, databases, AI infrastructure, and production reliability.
