PostgreSQL JIT Tuning: Understanding the Cost
Starting in PostgreSQL 11 (and enabled by default from PostgreSQL 12 onward), PostgreSQL supports Just-In-Time (JIT) compilation. Using the LLVM framework, PostgreSQL can compile parts of a query — expression evaluation and tuple deforming in particular — into native machine code at execution time.
On paper, this sounds fantastic. Faster execution, optimized machine code, and better CPU utilization.
However, in the real world of high-throughput backend engineering, the default JIT settings are notorious for causing unexpected CPU spikes, query latency regressions, and outright production outages in OLTP (Online Transaction Processing) systems.
In this deep dive, we will explore how PostgreSQL JIT actually works under the hood, how to identify if it is silently killing your application's performance, and the exact configuration parameters needed to tune it.
How JIT Works in PostgreSQL
Traditionally, when PostgreSQL executes a query, it uses an interpreter to evaluate expressions (like WHERE age > 18 or price * tax_rate), decode tuples (rows), and evaluate aggregates. The interpreter is highly optimized, but it still incurs overhead for branching, function calls, and type checking for every single row.
When JIT is enabled, PostgreSQL hands the query execution plan over to the LLVM compiler. LLVM compiles those SQL expressions into highly optimized native machine instructions specific to your server's CPU architecture.
JIT-compiled code executes queries incredibly fast, but producing that code takes time up front. If a query processes 50 million rows, spending 50 milliseconds compiling it via LLVM is a brilliant trade-off, because you might save 2 seconds of execution time. This is the definition of an analytical (OLAP) workload.
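You can see both sides of this trade-off on any test server by forcing JIT on and off for the same large scan. The sketch below builds a throwaway 5-million-row table with generate_series; the table and column names are arbitrary, and the timings will vary by machine:

```sql
-- Build a disposable 5-million-row table
CREATE TEMP TABLE big AS
SELECT g AS id, (g % 100)::numeric AS price
FROM generate_series(1, 5000000) AS g;

-- Force JIT for everything in this session, regardless of estimated cost
SET jit = on;
SET jit_above_cost = 0;
EXPLAIN ANALYZE SELECT sum(price * 1.07) FROM big;

-- Same query, interpreter only
SET jit = off;
EXPLAIN ANALYZE SELECT sum(price * 1.07) FROM big;
```

Comparing the two Execution Time lines (and the JIT block in the first plan) shows exactly how much compilation buys you on a scan this size.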
The OLTP Trap
The problem arises in OLTP systems. An OLTP workload typically consists of thousands of small, fast queries per second (e.g., SELECT * FROM users WHERE id = 123).
These queries are designed to execute in less than 1ms.
If PostgreSQL decides to JIT-compile one of these small queries (or a slightly more complex join that only returns 50 rows), the LLVM compiler might take 30ms to compile the query, only to save 0.1ms of execution time.
You have just increased your query latency by 3000%, because the planner's cost model does not account for compilation time at all — it only checks whether the estimated cost crosses the JIT threshold.
Worse, LLVM compilation is extremely CPU intensive. If your database suddenly receives a spike of 500 concurrent complex queries and decides to JIT-compile all of them simultaneously, your PostgreSQL CPU utilization will hit 100%, causing a cascading failure across your entire microservice architecture.
Identifying JIT Overhead
How do you know if JIT is hurting your performance? The answer lies in the output of EXPLAIN ANALYZE.
If you run EXPLAIN ANALYZE on a slow query, scroll to the very bottom of the output. If JIT was triggered, you will see a block that looks like this:
JIT:
Functions: 14
Options: Inlining true, Optimization true, Expressions true, Deforming true
Timing: Generation 1.250 ms, Inlining 10.430 ms, Optimization 24.120 ms, Emission 18.500 ms, Total 54.300 ms
Execution Time: 58.120 ms
Look closely at those numbers. The total query execution time was 58.120 ms. However, the JIT Compilation Total was 54.300 ms.
93% of the query latency was spent compiling the query, not executing it! If JIT had been disabled, this query would likely have finished in ~5ms using the standard interpreter.
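A quick way to confirm the diagnosis is to disable JIT for your session only and re-run the same query. The query below is a placeholder; substitute your own slow statement:

```sql
-- Disable JIT for the current session only (no server restart needed)
SET jit = off;

-- Re-run the suspect query; the JIT block should vanish from the plan
-- ("orders" and "created_at" are placeholder names)
EXPLAIN ANALYZE
SELECT count(*) FROM orders WHERE created_at > now() - interval '1 day';

-- Restore the server default when done
RESET jit;
```

If the Execution Time drops by roughly the JIT Total you saw earlier, you have found your culprit.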
Tuning JIT for Production
If you are running a hybrid database or a pure OLTP database, relying on the PostgreSQL defaults is dangerous. Here is how you tune JIT to work for you, rather than against you.
The core settings revolve around Cost. PostgreSQL uses a cost-based optimizer to decide execution plans. You can tell PostgreSQL to only trigger JIT if the estimated cost of the query exceeds a certain threshold.
1. The Global Kill Switch (Pure OLTP)
If your database is entirely backing a web application, microservices, or an API, and you do not run massive analytical reports on this specific database instance, the safest and most performant option is to turn JIT completely off.
# In postgresql.conf
jit = off
For 90% of web backend databases, this is the correct setting.
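If you prefer not to edit postgresql.conf by hand, the same setting can be applied from a superuser session; jit is a reloadable parameter, so no restart is required. The database name below is a placeholder:

```sql
-- Persist the setting cluster-wide (writes postgresql.auto.conf)
ALTER SYSTEM SET jit = off;
SELECT pg_reload_conf();  -- picks up the change without a restart

-- Or scope it to a single OLTP database ("app" is a placeholder name)
ALTER DATABASE app SET jit = off;
```

The per-database variant is handy on clusters that host both an OLTP application database and a separate reporting database.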
2. Tuning the Thresholds (Hybrid Workloads)
If you have a hybrid database where you do occasionally run massive analytical queries that scan millions of rows, you should leave jit = on, but drastically increase the cost thresholds.
The default configuration looks like this:
jit_above_cost = 100000            # JIT is triggered if estimated cost > 100k
jit_inline_above_cost = 500000     # function inlining if estimated cost > 500k
jit_optimize_above_cost = 500000   # expensive LLVM optimization if estimated cost > 500k
For modern hardware, 100000 is far too low. A moderately complex join on an OLTP table can easily hit a cost of 150,000, triggering a catastrophic compilation penalty.
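You can check where any given query falls relative to these thresholds without running it: plain EXPLAIN (without ANALYZE) prints the planner's cost estimate. The table names here are placeholders:

```sql
-- The top plan node's "cost=..startup..total" upper bound is what gets
-- compared against jit_above_cost ("orders"/"order_items" are placeholders)
EXPLAIN
SELECT *
FROM orders o
JOIN order_items i ON i.order_id = o.id;
```

If the total cost on the first line regularly lands above your jit_above_cost, those queries will be compiled on every execution.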
A much safer configuration for a hybrid system:
# In postgresql.conf
jit = on
jit_above_cost = 500000              # only JIT for very large queries
jit_inline_above_cost = 5000000      # only inline for massive queries
jit_optimize_above_cost = 5000000    # only run expensive optimization for massive queries
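After reloading the configuration, it is worth verifying what the running server actually sees:

```sql
SELECT name, setting
FROM pg_settings
WHERE name IN ('jit', 'jit_above_cost',
               'jit_inline_above_cost', 'jit_optimize_above_cost');
```

This catches the classic mistake of editing the wrong postgresql.conf or forgetting the reload.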
Remember that the PostgreSQL optimizer's cost is an estimate based on table statistics. If your pg_statistic data is stale (e.g., autoanalyze hasn't run recently), the optimizer may wildly miscalculate the cost, inadvertently triggering JIT for a tiny query. Always ensure your statistics are up to date.
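Refreshing statistics is a one-line operation; run it after bulk loads or whenever autoanalyze is lagging (the table name is a placeholder):

```sql
-- Recompute planner statistics for one table ("orders" is a placeholder)
ANALYZE orders;

-- Or refresh the entire current database (can take a while on large clusters)
ANALYZE;
```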
Summary
PostgreSQL's addition of LLVM JIT compilation is an engineering marvel that drastically speeds up analytical data crunching. However, in the context of backend engineering and OLTP databases, it is a loaded gun.
Monitor your EXPLAIN ANALYZE outputs, watch your CPU spikes, and don't be afraid to aggressively increase jit_above_cost—or turn jit off entirely—to protect your application's p99 latency.
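For ongoing monitoring rather than one-off EXPLAIN runs, PostgreSQL 15 and later expose aggregate JIT counters in pg_stat_statements (the extension must be installed and listed in shared_preload_libraries). A sketch for finding the statements that burn the most time inside LLVM:

```sql
-- Statements spending the most cumulative time in JIT compilation
-- (PostgreSQL 15+, requires the pg_stat_statements extension)
SELECT query,
       jit_generation_time + jit_inlining_time
         + jit_optimization_time + jit_emission_time AS jit_ms,
       total_exec_time
FROM pg_stat_statements
ORDER BY jit_ms DESC
LIMIT 10;
```

If jit_ms is a large fraction of total_exec_time for your hot queries, your thresholds are still too low.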
