JIT Compilation

Java is often misunderstood as an “interpreted” language. While this is true for the first few milliseconds of execution, the modern HotSpot JVM uses a sophisticated Just-In-Time (JIT) compiler to turn frequently executed bytecode into highly optimized native machine code that often rivals C++ or Rust.

1. The Execution Lifecycle: Tiered Compilation

When you run a Java application, the JVM doesn’t compile everything at once. Instead, it uses a lazy approach called Tiered Compilation:

  1. Interpreter (Level 0): The JVM starts by interpreting bytecode instruction by instruction. This is slow but allows near-instant startup.
  2. C1 Compiler (Levels 1-3): As a method becomes “warm” (called frequently), the C1 compiler compiles it to native code with simple optimizations. This provides a substantial speedup, often cited as around 10x over interpretation.
  3. C2 Compiler (Level 4): If a method becomes “hot” (called thousands of times), the C2 compiler (Server Compiler) kicks in. It performs aggressive, expensive optimizations based on runtime profiling data.
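You can watch this lifecycle happen with the HotSpot flag `-XX:+PrintCompilation`, which logs each compilation event. A minimal sketch (the class name `TieredDemo` and the loop count are illustrative choices, not from the original):

```java
// TieredDemo.java -- run with: java -XX:+PrintCompilation TieredDemo
// In the log, compute() first runs interpreted, gets compiled by C1 at a
// lower tier after a few thousand calls, then recompiled by C2 once hot.
public class TieredDemo {
    static int compute(int x) {
        return x * 2 + 1;
    }

    public static void main(String[] args) {
        long sum = 0;
        // The default C2 invocation threshold is on the order of 10,000
        // calls, so a million iterations is more than enough to reach Level 4.
        for (int i = 0; i < 1_000_000; i++) {
            sum += compute(i);
        }
        System.out.println(sum); // keep the result alive so the loop isn't optimized away
    }
}
```

The exact tier transitions and thresholds vary by JVM version and flags; the log lines (not the program's output) are what show the tiers.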

[!NOTE] Why not compile everything at start? AOT (Ahead-of-Time) compilation lacks runtime information. The JIT knows exactly which if branches are taken and which implementations of an interface are actually used, allowing it to generate better code than a static compiler in many cases.

2. Key Optimization Techniques

Method Inlining

The single most important optimization. The JIT replaces a method call with the body of the called method, removing the overhead of the call stack (pushing/popping frames) and enabling further optimizations.

Before Inlining:

public int add(int a, int b) {
    return a + b;
}

public int calculate() {
    return add(5, 10); // Method call overhead
}

After Inlining (Internal Representation):

public int calculate() {
    return 5 + 10; // Becomes constant 15
}

Escape Analysis

The JIT analyzes the scope of a new object. If an object is allocated inside a method and never escapes (i.e., is not returned or stored in a global field), the JIT can perform Scalar Replacement. It “explodes” the object into its primitive fields and keeps them in CPU registers or on the stack, avoiding the heap allocation (and the eventual garbage collection) for that object entirely.
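A minimal sketch of an allocation that qualifies (the `Point` and `distSquared` names are illustrative): the object is created, read, and discarded entirely within one method, so once the method is hot, escape analysis (enabled by default in HotSpot; it can be disabled with `-XX:-DoEscapeAnalysis` for comparison) can replace it with two plain locals.

```java
// EscapeDemo.java -- a candidate for scalar replacement.
public class EscapeDemo {
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    static int distSquared(int x, int y) {
        Point p = new Point(x, y);      // allocation never escapes this method
        return p.x * p.x + p.y * p.y;   // only the two fields are ever read
    }

    public static void main(String[] args) {
        long total = 0;
        for (int i = 0; i < 5_000_000; i++) {     // hot enough for C2
            total += distSquared(i & 7, i & 3);
        }
        System.out.println(total);
    }
}
```

One observable effect: with scalar replacement active, a loop like this produces essentially no allocation pressure, which you can verify with a GC log (`-Xlog:gc`).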

Dead Code Elimination

After inlining, if the JIT realizes a variable is never used or a branch is never taken (based on profiling), it simply deletes that code.
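This is also why naive microbenchmarks mislead: if a loop computes a value that nothing ever reads, the JIT is free to delete the computation. A hypothetical illustration (the `expensive` method and timings setup are assumptions for the sketch, not from the original):

```java
// DeadCodeDemo.java -- unused results are eligible for elimination.
public class DeadCodeDemo {
    static int expensive(int x) {
        int r = 0;
        for (int i = 0; i < 100; i++) r += (x + i) * 31;
        return r;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        for (int i = 0; i < 1_000_000; i++) {
            int ignored = expensive(i);   // result never used: may be deleted
        }
        System.out.println("unused result: "
                + (System.nanoTime() - start) / 1_000_000 + " ms");

        long sink = 0;
        start = System.nanoTime();
        for (int i = 0; i < 1_000_000; i++) {
            sink += expensive(i);         // result consumed: must be computed
        }
        System.out.println("used result:   "
                + (System.nanoTime() - start) / 1_000_000 + " ms (sink=" + sink + ")");
    }
}
```

The first loop can time dramatically faster than the second once the JIT kicks in, because after inlining, the JIT sees that `ignored` is dead and removes the work. Benchmark harnesses like JMH exist largely to defeat this.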

3. Deoptimization: The Safety Net

What happens if the JIT makes an aggressive assumption (e.g., “This interface only has one implementation”) and then you load a new class that breaks that assumption?

The JVM performs a Deoptimization (“Deopt”). It throws away the optimized machine code and falls back to the Interpreter. This allows the JIT to be speculative without being unsafe.
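A sketch of the single-implementation scenario described above (the `Shape`/`Circle`/`Square` names are illustrative). While only `Circle` is ever observed at the call site, C2 can speculatively devirtualize and inline `Circle.area()`; the first `Square` invalidates that assumption.

```java
// DeoptDemo.java -- run with: java -XX:+PrintCompilation DeoptDemo
// Look for "made not entrant" entries in the log: that is the optimized
// code for total() being discarded after the speculation fails.
public class DeoptDemo {
    interface Shape { double area(); }

    static final class Circle implements Shape {
        public double area() { return Math.PI * 4; }
    }
    static final class Square implements Shape {
        public double area() { return 4.0; }
    }

    static double total(Shape s, int n) {
        double sum = 0;
        for (int i = 0; i < n; i++) sum += s.area();  // monomorphic so far
        return sum;
    }

    public static void main(String[] args) {
        total(new Circle(), 1_000_000);  // warm up: C2 speculates on Circle
        total(new Square(), 1_000_000);  // breaks the assumption -> deopt
        System.out.println("done");
    }
}
```

After the deopt, the method is profiled again and recompiled with a less aggressive (bimorphic or virtual) dispatch at that call site.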


4. Interactive: JIT Visualizer

[Interactive widget: visualizes a `compute(int x)` method transitioning from cold interpretation, through C1 (simple), to hot C2 (optimized) compilation as its execution count increases. Available in the original interactive version of this page.]

5. Summary

  • Tiered Compilation balances fast startup (Interpreter) with peak performance (C2).
  • Inlining reduces function call overhead and enables other optimizations.
  • Deoptimization allows the JVM to speculate aggressively and recover safely.