The talk reveals how the Just-In-Time compiler (specifically the JIT C2 from HotSpot/OpenJDK) internally manages runtime optimizations for hot methods, in comparison to the ahead-of-time approach taken by LLVM Clang on similar C++ source code, emphasizing the internals and strategies each compiler uses to achieve better performance. Each optimization is illustrated with similar Java and C++ source code and the corresponding generated assembly, to show what really happens under the hood. Every test is covered by a dedicated per-language benchmark and conclusions.
- Different sequential sums (e.g. N elements array, N integers, two arrays, etc.)
- Loop unrolling, loop peeling
- Fields object layout
- Null checks
- Uncommon traps
- Lock coarsening
- Lock elision
- Virtual calls
- Scalar replacement
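As a taste of one of the topics above, here is a hedged sketch (not taken from the talk's own material) of the code shape where HotSpot's C2 can apply lock coarsening: two adjacent synchronized blocks on the same monitor may be merged into a single lock acquisition at runtime, reducing lock/unlock overhead. The class and method names are illustrative only.

```java
// Illustrative sketch of a lock-coarsening candidate: two back-to-back
// synchronized blocks on the same monitor ('this'). At runtime, C2 may
// merge them into one lock acquisition; the observable behavior is unchanged.
public class LockCoarsening {
    private int counter;

    public int incrementTwice() {
        synchronized (this) {
            counter++;
        }
        synchronized (this) { // candidate for coarsening with the block above
            counter++;
        }
        return counter;
    }

    public static void main(String[] args) {
        LockCoarsening lc = new LockCoarsening();
        System.out.println(lc.incrementTwice()); // prints 2
    }
}
```

Whether coarsening actually fires depends on the JVM version and flags; tools like JITWatch can confirm it by inspecting the compiled code.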
The tools used during our research study are: JITWatch, the Java Microbenchmark Harness (JMH), the C++ Google Benchmark library, and perf. All test scenarios are run against the latest official Java release (e.g. 9.0.1) and a recent LLVM Clang version (e.g. 5.0.0).
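To give a rough flavor of the "sequential sum over an N-element array" scenario, the sketch below uses a naive warm-up-then-time loop. This is only an illustration under my own assumptions; the actual measurements use JMH and Google Benchmark, which handle warmup, forking, and statistics properly.

```java
// Naive timing sketch of a sequential array sum. The hot loop in sum() is
// exactly the kind of code where the JIT may apply loop unrolling.
public class SequentialSum {
    static long sum(int[] data) {
        long total = 0;
        for (int value : data) {
            total += value; // hot loop: a candidate for unrolling
        }
        return total;
    }

    public static void main(String[] args) {
        int[] data = new int[100_000];
        for (int i = 0; i < data.length; i++) data[i] = i;

        // Crude warmup so the JIT compiles sum() before we time it.
        for (int i = 0; i < 1_000; i++) sum(data);

        long start = System.nanoTime();
        long result = sum(data);
        long elapsed = System.nanoTime() - start;
        System.out.println("sum=" + result + " in " + elapsed + " ns");
    }
}
```

A single hand-rolled measurement like this is noisy and easily misleading, which is precisely why harnesses such as JMH exist.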
We aim to study which optimizations are generated by the Java JIT C2 in comparison to those triggered by LLVM Clang on similar C++ source code, analyzing how powerful each is (runtime optimization by JIT C2 versus ahead-of-time by LLVM Clang), what the advantages and costs of doing these optimizations at runtime (as Java does) are, and what the strengths and weaknesses of each approach are. The talk may also be interesting in the context of Java moving towards AOT compilation, starting with version 9. As a disclaimer, this talk is not a battle: we do not try to crown a winner between Java and C++, but simply to study two different compiler approaches.
Ionut Balosin, Luxoft (ionutbalosin)
Software architect and technical trainer at Luxoft with 10+ years of experience in a wide variety of business applications. Particularly interested in software architecture and performance tuning topics.