Top 3 Tips to Improve your Java Application’s Performance

 

For many years, Java had a bad reputation for being slow. C, C++, and other languages that compile statically to machine code were the standard for writing applications that required low latency, stable execution with minimal jitter, and high overall performance.

As far as performance is concerned, Java originally came with a disadvantage, since it is a managed and interpreted language. The result of statically compiling a Java program is bytecode, which the Java Virtual Machine interprets and executes instruction by instruction.

Since the JVM also manages a Java application’s memory, there’s a level of overhead in the form of the Garbage Collection (GC) the JVM performs to clean up unused objects. The JVM heap memory layout is also less efficient for objects and arrays than in languages like C and C++.

However, throughout the years, many advancements in the JVM itself have more than compensated for those inefficiencies. In recent years, we see more and more companies using Java to develop low-latency, high-performance applications for high-speed trading, scientific simulations, real-time bidding, mobile games, and more.

So in this article, I’m going to share with you 3 tips that you can apply to your Java application today to significantly improve its performance. 

1. Reduce Garbage Collection

One of the main benefits of a managed language like Java is that developers don’t need to worry about freeing and managing memory. The JVM maintains internal data structures that keep track of all allocated objects, and once certain (configurable) thresholds are met, it performs a multi-step garbage collection. How GC works is outside the scope of this article, but the important fact is that GC time and overhead are far from negligible. With the notable exception of Azul’s Zing JVM, every JVM GC algorithm has a Stop-The-World (STW) phase that freezes your entire application. During GC, the JVM deallocates objects that the program no longer uses. Depending on the GC algorithm, it may also move objects within the heap to perform compaction and avoid fragmentation. Furthermore, GC overhead grows with the size of the heap we allocate for our Java program.

So as developers, we have a lot of control over our application’s performance through the object allocation in our program. The biggest misconception I hear from Java developers is “memory allocation is fast nowadays, so we shouldn’t care about it.” Yes, allocations are cheap, but Garbage Collection is NOT.

Avoiding object allocations where possible is one way to go about it. Another very effective technique is called object pooling.

Basically, if you find yourself allocating an immutable object with the same data over and over again, you can simply allocate it once and cache it for subsequent uses.

Some common examples and candidates for object pooling are:

  1. Memory buffers

  2. Thread pools

  3. Connection pools

  4. Dynamically constructed Strings

The logic here is simple: the fewer objects you allocate, the less frequent and the shorter GC cycles become.
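To make the idea concrete, here is a minimal object-pool sketch. The class name, method names, and buffer size are my own illustration, not from any particular library: instead of allocating a fresh byte[] buffer for every task, we borrow one from the pool and return it afterwards, so the garbage collector never sees the churn.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal object-pool sketch (illustrative names, single-threaded).
final class BufferPool {
    private static final int BUFFER_SIZE = 4096;
    private final Deque<byte[]> free = new ArrayDeque<>();

    // Reuse a pooled buffer if one is available; allocate only on a pool miss.
    byte[] borrow() {
        byte[] buf = free.pollFirst();
        return (buf != null) ? buf : new byte[BUFFER_SIZE];
    }

    // Return a buffer to the pool so the next borrower reuses it
    // instead of triggering a fresh allocation.
    void release(byte[] buf) {
        free.offerFirst(buf);
    }
}
```

With this in place, a loop that processes thousands of tasks keeps reusing the same handful of buffers instead of producing thousands of short-lived byte[] objects for the GC to collect. Note that this sketch is not thread-safe; a pool shared between threads would need a concurrent structure or synchronization.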

2. Keep Class Methods Short

If you are not already convinced that keeping your methods short and readable is a good idea, here is another incentive: short methods make your Java application faster.

As mentioned at the beginning of the article, Java is an interpreted language by default. Once the program is launched, the JVM keeps statistics for every method execution. When certain thresholds are met, the JVM compiles those methods into machine code and stores them in memory. This is called Just-In-Time (JIT) compilation. The more isolated your methods are, the more likely you are to reuse them in your application, and the faster the JVM detects that a method is worth compiling. When a method is compiled into native code, it goes through many optimizations that the JIT compiler chooses based on the method’s previous executions in interpreted mode. This results in a huge speedup and much more efficient code.

Very large methods (the exact threshold depends on the JVM) will never be JIT-compiled and will remain interpreted forever.

But that is not all. Calling a method has its own overhead: storing the local variables, jumping to another method, then restoring the local variables and retrieving the result is not cheap. For that, the JVM has another optimization called inlining.

Inlining is when the body of a method is simply copied into every place where the method is called. This way, the overhead of the method call is eliminated entirely.

Small methods are excellent candidates for inlining by the JVM. Longer methods are less likely to be considered for inlining, and methods above certain thresholds (again, depending on the JVM) will not be considered at all.

So follow programming best practices and keep your methods short.
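You can watch the JIT at work yourself with HotSpot’s -XX:+PrintCompilation flag, which logs each method as it gets compiled. The class below is my own toy experiment (the name and the million-iteration loop are just an illustration): a short, branch-free method called from a hot loop crosses the compilation threshold quickly and is a trivial inlining candidate at its call site.

```java
// Run with: java -XX:+PrintCompilation HotMethod
// The short add() method is called enough times to cross the JIT compilation
// threshold, so it shows up in the compilation log.
public class HotMethod {

    // Short and branch-free: easy for the JIT to compile and inline.
    static long add(long a, long b) {
        return a + b;
    }

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum = add(sum, i); // hot call site, executed a million times
        }
        System.out.println(sum);
    }
}
```

If you split the loop body into one huge method instead, you may see it skipped by the compiler; HotSpot also exposes tuning knobs for these limits, but the portable advice remains the same: keep methods short.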

3. Multithreading and Concurrency

By default, each Java application has only one thread of execution. In other words, our application is executed sequentially, instruction by instruction. But when we have independent parts in our application that can be executed in parallel, we can improve our application's performance by a factor of 2, 4, 8, 16, or even more, depending on the hardware our application is running on.

Java and the JVM support multithreading both in the JDK and internally, so by assigning independent pieces of our code to run on different threads, we can achieve a great speedup.
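As a sketch of how this looks with the JDK's built-in ExecutorService (the class name and the slicing strategy below are my own illustration), the example splits an array sum into independent slices and computes each slice on its own thread:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative sketch: summing independent slices of an array in parallel
// using the JDK's ExecutorService.
public class ParallelSum {

    // Splits the array into nThreads slices and sums each slice on its own thread.
    public static long sum(long[] data, int nThreads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        try {
            int chunk = (data.length + nThreads - 1) / nThreads; // ceiling division
            List<Future<Long>> parts = new ArrayList<>();
            for (int t = 0; t < nThreads; t++) {
                final int from = t * chunk;
                final int to = Math.min(from + chunk, data.length);
                parts.add(pool.submit(() -> {
                    long s = 0;
                    for (int i = from; i < to; i++) s += data[i]; // each thread sums its slice
                    return s;
                }));
            }
            long total = 0;
            for (Future<Long> part : parts) total += part.get(); // combine partial results
            return total;
        } finally {
            pool.shutdown();
        }
    }
}
```

The slices share no mutable state, so no locking is needed; the threads only meet again when the partial results are combined. That property, not the thread count, is what makes the parallel version safe.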

There are many benefits to multithreading, including a hidden one that I talk about in this article.

The drawback of multithreading and concurrency is the complexity involved in writing correct, performant, and maintainable concurrent code.

The appeal of multithreading is undeniable. Doubling or quadrupling your performance and saving your company huge operational costs should drive any good developer to take advantage of it. And rightfully so.

But what often happens is multithreading brings unintuitive bugs and issues like deadlocks, live-locks, race conditions, and data races into our program. 

Defensive programmers often tend to overuse locking and synchronization, and to misuse other concurrency primitives, in an effort to avoid such issues. That often erases any speedup you were hoping to achieve through multithreading, making the application overly complex and sometimes even slower.

So my advice with multithreading is either use it wisely or don’t use it at all.

If you make the effort to learn it, you can achieve tremendous performance and responsiveness, so going the extra mile to master it really pays off.

Java Multithreading, Concurrency & Performance Optimization is a course where you can learn all of that, and it takes no more than about 4 hours to complete.

In this course, you will learn how to systematically optimize your Java application’s performance using multithreading, eliminating any guessing or uncertainty. It takes you through topics like Operating Systems fundamentals, memory organization, as well as covers advanced topics like lock-free algorithms, optimizing for latency and throughput, and much more. Check out the course to learn more.
