Preparing for Java Virtual Threads

It was recently announced that virtual threads, a new JDK feature developed by OpenJDK's Project Loom, will be available as a preview feature in the forthcoming Java 19 release.

One of the design goals of this new feature is to enable existing multi-threaded Java apps to adopt virtual threads with little or no change. This has, to all intents and purposes, been achieved in a couple of ways. Firstly, virtual threads can run any existing Java code or native code – there are no restrictions in that respect. Secondly, there are no breaking changes to the existing Thread API, or to associated language constructs such as synchronized blocks, should your code use these lower-level thread management APIs.

However, attaining the full benefits of virtual threads requires Java developers to make a few changes to the way they have solved certain problems in the past. This is euphemistically described by the Loom team as needing to ‘unlearn’ certain coding practices and ways of doing things. This article outlines these cases, highlighting how you can prepare your existing applications to maximise the benefits of virtual threads when they are finalised in a future release of Java. More generally, it considers how the introduction of virtual threads may change the way we develop multi-threaded Java apps in the future.

(Note – virtual threads are only available as a preview in Java 19, and there may well be further preview releases as the feature’s APIs and implementation are refined, and feedback from users is addressed).

1) Why Virtual Threads – A Quick Recap

First a brief recap on what virtual threads are all about, for those who need it.

Virtual threads are new, lightweight implementations of Java’s Thread class that are scheduled by the JDK rather than the operating system, as has been the case for Java threads to date.

The primary goal of virtual threads is to improve the throughput of multi-threaded Java applications that are bound by blocking I/O, by optimising the use of hardware resources (primarily memory, but also CPU), without complicating the programming model. Blocking I/O includes file and network I/O – the latter, for example, is common in distributed systems such as those using a microservices architecture with synchronous web APIs on both the client and server side.

The enhanced JDK uses virtual threads ‘under the covers’ to optimise the use of more expensive system (aka platform) threads, allowing programmers to reap the benefits whilst still being able to use synchronous APIs that are simpler to write and maintain (read and debug) than the alternative of resorting to non-blocking ‘reactive’ APIs*. These synchronous APIs include the familiar one-thread-per-request and per-transaction JEE APIs (such as Servlet and JDBC).

(*Whilst virtual threads may remove the need for reactive APIs purely from a performance perspective, they’re not a full replacement for them in all cases. Reactive APIs offer other benefits including support for back-pressure and composing concurrent requests, which can still be worthwhile in some apps).

With the recap out of the way, let’s get back to covering the cases where we may need to change existing code to take full advantage of the promise of virtual threads, covering both the why and the how.

2) Virtual Threads are Cheap to Create

Virtual threads are several orders of magnitude cheaper to use than platform threads, in terms of their resource (memory and CPU) usage. To date it’s only been possible for a Java app to have a few thousand active platform (operating system) threads. With virtual threads, the Java runtime can easily support hundreds of thousands of active threads.

In addition, blocking (waiting on I/O) on a virtual thread is also cheap – there is no longer a cost in terms of lower CPU utilisation, as there is when blocking a platform thread. You therefore no longer need to write code that avoids blocking a thread (e.g. by using reactive, async APIs) purely for performance reasons. Now that we have virtual threads, writing imperative code that blocks on I/O is fine. This is a benefit because imperative, blocking code is a lot easier to maintain (read and debug) than async code.
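
By way of a minimal sketch (the 100,000 figure and the sleep standing in for blocking I/O are illustrative only), the Java 19 preview method Thread.startVirtualThread can be used to start a virtual thread per task –

import java.util.ArrayList;
import java.util.List;

public class ManyVirtualThreads {
  public static void main(String[] args) throws InterruptedException {
    List<Thread> threads = new ArrayList<>();
    for (int i = 0; i < 100_000; i++) {
      // Each call creates and starts a new, cheap, JDK-scheduled virtual thread
      threads.add(Thread.startVirtualThread(() -> {
        try {
          Thread.sleep(1_000); // Stands in for a blocking I/O call
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
        }
      }));
    }
    for (Thread thread : threads) {
      thread.join(); // Wait for all the virtual threads to finish
    }
  }
}

An equivalent program using platform threads would typically run into memory or O/S thread limits well before reaching this number of threads.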

3) Thread Pools are No Longer Needed

Thread pools are primarily used to avoid the CPU overhead of thread creation. However, with virtual threads this overhead is insignificant, so you no longer need thread pools either. One example is the pool of threads which a web server creates to service incoming HTTP requests. Not only may your favourite pure-Java web server no longer have a thread pool, but you’ll no longer have to size and tune one.

4) ExecutorService – Still Useful in Some Cases

If you’re not familiar with them, Executor is a Java API for executing async tasks which decouples task submission from the way in which each task is run. ExecutorService is a specialisation of Executor that primarily adds the ability to track and manage submitted tasks using a Future. Most implementations are backed by a thread pool of a certain capacity, and an in-memory queue for holding submitted tasks until a thread becomes available. Will virtual threads also remove the need for an ExecutorService? In some cases yes, but in others not…

4.1) Limiting the Number of Threads Used to Execute Async Tasks

Today an ExecutorService is commonly used to limit the number of platform threads that your app uses to execute async tasks. This will be unnecessary because, as mentioned above, creating virtual threads is cheap and you can have a great many of them.

4.2) As an API for Creating Threads for Tasks

However, as mentioned above, an ExecutorService also encapsulates how tasks are run – the number of threads used, and when they’re created – and the implementations provided by the core Java library write that code for you, which is still valuable. This is why the first preview of virtual threads in Java 19 includes a new implementation of ExecutorService that creates a new virtual Thread to run each submitted task, on demand. See the Javadoc for the factory method Executors.newVirtualThreadPerTaskExecutor(), which also notes that the number of threads created by the Executor is unbounded. This new implementation of ExecutorService continues to provide application code with a convenient, out-of-the-box way to run tasks asynchronously via a backwards-compatible API, but now takes advantage of virtual threads under the covers.
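
As a rough usage sketch (the task body is just a placeholder), it can be used as follows. Note that as of Java 19 ExecutorService also extends AutoCloseable, so it can be used with try-with-resources, with close() waiting for submitted tasks to complete –

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class VirtualThreadExecutorExample {
  public static void main(String[] args) throws Exception {
    // close() is called automatically on exiting the block, waiting for tasks to complete
    try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
      // Each submitted task runs in its own, newly created virtual thread
      Future<String> future = executor.submit(() -> {
        // A real task would typically perform blocking I/O here, e.g. a remote call
        return "result";
      });
      System.out.println(future.get());
    }
  }
}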

4.3) Capping the Number of Concurrent Threads that can Access a Shared Resource

An ExecutorService is also sometimes used to cap the number of threads that can concurrently access a shared resource, e.g. an I/O device or a remote web service.
For example, if you wanted to cap concurrent accesses to a shared resource at 100, you could use the factory method Executors.newFixedThreadPool(int nThreads). This creates an ExecutorService backed by a fixed-size pool of worker threads, which queues submitted tasks once the number executing concurrently reaches the limit.
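
For illustration, a sketch of that approach might look like the following (accessSharedResource() being a placeholder, as in the example further below) –

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Backed by a pool of (at most) 100 platform threads, so at most 100 tasks access
// the shared resource concurrently; further tasks queue until a thread frees up
ExecutorService executor = Executors.newFixedThreadPool(100);
executor.submit(() -> accessSharedResource());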

As already stated above, pooling threads is no longer necessary when using virtual threads (and is in fact something you want to avoid, as it limits the throughput gains when performing blocking I/O). As a result, with the introduction of virtual threads, the Loom team advocates removing the use of an ExecutorService for this use-case and changing your code to use a Semaphore instead, e.g.

import java.util.concurrent.Semaphore;

class SomeClass {
  // At most 100 threads may access the shared resource concurrently
  private final Semaphore semaphore = new Semaphore(100);

  public void execute() throws InterruptedException {
    semaphore.acquire(); // Blocks until a permit becomes available
    try {
      accessSharedResource();
    } finally {
      semaphore.release(); // Always return the permit
    }
  }
}

Their argument for this change is that, in this case, a Semaphore shows the intent more clearly than an ExecutorService. Whilst that’s true, the counter-argument is that it requires you to write and test extra code which isn’t needed when using an ExecutorService – although, as shown above, the amount of extra code is small.

Side Note – There are at least a couple of scenarios in which using an ExecutorService to cap the number of concurrent threads that can access a shared resource won’t cut it, reducing the cases in which it is used for this purpose. Firstly, if your application is distributed (you’re running multiple instances) – common for enterprise apps that need to meet availability or scalability requirements – and you have an absolute upper limit on the application’s overall use of the shared resource, rather than just a need to loosely constrain usage per instance, then an ExecutorService won’t suffice; you’ll need a distributed lock instead. Secondly, for some resources a more specialised type of object pool is a better choice than directly using an ExecutorService (an abstracted thread pool), as it can provide a higher-level API and additional features. For example, capping the number of connections to a relational database is best implemented using a (database) connection pool, which provides additional features such as a connection factory, connection validation, dynamic resizing, eviction etc.

5) Thread Synchronisation – Pinned Threads

In Java, ‘synchronized’ blocks provide a simple way to implement critical sections (aka mutual exclusion) in your application code. Virtual threads continue to support the use of the synchronized keyword as part of the goal of achieving backwards compatibility. However, there is currently a limitation which means code executing in a synchronized block may, in some cases, hinder the throughput benefits of virtual threads.

Virtual threads continue to run on platform (O/S) threads under the covers – every active virtual thread has a ‘carrier’ platform thread. The trick virtual threads use to achieve their performance gain is to temporarily detach and park a virtual thread whenever it blocks, so the same platform thread can be used to run another virtual thread. Blocking operations include a) I/O, b) a thread being explicitly put to sleep, and c) a thread waiting to enter a synchronized block. In the current implementation of virtual threads there is a limitation such that if a virtual thread is executing code in a synchronized block, it cannot be detached from its carrier platform thread – the virtual thread is said to be ‘pinned’. If the code that’s executing is short-lived / runs fast, then this isn’t an issue – no changes to your code are needed. However, if you have code containing a synchronized block that performs a potentially slower blocking operation, then this will continue to limit your application’s throughput, because you’ll be tying up one of your platform threads. It won’t be any worse than before, but you won’t be taking full advantage of virtual threads.
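
To make the problem concrete, here’s a sketch of the kind of code that causes pinning (doWork() being a placeholder for a potentially slow, blocking operation, as in the example that follows) –

public synchronized void updateSharedState() {
  // While a virtual thread is executing this method it remains pinned to its
  // carrier platform thread, even while blocked on the slow operation below
  doWork(); // Performs potentially slow blocking I/O
}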

The Loom team has stated that they aim to fix this limitation, so the problem may no longer exist by the time virtual threads become GA. However, if it doesn’t get solved before then, and this is a significant enough performance issue for you, then you may want to do something about it.

First, if you want to identify occurrences of ‘pinned’ virtual threads in your own code, or in libraries you use, you can set the new Java system property jdk.tracePinnedThreads when launching your app (e.g. -Djdk.tracePinnedThreads=full, or =short for a more compact trace). This will print a stack trace to the console whenever pinning occurs.

Assuming you own the code that needs changing, the team’s advice is to replace the use of the synchronized block with Java’s ReentrantLock. For example –

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

Lock lock = new ReentrantLock();

lock.lock(); // Unlike synchronized, acquiring a ReentrantLock does not pin the virtual thread
try {
  // Performs potentially slow blocking I/O
  doWork();
} finally {
  lock.unlock(); // Always release the lock
}

As stated in its Javadoc, a ReentrantLock has the same basic behaviour and semantics as the implicit monitor lock accessed using synchronized methods and statements, but with extended capabilities. As described in this post, there are a number of reasons why you might choose to use a ReentrantLock instead of a synchronized block. But if you don’t need ReentrantLock’s more advanced features then, as before, the solution requires a little more code and unit testing, although not much.
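
For example, one such extended capability is attempting to acquire the lock with a timeout rather than blocking indefinitely – sketched below, again with doWork() as a placeholder –

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

ReentrantLock lock = new ReentrantLock();

// Note - tryLock(timeout, unit) can throw InterruptedException
if (lock.tryLock(2, TimeUnit.SECONDS)) {
  try {
    doWork();
  } finally {
    lock.unlock();
  }
} else {
  // Couldn't acquire the lock within the timeout - back off, retry or report an error
}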

6) Adopting Virtual Threads (in the Future)

The above sections have covered the cases where we may need to change existing code, or solve problems in different ways in the future, to take full advantage of the promise of virtual threads. Before wrapping up, it’s worth stating that the JDK will not create virtual threads in preference to platform threads by default, at least not for the foreseeable future. Instead, developers have to explicitly choose to use a virtual thread rather than a platform thread. One of the reasons for this is that a virtual thread may not be appropriate for executing all types of task. As stated in the Javadoc: “Virtual threads are suitable for executing tasks that spend most of the time blocked, often waiting for I/O operations to complete. Virtual threads are not intended for long running CPU intensive operations.”

The preview release in Java 19 includes a number of new (draft) APIs supporting the use of virtual threads, including –

  • Thread.ofVirtual() – a Thread.Builder for configuring, creating and starting virtual threads (Thread.ofPlatform() is the equivalent for platform threads)
  • Thread.startVirtualThread(Runnable) – a convenience method that creates and immediately starts a virtual thread to run the supplied task
  • Thread.isVirtual() – for testing whether a thread is a virtual thread
  • Executors.newVirtualThreadPerTaskExecutor() – the new ExecutorService implementation described above, which creates a new virtual thread per submitted task

Adopting the use of virtual threads will therefore require using one or more of the above APIs. Library and framework developers, rather than application developers, are likely to make the heaviest use of these new APIs, particularly the ones provided by the Thread class.
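
For completeness, here’s a minimal sketch of creating a virtual thread directly via the new builder API (the thread name is arbitrary) –

public class VirtualThreadHello {
  public static void main(String[] args) throws InterruptedException {
    // Create and start a named virtual thread using the Java 19 preview builder API
    Thread thread = Thread.ofVirtual()
        .name("my-virtual-thread")
        .start(() -> System.out.println("Hello from " + Thread.currentThread()));
    thread.join();
  }
}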

For production apps, adoption of virtual threads is best deferred until after they have been finalised and become GA in a future version of Java. In the meantime, if you want to experiment with virtual threads you’ll need to install JDK 19 (currently an Early Access release) and enable preview features when compiling and running your app. For more details see the Further Reading section below.

7) Summary and Conclusion

Virtual threads are a new, lightweight implementation of a Thread coming to Java. They’re available now as a preview in Java 19, allowing you to trial their usage in conjunction with existing apps. They promise increased throughput for apps whose performance is limited by blocking I/O, by making more effective use of O/S (aka platform) threads, and hence of the available hardware (CPU and memory) resources. They achieve this by effectively multiplexing many virtual threads over a smaller number of platform threads.

As we’ve come to expect with Java, virtual threads are fully backwards compatible with existing Java code and the use of existing Thread APIs in multi-threaded code, so there are no enforced changes. However, they do render some coding practices unnecessary going forward, and in a small number of cases you may need to change your existing code to fully benefit from them. These cases include –

  • Writing synchronous code that blocks on I/O is now ok. (You no longer need to write asynchronous code just to avoid blocking a thread e.g. using CompletableFuture). 
  • Thread pools are no longer needed, because creating virtual threads is cheap. 
  • An ExecutorService is no longer needed purely to limit the number of threads used to execute async tasks, or to cap the number of threads that can access a shared resource. In fact, to maximise the benefits of virtual threads you want to avoid capping the number of threads used. Use of an ExecutorService still has its place though, as an out-of-the-box implementation of an abstraction for creating the threads used to run tasks.
  • Virtual threads are fully compatible with the use of synchronized blocks, but the first preview release in Java 19 contains a limitation that means code performing blocking I/O in these blocks gets pinned to (and may hog) an O/S (platform) thread, reducing the throughput gains. However, this may be addressed in a future release and, synchronized block code aside, the overall performance of an app that does blocking I/O is still likely to be better than it was before virtual threads.
  • For a couple of use-cases, the Loom project team are advocating alternative ways to code solutions that in the past have utilised ExecutorService and synchronized blocks. This does require writing and testing some extra application code which previously wasn’t necessary. However, these changes are not mandatory, and the amount of extra code seems insignificant.

8) Further Reading

If you’re interested in finding out more about this topic, I recommend reading the following online articles – 

[1] JEP 425: Virtual Threads (Preview), OpenJDK project. 22/06/22 – The proposal for extending the JDK to include virtual threads. This JDK Enhancement Proposal (JEP) includes the following and more –

  • Goals (and Non-Goals) of virtual threads
  • Motivation for virtual threads
  • The “Using virtual threads vs. platform threads” section contains example code for using virtual threads. 
  • The “Virtual threads are a preview API, disabled by default” section explains how to enable the virtual threads preview API in JDK 19 to support trialling their use. 
  • The “Detailed changes” section describes in detail all of the changes that have been made to the JDK to support virtual threads, including the aforementioned APIs for using virtual threads.

[2] State of Loom (Part 1), Ron Pressler, May 2020 – An older article written by the Loom project lead Ron Pressler. Note the virtual thread APIs mentioned in this article may now be out of date (the JEP [1] should be treated as the up to date, authoritative source), however this article still provides a useful introduction to virtual threads and explains how the JDK has been adapted to use them.

Thanks.
