
Time is Code, my Friend

For several years, the fast startup time of a Java application or service has been a highly discussed topic. According to the authors of numerous articles, it seems that if you don’t apply some techniques for a fast startup, your project is doomed.

This is of course nonsense, and I hope that everyone is sensible enough to realise that this is hype and a sales and marketing trick to sell you some product. Unless you are working on a project where start-up time is crucial, such as financial transactions, the start-up time of 99.9% of applications is not important at all (within limits).

Moreover, topics like startup time, microservices, and serverless focus only on architecture and non-functional requirements. If the marketing hype is to be believed, it seems that functionality for the user is not important anymore, as long as you have an application that uses at least 5 of the current hypes.

Today, I want to add another interesting finding about applications or services focused on fast startup times. As a general rule, the faster your application starts, the less functionality it has.

The test

X starts up faster than Y. When applying Z, you can achieve a 30% reduction in start-up time. Some frameworks even base their core marketing tagline on this. Let us look at 10 solutions and compare them to see what we can learn from the results.

The idea of this test is to develop an application, slightly modified for the requirements of each framework, that has an endpoint capable of processing JSON. As a performance indicator, I measure the time at which the first successful response is served by the runtime. I also captured the RSS of the process (see the description at the bottom of the text) to get an idea of the memory usage. The third parameter I note is the size of the runtime artefact.

Here is a listing of the runtimes:

Spring Boot 3.0. Project created using Spring Initializr using Spring Reactive web to make use of Netty.

Quarkus 2.16. Project created using RESTEasy Reactive and RESTEasy Reactive JSONB.

Ktor 2.2. Project created using IntelliJ project wizard using Netty server, Content negotiation and Kotlin JSON serialisation.

HttpServer included in JVM with the addition of Atbash Octopus JSON 1.1.1 for the JSON Binding support. Custom minimal framework for handling requests and sending responses.

Payara 6. Jakarta EE 10 application running on Payara Micro 6.

OpenLiberty. Jakarta EE 8 application running on OpenLiberty.

WildFly 27. Jakarta EE 10 application running on WildFly 27.

Helidon SE 3.1. The project was created using the Helidon Starter and includes JSON-P and JSON-B support.

Piranha. Jakarta EE 10 Core Profile application running on the Core Profile distribution.

Micronaut 3.8. The project was created using Micronaut Launch and has support for JSON handling.
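To give an idea of what the pure-Java variant looks like, here is a minimal sketch built only on the JVM's `com.sun.net.httpserver.HttpServer`. It is an illustration, not the actual test application: the real project uses Atbash Octopus JSON for binding, while this sketch hand-rolls its JSON and measures its own first-response time.

```java
import com.sun.net.httpserver.HttpServer;

import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class MiniServer {

    // The JSON payload served by the endpoint (hand-rolled, no binding library).
    static String jsonBody() {
        return "{\"message\":\"Hello World\"}";
    }

    public static void main(String[] args) throws Exception {
        long start = System.nanoTime();

        // Port 0 lets the OS pick a free port.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/hello", exchange -> {
            byte[] body = jsonBody().getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();

        // Measure the time until the first successful response, similar in
        // spirit to the measurement done in the test.
        int port = server.getAddress().getPort();
        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create("http://localhost:" + port + "/hello")).build(),
                HttpResponse.BodyHandlers.ofString());
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println(response.body());
        System.out.println("First response after " + elapsedMs + " ms");
        server.stop(0);
    }
}
```

Since there is no framework to initialise, almost all of the elapsed time is JVM startup and socket setup, which explains the very low numbers in the results below.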

Why these 10? Because it is a wide range of frameworks in Java or Kotlin with various features and because it is the list of runtimes that I know well enough to create backend applications in.

The data is gathered when running JDK 17 (Zulu 17.30) on a machine with 16 GB of memory and an 8-core Intel i9 processor at 2.3 GHz. Just to give you all the details of the test environment in case you want to repeat the tests.

The time of the first response is determined by a Python 3 script, to make sure we can measure very fast startup times accurately.

All the information and code can be found in the Project FF GitHub repository.

The Results 

I will not keep you in suspense any longer and present the test results in the table below. The table is sorted by first request time, from low to high.

Framework       First Request   RSS      Artefact size
Pure Java       158 ms          41 MB    11 kB
Helidon         632 ms          94 MB    5.7 MB
Quarkus         682 ms          111 MB   14.1 MB
Ktor            757 ms          111 MB   15.3 MB
Micronaut       956 ms          116 MB   12.2 MB
Piranha         1060 ms         140 MB   8.7 MB
Spring Boot     1859 ms         184 MB   21.3 MB
WildFly         4903 ms         313 MB   63.1 MB
Open Liberty    6268 ms         356 MB   47.6 MB
Payara Micro    10018 ms        438 MB   90.1 MB

The placement of the frameworks is sometimes surprising, while other known facts are confirmed in this test.

But before I discuss some individual results, I want to highlight another aspect visible in the data, one that is actually more important than which framework is a few hundred milliseconds faster.

Graph representing Start up time versus Artefact size

If we plot out the artefact size of the runtime against the first response time, we see a pretty good, almost linear relationship. And that is not really a surprise if you think about it. 

No Functionality is a Fast Startup

Looking at the table and the graph in the previous section, it is clear that we can achieve a fast first response when we have little to no functionality available in our application and runtime.

The evidence for that is shown in the example where a simple endpoint is created with only the HttpServer class of the JVM itself and a simple JSON handling library with only the basic functionality. It starts up in an astonishing 158 milliseconds. That is about 25% of the time required by Quarkus, for example.

And the trend that having little code and functionality available results in a fast startup can also be seen in the results of the other frameworks. Really modular frameworks, like Helidon SE, Quarkus, and Ktor, are much faster than less modular ones like OpenLiberty and WildFly, or Payara Micro which is not modular at all.

Hence the title “Time is Code, my Friend”: you can achieve a faster startup by sacrificing functionality within your application and the runtime, such as observability. And yes, native compilation can help here, but as said, for 99.9% of applications startup time is irrelevant. As long as it is reasonably fast, it doesn’t matter whether it takes 1 or 5 seconds.

But this actually means that the above results, and the majority of all those other blogs you can read about fast startup times, are completely useless. Do you have an application that just takes in a JSON object and returns a response? Without interaction with any external system, like a database or other data storage, your application will start up fast but will not be very useful.

And another reason why the above results are useless is the environment in which they are executed: a machine with an 8-core CPU. If you allocate that configuration at your cloud provider, you will spend quite a lot of money each month. People hesitate to assign more than 0.5 or 1 CPU to their process, which means that startup times must be multiplied by a factor.

So we should perform this kind of performance test in an environment with low CPU power, with the application accessing a database, where latency is also a factor you should not ignore.

Individual framework discussion

The first observation we can make is that when you really need high performance, do not use any framework or runtime; instead, create your own specialised solution. All frameworks and runtimes must be generic to support most scenarios development teams have. This comes at a cost, of course, since developing a specialised solution requires doing much more yourself. It is always a trade-off.

Secondly, Quarkus is less performant in startup time than I expected. It has a very elaborate and complex system for applying build time improvements so that the application can start up really fast. Yet it only reaches the same response times as Helidon SE and Ktor, which don’t have such pre-processing. These build time improvements make Quarkus much more complex, certainly under the hood, which will make maintenance of the framework much harder in the future, and they require a Quarkus extension for any library you want to use in combination with Quarkus. There are currently more than 400 extensions, because a specific extension is needed to make use of the build time enhancements.

Some frameworks start up faster or slower than expected based on the artefact size. Piranha only starts at 1060 ms, whereas the trend line says it should start at about 750 ms, so it is roughly 40% slower than expected. OpenLiberty is also slower than expected. On the other hand, WildFly, Quarkus, and Ktor are faster.

Atbash Runtime?

I didn’t include the results of Atbash Runtime in the table and the graph, for the simple reason that it is my pet project to learn about many aspects of runtimes. But here are the results:

Framework        First Request   RSS      Artefact size
Atbash Runtime   1537 ms         168 MB   8.2 MB

And I’m very proud of the result. Without any effort focused on startup time, just by trying to create a modular Jakarta EE Core Profile runtime, I succeeded in beating the startup time of Spring Boot. And apart from Helidon SE, it has the smallest artefact of the frameworks.


With this comparison of first response time and artefact size for a simple application across 10 Java and Kotlin runtimes, I want to stress a few important points:

– All the blogs and articles about startup time are useless, since they are based on running a useless application in an unrealistic environment. So we should stop focusing on these architectural aspects and non-functional requirements; development teams must instead focus on the things that matter to the end users of the applications!

– Modular runtimes and frameworks help you remove functionality you do not need. This can help keep the startup time fast enough, but it also introduces some complexity in building and maintaining the artefacts for your environment.

– The build time enhancements of Quarkus make it faster than expected based on its artefact size. But they introduce complexity that can become a problem for the future of the framework. Other solutions, like Helidon SE and Ktor, achieve the same startup time without these tricks.

– Native compilation (AOT, GraalVM, etc.) can reduce startup time and memory usage, but for 99.9% of applications those gains are irrelevant, and it only introduces more complexity.

( I would like to thank Debbie for reviewing and commenting on the article)


The RSS (Resident Set Size) of a process is the portion of its memory that is held in RAM (Random Access Memory); the operating system uses it to track the memory usage of the process. It includes both the data and code segments of the process, as well as any shared libraries it is linked against.

The tests are performed on a machine with about 10 times more free memory than needed, to make sure the process only uses RAM and never swaps.

Project FF, the Framework Frenzy, Framework Face-off, Framework Fireworks, or …, is the comparison of Jakarta EE Web Profile, Spring Boot, Quarkus, Kotlin Ktor, and a few others regarding creating backend applications with REST endpoints. It will host an extensive comparison of many frameworks regarding the functionality found in Jakarta EE Web Profile, MicroProfile, and other functionality commonly used in today’s back-end applications.

Atbash Training and Support

Do you need a specific training session on Jakarta or MicroProfile? Or do you need some help in getting up and running with your next project? Have a look at the training support that I provide on the page and contact me for more information.


MicroProfile support in Java EE Application servers


With Java EE, you can create enterprise-type applications quickly, as you can concentrate on implementing the business logic.
You are also able to create applications that are more micro-service oriented, but some handy ‘features’ are not yet standardised. Standardisation means specifying best practices, and discovering and validating those best practices of course takes time.

The MicroProfile group wants to create standards for those micro-services concepts which are not yet available in Java EE. Their motto is

Optimising Enterprise Java for a micro-services architecture

This ensures that each application server following these specifications is compatible, just like with Java EE itself. And it prepares for the introduction of these specifications into Java EE, now Jakarta EE under the governance of the Eclipse Foundation.


There are already quite a few specifications available under the MicroProfile flag. Have a look at the MicroProfile site to learn more about them.

The topics range from configuration, security (JWT tokens), operations (metrics, health, tracing), and resilience (fault tolerance) to documentation (OpenAPI), and more.


Just as with Java EE, there are different implementations available for each spec. The difference is that there is no Reference Implementation (RI), the special implementation that accompanies the specification documents.
All implementations are equal.

You can find standalone implementations of all specs within the SmallRye umbrella project or at Apache (mostly under the Apache Geronimo umbrella).

There are also dedicated ‘server’ implementations written specifically for MicroProfile, mostly based on Jetty or Netty, with all the implementations added to form a compatible server.
Examples are KumuluzEE, Hammock, Launcher, Thorntail (v4 version), and Helidon.

But implementations are also made available within Java EE servers, which integrates both worlds tightly. Examples are Payara and OpenLiberty, and more servers are following this path, like WildFly and TomEE.

Using MicroProfile in Stock Java EE Servers

When you have a large legacy application that still needs to be maintained, you can also add the MicroProfile implementations to the server and benefit from their features.

It can be the first step in taking parts out of your large monolith and placing them in a separate micro-service. When your package structure is already well defined, the separation can be done relatively easily and without rewriting your application.

Adding individual MicroProfile implementations to a server is, however, not always successful, due to the advanced CDI features those implementations use. To try things out, take one of the standalone implementations from SmallRye or Apache (Geronimo) – Config is probably the easiest to test – and add it to the lib folder of your application server.

Dedicated Java EE Servers

There is also a much easier way to try out the combination: choose a certified Java EE server that already has all the MicroProfile implementations on board. Examples today are Payara and OpenLiberty, but other vendors are going this way too, as the integration has started for WildFly and TomEE.

Since the integration is already done, you can just start using them. Add the MicroProfile Maven BOM to your pom file and you are ready to go.
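For example, a minimal sketch of the dependency to add. The version shown is an assumption; pick the MicroProfile release that your server supports.

```xml
<!-- MicroProfile umbrella BOM; scope "provided" because the server
     already ships the implementations. Version is an assumption. -->
<dependency>
    <groupId>org.eclipse.microprofile</groupId>
    <artifactId>microprofile</artifactId>
    <version>2.0.1</version>
    <type>pom</type>
    <scope>provided</scope>
</dependency>
```

With this single dependency, the MicroProfile APIs become available at compile time without packaging any implementation jars into your WAR.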


This way, you can define how much Java EE or MicroProfile functionality you want to use within your application, and you can achieve a gradual migration from existing Java EE legacy applications to a more micro-service-like version.

In addition, there are also Maven plugins to convert your application to an executable uber jar, or you can run your WAR file using the hollow-jar technique with Payara Micro, for example.


With the inclusion of the MicroProfile implementations into servers like Payara and OpenLiberty, you can enjoy the features of that framework in the Java EE application server you are already familiar with.

It allows you to make use of these features when you need them, create more micro-service-like applications, and make a start on decomposing your legacy application into smaller parts if you feel the need.

Enjoy it.


The advantages of the monolith


There are various possible ‘architectures’ for your application: monoliths, micro-services, and self-contained systems, to name a few. When something is a hype, one can easily forget that there is no silver bullet and that we should use the best solution for the problem at hand.

And these days, a lot of people forget that monoliths are powerful when used correctly, in certain use cases.

So let’s have a look at the simplicity, easy integration and consistency aspects of the monolith.


Simple structure

Monoliths are simple in structure. You have one source code repository that is compiled and packaged with a single command to produce the one artifact you need to deploy.

So for a new team member, getting up and running is easy and fast: get the link to the source code, download and package it, and run it on the local machine. Even if you need an application server, setup can be as simple as executing a script that configures it correctly.

This is in contrast with micro-services, for instance: several code repositories, different packaging systems because of different development languages and frameworks, and additional instances for the configuration service, the API gateway, the discovery service, and so on.

Easy / fast integration

In a bigger monolith, the integration between different parts is easy and efficient.

Since everything runs within the same JVM, integration is as simple as calling another Java method. There is no need to convert the data to JSON, for example. So it is faster, and you avoid the conversion pitfalls that can occur in some corner cases.

And since everything runs in the same process, you don’t have to take the fallacies of distributed computing into account (which developers tend to forget anyway). So no issues with network failures, latency, limited bandwidth, changing topology, and all the other pitfalls that arise when your application is distributed across different machines.
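To make the contrast concrete, here is a minimal sketch of in-process integration in a monolith. The service and type names are hypothetical; the point is that one module calls another through a plain, typed method call, with no serialisation or network in between.

```java
// Hypothetical "payments" module: a typed request object is passed by
// reference, in-process, with no JSON conversion anywhere.
record PaymentRequest(String orderId, long amountCents) {}

record PaymentResult(boolean accepted) {}

class PaymentService {
    PaymentResult charge(PaymentRequest request) {
        // Toy business rule for the sketch: reject non-positive amounts.
        return new PaymentResult(request.amountCents() > 0);
    }
}

// Hypothetical "orders" module: integration is just a Java method call,
// so network failures, latency, and partial failures cannot occur here.
class OrderService {
    private final PaymentService payments = new PaymentService();

    boolean placeOrder(String orderId, long amountCents) {
        return payments.charge(new PaymentRequest(orderId, amountCents)).accepted();
    }
}

public class MonolithCallDemo {
    public static void main(String[] args) {
        System.out.println(new OrderService().placeOrder("A-1", 1999)); // prints: true
    }
}
```

In a micro-service setup, the same call would involve JSON mapping, an HTTP client, timeouts, and retry logic; in the monolith it is the compiler that checks the contract between the two modules.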


Consistency

The consistency of your UI is very important for the end user. Each screen must be built in a similar manner, with the same look and feel and layout. When different parts of your application look or behave differently, it becomes much more difficult and inefficient to use, and the user will probably stop using it.

With your monolith, you are using a single language and the same framework for generating the UI. So with a bit of discipline from the developers (and a good UI/UX design), it is fairly easy to create uniform screens.

In the case of micro-services, where you may have different UI frameworks for different parts of the application, it will be very difficult to achieve this uniformity.

Or even worse, you build a giant monolithic SPA on top of your fine-grained micro-services, which breaks an important concept of the architectural vision: independently deployable artifacts.

Monolith as the start

Since it is so easy to get started with a monolith, every application should begin as one. You have your first production artifact ready much faster.

In the following phases, you can gradually split it up and evolve into a distributed system if your use cases mandate it.

Just pay attention to the package structure within the monolith application. Make sure there is a clean separation from the start so that extracting the different pieces later can go smoothly.


The advantages of the monolith are often forgotten these days. Yet it has all the features to give you a quick first success with your application, and the time to market is in many cases shorter due to its simplicity and easy integration. And if you pay careful attention to the structure of your code, you can even build a large monolith successfully.

Have fun.


How small are micro-services?


Sometimes you see heated discussions about the size of a micro-service. After all, they should be small, as you can read from the definition. But small is a subjective concept, and in the end, size doesn’t have anything to do with micro-services.

This is part of my monolith, micro-service and self-contained systems comparison blog series.

Single responsibility or Domain Driven Design (DDD)

Some people are so focused on the size (lines of code or number of classes) of their micro-services that they split them up or create some kind of strange construct.

But the single-responsibility principle of a micro-service is much more important: each service is created to handle the logic of one domain.

In an online webshop, you could have micro-services for order, product, or payment management, for example. And when some part of your application needs the logic of that domain, it calls that micro-service.

Pizza someone?

Other people try to control the size of a micro-service by limiting the number of people in the team creating it.

Since each micro-service is ‘owned’ by a team, they define the team size as the number of people you can feed with two pizzas: the so-called pizza team.

But if you have ‘big eaters’, does your team then consist of 3 people: 1 analyst, 1 developer, and 1 tester? The pizza team as a metric for the size of a micro-service doesn’t make any sense.

How small is small?

Well, as small or as big as needed to implement the logic of the domain for which the micro-service is designed. And if that happens to be 1 million lines of code, so be it. OK, in that case you should probably try to split your micro-service into different parts, each related to a subdomain. But in itself, there is nothing wrong with bigger micro-services.

The number of lines of code is not an issue as long as it isn’t blocking continuous delivery to production. When the service tends to become a monolith, where the development of one feature blocks the release of the micro-service to production, then you have an issue.


Forget about the lines of code or the number of people in the team creating the micro-service as a metric. That is not a good way to manage your micro-service architecture.

The main reason to do micro-services is to have separate services, one for each domain of your application. And keep in mind that you should be able to go to production at any time with your micro-service. Don’t let a big feature block you from releasing the small updates you have already made.

Have fun.
