
Why ORM and Data frameworks are not your best option

As you all know, the relational world of databases is quite different from the object-oriented one. Both can store the same data, but they do it differently, which makes it a challenge to move data between an object-oriented language like Java and a SQL-based database.

In principle, ORM frameworks (Hibernate, to name the most popular one) and ‘data’ frameworks, like Spring Data or the new Jakarta Data, handle this conversion well. But only for simple cases.

This blog is mainly about the SQL case, but it also applies to the NoSQL frameworks. When you try to abstract away the communication with the underlying system, you lose performance, functionality, and flexibility.

The JDBC connectivity

From the early days of Java, the Java Database Connectivity (JDBC) API made it possible to interact with SQL databases. It allowed the Java developer to execute SQL statements and retrieve data for the application.

In those days, it was normal that every developer knew SQL very well and could write the most complex queries to retrieve the data required for the advanced use cases of their applications. Using joins, sub-queries, and grouping is not that hard and can easily be mastered within a week.

The challenge and difficulty lie in the usage of the JDBC API. In those early days, many APIs were designed so that the developer still needed to perform many actions manually. That leads to a lot of boilerplate code to close statements, result sets, and connections. If you fail to do this properly, it results in resource leaks and failures in your application over time.

But there are other challenges. You need to manually assign each column value from a JDBC ResultSet to the properties of your objects. This is not the most rewarding piece of code that you as a developer write on a project.

Refactorings where database fields change can also be a challenge, since the SQL statements you execute are actually just Strings and thus are not validated at compile time.
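
To illustrate the kind of boilerplate involved, here is a minimal sketch, assuming a hypothetical PERSON table and a matching Person class (imports from java.sql and javax.sql are omitted):

    // Hypothetical PERSON table and Person class.
    List<Person> findByName(DataSource dataSource, String name) throws SQLException {
        String sql = "SELECT id, name, email FROM person WHERE name = ?";
        List<Person> result = new ArrayList<>();
        // try-with-resources closes the connection, statement and result set;
        // before Java 7 this required nested finally blocks.
        try (Connection connection = dataSource.getConnection();
             PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setString(1, name);
            try (ResultSet resultSet = statement.executeQuery()) {
                while (resultSet.next()) {
                    Person person = new Person();
                    // Manual, column-by-column assignment to the object properties.
                    person.setId(resultSet.getLong("id"));
                    person.setName(resultSet.getString("name"));
                    person.setEmail(resultSet.getString("email"));
                    result.add(person);
                }
            }
        }
        return result;
    }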

The benefits of the ORM

All the challenges described in the previous section are addressed by ORM tools like Hibernate. They simplify database interactions and bridge the gap between object-oriented programming languages and databases.

Firstly, ORM tools eliminate the need for developers to write repetitive and error-prone boilerplate code. That is hidden away in the framework code.

Another significant benefit of ORM tools is their ability to provide type-safe mappings between database tables and object-oriented classes. Initially defined through XML files and later on through the annotations available from Java 5 onwards, the tool handles the translation of data between the database and objects. Besides the mapping of single fields, it allows you to represent foreign key relations from the database, but also concepts that do not exist in the database, such as the OneToMany relation that gives you all children of each master record.
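
As a minimal sketch of such a mapping, assuming hypothetical Order and OrderLine entities:

    @Entity
    @Table(name = "orders")
    public class Order {

        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        private Long id;

        @Column(name = "customer_name")
        private String customerName;

        // A concept that does not exist as such in the database:
        // the collection of child records for this master record.
        @OneToMany(mappedBy = "order")
        private List<OrderLine> lines;
    }

    // In its own file.
    @Entity
    public class OrderLine {

        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        private Long id;

        // The foreign key relation as it exists in the database.
        @ManyToOne(fetch = FetchType.LAZY)
        @JoinColumn(name = "order_id")
        private Order order;
    }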

Although the SQL language is standardised, there are several versions, like SQL-92, SQL-2003, and SQL-2023, where the JSON datatype was introduced. Not all databases support the same version, and each database uses its own, slightly different variant. This is handled by the introduction of an ORM Query Language (Hibernate Query Language, JPA Query Language, etc.) and a Dialect for each database that converts these queries to a format supported by that database.
This ORM Query Language is thus a subset of all possible features found in the databases, so you cannot use the full power of the database.

Additionally, ORM tools offer a range of features such as support for database schema migrations, caching, and query optimization. They enable developers to work with databases more abstractly and intuitively, freeing them from having to think in terms of SQL statements and database-specific details. This abstraction allows for greater flexibility in choosing the underlying database system, but it also results in lower performance, as it adds an additional, rather complex layer, and in less functionality.

After the ORM tools had been in use for several years, people saw that querying a table and filtering on one or a few fields results in a handful of very similar statements.

This led to the creation of data projects like Spring Data. Instead of writing these few statements each time, they are derived from the method names of specially indicated interfaces.

List findByName(String name)

This kind of method just replaces 3 lines of code with the ORM tool.
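
A sketch of such a Spring Data interface, assuming a hypothetical Person entity; the comment shows roughly the ORM code the derived method replaces:

    public interface PersonRepository extends JpaRepository<Person, Long> {

        // The query is derived from the method name.
        List<Person> findByName(String name);
    }

    // Roughly the ORM code that such a derived method replaces:
    // entityManager.createQuery("SELECT p FROM Person p WHERE p.name = :name", Person.class)
    //              .setParameter("name", name)
    //              .getResultList();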

The problems of the ORM

The previous section describes many useful improvements in accessing the database by using an ORM tool. But the approach has its own challenges and problems.

Probably the most common problem is the choice between the lazy and eager loading strategies and the ‘N+1 select issue’.
In almost all cases, you don’t retrieve the records of a table in isolation; you also need to take into account the relations with other tables. This can be needed to have all the fields for filtering, or additional data for displaying on the screen.
Within the ORM tool, these relations are represented by the ManyToOne, OneToMany, or ManyToMany mapping.

The ORM can decide to load the information of these related tables eagerly, by including the table in the main query using a JOIN clause, or lazily, by issuing additional queries for the details after the main query is executed.

But this lazy case introduces the ‘N+1 select issue’: after retrieving the results of the main query, which has N rows, the ORM tool launches N additional queries to retrieve the detail collection of each row.

So is eager loading preferred over lazy loading then? No, not at all. Since many tables are connected, eager loading in most cases retrieves information from tables that are not needed, making queries complex and slow. You must decide on an individual basis whether the data is required or not.

Some real-world cases and best practices

In my 20+ year career as a Java developer, I was called in on many projects that were already in production and experienced issues or needed some advanced functionality.
In almost all cases, the issues could be traced back to how the ORM mapping and tool were used in the project.

I’ll briefly discuss some cases and explain the solutions that I applied to the problems.

One case is about the eager loading and the lack of a proper design for the Entity layer.
They called me in on a project where there was a performance problem on the main page of the application, which showed some kind of overview, a minimal dashboard, for the user. The page showed about 10 values, so not much information, but it actually took about 45 seconds to load.

The reason was quickly found when I activated SQL tracing to see which queries were sent to the database. To get the data for the main page, 1329 queries were executed. The developers had written only 5 queries, but the team used the lazy loading configuration and did not specify any fetching strategy on the queries themselves.

Since all tables are connected, which is the common case, queries that touch many tables or collect data from detail collections are not performant when relying on the standard ORM tool behaviour with the lazy option.

In this case, the usage of eager loading would not solve the problem, as the 5 queries would become very large, touching many tables within the database. These queries would also be very slow, and thus not a solution.

The solution was actually very simple: create 5 ‘native queries’ that retrieve the required values very efficiently from the database. The ORM tool can execute a native query. You can still make use of the default mapping, or retrieve a collection of values when the query does not return entire table rows but only some values. You bypass the ORM Query Language conversion to SQL, which is faster anyway, and you submit the ideal query to the database, again the fastest option.
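
As a hedged sketch of this approach, using JPA's createNativeQuery and a hypothetical dashboard value:

    // The SQL is sent as-is to the database, bypassing the ORM Query Language translation.
    Number openOrderCount = (Number) entityManager
            .createNativeQuery("SELECT COUNT(*) FROM orders o "
                    + "JOIN order_line l ON l.order_id = o.id "
                    + "WHERE o.status = 'OPEN'")
            .getSingleResult();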

So, if you haven’t done it already, take that intermediate or advanced SQL course so that you become a pro in writing complex queries. Use native queries when you touch 3 or more tables, as you can write them more efficiently than the ORM, which is designed for simple cases. Use the lazy fetching strategy, but indicate in each simple query whether you need the related data by using a JOIN FETCH clause.
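
A sketch of such an explicit fetch, reusing the hypothetical Order and OrderLine entities from earlier:

    // Explicitly request the children in this particular query, keeping the mapping itself lazy.
    List<Order> orders = entityManager
            .createQuery("SELECT DISTINCT o FROM Order o JOIN FETCH o.lines WHERE o.customerName = :name",
                    Order.class)
            .setParameter("name", name)
            .getResultList();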

The second case I want to discuss is a project where they relied heavily on Spring Data. They used the method name convention to define the query, or in several cases the @Query annotation to define what should be executed.

During development, everything went smoothly, but performance issues quickly arose when running in production. The difference: in production there are not 10 or 20 records but 10,000 and more.

Since the development team used Spring Data, they included many eager relations and also used ToMany relations extensively to get the info they needed ‘automagically’. Since these options result in joining many tables, in many cases not needed for the situation, the queries became slow once there were more records.

The solution in this case was again to rely on native queries that retrieve the data optimally, to reduce the usage of Spring Data interface methods, and to define the fetch strategy within the queries.

In general, avoid, or simply don’t use at all, the ToMany relations, as they are not available ‘naturally’ in databases and require complex or additional queries.
Avoid the Data frameworks when querying more than 1 table, or don’t use them at all, since writing 3 statements is not a problem. Or is it?
Avoid eager loading definitions and specify a JOIN FETCH clause when you need the related data.

This is of course a quick and limited overview. For example, did you know that you can’t use database pagination when you use JOIN FETCH? That you can’t use a sub-query in a JOIN in the ORM Query language?

Conclusion

The ORM tools solve a few important issues: the boilerplate code required with the JDBC API and the mapping of database field values to Java object properties. But they can introduce a lot of performance trouble in your projects, which is mostly only discovered when running in production with larger datasets.

So as a rule of thumb: do not use eager loading but use JOIN FETCH clauses when needed, do not use any ToMany relation as they make your queries complex, and write all your queries, other than the very simple single-table ones, yourself using native queries.

Since we should only use a very limited set of the functionality of ORM tools like Hibernate, why don’t we drop them altogether and just use a tool that reduces the boilerplate code and solves the mapping issue (type-safe column names and values)? Tools like jOOQ or the Exposed framework written in Kotlin are 2 examples of tools that implement the best practices I described here without the overhead of an ORM.
Interested in an introduction to the Exposed framework? I’ll give an overview in my next blog.

Training and support

Interested in a training about efficient ORM tool or Hibernate usage in your project? In need of an expert to help you solve a problem? Feel free to contact me.

Do you need a specific training session on Jakarta EE, Quarkus, Kotlin or MicroProfile? Have a look at the training support that I provide on the page https://www.atbash.be/training/ and contact me for more information.



Comparing JWT Token Usage in Spring Boot, Quarkus, Jakarta, and Kotlin Ktor: A Framework Exploration – Part 4

Since this topic became very extensive, I decided to split up the blog into 4 parts to keep the length manageable. Here is the split:

Part 1: Introduction
Part 2: Payara, Spring Boot and Quarkus
Part 3: Ktor and Atbash Runtime
Part 4: Discussion and conclusion (this one)

For an introduction to JWT tokens, you can have a look at the first part of this blog. It also contains a description of how the Keycloak service is created for the example programs described in this part.
Parts 2 and 3 contain the description of the example application for each runtime.

Discussion

In parts 2 and 3, I showed the most important aspects of using a JWT token with Payara Micro (Jakarta EE), Spring Boot, Quarkus, Kotlin Ktor, and Atbash Runtime. The JWT tokens themselves are standardised, but how you must use them in the different runtimes is not defined and thus differs. Although the MicroProfile JWT Auth specification exists, even those runtimes that follow it have differences in how it should be activated and how roles should be verified, especially when you don’t want to check a role. The specification, besides duplicating a few things from the JWT specification itself, like how validation needs to be done, only defines how a MicroProfile application should retrieve claim values.

It is obvious that for each runtime we need to add some dependency that brings in the code to handle the JWT tokens. But for several of these runtimes, you also need to activate the functionality. This is the case for Payara Micro, through the @LoginConfig annotation, and also for Atbash Runtime, since the functionality is provided there by a non-core module.

Another configuration aspect is the definition of the location of the certificates. Spring Boot is the only one that makes use of the OAuth2 / OpenID Connect well-known endpoint for this. The other runtimes require you to specify the URL where the keys can be retrieved in a certain format. This allows for more flexibility, of course, and potential support for providers that do not follow the standard in its full extent. But since we are talking about security, it would probably be better that only certified, properly tested providers are used, as is the case with the Spring Boot implementation.

The main difference in using a JWT token between the runtimes is how the roles are verified. It is not specified which claim should hold the role names, nor is it defined how the authorization should be performed. This leads to important differences between the runtimes.

Within Kotlin Ktor, we should define a security protocol for each different role we want to check and assign it a name. Or you create a custom extension function that allows you to specify the role at the endpoint, as I have done in the example. But it is important to note that we need to be explicit in each case: we need to indicate which role is required, or that no role at all is required.

This is not the case for the other runtimes, except the Atbash Runtime.

When you don’t use any annotation on the JAX-RS method with Payara Micro and Spring Boot, no role is required, only a valid JWT token. But with Quarkus, when not specifying anything, the endpoint becomes publicly accessible. This is not a good practice, because when you as a developer forget to put an annotation, the endpoint becomes available for everyone, or at least for any authenticated user on certain runtimes. This violates the “principle of least privilege”, which says that by default a user has no rights and you explicitly need to define who is allowed to call that action. That is the reason why Atbash Runtime treats the omission of an annotation to check on roles as an error: it hides the endpoint and shows a warning in the log.

If you do not want to check for a role when using Atbash Runtime, you can annotate the JAX-RS method with @PermitAll. The JavaDoc says “Specifies that all security roles are allowed to invoke the specified method(s)” and thus it is clearly about the authorization on the endpoint. But if you use @PermitAll in Payara Micro, the endpoint becomes publicly accessible, also dropping authentication. That is not the intention of the annotation, if you ask me. Although the JavaDoc might be to blame for this, as it also mentions that the specified method(s) are “unchecked”, which might be interpreted as no check at all.
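
To make these differences concrete, here is a small, hypothetical JAX-RS resource with the behaviour described above summarised in comments (a sketch, not taken from the example projects):

    @Path("/protected")
    public class ProtectedResource {

        @GET
        @Path("/user")
        // No security annotation:
        //  - Payara Micro / Spring Boot: a valid JWT token is enough, no role check.
        //  - Quarkus: the endpoint becomes publicly accessible.
        //  - Atbash Runtime: treated as an error; the endpoint is hidden and a warning is logged.
        public String user() {
            return "user";
        }

        @GET
        @Path("/any-role")
        @PermitAll
        // Atbash Runtime: any authenticated caller is allowed, no role check.
        // Payara Micro: the endpoint becomes publicly accessible, authentication is also dropped.
        public String anyRole() {
            return "any role";
        }
    }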

Conclusion

All major frameworks and runtimes have support for using JWT tokens within your application to authenticate and authorise a client call to a JAX-RS endpoint. When you add the necessary dependency to have the code available and some minimal configuration, like defining where the keys can be retrieved to verify the signature, you are ready to go. The only exception here might be Kotlin Ktor, where you are confronted with a few manual statements about the verification and validation of the token; it is not completely hidden away.

The most important difference lies in how the check for the roles is done, especially in the case where we don’t require any role, just a valid JWT token. Only Atbash Runtime applies the “principle of least privilege”. On the other runtimes, forgetting to define a check for a role means that the endpoint becomes accessible to any authenticated user or, even worse, publicly accessible.

There is also confusion around @PermitAll, which according to the JavaDoc is about authorization, but in a Jakarta EE runtime like Payara Micro the endpoint suddenly becomes publicly accessible.

Interested in running an example on the mentioned runtimes? Check out the directories in the https://github.com/rdebusscher/Project_FF/tree/main/jwt repo, which work with Keycloak as the provider.

Training and Support

Do you need a specific training session on Jakarta EE, Quarkus, Kotlin or MicroProfile? Have a look at the training support that I provide on the page https://www.atbash.be/training/ and contact me for more information.


Comparing JWT Token Usage in Spring Boot, Quarkus, Jakarta, and Kotlin Ktor: A Framework Exploration – Part 3

Since this topic became very extensive, I decided to split up the blog into 4 parts to keep the length manageable. Here is the split:

Part 1: Introduction
Part 2: Payara, Spring Boot and Quarkus
Part 3: Ktor and Atbash Runtime (this one)
Part 4: Discussion and conclusion

For an introduction to JWT tokens, you can have a look at the first part of this blog. It also contains a description of how the Keycloak service is created for the example programs described in this part.
Part 2 contains the description for Payara Micro, Spring Boot and Quarkus.

Ktor

Ktor also has excellent support for using JWT tokens, although we need to code a little bit more if we want support for rotating public keys and easy checks on the roles within the tokens.

But first, let us start again with the dependencies you need within your application.

        <!-- Ktor authentication -->
        <dependency>
            <groupId>io.ktor</groupId>
            <artifactId>ktor-server-auth-jvm</artifactId>
            <version>${ktor_version}</version>
        </dependency>
        <!-- Ktor support for JWT -->
        <dependency>
            <groupId>io.ktor</groupId>
            <artifactId>ktor-server-auth-jwt-jvm</artifactId>
            <version>${ktor_version}</version>
        </dependency>

We need a dependency to add the authentication support and another one for having the JWT token as the source for authentication and authorisation.

Just as with the Payara and Quarkus case, we need to define the location to retrieve the public key, expected issuer, and audience through the configuration of our application. In our example application, this is provided in the application.yml file.

jwt:
  issuer: "http://localhost:8888/auth/realms/atbash_project_ff"
  audience: "account"

We read these values programmatically in our own code, so the keys can be whatever you like; they are not predetermined as with the other runtimes. In the example, you see that we also don’t define the location of the public key endpoint, as we can derive it from the issuer value in the case of Keycloak. But you are free to specify a specific URL for this value, of course.

Configuration of the modules in Ktor is commonly done by creating an extension function on the Application object, as I have also done in this example. This is the general structure of this function:

fun Application.configureSecurity() {

    authentication {
        jwt("jwt-auth") {
            realm = "Atbash project FF"
            // this@configureSecurity refers to Application.configureSecurity()
            val issuer = this@configureSecurity.environment.config.property("jwt.issuer").getString()
            val expectedAudience = this@configureSecurity.environment.config.property("jwt.audience").getString()
            val jwkUrl = URL("$issuer/protocol/openid-connect/certs")
            val jwkProvider = UrlJwkProvider(jwkUrl)

            verifier {
        // not shown for brevity              

            }

            validate { credential ->
                // If we need validation of the roles, use authorizeWithRoles
                // We cannot define the roles that we need to be able to check this here.
                JWTPrincipal(credential.payload)
            }

            challenge { defaultScheme, realm ->
                // Response when verification fails
                // Ideally should be a JSON payload that we sent back
                call.respond(HttpStatusCode.Unauthorized, "$realm: Token is not valid or has expired")
            }
        }
    }

}

The function jwt("jwt-auth") { indicates that we define an authentication protocol based on JWT tokens and name it jwt-auth. We can name it differently and can even have multiple protocols in the same application, as long as we correctly indicate which protocol name we want at the endpoint.

The JWT protocol in Ktor requires 3 parts: a verification part, a validation part, and lastly how the challenge is handled.

The verification part defines how the verification of the token is performed and will be discussed in more detail in a moment. We can do further validation on the token by looking at the roles that are in the token. If you have many different roles, this leads to many different named JWT protocols. Therefore I opted in this example to write another extension function on the Route object that handles this requirement more generically. And the challenge part is executed to formulate a response for the client in case the validation of the token failed.

The verifier method defines how the verification of the token is performed. We make use of the UrlJwkProvider, which can read the keys in the JWKS format, a JSON format for keys. But it doesn’t try to reread the endpoint in case a key is not found. This also means we cannot apply rotating keys for signing the JWT tokens, which is recommended in production. Therefore, we make use of a small helper which caches the keys but reads the endpoint again when a key is not found. This functionality could be improved to avoid a DoS attack where someone calls your endpoint with random key ids, which would put Keycloak or the JWT token provider under stress.

            val jwkProvider = UrlJwkProvider(jwkUrl)

            verifier {
                val publicKey = PublicKeyCache.getPublicKey(jwkProvider, it)

                JWT.require(Algorithm.RSA256(publicKey, null))
                    .withAudience(expectedAudience)
                    .withIssuer(issuer)
                    .build()

            }

The other improvement that you can find in the example is the validation part. Since you only have the credential as input for this validation, you can check if the token has a certain role, but you can’t make this check dynamic based on the endpoint. As mentioned, this would mean that for each role that you want to check, you should make a different JWT check.

The example contains an extension function on the Route object so that you can define the role that you expect. This is how you can use this new authorizeWithRoles function

        authorizeWithRoles("jwt-auth", listOf("administrator")) {
            get("/protected/admin") {
                call.respondText("Protected Resource; Administrator Only ")
            }
        }

So besides the name for the protocol we like to use, you can also define a set of roles that you expect to be in the token. The function itself is not that long but a little complex because we add a new interceptor in the pipeline used by Ktor to handle the request. If you want to look at the details, have a look at the example code.

If you just need a valid token, without any check on the roles, you can make use of the standard Ktor functionality

        authenticate("jwt-auth") {
            get("/protected/user") {
                val principal = call.authentication.principal<JWTPrincipal>()
                //val username = principal?.payload?.getClaim("username")?.asString()
                val username = principal?.payload?.getClaim("preferred_username")?.asString()
                call.respondText("Hello, $username!")
            }
        }

This last snippet also shows how you can get access to the claims within the token. You can access the principal associated with the request through call.authentication.principal<JWTPrincipal>(), where you immediately cast to the JWTPrincipal class. This makes the entire token content easily accessible from within your Kotlin code, as you can see in the example where I retrieve the preferred_username.

You can review all code presented here in the example https://github.com/rdebusscher/Project_FF/tree/main/jwt/ktor.

Atbash Runtime

Atbash Runtime is a small modular Jakarta EE Core Profile runtime, so by default it doesn’t have support for using JWT tokens. But since these tokens are the de facto standard, there is an Atbash Runtime module that supports them so that you can use it for your application.

As a dependency, you can add this JWT supporting module to your project

        <dependency>
            <!-- Adds JWT Support in the case we are using the Jakarta Runner, no addition of the MP JWT Auth API required -->
            <!-- Otherwise, when not using Jakarta Runner, the addition of JWT Auth API as provided is enough if you are using Atbash Runner Jar executable -->
            <groupId>be.atbash.runtime</groupId>
            <artifactId>jwt-auth-module</artifactId>
            <version>1.0.0-SNAPSHOT</version>
        </dependency>

Since we use the Jakarta Runner feature of the Atbash runtime, which allows you to execute your web application through a simple main method, we need to add the module itself. If you run your application as a war file, make sure you activate the JWT module within the configuration so that the module is active.

The JWT support within Atbash runtime is also based on the Microprofile JWT Auth specification, so you will see many similarities with the Payara and Quarkus examples we have discussed in part 2 of this blog.

Configuration requires the 3 values for public key location, expected issuer, and audience.

mp.jwt.verify.publickey.location=http://localhost:8888/auth/realms/atbash_project_ff/protocol/openid-connect/certs
mp.jwt.verify.issuer=http://localhost:8888/auth/realms/atbash_project_ff
mp.jwt.verify.audiences=account

You are also required to add the @LoginConfig annotation (in case you are executing your application as a WAR file) so that the JWT module is active for the application. But there is no need to define @DeclareRoles, as Atbash Runtime takes the values of the individual @RolesAllowed annotations as the valid roles.

A difference with Payara, for example, is that you need to add @PermitAll to a method when you don’t want to check on any roles. Atbash Runtime implements the “principle of least privilege”: if you don’t specify anything on a JAX-RS method, no client can call it. This avoids that you forget to define some security requirements and expose the endpoint without any checks. The JavaDoc of @PermitAll says “Specifies that all security roles are allowed to invoke the specified method(s)”, and thus it is clearly what we need. Although some runtimes, including Payara, interpret this differently; I’ll go deeper into this topic in part 4.

The example code is located at https://github.com/rdebusscher/Project_FF/tree/main/jwt/atbash.

Discussion

In the last part of the blog, I’ll have a discussion about similarities and differences. These differences are especially important when you don’t want to have a check on a role within the token.

Part 1: Introduction
Part 2: Payara, Spring Boot and Quarkus
Part 3: Ktor and Atbash Runtime (this one)
Part 4: Discussion and conclusion

Training and Support

Do you need a specific training session on Jakarta EE, Quarkus, Kotlin or MicroProfile? Have a look at the training support that I provide on the page https://www.atbash.be/training/ and contact me for more information.


Comparing JWT Token Usage in Spring Boot, Quarkus, Jakarta, and Kotlin Ktor: A Framework Exploration – Part 2

Since this topic became very extensive, I decided to split up the blog into 4 parts to keep the length manageable. Here is the split:

Part 1: Introduction
Part 2: Payara, Spring Boot and Quarkus (this one)
Part 3: Ktor and Atbash Runtime
Part 4: Discussion and conclusion

For an introduction to JWT tokens, you can have a look at the first part of this blog. It also contains a description of how the Keycloak service is created for the example programs described in this part.

Payara

As an example of how you can work with JWT tokens with Jakarta EE and MicroProfile, we make use of Payara Micro.

The JWT Token support is provided by MicroProfile, so add the dependency to your project.

        <dependency>
            <groupId>org.eclipse.microprofile</groupId>
            <artifactId>microprofile</artifactId>
            <version>6.0</version>
            <type>pom</type>
            <scope>provided</scope>
        </dependency>

We are using MicroProfile 6, which requires a Jakarta EE 10 runtime, as this is the version that is supported by the Payara Micro Community edition.

As configuration, we need to provide the endpoint where the MicroProfile JWT Auth implementation can retrieve the public key that is required to validate the content of the token against the provided signature. This can be done by specifying the mp.jwt.verify.publickey.location configuration key. Two other configuration keys are required: one that defines the expected issuer of the token, and one that defines the expected audience claim.
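
Assuming the Keycloak realm used in this series, these keys could look as follows in META-INF/microprofile-config.properties (the same values appear again in the Atbash Runtime configuration in part 3):

mp.jwt.verify.publickey.location=http://localhost:8888/auth/realms/atbash_project_ff/protocol/openid-connect/certs
mp.jwt.verify.issuer=http://localhost:8888/auth/realms/atbash_project_ff
mp.jwt.verify.audiences=account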

Another configuration aspect is the indication that a JWT token will be used for authentication and authorization of the endpoints, through the @LoginConfig annotation. The @DeclareRoles annotation is a Jakarta EE annotation that indicates which roles are recognised and can be used. These annotations can be placed on any CDI bean.

@LoginConfig(authMethod = "MP-JWT")
@DeclareRoles({"administrator"})

On the JAX-RS method, we can add the @RolesAllowed annotation to indicate the role that must be present in the token before the client is allowed to call the endpoint.

    @GET
    @Path("/admin")
    @RolesAllowed("administrator")
    public String getAdminMessage() {

When there is no annotation placed on the method, only a valid JWT token is required to call the endpoint. Also, have a look at part 4 of this blog for some important info and differences between runtimes.

Through the MicroProfile JWT Auth specification, we can also access one or all the claims that are present in the token. The following snippet shows how you can access a single claim or the entire token in a CDI bean or JAX-RS resource class.

    @Inject
    @Claim("preferred_username")
    private String name;

    @Inject
    private JsonWebToken jsonWebToken;
    // When you need access to every aspect of the JWT token.

The entire example can be found in the project https://github.com/rdebusscher/Project_FF/tree/main/jwt/payara.

Spring Boot

Spring Boot also has excellent support for using JWT tokens for the authentication and authorization of REST endpoints. Besides the Spring Boot Security starter, the OAuth2 Resource Server dependency is required within your application. So you don’t need to handle the JWT token yourself in a programmatic way, as some resources on the internet claim.

In our example, we use Spring Boot 3 and JDK 17.

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-security</artifactId>
    </dependency>

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-oauth2-resource-server</artifactId>
    </dependency>

In contrast to MicroProfile, where you need to provide several configuration keys, Spring Boot makes use of the OpenID Connect specification, which defines that the endpoint .well-known/openid-configuration provides all the info. This includes the location of the public key required for the validation of the token against the signature and the value of the issuer. The issuer location can be specified through a Spring configuration resource.

spring.security.oauth2.resourceserver.jwt.issuer-uri=http://localhost:8888/auth/realms/atbash_project_ff
spring.security.oauth2.resourceserver.jwt.audiences=account

The audience value is not required to be defined; Spring Boot works without it. But it is a recommended configuration aspect to make sure that tokens are used correctly, especially when you use tokens for multiple applications.

You can define the requirements for the roles that should be present in the token using a Spring bean and the HttpSecurity builder (in Spring Boot 3 through a SecurityFilterChain bean, as the WebSecurityConfigurerAdapter class has been removed), but I prefer the method-based approach.
With this approach, you can define the required role using the @PreAuthorize annotation

    @GetMapping("/admin")
    @PreAuthorize("hasAuthority('administrator')")
    public String getAdminMessage() {

It makes it easier to find out which role is required before a client can call the endpoint, and also easier to verify that you didn’t make any error in the security configuration of your application. This method-based approach requires a small activation and a mapping between the roles within the token and the authority we check in the annotation.

@Configuration
@EnableMethodSecurity
public class MethodSecurityConfig {
}

The configuration for the JWT token roles is provided by a JwtAuthenticationConverter bean.

    @Bean
    public JwtAuthenticationConverter jwtAuthenticationConverter() {
        JwtGrantedAuthoritiesConverter grantedAuthoritiesConverter = new JwtGrantedAuthoritiesConverter();
        grantedAuthoritiesConverter.setAuthorityPrefix("");
        grantedAuthoritiesConverter.setAuthoritiesClaimName("groups");

        JwtAuthenticationConverter jwtAuthenticationConverter = new JwtAuthenticationConverter();
        jwtAuthenticationConverter.setJwtGrantedAuthoritiesConverter(grantedAuthoritiesConverter);
        return jwtAuthenticationConverter;
    }

Within the REST methods, we can have access to the JWT token claims, just as with the Jakarta EE and MicroProfile example. We need to add a JwtAuthenticationToken parameter to the method which allows access to claims through the getTokenAttributes() method.

    @GetMapping("/user")
    public String getUser(JwtAuthenticationToken authentication) {
        Object username = authentication.getTokenAttributes().get("preferred_username");

The entire example can be found in the project https://github.com/rdebusscher/Project_FF/tree/main/jwt/spring.

Quarkus

The Quarkus support is also based on MicroProfile, so you will see several similarities with the Payara case I described earlier. The Quarkus example is based on the recent Quarkus 3.x version. As a dependency, we need two artifacts related to the JWT support provided by the SmallRye project. Although at first sight it seems you do not need the build one, as it is about creating JWT tokens within your application, the example did not work without it.

        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-smallrye-jwt</artifactId>
        </dependency>
        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-smallrye-jwt-build</artifactId>
        </dependency>

Since the SmallRye JWT implementation also uses the MicroProfile JWT Auth specification, the configuration through key-value pairs is identical to the Payara one. We need to define the location of the public key and the expected values for the issuer and audience. In the example, I have defined them in the application.properties file, a Quarkus-specific configuration source. But as long as they can be retrieved through any of the supported configuration sources, it is OK.

Since Quarkus is not a Jakarta-compliant runtime, it doesn’t require any indication that the application will make use of the JWT tokens for authentication and authorisation. The existence of the two dependencies we added earlier to the project is enough. In this, it is similar to the Spring Boot case, where we also did not need such an indication.

On the JAX-RS resource methods, we can indicate whether we need a certain role within the token, or that just the token itself is required without any specific role. If a role is required, we can make use of the same @RolesAllowed annotation we encountered in the Payara example; if we just need a valid token, we add the @Authenticated annotation.

    @GET
    @Path("/admin")
    @RolesAllowed("administrator")
    public String getAdminMessage() {
        return "Protected Resource; Administrator Only ";
    }

    @GET
    @Path("/user")
    @Authenticated
    // No roles specified, so only valid JWT is required
    public String getUser() {
        return "Protected Resource; user : " + name;
    }

This @Authenticated annotation is defined in the Quarkus Security artifact, brought in transitively, and indicates that an authenticated user is required. Without this annotation, the endpoint would become publicly accessible, without the need for any token or authentication method.

More on that in part 4 of this blog.

The retrieval of the claims is again identical to the Payara case. The example project can be found at https://github.com/rdebusscher/Project_FF/tree/main/jwt/quarkus.

Runtimes

The Ktor and Atbash Runtime versions of the example application are described in part 3.

Part 1: Introduction
Part 2: Payara, Spring Boot and Quarkus (this one)
Part 3: Ktor and Atbash Runtime
Part 4: Discussion and conclusion

Training and Support

Do you need a specific training session on Jakarta EE, Quarkus, Kotlin or MicroProfile? Have a look at the training support that I provide on the page https://www.atbash.be/training/ and contact me for more information.


Comparing JWT Token Usage in Spring Boot, Quarkus, Jakarta, and Kotlin Ktor: A Framework Exploration – Part 1

Since this topic became very extensive, I decided to split up the blog into 4 parts to keep the length manageable. Here is the split:

Part 1: Introduction (this one)
Part 2: Payara, Spring Boot and Quarkus
Part 3: Ktor and Atbash Runtime
Part 4: Discussion and conclusion

But don’t worry, all 4 parts will be released within the same week, so those who are eager to read it in one go do not need to wait long before the series is complete.

Introduction

As the demand for secure and efficient authentication and authorization mechanisms grows, JSON Web Tokens (JWT) have emerged as a favored choice for developers. JWT tokens provide a modern approach to verifying user identity and defining access privileges within web applications. In this blog post, we will delve into the usage of JWT tokens across various frameworks, namely Spring Boot, Quarkus, Jakarta, and Kotlin Ktor. By comparing their implementation approaches, we aim to provide insights into how JWT tokens are utilized within each framework and help you make a transition from one to another easier.

Understanding the Basics of JWT Tokens

At the core of JWT tokens lies a simple yet powerful structure that encompasses all the necessary information for secure authentication and authorization. Let’s dive into the basics of JWT tokens and explore their three essential components: the header, the body, and the signature.

1. Header

The header of a JWT token contains metadata about the token itself and the algorithms used to secure it. It typically consists of two parts: the token type, which is always “JWT,” and the signing algorithm employed, such as HMAC, RSA, or ECDSA. This header is Base64Url encoded and forms the first part of the JWT token.

2. Body (Payload)

The body, also known as the payload, carries the actual data within the JWT token. It contains the claims, which are statements about the user and additional metadata. Claims can include information like the user’s ID, name, email, or any other relevant data. The payload is also Base64Url encoded and forms the second part of the JWT token.

3. Signature

The signature is the crucial component that ensures the integrity and authenticity of the JWT token. It is created by combining the encoded header, the encoded payload, and a secret key known only to the server. The signature is used to verify that the token has not been tampered with during transmission or storage. It acts as a digital signature and prevents unauthorized modifications to the token. The signature is appended as the third part of the JWT token.
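
As a purely illustrative sketch (claim names and values are examples, matching the Keycloak setup used later in this series), a decoded token could look like this:

    Header:    { "alg": "RS256", "typ": "JWT" }
    Payload:   { "iss": "http://localhost:8888/auth/realms/atbash_project_ff",
                 "aud": "account",
                 "preferred_username": "some_user",
                 "groups": ["administrator"],
                 "exp": 1687160000 }
    Signature: <Base64Url-encoded signature over header.payload>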

Self-Contained and Secure

One of the significant advantages of JWT tokens is their self-contained nature. Since all the necessary information is embedded within the token itself, there is no need for additional database queries or session lookups during authentication and authorization processes. This inherent characteristic contributes to improved performance and scalability.

To verify the authenticity and integrity of a JWT token, the recipient needs access to the public key or shared secret used to generate the signature. By retrieving the public key or shared secret, the recipient can verify the token’s signature and ensure that no tampering or unauthorized modifications have occurred. This mechanism provides a robust security layer, assuring that the token’s contents can be trusted.

User Roles in JWT Tokens

JWT tokens can also include user roles as part of their payload. User roles define the permissions and privileges associated with a particular user. By including this information in the JWT token, applications can determine the user’s authorization level and grant or restrict access to specific resources or functionalities accordingly. This granular approach to authorization allows for fine-grained control over user permissions within the application.

In the upcoming sections, we will explore how different frameworks incorporate these fundamental JWT token concepts into their authentication and authorization workflows. Understanding the core principles behind JWT tokens sets the stage for a comprehensive comparison, enabling us to evaluate the strengths and nuances of each framework’s implementation.

Example application

The same example application is made with the different runtimes. It contains a couple of endpoints, which all require a valid token before they can be executed. One of the endpoints requires that the token contains the role of administrator.

GET /protected/user -> Hello username
GET /protected/admin -> Protected Resource; Administrator Only

The tokens utilised in our example are sourced from Keycloak, a reliable and widely adopted Authorization provider. Keycloak offers various standard flows for obtaining these tokens, catering to diverse authentication scenarios.

One of the commonly employed flows is the authorization code flow, which involves user interaction through dedicated screens provided by the Authorization provider. Users are prompted to log in and provide their credentials, following which Keycloak generates the necessary tokens for authentication and authorization purposes.

Alternatively, Keycloak supports a username and password-based approach where users can submit their credentials to a designated endpoint. This method allows Keycloak to validate the provided information and issue the relevant tokens required for subsequent authentication and authorization processes.

For our example, a custom realm with a configuration that is suitable for all our runtimes is created by setup_jwt_example.py, which can be found in the directory https://github.com/rdebusscher/Project_FF/tree/main/jwt/keycloak. The script prepares the realm and an OpenID Connect client so that, in response to a valid user name and password combination, a JWT token with the roles of the user is returned. It also creates two users, one of them having the admin role.

The Python script test_jwt_example.py can be used to test out the solution in each of the runtimes. It calls both endpoints with the two users that are defined. And so, one of the calls will result in an error since the non-administrator user is not allowed to call the administrator endpoint.

Runtimes

The different runtimes are discussed in part 2 and part 3 of this series.

Part 1: Introduction (this one)
Part 2: Payara, Spring Boot and Quarkus
Part 3: Ktor and Atbash Runtime
Part 4: Discussion and conclusion

Training and Support

Do you need a specific training session on Jakarta EE, Quarkus, Kotlin or MicroProfile? Have a look at the training support that I provide on the page https://www.atbash.be/training/ and contact me for more information.


Support for Database and Jakarta EE 10 within Jakarta EE Integration Testing Framework

In this release of the Atbash Integration Testing framework, we continued improving the support for creating customised Docker images that run your application under test, but we also added 2 major features: support for databases and for Jakarta EE 10.

Depending on the target version of your application, you can use version 1.2.0 if you are building a Jakarta EE 8 application, or 2.2.0 if your application depends on Jakarta EE 10. More on that later in this blog.

In previous blogs, we covered the basics of using the Integration testing framework. You can have a look at this blog for the introduction and the functionality that was provided in version 1.0.

With the second release, we added the option to use a custom Docker build file that will be used for the container with the application. And we added support for using Wiremock to mock the response of remote services. More details can be found in this blog.

Update: the presentation at Jakarta Tech Talks can be viewed on the YouTube channel.

Customised Build file

In this release, we continued adding support for customising the container that runs the application you want to test. You already had the option to supply a custom Docker build file to which the framework only adds the appropriate Docker ADD command for the WAR file. Now you can also customise the build file just before it is sent to the Docker engine for processing.

There is a DockerImageAdapter interface that can be implemented by the developer to adjust a few lines of the build file, including the default build file provided by the framework.

This allows for simple additions that are future-compatible with possible changes that will be made to the default build file in future versions.

Implementations of the interface are picked up by the ServiceLoader mechanism within Java. So don’t forget to define your class name in the appropriate service file.

Since there might be more than one implementation on the classpath, you can order them by adding a @Priority annotation. The implementations with a low value are called first. If no priority value is specified, a default value of 100 is taken. And of course, when multiple implementations have the same value, the exact order among them is undefined.
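
A hedged sketch of such an implementation (the class name and appended line are hypothetical; the exact package names come from the framework):

    // Hypothetical adapter that appends a LABEL instruction to the generated build file.
    @Priority(50)  // lower values are called first; the default is 100
    public class AddLabelImageAdapter implements DockerImageAdapter {

        @Override
        public String adapt(String dockerFileContent, TestContext testContext) {
            return dockerFileContent + "\nLABEL stage=integration-test";
        }
    }

As mentioned above, register the implementation by putting its fully qualified class name in the ServiceLoader file that matches the DockerImageAdapter interface.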

When you only need to add some environment variables to the container, there is no longer any need to define a custom build file. You can make use of the AdditionalEnvParameters class. Within an implementation of the DockerImageAdapter you can access this class as follows:

   public String adapt(String dockerFileContent, TestContext testContext) {
       AdditionalEnvParameters envParameters = testContext.getInstance(AdditionalEnvParameters.class);
       if (envParameters == null) {
           envParameters = new AdditionalEnvParameters();
           testContext.addInstance(envParameters);
       }
       // key and value are the name and value of the environment variable you want to add.
       envParameters.add(key, value);

       // Return the (unchanged) build file content.
       return dockerFileContent;
   }

The TestContext is an object that is added in this release and can be consulted for various information about the test that is running.
It holds, for example, the ContainerAdapterMetaData instance with all data about the test that is currently running.

And with AdditionalEnvParameters.add(), you can add key-value pairs as environment variables for the container.

Support for a database

In this version, it is now possible to define a database container and provide the configuration as a data source for the JPA functionality of your application.

To have this functionality, make sure you perform the following steps.

  • Add the Atbash Integration testing Database artefact to your project.
  • Add the @DatabaseContainerIntegrationTest annotation to the test class. The class must also extend from AbstractDatabaseContainerIntegrationTest.
  • Add the Database test container of your choice as a project dependency. This determines the database that will be started alongside the container with your application.
  • Add the JDBC driver as a test-scoped dependency. This must be a driver that works with the selected database. The driver is used for 2 purposes: it is added to the container with the application to provide access to the database, and it allows the setup of the test to prepare the database in a known state. You can also access the database tables during the test through this driver and DBUnit, which is provided automatically.
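
A hedged sketch of how such a test class could look, based purely on the steps above (the class, test method, and assertions are hypothetical):

    @DatabaseContainerIntegrationTest
    public class PersonResourceIT extends AbstractDatabaseContainerIntegrationTest {

        @Test
        public void addPerson_updatesDatabase() {
            // 1. Call an endpoint of the application running in the container.
            // 2. Use the prepared DBUnit access to verify the expected database changes.
        }
    }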

Once these things are in place, a test execution performs the following tasks.

  • Determine the Database container to start by looking at which JDBC database container is on the classpath.
  • Determine the JDBC driver
  • Add the JDBC Driver to the application container in a runtime-specific way.
  • Start the database container
  • Provide the connection settings as environment variables to the application container
  • Create the tables required for the test by executing a SQL script that the developer provided
  • Add the data to the tables by reading the Excel file.

A variable is prepared that allows you to access the database tables during the execution of your test. For this, the functionality of the DBUnit library is used. This allows you to check that a call to an application endpoint resulted in the expected database changes.

Have a look at the examples https://github.com/atbashEE/jakarta-integration-testing/tree/main/example/database to get started with this new exciting feature.

Jakarta EE 10

Until now, the Jakarta EE testing framework supported only Jakarta EE 8. There is now also an artifact available specifically for Jakarta EE 10. Version 2.x is based on Jakarta EE 10 dependencies and thus can be added as a test dependency to your project.

All functionality that is available for Jakarta EE 8, is also available for this EE 10 version.

Conclusion

With this new release of the testing framework, more options are available to customise the container image that runs your application during the test. For example, you now have the possibility to specify the environment variables and values that need to be added without using a custom Docker build script.

The two major features added in this version are of course the integration with a database container, which makes testing much easier when your application uses a database, and the support for Jakarta EE 10.

Enjoy

Training and Support

Do you need a specific training session on Jakarta or MicroProfile? Or do you need some help in getting up and running with your next project? Have a look at the training support that I provide on the page https://www.atbash.be/training/ and contact me for more information.


Using OpenTelemetry with Jakarta EE

OpenTelemetry (informally called OTEL or OTel) is a powerful observability framework that assists in generating and capturing telemetry data from your software. With the rise of distributed systems, it has become increasingly difficult for developers to see how their services depend on or affect other services, especially in critical situations such as deployments or outages. OpenTelemetry provides a solution to this problem by making it possible for both developers and operators to gain visibility into their systems.

It is the merger of two other standards: OpenTracing provided a vendor-neutral API for sending telemetry data over to an observability back-end, and OpenCensus provided a set of language-specific libraries that developers could use to instrument their code and send the data to any one of the supported back-ends.

As a CNCF incubating project, it provides a set of standardized, vendor-agnostic SDKs, APIs, and tools for ingesting, transforming, and sending data to an observability back-end. There is an implementation of the API for Java available, so integration with any Jakarta EE runtime is very easy and doesn’t require any additional implementation.

What does it do?

If you are not familiar with OTel or observability, then this little introduction is for you. Especially in a micro-services world, but also in other scenarios, it can be very helpful to trace individual requests and see where they spend their time. Not only does it capture how much time was needed before a response was sent back to the client, but it can also collect information about database calls, calls to other services, etc. You get information about each request and you can use it to analyse your environment and find the bottlenecks, for example. As the saying goes, “To measure is to know.” (Although that sounds much better in Dutch: “meten is weten”.)

You can analyse the information as all the different pieces of information are linked together and stored by a collector, a kind of data storage from which you can retrieve and analyse the information.

Observability is much more than just tracking a request throughout your environment. It is also about metrics and processing logs. In this blog, I’m concentrating on request tracing.

Out of the box

The Java implementation of OTel provides integration with Jakarta REST out of the box through a Java agent. You can find all the supported frameworks and libraries on this page. Some of them require a specific dependency; other frameworks, like JAX-RS, require only the Java agent itself.

Personally, I’m not a fan of Java agents. They of course allow you to add functionality without the need to change anything in your project. But I don’t like fiddling with the command line to add the Java agent and large sections to define the configuration. And in most cases, you will require the OpenTelemetry API in your application anyway, as you want to interact with the system and define, for example, which CDI methods you would like to be included in the trace.

Manual integration

Integrating the OTel functionality in the application yourself is also very straightforward. As an example, I created an integration of the OpenTelemetry Java API for Jakarta EE 10 with the following features.

  • Traces the JAX-RS requests that are processed.
  • Starts a new trace, or starts a new child span if the header indicates that tracing is already in progress.
  • You can include information about CDI method calls by using an annotation on the method.
  • A filter for JAX-RS clients is available to propagate the OpenTelemetry information in the request headers.
  • Automatically registers this filter when you have a MicroProfile Rest Client available.

And all this with around 10 classes.

What do you need to do?

If you are interested in creating this integration for your project yourself, have a look at the GitHub repository. It should give you all the information needed to create a specific integration yourself very rapidly.

If you want to use my version in your Jakarta EE 10 project, follow the following steps.

  • Add the maven artefact to your project
        <dependency>
            <groupId>be.atbash.jakarta</groupId>
            <artifactId>opentelemetry</artifactId>
            <version>0.9.0</version>
        </dependency>
  • Define the name of your service, and how it shows up in the OpenTelemetry collector, through a MicroProfile Config source with the key atbash.otel.service.name.
  • Configure, if needed, the OTel collector connection through environment variables as required by the OpenTelemetry Specification.
  • Use @WithSpan on a CDI method, or register RestClientPropagationFilter on your JAX-RS client if you want to make use of it (see the sketch after this list).
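
As an illustration of that last step, here is a minimal sketch of a CDI bean using @WithSpan. It assumes the @WithSpan annotation from the OpenTelemetry instrumentation annotations artifact; the exact annotation my integration expects may differ, so check the GitHub repository. The CustomerService class and its method are made-up examples.

import io.opentelemetry.instrumentation.annotations.WithSpan;
import jakarta.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class CustomerService {

    // This method call shows up as a child span of the active request span.
    @WithSpan
    public String loadCustomer(long id) {
        return "customer-" + id;
    }
}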

And that is it.

You can access the current Span by injecting it into your CDI bean.

@Inject
private Span currentSpan;

You can add attributes and events to the span using this instance. These attributes end up in your collector and events mark specific moments in the span.
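
For example, a short sketch using the injected span; the attribute key, value, and event name are purely illustrative.

// Add a custom attribute that ends up on the span in the collector.
currentSpan.setAttribute("customer.id", "12345");

// Mark a specific moment within the span.
currentSpan.addEvent("order validated");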

You can also make use of Baggage Items to carry information to other parts of the request. However, you should not see this as a replacement for passing parameters to methods and other services that are called. For that reason, the Baggage Items are not propagated externally by default. By adding the W3CBaggagePropagator, provided by the OTel Java implementation, to the OpenTelemetry instance, this becomes possible.
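
As a rough sketch of how that could look when building the OpenTelemetry instance yourself (this is an assumption about the setup, not the exact code of my integration; the classes come from the OpenTelemetry API and SDK artifacts and the rest of the SDK configuration is omitted):

// Combine trace context and baggage propagation so baggage items travel
// along with the trace headers to other services.
OpenTelemetry openTelemetry = OpenTelemetrySdk.builder()
        .setPropagators(ContextPropagators.create(
                TextMapPropagator.composite(
                        W3CTraceContextPropagator.getInstance(),
                        W3CBaggagePropagator.getInstance())))
        .build();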

To add something to a baggage item:

    Baggage.current()
            .toBuilder()
            .put("baggageItem", name)
            .build()
            .makeCurrent();

And to access it later on:

Baggage.current().asMap();

Conclusion

The goal of the OpenTelemetry specification is to give insight into what your process is doing when processing requests from users. Besides metrics and logging, an important part is tracing the requests through your entire environment. It is a language-agnostic specification, so the information is compatible regardless of the language the processes are programmed in. And there are implementations available for many languages, including Java.

This means that vendors or framework maintainers don’t need to do anything to get it integrated. And developers can just use the Java implementation if they need the functionality in their environment. There is a Java Agent, but manual integration can also be done very easily, as showcased by the Jakarta EE 10 integration code available in the GitHub repo.

Atbash Training and Support

Do you need a specific training session on Jakarta or MicroProfile? Or do you need some help in getting up and running with your next project? Have a look at the training support that I provide on the page https://www.atbash.be/training/ and contact me for more information.

Enjoy

Categories
Atbash Core Profile JAX-RS

Run your Jakarta Application without Runtime

Java EE, and now Jakarta EE, has the packaging requirement that your application must be bundled in a WAR or EAR file and executed by a runtime.
Traditionally, that runtime was an application server that was already configured and running, and you deployed the WAR file containing your application to it.

These days, with an application runtime, you can launch the runtime and at the same time deploy and run the application by specifying the WAR file location on the command line. This makes use of the executable JAR file functionality within Java.

An example with Payara Micro:

java -jar payara-micro.jar /path/to/myapplication.war

But now with the addition of the Java SE Bootstrap API to JAX-RS 3.1, this might change. And maybe you don’t need a runtime anymore.

Java SE Bootstrap API

The idea of the Java SE Bootstrap API is that you can start the JAX-RS implementation from the public static void main method. Some implementations already had support for this in the past, but it is now part of the JAX-RS specification and thus usable with any certified implementation.

You might wonder how this is possible without using a servlet container.

When your application or server has REST endpoints that can be called by a client (application), you indeed need to capture the HTTP request. Most of the time, this is done using a servlet container, and a specific servlet is responsible for calling the correct JAX-RS method depending on the URL.

But actually, you don’t need a servlet for that. Any piece of code that listens on a socket can perform this task.

There is even a simple HTTP server within the JDK itself, available since Java SE 6, that can be used for this purpose. But other libraries and products, like Netty, can also be used.

REST without server

Now that you have an API to start your JAX-RS server from within your code, like from within your main method, you can have your application respond to user requests.

The following snippet shows what is needed to realise that.

   SeBootstrap.Configuration.Builder configBuilder = SeBootstrap.Configuration.builder();

   // Configure the protocol, host, and port the embedded HTTP server listens on.
   configBuilder.property(SeBootstrap.Configuration.PROTOCOL, "HTTP")
                .property(SeBootstrap.Configuration.HOST, "localhost")
                .property(SeBootstrap.Configuration.PORT, 8080);

   // Start the JAX-RS implementation; start() returns a CompletionStage with the running instance.
   SeBootstrap.start(new DemoApplication(), configBuilder.build());

The DemoApplication class is the class that extends the JAX-RS Application class and provides all JAX-RS resource classes through the getClasses() method.
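
A minimal sketch of what that class could look like; HelloResource is just a placeholder for your own resource classes.

import java.util.Set;

import jakarta.ws.rs.core.Application;

public class DemoApplication extends Application {

    @Override
    public Set<Class<?>> getClasses() {
        // Register the JAX-RS resource classes of the application.
        return Set.of(HelloResource.class);
    }
}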

But you probably need more than just the JAX-RS support of Jakarta EE. What about CDI, JSON handling, etc …

Jersey already has an artefact that integrates Weld, a CDI implementation that can also be started from pure Java SE since CDI 2.0 (Java EE 8), and JSON support, JSON-P and JSON-B, through the Jersey media support modules for JSON.

So, you already have all the specifications that make up the Jakarta EE Core Profile available without the need for any server. An example of such a setup can be found in this example project.

Note that this option is different from the embedded server support some vendors provide. There is no vendor-specific glue code or additional functionality in action in this case, just some minimal linking between the different specifications.

Really, no server?

Yes, this is now possible and might be the right choice for your situation. It allows you to run your optimised combination of the specifications you need for the application, right from your main method in Java. Similar to Spring Boot, Jakarta EE can now be used in a truly modular way and no longer requires the potentially large set of specifications provided by a runtime that you don’t need.

But you might need to write some classes to better integrate some specification implementations beyond the ones of the Core Profile.

Atbash Runtime

And that is also the goal of the Atbash Runtime: a limited set of code to integrate the specifications seamlessly but keep the modular aspect.

That is the reason for the addition of the JakartaRunner class, which can be used to start your application from the main method. At the same time, it uses the module loader of Atbash Runtime so that all specification implementations that are on the classpath are integrated automatically, just like with Spring Boot.

It also adds a logging module on top, so you already have one less concern in making your application production-ready.

You can find an example of this approach in this project.

Conclusion

With the addition of the Java SE Bootstrap API to JAX-RS, it now becomes possible to create your own custom, modular runtime that can be started from the main method. This avoids the overhead of the potentially many specifications and functionality of a runtime that are not needed for your case.

Atbash Runtime also supports this mode by providing an easy-to-use API that starts the module system of the product behind the scenes and seamlessly integrates the different specifications that are found on the classpath.

Want to learn more about Jakarta EE 10 Core Profile?

This topic was discussed during my JakartaOne LiveStream event of 2022. Have a look at the presentation “Explore the new Jakarta EE Core Profile” if you want to know more about the Core Profile.

Atbash Training and Support

Do you need a specific training session on Jakarta or MicroProfile? Or do you need some help in getting up and running with your next project? Have a look at the training support that I provide on the page https://www.atbash.be/training/ and contact me for more information.

Enjoy

Categories
Atbash Core Profile Jakarta EE

CDI 4.0 Lite and Potential Pitfalls

The CDI Lite specification is specifically created for the new Jakarta EE 10 Core Profile. This lite version allows the discovery of CDI beans and the preparation of the CDI container at build time.

This blog goes a bit deeper into a potential problem related to CDI 4.0 Lite that you can encounter when building Core Profile applications and running them on a Jakarta runtime.

CDI Lite

To populate the CDI container with all the beans and the metadata needed to create beans on demand, like for the RequestScoped context, the CDI code needs to perform some initialisation.

Until now, the only available option was to scan the classpath and find the classes eligible as CDI beans. Another source of beans is the CDI Portable Extensions, which can register additional beans programmatically from the runtime or the application itself.

In the last few years, there have been many initiatives to reduce the startup time of applications in various ways. The scanning of the classpath is a process that can be moved to the build phase if you have all classes available at build time. As you know, the JVM allows additional classes and libraries to be added at runtime that were not available at build time. So, if you move the metadata and bean discovery to build time, you need to make sure all classes are known and available, or you need to provide the info yourself.

The CDI Lite specification introduced the BuildCompatibleExtension interface to make it possible to move the scanning to the build phase of your application. This mechanism works similarly to the CDI Portable Extensions: you have the opportunity to execute some methods during one of the 5 phases of the process.
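
As an illustration, here is a minimal sketch of such an extension. It only uses the standard CDI 4.0 build compatible extension API; the MyBuildExtension and com.example.ExtraBean names are made up. The extension is registered through the usual java.util.ServiceLoader mechanism (a META-INF/services file).

import jakarta.enterprise.inject.build.compatible.spi.BuildCompatibleExtension;
import jakarta.enterprise.inject.build.compatible.spi.Discovery;
import jakarta.enterprise.inject.build.compatible.spi.ScannedClasses;

public class MyBuildExtension implements BuildCompatibleExtension {

    // Runs in the discovery phase, the first of the 5 phases; here you can
    // register classes that the default bean discovery would not find.
    @Discovery
    public void discover(ScannedClasses classes) {
        classes.add("com.example.ExtraBean");
    }
}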

At the same time, some classes are marked as not relevant for CDI Lite, as they are not really useful for applications built with the Core Profile. Examples are the SessionScoped and ConversationScoped contexts, since REST applications are typically stateless and keeping session information is in this case not required.

One single API artefact

The underlying narrative for creating the CDI Lite solution makes sense, but the single API solution is less obvious.

CDI Lite contains most of the classes of the previous version of CDI and a few additional ones, like the BuildCompatibleExtension interface.
CDI Full comprises CDI Lite and a few additional concepts, like the SessionScoped annotation, that were available in the previous version of CDI but didn’t fit into the CDI Lite concept.

But the CDI 4.0 API artefact contains all classes: those of CDI Lite and those of CDI Full.

Your only option is to add this single CDI API dependency to your project, with scope provided, since the runtime already has these classes. And when your application is compiled and built against this API, it might still fail at runtime.

As a developer, you could have used or referenced a class from the CDI Full specification while targeting a Core Profile certified runtime. This runtime only needs to support the CDI Lite classes and is not required to implement the other ones of CDI Full. So the application you successfully built fails at runtime, and you didn’t get notified of this problem during build time.

Testing for CDI Full classes usage

For this reason, I have created a small library that can inform you when you made use of one of the CDI Full classes in your code. You can write a test based on this code that fails when you make use of such a class. This way you get warned upfront that the application might not run in production.

To make use of this library, add the following dependency to your project:

    <dependency>
        <groupId>be.atbash.jakarta</groupId>
        <artifactId>core-profile-tester</artifactId>
        <version>1.0.0</version>
        <scope>test</scope>
    </dependency>

With this dependency, you can write the following test method:

import org.assertj.core.api.Assertions;
import org.junit.jupiter.api.Test;

@Test
void testCompliance() {
    // TestCoreProfile comes from the be.atbash.jakarta:core-profile-tester artefact.
    TestCoreProfile tester = new TestCoreProfile("be.atbash");
    Assertions.assertThat(tester.violations()).isEmpty();
}

TestCoreProfile determines which classes, in the package you specify as a parameter and in its sub-packages, make use of a CDI Full class. If the list is empty, you are safe and good to go running on a Core Profile implementation.

Instead of using your top-level package, you can also specify a package from one of the CDI libraries that you have added to your project. This tells you whether that CDI library is safe to use on a Core Profile product.

Conclusion

CDI Lite groups all concepts of the CDI specification that are supported on the new Jakarta EE Core Profile. It also allows gathering information for the CDI container at build time.
But there is only one single artefact, containing all CDI Lite and CDI Full classes. This means that even if your application compiles and builds successfully, it might fail on a Core Profile product because you used some CDI Full-only classes that are not supported in the Core Profile.
With the Jakarta Core Tester, you can find out whether you referenced one of the CDI Full classes and thus whether your application will run safely on your runtime.

Want to learn more about Jakarta EE 10 Core Profile?

This topic was discussed during my JakartaOne LiveStream event of 2022. Have a look at the presentation “Explore the new Jakarta EE Core Profile” if you want to know more about the Core Profile.

Atbash Training and Support

Do you need a specific training session on Jakarta or MicroProfile? Or do you need some help in getting up and running with your next project? Have a look at the training support that I provide on the page https://www.atbash.be/training/ and contact me for more information.

Enjoy
