
Comparing JWT Token Usage in Spring Boot, Quarkus, Jakarta, and Kotlin Ktor: A Framework Exploration – Part 4

Since this topic became very extensive, I decided to split the blog into 4 parts to keep each one manageable. Here is the split:

Part 1: Introduction
Part 2: Payara, Spring Boot and Quarkus
Part 3: Ktor and Atbash Runtime
Part 4: Discussion and conclusion (this one)

For an introduction to JWT tokens, have a look at the first part of this blog. It also contains a description of how the Keycloak service is created for the example programs described in this part.
Parts 2 and 3 contain the description of the example application for each runtime.

Discussion

In parts 2 and 3, I showed the most important aspects of using a JWT token with Payara Micro (Jakarta EE), Spring Boot, Quarkus, Kotlin Ktor, and Atbash Runtime. The JWT tokens themselves are standardised, but how you use them in the different runtimes is not defined and thus differs. Although there is the MicroProfile JWT Auth specification, even the runtimes that follow it differ in how the support must be activated and how roles should be verified, especially when you don't want to check a role at all. Besides duplicating a few things from the JWT specification itself, such as how validation needs to be done, the specification only defines how a MicroProfile application should retrieve claim values.

It is obvious that for each runtime we need to add a dependency that brings in the code to handle the JWT tokens. But for several of these runtimes, you also need to activate the functionality. This is the case for Payara Micro through the @LoginConfig annotation, and also for Atbash Runtime, since the functionality there is provided by a non-core module.

Another configuration aspect is the definition of the location of the certificates. Spring Boot is the only one that makes use of the OAuth2 / OpenID Connect well-known endpoint for this. The other runtimes require you to specify the URL where the keys can be retrieved in a certain format. This allows for more flexibility of course, and potential support for providers that do not follow the standard to its full extent. But since we are talking about security, it would probably be better if only certified, properly tested providers could be used, as is the case with the Spring Boot implementation.

The main difference in using a JWT token at runtime is how the roles are verified. Not only is it not specified which claim should hold the role names, it is also not defined how the authorization should be performed. This leads to important differences between the runtimes.

Within Kotlin Ktor, we should define a security protocol for each different role we want to check and assign it a name. Or you create a custom extension function that allows you to specify the role at the endpoint, as I have done in the example. The important thing to note is that we need to be explicit in each case: whether a role is required, or no role at all, we need to indicate it.

This is not the case for the other runtimes, except the Atbash Runtime.

When you don’t use any annotation on the JAX-RS method with Payara Micro and Spring Boot, no role is required, only a valid JWT token. But with Quarkus, when not specifying anything, the endpoint becomes publicly accessible. This is not a good practice, because when you as a developer forget to put an annotation, the endpoint becomes available to everyone, or at least to any authenticated user on certain runtimes. This violates the “principle of least privilege”: by default, a user has no rights, and you explicitly need to define who is allowed to call that action. That is the reason why Atbash Runtime treats the omission of an annotation that checks on roles as an error, hides the endpoint, and shows a warning in the log.

If you do not want to check for a role when using Atbash Runtime, you can annotate the JAX-RS method with @PermitAll. The JavaDoc says “Specifies that all security roles are allowed to invoke the specified method(s)”, so it is clearly about the authorization on the endpoint. But if you use @PermitAll in Payara Micro, the endpoint becomes publicly accessible, dropping authentication as well. That is not the intention of the annotation, if you ask me. Although the JavaDoc might be to blame for this, as it also mentions “that the specified method(s) are ‘unchecked'”, which might be interpreted as no check at all.

Conclusion

All major frameworks and runtimes have support for using JWT tokens within your application to authenticate and authorise a client call to a JAX-RS endpoint. After adding the necessary dependency to have the code available and adding some minimal configuration, like defining where the keys to verify the signature can be retrieved, you are ready to go. The only exception here might be Kotlin Ktor, where you are confronted with a few manual statements about the verification and validation of the token; it is not completely hidden away.

The most important difference lies in how the check on the roles is done, and especially in the case where we don’t require any role, just a valid JWT token. Only Atbash Runtime applies the “principle of least privilege”. On the other runtimes, forgetting to define a check for a role means that the endpoint becomes accessible to any authenticated user or, even worse, publicly accessible.

There is also confusion around @PermitAll, which according to the JavaDoc is about authorization, but in Jakarta EE runtimes like Payara Micro, the endpoint also suddenly becomes publicly accessible.

Interested in running an example on the mentioned runtimes? Check out the directories in the https://github.com/rdebusscher/Project_FF/tree/main/jwt repo, which work with Keycloak as the provider.

Training and Support

Do you need a specific training session on Jakarta EE, Quarkus, Kotlin or MicroProfile? Have a look at the training support that I provide on the page https://www.atbash.be/training/ and contact me for more information.


Comparing JWT Token Usage in Spring Boot, Quarkus, Jakarta, and Kotlin Ktor: A Framework Exploration – Part 3

Since this topic became very extensive, I decided to split the blog into 4 parts to keep each one manageable. Here is the split:

Part 1: Introduction
Part 2: Payara, Spring Boot and Quarkus
Part 3: Ktor and Atbash Runtime (this one)
Part 4: Discussion and conclusion

For an introduction to JWT tokens, have a look at the first part of this blog. It also contains a description of how the Keycloak service is created for the example programs described in this part.
Part 2 contains the descriptions for Payara Micro, Spring Boot and Quarkus.

Ktor

Ktor also has excellent support for using JWT tokens, although we need to code a little bit more if we want support for rotating public keys and easy checks on the roles within the tokens.

But first, let us start again with the dependencies you need within your application.

        <!-- Ktor authentication -->
        <dependency>
            <groupId>io.ktor</groupId>
            <artifactId>ktor-server-auth-jvm</artifactId>
            <version>${ktor_version}</version>
        </dependency>
        <!-- Ktor support for JWT -->
        <dependency>
            <groupId>io.ktor</groupId>
            <artifactId>ktor-server-auth-jwt-jvm</artifactId>
            <version>${ktor_version}</version>
        </dependency>

We need a dependency to add the authentication support and another one for having the JWT token as the source for authentication and authorisation.

Just as with the Payara and Quarkus case, we need to define the location to retrieve the public key, expected issuer, and audience through the configuration of our application. In our example application, this is provided in the application.yml file.

jwt:
  issuer: "http://localhost:8888/auth/realms/atbash_project_ff"
  audience: "account"

We read these values programmatically in our own code, so the keys can be whatever you like; they are not predetermined as with the other runtimes. In the example, you see that we also don’t define the location of the public key endpoint, as we can derive it from the issuer value in the case of Keycloak. But you are free to specify a specific URL for this value, of course.

Configuration of the modules in Ktor is commonly done by creating an extension function on the Application object, as I have also done in this example. This is the general structure of this function:

fun Application.configureSecurity() {

    authentication {
        jwt("jwt-auth") {
            realm = "Atbash project FF"
            // this@configureSecurity refers to Application.configureSecurity()
            val issuer = this@configureSecurity.environment.config.property("jwt.issuer").getString()
            val expectedAudience = this@configureSecurity.environment.config.property("jwt.audience").getString()
            val jwkUrl = URL("$issuer/protocol/openid-connect/certs")
            val jwkProvider = UrlJwkProvider(jwkUrl)

            verifier {
        // not shown for brevity              

            }

            validate { credential ->
                // If we need validation of the roles, use authorizeWithRoles
                // We cannot define the roles that we need to be able to check this here.
                JWTPrincipal(credential.payload)
            }

            challenge { defaultScheme, realm ->
                // Response when verification fails
                // Ideally should be a JSON payload that we sent back
                call.respond(HttpStatusCode.Unauthorized, "$realm: Token is not valid or has expired")
            }
        }
    }

}

The function jwt("jwt-auth") { indicates that we define an authentication protocol based on JWT tokens and we name it jwt-auth. We can name it differently and can even have multiple protocols in the same application, as long as we correctly indicate which protocol name we want at the endpoint.

The JWT protocol in Ktor requires three parts: a verification part, a validation part, and lastly how the challenge is handled.

The verification part defines how the verification of the token is performed and will be discussed in more detail in a moment. In the validation part we can do further validation on the token by looking at the roles that are in it. If you have many different roles, this leads to many different named JWT protocols; therefore, I opted in this example to write another extension function on the Route object that handles this requirement more generically. The challenge part is executed to formulate a response for the client in case the validation of the token failed.

The verifier method defines how the verification of the token is performed. We make use of the UrlJwkProvider, which can read the keys in the JWKS format, a JSON representation of the keys. But it doesn’t try to reread the endpoint when a key is not found, which means we cannot apply rotating keys for signing the JWT tokens, as is recommended in production. Therefore, we make use of a small helper which caches the keys but reads the endpoint again when a key is not found. This functionality could still be improved to avoid a DoS attack where someone calls your endpoint with random key ids, which would put Keycloak or the JWT token provider under stress.

            val jwkProvider = UrlJwkProvider(jwkUrl)

            verifier {
                val publicKey = PublicKeyCache.getPublicKey(jwkProvider, it)

                JWT.require(Algorithm.RSA256(publicKey, null))
                    .withAudience(expectedAudience)
                    .withIssuer(issuer)
                    .build()

            }

The other improvement that you can find in the example is the validation part. Since you only have the credential as input for this validation, you can check if the token has a certain role, but you can’t make this check dynamic based on the endpoint. As mentioned, this would mean that for each role you want to check, you would need a different named JWT protocol.

The example contains an extension function on the Route object so that you can define the role that you expect. This is how you can use this new authorizeWithRoles function:

        authorizeWithRoles("jwt-auth", listOf("administrator")) {
            get("/protected/admin") {
                call.respondText("Protected Resource; Administrator Only ")
            }
        }

So besides the name of the protocol we would like to use, you can also define a set of roles that you expect to be in the token. The function itself is not that long, but a little complex, because we add a new interceptor in the pipeline used by Ktor to handle the request. If you want to look at the details, have a look at the example code.

If you just need a valid token, without any check on the roles, you can make use of the standard Ktor functionality:

        authenticate("jwt-auth") {
            get("/protected/user") {
                val principal = call.authentication.principal<JWTPrincipal>()
                //val username = principal?.payload?.getClaim("username")?.asString()
                val username = principal?.payload?.getClaim("preferred_username")?.asString()
                call.respondText("Hello, $username!")
            }
        }

This last snippet also shows how you can get access to the claims within the token. You can access the principal associated with the request by calling call.authentication.principal<JWTPrincipal>(), where you immediately cast to the JWTPrincipal class. This gives you the entire token content, easily accessible from within your Kotlin code, as you can see in the example where I retrieve the preferred_username claim.

You can review all code presented here in the example https://github.com/rdebusscher/Project_FF/tree/main/jwt/ktor.

Atbash Runtime

Atbash Runtime is a small modular Jakarta EE Core Profile runtime, so by default it doesn’t have support for using JWT tokens. But since these tokens are the de facto standard, there is an Atbash Runtime module that supports them so that you can use them in your application.

As a dependency, you can add this JWT supporting module to your project

        <dependency>
            <!-- Adds JWT Support in the case we are using the Jakarta Runner, no addition of the MP JWT Auth API required -->
            <!-- Otherwise, when not using Jakarta Runner, the addition of JWT Auth API as provided is enough if you are using Atbash Runner Jar executable -->
            <groupId>be.atbash.runtime</groupId>
            <artifactId>jwt-auth-module</artifactId>
            <version>1.0.0-SNAPSHOT</version>
        </dependency>

Since we use the Jakarta Runner feature of Atbash Runtime, which allows you to execute your web application through a simple main method, we need to add the module itself. If you run your application as a WAR file, make sure you activate the JWT module within the configuration so that the module is active.

The JWT support within Atbash Runtime is also based on the MicroProfile JWT Auth specification, so you will see many similarities with the Payara and Quarkus examples we discussed in part 2 of this blog.

Configuration requires the 3 values for public key location, expected issuer, and audience.

mp.jwt.verify.publickey.location=http://localhost:8888/auth/realms/atbash_project_ff/protocol/openid-connect/certs
mp.jwt.verify.issuer=http://localhost:8888/auth/realms/atbash_project_ff
mp.jwt.verify.audiences=account

You are also required to add the @LoginConfig annotation (in case you are executing your application as a WAR file) so that the JWT module is active for the application. But there is no need to define @DeclareRoles, as Atbash Runtime takes the values of the individual @RolesAllowed annotations as valid roles.

A difference with Payara, for example, is that you need to add @PermitAll to a method when you don’t want to check on any roles. Atbash Runtime implements the “principle of least privilege”: if you don’t specify anything on a JAX-RS method, no client can call it. This avoids forgetting to define security requirements and exposing the endpoint without any checks. The JavaDoc says “Specifies that all security roles are allowed to invoke the specified method(s)”, which is exactly what we need. Some runtimes, including Payara, interpret this differently, and I’ll go deeper into this topic in part 4.
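
As an illustration, this is how the two cases look on a JAX-RS resource with Atbash Runtime (a minimal sketch, method bodies shortened):

    @GET
    @Path("/admin")
    @RolesAllowed("administrator")
    // Only callable with a valid JWT token that contains the administrator role
    public String getAdminMessage() {
        return "Protected Resource; Administrator Only ";
    }

    @GET
    @Path("/user")
    @PermitAll
    // Any valid JWT token is accepted, no role is checked (the Atbash Runtime interpretation)
    public String getUserMessage() {
        return "Protected Resource; any authenticated user";
    }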

The example code is located at https://github.com/rdebusscher/Project_FF/tree/main/jwt/atbash.

Discussion

In the last part of this blog series, I’ll discuss the similarities and differences. These differences are especially important when you don’t want to check on a role within the token.

Part 1: Introduction
Part 2: Payara, Spring Boot and Quarkus
Part 3: Ktor and Atbash Runtime (this one)
Part 4: Discussion and conclusion

Training and Support

Do you need a specific training session on Jakarta EE, Quarkus, Kotlin or MicroProfile? Have a look at the training support that I provide on the page https://www.atbash.be/training/ and contact me for more information.


Comparing JWT Token Usage in Spring Boot, Quarkus, Jakarta, and Kotlin Ktor: A Framework Exploration – Part 2

Since this topic became very extensive, I decided to split the blog into 4 parts to keep each one manageable. Here is the split:

Part 1: Introduction
Part 2: Payara, Spring Boot and Quarkus (this one)
Part 3: Ktor and Atbash Runtime
Part 4: Discussion and conclusion

For an introduction to JWT tokens, have a look at the first part of this blog. It also contains a description of how the Keycloak service is created for the example programs described in this part.

Payara

As an example of how you can work with JWT tokens with Jakarta EE and MicroProfile, we make use of Payara Micro.

The JWT Token support is provided by MicroProfile, so add the dependency to your project.

        <dependency>
            <groupId>org.eclipse.microprofile</groupId>
            <artifactId>microprofile</artifactId>
            <version>6.0</version>
            <type>pom</type>
            <scope>provided</scope>
        </dependency>

We are using MicroProfile 6, which requires a Jakarta EE 10 runtime, as this is the version supported by the Payara Micro Community edition.

As configuration, we need to provide the endpoint where the MicroProfile JWT Auth implementation can retrieve the public key required to validate the content of the token against the provided signature. This is done with the mp.jwt.verify.publickey.location configuration key. Two other configuration keys are required: one to verify that the issuer of the token is as expected, and one for the audience claim.
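
These are the standard MicroProfile JWT Auth keys, typically placed in META-INF/microprofile-config.properties (or any other configuration source). The values below correspond to the Keycloak realm used throughout this series; the same values return in the Atbash Runtime part.

    mp.jwt.verify.publickey.location=http://localhost:8888/auth/realms/atbash_project_ff/protocol/openid-connect/certs
    mp.jwt.verify.issuer=http://localhost:8888/auth/realms/atbash_project_ff
    mp.jwt.verify.audiences=account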

Another configuration aspect is the indication that a JWT token will be used for authentication and authorization of the endpoints, through the @LoginConfig annotation. The @DeclareRoles annotation is a Jakarta EE annotation that indicates which roles are recognised and can be used. These annotations can be placed on any CDI bean.

@LoginConfig(authMethod = "MP-JWT")
@DeclareRoles({"administrator"})

On the JAX-RS method, we can add the @RolesAllowed annotation to indicate the role that must be present in the token before the client is allowed to call the endpoint.

    @GET
    @Path("/admin")
    @RolesAllowed("administrator")
    public String getAdminMessage() {

When no annotation is placed on the method, only a valid JWT token is required to call the endpoint. Also have a look at part 4 of this blog for some important info and differences between runtimes.

Through the MicroProfile JWT Auth specification, we can also access one or all the claims that are present in the token. The following snippet shows how you can access a single claim or the entire token in a CDI bean or JAX-RS resource class.

    @Inject
    @Claim("preferred_username")
    private String name;

    @Inject
    private JsonWebToken jsonWebToken;
    // When you need access to every aspect of the JWT token.

The entire example can be found in the project https://github.com/rdebusscher/Project_FF/tree/main/jwt/payara.

Spring Boot

Spring Boot also has excellent support for using JWT tokens for the authentication and authorization of REST endpoints. Besides the Spring Boot Security starter, the OAuth2 Resource Server dependency is required within your application. So you don’t need to handle the JWT token yourself in a programmatic way, as some resources on the internet claim.

In our example, we use Spring Boot 3 and JDK 17.

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-security</artifactId>
    </dependency>

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-oauth2-resource-server</artifactId>
    </dependency>

In contrast to MicroProfile, where you need to provide several configuration keys, Spring Boot makes use of the OpenID Connect specification, which defines that the endpoint .well-known/openid-configuration provides all the info. This includes the location of the public key required for the validation of the token against the signature, and the value of the issuer. The location can be specified through a Spring configuration resource.

spring.security.oauth2.resourceserver.jwt.issuer-uri=http://localhost:8888/auth/realms/atbash_project_ff
spring.security.oauth2.resourceserver.jwt.audiences=account

The audience value is not required to be defined; Spring Boot works without it. But it is a recommended configuration aspect to make sure that tokens are used correctly, especially when you use tokens for multiple applications.

You can define the requirements for the roles that should be present in the token using a security configuration bean and the HttpSecurity builder (in Spring Boot 3, that means a SecurityFilterChain bean, as WebSecurityConfigurerAdapter is no longer available), but I prefer the method-based approach.
With this approach, you can define the required role using the @PreAuthorize annotation:

    @GetMapping("/admin")
    @PreAuthorize("hasAuthority('administrator')")
    public String getAdminMessage() {

It makes it easier to find out which role is required before a client can call the endpoint, and also easier to verify that you didn’t make any error in the security configuration of your application. This method-based approach requires a small activation and a mapping between the roles within the token and the authority we check in the annotation.

@Configuration
@EnableMethodSecurity
public class MethodSecurityConfig {
}

The configuration for the JWT token roles is provided by a JwtAuthenticationConverter bean.

    @Bean
    public JwtAuthenticationConverter jwtAuthenticationConverter() {
        JwtGrantedAuthoritiesConverter grantedAuthoritiesConverter = new JwtGrantedAuthoritiesConverter();
        grantedAuthoritiesConverter.setAuthorityPrefix("");
        grantedAuthoritiesConverter.setAuthoritiesClaimName("groups");

        JwtAuthenticationConverter jwtAuthenticationConverter = new JwtAuthenticationConverter();
        jwtAuthenticationConverter.setJwtGrantedAuthoritiesConverter(grantedAuthoritiesConverter);
        return jwtAuthenticationConverter;
    }

Within the REST methods, we can access the JWT token claims, just as with the Jakarta EE and MicroProfile example. We need to add a JwtAuthenticationToken parameter to the method, which gives access to the claims through the getTokenAttributes() method.

    @GetMapping("/user")
    public String getUser(JwtAuthenticationToken authentication) {
        Object username = authentication.getTokenAttributes().get("preferred_username");

The entire example can be found in the project https://github.com/rdebusscher/Project_FF/tree/main/jwt/spring.

Quarkus

The Quarkus support is also based on MicroProfile, so you will see several similarities with the Payara case I described earlier. The Quarkus example is based on the recent Quarkus 3.x version. As dependencies, we need two artifacts related to the JWT support provided by the SmallRye project. Although at first sight it seems you do not need the build one, as it is about creating JWT tokens within your application, the example did not work without it.

        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-smallrye-jwt</artifactId>
        </dependency>
        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-smallrye-jwt-build</artifactId>
        </dependency>

Since the SmallRye JWT implementation also follows the MicroProfile JWT Auth specification, the configuration through key-value pairs is identical to the Payara one. We need to define the location of the public key and the expected values for the issuer and audience. In the example, I have defined them in the application.properties file, a Quarkus-specific configuration source. But as long as they can be retrieved through any of the supported configuration sources, it is ok.

Since Quarkus is not a Jakarta-compliant runtime, it doesn’t require any indication that the application will make use of JWT tokens for authentication and authorisation. The existence of the two dependencies we added earlier to the project is enough. In this respect, it is similar to the Spring Boot case, where we also did not need this.

On the JAX-RS resource methods, we can indicate whether we need a certain role within the token, or whether just a valid token is enough. If a role is required, we can make use of the same @RolesAllowed annotation we encountered in the Payara example; if we just need a valid token, we add the @Authenticated annotation.

    @GET
    @Path("/admin")
    @RolesAllowed("administrator")
    public String getAdminMessage() {
        return "Protected Resource; Administrator Only ";
    }

    @GET
    @Path("/user")
    @Authenticated
    // No roles specified, so only valid JWT is required
    public String getUser() {
        return "Protected Resource; user : " + name;
    }

This @Authenticated annotation is defined in the Quarkus Security artifact, brought in transitively, and indicates that an authenticated user is required. Without this annotation, the endpoint would become publicly accessible, without the need for any token or authentication method.

More on that in part 4 of this blog.

The retrieval of the claims is again identical to the Payara case. The example project can be found at https://github.com/rdebusscher/Project_FF/tree/main/jwt/quarkus.

Runtimes

The Ktor and Atbash Runtime versions of the example application are described in part 3.

Part 1: Introduction
Part 2: Payara, Spring Boot and Quarkus (this one)
Part 3: Ktor and Atbash Runtime
Part 4: Discussion and conclusion

Training and Support

Do you need a specific training session on Jakarta EE, Quarkus, Kotlin or MicroProfile? Have a look at the training support that I provide on the page https://www.atbash.be/training/ and contact me for more information.


Comparing JWT Token Usage in Spring Boot, Quarkus, Jakarta, and Kotlin Ktor: A Framework Exploration – Part 1

Since this topic became very extensive, I decided to split the blog into 4 parts to keep each one manageable. Here is the split:

Part 1: Introduction (this one)
Part 2: Payara, Spring Boot and Quarkus
Part 3: Ktor and Atbash Runtime
Part 4: Discussion and conclusion

But don’t worry, all 4 parts will be released within the same week, so those who are eager to process the series in one go do not need to wait long.

Introduction

As the demand for secure and efficient authentication and authorization mechanisms grows, JSON Web Tokens (JWT) have emerged as a favored choice for developers. JWT tokens provide a modern approach to verifying user identity and defining access privileges within web applications. In this blog post, we will delve into the usage of JWT tokens across various frameworks, namely Spring Boot, Quarkus, Jakarta, and Kotlin Ktor. By comparing their implementation approaches, we aim to provide insights into how JWT tokens are utilized within each framework and help you make a transition from one to another easier.

Understanding the Basics of JWT Tokens

At the core of JWT tokens lies a simple yet powerful structure that encompasses all the necessary information for secure authentication and authorization. Let’s dive into the basics of JWT tokens and explore their three essential components: the header, the body, and the signature.

1. Header

The header of a JWT token contains metadata about the token itself and the algorithms used to secure it. It typically consists of two parts: the token type, which is always “JWT,” and the signing algorithm employed, such as HMAC, RSA, or ECDSA. This header is Base64Url encoded and forms the first part of the JWT token.

2. Body (Payload)

The body, also known as the payload, carries the actual data within the JWT token. It contains the claims, which are statements about the user and additional metadata. Claims can include information like the user’s ID, name, email, or any other relevant data. The payload is also Base64Url encoded and forms the second part of the JWT token.
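
As an illustration, a decoded payload could look like the following. The claim names and values here are illustrative; Keycloak, which is used later in this series, puts the username in preferred_username and the roles in a claim such as groups or realm_access, depending on the client configuration.

    {
      "iss": "http://localhost:8888/auth/realms/atbash_project_ff",
      "aud": "account",
      "exp": 1700000000,
      "preferred_username": "some_user",
      "groups": ["administrator"]
    }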

3. Signature

The signature is the crucial component that ensures the integrity and authenticity of the JWT token. It is created by signing the encoded header and the encoded payload with a secret or private key known only to the issuer. The signature is used to verify that the token has not been tampered with during transmission or storage. It acts as a digital signature and prevents unauthorized modifications to the token. The signature is appended as the third part of the JWT token.

Self-Contained and Secure

One of the significant advantages of JWT tokens is their self-contained nature. Since all the necessary information is embedded within the token itself, there is no need for additional database queries or session lookups during authentication and authorization processes. This inherent characteristic contributes to improved performance and scalability.

To verify the authenticity and integrity of a JWT token, the recipient needs access to the public key or shared secret used to generate the signature. By retrieving the public key or shared secret, the recipient can verify the token’s signature and ensure that no tampering or unauthorized modifications have occurred. This mechanism provides a robust security layer, assuring that the token’s contents can be trusted.

User Roles in JWT Tokens

JWT tokens can also include user roles as part of their payload. User roles define the permissions and privileges associated with a particular user. By including this information in the JWT token, applications can determine the user’s authorization level and grant or restrict access to specific resources or functionalities accordingly. This granular approach to authorization allows for fine-grained control over user permissions within the application.

In the upcoming sections, we will explore how different frameworks incorporate these fundamental JWT token concepts into their authentication and authorization workflows. Understanding the core principles behind JWT tokens sets the stage for a comprehensive comparison, enabling us to evaluate the strengths and nuances of each framework’s implementation.

Example application

The same example application is made with the different runtimes. It contains a couple of endpoints which all require a valid token before they are executed. One of the endpoints requires that the token contains the administrator role.

GET /protected/user -> Hello username
GET /protected/admin -> Protected Resource; Administrator Only

The tokens utilised in our example are sourced from Keycloak, a reliable and widely adopted Authorization provider. Keycloak offers various standard flows for obtaining these tokens, catering to diverse authentication scenarios.

One of the commonly employed flows is the authorization code flow, which involves user interaction through dedicated screens provided by the Authorization provider. Users are prompted to log in and provide their credentials, following which Keycloak generates the necessary tokens for authentication and authorization purposes.

Alternatively, Keycloak supports a username and password-based approach where users can submit their credentials to a designated endpoint. This method allows Keycloak to validate the provided information and issue the relevant tokens required for subsequent authentication and authorization processes.
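
To give an idea of what that looks like, here is a rough sketch using the JAX-RS Client API. The client id and user credentials are placeholders (the real values are defined by the setup script described in the next paragraph); the realm URL is the one used throughout this series.

    import jakarta.ws.rs.client.Client;
    import jakarta.ws.rs.client.ClientBuilder;
    import jakarta.ws.rs.client.Entity;
    import jakarta.ws.rs.core.Form;
    import jakarta.ws.rs.core.MediaType;

    public class TokenRequest {

        public static void main(String[] args) {
            // Resource Owner Password Credentials flow against the example Keycloak realm
            Form form = new Form()
                    .param("grant_type", "password")
                    .param("client_id", "example-client")  // placeholder, the real client id comes from the setup script
                    .param("username", "some_user")        // placeholder
                    .param("password", "some_password");   // placeholder

            Client client = ClientBuilder.newClient();
            String tokenResponse = client
                    .target("http://localhost:8888/auth/realms/atbash_project_ff/protocol/openid-connect/token")
                    .request(MediaType.APPLICATION_JSON)
                    .post(Entity.form(form), String.class);

            // The JSON response contains the access_token that is sent as a Bearer token to the endpoints.
            System.out.println(tokenResponse);
            client.close();
        }
    }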

For our example, a custom realm with a configuration that is suitable for all our runtimes is created by setup_jwt_example.py, which can be found in the directory https://github.com/rdebusscher/Project_FF/tree/main/jwt/keycloak. The script prepares the realm and an OpenID Connect client so that, in response to a valid user name and password combination, a JWT token with the roles of the user is returned. It also creates two users, one of them having the admin role.

The Python script test_jwt_example.py can be used to test out the solution in each of the runtimes. It calls both endpoints with the two users that are defined. And so, one of the calls will result in an error since the non-administrator user is not allowed to call the administrator endpoint.

Runtimes

The different runtimes are discussed in part 2 and part 3 of this series.

Part 1: Introduction (this one)
Part 2: Payara, Spring Boot and Quarkus
Part 3: Ktor and Atbash Runtime
Part 4: Discussion and conclusion

Training and Support

Do you need a specific training session on Jakarta EE, Quarkus, Kotlin or MicroProfile? Have a look at the training support that I provide on the page https://www.atbash.be/training/ and contact me for more information.


A MicroProfile Config implementation for plain Java SE

The ability to define configuration values for your application outside the deployment artifact is very important. It is one of the 12-factor items: the requirement that your application can be deployed unaltered on test, acceptance, production, and so on.

Many frameworks and runtimes have their own proprietary solution, and there have already been several attempts to create a specification within the Java enterprise world.
Currently, there is an effort going on to define Jakarta Config, which is based on the MicroProfile Config specification.

Although the MicroProfile Config specification is built on top of CDI concepts, the fundamentals can be used in plain Java SE. Other implementations, like SmallRye Config, can also be used in Java SE without the need for CDI to be present.

Extraction from Atbash Runtime

The goal of Atbash Runtime was to have a modular runtime that supports the specifications of the Jakarta EE 10 Core Profile. The development of this runtime started before the release of the Core Profile, so it was based on the Jakarta EE 9.1 specifications and used MicroProfile Config as the basis for the future, yet to be finished, Jakarta Config specification.

Atbash Runtime version 0.3 already contained an implementation of the config specification and passed the MP Config TCK.

Recent experiments with the Atbash JWT module on plain Java SE confirmed the need for an MP Config implementation that runs on pure Java SE. Since the implementation within Atbash Runtime was already nicely separated into different packages, the new library was quickly ready.

Using Atbash MP Configuration SE

Before you can use the configuration values, you need to add the following artifact to your project.

    <dependency>
        <groupId>be.atbash</groupId>
        <artifactId>mp-config</artifactId>
        <version>1.0.1</version>
    </dependency>

This dependency brings in all required dependencies using the transitive functionality of Maven, including the MicroProfile Config 3.0 API.

Now you can retrieve a Config instance programmatically and retrieve the values just as described in the specification.

    Config config = ConfigProvider.getConfig();
    String value = config.getValue("value", String.class);

All functionality that doesn’t require CDI is supported, including

  • ConfigSources, the 3 default implementations with their default ordinal values and the possibility to define custom ones through the ServiceLoader mechanism.
  • Custom ConfigSourceProvider‘s can be loaded through the ServiceLoader mechanism.
  • Converter, the implicitly defined one as specified in the specification and the possibility to define custom converters using the ServiceLoader mechanism.
  • Support for optional values
  • Support for expressions where a value is the result of combining other configuration values and constant expressions (see the sketch after this list).
  • Support for Config Profile defining the application phase (dev, test, …) on the property and ConfigSource level.
  • Support for ConfigBuilder and creating custom Config instances.
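
For instance, the optional value and expression support from the list above can be used like this (a minimal sketch; the property names are purely illustrative):

    // illustrative entries in a properties ConfigSource:
    // server.host=localhost
    // server.port=8080
    // server.url=http://${server.host}:${server.port}

    Config config = ConfigProvider.getConfig();

    // expression support: server.url is composed from the other two properties
    String serverUrl = config.getValue("server.url", String.class);

    // optional value: no exception when the key is not defined
    int timeout = config.getOptionalValue("connection.timeout", Integer.class).orElse(30);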

Since MicroProfile Config 3.0 uses the Jakarta namespace, you make use of the @Priority annotation from the jakarta package to define the order in the converter list when you define a custom converter.

 import jakarta.annotation.Priority;
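
As a minimal sketch, a custom converter could look like this (the Duration example and the priority value are illustrative). It is registered through the ServiceLoader mechanism by listing the class name in META-INF/services/org.eclipse.microprofile.config.spi.Converter.

    import jakarta.annotation.Priority;
    import org.eclipse.microprofile.config.spi.Converter;

    import java.time.Duration;

    // Illustrative converter turning values like "30s" or "5m" into a java.time.Duration.
    @Priority(150)  // position in the converter chain; higher wins, custom converters default to 100
    public class DurationConverter implements Converter<Duration> {

        @Override
        public Duration convert(String value) {
            long amount = Long.parseLong(value.substring(0, value.length() - 1));
            if (value.endsWith("m")) {
                return Duration.ofMinutes(amount);
            }
            if (value.endsWith("s")) {
                return Duration.ofSeconds(amount);
            }
            throw new IllegalArgumentException("Unsupported duration format: " + value);
        }
    }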

Information about library

The code is compiled with JDK 11 source compatibility and is thus usable in that or any higher version.

The library and all dependencies, including the MicroProfile Config API code, take about 240 kB. Logging is performed through the SLF4J library, so a specific binding is required if you want to see the log output.

The library can also be used in a GraalVM native image. No additional configuration is required to be able to include it as native compiled Java code.

Use case

There are several use cases where this new library can be useful.

  • A plain Java SE program where some configuration values are needed. By adding the dependency, you have the MicroProfile Config functionality available through that small compact library.
  • Use it in a Jakarta runtime that doesn’t support any configuration framework for your application. An example is Glassfish, including the upcoming version 7, where no configuration specification is available for your application.

Conclusion

A version of the MicroProfile Config API that runs on Java SE only can be useful in several cases. Not only for Java SE applications themselves but also for runtimes like Glassfish that still don’t support any configuration possibilities for the deployed applications. And config is essential for writing good enterprise applications.

The Atbash MP Config SE was extracted from the Atbash Runtime that has an implementation of this specification as part of the experimentations around the Jakarta EE Core profile.

Better name

I also need a better name for this new library. So if you have any ideas, let me know on Twitter or use the feedback form on the Training page. From the suggestions, I will pick one at the end of October 2022.

Update November 2022: the chosen name is Atbash Delivery. It delivers the configuration from your environment to your application.

Atbash Training and Support

Do you need a specific training session on Jakarta or MicroProfile? Or do you need some help in getting up and running with your next project? Have a look at the training support that I provide on the page https://www.atbash.be/training/ and contact me for more information.

Enjoy


Backward compatible configuration key values for MicroProfile Config

Introduction

With MicroProfile Config, you can define the application configuration using key-value pairs which can be retrieved from various resources.
You can use it to define the configuration in a very flexible way, which is useful for your applications but also for frameworks that need some config.

But one day, you may want to change a key, for whatever good reason. Can you do this easily? If you have written the application yourself, it probably is easy. But what if you have written a little framework? Do the developers read the release notes where you have documented the changes?

The backward compatibility struggle

When your configuration parameter is required, the change will quickly be detected by the developer. They upgrade to your new version and get an exception that the key is not defined. Annoying maybe, but not that dramatic.
The scenario where the parameter is optional is a much greater threat. The developer has defined a custom value, overriding your default, but by changing the key, the default value is picked up again, unless the developer has read your release notes and noticed that a change of the key name is required.

So we need a way to define the fact that the key config.key.old is now config.key.new and ideally the value for the old parameter should be picked up.

The Alias config ConfigSource

The above-described problem can be solved with the tools we have at our disposal within MicroProfile Config itself.
We can define a ConfigSource which will be consulted at the end of the chain. As you probably know, you can define multiple ConfigSources. Each will be asked to provide a value for the key; if a source cannot supply the value, the next source is contacted.
When our ConfigSource is contacted at the end of the chain, we can see if the developer (of the framework in this case) has defined an alias for this parameter key. In this case, we define that the search for a value for config.key.new should also be tried with the key config.key.old. So our special ConfigSource just asks for the value of the config parameter with the old key. If a value is found for this key, it is returned. If nothing comes up, it returns null, as required, so that the default value is selected.
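
A minimal sketch of such an alias ConfigSource could look like the code below. This is not the actual Atbash implementation: the alias mapping is hard-coded instead of loaded from the alias properties files, and delegating to ConfigProvider for the old key is a simplification (cyclic aliases or lookups during Config construction would need extra care). The class is registered through the ServiceLoader mechanism (META-INF/services/org.eclipse.microprofile.config.spi.ConfigSource).

    import org.eclipse.microprofile.config.ConfigProvider;
    import org.eclipse.microprofile.config.spi.ConfigSource;

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Set;

    // Consulted last because of the low ordinal; maps new key names to their old names.
    public class AliasConfigSource implements ConfigSource {

        private final Map<String, String> aliases = new HashMap<>();  // new key -> old key

        public AliasConfigSource() {
            // in the real implementation this mapping is read from the alias.<something>.properties files
            aliases.put("config.key.new", "config.key.old");
        }

        @Override
        public String getValue(String propertyName) {
            String oldKey = aliases.get(propertyName);
            if (oldKey == null) {
                return null;  // not an aliased key, so the default value can kick in
            }
            // ask the regular configuration chain for the value of the old key
            return ConfigProvider.getConfig().getOptionalValue(oldKey, String.class).orElse(null);
        }

        @Override
        public Map<String, String> getProperties() {
            return Collections.emptyMap();
        }

        @Override
        public Set<String> getPropertyNames() {
            return Collections.emptySet();
        }

        @Override
        public String getName() {
            return "alias-config-source";
        }

        @Override
        public int getOrdinal() {
            return 1;  // very low, so every other ConfigSource is consulted first
        }
    }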

The Atbash Alias config ConfigSource

The alias ConfigSource concept is thus fairly simple. The Atbash config extension contains this feature since its latest release (version 0.9.3).

The configuration is also fairly simple. We only need to configure the mapping between the old and the new key. This can be done by adding a properties file on the classpath. The file must have a name of the form alias.<something>.properties and must be located within the config path. This file needs to be created by the framework developer when changing one of the configuration key values.

In our example here, the contents of the config/alias.demo.properties should be

config.key.new=config.key.old

Do you want some more information and an example? Have a look at the demo in the Atbash demo repository.

And another nice thing, it works with Java 8 and Java 11 classpath.

Conclusion

By adding a ConfigSource at the end of the chain, we can make the key values of our configuration parameters backward compatible. In case the developer still provides a value for the old key, it is still picked up when the new key is requested, and we can put a warning in the log. This makes sure that the application keeps working and informs the developer of the changed name.

Have fun.


Extensible Resource API

Introduction

There are various scenarios where you want to use a resource, like a classpath resource, file or URL, and want to make it configurable for the developer. If you are creating a little framework, for example, which needs some data that must be adjustable depending on the application it is used in, the resource should be easily configurable.
This is where the Atbash Resource API can be very handy.

Reading an InputStream

There are various sources which can give you an InputStream to the resource you are pointing to. File and URL are the two well-known classes for this. But getting the stream is different when you are using FileInputStream, for example, and it is again different when you want to read a resource from the classpath.

The Resource API wants to provide a uniform way to obtain an InputStream. The be.atbash.util.resource.ResourceUtil#getStream(java.lang.String) method takes a String, the resource reference, pointing to the resource you want to open, and it will find out how it should retrieve the InputStream.
The prefix is the most important indicator of how the resource should be approached. By default the prefixes http:, classpath: and file: are supported, but other types can be implemented by the developer if needed.

ResourceReader

The Resource API is extensible so that other types of resources can be accessed. To do this, implement the be.atbash.util.resource.ResourceReader interface. The load() method tries to open the resource and returns the InputStream. The method is allowed to return null when the type of resource can’t be handled by this ResourceReader or when the resource doesn’t exist.
Each ResourceReader implementation should have the be.atbash.util.ordered.Order annotation on the implementation class so that the implementations can be tried in a certain order. Your custom implementation will be picked up by the ServiceLoader mechanism.
The implementations are then consulted based on the order, from low to high, to see if they can handle the resource reference. The canRead() method from ResourceReader is used to verify that the resource reference exists.
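
A sketch of a custom ResourceReader could look like the code below. Note that the exact method signatures are an assumption on my part (check the Atbash Resource API and the MapBasedResourceReader demo class mentioned further down for the real ones); the map: prefix and the in-memory map are purely illustrative.

    import be.atbash.util.ordered.Order;
    import be.atbash.util.resource.ResourceReader;

    import java.io.ByteArrayInputStream;
    import java.io.InputStream;
    import java.util.HashMap;
    import java.util.Map;

    // Illustrative reader serving resources with the "map:" prefix from an in-memory map.
    // Method signatures are assumed; verify them against the actual ResourceReader interface.
    @Order(50)
    public class MapBasedResourceReader implements ResourceReader {

        private static final String PREFIX = "map:";
        private static final Map<String, String> DATA = new HashMap<>();

        static {
            DATA.put("map:greeting", "Hello from the map");
        }

        public boolean canRead(String resourcePath) {
            return resourcePath.startsWith(PREFIX) && DATA.containsKey(resourcePath);
        }

        public InputStream load(String resourcePath) {
            if (!canRead(resourcePath)) {
                return null;  // this reader cannot handle the reference
            }
            return new ByteArrayInputStream(DATA.get(resourcePath).getBytes());
        }
    }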

With MicroProfile Config

The Resource API can be used together with MicroProfile Config. You can define a configuration parameter pointing to the default resource (like a classpath resource), and the developer can then overwrite this value by specifying another resource using one of the supported MicroProfile Config methods.
Since you use ResourceUtil#getStream(), any resource like a file or URL can be supported.

Extending

As mentioned above, the ResourceReader interface can be used to create a custom implementation that reads from a specific resource. But it can also be very handy during testing. You can define a custom ResourceReader which reads some data from a Map, for instance. That way you can easily point to different resources during testing.

You can have a look at the Atbash demos where a little demo is prepared. The class be.atbash.demo.utils.resource_api.spi.MapBasedResourceReader implements the ResourceReader interface.

Conclusion

This Resource API is a small and simple extensible API to get an InputStream from a resource like a file, a URL or a resource on the classpath. It saves the developer from having to check where the resource is located and from calling the correct code. You can easily extend it by implementing the ResourceReader interface, which can be very handy during testing.
And the last nice quality: it runs on Java 7, 8 and 11 (classpath option).

You can also have a look at some documentation here.

Have fun.


MicroProfile support in Java EE Application servers

Introduction

With Java EE, you can create enterprise-type applications quickly, as you can concentrate on implementing the business logic.
You are also able to create applications which are more micro-service oriented, but some handy ‘features’ are not yet standardised. Standardisation is a process of specifying the best practices, which of course takes some time, as these best practices must first be discovered and validated.

The MicroProfile group wants to create standards for those micro-services concepts which are not yet available in Java EE. Their motto is

Optimising Enterprise Java for a micro-services architecture

This ensures that each application server, following these specifications, is compatible, just like Java EE itself. And it prepares the introduction of these specifications into Java EE, now Jakarta EE under the governance of the Eclipse Foundation.

Specification

There are already quite some specifications available under the MicroProfile flag. Have a look at the MicroProfile site and learn more about them over there.

The topics range from configuration, security (JWT tokens), operations (metrics, health, tracing) and resilience (fault tolerance) to documentation (OpenAPI), etc.

Implementations

Just as with Java EE, there are different implementations available for each spec. The difference is that there is no Reference Implementation (RI), the special implementation which goes together with the specification documents.
All implementations are equal.

You can find standalone implementations for all specs within the SmallRye umbrella project or at Apache (mostly defined under the Apache Geronimo umbrella)

There are also specific ‘server’ implementations which are written specifically for MicroProfile. Mostly based on Jetty or Netty, all specification implementations are added to create a compatible server.
Examples are KumuluzEE, Hammock, Launcher, Thorntail (v4 version) and Helidon.

But implementations are also made available within Java EE servers, which brings both worlds tightly integrated. Examples are Payara and OpenLiberty, but more servers are following this path, like WildFly and TomEE.

Using MicroProfile in Stock Java EE Servers

When you have a large legacy application which still needs to be maintained, you can also add the MicroProfile implementations to the server and benefit from their features.

It can be the first step in taking parts out of your large monolith and placing them in a separate micro-service. When your package structure is already defined quite well, the separation can be done relatively easily and without the need to rewrite your application.

Adding individual MicroProfile implementations to the server is not always successful, though, due to the usage of advanced CDI features in the MicroProfile implementations. To try things out, take one of the standalone implementations from SmallRye or Apache (Geronimo) – Config is probably the easiest to test – and add it to the lib folder of your application server.

Dedicated Java EE Servers

There is also a much easier way to try out the combination, which is choosing a certified Java EE server which already has all the MicroProfile implementations on board. Examples today are Payara and OpenLiberty, but other vendors are going this way too, as the integration has started for WildFly and TomEE.

Since the integration part is already done, you can just start using them. Just add the MicroProfile Maven bom to your pom file and you are ready to go.

<dependency>
   <groupId>org.eclipse.microprofile</groupId>
   <artifactId>microprofile</artifactId>
   <version>2.0.1</version>
   <type>pom</type>
   <scope>provided</scope>
</dependency>

This way, you can define how much Java EE or MicroProfile functionality you want to use within your application, and achieve a gradual migration from existing Java EE legacy applications to a more micro-service oriented version.

In addition, there are also Maven plugins to convert your application to an uber executable jar, or you can run your WAR file using the hollow jar technique with Payara Micro, for example.

Conclusion

With the inclusion of the MicroProfile implementations into servers like Payara and OpenLiberty, you can enjoy the features of that framework in your Java EE Application server which you are already familiar with.

It allows you to make use of these features when you need them, create more micro-service oriented applications, and make a start on the decomposition of your legacy application into smaller parts if you feel the need for this.

Enjoy it.


MicroProfile 1.3 support for Jessie

Introduction

In a previous release of Jessie, support was added for the MicroProfile specifications. Initially, it was only for version 1.2 because this is the specification for which the most implementations are available: Payara Micro, Open Liberty, WildFly Swarm and KumuluzEE.

Now I have added support for version 1.3 which is already supported by Open Liberty and Payara Micro.

MicroProfile 1.3

With the release of MicroProfile 1.3, there are a few specifications added to the mix.

OpenAPI 1.0

The specification defines the documentation of your JAX-RS endpoints using the OpenAPI v3 JSON or YAML format.
The MicroProfile specification defines various ways in which this can be generated, like using specific annotations, a static document, a Java-based generator, and filters.
More information and usage scenarios can be found in the specification document.

OpenTracing 1.0

This specification helps you to keep track of the request flow between all your micro-services. It has 2 main goals: defining how the correlation id and additional information are transferred between different micro-services, and defining the format of the trace records which are produced.

More information can be found in the document at link.

REST Client 1.0

The last addition is the most attractive one for developers, I guess, at least for me. It builds on top of the JAX-RS Client specification of Java EE/Jakarta EE.
It allows you to use type-safe access to your endpoints without the need to programmatically interact with the Client API.
You define with an interface how the JAX-RS endpoint should be called, and by adding the required JAX-RS annotations (defining, for example, the method and the format like JSON), the JAX-RS client is generated dynamically.
You can read more about this nice feature in my previous blog post, where I explored this specification and presented a client for Java SE.

What is available in Jessie?

In this release, support for MicroProfile 1.3 is added, as mentioned in the introduction. It means you can select the version from a dropdown, and later on, the server implementations capable of providing your selection are shown.

Not all specifications added in this 1.3 version have examples in the generated application yet. They will be added in a next version, but for those who want to get started, this version of Jessie can already help.

There are 2 other improvements added to this version:
– Since there are quite a few specifications now, you can specify for which of them you want a simple example in the generated application. This doesn’t restrict you in any way from using the other specifications but can help you to keep a better overview.
– A readme file is generated with more information about the selected specifications and how some of them can be tested within the generated application.

Conclusion

Support for MicroProfile 1.3 version is added to Jessie at the request of some users who wanted to get started with it. Example code for some of the specifications will be added soon.

You can find Jessie here.

Have Fun


MicroProfile Rest Client for Java SE

Introduction

One of the cool specifications produced by the MicroProfile group is the MicroProfile Rest Client, available from MP release 1.3.

It builds on top of the Client API of JAX-RS and allows you to use type-safe access to your endpoints without the need to programmatically interact with the Client API.

MicroProfile compliant server implementations need to implement this specification, but nothing says we cannot expand the usage into other environments (with a proper implementation), like Java SE (JavaFX seems most useful here) and plain Java EE.

Atbash has created an implementation of the specification so that it can be used in these environments, and will use it within the Octopus framework to propagate authentication and authorization information automatically in calls to JAX-RS endpoints.

The specification

A few words about the specification itself. JAX-RS 2.x contains a client API which allows you to access any ‘Rest’ endpoint in a uniform way.

Client client = ClientBuilder.newClient();
WebTarget employeeWebTarget = client.target("http://localhost:8080/demo/data").path("employees");
List<Employee> employees = employeeWebTarget.request(MediaType.APPLICATION_JSON)
        .get(new GenericType<List<Employee>>() {
        });

This client API is great because we can use it to call any endpoint, not even limited to Java ones. As long as they behave in a standard way.

But things can be improved, by moving away from the programmatic way of performing these calls, into a more declarative way.

If we could define some kind of interface like this

@Path("/employees")
public interface EmployeeService {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    List<Employee> getAll();
}

And we could just ask for an implementation of this interface which performs the required steps of creating the Client and WebTarget and invoking it for us in the background. This would make it much easier for the developer and much more type-safe.

Creating the implementation of that interface is what MicroProfile Rest Client is all about.

The interface defined above can then be injected into any other CDI bean (you need to add @RegisterRestClient on the interface and preferably a CDI scope like @ApplicationScoped), and by calling the method, we actually perform a call to the endpoint and retrieve the result.

@Inject
@RestClient
private EmployeeService employeeService;

public void doSomethingWithEmployees() {
    ....
    ... employeeService.getAll();
    ....
}

Atbash Rest Client

The specification also allows for a programmatic retrieval of the implementation, for those scenarios where no CDI is available.
However, remember that if we are running inside a CDI container, we can always retrieve some CDI bean by using

CDI.current().select(EmployeeService.class).get();

from within any method.

The programmatic retrieval is thus an ideal candidate to use it in other environments or frameworks like JavaFX (basically every Java SE program), Spring, Kotlin, etc …

The programmatic retrieval starts from the RestClientBuilder with the newBuilder() method.

EmployeeService employeeService = RestClientBuilder.newBuilder()
    .baseUrl(new URL("https://localhost:8080/server/data"))
    .build(EmployeeService.class);

The above also retrieves an implementation for the employee retrieval endpoint we used earlier in this text.

For more information about the features like Exception handling, adding custom providers and more, look at the specification document.

The Atbash Rest Client is an implementation in Java 7 which can run on Java SE. It is created mainly for the Octopus framework to propagate the user authentication (like username) and authorization information (like permissions) to JAX-RS endpoints in a transparent, automatic way.

A ClientRequestFilter is created in Octopus which creates a JWT token (compatible with the MicroProfile JWT Auth specification) containing the username and permissions of the current user, and this filter can then be added as a provider to the MicroProfile Rest Client so that this security information is available within the header of the call.
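
This is not the Octopus code itself, but a rough sketch of what such a filter could look like; the token creation is left as a placeholder:

    import javax.ws.rs.client.ClientRequestContext;
    import javax.ws.rs.client.ClientRequestFilter;
    import javax.ws.rs.core.HttpHeaders;
    import java.io.IOException;

    // Adds the JWT token of the current user as a Bearer token to every outgoing call.
    public class JWTPropagationFilter implements ClientRequestFilter {

        @Override
        public void filter(ClientRequestContext requestContext) throws IOException {
            String token = createTokenForCurrentUser();  // placeholder for the actual token creation
            requestContext.getHeaders().add(HttpHeaders.AUTHORIZATION, "Bearer " + token);
        }

        private String createTokenForCurrentUser() {
            // in Octopus, this builds a signed JWT containing the username and permissions of the current user
            return "...";
        }
    }

Such a filter is then registered as a provider on the builder, for example:

    EmployeeService employeeService = RestClientBuilder.newBuilder()
        .baseUrl(new URL("https://localhost:8080/server/data"))
        .register(JWTPropagationFilter.class)
        .build(EmployeeService.class);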

Since the current Octopus version is still based on Java SE 7, no other existing implementation could be used (MicroProfile is Java SE 8 based). The implementation is based on the DeltaSpike Proxy features and uses any Client API compatible implementation which is available at runtime.

Compliant, not certified.

Since all the MicroProfile specifications are Java 8 based, the API has also been converted to Java SE 7. Except for a few small incompatibilities, the port is 100% interchangeable.

The most notable difference is creating the builder with the newBuilder() method. In the original specification, this is a static method on an interface, which is not allowed in Java 7. For that purpose, an abstract class, AbstractRestClientBuilder, is created which contains the method.

Other than that, class and method names should be identical, which makes the switch from Atbash Rest Client to any other implementation using Java 8 very smooth.

The Maven dependency has the following coordinates (available on Maven Central):

<dependency>
   <groupId>be.atbash.mp.rest-client</groupId>
   <artifactId>atbash-rest-client-impl</artifactId>
   <version>0.5</version>
</dependency>

If you are running on Java 8, you can use the Apache CXF MicroProfile Rest client

<dependency>
   <groupId>org.apache.cxf</groupId>
   <artifactId>cxf-rt-rs-mp-client</artifactId>
   <version>3.2.4</version>
</dependency>

This one can also be used in a plain Java SE environment. Other implementations, like the ones within Liberty and Payara Micro, aren’t usable standalone in Java SE.

Remark

This first release of Atbash Rest Client is a pre-release, meaning that not all features of the specification are completely implemented. It is just the bare minimum for a POC within Octopus.
For example, the handling of exceptions (because the endpoint returned a status in the range 4xx or 5xx) isn’t completely covered yet.

The missing parts will be implemented or improved in a future version.

Conclusion

With the MicroProfile Rest Client specification, it becomes easy to call JAX-RS endpoints in a type-safe way. By using a declarative approach, calling an endpoint becomes as easy as calling any other method within the JVM.

And since micro-services need to be called at some point from non-micro-service code, a Java SE executable implementation is very important. It makes it possible to call them from plain Java EE, Java SE, any other framework or a JVM-based language.

Have fun.
