Categories
Jakarta EE

Using Locks in Jakarta EE CDI

Jakarta EE applications are multi-threaded. The code that generates the response for a user's request runs in its own thread. This feels natural, as we want to support multiple users at the same time.

For each user, data is retrieved and processed to build the response. There are situations where the data comes from a single source and some synchronisation is required. Reading the data is no problem, but when we want to update it, we need the guarantee that all data is updated in one go, so that threads reading the data always see a consistent state.

The Jakarta EE EJB specification has the @Lock annotation to handle the synchronisation aspects of the methods within an EJB Singleton bean. This blog describes what you can do if your application only makes use of CDI beans, or when you want to synchronise the access to a data structure from within different beans.

JVM ReentrantReadWriteLock

The JVM class ReentrantReadWriteLock (from java.util.concurrent.locks) is designed for the use case described in the introduction.

An instance maintains a pair of associated locks, one for read-only operations and one for writing. The read lock may be held simultaneously by multiple reader threads, so long as there are no writers. The write lock is exclusive.

Every piece of code that accesses our shared data to retrieve some information, without updating it, must be protected by the read lock provided by the instance. The methods (there can be multiple ones if needed) or statements that update the data structure need to acquire the write lock before they can proceed.

This way, reading is possible by one or more threads simultaneously, unless another part of your code holds the write lock at that moment. And changing the data can only be done when no thread is reading; readers have to wait until the update is completed.

In a central location, a JVM singleton for example, we instantiate the object.

ReentrantReadWriteLock readWriteLock = new ReentrantReadWriteLock();

The pattern for reading from the data structure is

Lock lock = readWriteLock.readLock();
lock.lock();
try {

    // All the read statements go here

} finally {
    lock.unlock();
}

A similar pattern, based on the write lock, is used for changing the data structure.

Lock lock = readWriteLock.writeLock();
lock.lock();
try {

    // All the write statements go here

} finally {
    lock.unlock();
}

There is also a variant where you do not wait indefinitely to obtain the lock: tryLock(long, TimeUnit) gives up when the lock cannot be acquired within the specified time limit, and an InterruptedException is thrown when the thread is interrupted while waiting.
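As a minimal sketch of that timed variant (the 500 ms timeout is just an example, and the enclosing method needs to handle the InterruptedException):

Lock lock = readWriteLock.writeLock();
// Wait at most 500 ms for the write lock instead of blocking indefinitely.
if (lock.tryLock(500, TimeUnit.MILLISECONDS)) {
    try {
        // All the write statements go here
    } finally {
        lock.unlock();
    }
} else {
    // The lock could not be acquired within the time limit; handle that case here.
}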

More info can be found on the readme page of the repository.

A CDI annotation

The above-described solution can be turned into a CDI interceptor that performs these tasks, similar to the @Lock annotation of the Jakarta EE EJB specification.

You can create this yourself, or you can make use of the Atbash Named Lock library that is now available.

For Jakarta EE 8, add the following dependency to your project.

     <dependency>
         <groupId>be.atbash.cdi</groupId>
         <artifactId>locked</artifactId>
         <version>1.0-SNAPSHOT</version>
     </dependency>

The snapshot version is already available on Maven Central. The final version will be available in a couple of weeks, together with a version based on the jakarta namespace so that you can use it with Jakarta EE 9.

You now have the be.atbash.cdi.lock.Locked annotation that you can use on any CDI bean method

@Locked
public String getValues() {

This uses the generic read lock. For a method that updates the data structure, during which no reading should be allowed, use

@Locked(operation = Locked.Operation.WRITE)
public void writeValue(String value) {

But as the name suggests, you can also specify the name of the lock that you want to use, and thus have more than one lock available within your application.

@Locked(name = "special")

This annotation will operate on another instance of the ReentrantReadWriteLock class and thus works with locks that are independent of the generic, unnamed one.

Side effect

The write lock, without using the read lock from the pair, can be used to synchronise a method.

@Locked(operation = Locked.Operation.WRITE)
public void writeValue(String value) {

This declaration means that only one thread at a time can execute the method writeValue, which has the same effect as the synchronized keyword from the Java language. When the annotation is used on multiple CDI methods, the effect is the same as synchronized blocks that use a shared singleton object: at any given time, only one of the annotated methods within the entire JVM can be executing.
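As an illustration (the bean and its data structure are invented for this sketch), a shared cache guarded by the annotation could look like this:

@ApplicationScoped
public class SharedDataCache {

    private final Map<String, String> values = new HashMap<>();

    @Locked  // generic read lock; multiple readers are allowed concurrently
    public String getValue(String key) {
        return values.get(key);
    }

    @Locked(operation = Locked.Operation.WRITE)  // exclusive access while updating
    public void putValue(String key, String value) {
        values.put(key, value);
    }
}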

Conclusion

Named Locks for CDI is a small library that brings the capabilities of @Lock from the EJB specification and the ReentrantReadWriteLock JVM class, as an annotation, to any Jakarta runtime.
It allows you to apply the philosophy of Jakarta EE, concentrate on the business logic while infrastructure-related aspects are handled automatically or by annotation, to the synchronisation requirements for protecting the access to a shared data structure within your application.

Categories
Atbash Jakarta EE

Testing the Jakarta EE Core Profile with Atbash Runtime.

What is the Core Profile?

The Jakarta EE specifications already have two profiles, the Full Profile and the Web Profile. The Web Profile contains a set of specifications geared toward typical web applications, like Servlet, REST, JSON, JPA, Faces, Security, etc.

The Full Profile contains all the Java Enterprise specifications and adds to the above list specifications like Web Services, Messaging, the full EJB specification, Connectors, etc.

But the trend in recent years, mainly due to the move to the cloud, is towards smaller runtimes that only need a limited set of specifications.

The MicroProfile specifications are built on top of a limited set of Jakarta specifications: JAX-RS, CDI, JSON-P, and JSON-B.
MicroProfile, although many of its specifications are useful in all architectural cases, has a focus on microservices and smaller runtimes. This has led to the idea of having a specific profile within Jakarta EE that groups a specifically adapted set of specifications in a Core Profile.

The goal of the profile is defined as (see here https://jakarta.ee/specifications/coreprofile/10/)

To provide a profile that contains a set of Jakarta EE Specifications targeting smaller runtimes suitable for microservices and ahead-of-time compilation.

Which specifications?

The idea, since the Core profile is not yet available, is to combine the following specifications

  • Jakarta Servlet
  • Jakarta REST (JAX-RS)
  • Jakarta CDI lite
  • Jakarta JSON-P
  • Jakarta JSON-B
  • Jakarta Configuration

The CDI Lite specification focuses on using Build Compatible Extensions so that the runtime can be used in ahead-of-time compilation scenarios like GraalVM. When there are no runtime discovery features and everything is known at compile time, native compilation becomes much easier.

The Jakarta Configuration specification will be based on the current MicroProfile Config specification and is hopefully, after several attempts to standardise configuration within Java Enterprise, finally becoming available.

What is Atbash Runtime?

The Core Profile is not yet available and will not be released in May 2022, the expected release date of Jakarta EE 10. This is mainly because the Jakarta Configuration specification is not ready yet, and the work related to Jakarta EE 10 took more time, so there was not enough time left to do some work around this new profile.

The idea behind Jakarta specifications is also that they are based on some experience and not only on theoretical assumptions. That was the basis of my idea to create a runtime that contains the specifications of the Core Profile.

The work started some 6 months ago in my spare time, and the goal was to create a runtime that combines the mentioned specifications. At that time, only Jakarta EE 9.1 was available, so the current version of Atbash Runtime, version 0.3, is based on

  • Jakarta Servlet 5.0
  • Jakarta CDI 3.0
  • Jakarta REST 3.0
  • Jakarta JSON-P 2.0
  • Jakarta JSON-B 2.0
  • MicroProfile Config 3.0

And it combines the following frameworks

  • Jetty 11.0.8
  • Jersey 3.0.4
  • Weld 4.0.3
  • Jackson Databind 2.13.1
  • Custom implementation of MicroProfile Config based on SmallRye Config.

How can I try it out?

First of all, you can download the zip file to install the runtime from this download URL. It is only 13.3 Mb in size.

Unzip this into a directory of your choice. It will create several JAR files in a directory structure.

Start your application with

java -jar atbash-runtime.jar path-to/application.war

And the application is available on port 8080 (the default, which can be changed with a command line parameter). The runtime is built with JDK 11 and tested on JDK 11, 17, and 18.
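As a quick, hedged example of something to deploy (the class names and path are invented here), a minimal Jakarta REST endpoint, using the jakarta.ws.rs API, packaged as application.war is enough to see the runtime in action:

@ApplicationPath("/api")
public class DemoApplication extends Application {
}

@Path("/hello")
public class HelloResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        // Returns a plain text greeting.
        return "Hello from Atbash Runtime";
    }
}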

Besides this instance mode, there is also a domain mode so that you can remotely access the runtime to, for example, deploy applications on a running process.

It has a modular structure that will be explored more in future versions and has typical extras like an embedded mode, Arquillian adaptor, Docker image, and integration testing framework based on TestContainers.

You can read more about them in the user guide.

What is next?

The following ideas are on my table to experiment more with this runtime and the idea of a Core Profile Runtime

  • Upgrade to the Jakarta EE 10 versions of the specifications now that they become available.
  • Add security like the MicroProfile JWT specification.
  • Add some data access including the fast Java-native object graph persistence provided by MicroStream.

And of course, your ideas and feedback are valuable for the realisation of the Core profile with Jakarta EE.

Enjoy.

Categories
Atbash Jakarta EE 9

Jakarta EE 9 compatible release for Atbash Utils

With the release of Jakarta EE 9 at the end of November 2020, a 20-year-long tradition of Java Enterprise is broken: backward compatibility.

Due to some legal requirements from Oracle related to the donation of Java EE to the Eclipse Foundation, the namespace needed to be changed. This means that package names have changed and that, for example, the Servlet class changed from javax.servlet.http.HttpServlet to jakarta.servlet.http.HttpServlet.

And since your applications use more than just the Jakarta EE dependencies, the change of the namespace within Jakarta EE 9 is only the beginning. All other libraries using one of the Java Enterprise classes need to be changed as well, and there are many of them.

So we can expect to see an adapted version of each of those frameworks and libraries in the coming months. But if you already want to play with Jakarta EE 9 on the runtimes that provide a preview of it, there is an option.

With the Eclipse Transformer project you can convert a JAR library to the new Jakarta EE 9 namespace. This means that you can easily adapt the framework or library that you are using in your Java Enterprise application.
But creating a Jakarta classifier version of your Maven dependency is in most cases not enough. The POM file of the framework or library can have a reference to some ‘old’ dependencies still using the javax namespace. This means that in many cases your application still has access to those classes, and a mistake is easily made in that situation.
The solution is to create a ‘temporary’ POM file for the transformed dependency that puts the correct dependencies into your application.

As an example, and for some of the Atbash projects that use these other frameworks and libraries, a repository was created to convert the javax version to a Jakarta one for MicroProfile and Apache DeltaSpike. More can be read on the readme page of the project.

Although you can use this technique, it is cumbersome and not suitable for production usage. You can use it to prepare and test your application using the Jakarta namespace, but the only real solution is that each of those frameworks and libraries converts to use the Jakarta namespace.

So the CDI and JSF utilities created by Atbash are now available on Maven Central using the Jakarta namespace.

You can use

<dependency>
    <groupId>be.atbash.jakarta.utils</groupId>
    <artifactId>utils-cdi</artifactId>
    <version>1.0.1</version>
</dependency>

And

<dependency>
    <groupId>be.atbash.jakarta.utils</groupId>
    <artifactId>utils-jsf</artifactId>
    <version>1.0.1</version>
</dependency>

So you do not need the trick with the Eclipse Transformer project. Other Atbash projects have already experimented with the Jakarta namespace (see the jakarta branch on the repositories) using the Eclipse Transformer for the dependencies. Once the actual Jakarta compatible versions of those dependencies are released, they will be used and the Atbash projects will be released.

Categories
Atbash Configuration MicroProfile

Backward compatible configuration key values for MicroProfile Config

Introduction

With MicroProfile Config, you can define the application configuration using key-value pairs which can be retrieved from various sources.
You can use it to define the configuration in a very flexible way, which is useful for your applications but also for frameworks that need some configuration.

But one day, you would like to change a key for whatever good reason. Can you do this easily? If you have written the application yourself, you probably can. But what if you have written a little framework? Do the developers read the release notes where you have documented the changes?

The backward compatibility struggle

When your configuration parameter is required, the change will quickly be detected by the developer. They upgrade to your new version and get an exception that the key is not defined. Annoying maybe, but not that dramatic.
The scenario where the parameter is optional is a much greater threat. The developer has defined a custom value, overriding your default, but by changing the key, the default value is picked up again. Unless the developer has read your release notes and noticed that a change of the key name is required.

So we need a way to define the fact that the key config.key.old is now config.key.new, and ideally the value for the old parameter should still be picked up.

The Alias config ConfigSource

The above-described problem can be solved with the tools we have at our disposal within MicroProfile Config itself.
We can define a ConfigSource which will be consulted at the end of the chain. As you probably know, you can define multiple ConfigSources. Each will be asked to provide a value for the key; if a source cannot supply the value, the next source is contacted.
When our ConfigSource is contacted at the end of the chain, we can see if the developer (of the framework in this case) has defined an alias for this parameter key. In this case, we define that the search for a value for config.key.new should also be tried with the key config.key.old. So our special ConfigSource just asks for the value of the config parameter with the old key. If a value is found for this key, it is returned. If nothing comes up, it returns null, as required, so that the default value is selected.
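A minimal sketch of such an alias ConfigSource, with a hard-coded alias map just for illustration (the Atbash implementation reads the aliases from properties files, as described below):

import java.util.Collections;
import java.util.Map;
import java.util.Set;

import org.eclipse.microprofile.config.ConfigProvider;
import org.eclipse.microprofile.config.spi.ConfigSource;

public class AliasConfigSource implements ConfigSource {

    // Mapping from the new key to the old key; hard-coded here for the example.
    private final Map<String, String> aliases =
            Collections.singletonMap("config.key.new", "config.key.old");

    @Override
    public String getValue(String propertyName) {
        String oldKey = aliases.get(propertyName);
        if (oldKey == null) {
            return null;  // No alias defined for this key, nothing to contribute.
        }
        // Ask the configuration for the value of the old key.
        return ConfigProvider.getConfig().getOptionalValue(oldKey, String.class).orElse(null);
    }

    @Override
    public Map<String, String> getProperties() {
        return Collections.emptyMap();
    }

    @Override
    public Set<String> getPropertyNames() {
        return Collections.emptySet();
    }

    @Override
    public String getName() {
        return "alias-config-source";
    }

    @Override
    public int getOrdinal() {
        return 1;  // Very low ordinal so this source is consulted at the end of the chain.
    }
}

The class would be registered through the Service Loader mechanism, with a META-INF/services/org.eclipse.microprofile.config.spi.ConfigSource file containing its fully qualified name.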

The Atbash Alias config ConfigSource

The alias ConfigSource concept is thus fairly simple. The Atbash Config extension contains this feature since its latest release (version 0.9.3).

The configuration is also fairly simple. We only need to configure the mapping from the old to the new key. This can be done by adding a properties file on the classpath. The file must have a name with the structure alias.<something>.properties and must be located within the config path. This file needs to be created by the framework developer when one of the configuration keys is changed.

In our example here, the contents of the config/alias.demo.properties should be

config.key.new=config.key.old
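With that alias in place, framework code can already use the new key while an application that only defines the old key keeps working; a hedged illustration using plain MicroProfile Config injection:

@Inject
@ConfigProperty(name = "config.key.new", defaultValue = "fallback")
private String someValue;  // resolves the value defined under config.key.old when only the old key is set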

Do you want some more information and an example? Have a look at the demo in the Atbash demo repository.

And another nice thing: it works on the classpath with both Java 8 and Java 11.

Conclusion

By adding a ConfigSource at the end of the chain, we can make the keys of our configuration parameters backward compatible. In case the developer still uses the old key, its value is found when the new key is looked up, and we can put a warning in the log. This makes sure that the application still works and informs the developer of the changed name.

Have fun.

Categories
Configuration Resource

Extensible Resource API

Introduction

There are various scenarios where you want to use a resource, like a classpath resource, a file or a URL, and want to make it configurable for the developer. If you are creating a little framework, for example, which needs some data that must be adjustable depending on the application it is used in, the resource should be easily configurable.
This is where the Atbash Resource API can be very handy.

Reading an InputStream

There are various sources which can give you an InputStream to the resource you are pointing to. File and URL are the two well-known classes for this. But obtaining the stream is different when you are using a FileInputStream, for example, and it is different again when you want to read a resource from the classpath.

The Resource API aims to unify the way you can obtain an InputStream. The be.atbash.util.resource.ResourceUtil#getStream(java.lang.String) method takes a String, the resource reference, pointing to the resource you want to open, and it will find out how it should retrieve the InputStream.
The prefix is the most important indicator of how the resource should be approached. By default the prefixes http:, classpath: and file: are supported, but other types can be implemented by the developer if needed.
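A short illustration (the resource locations are just examples, and exception handling is omitted) of reading from different sources in the same way:

// The prefix determines how the resource is opened.
InputStream fromClasspath = ResourceUtil.getInstance().getStream("classpath:config/demo.properties");
InputStream fromFile = ResourceUtil.getInstance().getStream("file:/opt/app/demo.properties");
InputStream fromUrl = ResourceUtil.getInstance().getStream("http://example.org/demo.properties");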

ResourceReader

The Resource API is extensible so that other types of resources can be accessed. To do this, implement the be.atbash.util.resource.ResourceReader interface. The load() method tries to open the resource and returns the InputStream. The method is allowed to return null when the type of resource can't be handled by this ResourceReader or when the resource doesn't exist.
Each ResourceReader implementation should have the be.atbash.util.ordered.Order annotation on the implementation class so that the implementations can be tried in a certain order. Your custom implementation is picked up through the Service Loader mechanism.
The implementations are then consulted based upon the order, from low to high, to see if they can handle the resource reference. The canRead() method from ResourceReader is used to verify that the resource reference exists.
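A hedged sketch of a custom reader, based only on the methods described above (the exact signatures in the library may differ), handling an invented map: prefix backed by an in-memory map:

@Order(50)  // value chosen arbitrarily; low values are consulted first
public class MapBasedResourceReader implements ResourceReader {

    private static final Map<String, String> DATA = new HashMap<>();

    @Override
    public boolean canRead(String resourceReference) {
        // Only handle references with our custom prefix that actually exist in the map.
        return resourceReference.startsWith("map:") && DATA.containsKey(resourceReference);
    }

    @Override
    public InputStream load(String resourceReference) {
        if (!canRead(resourceReference)) {
            return null;  // Let the next ResourceReader in the chain have a try.
        }
        return new ByteArrayInputStream(DATA.get(resourceReference).getBytes(StandardCharsets.UTF_8));
    }
}

The implementation would then be registered as a service (a META-INF/services/be.atbash.util.resource.ResourceReader file) so that it is picked up by the Service Loader mechanism.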

With MicroProfile Config

The Resource API can be used together with MicroProfile Config. You can define a configuration parameter pointing to the default resource (like a classpath resource), and the developer can then overwrite that value by specifying another resource using one of the supported MicroProfile Config methods.
Since you use ResourceUtil#getStream(), any resource like a file or URL can be supported.

Extending

As mentioned above, the ResourceReader interface can be used to create a custom implementation that reads from a specific type of resource. But it can also be very handy during testing. You can define a custom ResourceReader which reads some data from a Map, for instance. That way you can easily point to different resources during testing.

You can have a look at the Atbash demos where a little demo is prepared. The class be.atbash.demo.utils.resource_api.spi.MapBasedResourceReader implements the ResourceReader interface.

Conclusion

The Resource API is a small and simple extensible API to get an InputStream from a resource like a file, a URL or a resource on the classpath. It saves the developer from having to verify where the resource is located and calling the correct code. You can easily extend it by implementing the ResourceReader interface, which can be very handy during testing.
And as a last nice quality, it runs on Java 7, 8 and 11 (classpath option).

You can also have a look at some documentation here.

Have fun.

Categories
Atbash Java EE MicroProfile Security

Winter 2018 Release train for Atbash libraries

Introduction

Atbash is a set of libraries which tries to make security easier. It contains features around cryptographic keys (RSA and EC keys for example), reading and writing keys in all the different formats, algorithms (like the Diffie-Hellman Algorithm), securing JAX-RS endpoints, and many more. The top-level project is Atbash Octopus which is a complete platform for declarative security for Java SE, Java EE and Micro-Services (MicroProfile).

Besides this security focus, some useful Java SE and MicroProfile extensions are also made available separately, instead of just bundling them with the security features which use them.

A new set of Atbash features is ready and released to Maven Central. This blog post gives a short overview of the different new features, but in the coming weeks the main features will be described in more detail in separate blogs.

Winter 2018 release

The first time I released multiple Atbash libraries to Maven Central together was last summer, with the Summer 2018 release. Now multiple features are ready again, so I found it a good time to release a lot of the libraries together once more.

The 3 main features in this winter release are

  • Reading and writing many cryptographic key types (RSA, EC, OCT, and DH) in many formats like PEM (including different encodings like PKCS1 and PKCS8), JWK, JWKSet, and Java KeyStore (‘old versions’ and PKCS12).
  • Easy encryption helper methods (using symmetric AES keys) and methods for creating JWE objects (Encrypted JWT tokens).
  • Implementation of the Diffie-Hellman algorithm to exchange data in an encrypted way without the need for exchanging the key.

The non-security highlights in this release are

  • An extensible Resource API
  • The option to define alternative keys for MicroProfile config.

More on them in the next section and in the upcoming dedicated blogs

Supported Java versions.

The goal of Atbash is to bring you all those goodies, even when you are still stuck with Java 7. Therefore, the libraries support Java 7 as the minimum version. But with the release of JDK 11 a few months ago, I made sure that all the libraries released as part of this Winter 2018 release can also run on this new long-term supported Java version. Only the classpath is fully supported; no Java modules are created.

New features

Atbash Utils
A library containing some useful utilities for Java SE, CDI, and JSF. In the new 0.9.3 version, there are 2 major new features implemented.

Resource API

With the resource API, it becomes easy to retrieve the content of a file, a class path resource, a URL or any custom defined location in a uniform way.

Based on the prefix, like classpath:, http:, etc., the implementation will look up and read the resource correctly. But you can also define your own type, with your own custom prefix, and register it through the Service Loader mechanism.

This uniform reading of a resource is very handy when you need to read some resource content in your application (or your own framework) and want to make the location of the file configurable.
You just need to retrieve the ‘location’ from the configuration and can call

ResourceUtil.getInstance().getStream(location);

Resource Scanner

Another addition, also related to resources, is the resource scanner to retrieve all classpath resources matching a certain pattern. There are various use cases where it is handy to retrieve a list of resources, like all files ending in .config.properties, from the classpath. This allows for an extension mechanism where extensions bring their own configuration files.

The alternative keys for MicroProfile config make use of this.

Atbash Config

The library contains a few useful extensions for MicroProfile Config. It also has an implementation of the MicroProfile Config 1.2 version for Java 7. It is a compatible version (not certified) as MP Config itself requires Java 8.

Alternative keys

A useful addition in this release is the alternative keys which can be defined. This is useful in case you want to change the name of a configuration parameter key but want to give the users of your library time to adapt to the new key.
So you can basically define a mapping between 2 keys, the old value and the new value. Within your library code, you already use the new value. But when the Atbash Config library is on the classpath, the developer can still use the old name.

Atbash JWT support

A library to support many aspects of the JWT stack. Not only signed and encrypted JWT tokens, but also JWK (storing keys). And by extension, it has extensive support for cryptographic keys.

In this library, the most notable new features of this winter release can be found.

Reading and writing many cryptographic keys

Reading and writing cryptographic keys was already partially implemented in the previous version of the library. But now it supports more or less all possible scenarios.

On the one side, there is support for the different types

  • RSA keys
  • Elliptic Curve (EC) keys
  • OCT keys (just a set of bits usable for HMAC and AES)

And this can be stored (and read) in many formats

  • Asymmetric private keys in PEM (using PKCS1, PKCS8 or no encoding), JWK, JWKSet, and KeyStores (JKS, JCEKS, and PKCS12)
  • Asymmetric public keys in PEM, JWK, JWKSet, and KeyStores (JKS, JCEKS, and PKCS12)
  • OCT keys as JWK and JWKSet.

And this reading and writing is performed by a single method each (all checking is done behind the scenes), namely KeyWriter.writeKeyResource and KeyReader.readKeyResource.

More info in a later blog post.

Encryption

There are various helper methods to make encryption easier. The encryption can be performed with an OCT key, or this key can be generated based on a password or passphrase using Key Derivation Functions.

A second possibility for encryption is the use of JWE, an encrypted version of the JWT token. In a later blog post, an example will be given of how you can easily create a JWE. It is the same method as creating a JWT without encryption, just with a different parameter.

Key Server

The Key Server is more an example of the Diffie-Hellman algorithm implemented in this release than a production-ready component. With the Diffie-Hellman algorithm, it is possible to encrypt the data without the need to exchange the secret key. The same principle is used for SSL communication, but now it is performed by the application itself and thus no termination, like the one performed by firewalls, is possible.

The blog will explain the Key Server principles and another use case where the posting of JSON data to an endpoint is transparently encrypted.

Future

More features are transferred from the original Octopus to the Atbash Octopus framework, including the support for OAuth2 and OpenId Connect. But there wasn't enough time to migrate all the features that I wanted, so Octopus isn't released this time. Migration and improvements will continue to be performed.

With the release of JDK 11, more focus will be placed on this new version, and thus the master branch will only hold code which is compatible with Java 8 and Java 11. The support for Java 7 will be transferred to a separate branch and will go into maintenance mode. No real new features will be implemented anymore in this branch unless required for Atbash Octopus, which will stay at Java 7 for a little longer.

Since the first application server supporting JDK 11 has been released, and more of them are coming in the next months, more focus will be placed on that runtime environment.

Conclusion

As you can read in the above paragraphs, this release contains some interesting features around security and configuration. And the good thing is that they can all already be used with JDK 11 in classpath mode.

In the follow-up blogs, the main new features will be discussed in more detail, so I hope to welcome you again in the near future.

Have fun.

Categories
Architecture Java EE MicroProfile

MicroProfile support in Java EE Application servers

Introduction

With Java EE, you can create enterprise-type applications quickly as you can concentrate on implementing the business logic.
You are also able to create applications which are more micro-service oriented, but some handy ‘features’ are not yet standardised. Standardisation is a process of specifying best practices, and it of course takes some time to discover and validate those best practices.

The MicroProfile group wants to create standards for those micro-services concepts which are not yet available in Java EE. Their motto is

Optimising Enterprise Java for a micro-services architecture

This ensures that each application server, following these specifications, is compatible, just like Java EE itself. And it prepares the introduction of these specifications into Java EE, now Jakarta EE under the governance of the Eclipse Foundation.

Specification

There are already quite some specifications available under the MicroProfile flag. Have a look at the MicroProfile site and learn more about them over there.

The topics range from configuration, security (JWT tokens), operations (Metrics, Health, Tracing) and resilience (Fault Tolerance) to documentation (OpenAPI), etc.

Implementations

Just as with Java EE, there are different implementations available for each spec. The difference is that there is no Reference Implementation (RI), the special implementation which goes together with the specification documents.
All implementations are equal.

You can find standalone implementations for all specs within the SmallRye umbrella project or at Apache (mostly defined under the Apache Geronimo umbrella)

There are also specific ‘server’ implementations which are written specifically for MicroProfile. Mostly based on Jetty or Netty, all the required implementations are added to have a compatible server.
Examples are KumuluzEE, Hammock, Launcher, Thorntail (v4 version) and Helidon.

But implementations are also made available within Java EE servers, which brings both worlds tightly integrated. Examples are Payara and OpenLiberty, but more servers are following this path, like WildFly and TomEE.

Using MicroProfile in Stock Java EE Servers

When you have a large legacy application which still needs to be maintained, you can also add the MicroProfile implementations to the server and benefit from their features.

It can be the first step in taking parts out of your large monolith and placing them in a separate micro-service. When your package structure is already defined quite well, the separation can be done relatively easily and without the need to rewrite your application.

Adding individual MicroProfile implementations to the server is not always successful though, due to the usage of advanced CDI features in the MicroProfile implementations. To try things out, take one of the standalone implementations from SmallRye or Apache (Geronimo) – Config is probably the easiest to test – and add it to the lib folder of your application server.

Dedicated Java EE Servers

There is also the much easier way to try out the combination, which is choosing a certified Java EE server which already has all the MicroProfile implementations on board. Examples today are Payara and OpenLiberty. But other vendors are going this way as well, as the integration has started for WildFly and TomEE.

Since the integration part is already done, you can just start using them. Just add the MicroProfile Maven BOM to your POM file and you are ready to go.

<dependency>
   <groupId>org.eclipse.microprofile</groupId>
   <artifactId>microprofile</artifactId>
   <version>2.0.1</version>
   <type>pom</type>
   <scope>provided</scope>
</dependency>

This way, you can define how much Java EE or MicroProfile functionality you want to use within your application, and you can achieve a gradual migration from existing Java EE legacy applications to a more micro-service-like version.
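As a hedged illustration of that mix (the class name and config key are invented here), a plain CDI bean can combine the Java EE APIs with MicroProfile Config and Fault Tolerance once the BOM is on the classpath:

@ApplicationScoped
public class QuoteService {

    @Inject
    @ConfigProperty(name = "quote.service.url", defaultValue = "http://localhost:8080/quotes")
    private String quoteServiceUrl;  // MicroProfile Config

    @Retry(maxRetries = 3)  // MicroProfile Fault Tolerance: retry transient failures
    public String fetchQuote() {
        // Plain JAX-RS client call, part of Java EE.
        return ClientBuilder.newClient()
                .target(quoteServiceUrl)
                .request(MediaType.TEXT_PLAIN)
                .get(String.class);
    }
}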

In addition, there exists also maven plugins to convert your application to an uber executable jar or you can run your WAR file also using the hollow jar technique with Payara Micro for example.

Conclusion

With the inclusion of the MicroProfile implementations into servers like Payara and OpenLiberty, you can enjoy the features of that framework in your Java EE Application server which you are already familiar with.

It allows you to make use of these features when you need them and create even more micro-service alike applications and make a start of the decomposition of your legacy application into smaller parts if you feel the need for this.

Enjoy it.

Categories
Atbash Overview Security

Atbash Summer release train

Introduction

All the Atbash repositories are still under heavy development; that is why they are released in one go. In the last few days, such a release of almost all libraries was performed.

This post gives a short overview of what you can find in it.

Big features

The big feature changes can be found in

  • Atbash JWT support related to cryptographic key support.
  • Atbash Rest client, a Java 7 port of the MicroProfile spec.
  • And Atbash Octopus where KeyCloak and MicroProfile JWT auth spec and interoperability between schemes are central in this release.

Cryptographic key support

Since there are many formats in which keys can be persisted (PEM, Java KeyStores, JWK, etc.), they are all internally stored as an AtbashKey. It contains the Key itself (as a Java object), the identification, and the type of the key (like RSA, private or public part, etc.).

Creating such keys can be achieved by using the class KeyGenerator, with the method generateKeys(). This class is available as a CDI bean or can be instantiated directly in those environments/locations where no CDI is available.

The parameter of the generateKeys() method defines which key(s) are created. This parameter can be created using a builder pattern.

KeyGenerator generator = new KeyGenerator();  // or inject it in a CDI environment
RSAGenerationParameters generationParameters = new RSAGenerationParameters.RSAGenerationParametersBuilder()
        .withKeyId("the-kid")
        .build();
List<AtbashKey> atbashKeys = generator.generateKeys(generationParameters);

In the above example, multiple keys are generated since RSA is an asymmetric key and thus private and public parts are generated.

Writing a key can be performed with the KeyWriter class. It has a method, writeKeyResource, which can be used to persist a key in one of the formats. The format is specified as a parameter of type KeyResourceType and indicates the required format like PEM, KeyStore, JWK, etc.

The specific type of PEM (like PKCS1, PKCS8, etc …) is defined by the configuration parameters.

Another parameter defines the password/passphrase for the key (if needed) and one for the file as a whole in the case of the Java KeyStore format for example.

The last functionality around keys is the reading of all those keys in the supported formats. This functionality is implemented in the KeyReader class. It is again a CDI bean which can be instantiated manually when no CDI environment is available.

It contains a readKeyResource() method which can read all the keys in a resource (like a PEM file, Java KeyStore, JWK, etc.). As a parameter, an instance of KeyResourcePasswordLookup is supplied which retrieves a password in those cases where it is needed (to read the file or decrypt the key).

The return value of the method is a List<AtbashKey> because a resource can contain more than one key.
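Putting this together as a hypothetical sketch (the parameter lists are assumptions based on the description above, and DemoPasswordLookup stands for your own KeyResourcePasswordLookup implementation; the exact signatures in the library may differ):

// Reading: all keys found in the resource are returned.
KeyReader keyReader = new KeyReader();
KeyResourcePasswordLookup passwordLookup = new DemoPasswordLookup();  // supplies passwords/passphrases when asked
List<AtbashKey> keys = keyReader.readKeyResource("classpath:keys/demo.jwk", passwordLookup);

// Writing: the KeyResourceType parameter selects the target format.
KeyWriter keyWriter = new KeyWriter();
byte[] pemContent = keyWriter.writeKeyResource(keys.get(0), KeyResourceType.PEM);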

This key support is an initial version and will be improved in further releases of atbash-jwt-support with more features and more supported formats.

Atbash Rest Client

A first release was done mid-June and contained an implementation in Java 7 for Java SE and Java EE which is compatible with the MicroProfile Rest Client specification (see here). It allows you to ‘inject’ or create (useful in Java SE environments) a system-generated Rest client based on the definition of your JAX-RS endpoint defined in an interface class.

In this release, the RestClientBuilderListener from the MP Rest Client spec 1.1 is added and implemented so that we can define some additional providers in a general way. This is important for the Octopus release so that we can add the credentials, stored within the Octopus context, to the JAX-RS call automatically, without the need to specify the providers manually.

Atbash Octopus

And of course, many new features are added to Octopus. They are migrated from the old Octopus or newly added.

The highlights are:

– Added support for the KeyCloak server. JSF applications can use the authentication and authorization from KeyCloak configured realms. Also, the AccessToken from it can be passed on in the header of other requests and verified by JAX-RS endpoints. The only thing which is needed is the location of the KeyCloak server and the realm config in JSON (which is supplied by KeyCloak).

– The SPI option to pass the expected password for a user can now handle hashed passwords. Both the ‘standard’ algorithms from MessageDigest, like SHA-256, and the key derivation function PBKDF2 can be defined easily.

– The authorization annotations, like @RequiresPermissions, can be specified on JAX-RS methods without the need to define those resources as CDI or EJB beans.

– Authentication and authorization information can be converted automatically to an MP JWT Auth compliant format and used in calls to JAX-RS endpoints. This makes it possible, for example, to integrate JAX-RS resources protected by KeyCloak and MP JWT seamlessly.

And there are too many other features to describe here in detail. The user manual is also started and will be announced soon.

Overview all released frameworks

Utilities : 0.9.2

Set of utilities for Java SE, CDI and plain JSF which are very useful in many projects running in one of these environments.

  • Added utility class for HEX encoding (next to the BASE64 encoding)
  • Added support for byte arrays and encoding (HEX and BASE64) through the ByteSource class.

JSON-smart : 0.9.1

A small library (for Java 7) which can convert JSON to Java instances and vice versa.

  • Added support for @JsonProperty to define the name of JSON property.
  • Contains an SPI so that other naming annotations (like Jackson one) can be used.

Atbash config : 0.9.2

Extension for the MicroProfile Config implementations. Also a Java 7 port of Apache Geronimo Config.

  • Configuration for the base name (with serviceLoader class) is optional.
  • Port of MicroProfile Config 1.3 features to Java 7.

JWT Support : 0.9.0

Convert Java instances to JWT and vice versa and extensive support for Cryptographic keys (reading, writing, creating) supporting multiple types (like RSA, EC, and HMAC keys) and formats (like JWK, JWKSet, PEM, and KeyStore)

  • Support for reading and writing multiple formats (PEM, KeyStore, JWK and JWKSet).
  • Better support for JWT verification with keys using the concepts of KeySelector and KeyManager.

Atbash config server : 0.9.1

Configuration source for MP Config as a server supplying config through JAX-RS endpoints.

  • Added Payara micro as supported server to serve the configuration.

Atbash Rest Client : 0.5.1

Rest client implementation for Java 7.

  • Included RestClientBuilderListener from MP Rest Client 1.1 (to be able to define providers globally)

Octopus : 0.4

  • Integration with Keycloak (Client Credentials for Java SE, AuthorizationCode grant for Web, AccessToken for JAX-RS)
  • Support for hashed passwords (MessageDigest ones and PBKDF2)
  • Support for MP Rest Client and providers available to add tokens for MP JWT Auth and Keycloak.
  • Logout functionality for Web.
  • Authentication events.
  • More features for JAX-RS integration (authorization violations on JAX-RS resource [no need for CDI or EJB], correct 401 return messages, … )
  • Support for default user filter (no need to define user filter before authorizationFilter)

Conclusion

The release contains a lot of goodies related to security. In the coming months, new features will be added, support for Java 8 and 11 is planned, and user manuals and cookbooks will become available to get you started with all those goodies.

The Atbash repositories, with some more info and of course the code, can be found on GitHub.

Have fun.

Categories
MicroProfile Project setup

MicroProfile 1.3 support for Jessie

Introduction

In a previous release of Jessie, support was added for the MicroProfile specifications. Initially, it was only for version 1.2 because this is the version for which the most implementations are available: Payara Micro, Open Liberty, WildFly Swarm and KumuluzEE.

Now I have added support for version 1.3 which is already supported by Open Liberty and Payara Micro.

MicroProfile 1.3

With the release of MicroProfile 1.3, there are a few specifications added to the mix.

OpenAPI 1.0

The specification defines the documentation of your JAX-RS endpoints using the OpenAPI v3 JSON or YAML format.
The MicroProfile specification defines various ways in which this documentation can be generated, like using specific annotations, a static document, a Java-based model generator, and filters.
More information and usage scenarios can be found in the specification document.
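A small, hedged illustration of the annotation-based approach (the endpoint and descriptions are invented here) using the MicroProfile OpenAPI annotations:

@Path("/employees")
public class EmployeeResource {

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    @Operation(summary = "Retrieve a single employee by its id")
    @APIResponse(responseCode = "200", description = "The employee was found")
    public Response getById(@PathParam("id") Long id) {
        // Lookup logic omitted; this only illustrates the documentation annotations.
        return Response.ok().build();
    }
}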

OpenTracing 1.0

This specification will help you to keep track of the flow of requests between all your micro-services. It has 2 main goals: defining how the correlation id and additional information are transferred between different micro-services, and defining the format of the trace records which are produced.

More information can be found in the specification document.

REST Client 1.0

The last addition is the most attractive one for developers, I guess, at least for me. It builds on top of the JAX-RS Client specification of Java EE/Jakarta EE.
It allows you to use type-safe access to your endpoints without the need to interact programmatically with the Client API.
You define with an interface how the JAX-RS endpoint should be called, and by adding the required JAX-RS annotations (defining, for example, the method and the format like JSON) the JAX-RS client is generated dynamically.
You can read more about this nice feature in my previous blog post where I explored this specification and presented a client for Java SE.

What is available in Jessie?

In this release, support for MicroProfile 1.3 is added, as mentioned in the introduction. It means you can select the version in a dropdown, and later on the server implementations capable of providing your selection are shown.

Not all specifications added in this 1.3 version have examples in the generated application yet. They will be added in a next version, but for those who want to get started, this version of Jessie can already help them.

There are 2 other improvements added in this version:
– Since there are quite some specifications now, you can specify for which of them you want a simple example in the generated application. This doesn't restrict you in any way from using the other specifications but can help you keep a better overview.
– A readme file is generated with more information about the selected specifications and how some of them can be tested within the generated application.

Conclusion

Support for the MicroProfile 1.3 version is added to Jessie at the request of some users who wanted to get started with it. Example code for some of the specifications will be added soon.

You can find Jessie here.

Have Fun

Categories
Atbash JAX-RS MicroProfile

MicroProfile Rest Client for Java SE

Introduction

One of the cool specifications produced by the MicroProfile group is the Rest Client for MicroProfile, available from MP release 1.3.

It builds on top of the Client API of JAX-RS and allows you to use type-safe access to your endpoints without the need to programmatic interact with the Client API.

MicroProfile compliant server implementations need to implement this specification, but nothing says we cannot expand the usage into other environments (with a proper implementation), like Java SE (JavaFX seems most useful here) and plain Java EE.

Atbash has created an implementation of the specification so that it can be used in these environments, and it will be used within the Octopus framework to propagate authentication and authorization information automatically in calls to JAX-RS endpoints.

The specification

A few words about the specification itself. JAX-RS 2.x contains a client API which allows you to access any ‘Rest’ endpoint in a uniform way.

Client client = ClientBuilder.newClient();
WebTarget employeeWebTarget = client.target("http://localhost:8080/demo/data").path("employees");
List<Employee> employees = employeeWebTarget.request(MediaType.APPLICATION_JSON)
        .get(new GenericType<List<Employee>>() {});

This client API is great because we can use it to call any endpoint, not even limited to Java ones. As long as they behave in a standard way.

But things can be improved, by moving away from the programmatic way of performing these calls, into a more declarative way.

If we could define some kind of interface like this

@Path("/employees")
public interface EmployeeService {

@GET
@Produces(MediaType.APPLICATION_JSON)
List<Employee> getAll();
}

Then we could just ask for an implementation of this interface which performs the required steps of creating the Client and WebTarget and invoking it for us in the background. This would make it much easier for the developer and much more type-safe.

Creating the implementation of that interface is what MicroProfile Rest Client is all about.

The interface defined above can then be injected into any other CDI bean (you need to add @RegisterRestClient on the interface and preferably a CDI scope like @ApplicationScoped), and by calling the method, we actually perform a call to the endpoint and retrieve the result.

@Inject
@RestClient
private EmployeeService employeeService;

public void doSomethingWithEmployees() {
    ....
    ... employeeService.getAll();
    ....
}

Atbash Rest Client

The specification also allows for a programmatic retrieval of the implementation, for those scenarios where no CDI is available.
However, remember that if we are running inside a CDI container, we can always retrieve a CDI bean by using

CDI.current().select(SomeBean.class).get();

from within any method.

The programmatic retrieval is thus an ideal candidate to use it in other environments or frameworks like JavaFX (basically every Java SE program), Spring, Kotlin, etc …

The programmatic retrieval starts from the RestClientBuilder with the newBuilder() method.

EmployeeService employeeService = RestClientBuilder.newBuilder()
    .baseUrl(new URL("https://localhost:8080/server/data"))
    .build(EmployeeService.class);

The above also retrieves an implementation for the employee retrieval endpoint we defined earlier in this text.

For more information about the features like Exception handling, adding custom providers and more, look at the specification document.

The Atbash Rest Client is an implementation in Java 7 which can run on Java SE. It is created mainly for the Octopus framework to propagate the user authentication (like username) and authorization information (like permissions) to JAX-RS endpoints in a transparent, automatic way.

A ClientRequestFilter is created in Octopus which creates a JWT token (compatible with the MicroProfile JWT Auth specification) containing the username and permissions of the current user, and this filter can then be added as a provider to the MicroProfile Rest Client so that this security information is available within the header of the call.
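The Octopus filter itself is not shown here, but as a hedged sketch of the mechanism (the token value is a placeholder), such a provider boils down to a standard JAX-RS ClientRequestFilter registered on the builder:

public class AuthTokenRequestFilter implements ClientRequestFilter {

    @Override
    public void filter(ClientRequestContext requestContext) {
        // In Octopus this would be a JWT built from the current user; here just a placeholder.
        String token = "<jwt-of-current-user>";
        requestContext.getHeaders().add("Authorization", "Bearer " + token);
    }
}

EmployeeService employeeService = RestClientBuilder.newBuilder()
        .baseUrl(new URL("https://localhost:8080/server/data"))
        .register(AuthTokenRequestFilter.class)
        .build(EmployeeService.class);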

Since the current Octopus version is still based on Java SE 7, no other existing implementation could be used (MicroProfile is Java SE 8 based). The implementation is based on the DeltaSpike Proxy features and uses any Client API compatible implementation which is available at runtime.

Compliant, not certified.

Since all the MicroProfile specifications are Java 8 based, the API is converted to Java SE 7 as well. Except for a few small incompatibilities, the port is 100% interchangeable.

The most notable difference is creating the builder with the newBuilder() method. In the original specification, this is a static method on an interface, which is not allowed in Java 7. For that purpose, an abstract class, AbstractRestClientBuilder, is created which contains the method.

Other than that, class and method names should be identical, which makes the switch from Atbash Rest Client to any other implementation using Java 8 very smooth.

The Maven dependency has following coordinates (available on Maven Central):

<dependency>
   <groupId>be.atbash.mp.rest-client</groupId>
   <artifactId>atbash-rest-client-impl</artifactId>
   <version>0.5</version>
</dependency>

If you are running on Java 8, you can use the Apache CXF MicroProfile Rest client

<dependency>
   <groupId>org.apache.cxf</groupId>
   <artifactId>cxf-rt-rs-mp-client</artifactId>
   <version>3.2.4</version>
</dependency>

This can also be used in a plain Java SE environment. Other implementations, like the ones within Liberty and Payara Micro, aren't usable standalone in Java SE.

Remark

This first release of Atbash Rest Client is a pre-release, meaning that not all features of the specification are completely implemented. It is just the bare minimum for a POC within Octopus.
For example, the handling of exceptions (because the endpoint returned a status in the range 4xx or 5xx) isn't completely covered yet.

The missing parts will be implemented or improved in a future version.

Conclusion

With the MicroProfile Rest Client specification, it becomes easy to call some JAX-RS endpoints in a type-safe way. By using a declarative method, calling some endpoint becomes as easy as calling any other method within the JVM.

And since micro-services need to be called at some point from non-micro-service code, an implementation that runs on Java SE is very important. It makes it possible to call them from plain Java EE, Java SE, any other framework or a JVM-based language.

Have fun.
