MicroProfile support in Java EE Application servers

Introduction

With Java EE, you can create enterprise-type applications quickly because you can concentrate on implementing the business logic.
You can also create applications which are more micro-service oriented, but some handy ‘features’ are not yet standardised. Standardisation is the process of specifying best practices, and it of course takes some time to discover and validate those best practices.

The MicroProfile group wants to create standards for those micro-service concepts which are not yet available in Java EE. Their motto is

Optimising Enterprise Java for a micro-services architecture

This ensures that every application server following these specifications is compatible, just like with Java EE itself. And it prepares the introduction of these specifications into Java EE, now Jakarta EE under the governance of the Eclipse Foundation.

Specification

There are already quite a few specifications available under the MicroProfile flag. Have a look at the MicroProfile site and learn more about them over there.

The topics range from configuration, security (JWT tokens) and operations (Metrics, Health, Tracing) to resilience (Fault Tolerance), documentation (OpenAPI), etc.

Implementations

Just as with Java EE, there are different implementations available for each spec. The difference is that there is no Reference Implementation (RI), the special implementation which goes together with the specification documents.
All implementations are equal.

You can find standalone implementations for all specs within the SmallRye umbrella project or at Apache (mostly defined under the Apache Geronimo umbrella)

There are also specific ‘server’ implementations which are written specifically for MicroProfile. Mostly based on Jetty or Netty, they add all the individual implementations to form a compatible server.
Examples are KumuluzEE, Hammock, Launcher, Thorntail (the v4 version) and Helidon.

But implementations are also made available within Java EE servers, which brings both worlds together in a tightly integrated way. Examples are Payara and OpenLiberty, but more servers are following this path, like WildFly and TomEE.

Using MicroProfile in Stock Java EE Servers

When you have a large legacy application which still needs to be maintained, you can also add the MicroProfile implementations to the server and benefit from their features.

It can be the first step in taking parts out of your large monolith and placing them in a separate micro-service. When your package structure is already defined quite well, the separation can be done relatively easily and without the need to rewrite your application.

Adding individual MicroProfile implementations to the server is not always successful, though, due to the usage of advanced CDI features within the MicroProfile implementations. To try things out, take one of the standalone implementations from SmallRye or Apache (Geronimo) – Config is probably the easiest to test – and add it to the lib folder of your application server.
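A quick way to verify that the standalone Config artifact is picked up, without relying on CDI at all, is a programmatic lookup through the MicroProfile Config API. A minimal sketch (the property name is just an illustration):

import org.eclipse.microprofile.config.Config;
import org.eclipse.microprofile.config.ConfigProvider;

public class ConfigSmokeTest {

    public String readGreeting() {
        // Resolves the value from all registered ConfigSources
        // (microprofile-config.properties, system properties, environment variables)
        Config config = ConfigProvider.getConfig();
        return config.getValue("demo.greeting", String.class);
    }
}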

Dedicated Java EE Servers

There is also a much easier way to try out the combination: choosing a certified Java EE server which already has all the MicroProfile implementations on board. Examples today are Payara and OpenLiberty. But other vendors are going this way as well, as the integration has started for WildFly and TomEE.

Since the integration part is already done, you can just start using them. Just add the MicroProfile Maven bom to your pom file and you are ready to go.

<dependency>
   <groupId>org.eclipse.microprofile</groupId>
   <artifactId>microprofile</artifactId>
   <version>2.0.1</version>
   <type>pom</type>
   <scope>provided</scope>
</dependency>

This way, you can define how much Java EE or MicroProfile functionality you want to use within your application and achieve a gradual migration from existing Java EE legacy applications to a more micro-service-like version.
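As a small sketch of what that mix can look like (the bean and property name are made up for the example), an existing EJB can pick up a MicroProfile Config value through plain CDI injection:

import javax.ejb.Stateless;
import javax.inject.Inject;
import org.eclipse.microprofile.config.inject.ConfigProperty;

@Stateless
public class OrderService {

    // A Java EE component consuming a MicroProfile Config value
    @Inject
    @ConfigProperty(name = "order.limit", defaultValue = "100")
    private Integer orderLimit;

    public boolean accepts(int amount) {
        return amount <= orderLimit;
    }
}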

In addition, there also exist Maven plugins to convert your application to an uber executable JAR, or you can run your WAR file using the hollow jar technique, with Payara Micro for example.

Conclusion

With the inclusion of the MicroProfile implementations into servers like Payara and OpenLiberty, you can enjoy the features of that framework in the Java EE application server you are already familiar with.

It allows you to make use of these features when you need them, create more micro-service-like applications and make a start with the decomposition of your legacy application into smaller parts if you feel the need for this.

Enjoy it.

Atbash Summer release train

Introduction

All the Atbash repositories are still under heavy development, which is why they are released in one go. The last few days, such a release of almost all the libraries was performed.

This post gives a short overview of what you can find.

Big features

The big feature changes can be found in

  • Atbash JWT support related to cryptographic key support.
  • Atbash Rest client, a Java 7 port of the MicroProfile spec.
  • And Atbash Octopus, where KeyCloak, the MicroProfile JWT Auth spec and interoperability between authentication schemes are central in this release.

Cryptographic key support

Since there are many formats in which keys can be persisted (PEM, Java KeyStores, JWK, etc.), they are all stored internally as an AtbashKey. It contains the key itself (as a Java object), the identification and the type of the key (like RSA, private or public part, etc.).

Creating such keys can be achieved by using the KeyGenerator class, with the method generateKeys(). This class is available as a CDI bean or can be instantiated directly in those environments/locations where no CDI is available.

The parameter of the generateKeys() method defines which key(s) are created. This parameter can be created using a builder pattern.

RSAGenerationParameters generationParameters = new RSAGenerationParameters.RSAGenerationParametersBuilder()
        .withKeyId("the-kid")
        .build();
List<AtbashKey> atbashKeys = generator.generateKeys(generationParameters);

In the above example, multiple keys are generated since RSA is an asymmetric algorithm and thus a private and a public part are generated.

Writing a key can be performed with the KeyWriter class. It has a method, writeKeyResource, which can be used to persist a key in one of the formats. The format is specified as a parameter of type KeyResourceType, which can indicate the required format like PEM, KeyStore, JWK, etc.

The specific type of PEM (like PKCS1, PKCS8, etc …) is defined by the configuration parameters.

Another parameter defines the password/passphrase for the key (if needed) and one for the file as a whole in the case of the Java KeyStore format for example.

The last functionality around keys is the reading of all those keys in the supported formats. This functionality is implemented in the KeyReader class. It is again a CDI bean which can also be instantiated when no CDI environment is available.

It contains a readKeyResource() method which can read all the keys in a resource (like a PEM file, Java KeyStore, JWK, etc.). As a parameter, an instance of KeyResourcePasswordLookup is supplied which retrieves a password in those cases where it is needed (to read the file or decrypt the key).

The return value of the method is a list because a resource can contain more than one key and thus results in multiple AtbashKeys.

This key support is an initial version and will be improved in further releases of atbash-jwt-support with more features and more supported formats.

Atbash Rest Client

A first release was done mid-June and contained an implementation in Java 7 for Java SE and Java EE which is compatible with the MicroProfile Rest Client specification (see here). It allows you to ‘inject’ or create (useful in Java SE environments) a system-generated Rest client based on the definition of your JAX-RS endpoint in an interface class.

In this release, the RestClientBuilderListener from the MP Rest Client spec 1.1 is added and implemented so that we can define some additional providers in a general way. This is important for the Octopus release, so that we can add the credentials stored within the Octopus context to the JAX-RS call automatically, without the need to specify the providers manually.
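Such a listener is discovered through the ServiceLoader mechanism and gives you one central place to register providers. A minimal sketch, where OctopusTokenFilter stands in for a hypothetical provider class that adds the credentials:

import org.eclipse.microprofile.rest.client.RestClientBuilder;
import org.eclipse.microprofile.rest.client.spi.RestClientBuilderListener;

public class CredentialPropagationListener implements RestClientBuilderListener {

    @Override
    public void onNewBuilder(RestClientBuilder builder) {
        // Register the (hypothetical) filter which adds the current user's
        // credentials as a header on every generated Rest client.
        builder.register(OctopusTokenFilter.class);
    }
}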

Atbash Octopus

And of course, many new features are added to Octopus. They are migrated from the old Octopus or newly added.

The highlights are:

– Added support for the KeyCloak server. JSF applications can use the authentication and authorization from KeyCloak configured realms. Also, the AccessToken from it can be passed on in the header of other requests and verified by JAX-RS endpoints. The only thing which is needed is the location of the KeyCloak server and the realm config in JSON (which is supplied by KeyCloak).

– The SPI option to pass the expected password for a user can now handle hashed passwords. Both the ‘standard’ algorithms from MessageDigest, like SHA-256, and the key derivation function PBKDF2 can be defined easily (a generic PBKDF2 sketch follows after this list of highlights).

– The authorization annotations, like @RequiresPermissions, can be specified on JAX-RS methods without the need to define those resources as CDI or EJB beans.

– Authentication and authorization information can be converted automatically to an MP JWT Auth compliant format and used in calls to JAX-RS endpoints. This makes it possible, for example, to integrate JAX-RS resources protected by KeyCloak and MP JWT seamlessly.

And there are many other features, too many to describe here in detail. The user manual has also been started and will be announced soon.
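For those unfamiliar with PBKDF2, mentioned in the highlights above, this is roughly what such a hash computation looks like with the plain JDK APIs. Octopus wires this up for you; the iteration count and key length below are illustrative only:

import java.security.SecureRandom;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class Pbkdf2Example {

    public static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    public static byte[] hash(char[] password, byte[] salt) throws Exception {
        // 65536 iterations and a 256 bit derived key; tune these values for your environment
        PBEKeySpec spec = new PBEKeySpec(password, salt, 65536, 256);
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        return factory.generateSecret(spec).getEncoded();
    }
}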

Overview all released frameworks

Utilities : 0.9.2

Set of utilities for Java SE, CDI and plain JSF which are very useful in many projects running in one of these environments.

  • Added utility class for HEX encoding (next to the BASE64 encoding)
  • Added support for byte arrays and encoding (HEX and BASE64) through the ByteSource class.

JSON-smart : 0.9.1

A small library (for Java 7) which can convert JSON to Java instances and vice versa.

  • Added support for @JsonProperty to define the name of a JSON property.
  • Contains an SPI so that other naming annotations (like the Jackson one) can be used.

Atbash config : 0.9.2

Extension for the MicroProfile Config implementations. Also a Java 7 port of Apache Geronimo Config.

  • Configuration for the base name (with the ServiceLoader class) is optional.
  • Port of MicroProfile Config 1.3 features to Java 7.

JWT Support : 0.9.0

Convert Java instances to JWT and vice versa, and extensive support for cryptographic keys (reading, writing, creating), supporting multiple types (like RSA, EC, and HMAC keys) and formats (like JWK, JWKSet, PEM, and KeyStore).

  • Support for reading and writing multiple formats (PEM, KeyStore, JWK and JWKSet).
  • Better support for JWT verification with keys using the concepts of KeySelector and KeyManager.

Atbash config server : 0.9.1

Configuration source for MP Config as a server supplying config through JAX-RS endpoints.

  • Added Payara micro as supported server to serve the configuration.

Atbash Rest Client : 0.5.1

Rest client implementation for Java 7.

  • Included RestClientBuilderListener from MP Rest Client 1.1 (to be able to define providers globally)

Octopus : 0.4

  • Integration with Keycloak (Client Credentials for Java SE, AuthorizationCode grant for Web, AccessToken for JAX-RS)
  • Support for hashed passwords (MessageDigest ones and PBKDF2)
  • Support for the MP Rest Client and providers available to add tokens for MP JWT Auth and Keycloak.
  • Logout functionality for Web.
  • Authentication events.
  • More features for JAX-RS integration (authorization violations on JAX-RS resource [no need for CDI or EJB], correct 401 return messages, … )
  • Support for default user filter (no need to define user filter before authorizationFilter)

Conclusion

The release contains a lot of goodies related to security. In the coming months, new features will be added, support for Java 8 and 11 is planned, and user manuals and cookbooks will become available to get you started with all those goodies.

The Atbash repositories, with some more info and of course the code, can be found on GitHub.

Have fun.

MicroProfile 1.3 support for Jessie

Introduction

In a previous release of Jessie, support was added for the MicroProfile specifications. Initially, it was only for version 1.2 because this is the specification version for which you have the most implementations available: Payara Micro, Open Liberty, WildFly Swarm and KumuluzEE.

Now I have added support for version 1.3 which is already supported by Open Liberty and Payara Micro.

MicroProfile 1.3

With the release of MicroProfile 1.3, there are a few specifications added to the mix.

OpenAPI 1.0

The specification defines the documentation of your JAX-RS endpoints using the OpenAPI v3 JSON or YAML format.
The MicroProfile specification defines various ways in which this documentation can be generated, like using specific annotations, a static (fixed) document, a Java-based model generator and filters.
More information and usage scenarios can be found in the specification document.
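To give an impression of the annotation-based approach (the endpoint is made up for the example), a JAX-RS method can be documented roughly like this; the implementation then serves the resulting document on the /openapi endpoint:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import org.eclipse.microprofile.openapi.annotations.Operation;
import org.eclipse.microprofile.openapi.annotations.responses.APIResponse;

@Path("/orders")
public class OrderResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    @Operation(summary = "List all orders", description = "Returns the orders of the current user")
    @APIResponse(responseCode = "200", description = "The list of orders")
    public String getOrders() {
        return "[]"; // kept trivial for the example
    }
}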

OpenTracing 1.0

This specification will help you keep track of how requests flow between all your micro-services. It has 2 main goals: define how the correlation id and additional information are transferred between the different micro-services, and define the format of the trace records which are produced.
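JAX-RS endpoints are traced by the implementations without any code change; for other CDI beans an explicit span can be requested with the @Traced annotation. A small sketch (the bean and operation name are made up):

import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.opentracing.Traced;

@ApplicationScoped
public class InvoiceCalculator {

    // Creates an explicit span around this business method,
    // nested within the span of the incoming JAX-RS request.
    @Traced(operationName = "calculate-invoice")
    public double calculate(double amount) {
        return amount * 1.21;
    }
}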

More information can be found in the document at link.

REST Client 1.0

The last addition is the most attractive one for developers, I guess, at least for me. It builds on top of the JAX-RS Client specification of Java EE/Jakarta EE.
It allows you to have type-safe access to your endpoints without the need to interact programmatically with the Client API.
You define with an interface how the JAX-RS endpoint should be called, and by adding the required JAX-RS annotations (defining, for example, the method and the format like JSON) the JAX-RS client is generated dynamically.
You can read more about this nice feature in my previous blog post where I explored this specification and presented you with a client for Java SE.

What is available in Jessie?

In this release, support for MicroProfile 1.3 is added, as mentioned in the introduction. It means you can select the version within a dropdown and, later on, the server implementations capable of providing your selection are shown.

Not all specifications added in this 1.3 version have examples in the generated application yet. They will be added in a next version, but for those who want to get started, this version of Jessie can already help them.

There are 2 other improvements added to this version:
– Since there are quite a few specifications now, you can specify for which of them you want a simple example in the generated application. This doesn’t restrict you in any way from using the other specifications but can help you keep a better overview.
– A readme file is generated with more information about the selected specifications and about how some of them can be tested within the generated application.

Conclusion

Support for the MicroProfile 1.3 version is added to Jessie at the request of some users who wanted to get started with it. Example code for some of the specifications will be added soon.

You can find Jessie here.

Have Fun

MicroProfile Rest Client for Java SE

Introduction

One of the cool specifications produced by the MicroProfile group is the Rest Client for MicroProfile, available from MP release 1.3.

It builds on top of the Client API of JAX-RS and allows you to have type-safe access to your endpoints without the need to interact programmatically with the Client API.

MicroProfile compliant server implementations need to implement this specification, but nothing says we cannot expand the usage to other environments (with a proper implementation), like Java SE (JavaFX seems most useful here) and plain Java EE.

Atbash has created an implementation of the specification so that it can be used in these environments and will use it within the Octopus framework to propagate authentication and authorization information automatically in calls to JAX-RS endpoints.

The specification

A few words about the specification itself. JAX-RS 2.x contains a client API which allows you to access any ‘Rest’ endpoint in a uniform way.

Client client = ClientBuilder.newClient();
WebTarget employeeWebTarget = client.target("http://localhost:8080/demo/data").path("employees");
List<Employee> employees = employeeWebTarget.request(MediaType.APPLICATION_JSON)
        .get(new GenericType<List<Employee>>() {});

This Client API is great because we can use it to call any endpoint, not even limited to Java ones, as long as they behave in a standard way.

But things can be improved by moving away from the programmatic way of performing these calls to a more declarative way.

If we could define some kind of interface like this

@Path("/employees")
public interface EmployeeService {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    List<Employee> getAll();
}

Then we could just ask for an implementation of this interface which performs the required steps of creating the Client and WebTarget and invoking it for us in the background. This would make it much easier for the developer and much more type-safe.

Creating the implementation of that interface is what MicroProfile Rest Client is all about.

The interface defined above can then be injected into any other CDI bean (you need to add @RegisterRestClient on the interface and preferably a CDI scope like @ApplicationScoped), and by calling the method, we actually perform a call to the endpoint and retrieve the result.

@Inject
@RestClient
private EmployeeService employeeService;

public void doSomethingWithEmployees() {
    List<Employee> employees = employeeService.getAll();
    // ...
}

Atbash Rest Client

The specification also allows for a programmatic retrieval of the implementation, for those scenarios where no CDI is available.
However, remember that if we are running inside a CDI container, we can always retrieve some CDI bean by using

CDI.current().select(EmployeeService.class).get();

from within any method.

The programmatic retrieval is thus an ideal candidate for use in other environments or frameworks like JavaFX (basically any Java SE program), Spring, Kotlin, etc.

The programmatic retrieval starts from the RestClientBuilder with the newBuilder() method.

EmployeeService employeeService = RestClientBuilder.newBuilder()
    .baseUrl(new URL("http://localhost:8080/server/data"))
    .build(EmployeeService.class);

The above also retrieves an implementation for the Employee retrieval endpoint we used earlier on in this text.

For more information about features like exception handling, adding custom providers and more, have a look at the specification document.

The Atbash Rest Client is an implementation in Java 7 which can run on Java SE. It is created mainly for the Octopus framework to propagate the user authentication (like username) and authorization information (like permissions) to JAX-RS endpoints in a transparent, automatic way.

A ClientRequestFilter is created in Octopus which creates a JWT token (compatible with the MicroProfile JWT Auth specification) containing the username and permissions of the current user. This filter can then be added as a provider to the MicroProfile Rest Client to have this security information available within the header of the call.
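The Octopus filter is more elaborate, but the mechanism boils down to something like this sketch (the token value is a placeholder here; in reality it is built from the authenticated user and his/her permissions):

import java.io.IOException;
import javax.ws.rs.client.ClientRequestContext;
import javax.ws.rs.client.ClientRequestFilter;
import javax.ws.rs.core.HttpHeaders;

public class JwtPropagationFilter implements ClientRequestFilter {

    @Override
    public void filter(ClientRequestContext requestContext) throws IOException {
        // Placeholder for the serialized MP JWT Auth compatible token
        String jwtToken = "<serialized JWT token>";
        requestContext.getHeaders().add(HttpHeaders.AUTHORIZATION, "Bearer " + jwtToken);
    }
}

Such a filter can then be registered on the RestClientBuilder, or globally through a RestClientBuilderListener.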

Since the current Octopus version is still based on Java SE 7, no other existing implementation could be used (MicroProfile is Java SE 8 based). The implementation is based on the DeltaSpike Proxy features and uses any Client API compatible implementation which is available at runtime.

Compliant, not certified.

Since all the MicroProfile specifications are Java 8 based, the API is also converted to Java SE 7. Except for a few small incompatibilities, the port is 100% interchangeable.

The most notable difference is creating the builder with the newBuilder() method. In the original specification, this is a static method of an interface, which is not allowed in Java 7. For that purpose, an abstract class is created, AbstractRestClientBuilder, which contains this method.

Other than that, class and method names should be identical, which makes the switch from Atbash Rest Client to any other implementation using Java 8 very smooth.

The Maven dependency has the following coordinates (available on Maven Central):

<dependency>
   <groupId>be.atbash.mp.rest-client</groupId>
   <artifactId>atbash-rest-client-impl</artifactId>
   <version>0.5</version>
</dependency>

If you are running on Java 8, you can use the Apache CXF MicroProfile Rest Client:

<dependency>
   <groupId>org.apache.cxf</groupId>
   <artifactId>cxf-rt-rs-mp-client</artifactId>
   <version>3.2.4</version>
</dependency>

This implementation can also be used in a plain Java SE environment. Other implementations, like the ones within Liberty and Payara Micro, aren’t usable standalone in Java SE.

Remark

This first release of Atbash Rest Client is a pre-release, meaning that not all features of the specification are completely implemented. It is just the bare minimum for a POC within Octopus.
For example, the handling of exceptions (because the endpoint returned a status in the range 4xx or 5xx) isn’t completely covered yet.

The missing parts will be implemented or improved in a future version.

Conclusion

With the MicroProfile Rest Client specification, it becomes easy to call JAX-RS endpoints in a type-safe way. By using a declarative approach, calling an endpoint becomes as easy as calling any other method within the JVM.

And since micro-services need to be called at some point from non-micro-service code, an implementation which runs on Java SE is very important. It makes it possible to call them from plain Java EE, Java SE, any other framework or a JVM based language.

Have fun.

MicroProfile support for Jessie

Introduction

In a previous blog post, I introduced Jessie, a tool for creating Maven based applications which contain the skeleton for applications using Java EE, DeltaSpike, PrimeFaces, and Octopus (the declarative security framework for Java EE).

Although most people just copy and paste from an existing application, I find it very handy that I can generate them very easily, as I create quite a few demo-type applications.

The last few months, besides Java EE applications, I’m also creating more and more Eclipse MicroProfile based demos. So in the last few weeks, I added support for MicroProfile to Jessie. It generates a demo app which uses the MicroProfile specifications, with specific support for the different server implementations.

MicroProfile 1.2

Version 1.2 is not the latest version of the specification. It was released at the end of September 2017 but is supported by quite a number of implementations, as you can see on this overview page.

Besides the basic building blocks JAX-RS, JSON-P, and CDI, it contains

Configuration

This specification defines how configuration values can be read from different sources to customize your application.

Fault tolerance

It defines various options for resilient micro-services like retry policies, bulkheads, circuit breakers, and fallback options.
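A small taste of the annotations involved, on a made-up CDI bean:

import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.faulttolerance.Fallback;
import org.eclipse.microprofile.faulttolerance.Retry;

@ApplicationScoped
public class StockService {

    // Retry the remote call up to 3 times and fall back to a cached value afterwards
    @Retry(maxRetries = 3)
    @Fallback(fallbackMethod = "cachedStock")
    public int currentStock(String productId) {
        throw new IllegalStateException("remote service not reachable");
    }

    public int cachedStock(String productId) {
        return 0;
    }
}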

JWT Authentication

This first version defines the basics of propagating authentication and authorization information to the micro-service using JWT. Inspired by OAuth2 and OpenId Connect, it is possible to define the user information and the roles the user has in a self-contained way so that no external parties need to be consulted.
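On the receiving side, the verified token is exposed as a JsonWebToken principal which can simply be injected. A minimal sketch (the resource is invented; the JAX-RS Application also needs the @LoginConfig(authMethod = "MP-JWT") activation):

import javax.annotation.security.RolesAllowed;
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import org.eclipse.microprofile.jwt.JsonWebToken;

@Path("/me")
@RequestScoped
public class UserResource {

    @Inject
    private JsonWebToken callerPrincipal;

    @GET
    @RolesAllowed("user") // the groups claim of the token drives the role check
    public String caller() {
        return callerPrincipal.getName();
    }
}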

Metrics

It defines how developers can expose some ‘telemetry’ data in a uniform way. It has support for things like the number of invocations, statistics on the execution time of endpoints, and so on.

Health Checks

The health checks are on a different level than the metrics. With the checks, we can verify if our service is still performing well, or whether we should kill it and start a new one. Besides standard checks (memory usage, CPU, etc.), you can define your own checks very easily.
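A custom check is simply a CDI bean implementing the HealthCheck interface; the overall result is exposed on the /health endpoint. A sketch with an invented threshold:

import java.io.File;
import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.health.Health;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;

@Health
@ApplicationScoped
public class DiskSpaceCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        long freeSpace = new File("/").getFreeSpace();
        return HealthCheckResponse.named("disk-space")
                .withData("free", freeSpace)
                .state(freeSpace > 100 * 1024 * 1024) // UP as long as 100 MB is free
                .build();
    }
}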

Application Generation

In this first version, the generation is still at a basic level. The online version of Jessie can be found here.

After you have selected the ‘MicroProfile’ technology stack and added some basic information about the Maven artifact name, you can select one of the 4 implementations that support MicroProfile 1.2 completely:

– WildFly Swarm – 2017.12.1
– Liberty 17.0.0.3
– KumuluzEE 2.5.2
– Payara Micro 5.181

The app contains examples of all specifications mentioned above (the JWT Auth code is not always working completely due to small issues in the configuration and the calling of the endpoint) and they can be seen in action from the index.html overview page.

Conclusion

In this second official release of Jessie, you can now also generate MicroProfile 1.2 applications. You can choose one of the 4 supported implementations, and all the required code for generating a fat JAR application is in place.

In a future version of Jessie, newer releases of MicroProfile will be supported, the small known issues will be handled and a selection of the included specifications will be possible.

Have fun.

Atbash repositories overview

Now that I’ve been working for a bit more than 6 months on the Atbash repositories, it is time to give you an overview of them.

For the moment, almost all of them are geared towards Java EE 7 and Java 7.

The idea is to create a MicroProfile compatible experience on Java EE 7, as much as possible of course. Since MicroProfile is based on Java 8, it is not always easy (or possible) to have an identical experience.

But the idea is to give developers the possibility of a “smooth” migration from Java EE 7 to MicroProfile by having all, or the most important, specifications available on those servers.

Utility repository

Github

This contains some code used in multiple other Atbash repositories. It is not directly related to the goal but can have some use in any project.

utils-se

– BASE64 encoder and decoder
– Utility class related to searching classes and resources on different class loaders (Current Thread, class loader of utility class or System class loader) and instantiating classes with advanced argument matching.
– Reading library or framework version from manifest file
– Check to see if an application is running within a CDI container
– Utilities related to verification and handling proxies (like determining original class)

– Reflection utilities useable in unit testing (so that we can read and set private properties without the need for setters and getters which would only be needed for testing)

utils-cdi

– Programmatic retrieval of CDI Beans, also for an optional bean (bean defined by Producer method which may or may not be present)
– Retrieval of a bean defined by Producer method involving generic types (issue due to type erasure)

– Fake bean manager usable within unit tests to supply some mock CDI bean instances when using programmatic CDI bean retrieval.

utils-jsf

– Creating a method expression based on the expression value
– Retrieving property values from JSF components (taking into account expressions and static values)
– Custom component finding which starts at the component itself and then extends to its parents until the component is found or the view root is reached.

Atbash JSON (Java SE)

GitHub

Alternative JSON-B implementation (not following the specs!) to convert Java instances to JSON and vice versa. The code is adapted from the JSON-smart framework (no longer maintained) for the use cases within Atbash (see Atbash JWT support) and extended with customization options.

JWT Support (Java SE + CDI)

GitHub

The code in this repository supports JWTs (JSON Web Tokens, signed but also encrypted) as they appear in protocols like OAuth2 and OpenId Connect.

To support this encoding and decoding, there is also a unified handling of cryptographic keys. It is capable of reading them from a PEM file, a Java KeyStore, a JWK and a JWKSet. Various encrypted formats for the private key are supported.

But the idea goes a bit further than just supporting OpenId Connect JWTs. Instead of exchanging data as JSON, why not exchange it as a JWT? That way, we are not only transferring information but also adding some guarantees like sender verification and end-to-end protection, as we can detect changes through the signing.

Atbash Config (Java SE + CDI)

GitHub

The Atbash config is a Java 7 port of the MicroProfile Config specification and the Apache Geronimo implementation.

But there are also some extensions created, like
– Support for custom named configuration files.
– Support for stages so that, based on a system property, values can be overridden in environments like a test environment.
– Support for custom formatted date values.
– Logging of configuration values at startup of the application.
– Support for the YAML format.

There is also a Config provider available for testing, where the configuration values are stored in a HashMap.

Atbash Config server (MicroProfile Product)

GitHub

Allows defining the configuration values for MicroProfile applications in a central place. These applications can retrieve the values using a JAX-RS endpoint.

There is also a client implementation available which, when added to the application, retrieves the values automatically, based on the configured endpoint of the config server.

Jerry (JSF)

GitHub

Jerry defines a JSF Renderer interceptor which allows you to perform various tasks on any JSF component.

It is the basis for a more advanced validation mechanism for JSF and the declarative security features of Octopus for JSF components.

Valerie (JSF, Bean Validation)

GitHub

Using the interceptor mechanism of Jerry, Valerie places the validation constraints of the Java properties (like not null, length, etc.) automatically on the JSF components that use them.
This way, the visual aspects of these constraints appear in the JSF component HTML representation and in the JSF validation, without the need to put these constraints as JSF attribute values.

Octopus (Java SE, Java FX, CDI, JSF, JAX-RS)

GitHub

Octopus is a large framework consisting of small artifacts that bring you every aspect of authentication and authorization for Java EE and MicroProfile applications.

  • Permission-based framework
  • Secures URL, JSF components, and CDI and EJB method calls
  • Support for Java SE, JavaFX, JAX-RS and JSF
  • Integrates with OAuth2, OpenId Connect, LDAP, database, JWT, KeyCloak, CAS, …
  • Custom OpenId Connect solution within a microservices environment
  • Compatible with (and using) Microprofile (Config, Rest Client, MP-JWT, …), Java EE Security API, …
  • Very flexible, can be easily integrated within your application
  • Tightly integrated with CDI
  • Type-safe definition of permissions
  • Declarative declaration of JSF security (with tags, not using rendered attribute)
  • A custom voter can be created for more complex security requirements

Dependencies overview

The following Neo4J graph gives you an overview of the dependencies between these repositories and the relation with other libraries. Octopus is left out of the image because it would complicate it too much.

Have fun

Atbash Configuration Server

Introduction

The MicroProfile Config specification defines how configuration values can be picked up by the application. It has the concepts of a ConfigSource and Converters to supply the values you need.

By default, only a ConfigSource for a file defined on the classpath and for environment and system variables is available.

But since we have the concepts in place, we can create our custom ConfigSource to retrieve the configuration from a central server.
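The contract of such a ConfigSource is small. A sketch against the MicroProfile Config API, with the remote call left out (the actual Atbash implementation differs):

import java.util.HashMap;
import java.util.Map;
import org.eclipse.microprofile.config.spi.ConfigSource;

public class RemoteConfigSource implements ConfigSource {

    // In a real implementation these values would be fetched (once or periodically)
    // from the configuration server's JAX-RS endpoint.
    private final Map<String, String> values = new HashMap<>();

    @Override
    public Map<String, String> getProperties() {
        return values;
    }

    @Override
    public String getValue(String propertyName) {
        return values.get(propertyName);
    }

    @Override
    public String getName() {
        return "remote-config";
    }
}

The implementation is registered through the ServiceLoader mechanism, in META-INF/services/org.eclipse.microprofile.config.spi.ConfigSource.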

Configuration Server

Instead of putting the configuration of each application within the application itself, we can store it in a central location.

It is very convenient to have an overview of the configuration of all your applications in one place, which also makes updating it very simple.

The first version of the Atbash Configuration Server exposes the configuration of an application through a JAX-RS endpoint.

The URL

/config/{application}

will give you the configuration values as a JSON structure for the specified application.

{
   "pets":"dog,cat",
   "dateValue":"2017-11-15",
   "value2":"500",
   "classname":"be.atbash.config.examples.se.testclasses.ConvTestTypeWStringCt",
   "value1":"Classpath Value",
   "value3":"true",
   "dateValueFmt":"16-11-2017,dd-MM-yyyy"
}

Creating the Configuration server

The configuration server is available within the directory <server> of the repository https://github.com/atbashEE/atbash-config-server

You can build the server in 3 flavors

1. Java EE war.

With the Maven command mvn package -Pee7-server, a WAR file is created which can be run by any Java EE 7 compliant server (Java 8 required).

2. WildFly Swarm executable JAR

The Configuration server can also be wrapped within a WildFly Swarm application with the mvn package -Pwildfly-swarm command.

3. WebSphere Liberty executable JAR

There is also a profile defined to create a WebSphere Liberty JAR file; use the mvn package -Pliberty command.

More servers will be supported when they have support for Config Spec 1.2.

Configuration of Server

Define the following configuration information within a config-server.properties or config-server.yaml file on the classpath.

– rootDirectory: Root directory containing the configuration files for the application. See further for a directory structure.

– applications: List of known applications.

example (as yaml)

rootDirectory : /Users/rubus/atbash/config-server/server/demo
applications : ["app1", "app2"]

Possible directory structure

/demo
├── /app1
│  ├── app1.properties
│  └── app1-test.properties
└── /app2
   └── app2.properties

A subdirectory for each application so that the configuration values are nicely separated for each application.

All the features of Atbash config (like stage, yaml support, date support, etc) are available.

Client configuration

A ConfigSource implementation is available which integrates the result of the call to the JAX-RS endpoint into the configuration of the application.

In this first version, the Atbash configuration extension is required (which makes it a bit more work), but this requirement will be removed in the next version.

Add the Maven artifact to your dependencies

    <dependency>
        <groupId>be.atbash.config</groupId>
        <artifactId>atbash-config-client</artifactId>
        <version>${atbash-config.version}</version>
    </dependency>

Define the base name of the configuration file which contains the configuration of the Config server.

Create a class implementing the be.atbash.config.spi.BaseConfigurationName interface and define this class for usage with the ServiceLoader mechanism of Java.

Define the class name within src/main/resources/META-INF/services/be.atbash.config.spi.BaseConfigurationName

be.atbash.config.examples.DemoBaseName

Define the configuration within the specified properties file.

config.server.url=http://localhost:8181
config.server.app=app1
config.server.stage=test

* config.server.url: The _root_ of the Config server endpoint.

* config.server.app: The name of your application

* config.server.stage: (optional) Indication of the stage of the application to retrieve specific values.

We can use all the regular features of MicroProfile Config or Atbash config to specify the stage value, like environment or system properties.

The client can be used in a Java SE, Java EE or a MicroProfile environment.

Conclusion

With the Atbash Configuration server, you can store the configuration values for all your applications in a central location. The values are picked up through a JAX-RS call by the client ConfigSource.

The server can be run on WildFly Swarm, Liberty or a regular Java EE 7 server. The client can be used in all environments capable of using the MicroProfile Config spec (including Java SE and Java EE).

A future version will bring you some more features and makes it easier to set things up.

Have fun.

Declarative Security permissions for Java FX FXML Views

Introduction

Using declarative permissions is the preferred way to have security constraints on GUI elements, separate from the business logic and fine-grained because we are using individual permissions/access rights.
With JavaServer Faces, you can add custom tags, but you can also do something similar within JavaFX FXML views. And recently I found the time to create a POC for an idea that I had some years ago.

Declarative Permissions

When defining indications within the view of your application, GUI elements can be made active or invisible depending on the permissions the user has. By placing them within your view markup you have 2 major benefits:

  • The ‘definition’ of when an element should be visible is done on the element itself and thus it is very clear on which element it operates
  • It isn’t mingled with your other business code.

Also, using permissions is much better and easier to work with than roles. When each feature is assigned specific permissions which the user needs to have, no application redeploy is needed when a feature may now be used by more users: you just have to assign the permission to those users.
Roles are assigned to users, and thus when a single feature needs to be accessible by a group of people, you can’t just assign the role to them as it would also give them access to other features. And thus you end up changing the code.

JavaFX FXML

With the FXML views, you can define how the view is presented to the end users. Just as with JavaServer Pages tags, it is also possible to create your own custom components to tailor the system to your needs.
Adding additional information to the standard components is also possible by means of the userData tag.

<Button mnemonicParsing="false" onAction="#permission" text="Only admin permission">
  <userData>
    <RequiresPermissions value="admin"/>
  </userData>
</Button>

You can add information there which can be used by your code later on. You can even add multiple data sets by using the JavaFX Collections.

Here in our case we add an instance of the Octopus JavaFX RequiresPermissions class and specify that the user needs the permission admin before the button will be visible.

In order to make this work, you also need to import the class (just as within regular Java code)

<?import be.atbash.ee.security.octopus.javafx.authz.tag.RequiresPermissions?>

Processing the view

In order to remove the component from the view when the user doesn’t have the required permissions, we need to scan the view just before it is ‘rendered’ to the screen.
This can be achieved easily with the excellent afterburner framework, which has a method getView() defined within the FXMLView class.

Octopus defines a child class, be.atbash.ee.security.octopus.javafx.SecureFXMLView, which performs the following steps when the view is retrieved

  • Scan all view nodes
  • Verify if there is an instance of RequiresPermissions within the userData section
  • Check if the user has the permission or not
  • If not, set the node to not visible (node.setVisible(false))

Octopus Java FX support

Next to the processing of the view, Octopus now also has support for JavaFX views by helping the developer in creating a login screen and by integrating with the Octopus core classes, which perform the required steps for authentication and authorization.

This support is built on top of the afterburner framework which makes developing JavaFX applications easy.

You can start the application by using the start method from the be.atbash.ee.security.octopus.javafx.SecureFXMLApplication class.

SecureFXMLApplication.start(stage, new LoginView(), new UserPageView());

The loginView itself must extend from be.atbash.ee.security.octopus.javafx.LoginFXMLView and contain components with ids user, password and loginButton.

The start() method must also receive an instance of the first real page which will be shown after the user is authenticated. This is an FXML view where the Java class extends from be.atbash.ee.security.octopus.javafx.SecureFXMLView. These views are scanned for the contents of their userData sections and components are hidden when the user doesn’t have the required privileges.

Page navigation can be performed by be.atbash.ee.security.octopus.javafx.SecureFXMLApplication#showPage, and this method also performs the hiding of components based on the authorization info for that view.

Demo

The Atbash Octopus framework code has a demo application to show this feature in action. It can be found on GitHub.

For those who want to have a quick idea of what it looks like, I also recorded a short video which demonstrates the demo.

Conclusion

Declarative permissions can also be used on JavaFX FXML views, just as you can do with JavaServer Faces. Instead of custom tags that define the required permissions, you define the permission requirements within the userData section which is available on each JavaFX component.
This is only a POC and the functionality will be extended in future versions of Atbash Octopus (as the rewrite of the Octopus framework is currently still in alpha).

Have fun with it

Generating (Maven) Projects

Introduction

The first thing we do when starting a new application is setting up our project. A lot of people use Maven for this, and you have various options for how you can do this.
Since I create a lot of projects for testing out various features of the Atbash projects or for trying things out (I have around 300 projects on my computer today), I was searching for some help with this.

Application setup

When trying things out, I need some minimal project setup. Not only do I need the Maven pom file with the required dependencies, I also need some files like configuration or a login file for Octopus.

I do not need tools to generate the JPA entities, JSF screens or JAX-RS endpoints for me. Creating a few of them is not difficult with current IDEs, and such automatic generation is mostly too opinionated, so it is not usable in production systems either.

So what are the options to get this minimal project setup?

Copy and Paste

You can always copy such a minimal project you have already created and make some modifications:

  • Change the groupId and artifactId of the pom file
  • Update the dependencies
  • Update the configuration file to match your new dependencies and use case you want to test out.
  • Change the java package with your java code
  • Add or remove code depending on the feature for this use case.

This is a valid procedure and in most cases the most efficient one. You don’t need to learn any new tool or extend it with your features (like Octopus in my case), but of course, it is not the most fun way of doing things.

JBoss Forge

JBoss Forge https://forge.jboss.org/ is a tool specifically designed to get your application up and running fast. It not only generates the project for you but can also help you in generating those well-structured classes like JPA entities, JAX-RS endpoints, and screens with JSF or Angular(JS).

I used it in the past and it can be a valuable tool, but it is not the right thing for what I want.
Although you can generate your Maven project, you need a script and/or a custom add-on for the features that I want (specific dependencies within the pom.xml file and some configuration and Java files).

It also has a lot of options to generate code for you, but again this is sometimes opinionated and not what you or your company wants to use.

Other tools

Since there are various other tools on the internet, the topic seems to be important for a lot of people. Similar tools which I found are generjee and factorEE.
Not all of them are maintained anymore, or they focus on the wrong things (for my use case) like generating code.
I also find a CLI very important in this case, since it allows you to create an application very rapidly (and recreate it based on the same configuration).

Jessie

So, due to the lack of the right tool for me, I created one of my own called Jessie (Java EE Start project, as it was geared at that time towards Java EE). Last year, with the release of CDI 2.0, I did a rewrite of it using the Java SE CDI container and included the usage of Thymeleaf as templating mechanism, so that it became easier to generate files which can be customized.

In the OpenSource spirit of Atbash, I placed the code on GitHub and created a front end for it.
You can play with it on the OpenShift V3 instance, or you can download the code; it just needs a few parameters within a YAML config file and the CLI version generates your project.

The first version has support for Java EE and additional frameworks like DeltaSpike, PrimeFaces, and Octopus, the security framework.

Since I’m also involved with MicroProfile lately, a version for MicroProfile, with support for the different vendor implementations like LibertyIO, WildFly Swarm, Payara Micro, etc., will be created later on.

Conclusion

When setting up a new project, the most efficient solution is creating it from scratch or starting from a similar existing project and making the required adaptations.
Another useful tool is JBoss Forge, mainly when you are interested in generating some code.

Jessie was created a few years ago as a toy and is now available on GitHub. It was a good learning opportunity for CDI 2.0, templating with Thymeleaf and creating a tool which runs in multiple environments (CLI, JavaFX, Web/JSF).

Maybe it is also helpful for you.

Have fun.

Http Signatures for End-To-End protection

Introduction

In a distributed system, it is not enough to know who called you; you also need to make sure the data which you receive is not altered in transit.
Tokens (like Bearer JWT tokens according to the OpenId Connect or MicroProfile JWT Auth specifications) can be captured and briefly used to send fake requests to your endpoints.
And SSL (like using HTTPS connections) cannot guarantee that your message isn’t read or altered by someone, due to the possibilities of SSL proxy termination or even Man In The Middle intermediaries that fake correct SSL.

Encryption or protection

The solution is that your processes somehow make sure that the communication between them is ‘safe’, so that the processes themselves (your client and your server application) perform these tasks and not some network layer (SSL, in fact, doesn’t fit in any OSI based network layer) of your (virtual) machine.

When your process is performing these tasks, it is possible to create a security context between the 2 processes, and then each process can ensure or detect that the message is unaltered or can’t be read, regardless of any proxy or Man In The Middle.

Whether you need encryption of the message (because you have confidential data) or signing (sensitive data) depends on the data. But at least you need the signing of the message so that we can guarantee the message is actually sent by someone we trust and isn’t altered.

Http Signatures

There is a standard created to have a mechanism to implement this kind of End-To-End protection. It is called ‘HTTP Signatures’ and is designed at the IETF: https://tools.ietf.org/id/draft-cavage-http-signatures-09.html

It is very HTTP friendly as it adds an additional header to the request. Within this header, enough information is placed so that the target process can verify if the message was altered or not.

This header is called Signature

Signature : keyId="rsa-key-1",algorithm="rsa-sha256",headers="(request-target) date digest content-length",signature="Base64(RSA-SHA256(signing string))"

The specification defines the structure and a framework for how the header should be created and how the verification process must be performed, but it leaves a lot of freedom to the developer to tailor this protection to his/her needs.

The idea is that a signature is calculated based on some parameters, which are mostly just other headers. By recalculating it on the receiving side, we can verify if these values were changed.

These parameters are described within the headers parameter; in the example, the following headers are used: (request-target) date digest content-length.

The values of these headers are concatenated (there exist a few rules about how to do it) and over this string the cryptographic signature is calculated.
The calculation is done using the algorithm specified by the algorithm parameter (rsa-sha256 in our example above), and the key with id rsa-key-1 is used for this in our example.
There are a number of crypto algorithms allowed, but RSA keys are the most popular ones.
This signature value is binary, so a BASE64 encoded value is placed within the header.
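To make this concrete: per the draft, the signing string for the example header above is built from one line per listed header (lower-cased name, colon, space, value), along these lines:

(request-target): post /order
date: Tue, 07 Jun 2014 20:51:35 GMT
digest: SHA-256=X48E9qOokqqrvdts8nOJRJN3OWDUoyWxBf7kbu9DBPE=
content-length: 18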

In theory, all headers can be used, but some of them are more interesting than others.
For example

  • (request-target) is a pseudo header value containing the concatenation of the HTTP method and the path of the URL. If you also like to include the hostname (which can be interesting if you want to make sure that a request for the test environment never hits production), you can add the host header to the set of headers which is used for the calculation of the signature value.
  • date header contains the timestamp when the request was created. This allows us to reject ‘old’ requests.
  • digest is very interesting when there is a payload sent to the endpoint (like with put and post). This header then contains the hash value of the payload and is a key point in the end-to-end protection we are discussing here.

On the receiving side, the headers which are specified within the headers parameter of the Signature header are checked. For example:

  • date is verified to be within a skew value of the system time
  • digest is verified to see if the header value is the same as the calculated hash value of the payload.

And of course, the signature is verified. Here we verify the signature value with the public key identified by the keyId parameter, using the algorithm specified.
And that value should match the value calculated from the headers.

Using Http Signatures

I have created a Proof Of Concept of how this can be integrated with JAX-RS, on the client and the server side.

For the server side, it is quite simple since you can add filters and interceptors easily by annotating them with @Provider. By then just indicating whether these interceptors and filters should do their work (the verification of the HTTP Signature), you are set to go.

@Path("/order")
@RestSignatureCheck
public class OrderController {
    // JAX-RS methods of the endpoint
}

On the client side, you can register the required filters and the WriterInterceptor manually with the Client, something like this:

Client client = ClientBuilder.newClient();
client.register(SignatureClientRequestFilter.class);
client.register(SignatureWriterInterceptor.class);

One thing on the roadmap is to integrate it with the MicroProfile Rest Client way of working, so that you can just register the Signatures feature and you’re done.

The initial code can be found on GitHub.

Conclusion

With the HTTP Signatures specification, you can add a header which can be used to verify if the message was changed or not. This is a nice, non-intrusive way of having End-To-End protection implemented, which can complement the SSL features to make sure that no intermediary party changes the message.

Have fun.