A renderer interceptor concept for JavaServer Faces

Introduction

A renderer is an important concept within JSF. It is responsible for creating the HTML output of a page from the component tree, but processing the response from the browser is also its responsibility.

A renderer takes the values from the attributes of a component to do its job. So developers mostly don’t interact directly with renderers, unless you are writing custom components and write a renderer yourself.

But sometimes it could be handy if we could create some kind of interceptor for the renderer’s methods, so that we could have

  • Generic logic for component-based security
  • Retrieve validation info (like required, max size) from Beans so that information isn’t defined multiple times.

That is what Jerry and Valerie provide for you.

Origin

The first RendererInterceptor was committed some 10 years ago within the MyFaces Extensions Validator project (ExtVal), a set of useful extensions for Java EE programs covering CDI, Bean Validation, and JSF.

With the RendererInterceptor you have a before and after method for every major method in a Renderer, which are decode, encodeBegin, encodeChildren, encodeEnd and getConvertedValue.
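Conceptually, the interceptor contract looks like this (a simplified sketch; the exact signatures in ExtVal/Jerry may differ slightly):

import javax.faces.component.UIComponent;
import javax.faces.context.FacesContext;
import javax.faces.render.Renderer;

// Simplified sketch of the RendererInterceptor concept.
public interface RendererInterceptor {

   // Called around Renderer.decode()
   void beforeDecode(FacesContext facesContext, UIComponent uiComponent, Renderer renderer);
   void afterDecode(FacesContext facesContext, UIComponent uiComponent, Renderer renderer);

   // Called around Renderer.encodeBegin()
   void beforeEncodeBegin(FacesContext facesContext, UIComponent uiComponent, Renderer renderer);
   void afterEncodeBegin(FacesContext facesContext, UIComponent uiComponent, Renderer renderer);

   // ... and the same before/after pairs exist for encodeChildren,
   // encodeEnd and getConvertedValue.
}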

About 3 years ago, I extracted the RendererInterceptor concept from ExtVal to become Jerry and rewrote it to have a deep integration with CDI, targeting it exclusively at JSF 2.x. The validation extension parts for Bean Validation were bundled into Valerie.

That way, the most powerful code from ExtVal could be used standalone and Jerry could become the heart of the Declarative Permissions based framework Octopus for the JSF component security.

Now, a new rewrite is performed to bring it within the Atbash project family and restrict its usage to Java EE 7 (and later) using the CDI 1.1 features. This is the new release, version 0.9.0.

The ComponentInitializer is greatly enhanced, which I will explain in this blog.

ComponentInitializer

The ComponentInitializer is a specialized form of a RendererInterceptor and can change the JSF component just before it is encoded (HTML is generated) by the Renderer.

This can be handy if you want to apply some cross-cutting concern to all JSF components, or to all components of a certain type (input components, for example), without the need to actually know all these JSF components or even adjust the renderers.

Classic examples are

  • Change the rendered attribute based on the presence of an inner tag to have declarative Component security (This is what Octopus does)
  • Change the look of components, like setting the background color of all required input fields.
  • Retrieve information from bean properties (like @NotNull) and set the required property. That way information is only required on the model classes and doesn’t need to be duplicated on the JSF components (This is what Valerie does)
import java.util.Map;

import javax.enterprise.context.ApplicationScoped;
import javax.faces.component.UIComponent;
import javax.faces.context.FacesContext;

import org.primefaces.component.inputtext.InputText;

// ComponentInitializer comes from the Jerry artifact.
@ApplicationScoped
public class RequiredInitializer implements ComponentInitializer {

   @Override
   public void configureComponent(FacesContext facesContext, UIComponent uiComponent, Map<String, Object> metaData) {
      InputText inputText = (InputText) uiComponent;

      String style = inputText.getStyle();
      if (style == null) {
         style = "";
      }

      if (inputText.isRequired()) {
         // Mark the required field with a background color.
         style = style + " background-color: #B04A4A;";
      } else {
         // Undo the marker; the initializer can be executed multiple times.
         style = style.replace("background-color: #B04A4A;", "");
      }
      inputText.setStyle(style);
   }

   @Override
   public boolean isSupportedComponent(UIComponent uiComponent) {
      return uiComponent instanceof InputText;
   }
}

 

The above example sets the background color of all input fields which are required. This is just an example; you should probably implement such functionality using styleClasses so that the color can be defined outside the code, in CSS.
All CDI beans implementing the ComponentInitializer interface are picked up automatically and applied to the correct JSF components based on the return value of the isSupportedComponent() method.

Due to the enhanced functionality within this version, a ComponentInitializer can be executed multiple times and thus we need to ‘undo’ our changed style in case the component is not required.

Just as in the previous release, by default the ComponentInitializer is executed only once. When the component is rendered again, due to an AJAX request for example, the ComponentInitializer isn’t executed anymore. This is done to minimize the overhead of the feature.

However, when the component is inside some kind of repeatable parent component, the initializer is executed multiple times. The repeatable constructs detected are <ui:repeat> and UIData constructs like <h:dataTable>.
This is required because the information can be different for each row, like the case where the input field's required property depends on the row (required="#{_row.field.required}").
This logic was added in this release and is important to know when you upgrade from an older version.

RepeatableComponentInitializer

The new RepeatableComponentInitializer construction in this release is always executed, regardless of the ‘location’ of the component.

A use-case for this situation is the fact that a PrimeFaces table footer is always rendered, even when no pagination is shown (resulting in a thin colored line). With a RepeatableComponentInitializer you can remove this footer (or upgrade to the upcoming 6.2 version where this is fixed by PrimeFaces code).

Setting up

You just need to add the Jerry (and/or Valerie) Maven dependency to your JSF application.

<dependency>
   <groupId>be.atbash.ee.jsf</groupId>
   <artifactId>jerry</artifactId>
   <version>${atbash.jerry.version}</version>
</dependency>

And you are ready to go.

Some more information can be found in the user manuals for Jerry and Valerie.

When you have an application using an older version of Jerry and/or Valerie (0.4.1 and older) you can use the code within the Atbash migrator project to help you convert the import statements, dependencies and JSF namespaces to the new Atbash values.

Conclusion

With the RendererInterceptor you can perform all kinds of useful code in a generic way, without knowing or changing the Renderers. The new release has a greatly improved ComponentInitializer execution (detecting repeated structures like <ui:repeat> and <h:dataTable>), adds the new RepeatableComponentInitializer concept and is part of the ever-growing Atbash project family.

Have fun.

 

Implicit converters for MicroProfile Config

Introduction

Last month (January 2018), an update of the MicroProfile Config specification, version 1.2, was released which makes it easier to convert the configuration values to class instances.
It adds the concept of implicit/automatic converters and removes the requirement for creating specific converters in many cases.
The first implementations are already available and Atbash Configuration is now also updated.
This blog describes what the implicit/automatic converters are and how you can use them.

Converters

The configuration values we define are Strings because they are defined as a set of characters in the configuration files. Reading the configuration could be limited to a system which is only capable of reading those Strings, where developers need to convert these values to the proper instances (like Integer, Boolean or application-specific classes) themselves.
But in that case, each developer will write his own set of converters, and in the case of Integers and Booleans, for example, they are all the same.
So the framework itself can take care of these conversions. And since we then already have the concept of a converter, it is easy to foresee an SPI which allows developers to create custom converters for their application-specific classes.
These basic converters for Integer and Boolean, for example, were present since the first release of the spec, just as the SPI to add your own converters.

Implicit Converters

But the creation of all those custom converters can be minimized further. Since our application-specific classes can be created from a String, they probably have some means of instantiating them through a method which takes a String parameter:
– The constructor
– A static method with a name like valueOf or parse
So with the new implicit/automatic converters feature, when we have some class like this
public class ClassWithStringConstructor {
    private String value;

    public ClassWithStringConstructor(String value) {
        this.value = value;
    }

    public String getValue() {
        return value;
    }
}
and have a parameter
some.key = Atbash

We can retrieve an instance of the class, filled with the value, using

config.getValue("some.key", ClassWithStringConstructor.class)

or when we are in a CDI enabled context, we can also use injection

@Inject
@ConfigProperty("some.key")
private ClassWithStringConstructor parameterValue;
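The same mechanism works for classes without a String constructor but with a static factory method; the specification also looks for methods like valueOf(String) or parse(CharSequence). A sketch with a hypothetical class:

public class ClassWithFactoryMethod {

    private final String value;

    private ClassWithFactoryMethod(String value) {
        this.value = value;
    }

    // Picked up by the implicit/automatic converter mechanism.
    public static ClassWithFactoryMethod valueOf(String value) {
        return new ClassWithFactoryMethod(value);
    }

    public String getValue() {
        return value;
    }
}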

Collection type parameters

Another nice addition to this 1.2 release is the use of collection type parameters: the ability to have multiple parameter values, like a list of values which is converted to an Array, List or Set.
It is quite common that a configuration value is, in fact, a list of values.
pets = dog,cat
Previously, the developer needed to split the String and use the individual parts separately (although this is a very simple task with String.split()). Now this is done automatically.
config.getValue("pets", String[].class)

With injection, we can automatically convert it to a List or a Set.

@Inject
@ConfigProperty(name = "pets")
private List<String> pets;

The List and Set variants can’t be used in a programmatic way (using config.getValue) since the second parameter doesn’t accept parameterized types (like List<String>).
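Should you need a List programmatically anyway, a simple workaround is retrieving the array variant and wrapping it:

// Wrap the array result; needs java.util.Arrays.
List<String> pets = Arrays.asList(config.getValue("pets", String[].class));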

When you have a value which contains a comma in it, you need to escape it
a,b,c\\,d

which results in 3 configuration values, where the last one is c,d.

Implicit combined with Collection

We can even combine the collection type feature with the implicit/automatic converter feature. If we specify a class name like ClassWithStringConstructor instead of String, the configuration framework will use the appropriate way of converting.
config.getValue("pets", ClassWithStringConstructor[].class)

or

@Inject
@ConfigProperty(name = "pets")
private List<ClassWithStringConstructor> pets;

Atbash Config

As you could read in a previous blog, Atbash Config is a Java 7 port of Geronimo Config. The support for the implicit/automatic converters and collections (like the arrays, List, and Set) is taken from there.
Atbash Config itself is also updated so that the YAML usage supports the collection type parameters. The example above with the pets can be written in the equivalent YAML format as
pets: [dog, cat]

The release 0.9.1 also contains some other small features, which are listed here. The most notable features are

– Improved Class converters based on different classloaders.
– Test artefact.
– Logging in Java SE environment
– Improved information in logging with @ModuleConfigName
– Compatibility with another MicroProfile Config implementation on Java 8.

Conclusion

With the addition of the implicit/automatic converters, the need for writing custom converters can be dropped and conversion will take place by convention.
The Atbash Config version 0.9.1 is updated to incorporate these features together with some other small improvements and fixes.
Have fun.

Programmatic CDI access and Unit testing

Introduction

There are situations where you need to access the CDI system programmatically, instead of using annotations like @Inject:

  • You want to retrieve a CDI bean within a context (class) which isn’t CDI-aware.
  • It is easier to retrieve an optional or multiple beans programmatically instead of using Instance.

Since CDI 1.1 (Java EE 7), this can be easily achieved by using

CDI.current();

The programmatic access is most useful if you are creating a library/framework or some reusable piece of functionality. But I guess everyone who creates more than 1 application is in the situation where things can be reused.

This blog shows the equivalent programmatic constructs but, most importantly, how you can unit test your beans in those cases (no CDI container needed!).

Optional CDI bean

An optional bean is interesting in the situation where you define some basic functionality but in case the developer defines a CDI bean implementing the interface, that custom developer logic is executed.

public interface SomeInterface {}
// with classic injection
@Inject
private Instance<SomeInterface> optional;

   // Within a method
   if (!optional.isUnsatisfied()) {
      optional.get();
   }
// programmatic
// Within a method
Instance<SomeInterface> instance = CDI.current().select(SomeInterface.class);
SomeInterface bean = instance.isUnsatisfied() ? null : instance.get();

Multiple beans

A Chain of Responsibility pattern, or the situation where you can have multiple add-ons which perform some logic in a certain situation, are ideal cases for having multiple CDI beans implementing an interface.

// with classic injection
@Inject
private Instance<SomeInterface> addonInstances;

   // Within a method
   List<SomeInterface> addons = new ArrayList<>();
   addonInstances.iterator().forEachRemaining(
      addons::add
   );

// programmatic
// Within a method
List<SomeInterface> addons = new ArrayList<>();
for (SomeInterface addon : CDI.current().select(SomeInterface.class)) {
   addons.add(addon);
}

Sending Events

One of the nice features of CDI is the ability to send events between producers and consumers, without the need to register the consumers with the producer, for example.

Producing the Event can be done using

// with classic injection
@Inject
private Event<Payload> payloadEvent;

   // Within a method
   payloadEvent.fire(new Payload());
// programmatic
// within a method
CDI.current().getBeanManager().fireEvent(new Payload());
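For completeness: the consumer side is identical for both variants. Any CDI bean can receive the event with an @Observes parameter, no matter how it was fired.

@ApplicationScoped
public class PayloadConsumer {

   // Called automatically for every Payload event that is fired.
   public void onPayload(@Observes Payload payload) {
      // React to the event.
   }
}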

Unit Testing

Now that you have used some CDI bean retrievals in a programmatic fashion, what happens when you execute a unit test?

java.lang.IllegalStateException: Unable to access CDI

When running with plain Java, without any CDI container, you get this message because the API doesn’t contain an implementation, only the specification of the behavior.

You can of course switch to integration testing, where you launch a proper CDI container. But you can also stay with your basic unit testing and use a simple, minimalistic CDI container implementation (which is, by the way, not very difficult to create).

Suppose we have following code

public SomeInterface readOptionalInstance() {
   Instance<SomeInterface> instance = CDI.current().select(SomeInterface.class);
   return instance.isUnsatisfied() ? null : instance.get();
}

And we would like to test this piece of code, when the optional bean is not defined, with something like this

@Test
public void readOptionalInstance_NotPresent() {
   Assert.assertNull(viewBean.readOptionalInstance());
}

We need a simple CDI implementation. This is provided by the be.atbash.util.BeanManagerFake class, available in the tests artifact of atbash utils-cdi.

<dependency>
   <groupId>be.atbash.utils</groupId>
   <artifactId>utils-cdi</artifactId>
   <version>0.9.0</version>
   <classifier>tests</classifier>
   <scope>test</scope>
</dependency>

<dependency>
   <groupId>org.mockito</groupId>
   <artifactId>mockito-core</artifactId>
   <version>2.13.0</version>
   <scope>test</scope>
</dependency>

The fake CDI container (fake as in ‘not a real implementation’; maybe not the correct term within the test concepts Dummy/Fake/Stub/Spy) uses Mockito underneath, so that is also a dependency we need to add to the project.

The test can then be adapted as follows

private BeanManagerFake beanManagerFake = new BeanManagerFake();

@After
public void cleanup() {
   beanManagerFake.deregistration();
}

@Test
public void readOptionalInstance_NotPresent() {
   beanManagerFake.endRegistration();
   Assert.assertNull(viewBean.readOptionalInstance());
}

We can instantiate the BeanManagerFake as a regular bean. Important here is the deregistration method after the test has finished. The idea is to keep the fake CDI container alive between tests and only clear its contents. In the current version, the container is also unset, which makes it necessary that the beanManagerFake variable is an instance variable (and not a class variable).

With the endRegistration method, we signal that all configuration of the CDI container has taken place and that the operational modus can start. It can then hand out instances of interfaces if needed, as in our test example.

Here we didn’t register anything, so we get a null back.

This is the updated test class, which also contains a test for the case where we specify an instance in the CDI container.

@RunWith(MockitoJUnitRunner.class)
public class ViewBeanTest {

   private ViewBean viewBean = new ViewBean();

   @Mock
   private SomeInterface mockClass;

   private BeanManagerFake beanManagerFake = new BeanManagerFake();

   @After
   public void cleanup() {
      beanManagerFake.deregistration();
   }

   @Test
   public void readOptionalInstance_NotPresent() {
      beanManagerFake.endRegistration();
      Assert.assertNull(viewBean.readOptionalInstance());
   }

   @Test
   public void readOptionalInstance_Present() {
      beanManagerFake.registerBean(mockClass, SomeInterface.class);
      beanManagerFake.endRegistration();
      Assert.assertNotNull(viewBean.readOptionalInstance());
      Assert.assertTrue(viewBean.readOptionalInstance() == mockClass);
   }
}

You can see that we can define a mock (with the @Mock of Mockito) as a bean within the CDI container with the registerBean method. We supply the instance and the type to which this instance must be bound. Within CDI, an instance can be bound to multiple types (like the class itself, the interface, Object.class, etc.). This is also possible with BeanManagerFake, as the second parameter of the registerBean method is a varargs with all the class values.

And if you run the above tests, you will see they are all green, so we do in fact get the exact instance of the CDI bean (here the mock) back as the return value of the method.

Of course, when you have very complex CDI constructions, it is better to test them in a real CDI container. But if the focus is on the business logic, the simple CDI container for testing will do.

Code

The code of BeanManagerFake is available in the Atbash utils GitHub repository and artifacts are on Maven Central.

Conclusion

Using the programmatic interfaces of the CDI container is easier in those cases where you need to retrieve optional or multiple CDI beans, and of course in contexts where dependency injection isn’t available. And with the help of the BeanManagerFake, a simple CDI container for testing, unit tests are possible, which is easier than setting up a full-blown integration test with a container.

Have fun.

Key derivation functions

Introduction

Although initially intended for creating better secret keys, key derivation functions are probably also the better choice for storing hashed passwords. This post tells you more about what a key derivation function does, how you can use it for storing hashed passwords and how you can use it in the Octopus framework.

What is it?

For symmetric encryption you need a secret key, basically an array of bytes. A password can also be converted to a byte array, so a password can be used. Then the key doesn’t need to be stored, and when the user types in the password, the message can be encrypted or decrypted.

But of course, passwords are not really a good candidate as the key for encryption:
– passwords consist of characters which all have more or less the same bit pattern, so the ‘random’ spread of bits throughout the byte array is not good.
– passwords are mostly short, so your key is probably too small.

So that is where the key derivation functions come into the picture. They take a password or passphrase (a sentence used as a password) and can generate a byte array out of it. Such a function has the following characteristics:
– It needs a salt, a byte array, to be able to do its work. So salting is mandatory.
– When the same password/passphrase and salt are presented to the key derivation function, it results in the same byte array outcome (so it is deterministic, no random outcome).
– It is a one-way procedure. So from the output bits, the original password/passphrase can’t be reconstructed.
– The output, the number of bits generated, is configurable (important because key sizes for encryption must have some minimum lengths).
– The functions are intentionally slow to counter any brute-force attack. And iterations can be applied to make it even more difficult.

You can read more background info on this wiki page

For hashed passwords

In the previous section, you can read why those Key derivation functions are very good for generating a secret key (for symmetric encryption). The properties are designed specifically for that purpose.

But very rapidly, people saw an alternative use for these functions: creating hashed passwords. The reasons why are easy to see
– When offered the same input, the same output is generated
– The input can’t be reconstructed from the output.

But they have a few other properties which make them a better candidate for storing passwords than the classic hashing functions.
– The computation requires a salt, so generating an output byte array is not possible without it. This in contrast to a classic hash algorithm, where we add the salt to the password to overcome the use of rainbow tables.
– The computation of key derivation functions is slow, while that of hash algorithms is fast. And performing the hashing function multiple times is not a good idea because of the increase in collisions.

Algorithms

So we are now at the point that we can say that key derivation functions are better for storing hashed passwords. So how can we get started with them in our Java programs?

There are different algorithms developed (just like there are different hash algorithms) like Argon2, scrypt, bcrypt or PBKDF2.
Only one is available by default on the JVM, PBKDF2, although most people say Argon2 is the best algorithm. So let us see how we can use that one on the JVM.

Since key derivation functions are something completely different than hash algorithms, they aren’t available from the java.security.MessageDigest class. But there is another standard Java class which gives you an instance of the algorithm: javax.crypto.SecretKeyFactory.

The following snippet generates the Base64-encoded output for a certain password. It also generates a salt value.

String password = "atbash";

String keyAlgorithmName = "PBKDF2WithHmacSHA256";
SecretKeyFactory keyFactory;
keyFactory = SecretKeyFactory.getInstance(keyAlgorithmName);

char[] chars = password.toCharArray();

byte[] salt = new byte[32];
new SecureRandom().nextBytes(salt);

int hashIterations = 1024;

int keySizeBytes = 32;

byte[] encoded = keyFactory.generateSecret(
        new PBEKeySpec(chars, salt, hashIterations, keySizeBytes * 8)).getEncoded();

String hashed = Base64.getEncoder().encodeToString(encoded);

System.out.println(hashed);
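Verifying a password at login time is then just a matter of repeating the computation with the salt and iteration count that were stored next to the hash, and comparing the results in constant time. A sketch (enteredPassword is assumed to be the value typed by the user):

// Recompute the derived key with the stored salt and iteration count.
byte[] candidate = keyFactory.generateSecret(
        new PBEKeySpec(enteredPassword.toCharArray(), salt, hashIterations, keySizeBytes * 8)).getEncoded();

// java.security.MessageDigest.isEqual performs a time-constant comparison.
boolean matches = MessageDigest.isEqual(encoded, candidate);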

So using them is quite easy.

Octopus support

Also within Octopus, support for the PBKDF2 algorithm was added in version 0.9.7.1. And since the way you use it is so similar to the classic hash algorithms, the only thing you need to do is set the algorithm name in the configuration file.

hashAlgorithmName=PBKDF2WithHmacSHA256

See also the chapter in the Octopus Cookbook.

Conclusion

Key derivation functions are designed for creating a key, based on a password, which can be used in symmetric encryption algorithms.

But it turns out that they are also very well suited to store hashed passwords. So try them out and use them in your next project, with or without the Octopus security framework.

Have fun.

 

Java EE Security API integration with Octopus

Introduction

The goal of the Java EE Security API (JSR-375) is to enhance the security features of the Java EE platform and to make sure that no custom application server configuration needs to be updated anymore when you want to use, for example, hashed passwords stored in a database table.
It was released together with Java EE 8 in September 2017, and one of the main concepts it defines is the IdentityStore, which validates the user credentials (like username and password) and retrieves the groups (the authorization grants) the user has.

The Octopus framework is mainly targeted at authorization features, like declarative permissions usable for JSF components (with a custom tag) or annotations (to protect EJB methods, for example). There are many authentication integrations available, like OAuth2, OpenID Connect, Keycloak, CAS server and custom integrations for database usage, to name a few.

Since v0.9.7.1, released on 27 December 2017, it is possible to integrate the IdentityStores from JSR-375 within Octopus and, best of all, it also runs on Java EE 7.

The demo code can be found in the Octopus demo repository.

Project setup

The JSR-375 integration is defined within a Maven artifact specifically created for this purpose,

<dependency>
    <groupId>be.c4j.ee.security.octopus.authentication</groupId>
    <artifactId>security-api</artifactId>
    <version>${octopus.version}</version>
</dependency>

Together with the dependency for JSF on a Java EE 7 server,

<dependency>
    <groupId>be.c4j.ee.security.octopus</groupId>
    <artifactId>octopus-javaee7-jsf</artifactId>
    <version>${octopus.version}</version>
</dependency>

they can be found in the Bintray repository

<repository>
    <id>Bintray_JCenter</id>
    <url>https://jcenter.bintray.com</url>
</repository>

These are the other dependencies which we need

  • Java EE 7 (Provided) all the features of the Java EE 7 platform
  • PrimeFaces (Compile) The JSF Component library
  • Deltaspike (Compile/Runtime) Required dependency of Octopus but set as provided within Octopus so it is easier to specify the version you want to use within your application
  • Soteria (Compile) JSR-375 implementation and required since we are using a Java EE 7 server.
  • H2 database (Runtime) Our database where we want to store hashed passwords.

Security config

In this example, I want to show you the usage of a JSR-375 defined IdentityStore and a custom one.

The default defined IdentityStore is the Database one which stores the passwords hashed. It can be configured by putting an annotation on a CDI bean.

@DatabaseIdentityStoreDefinition(
        dataSourceLookup = "java:global/MyDS",
        callerQuery = "select password from caller where name = ?",
        groupsQuery = "select group_name from caller_groups where caller_name = ?",
        hashAlgorithmParameters = {
                "Pbkdf2PasswordHash.Iterations=3072",
                "Pbkdf2PasswordHash.Algorithm=PBKDF2WithHmacSHA512",
                "Pbkdf2PasswordHash.SaltSizeBytes=64"

        }
)

The datasource is defined within the DatabaseSetup class, which ensures the correct tables are created and filled with a few credentials.

This @DatabaseIdentityStoreDefinition is placed, in our demo, on the CDI bean which defines the second IdentityStore, which is consulted when the database store doesn’t contain the username we specified.

This custom store is created by implementing the interface javax.security.enterprise.identitystore.IdentityStore and overriding the validate() method.

Here we check if it is a certain fixed user and if the correct password is supplied. If so, it returns a few groups. Of course, you can write any kind of logic in these custom stores when the default ones don’t fit your needs.
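A minimal sketch of such a custom store (the user name, password and groups are hard-coded here purely for illustration):

import java.util.Arrays;
import java.util.HashSet;

import javax.enterprise.context.ApplicationScoped;
import javax.security.enterprise.credential.Credential;
import javax.security.enterprise.credential.UsernamePasswordCredential;
import javax.security.enterprise.identitystore.CredentialValidationResult;
import javax.security.enterprise.identitystore.IdentityStore;

@ApplicationScoped
public class FixedUserIdentityStore implements IdentityStore {

   @Override
   public CredentialValidationResult validate(Credential credential) {
      if (!(credential instanceof UsernamePasswordCredential)) {
         return CredentialValidationResult.NOT_VALIDATED_RESULT;
      }
      UsernamePasswordCredential usernamePassword = (UsernamePasswordCredential) credential;
      if (usernamePassword.compareTo("demo", "secret")) {
         // Known user with the correct password; return his groups.
         return new CredentialValidationResult("demo",
               new HashSet<>(Arrays.asList("user", "manager")));
      }
      return CredentialValidationResult.INVALID_RESULT;
   }
}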

Authorization

JSR-375, and Java EE in general, expects that each user has some groups defined in the external system, which are then converted into roles of your application.

Octopus can also work with roles, but one should use permissions because they are far more powerful: it has support for named, wildcard and domain permissions.

The groups of the user are interpreted as follows:
– The group is converted to a role with the same name (for the case you want to work with roles within Octopus)
– The group is converted to a named permission.
– The group is converted to a set of permissions by a CDI bean implementing the RolePermissionResolver which needs to be supplied by the developer.

This last option is probably the best and most powerful way to convert a group to a set of permissions and thus getting the maximum out of the features of Octopus.

No full integration

The solution described here integrates the IdentityStores of Java EE 8 (JSR-375) into the Octopus framework. It does not use JASPIC as the underlying mechanism as JSR-375 describes, nor is there any integration with the other security features of Java EE, like @RolesAllowed.

Java EE 8

And your application, like the demo, will be ready to convert to Java EE 8 with a minimal amount of effort. The only thing you need to do is take the Java EE 8 dependency (instead of the Java EE 7 one) and remove the Soteria dependency.

There will be no other changes required to make your application run on Java EE 8 and make use of all the features in that latest release.

Conclusion

With the Octopus framework, you already have the most powerful authorization features available today. And by using it, you are preparing yourself for the future, when Java EE also has the authorization features available which are planned within the Java EE Security API.

Have fun.

The advantages of the monolith

Introduction

There are various possible ‘architectures’ to use in your application: monoliths, micro-services and self-contained systems, to name a few. When something is a hype, one can easily forget that there is no silver bullet and that we should use the best solution for the problem at hand.

And these days, a lot of people forget that monoliths are powerful when used correctly, in certain use cases.

So let’s have a look at the simplicity, easy integration and consistency aspects of the monolith.

Simplicity

Monoliths are simple in structure. You just have one source code repository, which is compiled and packaged with one command to produce the single artifact which you need to deploy.

So for a new team member, getting up and running is easy and fast. Get the link to the source code, download and package it, and run it on the local machine. Even if you need an application server, it can be as simple as executing a script which sets it up correctly.

This is in contrast with micro-services, for instance. You have several code repositories, different packaging systems because you have different development languages and frameworks, and additional instances for the configuration service, the API gateway, the discovery service and so on.

Easy / fast integration

In a bigger monolith, the integration between different parts is easy and efficient.

Since everything is running within the same JVM, integration is as simple as calling another Java method. There is no need for the data to be converted to JSON, for example. So it is faster and you avoid the pitfalls of conversion which can happen in some corner cases.

And since everything is running in the same process, you don’t have to take into account the fallacies of distributed computing, which are forgotten by developers anyway. So no issues with network failures, latency, bandwidth limitations, changing topology and all those pitfalls which arise when your application is distributed between different machines.

Consistent

The consistency of your UI is very important for your end users. Each screen must be built in a similar manner, with the same look and feel and layout. When the different parts of your application look or behave differently, it is much more difficult for the end users to use it, which makes them inefficient (and probably they will no longer use it).

With your monolith, you are using a single code language and the same framework for generating the UI. So with a bit of discipline from the developers (and a good UI/UX design), it is fairly easy to create uniform screens.

In the case of micro-services, where you maybe have different frameworks for the UI for different parts of the application, it will be very difficult to achieve this uniformity.

Or even worse, you build a giant monolithic SPA on top of your fine-grained micro-services, which breaks an important concept of the architectural vision: the independently deployable artifacts.

Monolith as the start

Since it is so easy to get started with a monolith, every application should begin as a monolith. You have your first production artifact ready much faster.

In the following phases, you can gradually split it up and evolve into a distributed system if your use cases mandate it.

Just be aware of your package structure within the monolith application. Make sure there is already a clean separation so that the extraction of the different pieces can be smooth.

Conclusion

The advantages of the monolith are often forgotten these days. However, it has all the features that can give you a quick first success with your application, and the time to market is in many cases shorter due to its simplicity and easy integration. And if you carefully pay attention to the structure of your code, you can even create a large monolith successfully.

Have fun.

First release of Atbash Configuration project

Introduction

Although configuration is important and essential in any application, there is no real standardization available today.

There were several attempts to create a standard around it so that each application could read its parameters in a consistent way. But as of today, there is no final standardization, neither within Java SE, nor within Java EE.

Since it is so important, the Eclipse MicroProfile project created a specification for reading parameters in their micro-services architecture. And it was almost the first thing they did (which highlights the importance they gave it). That work is now continued under the wings of the JCP (as JSR-382) so that it can become a real standard within Java SE and Java EE.

Instead of using other frameworks and libraries which have support for reading configuration, I wanted to switch to the MicroProfile Config project, because I wanted to be prepared for the future. But I encountered a few issues.

Short version

I adapted the code of the MicroProfile Config API and Apache Geronimo Config to Java 7 and added my own extensions, which together compose Atbash Config.

The Maven artefacts are available on Maven Central and the user manual is here.

My issues with MicroProfile Config project

‘Issue’ is a heavy term; my requirements are not (yet) covered by the Config project, but the good thing is that they are planned within the JCP specification work.

I want to read configuration parameters

  1. With Java 7
  2. For any Java EE project, not limited to Eclipse MicroProfile applications.
  3. Use a single artefact which contains the configuration parameters for different environments embedded in it (within different files)
  4. Use YAML structured configuration files.

And that is not possible for the moment because

  • The MicroProfile Config project is built using Java 8 code structures, so it is not possible to run it in a Java 7 environment.
  • The Apache Geronimo Configuration project implements the API and is a very good candidate to use in my Java EE programs, because it is created as a standalone project (not as part of a server like other implementations such as Payara, KumuluzEE, WebSphere Liberty or others) and thus you can add this dependency to any Java EE application. But the configuration file name is still tied to MicroProfile (/META-INF/microprofile-config.properties).
  • The MicroProfile Config project has support for overruling properties by specifying them in the environment or as JVM system parameters. But there is no feature to define multiple files, nor the ability to embed multiple files within the artefact, one for each environment.
  • Only properties files are supported. And when you have larger projects, it can be handy to use a YAML structure so that it is clearer which properties are closely related.

Solution

Adapt MicroProfile Config API and Apache Geronimo Configuration.

Except for the first issue, support for Java 7, my own configuration extension could be a solution for my requirements. So I had no other choice than to copy the code and adapt it. And although the code is licensed under the Apache open-source license (which allows me to do this), it doesn’t feel right. So I want to explicitly thank all those committers that created the code for their wonderful work, and I also made an explicit reference within the notice file and the documentation.

Atbash configuration extension

The extension consists of a custom implementation of the ConfigSource interface. That way I’m able to register another source for reading configuration parameters and fulfill my other three requirements.

– Implement the BaseConfigurationName interface and register the class with the ServiceLoader mechanism. Now I’m able to use any file name like demo.properties or demo.yaml when the value demo is specified as the base name.
– By specifying the environment with the JVM system parameter -S, I can add a prefix to the file name and supply additional or overriding parameters for a certain environment.
– YAML file support is implemented with the use of SnakeYAML, where the file is converted to key-value pairs.
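As an illustration, a hypothetical demo.yaml with this content

database:
   url: "jdbc:h2:mem:demo"
   user: "sa"

is read as the same key-value pairs you would otherwise define in demo.properties:

database.url = jdbc:h2:mem:demo
database.user = sa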

For more information, you can read the manual.

Getting started

There are three ways you can use the Maven artefacts

1. Use plain MicroProfile config with Java 7.

Add the adapted Geronimo config code to your project

<dependency>
   <groupId>be.atbash.config</groupId>
   <artifactId>geronimo-config</artifactId>
   <version>0.9</version>
</dependency>

2. Use the Atbash configuration extension with Java 7

You need both the Geronimo Config and the Atbash Config artefacts

<dependency>
   <groupId>be.atbash.config</groupId>
   <artifactId>geronimo-config</artifactId>
   <version>0.9</version>
</dependency>

<dependency>
   <groupId>be.atbash.config</groupId>
   <artifactId>atbash-config</artifactId>
   <version>0.9</version>
</dependency>

3. Use the Atbash configuration extension with Java 8

In this scenario, you don’t need the adapted code and can use the regular implementations

<dependency>
   <groupId>org.apache.geronimo.config</groupId>
   <artifactId>geronimo-config-impl</artifactId>
   <version>1.0</version>
</dependency>

<dependency>
   <groupId>be.atbash.config</groupId>
   <artifactId>atbash-config</artifactId>
   <version>0.9</version>
   <exclusions>
      <exclusion>
         <groupId>be.atbash.config</groupId>
         <artifactId>microprofile-config-api</artifactId>
      </exclusion>
   </exclusions>
</dependency>

Conclusion

MicroProfile Config is the first real standardization around reading parameters for configuration. They have built their implementations on top of Java 8.

So when you need a generally usable library with Java EE 7 (which is using Java 7 as a base version), you can use the Atbash Config code (adapted from the MicroProfile Config API, with Apache Geronimo Config as implementation), which has a few handy extensions.

This makes you future proof at the time JSR-382 will be adopted by the different application servers.

Thank you for reading until the bottom of this blog entry, and have fun.

Release of JSR-375 extension for JWT based authentication/authorization with JAX-RS

Introduction

Since the release of the Java EE Security API (JSR-375) specification, there is an easy, uniform API to define authentication and, to some extent, authorization.

With this specification in place, there is no need anymore to change application server specific configuration files within any Java EE 8 compliant server.

Another good thing is that Soteria, the reference implementation (RI) of this JSR-375 specification, can also be used on most of the Java EE 7 servers.

The downside is that only the ‘classic’ mechanisms using Basic, Digest and Form authentication with LDAP or database based validation are standardized.
The reasons for this were time (the specification was under time pressure since it needed to be included in Java EE 8) and the usage of external libraries (and license issues).

The usage of tokens is not standardized, but support for it can be created since the API is already available within the specification.

JWT based extension

The above observations led me to create such an extension, which is able to retrieve a JWT from the header within a JAX-RS call and authenticate/authorize the ‘remote user’ using the JSR-375 defined APIs.

If you want to skip the (longer) explanation in this blog entry, you can go straight to the GitHub repository with the compact instructions to get started.

When you are not familiar with JSON Web Tokens (JWT), have a look at the site jwt.io which explains it to you very nicely.

Since the JWT specification only defines the ‘structure’, as a developer, you are free to specify the content of the JWT Payload.

This variability leads to the following implemented flow:

– The extension is responsible for retrieving the token defined within the authorization header (through a custom HttpAuthenticationMechanism).
– Control is handed over to a JWTTokenHandler, which needs to be implemented by the developer to verify if the JWT is valid (signature, timings, … check) and to extract the authentication and authorization information.
– This information is communicated to the low-level layers of the security API so that the ‘request’ is now considered authenticated and roles can be verified.

JWT payload structure

The JWT token is expected as part of the authorization header with the bearer marker, as in the following example:

Authorization:Bearer eyJraWQiOiI3NGYyNDYxOS1kNzIwLTQ1YWQtOTk2Yy0zNWNkNzNiNDdmZmQiLCJ0eXAiOiJKV1QiLCJhbGciOiJSUzUxMiJ9.eyJzdWIiOiJTb3RlcmlhIFJJIiwicmVhbG1fYWNjZXNzIjp7InJvbGVzIjpbInVzZXIiLCJtYW5hZ2VyIl19LCJleHAiOjE1MTAzNDgxMTMsImlhdCI6MTUxMDM0ODA4M30.Bk37fPLgymuLZfLq_hdxt94PDRwNAkPkoajegaLLMrsuCCeKEb5DRXcpT9kUyrwEzSFamg_19Y7IT-0utDw4yd_rEzq7lIG8UFc8qnoYi9692Es_sqzwu64x0dM1ODOAHYEPFhIXPo2-nxquYyayMJI5PN4WlTPrRgoFCkY6saxJGzAGlfQIqH3ozaMvEJn1GK3uj1_zglv2HHK1t8lliazRkmRI-p1k9A_HCnnpChb9czpUP_wXRN4HxbADldS5sM7lgsFeLZDB4oewAh9sTXiseH7HnbPfmQKF18vb9Vejc9XzIGf0CrSOf6yVTyYDChYhI1eB5z6Wv33ofFgbPg

If you decode the payload from the above example (yes, the payload is readable, but tampering is detected thanks to the signature) you have the following information:

{"sub":"Soteria RI",
"realm_access":{"roles":["user","manager"]},
"exp":1510348113,
"iat":1510348083
}

The sub claim defines the authentication information (who is it), realm_access defines the groups/roles the ‘user’ has (what is he allowed to do), and exp and iat can be used to have a token which expires, so that replay attacks are excluded.
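You can inspect such a payload yourself with the standard java.util.Base64 class; the payload is the middle, dot-separated part of the token (token below is assumed to hold the header value without the Bearer prefix):

// Needs java.util.Base64 and java.nio.charset.StandardCharsets.
String[] parts = token.split("\\.");
String payloadJson = new String(Base64.getUrlDecoder().decode(parts[1]), StandardCharsets.UTF_8);
System.out.println(payloadJson);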

As I already mentioned, the payload structure is not defined by JWT, nor by Java EE Security API (as tokens are not tackled yet).

But another organization, Eclipse MicroProfile, also created their specification around authentication within micro-services. Have a look at this post which goes deeper into the subject.

So why not use their recommended claims, as they will probably become the (de facto) standard.

The example in the GitHub repository implements such an MP JWT payload (also using RSA keys as mentioned within the specification).

Setup

Have a look at the readme for all the details on how you can use the extension.

In short, this is all that you need to do

Add the dependency

You need to add the Maven dependency to include the code for the JWT extension for JSR-375

<dependency>
   <groupId>be.atbash.ee.security.jsr375</groupId>
   <artifactId>soteria-jwt</artifactId>
   <version>0.9</version>
</dependency>

Implement the be.atbash.ee.security.soteria.jwt.JWTTokenHandler.

It is responsible for loading the RSA keys and verifying if the JWT is valid (signing and time restrictions, for example).
The implementation is also responsible for retrieving the user name and the groups from the JWT payload and supplying them to the system. They are used to construct the required principal and role info.

Conclusion

The usage of tokens, like JWT and OAuth2 tokens, is not included in the Java EE Security API specification for the moment. But the interfaces are already in place to extend the system in any way you like. Those are used to create this extension, so that you can start using tokens already today, on Java EE 8 and Java EE 7!

Have fun.

How small are micro-services?

Introduction

Sometimes you see heated discussions about the size of a micro-service. After all, they should be small, as you can read from the definition. But small is a subjective concept and, in the end, size doesn’t have anything to do with micro-services.

This is part of my monolith, micro-service and self-contained systems comparison blog series.

Single responsibility or Domain Driven Design (DDD)

Some people are so focused on the size (lines of code or number of classes) of their micro-service that they split up a micro-service or create some kind of strange construct.

But the single responsibility principle of a micro-service is much more important. The services are created to handle the logic of a domain.

In an online webshop, you could have some micro-services related to order, product or payment management, for example. And when some part of your application needs the logic of that domain, it can call that micro-service.

Pizza someone?

Other people try to control the size of the micro-service by defining the number of people who are working in the team creating it.

Since each micro-service is ‘owned’ by a team, they define the team size as the number of people who have enough with 2 pizzas. The so-called pizza team.

But if you have ‘big eaters’, does your team then have 3 persons: 1 analyst, 1 developer and 1 tester? The pizza team as a metric for the size of a micro-service doesn’t make any sense.

How small is small?

Well, as small or big as needed to create the logic around the domain for which the micro-service is designed. And if it happens to be 1 million lines of code, so be it. OK, in that case you should try to split your micro-service into different parts, each related to a subdomain. But in a sense, there is nothing wrong with bigger micro-services.

The lines of code are not an issue as long as they aren’t blocking the continuous delivery to production. When it tends to become a monolith, where the development of a feature blocks the release of the micro-service to production, then you have an issue.

Conclusion

Forget about the lines of code or the number of people in the team which is creating the micro-service as a metric. That is not a good way to handle your micro-service architecture.

The main reason why you should do micro-services is that you have separate services, one for each domain of your application. And keep in mind that you should be able to go to production at any time with your micro-service. Don’t let a big feature block you from releasing some small updates you have done in your code.

Have fun.

Session Hijacking protection with Octopus framework

Session Hijacking

What is Session Hijacking? Most web applications store information of the user (the classic example is the shopping basket) in the HTTP session, but also the basic user information. So, whenever someone else knows the ID of the session, he can impersonate the victim and act on his/her behalf.

The following image shows the attack (image by A. Mitra)

Octopus protects

But there exist a few techniques to identify whether such an attack is in progress for a certain session. And such a technique is implemented within the Octopus framework (from v0.9.7 on).

You don’t have to do anything specific to activate this feature; it is on by default.

When such an attack is detected, the attacker gets a blank page with the message that access was denied due to a Session Hijacking attack detection. But the actual victim can also be informed of the attempt to take over his/her session.
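The actual Octopus implementation is more elaborate, but the basic idea behind such a detection can be sketched with a servlet filter that remembers a fingerprint of the client (here simply the User-Agent header, a deliberate oversimplification) and rejects requests where that fingerprint suddenly changes:

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustrative sketch only, not the actual Octopus implementation.
@WebFilter("/*")
public class SessionFingerprintFilter implements Filter {

   @Override
   public void init(FilterConfig filterConfig) {
   }

   @Override
   public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
         throws IOException, ServletException {
      HttpServletRequest httpRequest = (HttpServletRequest) request;

      String userAgent = httpRequest.getHeader("User-Agent");
      String known = (String) httpRequest.getSession().getAttribute("fingerprint");

      if (known == null) {
         // First request within this session: remember the fingerprint.
         httpRequest.getSession().setAttribute("fingerprint", userAgent);
      } else if (!known.equals(userAgent)) {
         // Fingerprint changed: possible Session Hijacking attack.
         ((HttpServletResponse) response).sendError(HttpServletResponse.SC_UNAUTHORIZED,
               "Session Hijacking attack detected");
         return;
      }
      chain.doFilter(request, response);
   }

   @Override
   public void destroy() {
   }
}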

See the following short video to see the feature in action.

Youtube video

The code used to create the video is on GitHub.

Have a look at the parameter session.hijacking.level to configure the feature in case you have issues or want to deactivate it.

Conclusion

With the Session Hijacking protection, Octopus tries to make your applications as safe as possible. It is only one of the security features it has. You also saw in the video that the session IDs are changed when the user logs on and off, just as OWASP recommends.

Have fun.