Dependency Injection

Dependency injection is one of the most important design patterns and is a key principle for a modular and component-based architecture. The Java standard for dependency injection is javax.inject (JSR 330), which we use in combination with JSR 250. Additionally, for scoping you can use CDI (Contexts and Dependency Injection) from JSR 365.

There are many frameworks that support this standard, including all recent Java EE application servers. Therefore, in devonfw we rely on these open standards and can propagate patterns and code examples that work independently of the underlying frameworks.

Key Principles

Within dependency injection a bean is typically a reusable unit of your application providing an encapsulated functionality. This bean can be injected into other beans and it should in general be replaceable. As an example we can think of a use-case, a repository, etc. As best practice we use the following principles:

  • Stateless implementation
    By default such beans shall be implemented stateless. If you store state information in member variables you can easily run into concurrency problems and nasty bugs. This is easy to avoid by using local variables and separate state classes for complex state-information. Try to avoid stateful beans wherever possible. Only add state if you are fully aware of what you are doing and properly document this as a warning in your JavaDoc.

  • Usage of Java standards
    We use common standards (see above) that make our code portable. Therefore we use standardized annotations like @Inject (javax.inject.Inject) instead of proprietary annotations such as @Autowired. Generally we avoid proprietary annotations in business code (logic layer).

  • Simple injection-style
    In general you can choose between constructor, setter, or field injection. For simplicity we recommend private field injection as it is very compact and easy to maintain. We believe that constructor injection is bad for maintenance, especially in case of inheritance (if you change the dependencies, you need to refactor all sub-classes). Private field injection and public setter injection are very similar, but setter injection is much more verbose (often you are even forced to have JavaDoc for all public methods). If you are writing re-usable library code, setter injection can make sense as it is more flexible. In a business application you typically do not need that and can save a lot of boiler-plate code if you use private field injection instead. Nowadays you use container infrastructure also for your tests (see testing), so there is no need to inject manually (which would require a public setter).

  • KISS
    To follow the KISS (keep it small and simple) principle we avoid advanced features (e.g. custom AOP, non-singleton beans) and only use them where necessary.

  • Separation of API and implementation
    For important components we should separate a self-contained API documented with JavaDoc from its implementation. Code from other components that wants to use the implementation shall only rely on the API. However, for things that will never be exchanged, no separate API interface is required and you can skip such separation.
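The stateless-implementation principle above can be sketched with a minimal, hypothetical example (class and method names are invented for this illustration):

```java
// Hypothetical sketch of the stateless principle: all per-call state is kept in
// local variables, so concurrent callers cannot corrupt each other's data.
class PriceCalculator {

  // BAD (stateful): a member variable like "private double total;" would be
  // shared between concurrent callers and cause race conditions.

  // GOOD (stateless): the running total is a local variable of the method.
  double sum(double[] prices) {
    double total = 0; // local state, safe under concurrency
    for (double price : prices) {
      total += price;
    }
    return total;
  }
}
```

Because the bean holds no mutable member state, a single shared instance (the default singleton scope) can safely serve concurrent requests.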

Example Bean

Here you can see the implementation of an example bean using dependency injection:

@ApplicationScoped
@Named("MyComponent")
public class MyComponentImpl implements MyComponent {

  @Inject
  private MyOtherComponent myOtherComponent;

  @PostConstruct
  public void init() {
    // initialization if required (otherwise omit this method)
  }

  @PreDestroy
  public void dispose() {
    // shutdown bean, free resources if required (otherwise omit this method)
  }
}

Here MyComponentImpl depends on MyOtherComponent, which is injected into the field myOtherComponent because of the @Inject annotation. To make this work, there must be exactly one bean in the container (e.g. spring or quarkus) that is an instance of MyOtherComponent. In order to put a bean into the container, we can use @ApplicationScoped in case of CDI (required for quarkus) for a stateless bean. In spring we can omit a CDI annotation and the @Named annotation is already sufficient, as a bean is stateless by default in spring. If we always use @ApplicationScoped, we make this more explicit and more portable across different frameworks. So in our example we put MyComponentImpl into the container. That bean will be called MyComponent as we specified in the @Named annotation, but we can also omit the name to use the classname as a fallback. Now our bean can be injected into other beans using the @Inject annotation, either via the MyComponent interface (recommended when an interface is present) or even directly via MyComponentImpl. In case you omit the interface, you should also omit the Impl suffix or instead use Bean as suffix.

Multiple bean implementations

In some cases you might have multiple implementations as beans for the same interface. The following sub-sections handle the different scenarios to give you guidance.

Only one implementation in container

In some cases you still have only one implementation active as a bean in the container at runtime. A typical example is that you have different implementations for test and main usage. This case is easy, as @Inject will always be unique. The only thing you need to care about is how to configure your framework (spring, quarkus, etc.) to know which implementation to put in the container depending on specific configuration. In spring this can be achieved via the proprietary @Profile annotation.

Injecting all of multiple implementations

In some situations you may have an interface that defines a kind of "plugin". You can have multiple implementations in your container and want to have all of them injected. Then you can request a list with all the bean implementations via the interface as in the following example:

  @Inject
  private List<MyConverter> converters;

Your code may iterate over all plugins (converters) and apply them sequentially. Please note that the injection will fail (at least in spring) when there is no bean available to inject. So you do not get an empty list injected but will get an exception on startup.
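The sequential application of such plugins can be sketched in plain Java (interface and class names follow the example above; the chain class and its wiring are hypothetical, since in a real application the list would be filled by the container via @Inject):

```java
import java.util.List;

// Hypothetical "plugin" interface with two implementations; in a container all
// implementations would be injected into the List via @Inject.
interface MyConverter {
  String convert(String input);
}

class TrimConverter implements MyConverter {
  public String convert(String input) {
    return input.trim();
  }
}

class UpperCaseConverter implements MyConverter {
  public String convert(String input) {
    return input.toUpperCase();
  }
}

class ConversionChain {

  // in a real bean this field would be annotated with @Inject
  private final List<MyConverter> converters;

  ConversionChain(List<MyConverter> converters) {
    this.converters = converters;
  }

  // applies all plugins sequentially, as described above
  String apply(String input) {
    String result = input;
    for (MyConverter converter : this.converters) {
      result = converter.convert(result);
    }
    return result;
  }
}
```

Note that the order of the injected list is generally not guaranteed unless you define an explicit ordering mechanism.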

Injecting one of multiple implementations

Another scenario is that you have multiple implementations coexisting in your container, but for injection you may want to choose a specific implementation. Here you could use the @Named annotation to specify a unique identifier for each implementation, which is called qualified injection:

@ApplicationScoped
@Named("UserAuthenticator")
public class UserAuthenticator implements Authenticator { ... }

@ApplicationScoped
@Named("ServiceAuthenticator")
public class ServiceAuthenticator implements Authenticator { ... }

public class MyUserComponent {

  @Inject
  @Named("UserAuthenticator")
  private Authenticator authenticator;
}

public class MyServiceComponent {

  @Inject
  @Named("ServiceAuthenticator")
  private Authenticator authenticator;
}

However, we discovered that this pattern is not so great: The identifiers in the @Named annotation are just strings that could easily break. You could use constants instead but still this is not the best solution.

In the end you can very much simplify this by just directly injecting the implementation instead:

@ApplicationScoped
public class UserAuthenticator implements Authenticator { ... }

@ApplicationScoped
public class ServiceAuthenticator implements Authenticator { ... }

public class MyUserComponent {

  @Inject
  private UserAuthenticator authenticator;
}

public class MyServiceComponent {

  @Inject
  private ServiceAuthenticator authenticator;
}

In case you want to strictly decouple from implementations, you can still create dedicated interfaces:

public interface UserAuthenticator extends Authenticator {}

@ApplicationScoped
public class UserAuthenticatorImpl implements UserAuthenticator { ... }

public interface ServiceAuthenticator extends Authenticator {}

@ApplicationScoped
public class ServiceAuthenticatorImpl implements ServiceAuthenticator { ... }

public class MyUserComponent {

  @Inject
  private UserAuthenticator authenticator;
}

public class MyServiceComponent {

  @Inject
  private ServiceAuthenticator authenticator;
}

However, as you can see, this again introduces additional boiler-plate code. While the principle to separate API and implementation and strictly decouple from implementations is valuable in general, you should always consider KISS, lean, and agile in contrast, and balance pros and cons instead of blindly following dogmas.


Here are the import statements for the most important annotations for dependency injection:

import javax.inject.Inject;
import javax.inject.Named;
import javax.enterprise.context.ApplicationScoped;
// import javax.enterprise.context.RequestScoped;
// import javax.enterprise.context.SessionScoped;
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;

Please note that with Jakarta EE the dependencies have changed. When you want to start with Jakarta EE, you should use these dependencies to get the annotations for dependency injection:

<!-- Basic injection annotations (JSR-330) -->
<!-- Basic lifecycle and security annotations (JSR-250)-->
<!-- Context and dependency injection API (JSR-365) -->

Please note that with quarkus you will get them as transitive dependencies out of the box. The above Jakarta EE dependencies replace these JEE dependencies:

<!-- Basic injection annotations (JSR-330) -->
<!-- Basic lifecycle and security annotations (JSR-250)-->
<!-- Context and dependency injection API (JSR-365) -->


Configuration

An application needs to be configurable in order to allow internal setup (like CDI) but also to allow externalized configuration of a deployed package (e.g. integration into the runtime environment). Using Spring Boot (must read: Spring Boot reference) we rely on a comprehensive configuration approach following a "convention over configuration" pattern. This guide adds to this with detailed instructions and best practices on how to deal with configurations.

In general we distinguish the following kinds of configuration that are explained in the following sections:

Internal Application Configuration

The application configuration contains all internal settings and wirings of the application (bean wiring, database mappings, etc.) and is maintained by the application developers at development time. There usually is a main configuration registered with the main Spring Boot App, but differing configurations to support automated tests of the application can be defined using profiles (not detailed in this guide).

Spring Boot Application

devonfw recommends using spring-boot to build web applications. For a complete documentation see the Spring Boot Reference Guide.

With spring-boot you provide a simple main class (also called starter class) like this, located in the package com.devonfw.mtsj.application:

@SpringBootApplication(exclude = { EndpointAutoConfiguration.class })
@EntityScan(basePackages = { "com.devonfw.mtsj.application" }, basePackageClasses = { AdvancedRevisionEntity.class })
@EnableGlobalMethodSecurity(jsr250Enabled = true)
@ComponentScan(basePackages = { "com.devonfw.mtsj.application.general", "com.devonfw.mtsj.application" })
public class SpringBootApp {

  /**
   * Entry point for the spring-boot based app.
   * @param args - arguments
   */
  public static void main(String[] args) {
    SpringApplication.run(SpringBootApp.class, args);
  }
}

In a devonfw application this main class is always located in the <basepackage> of the application package namespace (see package-conventions). This is because a spring boot application will automatically do a classpath scan for components (spring-beans) and entities in the package where the application main class is located, including all sub-packages. You can use the @ComponentScan and @EntityScan annotations to customize this behaviour.

If you want to map spring configuration properties into your custom code please see configuration mapping.

Standard beans configuration

For basic bean configuration we rely on spring boot, using mainly configuration classes and only occasionally XML configuration files. Some key principles to understand Spring Boot auto-configuration features:

  • Spring Boot auto-configuration attempts to automatically configure your Spring application based on the jar dependencies and annotated components found in your source code.

  • Auto-configuration is non-invasive. At any point you can start to define your own configuration to replace specific parts of the auto-configuration by redefining your identically named bean (see also the exclude attribute of @SpringBootApplication in the example code above).

Beans are configured via annotations in your java code (see dependency-injection).

For technical configuration you will typically write additional spring config classes annotated with @Configuration that provide bean implementations via methods annotated with @Bean. See spring @Bean documentation for further details. Like in XML you can also use @Import to make a @Configuration class include other configurations.

More specific configuration files (as required) reside in an adequately named subfolder of:


BeanMapper Configuration

In case you are still using dozer, you will find further details in bean-mapper configuration.

Security configuration

The abstract base class BaseWebSecurityConfig should be extended to configure web application security thoroughly. A basic and secure configuration is provided which can be overridden or extended by subclasses. Subclasses must use the @Profile annotation to further discriminate between beans used in production and testing scenarios. See the following example:

Listing 6. How to extend BaseWebSecurityConfig for Production and Test
public class TestWebSecurityConfig extends BaseWebSecurityConfig {...}

public class WebSecurityConfig extends BaseWebSecurityConfig {...}

WebSocket configuration

A websocket endpoint is configured within the business package as a Spring configuration class. The annotation @EnableWebSocketMessageBroker makes Spring Boot register this endpoint.

public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {
Database Configuration

To choose the database of your choice, set it in src/main/resources/config/. Also, all the active spring profiles have to be set in this file and not in any of the others.

Externalized Configuration

Externalized configuration is configuration that is provided separately from the deployment package and can be maintained undisturbed by re-deployments.

Environment Configuration

The environment configuration contains configuration parameters (typically port numbers, host names, passwords, logins, timeouts, certificates, etc.) specific for the different environments. These are under the control of the operators responsible for the application.

The environment configuration is maintained in files, defining various properties (see common application properties for a list of properties defined by the spring framework). These properties are explained in the corresponding configuration sections of the guides for each topic:

For a general understanding of how spring-boot is loading and bootstrapping your configuration, see spring-boot external configuration. The following properties files are used in every devonfw application:

  • src/main/resources/ providing a default configuration - bundled and deployed with the application package. It further acts as a template to derive a tailored minimal environment-specific configuration.

  • src/main/resources/config/ providing additional properties only used at development time (for all local deployment scenarios). This property file is excluded from all packaging.

  • src/test/resources/config/ providing additional properties only used for testing (JUnits based on spring test).

For other environments where the software gets deployed, such as test, acceptance, and production, you need to provide a tailored copy. The location depends on the deployment strategy:

  • standalone run-able Spring Boot App using embedded tomcat: config/ under the installation directory of the spring boot application.

  • dedicated tomcat (one tomcat per app): $CATALINA_BASE/lib/config/

  • tomcat serving a number of apps (requires expanding the wars): $CATALINA_BASE/webapps/<app>/WEB-INF/classes/config

In this file you only define the minimum properties that are environment specific and inherit everything else from the bundled src/main/resources/ In any case, make very sure that the classloader will find the file.

Make sure your properties are thoroughly documented by providing a comment to each property. This inline documentation is most valuable for your operating department.

Business Configuration

Often applications do not need business configuration. In case they do it should typically be editable by administrators via the GUI. The business configuration values should therefore be stored in the database in key/value pairs.

Therefore we suggest to create a dedicated table with (at least) the following columns:

  • ID

  • Property name

  • Property type (Boolean, Integer, String)

  • Property value

  • Description

According to the entries in this table, an administrative GUI may show a generic form to modify business configuration. Boolean values should be shown as checkboxes, integer and string values as text fields. The values should be validated according to their type so an error is raised if you try to save a string in an integer property for example.
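The type validation described above can be sketched in plain Java (the class and the string type identifiers are hypothetical, chosen to match the three property types listed):

```java
// Hypothetical sketch of type validation for business configuration values
// stored as strings in a key/value table.
class BusinessConfigValidator {

  static boolean isValid(String type, String value) {
    if (value == null) {
      return false;
    }
    switch (type) {
      case "Boolean":
        // only accept the two literal boolean values
        return value.equalsIgnoreCase("true") || value.equalsIgnoreCase("false");
      case "Integer":
        try {
          Integer.parseInt(value);
          return true;
        } catch (NumberFormatException e) {
          return false; // the GUI would raise an error here
        }
      case "String":
        return true; // any non-null string is acceptable
      default:
        return false; // unknown property type
    }
  }
}
```

A save operation in the administrative GUI would call such a check before persisting the value.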

We recommend the following base layout for the hierarchical business configuration:



Often you need to have passwords (for databases, third-party services, etc.) as part of your configuration. These are typically environment specific (see above). However, with DevOps and continuous-deployment you might be tempted to commit such configurations into your version-control (e.g. git). Doing that with plain text passwords is a severe problem, especially for production systems. Never do that! Instead, we offer some suggestions on how to deal with sensitive configurations:

Password Encryption

A simple but reasonable approach is to configure the passwords encrypted with a master-password. The master-password should be a strong secret that is specific for each environment. It must never be committed to version-control. In order to support encrypted passwords in spring-boot all you need to do is to add jasypt-spring-boot as dependency in your pom.xml (please check for recent version here):


This will smoothly integrate jasypt into your spring-boot application. Read this HOWTO to learn how to encrypt and decrypt passwords using jasypt.

Next, we give a simple example how to encrypt and configure a secret value. We use the algorithm PBEWITHHMACSHA512ANDAES_256 that provides strong encryption and is the default of jasypt-spring-boot-starter. However, different algorithms can be used if preferred (e.g. PBEWITHMD5ANDTRIPLEDES).

java -cp ${M2_REPO}/org/jasypt/jasypt/1.9.3/jasypt-1.9.3.jar org.jasypt.intf.cli.JasyptPBEStringEncryptionCLI password=masterpassword algorithm=PBEWITHHMACSHA512ANDAES_256 input=secret ivGeneratorClassName=org.jasypt.iv.RandomIvGenerator


----ENVIRONMENT----

Runtime: AdoptOpenJDK OpenJDK 64-Bit Server VM 11.0.5+10

----ARGUMENTS----

input: secret
password: masterpassword
ivGeneratorClassName: org.jasypt.iv.RandomIvGenerator

----OUTPUT----



Of course the master-password (masterpassword) and the actual password to encrypt (secret) are just examples. Please replace them with reasonably strong passwords for your environment. Further, if you are using devonfw-ide you can make your life much easier and just type:

devon jasypt encrypt

See jasypt commandlet for details.

Now the entire line after the OUTPUT block is your encrypted secret. It even contains some random salt so that multiple encryption invocations with the same parameters (ARGUMENTS) will produce a different OUTPUT.

The master-password can be configured on your target environment via the property jasypt.encryptor.password. As system properties given on the command-line are visible in the process list, we recommend to use a config/application.yml file only for this purpose (separate from the regular configs):

jasypt:
  encryptor:
    password: masterpassword

Again, masterpassword is just an example that you replace with your actual master password. Now you are able to put encrypted passwords into your configuration and specify the algorithm.
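As a hedged sketch (the property name and the encrypted value are placeholders invented for this illustration), such an encrypted entry in a properties file uses the ENC(...) wrapper that jasypt-spring-boot resolves at runtime:

```properties
# hypothetical example - the value inside ENC(...) would be the OUTPUT of the CLI call above
spring.datasource.password=ENC(3j8aK1qPz...)
jasypt.encryptor.algorithm=PBEWITHHMACSHA512ANDAES_256
```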


This file can be version controlled (GitOps) and without knowing the masterpassword nobody is able to decrypt this to get the actual secret back.

To prevent jasypt from throwing an exception in dev or test scenarios, you can simply put this in your local config (src/main/config/ and the same for test, see above for details):

Is this Security by Obscurity?
  • Yes, from the point of view of protecting the passwords on the target environment, this is nothing but security by obscurity. If an attacker somehow gets full access to the machine, this will only cause him to spend some more time.

  • No, if someone only gets the configuration file. So all your developers might have access to the version-control where the config is stored. Others might have access to the software releases that include these configs. But without the master-password, which should only be known to specific operators, nobody else can decrypt the password (except with brute-force, which will take a very long time; see jasypt for details).

Mapping configuration to your code

If you are using spring-boot as suggested by devon4j, your application can be configured via properties files as described in configuration. To get a single configuration option into your code for flexibility, you can use:

@Value("${}")
private String myConfigurableField;

Now, in your configuration you can add the property:

You may even use @Value("${}") to make the property optional.

Naming conventions for configuration properties

As a best practice your configuration properties should follow these naming conventions:

  • build the property-name as a path of segments separated by the dot character (.)

  • segments should get more specific from left to right

  • a property-name should either be a leaf value or a tree node (prefix of other property-names) but never both! So never have something like and

  • start with a segment namespace unique to your context or application

  • a good example would be «myapp» for the sender address of billing service emails send by «myapp».
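As a hedged illustration of the leaf-vs-node rule (the property names are invented for this sketch, chosen to match the billing example used later):

```properties
# GOOD: every name is either a leaf value or a prefix of other names, never both
myapp.billing.service.email.sender=billing@example.com
myapp.billing.service.email.subject=Your invoice

# BAD: here "myapp.billing.service.email" is both a leaf value and a tree node
# myapp.billing.service.email=billing@example.com
# myapp.billing.service.email.subject=Your invoice
```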

Mapping advanced configuration

However, in many scenarios you will have features that require more than just one property. Injecting those via @Value does not lead to good code quality. Instead, we create a class with the suffix ConfigProperties, containing all configuration properties for our aspect, that is annotated with @ConfigurationProperties:

@ConfigurationProperties(prefix = "myapp.billing.service")
public class BillingServiceConfigProperties {

  private final Email email = new Email();
  private final Smtp smtp = new Smtp();

  public Email getEmail() { return this.email; }
  public Smtp getSmtp() { return this.smtp; }

  public static class Email {

    private String sender;
    private String subject;

    public String getSender() { return this.sender; }
    public void setSender(String sender) { this.sender = sender; }
    public String getSubject() { return this.subject; }
    public void setSubject(String subject) { this.subject = subject; }
  }

  public static class Smtp {

    private String host;
    private int port = 25;

    public String getHost() { return this.host; }
    public void setHost(String host) { this.host = host; }
    public int getPort() { return this.port; }
    public void setPort(int port) { this.port = port; }
  }
}

Of course this is just an example to demonstrate this feature of spring-boot. In order to send emails you would typically use the existing spring-email feature. But as you can see, this allows us to define and access our configuration in a very structured and comfortable way. The annotation @ConfigurationProperties(prefix = "myapp.billing.service") will automatically map spring configuration properties starting with myapp.billing.service via the according getters and setters into our BillingServiceConfigProperties. We can easily define defaults (e.g. 25 as default value for myapp.billing.service.smtp.port). Also, Email or Smtp could be top-level classes to be reused in multiple configurations. Of course you would also add helpful JavaDoc comments to the getters and classes to document your configuration options. Further, to access this configuration, we can use standard dependency-injection:

@Inject
private BillingServiceConfigProperties config;
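For illustration, properties matching the BillingServiceConfigProperties example above could be defined like this (all values are hypothetical):

```properties
# hypothetical values mapped into BillingServiceConfigProperties via the
# "myapp.billing.service" prefix
myapp.billing.service.email.sender=noreply@example.com
myapp.billing.service.email.subject=Your invoice
myapp.billing.service.smtp.host=mail.example.com
myapp.billing.service.smtp.port=2525
```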

For very generic cases you may also use Map<String, String> to map any kind of property in an untyped way. An example for generic configuration from devon4j can be found in ServiceConfigProperties.

For further details about this feature also consult Guide to @ConfigurationProperties in Spring Boot.

Generate configuration metadata

You should further add this dependency to your module containing the *ConfigProperties:


This will generate configuration metadata so projects using your code can benefit from auto-completion and getting your JavaDoc as a tooltip when editing application.properties, which makes this approach very powerful. For further details about this please read A Guide to Spring Boot Configuration Metadata.

Java Persistence API

For mapping java objects to a relational database we use the Java Persistence API (JPA). As JPA implementation we recommend to use hibernate. For general documentation about JPA and hibernate follow the links above as we will not replicate the documentation. Here you will only find guidelines and examples how we recommend to use it properly. The following examples show how to map the data of a database to an entity. As we use JPA we abstract from SQL here. However, you will still need a DDL script for your schema and during maintenance also database migrations. Please follow our SQL guide for such artifacts.


Entities are part of the persistence layer and contain the actual data. They are POJOs (Plain Old Java Objects) on which the relational data of a database is mapped and vice versa. The mapping is configured via JPA annotations (javax.persistence). Usually an entity class corresponds to a table of a database and a property to a column of that table. A persistent entity instance then represents a row of the database table.

A Simple Entity

The following listing shows a simple example:

@Entity
public class MessageEntity extends ApplicationPersistenceEntity implements Message {

  private String text;

  public String getText() {
    return this.text;
  }

  public void setText(String text) {
    this.text = text;
  }
}

The @Entity annotation defines that instances of this class will be entities which can be stored in the database. The @Table annotation is optional and can be used to define the name of the corresponding table in the database. If it is not specified, the simple name of the entity class is used instead.

In order to specify how to map the attributes to columns, we annotate the corresponding getter methods (technically, annotating private fields is also possible, but the two approaches cannot be mixed). The @Id annotation specifies that a property should be used as primary key. With the help of the @Column annotation it is possible to define the name of the column that an attribute is mapped to, as well as other aspects such as nullable or unique. If no column name is specified, the name of the property is used as default.

Note that every entity class needs a constructor with public or protected visibility that does not have any arguments. Moreover, neither the class nor its getters and setters may be final.

Entities should be simple POJOs and not contain business logic.

Entities and Datatypes

Standard datatypes like Integer, BigDecimal, String, etc. are mapped automatically by JPA. Custom datatypes are mapped as serialized BLOB by default, which is typically undesired. In order to map atomic custom datatypes (implementations of SimpleDatatype) we implement an AttributeConverter. Here is a simple example:

@Converter(autoApply = true)
public class MoneyAttributeConverter implements AttributeConverter<Money, BigDecimal> {

  @Override
  public BigDecimal convertToDatabaseColumn(Money attribute) {
    return attribute.getValue();
  }

  @Override
  public Money convertToEntityAttribute(BigDecimal dbData) {
    return new Money(dbData);
  }
}

The annotation @Converter is detected by the JPA vendor if the annotated class is in the packages to scan. Further, autoApply = true implies that the converter is automatically used for all properties of the handled datatype. Therefore all entities with properties of that datatype will automatically be mapped properly (in our example Money is mapped as BigDecimal).

In case you have a composite datatype that you need to map to multiple columns, the JPA does not offer a real solution. As a workaround you can use a bean instead of a real datatype and declare it as @Embeddable. If you are using hibernate you can implement CompositeUserType. Via the @TypeDef annotation it can be registered to hibernate. If you want to annotate the CompositeUserType implementation itself, you also need another annotation (e.g. @MappedSuperclass, though not technically correct) so it is found by the scan.


Enums

By default JPA maps enums via their ordinal. Therefore the database will only contain the ordinals (0, 1, 2, etc.), so inside the database you can not easily understand their meaning. Using @Enumerated with EnumType.STRING allows to map the enum values to their name instead. Both approaches are fragile when it comes to code changes and refactoring (if you change the order of the enum values or rename them) after the application is deployed to production. If you want to avoid this and get a robust mapping, you can define a dedicated string in each enum value for the database representation that you keep untouched. Then you treat the enum just like any other custom datatype.
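Such a robust enum mapping can be sketched as follows (the enum, its values, and the database strings are invented for this illustration; in JPA the toDb/fromDb methods would be wired up via an AttributeConverter like the Money example above):

```java
// Hypothetical sketch: each enum value carries a stable, dedicated string for
// its database representation, so renaming or reordering the constants in code
// does not break data already stored in production.
enum OrderStatus {

  OPEN("O"), SHIPPED("S"), CANCELLED("C");

  private final String dbValue;

  OrderStatus(String dbValue) {
    this.dbValue = dbValue;
  }

  // used when writing to the database (e.g. from an AttributeConverter)
  String toDb() {
    return this.dbValue;
  }

  // used when reading from the database
  static OrderStatus fromDb(String dbValue) {
    for (OrderStatus status : values()) {
      if (status.dbValue.equals(dbValue)) {
        return status;
      }
    }
    throw new IllegalArgumentException("Unknown database value: " + dbValue);
  }
}
```

Renaming CANCELLED to, say, ABORTED would not affect the stored "C" values, which is exactly the robustness argued for above.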


BLOBs

If binary or character large objects (BLOB/CLOB) should be used to store the value of an attribute, e.g. to store an icon, the @Lob annotation should be used as shown in the following listing:

@Lob
public byte[] getIcon() {
  return this.icon;
}

Using a byte array will cause problems if BLOBs get large, because the entire BLOB is loaded into the RAM of the server and has to be processed by the garbage collector. For larger BLOBs the type Blob and streaming should be used:

public Blob getAttachment() {
  return this.attachment;
}
Date and Time

To store date and time related values, the @Temporal annotation can be used as shown in the listing below:

@Temporal(TemporalType.TIMESTAMP)
public java.util.Date getStart() {
  return this.start;
}

Until Java 8, the java data type java.util.Date (or JodaTime) has to be used. TemporalType defines the granularity. In this case, a precision of nanoseconds is used. If this granularity is not wanted, TemporalType.DATE can be used instead, which only has a granularity of days. Mixing these two granularities can cause problems when comparing one value to another. This is why we only use TemporalType.TIMESTAMP.

QueryDSL and Custom Types

Using the Aliases API of QueryDSL might result in an InvalidDataAccessApiUsageException when using custom datatypes in entity properties. This can be circumvented in two steps:

  1. Ensure you have the following maven dependencies in your project (core module) to support custom types via the Aliases API:

  2. Make sure that all your custom types used in entities provide a no-argument constructor with at least visibility level protected.

Primary Keys

We only use simple Long values as primary keys (IDs). By default they are auto-generated (@GeneratedValue(strategy = GenerationType.AUTO)). This is already provided by the class com.devonfw.<projectName>.general.dataaccess.api.AbstractPersistenceEntity that you can extend. In case you have business-oriented keys (often as String), you can define an additional property for it and declare it as unique (@Column(unique = true)). Be sure to include AUTO_INCREMENT in your SQL table field ID to be able to persist data (or similar for other databases).
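As a hedged sketch of what such a table definition can look like (MySQL/H2-style syntax; the table and column names are invented for this illustration and not part of the devonfw reference schema):

```sql
-- hypothetical DDL for an entity table with an auto-generated Long primary key
CREATE TABLE MESSAGE (
  ID BIGINT NOT NULL AUTO_INCREMENT,
  TEXT VARCHAR(255),
  PRIMARY KEY (ID)
);
```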

n:1 and 1:1 Relationships

Entities often do not exist independently but are in some relation to each other. For example, for every period of time that one of the StaffMembers of the restaurant example has worked, which is represented by the class WorkingTime, there is a relationship to this StaffMember.

The following listing shows how this can be modeled using JPA:


public class WorkingTimeEntity {

   private StaffMemberEntity staffMember;

   @ManyToOne
   @JoinColumn(name = "STAFFMEMBER")
   public StaffMemberEntity getStaffMember() {
      return this.staffMember;
   }

   public void setStaffMember(StaffMemberEntity staffMember) {
      this.staffMember = staffMember;
   }
}

To represent the relationship, an attribute of the type of the corresponding entity class that is referenced has been introduced. The relationship is a n:1 relationship, because every WorkingTime belongs to exactly one StaffMember, but a StaffMember usually worked more often than once.
This is why the @ManyToOne annotation is used here. For 1:1 relationships the @OneToOne annotation can be used which works basically the same way. To be able to save information about the relation in the database, an additional column in the corresponding table of WorkingTime is needed which contains the primary key of the referenced StaffMember. With the name element of the @JoinColumn annotation it is possible to specify the name of this column.

1:n and n:m Relationships

The relationship of the example listed above is currently a unidirectional one, as there is a getter method for retrieving the StaffMember from the WorkingTime object, but not vice versa.

To make it a bidirectional one, the following code has to be added to StaffMember:

  private Set<WorkingTimeEntity> workingTimes;

  @OneToMany(mappedBy = "staffMember")
  public Set<WorkingTimeEntity> getWorkingTimes() {
    return this.workingTimes;
  }

  public void setWorkingTimes(Set<WorkingTimeEntity> workingTimes) {
    this.workingTimes = workingTimes;
  }

To make the relationship bidirectional, the tables in the database do not have to be changed. Instead the column that corresponds to the attribute staffMember in class WorkingTime is used, which is specified by the mappedBy element of the @OneToMany annotation. Hibernate will search for corresponding WorkingTime objects automatically when a StaffMember is loaded.

The problem with bidirectional relationships is that if a WorkingTime object is added to the set or list workingTimes in StaffMember, this does not have any effect in the database unless the staffMember attribute of that WorkingTime object is set. That is why devon4j advises not to use bidirectional relationships but to use queries instead. How to do this is shown here. If a bidirectional relationship should be used nevertheless, appropriate add and remove methods must be used.

For 1:n and n:m relations, devon4j demands that (unordered) Sets are used and no other collection types, as shown in the listing above. The only exception: whenever an ordering is really needed, (sorted) Lists can be used.
For example, if WorkingTime objects should be sorted by their start time, this could be done like this:

  private List<WorkingTimeEntity> workingTimes;

  @OneToMany(mappedBy = "staffMember")
  @OrderBy("startTime asc")
  public List<WorkingTimeEntity> getWorkingTimes() {
    return this.workingTimes;
  }

  public void setWorkingTimes(List<WorkingTimeEntity> workingTimes) {
    this.workingTimes = workingTimes;
  }

The value of the @OrderBy annotation consists of an attribute name of the class followed by asc (ascending) or desc (descending).

To store information about a n:m relationship, a separate table has to be used, as one column cannot store several values (at least if the database schema is in first normal form).
For example if one wanted to extend the example application so that all ingredients of one FoodDrink can be saved and to model the ingredients themselves as entities (e.g. to store additional information about them), this could be modeled as follows (extract of class FoodDrink):

  private Set<IngredientEntity> ingredients;

  @ManyToMany
  public Set<IngredientEntity> getIngredients() {
    return this.ingredients;
  }

  public void setIngredients(Set<IngredientEntity> ingredients) {
    this.ingredients = ingredients;
  }

Information about the relation is stored in a separate join table that has two columns, one for referencing the FoodDrink, the other one for referencing the Ingredient. Note that the @JoinTable annotation is not needed in this case because a separate table is the default solution here (likewise for unidirectional 1:n relations) unless there is a mappedBy element specified.

For 1:n relationships this solution has the disadvantage that more joins (in the database system) are needed to get an entity together with all the entities it references. This might have a negative impact on performance, so storing a reference to the owning entity in the referencing entity's table (as in the WorkingTime example above) is probably the better solution in most cases.

Note that bidirectional n:m relationships are not allowed for applications based on devon4j. Instead a third entity has to be introduced, which "represents" the relationship (it has two n:1 relationships).

Eager vs. Lazy Loading

Using JPA it is possible to use either lazy or eager loading. Eager loading means that for entities retrieved from the database, other entities that are referenced by these entities are also retrieved, whereas lazy loading means that this is only done when they are actually needed, i.e. when the corresponding getter method is invoked.

Applications based on devon4j are strongly advised to always use lazy loading. The JPA defaults are:

  • @OneToMany: LAZY

  • @ManyToMany: LAZY

  • @ManyToOne: EAGER

  • @OneToOne: EAGER

So at least for @ManyToOne and @OneToOne you always need to override the default by providing fetch = FetchType.LAZY.
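Applied to the WorkingTimeEntity example from above, overriding the eager default might look like this (a sketch; the join column name is an assumption):

```java
// fetch = FetchType.LAZY overrides the EAGER default of @ManyToOne,
// so the StaffMemberEntity is only loaded when actually accessed.
@ManyToOne(fetch = FetchType.LAZY)
@JoinColumn(name = "STAFFMEMBER")
public StaffMemberEntity getStaffMember() {
  return this.staffMember;
}
```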

Please read the performance guide.
Cascading Relationships

For relations it is also possible to define whether operations are cascaded (like a recursion) to the related entity. By default, nothing is done in these situations. This can be changed by using the cascade property of the annotation that specifies the relation type (@OneToOne, @ManyToOne, @OneToMany, @ManyToMany). This property accepts a CascadeType that offers the following options:

  • PERSIST (for EntityManager.persist, relevant to insert transient entities into the DB)

  • REMOVE (for EntityManager.remove to delete entity from DB)

  • MERGE (for EntityManager.merge)

  • REFRESH (for EntityManager.refresh)

  • DETACH (for EntityManager.detach)

  • ALL (cascade all of the above operations)

See here for more information.
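For illustration, cascading persist and remove from a parent to its children could be declared like this (a sketch reusing the StaffMember example; whether cascading REMOVE is appropriate depends on your domain model):

```java
// Persisting or removing a StaffMemberEntity will also
// persist or remove its WorkingTimeEntity children.
@OneToMany(mappedBy = "staffMember", cascade = { CascadeType.PERSIST, CascadeType.REMOVE })
public Set<WorkingTimeEntity> getWorkingTimes() {
  return this.workingTimes;
}
```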

Typesafe Foreign Keys using IdRef

For simple usage you can use Long for all your foreign keys. However, as an optional pattern for advanced and type-safe usage, we offer IdRef.


Embeddable

An embeddable object is a way to group properties of an entity into a separate Java (child) object. Unlike with relationships, the embeddable is not a separate entity and its properties are stored (embedded) in the same table together with the entity. This is helpful to structure and reuse groups of properties.

The following example shows an Address implemented as an embeddable class:

@Embeddable
public class AddressEmbeddable {

  private String street;
  private String number;
  private Integer zipCode;
  private String city;

  public String getNumber() {
    return this.number;
  }

  public void setNumber(String number) {
    this.number = number;
  }

  ...  // other getter and setter methods, equals, hashCode
}

As you can see an embeddable is similar to an entity class, but with an @Embeddable annotation instead of the @Entity annotation and without primary key or modification counter. An Embeddable does not exist on its own but in the context of an entity. As a simplification Embeddables do not require a separate interface and ETO as the bean-mapper will create a copy automatically when converting the owning entity to an ETO. However, in this case the embeddable becomes part of your api module that therefore needs a dependency on the JPA.

In addition to that the methods equals(Object) and hashCode() need to be implemented as this is required by Hibernate (it is not required for entities because they can be unambiguously identified by their primary key). For some hints on how to implement the hashCode() method please have a look here.
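A minimal sketch of such an equals(Object)/hashCode() pair for the AddressEmbeddable above (the constructor is added here only to keep the example self-contained, and visibility modifiers are trimmed; a real embeddable needs a no-argument constructor for JPA):

```java
import java.util.Objects;

class AddressEmbeddable {

  private String street;
  private String number;
  private Integer zipCode;
  private String city;

  AddressEmbeddable(String street, String number, Integer zipCode, String city) {
    this.street = street;
    this.number = number;
    this.zipCode = zipCode;
    this.city = city;
  }

  @Override
  public boolean equals(Object obj) {
    if (this == obj) {
      return true;
    }
    if (!(obj instanceof AddressEmbeddable)) {
      return false;
    }
    AddressEmbeddable other = (AddressEmbeddable) obj;
    // Objects.equals is null-safe, which matters because all
    // properties of an embeddable may be null.
    return Objects.equals(this.street, other.street)
        && Objects.equals(this.number, other.number)
        && Objects.equals(this.zipCode, other.zipCode)
        && Objects.equals(this.city, other.city);
  }

  @Override
  public int hashCode() {
    return Objects.hash(this.street, this.number, this.zipCode, this.city);
  }
}
```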

Using this AddressEmbeddable inside an entity class can be done like this:

  private AddressEmbeddable address;

  @Embedded
  public AddressEmbeddable getAddress() {
    return this.address;
  }

  public void setAddress(AddressEmbeddable address) {
    this.address = address;
  }

The @Embedded annotation needs to be used for embedded attributes. Note that if all columns of the embeddable (here Address) are null, then the embeddable object itself is also null inside the entity. This has to be considered to avoid NullPointerExceptions. Further, this causes some issues with primitive types in embeddable classes, which can be avoided by only using object types instead.


Inheritance

Just like normal Java classes, entity classes can inherit from others. The only difference is that you need to specify how to map a class hierarchy to database tables. Generic abstract super-classes for entities can simply be annotated with @MappedSuperclass.

For all other cases the JPA offers the annotation @Inheritance with the property strategy taking an InheritanceType that has the following options:

  • SINGLE_TABLE: This strategy uses a single table that contains all columns needed to store all entity-types of the entire inheritance hierarchy. If a column is not needed for an entity because of its type, there is a null value in this column. An additional column is introduced, which denotes the type of the entity (called dtype).

  • TABLE_PER_CLASS: For each concrete entity class there is a table in the database that can store such an entity with all its attributes. An entity is only saved in the table corresponding to its most concrete type. To get all entities of a super type, joins are needed.

  • JOINED: In this case there is a table for every entity class including abstract classes, which contains only the columns for the persistent properties of that particular class. Additionally there is a primary key column in every table. To get an entity of a class that is a subclass of another one, joins are needed.

Each of the three approaches has its advantages and drawbacks, which are discussed in detail here. In most cases, the first one should be used, because it is usually the fastest way to do the mapping, as no joins are needed when retrieving, searching or persisting entities. Moreover it is rather simple and easy to understand. One major disadvantage is that the first approach could lead to a table with a lot of null values, which might have a negative impact on the database size.

The inheritance strategy has to be annotated to the top-most entity of the class hierarchy (where `@MappedSuperclass`es are not considered) like in the following example:

@Entity
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
public abstract class MyParentEntity extends ApplicationPersistenceEntity implements MyParent {
  ...
}

@Entity
public class MyChildEntity extends MyParentEntity implements MyChild {
  ...
}

@Entity
public class MyOtherEntity extends MyParentEntity implements MyOther {
  ...
}

As a best practice we advise you to avoid entity hierarchies at all where possible and otherwise to keep the hierarchy as small as possible. In order to just ensure reuse or establish a common API you can consider a shared interface, a @MappedSuperclass or an @Embeddable instead of an entity hierarchy.

Repositories and DAOs

For each entity a code unit is created that groups all database operations for that entity. We recommend to use spring-data repositories for that as it is most efficient for developers. As an alternative there is still the classic approach using DAOs.

Concurrency Control

The concurrency control defines the way concurrent access to the same data of a database is handled. When several users (or threads of application servers) concurrently access a database, anomalies may happen, e.g. a transaction is able to see changes from another transaction although that one did not yet commit these changes. Most of these anomalies are automatically prevented by the database system, depending on the isolation level (property hibernate.connection.isolation in the jpa.xml, see here).

Another anomaly is when two stakeholders concurrently access a record, do some changes and write them back to the database. The JPA addresses this with different locking strategies (see here).

As a best practice we are using optimistic locking for regular end-user services (OLTP) and pessimistic locking for batches.

Optimistic Locking

The class com.devonfw.module.jpa.persistence.api.AbstractPersistenceEntity already provides optimistic locking via a modificationCounter with the @Version annotation. Therefore JPA takes care of optimistic locking for you. When entities are transferred to clients, modified and sent back for update you need to ensure the modificationCounter is part of the game. If you follow our guides about transfer-objects and services this will also work out of the box. You only have to care about two things:

  • How to deal with optimistic locking in relationships?
    Assume an entity A contains a collection of B entities. Should there be a locking conflict if one user modifies an instance of A while another user in parallel modifies an instance of B that is contained in that instance of A? To address this, take a look at FeatureForceIncrementModificationCounter.

  • What should happen in the UI if an OptimisticLockException occurred?
    According to KISS, our recommendation is that the user gets an error displayed telling him to redo his change on the recent data. Try to design your system and the work processing in a way that keeps such conflicts rare and you are fine.

Pessimistic Locking

For back-end services and especially for batches optimistic locking is not suitable. A human user shall not cause a large batch process to fail because he was editing the same entity. Therefore such use-cases use pessimistic locking, which gives them a kind of priority over the human users. In your DAO implementation you can provide methods that do pessimistic locking via EntityManager operations that take a LockModeType. Here is a simple example:

  getEntityManager().lock(entity, LockModeType.READ);

When using the lock(Object, LockModeType) method with LockModeType.READ, Hibernate will issue a SELECT … FOR UPDATE. This means that no one else can update the entity (see here for more information on the statement). If LockModeType.WRITE is specified, Hibernate issues a SELECT … FOR UPDATE NOWAIT instead, which has the same meaning as the statement above, but if there is already a lock, the program will not wait for this lock to be released. Instead, an exception is raised.
Use one of these types if you want to modify the entity later on; for read-only access no lock is required.

As you might have noticed, the behavior of Hibernate deviates from what one would expect by looking at the LockModeType (especially LockModeType.READ should not cause a SELECT …​ FOR UPDATE to be issued). The framework actually deviates from what is specified in the JPA for unknown reasons.

Database Auditing
Testing Data-Access

For testing of Entities and Repositories or DAOs see testing guide.


We strongly recommend these principles:

  • Use the JPA wherever possible and use vendor (hibernate) specific features only for situations where the JPA does not provide a solution. In the latter case, consider first whether you really need the feature.

  • Create your entities as simple POJOs and use JPA to annotate the getters in order to define the mapping.

  • Keep your entities simple and avoid putting advanced logic into entity methods.

Database Configuration

The configuration for spring and hibernate is already provided by devonfw in our sample application and the application template. So you only need to worry about a few things to customize.

Database System and Access

Obviously you need to configure which type of database you want to use as well as the location and credentials to access it. The defaults are configured in a properties file that is bundled and deployed with the release of the software. It should therefore contain properties as in the given example:

  database.hibernate.dialect = org.hibernate.dialect.PostgreSQLDialect

The environment specific settings (especially passwords) are configured by the operators. For further details consult the configuration guide. These settings can also override the default values. The relevant configuration properties can be seen in the following example for the development environment (located in src/test/resources):


For further details please see here. For production and acceptance environments we use the value validate, which should be set as default. In case you want to use Oracle RDBMS you can find additional hints here.

Database Migration
Database Logging

Add the following properties to your configuration to enable logging of database queries for debugging purposes.

Database Connection Pooling

You typically want to pool JDBC connections to boost performance by recycling previous connections. There are many libraries available for connection pooling. We recommend HikariCP. For Oracle RDBMS see here.


SQL Injection

A common security threat is SQL injection. Never build queries with string concatenation, or your code might be vulnerable as in the following example:

  String query = "Select op from OrderPosition op where op.comment = " + userInput;
  return getEntityManager().createQuery(query).getResultList();

Via the parameter userInput an attacker can inject SQL (JPQL) and execute arbitrary statements in the database causing extreme damage.

In order to prevent such injections, you have to strictly follow our rules for queries.
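As one illustration of these rules (not the complete set), the vulnerable query above can be rewritten with a bound parameter so that user input is never interpreted as JPQL (OrderPositionEntity is assumed here as the entity class name):

```java
// Hedged sketch: the user input is passed as a named parameter,
// never concatenated into the query string, so it cannot change
// the structure of the JPQL statement.
String jpql = "SELECT op FROM OrderPositionEntity op WHERE op.comment = :comment";
return getEntityManager().createQuery(jpql, OrderPositionEntity.class)
    .setParameter("comment", userInput)
    .getResultList();
```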

Limited Permissions for Application

We suggest that you operate your application with a database user that has limited permissions so that it cannot modify the SQL schema (e.g. drop tables). For initializing the schema (DDL) or for schema migrations, use a separate user that is not used by the application itself.


Queries

The Java Persistence API (JPA) defines its own query language, the Java Persistence Query Language (JPQL) (see also the JPQL tutorial), which is similar to SQL but operates on entities and their attributes instead of tables and columns.

The simplest CRUD queries (e.g. find an entity by its ID) are already built into the devonfw CRUD functionality (via Repository or DAO). For other cases you need to write your own query. We distinguish between static and dynamic queries. Static queries have a fixed JPQL query string that may only use parameters to customize the query at runtime. In contrast, dynamic queries can change their clauses (WHERE, ORDER BY, JOIN, etc.) at runtime depending on the given search criteria.

Static Queries

For example, to find all instances of DishEntity (from the MTS sample app) that have a price not exceeding a given maxPrice, we write the following JPQL query:

SELECT dish FROM DishEntity dish WHERE dish.price <= :maxPrice

Here dish is used as alias (variable name) for our selected DishEntity (which refers to the simple name of the Java entity class). With dish.price we are referring to the Java property price (getPrice()/setPrice(…)) in DishEntity. A named variable provided from outside (the search criteria at runtime) is specified with a colon (:) as prefix. Here with :maxPrice we reference a variable that needs to be set via query.setParameter("maxPrice", maxPriceValue). JPQL also supports indexed parameters (?) but they are discouraged because they easily cause confusion and mistakes.

Using Queries to Avoid Bidirectional Relationships

With the usage of queries it is possible to avoid exposing relationships or modelling bidirectional relationships, which have some disadvantages (see relationships). This is especially desired for relationships between entities of different business components. So for example to get all OrderLineEntities for a specific OrderEntity without using the orderLines relation from OrderEntity the following query could be used:

SELECT line FROM OrderLineEntity line WHERE line.order.id = :orderId
Dynamic Queries

For dynamic queries we use QueryDSL. It allows implementing queries in a powerful yet readable and type-safe way (unlike the Criteria API). If you already know JPQL you will quickly be able to read and write QueryDSL code. It feels like JPQL but implemented in Java instead of plain text.

Please be aware that code-generation can be painful especially with large teams. We therefore recommend to use QueryDSL without code-generation. Here is an example from our sample application:

  public List<DishEntity> findDishes(DishSearchCriteriaTo criteria) {
    DishEntity dish = Alias.alias(DishEntity.class);
    JPAQuery<DishEntity> query = newDslQuery(dish); // new JPAQuery<>(getEntityManager()).from(Alias.$(dish));
    Range<BigDecimal> priceRange = criteria.getPriceRange();
    if (priceRange != null) {
      BigDecimal min = priceRange.getMin();
      if (min != null) {
        query.where(Alias.$(dish.getPrice()).goe(min));
      }
      BigDecimal max = priceRange.getMax();
      if (max != null) {
        query.where(Alias.$(dish.getPrice()).loe(max));
      }
    }
    String name = criteria.getName();
    if ((name != null) && !name.isEmpty()) {
      // query.where(Alias.$(dish.getName()).eq(name));
      QueryUtil.get().whereString(query, Alias.$(dish.getName()), name, criteria.getNameOption());
    }
    return query.fetch();
  }
Using Wildcards

For flexible queries it is often required to allow wildcards (especially in dynamic queries). While users intuitively expect glob syntax the SQL and JPQL standards work different. Therefore a mapping is required. devonfw provides this on a lower level by LikePatternSyntax and on a high level by QueryUtil (see QueryHelper.newStringClause(…​)).
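To illustrate the kind of mapping that LikePatternSyntax performs, here is a simplified, hypothetical stand-in (GlobToLike is invented for this sketch and is not the devonfw API):

```java
// Hedged sketch: maps glob syntax (* matches any sequence, ? matches
// a single character) to SQL/JPQL LIKE syntax (% and _), escaping
// characters that are literals in glob but wildcards in LIKE.
final class GlobToLike {

  private GlobToLike() {
  }

  static String convert(String glob) {
    StringBuilder like = new StringBuilder(glob.length());
    for (char c : glob.toCharArray()) {
      switch (c) {
        case '*':
          like.append('%');
          break;
        case '?':
          like.append('_');
          break;
        case '%':
        case '_':
        case '\\':
          // escape LIKE wildcards and the escape character itself
          like.append('\\').append(c);
          break;
        default:
          like.append(c);
      }
    }
    return like.toString();
  }
}
```

The resulting pattern would then be used with a LIKE clause that declares '\' as the escape character.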


Pagination

devonfw provides pagination support. If you are using spring-data repositories, you will get that directly from spring for static queries. Otherwise, for dynamic or generally handwritten queries, we provide this via QueryUtil.findPaginated(…):

boolean determineTotalHitCount = ...;
return QueryUtil.get().findPaginated(criteria.getPageable(), query, determineTotalHitCount);
Pagination example

For the table entity we can make a search request by accessing the REST endpoint with pagination support like in the following examples:

POST mythaistar/services/rest/tablemanagement/v1/table/search
{
  "pagination": {
    "size": 2,
    "total": true
  }
}
{
    "pagination": {
        "size": 2,
        "page": 1,
        "total": 11
    },
    "result": [
        {
            "id": 101,
            "modificationCounter": 1,
            "revision": null,
            "waiterId": null,
            "number": 1,
            "state": "OCCUPIED"
        },
        {
            "id": 102,
            "modificationCounter": 1,
            "revision": null,
            "waiterId": null,
            "number": 2,
            "state": "FREE"
        }
    ]
}
As we are requesting with the total property set to true the server responds with the total count of rows for the query.

For retrieving a concrete page, we provide the page attribute with the desired value. Here we also left out the total property so the server does not incur the effort to calculate it:

POST mythaistar/services/rest/tablemanagement/v1/table/search
{
  "pagination": {
    "size": 2,
    "page": 2
  }
}

{
    "pagination": {
        "size": 2,
        "page": 2,
        "total": null
    },
    "result": [
        {
            "id": 103,
            "modificationCounter": 1,
            "revision": null,
            "waiterId": null,
            "number": 3,
            "state": "FREE"
        },
        {
            "id": 104,
            "modificationCounter": 1,
            "revision": null,
            "waiterId": null,
            "number": 4,
            "state": "FREE"
        }
    ]
}
Query Meta-Parameters

Queries can have meta-parameters that are provided via SearchCriteriaTo. Besides paging (see above) we also get timeout support.

Advanced Queries

Writing queries can sometimes get rather complex. The examples given above only show the very simple basics. Beyond that, a lot of advanced features need to be considered.

As we can not cover all these topics here, they are linked to external documentation that can help and guide you.


Spring Data

If you are using the Spring Framework and have no restrictions regarding that, we recommend to use spring-data-jpa via devon4j-starter-spring-data-jpa, which brings advanced integration (esp. for QueryDSL).


The benefits of spring-data are (for examples and explanations see next sections):

  • All you need is one single repository interface for each entity. No need for a separate implementation or other code artifacts like XML descriptors, NamedQueries class, etc.

  • You have all information together in one place (the repository interface) that actually belongs together (whereas in the classic approach you have the static queries in an XML file, constants for them in the NamedQueries class, and referencing usages in DAO implementation classes).

  • Static queries are most simple to realize as you do not need to write any method body. This means you can develop faster.

  • Support for paging is already built in. Again, for static query methods there is nothing you have to do except use the paging objects in the signature.

  • Still you have the freedom to write custom implementations via default methods within the repository interface (e.g. for dynamic queries).


Repository

For each entity «Entity»Entity an interface is created with the name «Entity»Repository extending DefaultRepository. Such a repository is the analog of a Data-Access-Object (DAO) used in the classic approach or when spring-data is not an option.


The following example shows how to write such a repository:

public interface ExampleRepository extends DefaultRepository<ExampleEntity> {

  @Query("SELECT example FROM ExampleEntity example" //
      + " WHERE example.name = :name")
  List<ExampleEntity> findByName(@Param("name") String name);

  @Query("SELECT example FROM ExampleEntity example" //
      + " WHERE example.name = :name")
  Page<ExampleEntity> findByNamePaginated(@Param("name") String name, Pageable pageable);

  default Page<ExampleEntity> findByCriteria(ExampleSearchCriteriaTo criteria) {
    ExampleEntity alias = newDslAlias();
    JPAQuery<ExampleEntity> query = newDslQuery(alias);
    String name = criteria.getName();
    if ((name != null) && !name.isEmpty()) {
      QueryUtil.get().whereString(query, $(alias.getName()), name, criteria.getNameOption());
    }
    return QueryUtil.get().findPaginated(criteria.getPageable(), query, false);
  }
}


This ExampleRepository has the following features:

  • CRUD support from spring-data (see JavaDoc for details).

  • Support for QueryDSL integration, paging and more as well as locking via GenericRepository

  • A static query method findByName to find all ExampleEntity instances from DB that have the given name. Please note the @Param annotation that links the method parameter with the variable inside the query (:name).

  • The same with pagination support via findByNamePaginated method.

  • A dynamic query method findByCriteria showing the QueryDSL and paging integration into spring-data provided by devon.

Further examples

You can also read the JUnit test-case DefaultRepositoryTest that is testing an example FooRepository.


In case you need auditing, you only need to extend DefaultRevisionedRepository instead of DefaultRepository. The auditing methods can be found in GenericRevisionedRepository.


In case you want to switch to or add spring-data support to your devon application all you need is this maven dependency:

<!-- Starter for spring-data support in devon4j -->

Spring-data also has some drawbacks:

  • Some kind of magic behind the scenes that is not so easy to understand. In case you want to extend all your repositories without providing the implementation via a default method in a parent repository interface, you need to deep-dive into spring-data. We assume that you do not need that and hope that what spring-data and devon already provide out-of-the-box is sufficient.

  • The spring-data magic also includes deriving the query from the method name. This is not easy to understand and especially to debug. Our suggestion is not to use this feature at all and either provide a @Query annotation or an implementation via a default method.

Data Access Object

The Data Access Objects (DAOs) are part of the persistence layer. They are responsible for a specific entity and should be named «Entity»Dao and «Entity»DaoImpl. The DAO offers the so-called CRUD functionalities (create, retrieve, update, delete) for the corresponding entity. Additionally, a DAO may offer advanced operations such as query or locking methods.

DAO Interface

For each DAO there is an interface named «Entity»Dao that defines the API. For CRUD support and common naming we derive it from the ApplicationDao interface that comes with the devon application template:

public interface MyEntityDao extends ApplicationDao<MyEntity> {

  List<MyEntity> findByCriteria(MyEntitySearchCriteria criteria);

}

All CRUD operations are inherited from ApplicationDao so you only have to declare the additional methods.

DAO Implementation

Implementing a DAO is quite simple. We create a class named «Entity»DaoImpl that extends ApplicationDaoImpl and implements your «Entity»Dao interface:

public class MyEntityDaoImpl extends ApplicationDaoImpl<MyEntity> implements MyEntityDao {

  @Override
  public List<MyEntity> findByCriteria(MyEntitySearchCriteria criteria) {
    TypedQuery<MyEntity> query = createQuery(criteria, getEntityManager());
    return query.getResultList();
  }

}

Again you only need to implement the additional non-CRUD methods that you have declared in your «Entity»Dao interface. In the DAO implementation you can use the method getEntityManager() to access the EntityManager from the JPA. You will need the EntityManager to create and execute queries.

Static queries for DAO Implementation

All static queries are declared in the file src/main/resources/META-INF/orm.xml:

<?xml version="1.0" encoding="UTF-8"?>
<entity-mappings version="1.0" xmlns="" xmlns:xsi="">
  <named-query name="">
    <query><![CDATA[SELECT dish FROM DishEntity dish WHERE dish.price <= :maxPrice]]></query>
  </named-query>
</entity-mappings>

When your application is started, all these static queries will be created as prepared statements. This allows better performance and also ensures that you get errors for invalid JPQL queries when you start your app rather than later when the query is used.

To avoid redundant occurrences of the query name, we define a constant for each named query:

public class NamedQueries {

  public static final String FIND_DISH_WITH_MAX_PRICE = "";

}

Note that changing the name of the java constant (FIND_DISH_WITH_MAX_PRICE) can be done easily with refactoring. Further you can trace where the query is used by searching the references of the constant.

The following listing shows how to use this query:

public List<DishEntity> findDishByMaxPrice(BigDecimal maxPrice) {
  Query query = getEntityManager().createNamedQuery(NamedQueries.FIND_DISH_WITH_MAX_PRICE);
  query.setParameter("maxPrice", maxPrice);
  return query.getResultList();
}

Via EntityManager.createNamedQuery(String) we create an instance of Query for our predefined static query. Next we use setParameter(String, Object) to provide a parameter (maxPrice) to the query. This has to be done for all parameters of the query.

Note that using the createQuery(String) method, which takes the entire query as a string (that may already contain parameter values), is not allowed, to avoid SQL injection vulnerabilities. When the method getResultList() is invoked, the query is executed and the result is delivered as a List. As an alternative, there is a method called getSingleResult(), which returns the entity if the query returned exactly one result and throws an exception otherwise.

JPA Performance

When using JPA the developer sometimes does not see or understand where and when statements to the database are triggered.

Establishing expectations: Developers shouldn’t expect to sprinkle magic pixie dust on POJOs in hopes they will become persistent.

— Dan Allen

So in case you do not understand what is going on under the hood of JPA, you will easily run into performance issues due to lazy loading and other effects.

N plus 1 Problem

The most prominent phenomenon is called the N+1 Problem. We use entities from our MTS demo app as an example to explain the problem. There is a DishEntity that has a @ManyToMany relation to IngredientEntity. Now we assume that we want to iterate over all ingredients of a dish like this:

DishEntity dish = dao.findDishById(dishId);
BigDecimal priceWithAllExtras = dish.getPrice();
for (IngredientEntity ingredient : dish.getExtras()) {
  priceWithAllExtras = priceWithAllExtras.add(ingredient.getPrice());
}

Now dish.getExtras() is loaded lazily. Therefore the JPA vendor will provide a list with lazily initialized instances of IngredientEntity that only contain the ID of that entity. With every call of ingredient.getPrice() we then technically trigger an SQL query statement to load the specific IngredientEntity by its ID from the database. So findDishById caused 1 initial query statement, and for any number N of ingredients we cause one additional query statement each. This makes a total of N+1 statements. As sending statements to the database is an expensive operation with a lot of overhead (creating a connection, etc.), this results in bad performance and is therefore a problem (the N+1 Problem).
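The arithmetic behind the problem can be sketched as a toy helper (purely illustrative, no JPA involved):

```java
// Toy arithmetic behind the N+1 Problem: one query for the dish itself
// plus one lazy-loading query per ingredient accessed in the loop.
class NPlusOne {
  static int statementsFor(int ingredientCount) {
    return 1 + ingredientCount;
  }
}
```

So a dish with 10 extras already causes 11 round-trips to the database.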

Solving N plus 1 Problem

To solve the N+1 Problem you need to change your code to only trigger a single statement instead. This can be achieved in various ways. The most universal solution is to use FETCH JOIN in order to pre-load the nested N child entities into the first-level cache of the JPA vendor implementation. This behaves very similar to the @ManyToMany relation to IngredientEntity having FetchType.EAGER, but only for the specific query and not in general. Changing @ManyToMany to FetchType.EAGER would cause bad performance for other use-cases where only the dish but not its extra ingredients is needed. For this reason all relations, including @OneToOne, should always be FetchType.LAZY. Back to our example, we simply replace dao.findDishById(dishId) with dao.findDishWithExtrasById(dishId) that we implement by the following JPQL query:

SELECT dish FROM DishEntity dish
  LEFT JOIN FETCH dish.extras
  WHERE = :dishId

The rest of the code does not have to be changed, but now dish.getExtras() will get the IngredientEntity from the first-level cache where it was fetched by the initial query above.

Please note that if you only need the sum of the prices from the extras you can also create a query using an aggregator function:

SELECT sum(dish.extras.price) FROM DishEntity dish

As you can see you need to understand the concepts in order to get good performance.

There are many advanced topics such as creating database indexes or calculating statistics for the query optimizer to get the best performance. For such advanced topics we recommend to have a database expert in your team that cares about such things. However, understanding the N+1 Problem and its solutions is something that every Java developer in the team needs to understand.


IdRef can be used to reference other entities in TOs in order to make them type-safe and semantically more expressive. It is an optional concept in devon4j for more complex applications that make intensive use of relations and foreign keys.


Assuming you have a method signature like the following:

Long approve(Long cId, Long cuId);

So what are the parameters? What is returned?

IdRef is just a wrapper for a Long used as foreign key. This makes our signature much more expressive and self-explanatory:

IdRef<Contract> approve(IdRef<Contract> cId, IdRef<Customer> cuId);

Now we can easily see, that the result and the parameters are foreign-keys and which entity they are referring to via their generic type. We can read the javadoc of these entities from the generic type and understand the context. Finally, when passing IdRef objects to such methods, we get compile errors in case we accidentally place parameters in the wrong order.
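The core of the idea can be sketched in a few lines of plain Java (a hypothetical minimal wrapper; the real devon4j IdRef offers more functionality). The generic type parameter only documents the referenced entity and enables the compile-time checks described above:

```java
// Hypothetical minimal sketch of an IdRef-style wrapper (illustrative, not the devon4j class).
// The generic type E documents which entity the Long foreign key refers to.
final class IdRefSketch<E> {

  private final Long id;

  private IdRefSketch(Long id) {
    this.id = id;
  }

  static <E> IdRefSketch<E> of(Long id) {
    return (id == null) ? null : new IdRefSketch<>(id);
  }

  Long getId() {
    return this.id;
  }
}
```

At runtime this is just a Long, but at compile time IdRefSketch&lt;Contract&gt; and IdRefSketch&lt;Customer&gt; are distinct types and cannot be swapped accidentally.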

IdRef and Mapping

In order to easily map relations from entities to transfer-objects and back, we can also put according getters and setters into our entities:

public class ContractEntity extends ApplicationPersistenceEntity implements Contract {

  private CustomerEntity customer;

  @ManyToOne(fetch = FetchType.LAZY)
  @JoinColumn(name = "CUSTOMER_ID")
  public CustomerEntity getCustomer() {
    return this.customer;
  }

  public void setCustomer(CustomerEntity customer) {
    this.customer = customer;
  }

  public IdRef<Customer> getCustomerId() {
    return IdRef.of(this.customer);
  }

  public void setCustomerId(IdRef<Customer> customerId) {
    this.customer = JpaHelper.asEntity(customerId, CustomerEntity.class);
  }
}
Now, ensure that you have the same getters and setters for customerId in your Eto:

public class ContractEto extends AbstractEto implements Contract {

  private IdRef<Customer> customerId;

  public IdRef<Customer> getCustomerId() {
    return this.customerId;
  }

  public void setCustomerId(IdRef<Customer> customerId) {
    this.customerId = customerId;
  }
}
This way the bean-mapper can automatically map from your entity (ContractEntity) to your Eto (ContractEto) and vice-versa.

JpaHelper and EntityManager access

In the above example we used JpaHelper.asEntity to convert the foreign key (IdRef<Customer>) to the according entity (CustomerEntity). This will internally use EntityManager.getReference to properly create a JPA entity. The alternative "solution" that may be used with Long instead of IdRef is typically:

  public void setCustomerId(IdRef<Customer> customerId) {
    Long id = null;
    if (customerId != null) {
      id = customerId.getId();
    }
    if (id == null) {
      this.customer = null;
    } else {
      this.customer = new CustomerEntity();
      this.customer.setId(id);
    }
  }
While this "solution" works in most cases, we discovered some more complex cases where it fails with very strange hibernate exceptions. When the entity is cleanly created via EntityManager.getReference instead, it works in all cases. But how can JpaHelper.asEntity as a static method access the EntityManager? For this we need to initialize that access, as otherwise you may see this exception:

java.lang.IllegalStateException: EntityManager has not yet been initialized!
	at com.devonfw.module.jpa.dataaccess.api.JpaEntityManagerAccess.getEntityManager(
	at com.devonfw.module.jpa.dataaccess.api.JpaHelper.asEntity(

For the main usage in your application we assume that there is only one instance of EntityManager. Therefore, we can initialize this instance during the spring boot setup. This is what we provide in JpaInitializer when creating a devon4j app.

JpaHelper and spring-test

Further, you also want your code to work in integration tests. Spring-test provides a lot of magic under the hood to make integration testing easy for you. To boost the performance when running multiple tests, spring is smart and avoids creating the same spring-context multiple times. Therefore it stores these contexts so that if a test-case is executed with a specific spring-configuration that has already been set up before, the same spring-context can be reused instead of creating it again. However, your tests may have multiple spring configurations leading to multiple spring-contexts. Even worse, these tests can run in any order, switching between spring-contexts back and forth. Therefore, a static initializer during the spring boot setup can lead to strange errors as you can get the wrong EntityManager instance. In order to fix such problems, we provide a solution pattern via DbTest, ensuring for every test that the proper instance of EntityManager is initialized. Therefore you should derive directly or indirectly (e.g. via ComponentDbTest or SubsystemDbTest) from DbTest, or adopt your own way to apply this pattern to your tests when using JpaHelper. This already happens if you are extending ApplicationComponentTest or ApplicationSubsystemTest.


For database auditing we use hibernate envers. If you want to use auditing ensure you have the following dependency in your pom.xml:


Make sure that the entity manager also scans the package from the devon4j-jpa[-envers] module in order to work properly, and that the correct repository factory bean class is chosen:

@EntityScan(basePackages = { "«my.base.package»" }, basePackageClasses = { AdvancedRevisionEntity.class })
@EnableJpaRepositories(repositoryFactoryBeanClass = GenericRevisionedRepositoryFactoryBean.class)
public class SpringBootApp {
  ...
}

Now let your [Entity]Repository extend from DefaultRevisionedRepository instead of DefaultRepository.

The repository now provides the methods getRevisionHistoryMetadata(id) and getRevisionHistoryMetadata(id, boolean lazy) to get a list of revisions for a given entity, a method find(id, revision) to load a specific revision of an entity with the given ID, and getLastRevisionHistoryMetadata(id) to load the last revision. To enable auditing for an entity, simply place the @Audited annotation on your entity and all entity classes it extends from.

@Entity(name = "Drink")
@Audited
public class DrinkEntity extends ProductEntity implements Drink {
  ...
}

When auditing is enabled for an entity, an additional database table is used to store all changes to the entity table together with a corresponding revision number. This table is called <ENTITY_NAME>_AUD by default. Another table called REVINFO is used to store all revisions. Make sure that these tables are available. They can be generated by hibernate with the following property (only for development environments).

Another possibility is to put them in your database migration scripts like so:

  id BIGINT NOT NULL generated by default as identity (start with 1),
  timestamp BIGINT NOT NULL,
  user VARCHAR(255),
  revtype TINYINT,

Transaction Handling

Transactions are technically processed by the data access layer. However, the transaction control has to be performed in upper layers. To avoid dependencies on persistence layer and technical code in upper layers, we use AOP to add transaction control via annotations as aspect.

We recommend using the @Transactional annotation (the JEE standard javax.transaction.Transactional rather than org.springframework.transaction.annotation.Transactional). We use this annotation in the logic layer to annotate business methods that participate in transactions (which typically applies to most if not all business components):

  @Transactional
  public MyDataTo getData(MyCriteriaTo criteria) {
    ...
  }

In case a service operation should invoke multiple use-cases, you would end up with multiple transactions, which is undesired (what if the first TX succeeds and then the second TX fails?). In that case you would also annotate the service operation. This is not proposed as a general pattern, because in some rare cases you need to handle constraint violations from the database to create a specific business exception (with a specific message). In such a case you have to surround the transaction with a try/catch statement, which does not work if that method itself is @Transactional.
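What the transaction aspect does around an annotated method can be sketched in plain Java (a toy illustration of the begin/commit/rollback bracket, not the actual Spring/JTA mechanics):

```java
import java.util.function.Supplier;

// Toy sketch of transaction control as an aspect: the business code runs between
// begin and commit; any runtime failure triggers a rollback. Illustrative only.
class TxRunner {

  interface Tx {
    void begin();
    void commit();
    void rollback();
  }

  static <T> T inTransaction(Tx tx, Supplier<T> body) {
    tx.begin();
    try {
      T result = body.get();
      tx.commit();
      return result;
    } catch (RuntimeException e) {
      tx.rollback();
      throw e;
    }
  }
}
```

This also makes visible why nesting a try/catch around a constraint violation inside the same @Transactional method cannot work: the commit (where the violation surfaces) happens outside the business code.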


Transaction control for batches is a lot more complicated and is described in the batch layer.


For general guides on dealing with or avoiding SQL, preventing SQL injection, etc. you should study the data-access layer.

Naming Conventions

Here we define naming conventions that you should follow whenever you write SQL files:

  • All SQL-Keywords in UPPER CASE

  • Indentation should be 2 spaces as suggested by devonfw for every format.


The naming conventions for database constructs (tables, columns, triggers, constraints, etc.) should be aligned with your database product and their operators. However, when you have the freedom of choice and a modern case-sensitive database, you can simply use your code conventions also for database constructs to avoid explicitly mapping each and every property (e.g. RestaurantTable vs. RESTAURANT_TABLE).

  • Define columns and constraints inline in the statement to create the table

  • Indent column types so they all start in the same text column

  • Constraints should be named explicitly (to get reasonable hints from error messages) with:

    • PK_«table» for primary key (name optional here as PK constraints are fundamental)

    • FK_«table»_«property» for foreign keys («table» and «property» are both on the source where the foreign key is defined)

    • UC_«table»_«property»[_«propertyN»]* for unique constraints

    • CK_«table»_«check» for check constraints («check» describes the check, if it is defined on a single property it should start with the property).

  • Old RDBMS had hard limitations for names (e.g. 30 characters). Please note that recent databases have overcome these very low length limitations. However, keep your names short but precise and try to define common abbreviations in your project for according (business) terms. Especially do not just truncate names at the limit.

  • If possible add comments on tables and columns to help DBAs understand your schema. This is also honored by many tools (not only DBA tools).
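The constraint naming scheme above can be applied mechanically, e.g. with a small helper (the class and method names here are illustrative, not part of devonfw):

```java
// Illustrative helper applying the constraint naming scheme described above.
class ConstraintNames {

  static String primaryKey(String table) {
    return "PK_" + table;
  }

  static String foreignKey(String table, String property) {
    return "FK_" + table + "_" + property;
  }

  static String unique(String table, String... properties) {
    return "UC_" + table + "_" + String.join("_", properties);
  }
}
```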

Here is a brief example of a DDL:


-- *** Table ***
CREATE TABLE RESTAURANT_TABLE (
  ID                   NUMBER(19) NOT NULL,
  SEATS                INTEGER NOT NULL,
  CONSTRAINT PK_RESTAURANT_TABLE PRIMARY KEY (ID)
);
COMMENT ON TABLE RESTAURANT_TABLE IS 'The physical tables inside the restaurant.';
-- *** Order ***
CREATE TABLE RESTAURANT_ORDER (
  ID                   NUMBER(19) NOT NULL,
  TABLE_ID             NUMBER(19) NOT NULL,
  TOTAL                DECIMAL(5, 2) NOT NULL,
  STATUS               VARCHAR2(10 CHAR) NOT NULL,
  CONSTRAINT PK_RESTAURANT_ORDER PRIMARY KEY (ID),
  CONSTRAINT FK_RESTAURANT_ORDER_TABLE_ID FOREIGN KEY (TABLE_ID) REFERENCES RESTAURANT_TABLE (ID)
);
COMMENT ON TABLE RESTAURANT_ORDER IS 'An order and bill at the restaurant.';

ATTENTION: Please note that TABLE and ORDER are reserved keywords in SQL and you should avoid using such keywords to prevent problems.


For insert, update, delete, etc. of data SQL scripts should additionally follow these guidelines:

  • Inserts always with the same order of columns in blocks for each table.

  • Insert column values always starting with ID, MODIFICATION_COUNTER, [DTYPE, ] …​

  • List columns with fixed length values (boolean, number, enums, etc.) before columns with free text to support alignment of multiple insert statements

  • Pro Tip: Get familiar with column mode of advanced editors such as notepad++ when editing large blocks of similar insert statements.


Database Migration

For database migrations we use Flyway. As illustrated here database migrations have three advantages:

  1. Recreate a database from scratch

  2. Make it clear at all times what state a database is in

  3. Migrate in a deterministic way from your current version of the database to a newer one

Flyway can be used standalone or can be integrated via its API to make sure the database migration takes place on startup.

Organizational Advice

A few considerations with respect to project organization will help to implement maintainable Flyway migrations.

At first, testing and production environments must be clearly and consistently distinguished. Use the following directory structure to achieve this distinction:


Although this structure introduces redundancies, the benefit outweighs this disadvantage. An even more fine-grained production directory structure which contains one sub folder per release should be implemented:


Emphasizing that migration scripts below the current version must never be changed will aid the second advantage of migrations: it will always be clearly reproducible in which state the database currently is. Here, it is important to mention that, if test data is required, it must be managed separately from the migration data in the following directory:


The migration directory is added to aid easy usage of Flyway defaults. Of course, test data should also be managed per release, just like production data.

With regard to content, separation of concerns (SoC) is an important goal. SoC can be achieved by distinguishing and writing multiple scripts with respect to business components/use cases (or database tables in case of large volumes of master data [1]). Comprehensible file names aid this separation.

It is important to have clear responsibilities regarding the database, the persistence layer (JPA), and migrations. Therefore a dedicated database expert should be in charge of any migrations performed, or should at least be informed before any change to any of the mentioned parts is applied.

Technical Configuration

Database migrations can be SQL based or Java based.

To enable auto migration on startup (not recommended for productive environment) set the following property in the file for an environment.


For development environment it is helpful to set both properties to true in order to simplify development. For regular environments flyway.clean-on-validation-error should be false.

If you want to use Flyway set the following property in any case to prevent Hibernate from doing changes on the database (pre-configured by default in devonfw):


The setting must be communicated to and coordinated with the customer and their needs. In acceptance testing the same configuration as for the production environment should be enabled.

Since migration scripts will also be versioned, the end-of-line (EOL) style must be fixed according to this issue. This is however solved in flyway 4.0+ and the latest devonfw release. Also, the version numbers of migration scripts should not consist of simple ascending integers like V0001…​, V0002…​, etc., as this naming may lead to problems when merging branches. Instead, using timestamps as version numbers will help to avoid such problems.

Naming Conventions

Database migrations should follow this naming convention: V<version>__<description> (e.g.: V12345__Add_new_table.sql).
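Combining the naming convention with the timestamp recommendation above, a script name can be derived like this (a sketch; the helper name is illustrative and the timestamp pattern is one reasonable choice):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Sketch: derive a timestamp-based Flyway script name following the
// V<version>__<description> convention to avoid merge conflicts from
// ascending integer versions. Helper name and pattern are illustrative.
class MigrationName {

  static String of(LocalDateTime when, String description) {
    String version = when.format(DateTimeFormatter.ofPattern("yyyyMMddHHmmss"));
    return "V" + version + "__" + description + ".sql";
  }
}
```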

It is also possible to use Flyway for test data. To do so place your test data migrations in src/main/resources/db/testdata/ and set property


Then Flyway scans the additional location for migrations and applies them all in the order specified by their version. If migrations V0001__... and V0002__... exist and a test data migration should be applied in between, you can name it V0001_1__....


We use SLF4J as API for logging. The recommended implementation is Logback for which we provide additional value such as configuration templates and an appender that prevents log-forging and reformatting of stack-traces for operational optimizations.

Maven Integration

In the pom.xml of your application add this dependency (that also adds transitive dependencies to SLF4J and logback):


The configuration file is logback.xml and has to be put into the directory src/main/resources of your main application. For details consult the logback configuration manual. devon4j provides a production-ready configuration here. Simply copy this configuration into your application in order to benefit from the provided operational and security aspects. We do not include the configuration in the devon4j-logging module to give you the freedom of customization (e.g. tuning log levels for components and integrated products and libraries of your application).

The provided logback.xml is configured to use variables defined in the config/ file. In our example, the log file path points to ../logs/ in order to log to the tomcat log directory when starting tomcat from the bin folder. Change it according to your custom needs.

Listing 7. config/
Logger Access

The general pattern for accessing loggers from your code is a static logger instance per class. We pre-configured the development environment so you can just type LOG and hit [ctrl][space] (and then [arrow up]) to insert the code pattern line into your class:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MyClass {
  private static final Logger LOG = LoggerFactory.getLogger(MyClass.class);
  ...
}

Please note that in this case we are not using the injection pattern but the convenient static alternative. This is a common solution and also has performance benefits.

How to log

We use a common understanding of the log-levels as illustrated by the following table. This helps for better maintenance and operation of the systems by combining both views.

Table 31. Log-levels
Log-level Description Impact Active Environments

FATAL

Only used for fatal errors that prevent the application from working at all (e.g. startup fails or shutdown/restart required)

Operator has to react immediately

all

ERROR

An abnormal error indicating that the processing failed due to technical problems.

Operator should check for known issues and otherwise inform development

all

WARNING

A situation where something did not work as expected, e.g. a business exception or user validation failure occurred.

No direct reaction required. Used for problem analysis.

all

INFO

Important information such as context, duration, success/failure of request or process

No direct reaction required. Used for analysis.

all

DEBUG

Development information that provides additional context for debugging problems.

No direct reaction required. Used for analysis.

development and testing

TRACE

Like DEBUG but exhaustive information and for code that is run very frequently. Will typically cause large log-files.

No direct reaction required. Used for problem analysis.

none (turned off by default)

Exceptions (with their stacktrace) should only be logged at FATAL or ERROR level. For business exceptions, typically a WARNING including the message of the exception is sufficient.

Log Files

We always use the following log files:

  • Error Log: Includes log entries to detect errors.

  • Info Log: Used to analyze system status and to detect bottlenecks.

  • Debug Log: Detailed information for error detection.

The log file name pattern is as follows:

Table 32. Segments of Logfilename
Element Value Description

«logtype»

info, error, debug

Type of log file

«host»

e.g. mywebserver01

Name of the server where logs are generated

«application»

e.g. myapp

Name of the application which causes the logs

«timestamp»

e.g. 2013-09-16_0900

Date of log file
Example: error_log_mywebserver01_myapp_2013-09-16_0900.log

Error log from mywebserver01 for application myapp on September 16th, 2013 at 9am.
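A sketch assembling such a log file name from the segments above (helper name and parameters are illustrative):

```java
// Illustrative helper assembling a log file name from its segments:
// <logtype>_log_<host>_<application>_<timestamp>.log
class LogFileName {

  static String build(String logtype, String host, String application, String timestamp) {
    return String.format("%s_log_%s_%s_%s.log", logtype, host, application, timestamp);
  }
}
```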

Output format

We use the following output format for all log entries to ensure that searching and filtering of log entries work consistent for all logfiles:

 [D: «timestamp»] [P: «priority»] [C: «NDC»][T: «thread»][L: «logger»]-[M: «message»]
  • D: Date (Timestamp in ISO8601 format e.g. 2013-09-05 16:40:36,464)

  • P: Priority (the log level)

  • C: Correlation ID (ID to identify users across multiple systems, needed when application is distributed)

  • T: Thread (Name of thread)

  • L: Logger name (use class name)

  • M: Message (log message)


 [D: 2013-09-05 16:40:36,464] [P: DEBUG] [C: 12345] [T: main] [L: my.package.MyClass]-[M: My message...]

In order to prevent log-forging attacks, we provide a special appender for logback in devon4j-logging. If you use it (see configuration), you are safe from such attacks.
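The core idea behind such an appender can be sketched as follows (an illustrative helper only, not the devon4j implementation, which applies this transparently in the logging pipeline): escaping line breaks prevents attacker-controlled input from faking additional log entries.

```java
// Sketch of the core idea behind log-forging protection: escape line breaks
// so user-controlled input cannot inject fake log lines. Illustrative only.
class LogSanitizer {

  static String escapeNewlines(String message) {
    return message.replace("\r", "\\r").replace("\n", "\\n");
  }
}
```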

Correlation ID

In order to correlate separate HTTP requests to services belonging to the same user / session, we provide a servlet filter called DiagnosticContextFilter. This filter takes a provided correlation ID from the HTTP header X-Correlation-Id. If none is found, it will generate a new correlation ID as a UUID. This correlation ID is added as MDC to the logger. Therefore, it will be included in every log message of the current request (thread). Further concepts such as service invocations will pass this correlation ID to subsequent calls in the application landscape. Hence you can find all log messages related to an initial request simply via the correlation ID, even in highly distributed systems.
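The header-or-UUID decision described above can be sketched in plain Java (a stand-in for the filter logic; class and method names are illustrative, not the actual DiagnosticContextFilter API):

```java
import java.util.Map;
import java.util.UUID;

// Sketch of the correlation-ID resolution described above: reuse a provided
// X-Correlation-Id header or fall back to a freshly generated UUID.
class CorrelationId {

  static String resolve(Map<String, String> headers) {
    String id = headers.get("X-Correlation-Id");
    return (id == null || id.isEmpty()) ? UUID.randomUUID().toString() : id;
  }
}
```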


In highly distributed systems (from clustering up to microservices) it might get tedious to search for problems and details in log files. Therefore, we recommend setting up a central log and analysis server for your application landscape. Then you feed the logs from all your applications (using logstash) into that central server, which adds them to a search index to allow fast searches (using elasticsearch). This should be completed with a UI that allows dashboards and reports based on data aggregated from all the logs. This is addressed by ELK or Graylog.

Adding custom values to JSON log

When you use an external system for gathering distributed logs, we strongly suggest that you use a JSON-based log format (e.g. by using the provided logback.xml, see above). In that case it might be useful to log custom fields to the produced JSON. This is fully supported by slf4j together with logstash. The trick is to use the class net.logstash.logback.argument.StructuredArguments for adding the arguments to your log message, e.g.:

import static net.logstash.logback.argument.StructuredArguments.v;

...

  public void processMessage(MyMessage msg) {"Processing message with {}", v("msgId", msg.getId()));
  }

This will produce something like:

{ "timestamp":"2018-11-06T18:36:37.638+00:00",
  "message":"Processing message with msgId=MSG-999-000",
  "msgId":"MSG-999-000",
  ... }


Security is today's most important cross-cutting concern of an application and an enterprise IT landscape. We seriously care about security and give you detailed guides to prevent pitfalls, vulnerabilities, and other disasters. While many mistakes can be avoided by following our guidelines, you still have to consider security and think about it in your design and implementation. The security guide will not automatically prevent you from any harm, but it will provide you hints and best practices already used in different software products.

An important aspect of security is proper authentication and authorization as described in access-control. In the following we discuss potential vulnerabilities and how to protect against them.

Vulnerabilities and Protection

Independent from classical authentication and authorization mechanisms, there are many common pitfalls that can lead to vulnerabilities and security issues in your application, such as XSS, CSRF, SQL injection, log-forging, etc. A good source of information about this is OWASP. We address these common threats individually in the security sections of our technological guides, as a concrete solution to prevent an attack typically depends on the technology used. The following table illustrates common threats and contains links to the solutions and protection mechanisms provided by devonfw:

Table 33. Security threats and protection-mechanisms
Threat Protection Link to details

A1 Injection

validate input, escape output, use proper frameworks

SQL Injection

A2 Broken Authentication

encrypt all channels, use a central identity management with strong password-policy


A3 Sensitive Data Exposure

Use secured exception facade, design your data model accordingly

REST exception handling

A4 XML External Entities

Prefer JSON over XML, ensure FSP when parsing (external) XML

XML guide

A5 Broken Access Control

Ensure proper authorization for all use-cases, use @DenyAll as default to enforce

Access-control guide especially method authorization

A6 Security Misconfiguration

Use devon4j application template and guides to avoid

tutorial-newapp and sensitive configuration

A7 Cross-Site Scripting

prevent injection (see A1) for HTML, JavaScript and CSS and understand same-origin-policy


A8 Insecure Deserialization

Use simple and established serialization formats such as JSON, prevent generic deserialization (for polymorphic types)

JSON guide especially inheritence, XML guide

A9 Using Components with Known Vulnerabilities

subscribe to security newsletters, recheck products and their versions continuously, use devonfw dependency management

CVE newsletter and dependency check

A10 Insufficient_Logging & Monitoring

Ensure to log all security related events (login, logout, errors), establish effective monitoring

Logging guide and monitoring guide

Insecure Direct Object References

Using direct object references (IDs) only with appropriate authorization


Cross-Site Request Forgery (CSRF)

secure mutable service operations with an explicit CSRF security token sent in HTTP header and verified on the server

CSRF guide


Log-Forging

Escape newlines in log messages

logging security

Unvalidated Redirects and Forwards

Avoid using redirects and forwards, in case you need them do a security audit on the solution.

devonfw proposes to use rich-clients (SPA/RIA). We only use redirects for login in a safe way.

Advanced Security

While OWASP Top 10 covers the basic aspects of application security, there are advanced standards such as ASVS. In devonfw we address this in the Application Security Quick Solution Guide.

Dependency Check

To address the threat Using Components with Known Vulnerabilities, we integrated the OWASP dependency check into the devon4j maven build. If you build a devon4j application (sample or any app created from our app-template), you can activate the dependency check with the security profile:

mvn clean install -P security

This does not run by default as it adds some overhead to the build performance. However, consider running it in your CI at least nightly. After the dependency check is performed, you will find the results in target/dependency-check-report.html of each module. The report will also be generated when the site is built (mvn site), even without the profile.

Penetration Testing

For penetration testing (testing for vulnerabilities) of your web application, we recommend the following tools:


Access-Control is a central and important aspect of Security. It consists of two major aspects:



Authentication is the verification that somebody interacting with the system is actually the subject who he claims to be.

The one authenticated is properly called subject or principal. However, for simplicity we use the common term user even though it may not be a human (e.g. in case of a service call from an external system).

To prove his authenticity, the user provides some secret called credentials. The simplest form of credentials is a password.

Please never implement your own authentication mechanism or credential store. You have to be aware of implicit demands such as salting and hashing credentials, password life-cycle with recovery, expiry, and renewal including email notification confirmation tokens, central password policies, etc. This is the domain of access managers and identity management systems. In a business context you will typically already find a system for this purpose that you have to integrate (e.g. via LDAP). Otherwise you should consider establishing such a system e.g. using keycloak.

We use spring-security as a framework for authentication purposes.

Therefore you need to provide an implementation of WebSecurityConfigurerAdapter:

public class MyWebSecurityConfig extends WebSecurityConfigurerAdapter {

  @Inject
  private UserDetailsService userDetailsService;

  @Override
  public void configure(HttpSecurity http) throws Exception {
    ...
  }
}
As you can see spring-security offers a fluent API for easy configuration. You can simply add invocations like formLogin().loginPage("/public/login") or httpBasic().realmName("MyApp"). Also CSRF protection can be configured by invoking csrf(). For further details see spring Java-config for HTTP security.

Further, you need to provide an implementation of the UserDetailsService interface. A good starting point comes with our application template.

Preserve original request anchors after form login redirect

Spring Security will automatically redirect any unauthorized access to the defined login-page. After successful login, the user will be redirected to the originally requested URL. The only pitfall is that anchors in the request URL will not be transmitted to the server and thus cannot be restored after a successful login. Therefore the devon4j-security module provides the RetainAnchorFilter, which is able to inject javascript code into the source page and the target page of any redirection. Using javascript, this filter retrieves the requested anchors and stores them in a cookie. When reaching the target URL, this cookie is used to restore the original anchors again.

To enable this mechanism you have to integrate the RetainAnchorFilter as follows: First, declare the filter with

  • storeUrlPattern: a regular expression matching the URL where anchors should be stored

  • restoreUrlPattern: a regular expression matching the URL where anchors should be restored

  • cookieName: the name of the cookie to save the anchors in the intermediate time

You can easily configure this as code in your WebSecurityConfig as following:

RetainAnchorFilter filter = new RetainAnchorFilter();
// configure storeUrlPattern, restoreUrlPattern and cookieName on the filter as described above
http.addFilterBefore(filter, UsernamePasswordAuthenticationFilter.class);

Users vs. Systems

If we are talking about authentication we have to distinguish two forms of principals:

  • human users

  • autonomous systems

While e.g. a Kerberos/SPNEGO Single-Sign-On makes sense for human users it is pointless for authenticating autonomous systems. So always keep this in mind when you design your authentication mechanisms and separate access for human users from access for systems.

Using JWT

For authentication via JSON Web Token (JWT) see JWT guide.

Mixed Authentication

In rare cases you might need to mix multiple authentication mechanisms (form based, basic-auth, SAMLv2, OAuth, etc.) within the same app (for different URLs). For KISS this should be avoided where possible. However, when needed, you can find a solution here.



Authorization is the verification that an authenticated user is allowed to perform the operation he intends to invoke.

Clarification of terms

For clarification we also want to give a common understanding of related terms that have no unique definition and consistent usage in the wild.

Table 34. Security terms related to authorization

  • Permission — A permission is an object that allows a principal to perform an operation in the system. This permission can be granted (given) or revoked (taken away). Sometimes people also use the term right, which is actually wrong, as a right (such as the right to be free) can not be revoked.

  • Group — We use the term group in this context for an object that contains permissions. A group may also contain other groups. Then the group represents the set of all recursively contained permissions.

  • Role — We consider a role as a specific form of group that also contains permissions. A role identifies a specific function of a principal. A user can act in a role.
    For simple scenarios a principal has a single role associated. In more complex situations a principal can have multiple roles but has only one active role at a time that he can choose out of his assigned roles. For KISS it is sometimes sufficient to avoid this by creating multiple accounts for the few users with multiple roles. Otherwise at least avoid switching roles at run-time in clients as this may cause problems with related states. Simply restart the client with the new role as parameter in case the user wants to switch his role.

  • Access Control — Any permission, group, role, etc., which declares a control for access management.

Suggestions on the access model

For the access model we give the following suggestions:

  • Each Access Control (permission, group, role, …​) is uniquely identified by a human readable string.

  • We create a unique permission for each use-case.

  • We define groups that combine permissions to typical and useful sets for the users.

  • We define roles as specific groups as required by our business demands.

  • We allow users to be associated with a list of Access Controls.

  • For authorization of an implemented use case we determine the required permission. Furthermore, we determine the current user and verify that the required permission is contained in the tree spanned by all his associated Access Controls. If the user does not have the permission we throw a security exception and thus abort the operation and transaction.

  • We avoid negative permissions: a user has no permissions by default, and only permissions explicitly granted give him access to specific things. Granted permissions can not be reduced by other permissions.

  • Technically we consider permissions as a secret of the application. Administrators shall not fiddle with individual permissions but grant them via groups. So the access management provides a list of strings identifying the Access Controls of a user. The individual application itself contains these Access Controls in a structured way, whereas each group forms a permission tree.
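The tree-based permission check from the list above can be sketched as follows. Note that AccessControl here is a simplified stand-in and not the devon4j type; only the principle (collect all permissions reachable in the spanned tree, then test containment) is illustrated.

```java
import java.util.*;

public class AccessControlCheck {

  // minimal stand-in for an access control node (group or permission)
  public static class AccessControl {
    final String id;
    final List<AccessControl> children = new ArrayList<>();

    public AccessControl(String id) { this.id = id; }

    public AccessControl add(AccessControl child) {
      this.children.add(child);
      return this;
    }
  }

  // collect all access control IDs reachable from the given controls (the spanned tree)
  public static Set<String> collectPermissions(Collection<AccessControl> controls) {
    Set<String> result = new HashSet<>();
    Deque<AccessControl> todo = new ArrayDeque<>(controls);
    while (!todo.isEmpty()) {
      AccessControl ac = todo.pop();
      if (result.add(ac.id)) {
        todo.addAll(ac.children);
      }
    }
    return result;
  }

  public static boolean hasPermission(Collection<AccessControl> userControls, String required) {
    return collectPermissions(userControls).contains(required);
  }

  public static void main(String[] args) {
    AccessControl readMasterData = new AccessControl("shop.ReadMasterData")
        .add(new AccessControl("shop.FindProduct"));
    AccessControl admin = new AccessControl("shop.Admin").add(readMasterData);
    System.out.println(hasPermission(List.of(admin), "shop.FindProduct")); // true
  }
}
```

If the required permission is not found in the tree, the application would throw a security exception and abort the operation as described above.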

Naming conventions

As stated above each Access Control is uniquely identified by a human readable string. This string should follow the naming convention:

«app-id».«local-name»

For Access Control Permissions the «local-name» again follows the convention:

«verb»«object»
The segments are defined by the following table:

Table 35. Segments of Access Control Permission ID

  • «app-id» — A unique technical but human readable string identifying the application (or microservice). It shall not contain special characters and especially no dot or whitespace. We recommend to use lower-train-case-ascii-syntax. The identity and access management should be organized on enterprise level rather than application level. Therefore permissions of different apps might easily clash (e.g. two apps might both define a group ReadMasterData but some user shall get this group for only one of these two apps). Using the «app-id». prefix is a simple but powerful namespacing concept that allows you to scale and grow. You may also reserve specific «app-id»s for cross-cutting concerns that do not actually reflect a single app, e.g. to grant access to a geographic region. Example: shop

  • «verb» — The action that is to be performed on «object». We use Find for searching and reading data. Save shall be used both for create and update. Only if you really have demands to separate these two you may use Create in addition to Save. Finally, Delete is used for deletions. For non-CRUD actions you are free to use additional verbs such as Approve or Reject. Example: Find

  • «object» — The affected object or entity. Shall be named according to your data-model. Example: Product


So as an example, shop.FindProduct will reflect the permission to search and retrieve a Product in the shop application. The group shop.ReadMasterData may combine all permissions to read master-data from the shop. However, also a group shop.Admin may exist for the Admin role of the shop application. Here the «local-name» is Admin, which does not follow the «verb»«object» schema.
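Assuming the convention above, composing and validating such IDs can be sketched in plain Java (the helper class and method names are hypothetical, chosen only for this illustration):

```java
public class PermissionIds {

  // builds a permission ID following the «app-id».«verb»«object» convention, e.g. "shop.FindProduct"
  public static String permissionId(String appId, String verb, String object) {
    return appId + "." + verb + object;
  }

  // checks that the «app-id» segment uses lower-train-case-ascii-syntax
  // (no special characters, no dot, no whitespace)
  public static boolean isValidAppId(String appId) {
    return appId.matches("[a-z0-9]+(-[a-z0-9]+)*");
  }

  public static void main(String[] args) {
    System.out.println(permissionId("shop", "Find", "Product")); // shop.FindProduct
    System.out.println(isValidAppId("my-shop-app")); // true
  }
}
```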


The module devon4j-security provides ready-to-use code based on spring-security that makes your life a lot easier.

Figure 6. devon4j Security Model

The diagram shows the model of devon4j-security that separates two different aspects:

  • The Identity- and Access-Management is provided by corresponding products and is typically already available in the enterprise landscape (e.g. an Active Directory). It provides a hierarchy of primary access control objects (roles and groups) of a user. An administrator can grant and revoke permissions (indirectly) via this way.

  • The application security defines a hierarchy of secondary access control objects (groups and permissions). This is done by configuration owned by the application (see following section). The "API" is defined by the IDs of the primary access control objects that will be referenced from the Identity- and Access-Management.

Access Control Config

In your application simply extend AccessControlConfig to configure your access control objects as code and reference it from your use-cases. An example config may look like this:

public class ApplicationAccessControlConfig extends AccessControlConfig {

  public static final String APP_ID = "MyApp";

  private static final String PREFIX = APP_ID + ".";

  public static final String PERMISSION_FIND_OFFER = PREFIX + "FindOffer";

  public static final String PERMISSION_SAVE_OFFER = PREFIX + "SaveOffer";

  public static final String PERMISSION_DELETE_OFFER = PREFIX + "DeleteOffer";

  public static final String PERMISSION_FIND_PRODUCT = PREFIX + "FindProduct";

  public static final String PERMISSION_SAVE_PRODUCT = PREFIX + "SaveProduct";

  public static final String PERMISSION_DELETE_PRODUCT = PREFIX + "DeleteProduct";

  public static final String GROUP_READ_MASTER_DATA = PREFIX + "ReadMasterData";

  public static final String GROUP_MANAGER = PREFIX + "Manager";

  public static final String GROUP_ADMIN = PREFIX + "Admin";

  public ApplicationAccessControlConfig() {

    AccessControlGroup readMasterData = group(GROUP_READ_MASTER_DATA, PERMISSION_FIND_OFFER, PERMISSION_FIND_PRODUCT);
    AccessControlGroup manager = group(GROUP_MANAGER, readMasterData, PERMISSION_SAVE_OFFER, PERMISSION_SAVE_PRODUCT);
    AccessControlGroup admin = group(GROUP_ADMIN, manager, PERMISSION_DELETE_OFFER, PERMISSION_DELETE_PRODUCT);
  }
}

Configuration on Java Method level

In your use-case you can now reference a permission like this:

public class UcSafeOfferImpl extends ApplicationUc implements UcSafeOffer {

  @RolesAllowed(ApplicationAccessControlConfig.PERMISSION_SAVE_OFFER)
  @Override
  public OfferEto save(OfferEto offer) { ... }
}
Check Data-Permissions
Access Control Schema (deprecated)

The access-control-schema.xml approach is deprecated. The documentation can still be found in access control schema.

Access Control Schema

With release 3.0.0 the access-control-schema.xml has been deprecated. You may still use it and find the documentation in this section. However, for new devonfw applications always start with the new approach described in access control config.

Legacy Access Control Schema Documentation

The file access-control-schema.xml is used to define the mapping from groups to permissions (see example from sample app). The general terms discussed above can be mapped to the implementation as follows:

Table 36. General security terms related to devon4j access control schema

  • Permission — AccessControlPermission

  • Group — AccessControlGroup. When considering different levels of groups of different meanings, declare the type attribute, e.g. as "group".

  • Role — AccessControlGroup with type="role".

  • Access Control — AccessControl: the super type that represents a tree of AccessControlGroups and AccessControlPermissions. If a principal "has" an AccessControl he also "has" all AccessControls with according permissions in the spanned sub-tree.

Listing 8. Example access-control-schema.xml
<?xml version="1.0" encoding="UTF-8"?>
<access-control-schema>

  <group id="ReadMasterData" type="group">
    <permission id="OfferManagement_GetOffer"/>
    <permission id="OfferManagement_GetProduct"/>
    <permission id="TableManagement_GetTable"/>
    <permission id="StaffManagement_GetStaffMember"/>
  </group>

  <group id="Waiter" type="role">
    <!-- the declaration inheriting the group Barkeeper is omitted in this excerpt -->
    <permission id="TableManagement_ChangeTable"/>
  </group>

</access-control-schema>

This example access-control-schema.xml declares

  • a group named ReadMasterData, which grants four different permissions, e.g., OfferManagement_GetOffer

  • a group named Waiter, which

    • also grants all permissions from the group Barkeeper

    • in addition grants the permission TableManagement_ChangeTable

    • is marked to be a role for further application needs.

The devon4j-security module automatically validates the schema configuration and will throw an exception if invalid.

Unfortunately, Spring Security does not provide differentiated interfaces for authentication and authorization. Thus we have to provide an AuthenticationProvider, the interface Spring Security offers for authentication and authorization simultaneously. To integrate the access control schema provided by devon4j-security, you can simply inherit your own implementation from the abstract class AbstractAccessControlBasedAuthenticationProvider and register your ApplicationAuthenticationProvider as an AuthenticationManager. Doing so, you also have to declare the two beans AccessControlProvider and AccessControlSchemaProvider, which are preconditions for the AbstractAccessControlBasedAuthenticationProvider.

As state of the art, devon4j focuses on role-based authorization to cope with authorization for executing use cases of an application. We use the JSR250 annotations, mainly @RolesAllowed, for authorizing method calls against the permissions defined in the annotation body. This has to be done for each use-case method in the logic layer. Here is an example:

public class OrdermanagementImpl extends AbstractComponentFacade implements Ordermanagement {

  @RolesAllowed(PermissionConstants.FIND_TABLE)
  @Override
  public PaginatedListTo<OrderCto> findOrdersByPost(OrderSearchCriteriaTo criteria) {

    return findOrderCtos(criteria);
  }
}

Now this method can only be called if a user is logged-in that has the permission FIND_TABLE.


In some projects there are demands for permissions and authorization that depend on the processed data. E.g. a user may only be allowed to read or write data for a specific region. This adds additional complexity to your authorization. If you can avoid this, it is always best to keep things simple. However, in various cases it is a requirement. Therefore the following sections give you guidance and patterns on how to solve this properly.

Structuring your data

For all your business objects (entities) that have to be secured regarding data permissions, we recommend creating a separate interface that provides access to the relevant data required to decide about the permission. Here is a simple example:

public interface SecurityDataPermissionCountry {

  /**
   * @return the 2-letter ISO code of the country this object is associated with. Users need
   *         a data-permission for this country in order to read and write this object.
   */
  String getCountry();
}

Now related business objects (entities) can implement this interface. Often such data-permissions have to be applied to an entire object-hierarchy. For security reasons we recommend that all child-objects also implement this interface. For performance reasons we recommend that the child-objects redundantly store the data-permission properties (such as country in the example above), which simply get propagated from the parent when a child object is created.
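The propagation described above can be sketched as follows; Order and OrderPosition are hypothetical example entities introduced only for this illustration:

```java
public class DataPermissionPropagation {

  public static class Order {
    private final String country;

    public Order(String country) { this.country = country; }

    public String getCountry() { return this.country; }
  }

  public static class OrderPosition {
    private final String country;

    // the data-permission property is copied from the parent when the child is created,
    // so the child redundantly stores the country for efficient filtering
    public OrderPosition(Order parent) { this.country = parent.getCountry(); }

    public String getCountry() { return this.country; }
  }

  public static void main(String[] args) {
    OrderPosition pos = new OrderPosition(new Order("DE"));
    System.out.println(pos.getCountry()); // DE
  }
}
```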

Permissions for processing data

When saving or processing objects with a data-permission, we recommend to provide dedicated methods to verify the permission in an abstract base-class such as AbstractUc and simply call this explicitly from your business code. This makes it easy to understand and debug the code. Here is a simple example:

protected void verifyPermission(SecurityDataPermissionCountry entity) throws AccessDeniedException;
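A minimal, self-contained sketch of such a verifyPermission method is shown below; the interface and exception here are simplified stand-ins for the actual devon4j and spring-security types:

```java
import java.util.Set;

public class PermissionVerifier {

  // simplified stand-in for the interface from the guide
  public interface SecurityDataPermissionCountry {
    String getCountry();
  }

  // simplified stand-in for org.springframework.security.access.AccessDeniedException
  public static class AccessDeniedException extends RuntimeException {
    public AccessDeniedException(String message) { super(message); }
  }

  private final Set<String> permittedCountries;

  public PermissionVerifier(Set<String> permittedCountries) {
    this.permittedCountries = permittedCountries;
  }

  // compare the entity's country against the current user's permitted countries
  public void verifyPermission(SecurityDataPermissionCountry entity) {
    if (!this.permittedCountries.contains(entity.getCountry())) {
      throw new AccessDeniedException("No data-permission for country " + entity.getCountry());
    }
  }

  public static void main(String[] args) {
    PermissionVerifier verifier = new PermissionVerifier(Set.of("DE"));
    verifier.verifyPermission(() -> "DE"); // permitted, no exception
    System.out.println("DE permitted");
  }
}
```

Calling such a method explicitly at the start of your business methods keeps the authorization easy to follow in a debugger.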

Beware of AOP

For simple but cross-cutting data-permissions you may also use AOP. This leads to programming aspects that reflectively scan method arguments and magically decide what to do. Be aware that this quickly gets tricky:

  • What if multiple of your method arguments have data-permissions (e.g. implement SecurityDataPermission*)?

  • What if the object to authorize is only provided as reference (e.g. Long or IdRef) and only loaded and processed inside the implementation where the AOP aspect does not apply?

  • How to express advanced data-permissions in annotations?

What we have learned is that annotations like @PreAuthorize from spring-security easily lead to the "programming in string literals" anti-pattern. We strongly discourage to use this anti-pattern. In such case writing your own verifyPermission methods that you manually call in the right places of your business-logic is much better to understand, debug and maintain.

Permissions for reading data

When it comes to restrictions on the data to read it becomes even more tricky. In the context of a user, only entities he is permitted to read shall be loaded from the database. This is simple for loading a single entity (e.g. by its ID) as you can load it and then, if not permitted, throw an exception to secure your code. But what if the user is performing a search query to find many entities? For performance reasons we should only find data the user is permitted to read and filter all the rest already via the database query. But what if this is not a requirement for a single query but needs to be applied cross-cutting to tons of queries? Therefore we have the following pattern that solves your problem:

For each data-permission attribute (or set of such) we create an abstract base entity:

@EntityListeners(PermissionCheckListener.class)
@FilterDef(name = "country", parameters = {@ParamDef(name = "countries", type = "string")})
@Filter(name = "country", condition = "country in (:countries)")
public abstract class SecurityDataPermissionCountryEntity extends ApplicationPersistenceEntity
    implements SecurityDataPermissionCountry {

  private String country;

  @Override
  public String getCountry() {
    return this.country;
  }

  public void setCountry(String country) {
    this.country = country;
  }
}

There are some special hibernate annotations @EntityListeners, @FilterDef, and @Filter used here allowing to apply a filter on the country for any (non-native) query performed by hibernate. The entity listener may look like this:

public class PermissionCheckListener {

  @PostLoad
  public void read(SecurityDataPermissionCountryEntity entity) {
    // verify the read data-permission for entity.getCountry() here
  }

  @PrePersist
  @PreUpdate
  public void write(SecurityDataPermissionCountryEntity entity) {
    // verify the write data-permission for entity.getCountry() here
  }
}

This will ensure that hibernate implicitly will call these checks for every such entity when it is read from or written to the database. Further to avoid reading entities from the database the user is not permitted to (and ending up with exceptions), we create an AOP aspect that automatically activates the above declared hibernate filter:

public class PermissionCheckerAdvice implements MethodBeforeAdvice {

  @Inject
  private PermissionChecker permissionChecker;

  @PersistenceContext
  private EntityManager entityManager;

  @Override
  public void before(Method method, Object[] args, Object target) {

    Collection<String> permittedCountries = this.permissionChecker.getPermittedCountriesForReading();
    if (permittedCountries != null) { // null is returned for admins that may access all countries
      if (permittedCountries.isEmpty()) {
        throw new AccessDeniedException("Not permitted for any country!");
      }
      Session session = this.entityManager.unwrap(Session.class);
      session.enableFilter("country").setParameterList("countries", permittedCountries.toArray());
    }
  }
}

Finally to apply this aspect to all Repositories (can easily be changed to DAOs) implement the following advisor:

public class PermissionCheckerAdvisor implements PointcutAdvisor, Pointcut, ClassFilter, MethodMatcher {

  @Inject
  private PermissionCheckerAdvice advice;

  @Override
  public Advice getAdvice() {
    return this.advice;
  }

  @Override
  public boolean isPerInstance() {
    return false;
  }

  @Override
  public Pointcut getPointcut() {
    return this;
  }

  @Override
  public ClassFilter getClassFilter() {
    return this;
  }

  @Override
  public MethodMatcher getMethodMatcher() {
    return this;
  }

  @Override
  public boolean matches(Method method, Class<?> targetClass) {
    return true; // apply to all methods
  }

  @Override
  public boolean isRuntime() {
    return false;
  }

  @Override
  public boolean matches(Method method, Class<?> targetClass, Object... args) {
    throw new IllegalStateException("isRuntime()==false");
  }

  @Override
  public boolean matches(Class<?> clazz) {
    // when using DAOs simply change to some class like ApplicationDao
    return DefaultRepository.class.isAssignableFrom(clazz);
  }
}

Managing and granting the data-permissions

Following our authorization guide we can simply create a permission for each country. We might simply reserve a prefix (as virtual «app-id») for each data-permission to allow granting data-permissions to end-users across all applications of the IT landscape. In our example we could create access controls country.DE, country.US, country.ES, etc. and assign those to the users. The method permissionChecker.getPermittedCountriesForReading() would then scan for these access controls and only return the 2-letter country code from it.
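The scanning described for getPermittedCountriesForReading() can be sketched like this; it is a plain-Java illustration of the idea, not the devon4j PermissionChecker API:

```java
import java.util.*;
import java.util.stream.Collectors;

public class CountryPermissions {

  private static final String COUNTRY_PREFIX = "country.";

  // scan the user's access controls for the reserved "country." prefix
  // and return the contained 2-letter country codes
  public static Collection<String> permittedCountries(Collection<String> accessControls) {
    return accessControls.stream()
        .filter(ac -> ac.startsWith(COUNTRY_PREFIX))
        .map(ac -> ac.substring(COUNTRY_PREFIX.length()))
        .collect(Collectors.toSet());
  }

  public static void main(String[] args) {
    System.out.println(permittedCountries(List.of("country.DE", "country.US", "shop.Admin")));
  }
}
```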

Before you make your decisions how to design your access controls please clarify the following questions:
  • Do you need to separate data-permissions independent of the functional permissions? E.g. may it be required to express that a user can read data from the countries ES and PL but is only permitted to modify data from PL? In such case a single assignment of "country-permissions" to users is insufficient.

  • Do you want to grant data-permissions individually for each application (higher flexibility and complexity) or for the entire application landscape (simplicity, better maintenance for administrators)? In case of the first approach you would rather have access controls prefixed with the individual «app-id» of each application.

  • Do your data-permissions depend on objects that can be created dynamically inside your application?

  • If you want to grant data-permissions on other business objects (entities), how do you want to reference them (primary keys, business keys, etc.)? What reference is most stable? Which is most readable?

JSON Web Token (JWT)

JWT is an open standard (RFC 7519) for creating JSON-based access tokens that assert a number of claims.

Figure: JWT flow

For more information about JWT click here.
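A JWT consists of three Base64URL-encoded segments separated by dots (header.payload.signature). As a plain-JDK sketch, the first two segments can be decoded like this (the sample token below is constructed for illustration and carries no real signature):

```java
import java.util.Base64;

public class JwtDecode {

  // decode the header and payload segments of a JWT (the signature is left as-is)
  public static String[] decode(String jwt) {
    String[] parts = jwt.split("\\.");
    Base64.Decoder decoder = Base64.getUrlDecoder();
    return new String[] {
        new String(decoder.decode(parts[0])), // header, e.g. {"alg":"HS256"}
        new String(decoder.decode(parts[1]))  // payload with the claims
    };
  }

  public static void main(String[] args) {
    String jwt = "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJqb2huIn0.sig";
    for (String segment : decode(jwt)) {
      System.out.println(segment); // {"alg":"HS256"} and {"sub":"john"}
    }
  }
}
```

Decoding is of course not verification: the signature must always be validated against the configured key before trusting any claim.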


A KeyStore is a repository of certificates and keys (public key, private key, or secret key). They can be used for TLS transport, for encryption and decryption as well as for signing. For demonstration you might create a keystore with openssl using the following commands:

openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365
openssl pkcs12 -export -in cert.pem -inkey key.pem -out example.p12

For Java tooling you may also try the following instead:

keytool -genkeypair -alias devonfw -keypass "password" -storetype PKCS12 -keyalg RSA -keysize 4096 -storepass "password" -keystore keystore.pkcs

Please use reasonable passwords instead of password, which should be obvious. Also for the alias the value devonfw is just an example.

To use the JWT support from devon4j you have to add the following required dependency.


The following properties need to be configured in your application.properties file:

# location of the keystore file, can be any spring resource (such as file or classpath URIs)
# type of keystore e.g. "PKCS12" (recommended), "JKS", or "JCEKS"
# password the keystore is secured with. Consider using password encryption as described in devon4j configuration guide
# the algorithm for encryption/decryption and signing - see io.jsonwebtoken.SignatureAlgorithm
# alias of public/private key in keystore (for validation only public key is used, for creation private key is required)
# the following properties are used if you are validating JWTs (e.g. via JwtAuthenticationFilter)
# the following properties are only used if you are issuing JWTs (e.g. via JwtLoginFilter)
# the following properties enable backward compatibility for devon4j <= 2021.04.002
# (microprofile JWT is used by default since 2021.04.003)

See also JwtConfigProperties for details about configuration.

Authentication with JWT via OAuth

The authentication with JWT via OAuth (HTTP header), will happen via JwtAuthenticationFilter that is automatically added by devon4j-starter-security-jwt via JwtAutoConfiguration. With the starter and auto-configuration we want to make it as easy as possible for you. In case you would like to build a server app that e.g. wants to issue JWTs but does not allow authentication via JWT itself, you can use devon4j-security-jwt as dependency instead of the starter and do the spring config yourself (pick and choose from JwtAutoConfiguration).

To do this, you need to add the following changes in your BaseWebSecurityConfig:

  @Bean
  public JwtAuthenticationFilter getJwtAuthenticationFilter() {
    return new JwtAuthenticationFilter();
  }

  @Override
  public void configure(HttpSecurity http) throws Exception {
    // ...
    // add this line to the end of this existing method
    http.addFilterBefore(getJwtAuthenticationFilter(), UsernamePasswordAuthenticationFilter.class);
  }

Login with Username and Password to get JWT

To allow a client to login with username and password to get a JWT for subsequent requests, you need to make the following changes in your BaseWebSecurityConfig:

  @Bean
  public JwtLoginFilter getJwtLoginFilter() throws Exception {

    JwtLoginFilter jwtLoginFilter = new JwtLoginFilter("/login");
    return jwtLoginFilter;
  }

  @Override
  public void configure(HttpSecurity http) throws Exception {
    // ...
    // add this line to the end of this existing method
    http.addFilterBefore(getJwtLoginFilter(), UsernamePasswordAuthenticationFilter.class);
  }

Authentication with Kafka

Authentication with JWT and Kafka is explained in the Kafka guide.

Cross-site request forgery (CSRF)

CSRF is a type of malicious exploit of a web application that allows an attacker to induce users to perform actions that they do not intend to perform.


More details about CSRF can be found in the OWASP documentation.

Secure devon4j server against CSRF

In case your devon4j server application is not accessed by browsers or the web-client is using JWT based authentication, you are already safe regarding CSRF. However, if your application is accessed from a browser and you are using form based authentication (with session cookie) or basic authentication, you need to enable CSRF protection. This guide will tell you how to do this.


To secure your devon4j application against CSRF attacks, you only need to add the following dependency:


Starting with devon4j version 2020.12.001 application template, this is all you need to do. However, if you have started from an older version or you want to understand more, please read on.

Pluggable web-security

To enable pluggable security via devon4j security starters you need to apply WebSecurityConfigurer to your BaseWebSecurityConfig (your class extending spring-boot’s WebSecurityConfigurerAdapter) as follows:

  @Inject
  private WebSecurityConfigurer webSecurityConfigurer;

  @Override
  public void configure(HttpSecurity http) throws Exception {
    // disable CSRF protection by default, use csrf starter to override.
    http = http.csrf().disable();
    // apply pluggable web-security from devon4j security starters
    http = this.webSecurityConfigurer.configure(http);
  }

Custom CsrfRequestMatcher

If you want to customize which HTTP requests will require a CSRF token, you can implement your own CsrfRequestMatcher and provide it to the devon4j CSRF protection via qualified injection as follows:

@Named("CsrfRequestMatcher")
public class CsrfRequestMatcher implements RequestMatcher {

  @Override
  public boolean matches(HttpServletRequest request) {
    ...
  }
}

Please note that the exact name (@Named("CsrfRequestMatcher")) is required here to ensure your custom implementation will be injected properly.


With the devon4j-starter-security-csrf the CsrfRestService gets integrated into your app. It provides an operation to get the CSRF token via an HTTP GET request. The URL path to retrieve this CSRF token is services/rest/csrf/v1/token. As a result you will get a JSON like the following:


The token value is a strong random value that will differ for each user session. It has to be sent with subsequent HTTP requests (when the method is other than GET) in the specified header (X-CSRF-TOKEN).
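For illustration, such a strong random token can be generated with plain JDK means. This is a sketch of the principle only, not the actual spring-security implementation:

```java
import java.security.SecureRandom;
import java.util.Base64;

public class CsrfTokenGenerator {

  // 256 bits of cryptographically strong randomness, URL-safe encoded
  public static String newToken() {
    byte[] bytes = new byte[32];
    new SecureRandom().nextBytes(bytes);
    return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
  }

  public static void main(String[] args) {
    // prints a different 43-character token on every run
    System.out.println(newToken());
  }
}
```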

How it works

Putting it all together, a browser client should call the CsrfRestService after successful login to receive the current CSRF token. With every subsequent HTTP request (other than GET) the client has to send this token in the according HTTP header. Otherwise the server will reject the request to prevent CSRF attacks. An attacker might make your browser perform HTTP requests towards your devon4j application backend via <image> elements, <iframes>, etc. Your browser will then still include your session cookie if you are already logged in (e.g. from another tab). However, in case the attacker wants to trigger DELETE or POST requests that make your browser perform changes in the application (delete or update data, etc.), this will fail without the CSRF token. The attacker may make your browser retrieve the CSRF token, but he will not be able to retrieve the result and put it into the header of other requests due to the same-origin-policy. This way your application is secured against CSRF attacks.

Configure devon4ng client for CSRF

Devon4ng client configuration for CSRF is described here


Validation is about checking the syntax and semantics of input data. Invalid data is rejected by the application. Therefore validation is required in multiple places of an application. E.g. the GUI will validate input for usability reasons, to assist the user with early feedback and to prevent unnecessary server requests. On the server-side validation has to be done for consistency and security.

In general we distinguish these forms of validation:

  • stateless validation will produce the same result for given input at any time (for the same code/release).

  • stateful validation is dependent on other states and can consider the same input data as valid in one case and as invalid in another.

Stateless Validation

For regular, stateless validation we use the JSR303 standard that is also called bean validation (BV). Details can be found in the specification. As implementation we recommend hibernate-validator.


A description of how to enable BV can be found in the relevant Spring documentation. For a quick summary follow these steps:

  • Make sure that hibernate-validator is located in the classpath by adding a dependency to the pom.xml.

  • Add the @Validated annotation to the implementation (spring bean) to be validated. The standard use case is to annotate the logic layer implementation, i.e. the use case implementation or component facade in case of simple logic layer pattern. Thus, the validation will be executed for service requests as well as batch processing. For methods to validate go to their declaration and add constraint annotations to the method parameters.

    • @Valid annotation to the arguments to validate (if that class itself is annotated with constraints to check).

    • @NotNull for required arguments.

    • Other constraints (e.g. @Size) for generic arguments (e.g. of type String or Integer). However, consider to create custom datatypes and avoid adding too much validation logic (especially redundant in multiple places).

public class BookingmanagementRestServiceImpl implements BookingmanagementRestService {

  @Override
  public BookingEto saveBooking(@Valid BookingCto booking) {
    ...
  }

  • Finally add appropriate validation constraint annotations to the fields of the ETO class.

  @Valid
  private BookingEto booking;

Listing 9. Constraint annotation on an ETO field

  @NotNull
  private Timestamp bookingDate;

A list with all bean validation constraint annotations available for hibernate-validator can be found here. In addition it is possible to configure custom constraints. Therefore it is necessary to implement an annotation and a corresponding validator. A description can also be found in the Spring documentation or with more details in the hibernate documentation.

Bean Validation in Wildfly >v8: Wildfly v8 is the first version of Wildfly implementing the JEE7 specification. It comes with bean validation based on hibernate-validator out of the box. In case someone is running Spring in Wildfly for whatever reasons, the spring based annotation @Validated would duplicate bean validation at runtime and thus should be omitted.


Cross-Field Validation

BV has poor support for this. Best practice is to create and use beans for ranges, etc. that solve this. A bean for a range could look like this:

public class Range<V extends Comparable<V>> {

  private V min;
  private V max;

  public Range(V min, V max) {

    if ((min != null) && (max != null)) {
      int delta = min.compareTo(max);
      if (delta > 0) {
        throw new ValueOutOfRangeException(null, min, min, max);
      }
    }
    this.min = min;
    this.max = max;
  }

  public V getMin() { ... }

  public V getMax() { ... }
}

Stateful Validation

For complex and stateful business validations we do not use BV (possible with groups and context, etc.) but follow KISS and just implement this on the server in a straight forward manner. An example is the deletion of a table in the example application. Here the state of the table must be checked first:

  private void sendConfirmationEmails(BookingEntity booking) {

    if (!booking.getInvitedGuests().isEmpty()) {
      for (InvitedGuestEntity guest : booking.getInvitedGuests()) {
        sendInviteEmailToGuest(guest, booking);
      }
    }
  }


Implementing this small check with BV would be a lot more effort.

Aspect Oriented Programming (AOP)

AOP is a powerful feature for cross-cutting concerns. However, if used extensively and for the wrong things, an application can get unmaintainable. Therefore we give you best practices on where and how to use AOP properly.

AOP Key Principles

We follow these principles:

  • We use spring AOP based on dynamic proxies (and fallback to cglib).

  • We avoid AspectJ and other mighty and complex AOP frameworks whenever possible.

  • We only use AOP where we consider it as necessary (see below).

AOP Usage

We recommend to use AOP with care but we consider it established for the following cross cutting concerns:

AOP Debugging

When using AOP with dynamic proxies, debugging your code can get nasty. As you can see from the red boxes in the call stack in the debugger, there is a lot of magic happening while you often just want to step directly into the implementation, skipping all the AOP clutter. When using Eclipse this can easily be achieved by enabling step filters. To do so, you have to enable the feature in the Eclipse tool bar (highlighted in red).

AOP debugging

In order to properly make this work you need to ensure that the step filters are properly configured:

Step Filter Configuration

Ensure you have at least the following step-filters configured and active:


Exception Handling

Exception Principles

For exceptions we follow these principles:

  • We only use exceptions for exceptional situations and not for controlling program flow, etc. Creating an exception in Java is expensive and hence you should not do it just for testing if something is present, valid or permitted. In the latter case, design your API to return this as a regular result.

  • We use unchecked exceptions (RuntimeException) [2]

  • We distinguish internal exceptions and user exceptions:

    • Internal exceptions have technical reasons. For unexpected and exotic situations it is sufficient to throw existing exceptions such as IllegalStateException. For common scenarios a dedicated exception class is reasonable.

    • User exceptions contain a message explaining the problem for end users. Therefore we always define our own exception classes with a clear, brief but detailed message.

  • Our own exceptions derive from an exception base class supporting

All this is offered by mmm-util-core, which we propose as a solution.

Exception Example

Here is an exception class from our sample application:

public class IllegalEntityStateException extends ApplicationBusinessException {

  private static final long serialVersionUID = 1L;

  public IllegalEntityStateException(Object entity, Object state) {

    this((Throwable) null, entity, state);
  }

  public IllegalEntityStateException(Object entity, Object currentState, Object newState) {

    this(null, entity, currentState, newState);
  }

  public IllegalEntityStateException(Throwable cause, Object entity, Object state) {

    super(cause, createBundle(NlsBundleApplicationRoot.class).errorIllegalEntityState(entity, state));
  }

  public IllegalEntityStateException(Throwable cause, Object entity, Object currentState, Object newState) {

    super(cause, createBundle(NlsBundleApplicationRoot.class).errorIllegalEntityStateChange(entity, currentState,
        newState));
  }
}

The message templates are defined in the interface NlsBundleApplicationRoot as follows:

public interface NlsBundleApplicationRoot extends NlsBundle {

  @NlsBundleMessage("The entity {entity} is in state {state}!")
  NlsMessage errorIllegalEntityState(@Named("entity") Object entity, @Named("state") Object state);

  @NlsBundleMessage("The entity {entity} in state {currentState} can not be changed to state {newState}!")
  NlsMessage errorIllegalEntityStateChange(@Named("entity") Object entity, @Named("currentState") Object currentState,
      @Named("newState") Object newState);

  @NlsBundleMessage("The property {property} of object {object} can not be changed!")
  NlsMessage errorIllegalPropertyChange(@Named("object") Object object, @Named("property") Object property);

  @NlsBundleMessage("There is currently no user logged in")
  NlsMessage errorNoActiveUser();
}

Handling Exceptions

For catching and handling exceptions we follow these rules:

  • We do not catch exceptions just to wrap or to re-throw them.

  • If we catch an exception and throw a new one, we always have to provide the original exception as cause to the constructor of the new exception.

  • At the entry points of the application (e.g. a service operation) we have to catch and handle all throwables. This is done via the exception-facade-pattern via an explicit facade or aspect. The devon4j-rest module already provides ready-to-use implementations for this such as RestServiceExceptionFacade. The exception facade has to:

    • log all errors (user errors on info and technical errors on error level)

    • ensure the entire exception is passed to the logger (not only the message) so that the logger can capture the entire stacktrace and the root cause is not lost.

    • convert the error to a result appropriate for the client and safe regarding sensitive data exposure. Especially for security exceptions, only a generic security error code or message may be revealed; the details shall only be logged but never exposed to the client. All internal exceptions are converted to a generic error with a message like:

      An unexpected technical error has occurred. We apologize for any inconvenience. Please try again later.
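The core mapping logic of such a facade can be sketched as follows (simplified stand-in classes, not the actual RestServiceExceptionFacade API): business exceptions expose their user message, everything else collapses to a generic error.

```java
public class ExceptionFacadeDemo {

  // stand-in for the business exception base class from the exception guide
  static class ApplicationBusinessException extends RuntimeException {
    ApplicationBusinessException(String message) {
      super(message);
    }
  }

  static final String GENERIC_ERROR =
      "An unexpected technical error has occurred. We apologize for any inconvenience. Please try again later.";

  // what the facade conceptually does when converting an error for the client
  static String toClientMessage(Throwable error) {
    if (error instanceof ApplicationBusinessException) {
      // user error: message is written for end users and safe to expose (logged on info level)
      return error.getMessage();
    }
    // technical error: log the full stacktrace on error level, expose only a generic message
    return GENERIC_ERROR;
  }

  public static void main(String[] args) {
    System.out.println(toClientMessage(new ApplicationBusinessException("The entity table-1 is in state OCCUPIED!")));
    System.out.println(toClientMessage(new IllegalStateException("connection pool exhausted")));
  }
}
```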

Common Errors

The following errors may occur in any devon application:

Table 37. Common Exceptions
Code Message Link


An unexpected error has occurred! We apologize for any inconvenience. Please try again later.


«original message of the cause»


Internationalization (I18N) is about writing code independent from locale-specific information. For I18N of text messages we are suggesting mmm native-language-support.

In devonfw we have developed a solution to manage text internationalization. The devonfw solution consists of two aspects:

  • Bind locale information to the user.

  • Get the messages in the current user locale.

Binding locale information to the user

We have defined two different points to bind locale information to the user, depending on whether the user is authenticated or not.

  • User not authenticated: devonfw intercepts the unsecured request and extracts the locale from it. At first, we try to extract a language parameter from the request and, if that is not possible, we extract the locale from the Accept-Language header.

  • User authenticated: During the login process, application developers are responsible for filling the language parameter in the UserProfile class. This language parameter could be obtained from a database, LDAP, the request, etc. In the devonfw sample we get the locale information from the database.

This image shows the entire process:
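For the unauthenticated case, the negotiation against the Accept-Language header can be sketched with plain JDK means (the header value and the supported locales are hypothetical):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Locale;

public class LocaleNegotiationDemo {

  public static void main(String[] args) {
    // languages the application actually provides bundles for
    List<Locale> supported = Arrays.asList(Locale.ENGLISH, Locale.GERMAN);
    // value as it would arrive in the Accept-Language header of an unsecured request
    String acceptLanguage = "de-DE,de;q=0.9,en;q=0.8";
    // pick the best supported match according to the client's priorities
    Locale locale = Locale.lookup(Locale.LanguageRange.parse(acceptLanguage), supported);
    System.out.println(locale);
  }
}
```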

Getting internationalized messages

devonfw has a bean that manages i18n message resolution: the ApplicationLocaleResolver. This bean is responsible for getting the current user, extracting the locale information from it and reading the correct properties file to get the message.

The i18n properties file must follow the naming convention where la=language and CO=country. This is an example of an i18n properties file for the English language to translate the devonfw sample user roles:


You should define such a file for every language that your application needs.

The ApplicationLocaleResolver bean is injected into the AbstractComponentFacade class, so this bean is available in the logic layer and you only need the following code to get an internationalized message:

String msg = getApplicationLocaleResolver().getMessage("mymessage");
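What happens behind getMessage can be sketched with the JDK's ResourceBundle (the bundles are inlined here instead of per-language properties files; all keys and values are hypothetical):

```java
import java.io.StringReader;
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;
import java.util.PropertyResourceBundle;
import java.util.ResourceBundle;

public class MessageResolutionDemo {

  public static void main(String[] args) throws Exception {
    // one bundle per language, inlined for this sketch
    Map<Locale, ResourceBundle> bundles = new HashMap<>();
    bundles.put(Locale.ENGLISH, new PropertyResourceBundle(new StringReader("role.waiter=Waiter")));
    bundles.put(Locale.GERMAN, new PropertyResourceBundle(new StringReader("role.waiter=Kellner")));

    // resolve the current user's locale, then read the message from the matching bundle
    Locale userLocale = Locale.GERMAN;
    ResourceBundle bundle = bundles.getOrDefault(userLocale, bundles.get(Locale.ENGLISH));
    System.out.println(bundle.getString("role.waiter"));
  }
}
```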


XML (Extensible Markup Language) is a W3C standard format for structured information. It has a large eco-system of additional standards and tools.

In Java there are many different APIs and frameworks for accessing, producing and processing XML. For the devonfw we recommend to use JAXB for mapping Java objects to XML and vice-versa. Further there is the popular DOM API for reading and writing smaller XML documents directly. When processing large XML documents StAX is the right choice.


We use JAXB to serialize Java objects to XML or vice-versa.

JAXB and Inheritance

Use @XmlSeeAlso annotation to provide sub-classes. See section "Collective Polymorphism" described here.

JAXB Custom Mapping

In order to map custom datatypes or other types that do not follow the Java bean conventions, you need to define a custom mapping. If you create dedicated objects for the XML mapping you can easily avoid such situations. When this is not suitable use @XmlJavaTypeAdapter and provide an XmlAdapter implementation that handles the mapping. For details see here.


To prevent XML External Entity attacks, follow JAXP Security Guide and enable FSP.


JSON (JavaScript Object Notation) is a popular format to represent and exchange data especially for modern web-clients. For mapping Java objects to JSON and vice-versa there is no official standard API. We use the established and powerful open-source solution Jackson. Due to problems with the wiki of fasterxml you should try this alternative link: Jackson/AltLink.

Configure JSON Mapping

In order to avoid polluting business objects with proprietary Jackson annotations (e.g. @JsonTypeInfo, @JsonSubTypes, @JsonProperty) we propose to create a separate configuration class. Every devonfw application (sample or any app created from our app-template) therefore has a class called ApplicationObjectMapperFactory that extends ObjectMapperFactory from the devon4j-rest module. It looks like this:

public class ApplicationObjectMapperFactory extends ObjectMapperFactory {

  public ApplicationObjectMapperFactory() {
    // JSON configuration code goes here
  }
}

JSON and Inheritance

If you are using inheritance for your objects mapped to JSON, then polymorphism can not be supported out of the box. So, in general, avoid polymorphic objects in JSON mapping. However, this is not always possible. Have a look at the following example from our sample application:

inheritance class diagram
Figure 7. Transfer-Objects using Inheritance

Now assume you have a REST service operation as Java method that takes a ProductEto as argument. As this is an abstract class the server needs to know the actual sub-class to instantiate. We typically do not want to specify the classname in the JSON as this should be an implementation detail and not part of the public JSON format (e.g. in case of a service interface). Therefore we use a symbolic name for each polymorphic subtype that is provided as virtual attribute @type within the JSON data of the object:

{ "@type": "Drink", ... }

Therefore you add configuration code to the constructor of ApplicationObjectMapperFactory. Here you can see an example from the sample application:

setBaseClasses(ProductEto.class);
addSubtypes(new NamedType(MealEto.class, "Meal"), new NamedType(DrinkEto.class, "Drink"),
    new NamedType(SideDishEto.class, "SideDish"));

We use setBaseClasses to register all top-level classes of polymorphic objects. Further we declare all concrete polymorphic sub-classes together with their symbolic name for the JSON format via addSubtypes.

Custom Mapping

In order to map custom datatypes or other types that do not follow the Java bean conventions, you need to define a custom mapping. If you create objects dedicated for the JSON mapping you can easily avoid such situations. When this is not suitable follow these instructions to define the mapping:

  1. As an example, the use of JSR354 (javax.money) is appreciated in order to process monetary amounts properly. However, without custom mapping, the default mapping of Jackson will produce the following JSON for a MonetaryAmount:

    {
      "currency": {"defaultFractionDigits":2, "numericCode":978, "currencyCode":"EUR"},
      "monetaryContext": {...},
      "factory": {...}
    }

    As can clearly be seen, the JSON contains too much information and reveals implementation secrets that do not belong here. Instead the JSON output expected and desired would be:


    Even worse, when we send the JSON data to the server, Jackson will see that MonetaryAmount is an interface and does not know how to instantiate it so the request will fail. Therefore we need a customized Serializer.

  2. We implement MonetaryAmountJsonSerializer to define how a MonetaryAmount is serialized to JSON:

    public final class MonetaryAmountJsonSerializer extends JsonSerializer<MonetaryAmount> {

      public static final String NUMBER = "amount";
      public static final String CURRENCY = "currency";

      @Override
      public void serialize(MonetaryAmount value, JsonGenerator jgen, SerializerProvider provider) throws IOException {
        if (value != null) {
          jgen.writeStartObject();
          jgen.writeStringField(CURRENCY, value.getCurrency().getCurrencyCode());
          jgen.writeNumberField(NUMBER, value.getNumber().numberValue(BigDecimal.class));
          jgen.writeEndObject();
        }
      }
    }

    For composite datatypes it is important to wrap the info as an object (writeStartObject() and writeEndObject()). MonetaryAmount provides the information we need by the getCurrency() and getNumber(). So that we can easily write them into the JSON data.

  3. Next, we implement MonetaryAmountJsonDeserializer to define how a MonetaryAmount is deserialized back as Java object from JSON:

    public final class MonetaryAmountJsonDeserializer extends AbstractJsonDeserializer<MonetaryAmount> {

      @Override
      protected MonetaryAmount deserializeNode(JsonNode node) {
        BigDecimal number = getRequiredValue(node, MonetaryAmountJsonSerializer.NUMBER, BigDecimal.class);
        String currencyCode = getRequiredValue(node, MonetaryAmountJsonSerializer.CURRENCY, String.class);
        MonetaryAmount monetaryAmount =
            Monetary.getDefaultAmountFactory().setNumber(number).setCurrency(currencyCode).create();
        return monetaryAmount;
      }
    }

    For composite datatypes we extend from AbstractJsonDeserializer as this makes our task easier. So we already get a JsonNode with the parsed payload of our datatype. Based on this API it is easy to retrieve individual fields from the payload without taking care of their order, etc. AbstractJsonDeserializer also provides methods such as getRequiredValue to read required fields and get them converted to the desired basic datatype. So we can easily read the amount and currency and construct an instance of MonetaryAmount via the official factory API.

  4. Finally, we need to register our custom (de)serializers with the following configuration code in the constructor of ApplicationObjectMapperFactory:

  SimpleModule module = getExtensionModule();
  module.addDeserializer(MonetaryAmount.class, new MonetaryAmountJsonDeserializer());
  module.addSerializer(MonetaryAmount.class, new MonetaryAmountJsonSerializer());

Now we can read and write MonetaryAmount from and to JSON as expected.


REST (REpresentational State Transfer) is an inter-operable protocol for services that is more lightweight than SOAP. However, it is no real standard and can cause confusion (see REST philosophy). Therefore we define best practices here to guide you.


URLs are not case sensitive. Hence, we follow the best practice to use only lower-case-letters-with-hyphens-to-separate-words. For operations in REST we distinguish the following types of URLs:

  • A collection URL is built from the REST service URL by appending the name of a collection. This is typically the name of an entity. Such a URL identifies the entire collection of all elements of this type. Example:

  • An element URL is built from a collection URL by appending an element ID. It identifies a single element (entity) within the collection. Example:

This fits perfectly for CRUD operations. For business operations (processing, calculation, advanced search, etc.) we simply append the name of the business operation to a collection URL. Then we can POST the input for the business operation and get the result back. Example:
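The three URL types can be sketched as follows (the base URL and resource names are hypothetical):

```java
public class RestUrlDemo {

  public static void main(String[] args) {
    // hypothetical base URL of a REST service
    String service = "https://mycompany.com/myapp/services/rest/imagemanagement/v1";
    String collectionUrl = service + "/image";       // collection URL: all images
    String elementUrl = collectionUrl + "/42";       // element URL: the image with ID 42
    String operationUrl = collectionUrl + "/search"; // business operation: POST the search criteria here
    System.out.println(collectionUrl);
    System.out.println(elementUrl);
    System.out.println(operationUrl);
  }
}
```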

HTTP Methods

The following table defines the HTTP methods (verbs) and their meaning:

Table 38. Usage of HTTP methods

HTTP Method  Meaning
GET          Read data (stateless).
PUT          Create or update data.
POST         Process data.
DELETE       Delete an entity.

Please also note that for (large) bulk deletions you may be forced to use POST instead of DELETE, as according to the HTTP standard DELETE must not have a payload and URLs are limited in length.

For general recommendations on HTTP methods for collection and element URLs see REST@wikipedia.

HTTP Status Codes

Further we define how to use the HTTP status codes for REST services properly. In general the 4xx codes correspond to an error on the client side and the 5xx codes to an error on the server side.

Table 39. Usage of HTTP status codes

HTTP Code  Meaning       Response           Comment
200        OK            requested result   Result of successful GET
204        No Content    -                  Result of successful POST, DELETE, or PUT with empty result (void return)
400        Bad Request   error details      The HTTP request is invalid (parse error, validation failed)
401        Unauthorized  -                  Authentication failed
403        Forbidden     -                  Authorization failed
404        Not Found     -                  Either the service URL is wrong or the requested resource does not exist
500        Server Error  error code, UUID   Internal server error occurred; in case of an exception, see REST exception handling


For implementing REST services we use the JAX-RS standard. As payload encoding we recommend JSON bindings using Jackson. To implement a REST service you simply add JAX-RS annotations. Here is a simple example:

@Path("/imagemanagement")
public class ImagemanagementRestService {

  @Inject
  private Imagemanagement imagemanagement;

  @GET
  @Path("/image/{id}")
  @Produces(MediaType.APPLICATION_JSON)
  public ImageDto getImage(@PathParam("id") long id) {

    return this.imagemanagement.findImage(id);
  }
}

Here we can see a REST service for the business component imagemanagement. The method getImage can be accessed via HTTP GET (see @GET) under the URL path imagemanagement/image/{id} (see @Path annotations), where {id} is the ID of the requested image; it will be extracted from the URL and provided as parameter id to the method getImage. The method will return its result (ImageDto) as JSON (see @Produces annotation - you can also extend the RestService marker interface that defines these annotations for JSON). As you can see, it delegates to the logic component imagemanagement that contains the actual business logic while the service itself only exposes this logic via HTTP. The REST service implementation is a regular CDI bean that can use dependency injection.

With JAX-RS it is important to make sure that each service method is annotated with the proper HTTP method (@GET,@POST,etc.) to avoid unnecessary debugging. So you should take care not to forget to specify one of these annotations.

You may also separate API and implementation in case you want to reuse the API for service-client:

public interface ImagemanagementRestService {

  @GET
  @Path("/image/{id}")
  ImageEto getImage(@PathParam("id") long id);
}


public class ImagemanagementRestServiceImpl implements ImagemanagementRestService {

  @Inject
  private Imagemanagement imagemanagement;

  @Override
  public ImageEto getImage(long id) {

    return this.imagemanagement.findImage(id);
  }
}

JAX-RS Configuration

Starting from CXF 3.0.0 it is possible to enable the auto-discovery of JAX-RS roots.

When the JAX-RS server is instantiated, all scanned root and provider beans are configured.

REST Exception Handling

For exceptions, a service needs to have an exception façade that catches all exceptions and handles them by writing proper log messages and mapping them to an HTTP response with an according HTTP status code. Therefore devonfw provides a generic solution via RestServiceExceptionFacade. You need to follow the exception guide so that it works out of the box, because the façade needs to be able to distinguish between business and technical exceptions. Your service may then throw exceptions and the façade will automatically handle them for you.

The general format for returning an error to the client is as follows:

  "message": "A human-readable message describing the error",
  "code": "A code identifying the concrete error",
  "uuid": "An identifier (generally the correlation id) to help identify corresponding requests in logs"
Pagination details

We recommend to use spring-data repositories for database access, which already come with pagination support. Therefore, when performing a search, you can include a Pageable object. Here is a JSON example for it:

{ "pageSize": 20, "pageNumber": 0, "sort": [] }

By increasing the pageNumber the client can browse and page through the hits.

As a result you will receive a Page. It is a container for your search results just like a Collection but additionally contains pagination information for the client. Here is a JSON example:

{ "totalElements": 1022,
  pageable: { "pageSize": 20, "pageNumber": 0 },
  content: [ ... ] }

The totalElements property contains the total number of hits. This can be used by the client to compute the total number of pages and render the pagination links accordingly. Via the pageable property the client gets back the Pageable properties from the search request. The actual hits for the current page are returned as array in the content property.
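The page computation on the client side boils down to a simple rounding-up division, sketched here with the numbers from the example above:

```java
public class PaginationDemo {

  public static void main(String[] args) {
    long totalElements = 1022; // from the "totalElements" property of the Page
    int pageSize = 20;         // from the "pageable" property
    long totalPages = (totalElements + pageSize - 1) / pageSize; // round up
    System.out.println(totalPages);
  }
}
```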

REST Testing

For testing REST services in general consult the testing guide.

For manual testing REST services there are browser plugins:


Your services are the major entry point to your application. Hence security considerations are important here.


A common security threat is CSRF for REST services. Therefore all REST operations that are performing modifications (PUT, POST, DELETE, etc. - all except GET) have to be secured against CSRF attacks. See CSRF how to do this.

JSON top-level arrays

OWASP earlier suggested to never return JSON arrays at the top-level to prevent attacks, without giving a rationale. We dug deeper and found anatomy-of-a-subtle-json-vulnerability. To sum it up: the attack is many years old and does not work in any recent or relevant browser. Hence, it is fine to use arrays as top-level result in a JSON REST service (meaning you can return List<Foo> in a Java JAX-RS service).


SOAP is a common protocol for services that is rather complex and heavy. It allows to build inter-operable and well specified services (see WSDL). SOAP is transport neutral, which is not only an advantage. We strongly recommend to use HTTPS transport, to ignore additional complex standards like WS-Security, and to use established HTTP standards such as RFC2617 (and RFC5280).


For building web-services with Java we use the JAX-WS standard. There are two approaches:

  • code first

  • contract first

Here is an example in case you define a code-first service.

Web-Service Interface

We define a regular interface to define the API of the service and annotate it with JAX-WS annotations:

public interface TablemanagmentWebService {

  @WebResult(name = "message")
  TableEto getTable(@WebParam(name = "id") String id);
}

Web-Service Implementation

And here is a simple implementation of the service:

@WebService(endpointInterface = "")
public class TablemanagementWebServiceImpl implements TablemanagmentWebService {

  @Inject
  private Tablemanagement tableManagement;

  @Override
  public TableEto getTable(String id) {

    return this.tableManagement.findTable(id);
  }
}

SOAP Custom Mapping

In order to map custom datatypes or other types that do not follow the Java bean conventions, you need to write adapters for JAXB (see XML).

SOAP Testing

For testing SOAP services in general consult the testing guide.

For testing SOAP services manually we strongly recommend SoapUI.

Service Client

This guide is about consuming (calling) services from other applications (micro-services). For providing services see the Service-Layer Guide. Services can be consumed in the client or the server. As the client is typically not written in Java you should consult the according guide for your client technology. In case you want to call a service within your Java code this guide is the right place to get help.


Various solutions already exist for calling services such as RestTemplate from spring or the JAX-RS client API. Further each and every service framework offers its own API as well. These solutions might be suitable for very small and simple projects (with one or two such invocations). However, with the trend of microservices the invocation of a service becomes a very common use-case that occurs all over the place. You typically need a solution that is very easy to use but supports flexible configuration, adding headers for authentication, mapping of errors from server, logging success/errors with duration for performance analysis, support for synchronous and asynchronous invocations, etc. This is exactly what this devon4j service-client solution brings for you.


You need to add (at least one of) these dependencies to your application:

<!-- Starter for asynchronously consuming REST services via Java HTTP Client (Java11+) -->
<!-- Starter for synchronously consuming REST services via Java HTTP Client (Java11+) -->
<!-- Starter for synchronously consuming REST services via Apache CXF (Java8+)
  NOTE: This is an alternative to devon4j-starter-http-client-rest-sync -->
<!-- Starter for synchronously consuming SOAP services via Apache CXF (Java8+) -->

When invoking a service you need to consider many cross-cutting aspects. You might not think about them in the very first place and you do not want to implement them multiple times redundantly. Therefore you should consider using this approach. The following sub-sections list the covered features and aspects:

Simple usage

Assuming you already have a Java interface MyService of the service you want to invoke:


public interface MyService extends RestService {

  MyResult getResult(MyArgs myArgs);

  void deleteEntity(@PathParam("id") long id);
}

Then all you need to do is this:

public class UcMyUseCaseImpl extends MyUseCaseBase implements UcMyUseCase {

  @Inject
  private ServiceClientFactory serviceClientFactory;

  private void callSynchronous(MyArgs myArgs) {
    MyService myService = this.serviceClientFactory.create(MyService.class);
    // call of service over the wire, synchronously blocking until result is received or error occurred
    MyResult myResult = myService.getResult(myArgs);
  }

  private void callAsynchronous(MyArgs myArgs) {
    AsyncServiceClient<MyService> client = this.serviceClientFactory.createAsync(MyService.class);
    // call of service over the wire, will return when the request is sent and invoke handleResult asynchronously, this::handleResult);
  }

  private void handleResult(MyResult myResult) {
    // invoked asynchronously once the result has been received
  }
}

As you can see, both synchronous and asynchronous invocation of a service is very simple and type-safe. Still it is very flexible and powerful (see the following features). The actual call of getResult will technically call the remote service over the wire (e.g. via HTTP) including marshalling the arguments (e.g. converting myArgs to JSON) and unmarshalling the result (e.g. converting the received JSON to myResult).

Asynchronous Invocation of void Methods

If you want to call a service method with void as return type, the type-safe call method can not be used as void methods do not return a result. Therefore you can use the callVoid method as follows:

  private void callAsynchronousVoid(long id) {
    AsyncServiceClient<MyService> client = this.serviceClientFactory.createAsync(MyService.class);
    // call of service over the wire, will return when the request is sent and invoke resultHandler asynchronously
    Consumer<Void> resultHandler = r -> { System.out.println("Response received"); };
    client.callVoid(() -> { client.get().deleteEntity(id); }, resultHandler);
  }

You may also provide null as resultHandler for "fire and forget". However, this will lead to the result being ignored so even in case of an error you will not be notified.


This solution allows a very flexible configuration on the following levels:

  1. Global configuration (defaults)

  2. Configuration per remote service application (microservice)

  3. Configuration per invocation.

A configuration on a deeper level (e.g. 3) overrides the configuration from a higher level (e.g. 1).
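The override semantics can be sketched as a simple map merge (the timeout property names are taken from this guide; the values are only examples):

```java
import java.util.HashMap;
import java.util.Map;

public class ConfigLevelsDemo {

  public static void main(String[] args) {
    // level 1: global defaults
    Map<String, String> defaults = new HashMap<>();
    defaults.put("timeout.connection", "2000");
    defaults.put("timeout.response", "5000");
    // level 2: per remote service application
    Map<String, String> perApp = new HashMap<>();
    perApp.put("timeout.response", "10000");
    // level 3: per invocation
    Map<String, String> perInvocation = new HashMap<>();
    perInvocation.put("timeout.connection", "500");

    // deeper levels override higher levels
    Map<String, String> effective = new HashMap<>(defaults);
    effective.putAll(perApp);
    effective.putAll(perInvocation);
    System.out.println(effective.get("timeout.response"));   // taken from level 2
    System.out.println(effective.get("timeout.connection")); // taken from level 3
  }
}
```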

The configuration on levels 1 and 2 is done via properties (see the configuration guide). For level 1 the prefix service.client.default. is used for properties. Further, for level 2 an application-specific prefix ending in «application». is used, where «application» is the technical name of the application providing the service. This name will automatically be derived from the java package of the service interface (e.g. foo in the MyService interface before) following our packaging conventions. In case these conventions are not met, it will fall back to the fully qualified name of the service interface.

Configuration on level 3 has to be provided as a Map argument to the method ServiceClientFactory.create(Class<S> serviceInterface, Map<String, String> config). The keys of this Map will not use prefixes (such as the ones above). For common configuration parameters a type-safe builder is offered to create such a map via ServiceClientConfigBuilder. E.g. for testing you may want to do:

  this.serviceClientFactory.create(MyService.class,
      new ServiceClientConfigBuilder().authBasic().userLogin(login).userPassword(password).buildMap());

Here is an example of such a configuration block for your application properties:

# authForward: simply forward Authorization header (e.g. with JWT) to remote service
Service Discovery

You do not want to hardwire service URLs in your code, right? Therefore different strategies might apply to discover the URL of the invoked service. This is done internally by an implementation of the interface ServiceDiscoverer. The default implementation simply reads the base URL from the configuration, so you can simply add it to your configuration properties as in the above example.

Assuming your service interface has a fully qualified name following our packaging conventions, the URL would be resolved with the «application» being foo.

Additionally, the URL might use the following variables that will automatically be resolved:

  • ${app} to «application» (useful for default URL)

  • ${type} to the type of the service. E.g. rest in case of a REST service and ws for a SOAP service.

  • ${local.server.port} for the port of your current Java servlet container running the JVM. It should only be used for testing with the spring-boot random port mechanism (technically spring can not resolve this variable but we do it for you here).
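The variable substitution itself is plain string replacement, sketched here with a hypothetical URL template:

```java
public class ServiceUrlDemo {

  public static void main(String[] args) {
    // hypothetical default URL template using the supported variables
    String template = "https://api.company.com/${app}/services/${type}";
    String app = "foo";   // derived from the java package of the service interface
    String type = "rest"; // REST service ("ws" for SOAP)
    String url = template.replace("${app}", app).replace("${type}", type);
    System.out.println(url);
  }
}
```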

Therefore, the default URL may also be configured as:


As you can use any implementation of ServiceDiscoverer, you can also easily use eureka (or anything else) instead to discover your services. However, we recommend to use istio instead as described below.


A very common demand is to tweak (HTTP) headers in the request to invoke the service, be it for security (authentication data) or for other cross-cutting concerns (such as the Correlation ID). This is done internally by implementations of the interface ServiceHeaderCustomizer. We already provide several implementations such as:

  • ServiceHeaderCustomizerBasicAuth for basic authentication (auth=basic).

  • ServiceHeaderCustomizerOAuth for OAuth: passes a security token from security context such as a JWT via OAuth (auth=oauth).

  • ServiceHeaderCustomizerAuthForward forwards the Authorization HTTP header from the running request to the request to the remote service as is (auth=authForward). Be careful to avoid security pitfalls by misconfiguring this feature, as it may also forward sensitive credentials (e.g. basic auth) to the remote service. Never use it as default.

  • ServiceHeaderCustomizerCorrelationId passes the Correlation ID to the service request.

Additionally, you can add further custom implementations of ServiceHeaderCustomizer for your individual requirements and additional headers.


You can configure timeouts in a very flexible way. First of all you can configure timeouts to establish the connection (timeout.connection) and to wait for the response (timeout.response) separately. These timeouts can be configured on all three levels as described in the configuration section above.
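As a sketch, such a timeout configuration could look like this in your properties (the service.client.default. prefix and the timeout keys are from this guide; the values are only examples):

```properties
# level 1: global defaults for all service clients (values are examples)
service.client.default.timeout.connection=2000
service.client.default.timeout.response=10000
```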

Error Handling

Whilst invoking a remote service an error may occur. This solution will automatically handle such errors and map them to a higher level ServiceInvocationFailedException. In general we separate two different types of errors:

  • Network error
    In such a case (host not found, connection refused, timeout, etc.) there is not even a response from the server. However, instead of a low-level exception you will get a wrapped ServiceInvocationFailedException (with code ServiceInvoke) with a readable message containing the service that could not be invoked.

  • Service error
    In case the service failed on the server-side the error result will be parsed and thrown as a ServiceInvocationFailedException with the received message and code.

This allows to catch and handle errors when a service invocation failed. You can even distinguish business errors from the server side from technical errors and implement retry strategies or the like. Further, the created exception contains detailed contextual information about the service that failed (service interface class, method, URL), which makes it much easier to trace down errors. Here is an example from our tests:

While invoking the service[http://localhost:50178/app/services/rest/my-example/v1/business-error] the following error occurred: Test of business error. Probably the service is temporary unavailable. Please try again later. If the problem persists contact your system administrator.

You may even provide your own implementation of ServiceClientErrorFactory to use your own exception class for this purpose.

Handling Errors

In case of a synchronous service invocation an error will be thrown immediately, so you can surround the call with a regular try-catch block:

  private void callSynchronous(MyArgs myArgs) {
    MyService myService = this.serviceClientFactory.create(MyService.class);
    // call of service over the wire, synchronously blocking until result is received or error occurred
    try {
      MyResult myResult = myService.myMethod(myArgs);
    } catch (ServiceInvocationFailedException e) {
      if (e.isTechnical()) {
        // technical error (e.g. network problem), see above
      } else {
        // error code you defined in the exception on the server side of the service
        String errorCode = e.getCode();
        handleBusinessError(e, errorCode);
      }
    } catch (Throwable e) { // you may not handle this explicitly here...
    }
  }

If you are using asynchronous service invocation, an error can occur in a separate thread. Therefore you may and should define a custom error handler:

  private void callAsynchronous(MyArgs myArgs) {
    AsyncServiceClient<MyService> client = this.serviceClientFactory.createAsync(MyService.class);
    Consumer<Throwable> errorHandler = this::handleError;
    client.setErrorHandler(errorHandler);
    // call of service over the wire, will return when request is sent and invoke handleResult asynchronously
    client.call(client.get().myMethod(myArgs), this::handleResult);
  }

  private void handleError(Throwable error) {
    // handle the error (e.g. log it) - may be invoked from a separate thread
  }

The error handler consumes Throwable and not only RuntimeException so you can get notified even in case of an unexpected OutOfMemoryError, NoClassDefFoundError, or other technical problems. Please note that the error handler may also be called from the thread calling the service (e.g. if already creating the request fails). The default error handler used if no custom handler is set will only log the error and do nothing else.
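The asynchronous hand-off described above can be illustrated with plain JDK means (no devon4j types involved; AsyncErrorSketch, invokeAsync, and handleError are made-up names for illustration): the invocation runs on another thread and failures are routed to a Consumer<Throwable>, analogous to the registered error handler.

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

public class AsyncErrorSketch {

  // stand-in for state observed by the registered error handler
  static volatile String lastError;

  static void handleError(Throwable error) {
    // a real handler would log the error and possibly trigger a retry
    lastError = error.getMessage();
  }

  // sketch of an async invocation that delivers errors to a handler
  static void invokeAsync(Runnable serviceCall, Consumer<Throwable> errorHandler) {
    CompletableFuture.runAsync(serviceCall)
        .whenComplete((result, error) -> {
          if (error != null) {
            // unwrap the CompletionException added by CompletableFuture
            errorHandler.accept(error.getCause() != null ? error.getCause() : error);
          }
        });
  }

  public static void main(String[] args) throws Exception {
    invokeAsync(() -> {
      throw new IllegalStateException("connection refused");
    }, AsyncErrorSketch::handleError);
    Thread.sleep(500); // wait for the async completion (demo only)
    System.out.println(lastError);
  }
}
```

This also shows why the handler has to be thread-safe: it is invoked from the pool thread completing the future, not from the caller.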


By default this solution will log all invocations including the URL of the invoked service, a success or error status flag, and the duration in seconds (with nano precision where available). Therefore you can easily monitor the status and performance of the service invocations. Here is an example from our tests:

Invoking service[http://localhost:50178/app/services/rest/my-example/v1/greet/John%20Doe%20%26%20%3F%23] took PT20.309756622S (20309756622ns) and succeeded with status 200.

Resilience adds a lot of complexity, and addressing it here would most probably result in guidance that is neither up-to-date nor meeting all requirements. Therefore we recommend something completely different: the sidecar approach (based on the sidecar pattern). This means that you use a generic proxy app that runs as a separate process on the same host, VM, or container as your actual application. In your app you then call the service via the sidecar proxy on localhost (the service discovery URL is e.g. http://localhost:8081/${app}/services/${type}), which then acts as a proxy to the actual remote service. Now aspects such as resilience with circuit breaking and the actual service discovery can be configured in the sidecar proxy app, independent of your actual application. Therefore, you can even share and reuse configuration and experience with such a sidecar proxy app across different technologies (Java, .NET/C#, Node.JS, etc.). Further, you do not pollute the technology stack of your actual app with the infrastructure for resilience, throttling, etc. and can update the app and the sidecar independently when security-fixes are available.

Various implementations of such sidecar proxy apps are available as free open source software. Our recommendation in devonfw is to use istio. This not only provides such a sidecar but also an entire management solution for your service mesh, making administration and maintenance much easier. Platforms like OpenShift support this out of the box.

However, if you are looking for details about sidecar implementations for services, you can have a look at the following links:


General best practices

For testing please follow our general best practices:

  • Tests should have a clear goal that should also be documented.

  • Tests have to be classified into different integration levels.

  • Tests should follow a clear naming convention.

  • Automated tests need to properly assert the result of the tested operation(s) in a reliable way. E.g. avoid stuff like assertThat(service.getAllEntities()).hasSize(42) or even worse tests that have no assertion at all.

  • Tests need to be independent of each other. Never write test-cases or tests (in Java @Test methods) that depend on another test to be executed before.

  • Use AssertJ to write good readable and maintainable tests that also provide valuable feedback in case a test fails. Do not use legacy JUnit methods like assertEquals anymore!

  • For easy understanding divide your test in three commented sections:

    • //given

    • //when

    • //then

  • Plan your tests and test data management properly before implementing.

  • Instead of having a too strong focus on test coverage better ensure you have covered your critical core functionality properly and review the code including tests.

  • Test code shall NOT be seen as second class code. You shall consider design, architecture and code-style also for your test code but do not over-engineer it.

  • Test automation is good but should be considered in relation to cost per use. Creating full coverage via automated system tests can cause a massive amount of test-code that can turn out as a huge maintenance hell. Always consider all aspects including product life-cycle, criticality of use-cases to test, and variability of the aspect to test (e.g. UI, test-data).

  • Use continuous integration and establish that the entire team wants to have clean builds and running tests.

  • Prefer delegation over inheritance for cross-cutting testing functionality. Such functionality can be realized and reused via the JUnit @Rule mechanism.
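The given/when/then structure from the list above can be sketched as follows. JUnit and AssertJ are deliberately omitted here so the snippet stays self-contained; in a real test you would use @Test and assertThat, and StringReverserTest/reverse are made-up names for illustration:

```java
public class StringReverserTest {

  // trivial unit under test, only for illustration
  static String reverse(String s) {
    return new StringBuilder(s).reverse().toString();
  }

  public static void main(String[] args) {
    // given
    String input = "devonfw";
    // when
    String result = reverse(input);
    // then (with AssertJ this would be: assertThat(result).isEqualTo("wfnoved"))
    if (!"wfnoved".equals(result)) {
      throw new AssertionError("unexpected result: " + result);
    }
    System.out.println("test passed");
  }
}
```

The three commented sections make the intent of each test phase obvious at a glance, even in longer tests.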

Test Automation Technology Stack

For test automation we use JUnit. However, we are strictly doing all assertions with AssertJ. For mocking we use mockito. In order to mock remote connections we use wiremock. For testing entire components or sub-systems we recommend to use spring-boot-starter-test as lightweight and fast testing infrastructure that is already shipped with devon4j-test.

In case you have to use a full blown JEE application server, we recommend to use arquillian. To get started with arquillian, look here.

Test Doubles

We use test doubles as a generic term for mocks, stubs, fakes, dummies, or spies to avoid confusion. Here is a short summary from stubs VS mocks:

  • Dummy objects contain no logic at all. They may declare data in a POJO style to be used as boilerplate code for parameter lists or even to influence the control flow towards the test’s needs.

  • Fake objects actually have working implementations, but usually take some shortcut which makes them not suitable for production (an in memory database is a good example).

  • Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what’s programmed in for the test. Stubs may also record information about calls, such as an email gateway stub that remembers the messages it 'sent', or maybe only how many messages it 'sent'.

  • Mocks are objects pre-programmed with expectations, which form a specification of the calls they are expected to receive.

We try to give some examples to make this somewhat clearer:
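To make the stub/mock distinction from the list above concrete, here is a hand-rolled example without any mocking framework (the MailGateway interface and all class names are made up for illustration): the stub returns canned behavior and merely records state for state-based verification, while the mock carries an expectation that is verified explicitly afterwards.

```java
import java.util.ArrayList;
import java.util.List;

interface MailGateway {
  void send(String message);
}

// Stub: canned behavior, records what it 'sent' for state-based verification
class MailGatewayStub implements MailGateway {
  final List<String> sentMessages = new ArrayList<>();
  public void send(String message) {
    this.sentMessages.add(message);
  }
}

// Mock: pre-programmed expectation, verified explicitly (behavior-based)
class MailGatewayMock implements MailGateway {
  private final String expectedMessage;
  private int calls;
  MailGatewayMock(String expectedMessage) {
    this.expectedMessage = expectedMessage;
  }
  public void send(String message) {
    if (!this.expectedMessage.equals(message)) {
      throw new AssertionError("unexpected message: " + message);
    }
    this.calls++;
  }
  void verify() {
    if (this.calls != 1) {
      throw new AssertionError("expected exactly one send(), got " + this.calls);
    }
  }
}

public class TestDoubleDemo {
  public static void main(String[] args) {
    MailGatewayStub stub = new MailGatewayStub();
    stub.send("hello");
    System.out.println("stub recorded " + stub.sentMessages.size() + " message(s)");

    MailGatewayMock mock = new MailGatewayMock("hello");
    mock.send("hello");
    mock.verify(); // throws if the expectation was not met
  }
}
```

In practice a framework like Mockito generates such doubles for you, but the responsibilities stay the same.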


Best Practices for applications:

  • A good way to replace small to medium large boundary systems, whose impact (e.g. latency) should be ignored during load and performance tests of the application under development.

  • As stub implementations rely on state-based verification, there is the threat that test developers will partially reimplement the state transitions based on the replaced code. This will immediately lead to a maintenance black hole, so better use mocks to assure the certain behavior on interface level.

  • Do NOT use stubs as the basis of a large amount of test cases: due to the state-based verification of stubs, test developers will enrich the stub implementation until it becomes a large monster with its own hunger for maintenance effort.


Best Practices for applications:

  • Replace not-needed dependencies of your system-under-test (SUT) to minimize the application context your component framework has to start.

  • Replace dependencies of your SUT to impact the control flow under test without establishing all the context parameters needed to match the control flow.

  • Remember: Not everything has to be mocked! Especially on lower levels of tests like isolated module tests, you can be lured into a mocking delusion, where you end up with a hundred lines of code mocking the whole context and five lines executing the test and verifying the mocks' behavior. Always keep in mind the benefit-cost ratio when implementing tests using mocks.


If you need to mock remote connections such as HTTP servers, wiremock offers easy-to-use functionality. For a full description see the homepage or the github repository. Wiremock can be used either as a JUnit Rule, in Java outside of JUnit, or as a standalone process. The mocked server can be configured to respond to specific requests in a given way via a fluent Java API, JSON files, and JSON over HTTP. An example of an integration with JUnit can look as follows.

import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.wireMockConfig;
import com.github.tomakehurst.wiremock.junit.WireMockRule;
import org.junit.Rule;
import org.junit.Test;

public class WireMockOfferImport {

  @Rule
  public WireMockRule mockServer = new WireMockRule(wireMockConfig().dynamicPort());

  @Test
  public void requestDataTest() throws Exception {
    int port = this.mockServer.port();
    // ...
  }
}

This creates a server on a randomly chosen free port on the running machine. You can also specify the port to be used if wanted. Other than that there are several options to further configure the server. This includes HTTPs, proxy settings, file locations, logging and extensions.

  @Test
  public void requestDataTest() throws Exception {
    this.mockServer.stubFor(get(urlEqualTo("/new/offers")).withHeader("Accept", equalTo("application/json"))
        .withHeader("Authorization", containing("Basic")).willReturn(aResponse().withStatus(200).withFixedDelay(1000)
        .withHeader("Content-Type", "application/json").withBodyFile("/wireMockTest/jsonBodyFile.json")));
  }

This will stub the URL localhost:port/new/offers to respond with a status 200 message containing a header (Content-Type: application/json) and a body with the content given in jsonBodyFile.json if the request matches several conditions: it has to be a GET request to ../new/offers with the two given header properties.

Note that by default files are located in src/test/resources/__files/. When using only one WireMock server, one can omit this.mockServer before the stubFor call (static method). You can also add a fixed delay to the response or a processing delay with WireMock.addRequestProcessingDelay(time) in order to test for timeouts.

WireMock can also respond with different corrupted messages to simulate faulty behaviour.

@Test(expected = ResourceAccessException.class)
public void faultTest() {
  this.mockServer.stubFor(get(urlEqualTo("/fault"))
      .willReturn(aResponse().withFault(Fault.MALFORMED_RESPONSE_CHUNK)));
  // ... invoke ../fault here and expect the exception
}


A GET request to ../fault returns an OK status header, then garbage, and then closes the connection.

Integration Levels

There are many discussions about the right level of integration for test automation. Sometimes it is better to focus on small, isolated modules of the system - whatever a "module" may be. In other cases it makes more sense to test integrated groups of modules. Because there is no universal answer to this question, devonfw only defines a common terminology for what could be tested. Each project must make its own decision where to put the focus of test automation. There is no worldwide accepted terminology for the integration levels of testing. In general we consider ISTQB. However, with a technical focus on test automation we want to get more precise.

The following picture shows a simplified view of an application based on the devonfw reference architecture. We define four integration levels that are explained in detail below. The boxes in the picture contain parenthesized numbers. These numbers depict the lowest integration level a box belongs to. Higher integration levels also contain all boxes of lower integration levels. When writing tests for a given integration level, related boxes with a lower integration level must be replaced by test doubles or drivers.

Integration Levels

The main difference between the integration levels is the amount of infrastructure needed to test them. The more infrastructure you need, the more bugs you will find, but the more unstable and the slower your tests will be. So each project has to make a trade-off between the pros and cons of including much infrastructure in tests and has to select the integration levels that fit the project best.

Consider that more infrastructure does not automatically lead to better bug-detection. There may be bugs in your software that are masked by bugs in the infrastructure. The best way to find those bugs is to test with very little infrastructure.

External systems do not belong to any of the integration levels defined here. devonfw does not recommend involving real external systems in test automation. This means, they have to be replaced by test doubles in automated tests. An exception may be external systems that are fully under control of the own development team.

The following chapters describe the four integration levels.

Level 1 Module Test

The goal of an isolated module test is to provide fast feedback to the developer. Consequently, isolated module tests must not have any interaction with the client, the database, the file system, the network, etc.

An isolated module test tests a single class or at least a small set of classes in isolation. If such classes depend on other components, external resources, etc., these shall be replaced with a test double.

public class MyClassTest extends ModuleTest {

  @Test
  public void testMyClass() {

    // given
    MyClass myClass = new MyClass();
    // when
    String value = myClass.doSomething();
    // then
    assertThat(value).isEqualTo("expected value");
  }
}


For an advanced example see here.

Level 2 Component Test

A component test aims to test components or component parts as a unit. These tests typically run with a (light-weight) infrastructure such as spring-boot-starter-test and can access resources such as a database (e.g. for DAO tests). Further, no remote communication is intended here. Access to external systems shall be replaced by a test double.

With devon4j and spring you can write a component-test as easy as illustrated in the following example:

@SpringBootTest(classes = { MySpringBootApp.class }, webEnvironment = WebEnvironment.NONE)
public class UcFindCountryTest extends ComponentTest {

  @Inject
  private UcFindCountry ucFindCountry;

  @Test
  public void testFindCountry() {

    // given
    String countryCode = "de";

    // when
    TestUtil.login("user", MyAccessControlConfig.FIND_COUNTRY);
    CountryEto country = this.ucFindCountry.findCountry(countryCode);

    // then
    assertThat(country).isNotNull();
  }
}
This test will start the entire spring-context of your app (MySpringBootApp). Within the test spring will inject according spring-beans into all your fields annotated with @Inject. In the test methods you can use these spring-beans and perform your actual tests. This pattern can be used for testing DAOs/Repositories, Use-Cases, or any other spring-bean with its entire configuration including database and transactions.

When you are testing use-cases your authorization will also be in place. Therefore, you have to simulate a logon in advance, which is done via the login method in the above example. The test-infrastructure will automatically do a logout for you after each test method in doTearDown.

Level 3 Subsystem Test

A subsystem test runs against the external interfaces (e.g. HTTP service) of the integrated subsystem. Subsystem tests of the client subsystem are described in the devon4ng testing guide. In devon4j the server (JEE application) is the subsystem under test. The tests act as a client (e.g. service consumer) and the server has to be integrated and started in a container.

With devon4j and spring you can write a subsystem-test as easy as illustrated in the following example:

@SpringBootTest(classes = { MySpringBootApp.class }, webEnvironment = WebEnvironment.RANDOM_PORT)
public class CountryRestServiceTest extends SubsystemTest {

  @Inject
  private ServiceClientFactory serviceClientFactory;

  @Test
  public void testFindCountry() {

    // given
    String countryCode = "de";

    // when
    CountryRestService service = this.serviceClientFactory.create(CountryRestService.class);
    CountryEto country = service.findCountry(countryCode);

    // then
    assertThat(country).isNotNull();
  }
}

Even though not obvious at first glance, this test will start your entire application as a server on a free random port (so that it works in CI with parallel builds for different branches) and tests the invocation of a (REST) service including (un)marshalling of data (e.g. as JSON) and transport via HTTP (all in the invocation of the findCountry method).

Do not confuse a subsystem test with a system integration test. A system integration test validates the interaction of several systems where we do not recommend test automation.

Level 4 System Test

A system test has the goal to test the system as a whole against its official interfaces such as its UI or batches. The system itself runs as a separate process in a way close to a regular deployment. Only external systems are simulated by test doubles.

devonfw only gives advice for automated system tests (TODO see allure testing framework). In nearly every project there must be manual system tests, too. These manual system tests are out of scope here.

Classifying Integration-Levels

devon4j defines Category-Interfaces that shall be used as JUnit Categories. Also devon4j provides abstract base classes that you may extend in your test-cases if you like.

devon4j further pre-configures the maven build to only run integration levels 1-2 by default (e.g. for fast feedback in continuous integration). It offers the profiles subsystemtest (1-3) and systemtest (1-4). In your nightly build you can simply add -Psystemtest to run all tests.


This section introduces how to implement tests on the different levels with the given devonfw infrastructure and the proposed frameworks.

Module Test

In devon4j you can extend the abstract class ModuleTest to basically get access to assertions. In order to test classes with dependencies on other components or external services, one needs to provide mocks for those. As the technology stack recommends, we use the Mockito framework to offer this functionality. The following example shows how to use Mockito in a JUnit test.

import static org.mockito.Mockito.when;
import static org.mockito.Mockito.mock;

public class StaffmanagementImplTest extends ModuleTest {

  @Rule
  public MockitoRule rule = MockitoJUnit.rule();

  @Test
  public void testFindStaffMember() {

Note that the test class does not use the @SpringApplicationConfiguration annotation. In a module test one does not use the whole application. The JUnit rule is the best solution to use in order to get all needed functionality of Mockito. Static imports are a convenient option to enhance readability within Mockito tests. You can define mocks with the @Mock annotation or the mock(*.class) call. To inject the mocked objects into your class under test you can use the @InjectMocks annotation. This automatically uses the setters of StaffmanagementImpl to inject the defined mocks into the class under test (CUT) when there is a setter available. In this case the beanMapper and the staffMemberDao are injected. Of course it is possible to do this manually if you need more control.

  @Mock
  private BeanMapper beanMapper;
  @Mock
  private StaffMemberEntity staffMemberEntity;
  @Mock
  private StaffMemberEto staffMemberEto;
  @Mock
  private StaffMemberDao staffMemberDao;
  @InjectMocks
  StaffmanagementImpl staffmanagementImpl = new StaffmanagementImpl();

The mocked objects do not provide any functionality yet. To define what happens on a method call on a mocked dependency in the CUT, one can use when(condition).thenReturn(result). In this case we want to test findStaffMember(Long id) in the StaffmanagementImpl.

public StaffMemberEto findStaffMember(Long id) {
  return getBeanMapper().map(getStaffMemberDao().find(id), StaffMemberEto.class);
}

In this simple example one has to stub two calls on the CUT as you can see below. For example, the method call staffMemberDao.find(id) of the CUT is stubbed to return the mock object staffMemberEntity that is also defined as a mock.

long id = 1L;
Class<StaffMemberEto> targetClass = StaffMemberEto.class;
when(this.staffMemberDao.find(id)).thenReturn(this.staffMemberEntity);
when(this.beanMapper.map(this.staffMemberEntity, targetClass)).thenReturn(this.staffMemberEto);

StaffMemberEto resultEto = this.staffmanagementImpl.findStaffMember(id);
assertThat(resultEto).isEqualTo(this.staffMemberEto);

After the test method call one can verify the expected results. Mockito can check whether a mocked method call was indeed called. This can be done using Mockito verify. Note that it does not add any value if you check for method calls that are needed to reach the asserted result anyway. Call verification can be useful e.g. when you want to assure that statistics are written out without actually testing them.

Subsystem Test

devon4j provides a simple test infrastructure to aid with the implementation of subsystem tests. It becomes available by simply subclassing SubsystemTest.

Regression testing

When it comes to complex output (even binary) that you want to regression test by comparing with an expected result, you should consider Approval Tests using ApprovalTests.Java. If applied for the right problems, it can be very helpful.
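The core idea of approval testing can be sketched with plain JDK file APIs. ApprovalTests.Java automates this plus diff reporting and approval tooling; the class, method, and file names here are made up for illustration: the received output is compared against a previously approved "golden" file, and on first run or mismatch the received file is written out for manual review.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ApprovalSketch {

  /** Returns true if 'received' matches the approved golden file, else writes it out for review. */
  static boolean approve(String received, Path approvedFile, Path receivedFile) throws IOException {
    if (Files.exists(approvedFile) && Files.readString(approvedFile).equals(received)) {
      return true; // output matches what was approved earlier
    }
    Files.writeString(receivedFile, received); // kept for manual diff and approval
    return false;
  }

  public static void main(String[] args) throws IOException {
    Path dir = Files.createTempDirectory("approval");
    Path approved = dir.resolve("report.approved.txt");
    Path received = dir.resolve("report.received.txt");

    System.out.println(approve("complex output", approved, received)); // nothing approved yet
    Files.writeString(approved, "complex output"); // simulate manual approval
    System.out.println(approve("complex output", approved, received)); // now matches the golden file
  }
}
```

This is why approval testing shines for complex output: the expected result lives in a file under version control instead of being encoded in assertion code.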

Deployment Pipeline

A deployment pipeline is a semi-automated process that gets software-changes from version control into production. It contains several validation steps, e.g. automated tests of all integration levels. Because devon4j should fit to different project types - from agile to waterfall - it does not define a standard deployment pipeline. But we recommend to define such a deployment pipeline explicitly for each project and to find the right place in it for each type of test.

For that purpose, it is advisable to have fast running test suite that gives as much confidence as possible without needing too much time and too much infrastructure. This test suite should run in an early stage of your deployment pipeline. Maybe the developer should run it even before he/she checked in the code. Usually lower integration levels are more suitable for this test suite than higher integration levels.

Note, that the deployment pipeline always should contain manual validation steps, at least manual acceptance testing. There also may be manual validation steps that have to be executed for special changes only, e.g. usability testing. Management and execution processes of those manual validation steps are currently not in the scope of devonfw.

Test Coverage

We are using tools (SonarQube/Jacoco) to measure the coverage of the tests. Please always keep in mind that the only reliable message of a code coverage of X% is that (100-X)% of the code is entirely untested. It does not say anything about the quality of the tests or the software though it often relates to it.

Test Configuration

This section covers test configuration in general without focusing on integration levels as in the first chapter.

Configure Test Specific Beans

Sometimes it can be handy to provide other or differently configured bean implementations via CDI than those available in production. For example, when creating beans using @Bean-annotated methods, they are usually configured within those methods. WebSecurityBeansConfig shows an example of such methods.

@Configuration
public class WebSecurityBeansConfig {

  @Bean
  public AccessControlSchemaProvider accessControlSchemaProvider() {
    // actually no additional configuration is shown here
    return new AccessControlSchemaProviderImpl();
  }
}

AccessControlSchemaProvider allows to programmatically access data defined in some XML file, e.g. access-control-schema.xml. Now, one can imagine that it would be helpful if AccessControlSchemaProvider would point to some other file than the default within a test class. That file could provide content that differs from the default. The question is: how can I change resource path of AccessControlSchemaProviderImpl within a test?

One very helpful solution is to use static inner classes. Static inner classes can contain @Bean-annotated methods, and by placing them in the classes parameter of the @SpringBootTest(classes = { /* place class here*/ }) annotation, the beans returned by these methods are placed in the application context during test execution. Combining this feature with inheritance allows to override methods defined in other configuration classes, as shown in the following listing where TempWebSecurityConfig extends WebSecurityBeansConfig. This relationship allows to override public AccessControlSchemaProvider accessControlSchemaProvider(). Here we are able to configure the instance of type AccessControlSchemaProviderImpl before returning it (and, of course, we could also have used a completely different implementation of the AccessControlSchemaProvider interface). By overriding the method, the implementation of the super class is ignored, hence only the new implementation is called at runtime. Other methods defined in WebSecurityBeansConfig which are not overridden by the subclass are still dispatched to WebSecurityBeansConfig.

//... Other testing related annotations
@SpringBootTest(classes = { TempWebSecurityConfig.class })
public class SomeTestClass {

  public static class TempWebSecurityConfig extends WebSecurityBeansConfig {

    @Override
    @Bean
    public AccessControlSchemaProvider accessControlSchemaProvider() {

      ClassPathResource resource = new ClassPathResource(locationPrefix + "access-control-schema3.xml");
      AccessControlSchemaProviderImpl accessControlSchemaProvider = new AccessControlSchemaProviderImpl();
      accessControlSchemaProvider.setAccessControlSchema(resource); // configure the test resource (setter name may differ)
      return accessControlSchemaProvider;
    }
  }
}

The following chapter of the Spring framework documentation explains this issue, but uses a slightly different way to obtain the configuration.

Test Data

It is possible to obtain test data in two different ways depending on your test’s integration level.

Debugging Tests

The following two sections describe two debugging approaches for tests. Tests are either run from within the IDE or from the command line using Maven.

Debugging with the IDE

Debugging with the IDE is as easy as always. Even if you want to execute a SubsystemTest which needs a Spring context and a server infrastructure to run properly, you just set your breakpoints and click on Debug As → JUnit Test. The test infrastructure will take care of initializing the necessary infrastructure - if everything is configured properly.

Debugging with Maven

Please refer to the following two links to find a guide for debugging tests when running them from Maven.

In essence, you first have to execute a test using the command line. Maven will halt just before the test execution and wait for your IDE to connect to the process. When receiving a connection, the test will start and then pause at any breakpoint set in advance. The first link states that tests are started through the following command:

mvn -Dmaven.surefire.debug test

Although this is correct, it will run every test class in your project and - which is time consuming and mostly unnecessary - halt before each of these tests. To counter this problem you can simply execute a single test class through the following command (here we execute the TablemanagementRestServiceTest from the restaurant sample application):

mvn test -Dmaven.surefire.debug -Dtest=TablemanagementRestServiceTest

It is important to notice that you first have to execute the Maven command in the according submodule, e.g. to execute the TablemanagementRestServiceTest you have first to navigate to the core module’s directory.


The technical data model is defined in form of persistent entities. However, passing persistent entities via call-by-reference across the entire application will soon cause problems:

  • Changes to a persistent entity are directly written back to the persistent store when the transaction is committed. When the entity is sent across the application, changes tend to take place in multiple places, endangering data sovereignty and leading to inconsistency.

  • You want to send and receive data via services across the network and have to define what section of your data is actually transferred. If you have relations in your technical model you quickly end up loading and transferring way too much data.

  • Modifications to your technical data model shall not automatically have impact on your external services causing incompatibilities.

To prevent such problems transfer-objects are used leading to a call-by-value model and decoupling changes to persistent entities.

In the following sections the different types of transfer-objects are explained. You will find all according naming-conventions in the architecture-mapping.


For each persistent entity «BusinessObject»Entity we create or generate a corresponding entity transfer object (ETO) named «BusinessObject»Eto. It has the same properties except for relations.


In order to centralize the properties (getters and setters with their javadoc) we create a common interface «BusinessObject» implemented both by the entity and its ETO. This also gives us compile-time safety that bean-mapper can properly map all properties between entity and ETO.


If we need to pass an entity with its relation(s) we create a corresponding composite transfer object (CTO) named «BusinessObject»«Subset»Cto that only contains other transfer-objects or collections of them. Here «Subset» is empty for the canonical CTO that holds the ETO together with all its relations. This is what can be generated automatically with CobiGen. However, be careful not to generate CTOs without thinking and considering design. If there are no relations at all, a CTO is pointless and shall be omitted. However, if there are multiple relations you typically need multiple CTOs for the same «BusinessObject» that define different subsets of the related data. These will typically be designed and implemented by hand. E.g. you may have CustomerWithAddressCto and CustomerWithContractCto. Most CTOs correspond to a specific «BusinessObject» and therefore contain a «BusinessObject»Eto. Such CTOs should inherit from MasterCto.
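The CustomerWithAddressCto mentioned above could be sketched as follows. All class and field names are invented for illustration; real ETOs would extend AbstractEto and real CTOs would extend MasterCto, with proper getters and setters:

```java
import java.util.ArrayList;
import java.util.List;

// entity transfer object: scalar properties only, no relations (sketch)
class CustomerEto {
  String name;
}

class AddressEto {
  String city;
}

// composite transfer object: contains only other transfer-objects or collections of them
class CustomerWithAddressCto {
  CustomerEto customer;                              // the 'master' ETO
  List<AddressEto> addresses = new ArrayList<>();    // the related subset
}

public class CtoDemo {
  public static void main(String[] args) {
    CustomerWithAddressCto cto = new CustomerWithAddressCto();
    cto.customer = new CustomerEto();
    cto.customer.name = "John Doe";
    AddressEto address = new AddressEto();
    address.city = "Cologne";
    cto.addresses.add(address);
    System.out.println(cto.customer.name + " has " + cto.addresses.size() + " address(es)");
  }
}
```

A CustomerWithContractCto would look the same but carry a different subset of relations, which is exactly why such CTOs are designed by hand.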

This pattern with entities, ETOs and CTOs is illustrated by the following UML diagram from our sample application.

ETOs and CTOs
Figure 8. ETOs and CTOs

Finally, there are typically transfer-objects for data that is never persistent. For very generic cases these just carry the suffix To.


For searching we create or generate a «BusinessObject»SearchCriteriaTo representing a query to find instances of «BusinessObject».


We can potentially create separate service transfer objects (STO) (if possible named «BusinessObject»Sto) to keep the service API stable and independent of the actual data-model. However, we usually do not need this and want to keep our architecture simple. Only create STOs if you need service versioning and have to support previous APIs, or to provide legacy service technologies that require their own isolated data-model. In such a case you also need bean-mapping between STOs and ETOs, which means extra effort and complexity that should be avoided.


For decoupling you sometimes need to create separate objects (beans) for a different view. E.g. for an external service you will use a transfer-object instead of the persistence entity so internal changes to the entity do not implicitly change or break the service.

Therefore you need to map similar objects, which creates a copy. This also has the benefit that modifications to the copy have no side-effect on the original source object. However, implementing such mapping code by hand is very tedious and error-prone (if new properties are added to the beans but not to the mapping code):

public UserEto mapUser(UserEntity source) {
  UserEto target = new UserEto();
  // copy each property by hand - tedious and error-prone:
  // target.setName(source.getName());
  // ...
  return target;
}

Therefore we are using a BeanMapper for this purpose that makes our lives a lot easier.

Bean-Mapper Dependency

To get access to the BeanMapper we have to add one of the following dependencies to our POM:

We started with dozer in devon4j and still support it. However, we now recommend orika for new projects as it is much faster (see Performance of Java Mapping Frameworks).



Bean-Mapper Configuration using Dozer

The BeanMapper implementation is based on an existing open-source bean mapping framework. In case of Dozer the mapping is configured in src/main/resources/config/app/common/dozer-mapping.xml.

See the my-thai-star dozer-mapping.xml as an example. It is important that you configure all your custom datatypes as <copy-by-reference> tags and have the mapping from PersistenceEntity (ApplicationPersistenceEntity) to AbstractEto configured properly:

<mapping type="one-way">
  <class-a>com.devonfw.module.basic.common.api.entity.PersistenceEntity</class-a>
  <class-b>com.devonfw.module.basic.common.api.to.AbstractEto</class-b>
  <field custom-converter="com.devonfw.module.beanmapping.common.impl.dozer.IdentityConverter">
    <a>this</a>
    <b is-accessible="true">persistentEntity</b>
  </field>
</mapping>

Bean-Mapper Usage

Then we can get the BeanMapper via dependency-injection, which is typically already provided by an abstract base class (e.g. AbstractUc). Now we can solve our problem very easily:

UserEntity resultEntity = ...;
return getBeanMapper().map(resultEntity, UserEto.class);

There is also additional support for mapping entire collections.
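To illustrate the idea (a hypothetical helper, not the actual devon4j BeanMapper API), collection mapping can be sketched as applying a per-object mapping function to each element:

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

// Hypothetical helper sketching how collection mapping builds on single-object mapping
public final class CollectionMapping {

  private CollectionMapping() {
  }

  // maps each source element with the given mapper function into a new list
  public static <S, T> List<T> mapList(List<S> sources, Function<S, T> mapper) {
    return sources.stream().map(mapper).collect(Collectors.toList());
  }

  public static void main(String[] args) {
    // e.g. map each name to its length (stands in for entity-to-ETO mapping)
    List<Integer> lengths = mapList(List.of("alice", "bob"), String::length);
    System.out.println(lengths); // prints [5, 3]
  }
}
```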


A datatype is an object representing a value of a specific type with the following aspects:

  • It has a technical or business specific semantic.

  • Its JavaDoc explains the meaning and semantic of the value.

  • It is immutable and therefore stateless (its value is assigned at construction time and cannot be modified).

  • It is serializable.

  • It properly implements #equals(Object) and #hashCode() (two different instances with the same value are equal and have the same hash).

  • It shall ensure syntactical validation so it is NOT possible to create an instance with an invalid value.

  • It is responsible for formatting its value to a string representation suitable for sinks such as UI, loggers, etc. Also consider cases like a Datatype representing a password where toString() should return something like "**" instead of the actual password to prevent security accidents.

  • It is responsible for parsing the value from other representations such as a string (as needed).

  • It shall provide required logical operations on the value to prevent redundancies. Due to the immutable attribute all manipulative operations have to return a new Datatype instance (see e.g. BigDecimal.add(java.math.BigDecimal)).

  • It should implement Comparable if a natural order is defined.

Based on the Datatype a presentation layer can decide how to view and how to edit the value. Therefore a structured data model should make use of custom datatypes in order to be expressive. Common generic datatypes are String, Boolean, Number and its subclasses, Currency, etc. Please note that both Date and Calendar are mutable and have very confusing APIs. Therefore, use JSR-310 or jodatime instead. Even if a datatype is technically nothing but a String or a Number but logically something special, it is worth defining it as a dedicated datatype class already for the purpose of having a central javadoc to explain it. On the other hand, avoid introducing technical datatypes like String32 for a String with a maximum length of 32 characters, as this is not adding value in the sense of a real Datatype. It is suitable and in most cases also recommended to use the class implementing the datatype as API, omitting a dedicated interface.

— mmm project
datatype javadoc
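As an illustrative sketch (a hypothetical CustomerNumber datatype, not taken from the devonfw codebase), most of the aspects listed above can be realized like this:

```java
import java.io.Serializable;
import java.util.Objects;

// Hypothetical immutable datatype: a customer number of the form "C" followed by 6 digits
public final class CustomerNumber implements Serializable, Comparable<CustomerNumber> {

  private final String value;

  private CustomerNumber(String value) {
    this.value = value;
  }

  // factory with syntactical validation: an invalid instance can never be created
  public static CustomerNumber of(String value) {
    if ((value == null) || !value.matches("C[0-9]{6}")) {
      throw new IllegalArgumentException("Invalid customer number: " + value);
    }
    return new CustomerNumber(value);
  }

  @Override
  public boolean equals(Object obj) {
    return (obj instanceof CustomerNumber) && this.value.equals(((CustomerNumber) obj).value);
  }

  @Override
  public int hashCode() {
    return Objects.hash(this.value);
  }

  @Override
  public int compareTo(CustomerNumber other) {
    return this.value.compareTo(other.value);
  }

  @Override
  public String toString() {
    return this.value; // safe here; a password datatype would mask its value instead
  }
}
```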
Datatype Packaging

In devonfw we use a common packaging schema. The specifics for datatypes are as follows:

Segment Value Explanation

«component» «component» or general Here we use the (business) component defining the datatype or general for generic datatypes.

«layer» common Datatypes are used across all layers and are not assigned to a dedicated layer.

«scope» api Datatypes are always used directly as API even though they may contain (simple) implementation logic. Most datatypes are simple wrappers for generic Java types (e.g. String) but make these explicit and might add some validation.

Technical Concerns

Many technologies like Dozer and QueryDSL (alias API) are heavily based on reflection. For them to work properly with custom datatypes, the frameworks must be able to instantiate custom datatypes with no-argument constructors. It is therefore recommended to implement a no-argument constructor for each datatype of at least protected visibility.
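A minimal sketch of this recommendation (a hypothetical Amount datatype, not from the devonfw codebase):

```java
// Hypothetical datatype illustrating the protected no-argument constructor
// that reflection-based frameworks (Dozer, QueryDSL alias API) need for instantiation
public class Amount {

  private long value;

  // only for reflection-based frameworks - not part of the public API
  protected Amount() {
    super();
  }

  public Amount(long value) {
    this.value = value;
  }

  public long getValue() {
    return this.value;
  }
}
```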

Datatypes in Entities

The usage of custom datatypes in entities is explained in the persistence layer guide.

Datatypes in Transfer-Objects

For mapping datatypes with JAXB see XML guide.


For mapping datatypes from and to JSON see JSON custom mapping.

CORS support

When you are developing a JavaScript client and a server application separately, you have to deal with cross-domain issues: requests originate from a domain different from the target domain, and browsers do not allow this by default.

So we need to prepare the server side to accept requests from other domains. We need to cover the following points:

  • Accept requests from other domains.

  • Accept devonfw used headers like X-CSRF-TOKEN or correlationId.

  • Be prepared to receive secured requests (cookies).

It is important to note that if you are using security in your requests (sending cookies) you have to set the withCredentials flag to true in your client-side request and deal with special IE8 characteristics.

Configuring CORS support

To enable CORS support on the server side, add the below dependency.


Add the below properties to your application properties file.

#CORS support
Attribute Description HTTP Header


Decides whether the browser should include any cookies associated with the request (true if cookies should be included).



List of allowed origins (use * to allow all origins).



List of allowed HTTP request methods (OPTIONS, HEAD, GET, PUT, POST, DELETE, PATCH, etc.).



List of allowed headers that can be used during the request (use * to allow all headers requested by the client).



Ant-style pattern for the URL paths where to apply CORS. Use "/**" to match all URL paths.

More information about the CORS headers can be found here

BLOB support

BLOB stands for Binary Large Object. A BLOB may be an image, an office document, a ZIP archive or any other multimedia object. Often these BLOBs are large. If this is the case, you need to take care that you do not copy all the BLOB data into your application heap, e.g. when providing them via a REST service. This could easily lead to performance problems or out-of-memory errors. The solution to this problem is "streaming" those BLOBs directly from the database to the client. To demonstrate how this can be accomplished, devonfw provides an example.
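The core idea, sketched here in plain Java (the devonfw example wires this into REST and JDBC): copy the BLOB in small chunks from an input stream to an output stream, so that only one small buffer lives on the heap at any time regardless of the BLOB size.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public final class BlobStreaming {

  private BlobStreaming() {
  }

  // copies the BLOB chunk by chunk; heap usage stays at one 8 KiB buffer
  public static long stream(InputStream in, OutputStream out) throws IOException {
    byte[] buffer = new byte[8192];
    long total = 0;
    int read;
    while ((read = in.read(buffer)) != -1) {
      out.write(buffer, 0, read);
      total += read;
    }
    return total;
  }

  public static void main(String[] args) throws IOException {
    // stand-ins for a JDBC Blob input stream and an HTTP response output stream
    byte[] blob = new byte[100_000];
    ByteArrayOutputStream sink = new ByteArrayOutputStream();
    long copied = stream(new ByteArrayInputStream(blob), sink);
    System.out.println(copied); // prints 100000
  }
}
```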

Java Development Kit

The Java Development Kit is an implementation of the Java platform. It provides the Java Virtual Machine (JVM) and the Java Runtime Environment (JRE).


The JDK exists in different editions:

As Java is evolving and complex, maintaining a JVM requires a lot of effort. Therefore many alternative JDK editions are unable to keep up with the latest Java versions and the according compatibility. Unfortunately OpenJDK only maintains a specific version of Java for a relatively short period of time before moving to the next major version. In the end, this technically means that OpenJDK is a continuous beta and cannot be used in production for reasonable software projects. As OracleJDK changed its licensing model and cannot be used for commercial usage even during development, things can get tricky. You may want to use OpenJDK for development and OracleJDK only in production. However, e.g. OpenJDK 11 never released a version that is stable enough for reasonable development (e.g. the javadoc tool is broken and fixes are not available for OpenJDK 11 - fixed in 11.0.3, which is only available as OracleJDK 11, or you need to go to OpenJDK 12+, which has other bugs), so in the end there is no working release of OpenJDK 11. This more or less forces you to use OracleJDK, which requires you to buy a subscription so you can use it for commercial development. However, there is AdoptOpenJDK that provides forked releases of OpenJDK with bug-fixes, which might be an option. Anyhow, as you want to have your development environment close to production, the productively used JDK (most likely OracleJDK) should be preferred also for development.


Until Java 8 compatibility was one of the key aspects for Java version updates (after the mess on the Swing updates with Java2 many years ago). However, Java 9 introduced a lot of breaking changes. This documentation wants to share the experience we collected in devonfw when upgrading from Java 8 to newer versions. First of all we separate runtime changes that you need if you want to build your software with JDK 8 but such that it can also run on newer versions (e.g. JRE 11) from changes required to also build your software with more recent JDKs (e.g. JDK 11 or 12).

Runtime Changes

This section describes required changes to your software in order to make it run also with versions newer than Java 8.

Classes removed from JDK

The first thing that most users hit when running their software with newer Java versions is a ClassNotFoundException like this:

Caused by: java.lang.ClassNotFoundException: javax.xml.bind.JAXBException

As Java 9 introduced a module system with Jigsaw, the JDK that used to be a monolithic mess is now a well-defined set of structured modules. Some of the classes that used to come with the JDK moved to modules that were not available by default in Java 9 and have even been removed entirely in later versions of Java. Therefore you should simply treat such code just like any other 3rd party component that you can add as a (maven) dependency. The following table gives you the required hints to make your software work even with such classes / modules removed from the JDK (please note that the specified version is just a suggestion that worked, feel free to pick a more recent or more appropriate version):
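For example, JAXB (the module providing javax.xml.bind.JAXBException from the stack trace above) can be added back as a regular maven dependency; the version is just a suggestion that worked:

```xml
<dependency>
  <groupId>javax.xml.bind</groupId>
  <artifactId>jaxb-api</artifactId>
  <version>2.3.1</version>
</dependency>
```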

Table 40. Dependencies for classes removed from Java 8 since 9+
Class GroupId ArtifactId Version

3rd Party Updates

Further, internal and unofficial APIs (e.g. sun.misc.Unsafe) have been removed. These are typically not used by your software directly but by low-level 3rd party libraries like asm that need to be updated. Also simple things like the Java version have changed (from 1.8.x to 9.x, 10.x, 11.x, 12.x, etc.). Some 3rd party libraries were parsing the Java version in a very naive way, making them unable to be used with Java 9+:
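To illustrate the pitfall, a robust parser must handle both the legacy scheme ("1.8.0_181" means Java 8) and the modern scheme ("11.0.2" means Java 11); naive parsers that assumed the "1.x" prefix broke on Java 9+. A minimal sketch:

```java
// Parses the major Java version from both legacy ("1.8.0_181") and modern ("11.0.2") schemes
public final class JavaVersionParser {

  private JavaVersionParser() {
  }

  public static int majorVersion(String version) {
    String[] parts = version.split("[._-]");
    int first = Integer.parseInt(parts[0]);
    // before Java 9 the version string started with "1." (e.g. 1.8 is Java 8)
    return (first == 1) ? Integer.parseInt(parts[1]) : first;
  }

  public static void main(String[] args) {
    System.out.println(majorVersion("1.8.0_181")); // prints 8
    System.out.println(majorVersion("11.0.2"));    // prints 11
  }
}
```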

Caused by: java.lang.NullPointerException

Therefore the following table gives an overview of common 3rd party libraries that have been affected by such breaking changes and need to be updated to at least the specified version:

Table 41. Minimum recommended versions of common 3rd party for Java 9+
GroupId ArtifactId Version Issue

102, 93, 133

194, 228, 246, 171


For internationalization (i18n) and localization (l10n), ResourceBundle is used for language- and country-specific texts and configurations as properties. With Java modules there are changes and impacts you need to know to get things working. The most important change is documented in the JavaDoc of ResourceBundle. However, instead of using ResourceBundleProvider and refactoring your entire code causing incompatibilities, you can simply put the resource bundles in a regular JAR on the classpath rather than a named module (or into the launching app). If you want to implement (new) Java modules with i18n support, you can have a look at mmm-nls.

Buildtime Changes

If you also want to change your build to work with a recent JDK you also need to ensure that test frameworks and maven plugins properly support this.


Findbugs does not work with Java 9+ and is actually a dead project. The new findbugs is SpotBugs. For maven the new solution is spotbugs-maven-plugin:

Test Frameworks
Table 42. Minimum recommended versions of common 3rd party test frameworks for Java 9+
GroupId ArtifactId Version Issue

1419, 1696, 1607, 1594, 1577, 1482

Maven Plugins
Table 43. Minimum recommended versions of common maven plugins for Java 9+
GroupId ArtifactId (min.) Version Issue

Maven Usage

With Java modules you cannot run Javadoc standalone anymore, or you will get this error when running mvn javadoc:javadoc:

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-javadoc-plugin:3.1.1:javadoc (default-cli) on project mmm-base: An error has occurred in Javadoc report generation:
[ERROR] Exit code: 1 - error: module not found: io.github.mmm.base
[ERROR] Command line was: /projects/mmm/software/java/bin/javadoc @options @packages @argfile

As a solution or workaround you need to include the compile goal into your build lifecycle so the module-path is properly configured:

mvn compile javadoc:javadoc

We want to give credits and say thanks to the following articles that have been there before and helped us on our way:

Microservices in devonfw

The microservices architecture is an approach for application development based on a series of small services grouped under a business domain. Each individual service runs autonomously, and the services communicate with each other through their APIs. That independence between the different services allows managing (upgrading, fixing, deploying, etc.) each one without affecting the rest of the system's services. In addition, the microservices architecture allows scaling specific services when facing an increase in requests, so applications based on microservices are more flexible and stable and can be adapted quickly to changes in demand.

However, this new approach, developing apps based on microservices, presents some downsides.

Let’s see the main challenges when working with microservices:

  • Having the application divided into different services, we will need a component (router) to redirect each request to the related microservice. These redirection rules must implement filters to guarantee proper functionality.

  • In order to manage the routing process correctly, the application will also need a catalog with all the microservices and their details: IPs and ports of each of the deployed instances of each microservice, the state of each instance and some other related information. This catalog is called Service Discovery.

  • With all the information from the Service Discovery, the application will need to select the suitable instance among all the available instances of a microservice. This is handled by a client-side load balancer library.

  • The different microservices will likely be interconnected with each other. That means that in case of failure of one of the microservices involved in a process, the application must implement a mechanism to avoid error propagation through the rest of the services and provide an alternative process result. To solve this, the Circuit Breaker pattern can be implemented in the calls between microservices.

  • As we have mentioned, the microservices will exchange calls and information with each other, so our applications will need to provide a secured context to avoid unauthorized operations or intrusions. In addition, since microservices must be able to operate in an isolated way, it is not recommended to maintain a session. To meet this need without using Spring sessions, token-based authentication is used that exchanges information using the JSON Web Token (JWT) protocol.

In addition to all of this we will find other issues related to this particular architecture that we will address fitting the requirements of each project.

  • Distributed databases: each instance of a microservice should have only one database.

  • Centralized logs: each instance of a microservice creates a log and a trace that should be centralized to allow an easier way to read all that information.

  • Centralized configuration: each microservice has its own configuration, so our applications should group all those configurations in only one place to ease the configuration management.

  • Automatized deployments: as we are managing several components (microservices, catalogs, balancers, etc.) the deployment should be automatized to avoid errors and ease this process.

To address the above, devonfw microservices has an alternative approach Microservices based on Netflix-Tools.

Application Performance Management

This guide gives hints how to manage, monitor and analyse performance of Java applications.

Temporary Analysis

If you are facing performance issues and want to do an ad-hoc analysis, we recommend glowroot. It is ideal in cases where monitoring in your local development environment is suitable. However, it is also possible to use it in your test environment. It is entirely free and open-source. Still it is very powerful and helps to trace down bottlenecks. To get a first impression of the tool take a look at the demo.


In case you are forced to use a JEE application server and want to do a temporary analysis, you can double-click your server instance in the servers view in Eclipse and click on the link Open launch configuration in order to add the -javaagent JVM option.

Regular Analysis

In case you want to manage application performance regularly we recommend to use JavaMelody that can be integrated into your application. More information on javamelody is available on the JavaMelody Wiki


Caching is a technical approach to improve performance. While it may appear easy at first sight, it is an advanced topic. In general, try to use caching only when required for performance reasons. If you come to the point that you need caching, first think about:

  • What to cache?
    Be sure about what you want to cache. Is it static data? How often will it change? What will happen if the data changes but due to caching you might receive "old" values? Can this be tolerated? For how long? This is not a technical question but a business requirement.

  • Where to cache?
    Will you cache data on client or server? Where exactly?

  • How to cache?
    Is a local cache sufficient or do you need a shared cache?

Local Cache
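A local cache lives inside a single JVM and is the simplest option. As an illustrative sketch (not a devonfw API), an in-memory LRU cache can be built on LinkedHashMap's access-order mode:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal in-memory LRU cache, local to one JVM (illustrative sketch, not a devonfw API)
public class LruCache<K, V> extends LinkedHashMap<K, V> {

  private final int capacity;

  public LruCache(int capacity) {
    super(16, 0.75f, true); // access-order: iteration order is least-recently-used first
    this.capacity = capacity;
  }

  @Override
  protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
    // evict the least-recently-used entry once the capacity is exceeded
    return size() > this.capacity;
  }

  public static void main(String[] args) {
    LruCache<String, String> cache = new LruCache<>(2);
    cache.put("a", "1");
    cache.put("b", "2");
    cache.put("c", "3"); // evicts "a", the least-recently-used entry
    System.out.println(cache.keySet()); // prints [b, c]
  }
}
```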
Shared Cache
Distributed Cache


Most software development teams use feature-branching to be able to work in parallel and maintain a stable main branch in the VCS. However, feature-branching might not be the ideal tool in every case because of big merges and isolation between development groups. In many cases, feature-toggles can avoid some of these problems, so they should definitely be considered for collaborative software development.

Implementation with the devonfw

To use feature-toggles with devonfw, use the framework Togglz, because it has all the features generally needed and provides great documentation.

For a pretty minimal working example, also see this fork.


The following example takes place in the oasp-sample-core project, so the necessary dependencies have to be added to the according pom.xml file. Required are the main Togglz project including Spring support, the Togglz console to graphically change the feature state and the Spring security package to handle authentication for the Togglz console.

<!-- Feature-Toggle-Framework togglz -->



In addition to that, the following lines have to be included in the spring configuration file

# configuration for the togglz Feature-Toggle-Framework
Small features

For small features, a simple query of the toggle state is often enough to achieve the desired functionality. To illustrate this, a simple example follows, which implements a toggle to limit the page size returned by the staffmanagement. See here for further details.

This is the current implementation to toggle the feature:

// Uncomment next line in order to limit the maximum page size for the staff member search
// criteria.limitMaximumPageSize(MAXIMUM_HIT_LIMIT);

To realise this more elegantly with Togglz, first an enum is required to configure the feature-toggle.

public enum StaffmanagementFeatures implements Feature {

  @Label("Limit the maximum page size for the staff members")
  LIMIT_STAFF_PAGE_SIZE;

  public boolean isActive() {
    return FeatureContext.getFeatureManager().isActive(this);
  }
}

To familiarize the Spring framework with the enum, add the following entry to the spring configuration file.

After that, the toggle can be used easily by calling the isActive() method of the enum.

if (StaffmanagementFeatures.LIMIT_STAFF_PAGE_SIZE.isActive()) {
  criteria.limitMaximumPageSize(MAXIMUM_HIT_LIMIT);
}

This way, you can easily switch the feature on or off by using the administration console at http://localhost:8081/devon4j-sample-server/togglz-console. If you are getting redirected to the login page, just sign in with any valid user (e.g. admin).

Extensive features

When implementing extensive features, you might want to consider using the strategy design pattern to maintain the overview of your software. The following example is an implementation of a feature which adds a 25% discount to all products managed by the offermanagement.

Therefore there are two strategies needed:
  1. Return the offers with the normal price

  2. Return the offers with a 25% discount

The implementation is pretty straightforward, so use this as a reference. Compare this for further details.

public PaginatedListTo<OfferEto> findOfferEtos(OfferSearchCriteriaTo criteria) {
  PaginatedListTo<OfferEntity> offers = getOfferDao().findOffers(criteria);

  if (OffermanagementFeatures.DISCOUNT.isActive()) {
    return getOfferEtosDiscount(offers);
  } else {
    return getOfferEtosNormalPrice(offers);
  }
}

// Strategy 1: Return the OfferEtos with the normal price
private PaginatedListTo<OfferEto> getOfferEtosNormalPrice(PaginatedListTo<OfferEntity> offers) {
  return mapPaginatedEntityList(offers, OfferEto.class);
}

// Strategy 2: Return the OfferEtos with the new, discounted price
private PaginatedListTo<OfferEto> getOfferEtosDiscount(PaginatedListTo<OfferEntity> offers) {
  offers = addDiscountToOffers(offers);
  return mapPaginatedEntityList(offers, OfferEto.class);
}

private PaginatedListTo<OfferEntity> addDiscountToOffers(PaginatedListTo<OfferEntity> offers) {
  for (OfferEntity oe : offers.getResult()) {
    Double oldPrice = oe.getPrice().getValue().doubleValue();

    // calculate the new price and round it to two decimal places
    BigDecimal newPrice = new BigDecimal(oldPrice * 0.75);
    newPrice = newPrice.setScale(2, RoundingMode.HALF_UP);

    oe.setPrice(new Money(newPrice));
  }
  return offers;
}

Guidelines for a successful use of feature-toggles

The use of feature-toggles requires a specified set of guidelines to maintain the overview on the software. The following is a collection of considerations and examples for conventions that are reasonable to use.

Minimize the number of toggles

When using too many toggles at the same time, it is hard to maintain a good overview of the system and things like finding bugs are getting much harder. Additionally, the management of toggles in the configuration interface gets more difficult due to the amount of toggles.

To prevent toggles from piling up during development, a toggle and the associated obsolete source code should be removed after the completion of the corresponding feature. In addition to that, the existing toggles should be revisited periodically to verify that these are still needed and therefore remove legacy toggles.

Consistent naming scheme

A consistent naming scheme is the key to a structured and easily maintainable set of features. This should include the naming of toggles in the source code and the appropriate naming of commit messages in the VCS. The following section contains an example for a useful naming scheme including a small example.

Every Feature-Toggle in the system has to get its own unique name without repeating any names of features, which were removed from the system. The chosen names should be descriptive names to simplify the association between toggles and their purpose. If the feature should be split into multiple sub-features, you might want to name the feature like the parent feature with a describing addition. If for example you want to split the DISCOUNT feature into the logic and the UI part, you might want to name the sub-features DISCOUNT_LOGIC and DISCOUNT_UI.

The entry in the togglz configuration enum should be named identically to the aforementioned feature name. The explicitness of feature names prevents a confusion between toggles due to using multiple enums.

Commit messages are very important for the use of feature-toggles and also should follow a predefined naming scheme. You might want to state the feature name at the beginning of the message, followed by the actual message, describing what the commit changes to the feature. An example commit message could look like the following:

DISCOUNT: Add the feature-toggle to the offermanagement implementation.

Mentioning the feature name in the commit message has the advantage, that you can search your git log for the feature name and get every commit belonging to the feature. An example for this using the tool grep could look like this.

$ git log | grep -C 4 DISCOUNT

commit 034669a48208cb946cc6ba8a258bdab586929dd9
Author: Florian Luediger <>
Date:   Thu Jul 7 13:04:37 2016 +0100

DISCOUNT: Add the feature-toggle to the offermanagement implementation.

To keep track of all the features in your software system, a platform like GitHub offers issues. When creating an issue for every feature, you can retrace, who created the feature and who is assigned to completing its development. When referencing the issue from commits, you also have links to all the relevant commits from the issue view.

Placement of toggle points

To maintain a clean codebase, you definitely want to avoid using the same toggle in different places in the software. There should be one single query of the toggle which should be able to toggle the whole functionality of the feature. If one single toggle point is not enough to switch the whole feature on or off, you might want to think about splitting the feature into multiple ones.

Use of fine-grained features

Bigger features in general should be split into multiple sub-features to maintain the overview on the codebase. These sub-features get their own feature-toggle and get implemented independently.


This section is about Java Enterprise Edition (JEE). According to our key principles we focus on open standards. For Java this means that we consider official standards from the Java Standard and Enterprise Edition as first choice. Therefore we also decided to recommend JAX-RS over SpringMVC, as the latter is proprietary. Only if an existing Java standard is not suitable for current demands, such as Java Server Faces (JSF), do we not officially recommend it (while you are still free to use it if you have good reasons to do so). In all other cases we officially suggest the according standard and use it in our guides, code-samples, sample application, modules, templates, etc. Examples for such standards are JPA, JAX-RS, JAX-WS, JSR330, JSR250, JAX-B, etc.


We designed everything based on standards to work with different technology stacks and servlet containers. However, we strongly encourage the use of spring and spring-boot. You are free to decide for something else, but here is a list of good reasons for our decision:

  • Up-to-date

    With spring you easily keep up to date with evolving technologies (microservices, reactive, NoSQL, etc.). Most application servers put you in a jail with old legacy technology. In many cases you are even forced to use a totally outdated version of Java (JVM/JDK). This may even cause severe IT-security vulnerabilities, though with expensive support you might get updates. Also with spring and open-source you need to be aware that for IT-security you need to update regularly, which can cost quite a lot of additional maintenance effort.

  • Development speed

    With spring-boot you can implement and especially test your individual logic very fast. Starting the app in your IDE is very easy, fast, and realistic (close to production). You can easily write JUnit tests that start up your server application to e.g. test calls to your remote services via HTTP quickly and easily. For application servers you need to bundle and deploy your app, which takes more time and limits you in various ways. We are aware that this has improved in the past, but spring also continuously improves and is always way ahead in this area. Further, with spring you have your configurations bundled together with the code in version control (still with the ability to handle different environments), while with application servers these are configured externally and cannot easily be tested during development.

  • Documentation

    Spring has an extremely open and active community. There is documentation for everything available for free on the web. You will find solutions to almost any problem on platforms like stackoverflow. If you have a problem you are only a google search away from your solution. This is very much different for proprietary application server products.

  • Helpful Exception Messages

    Spring is really helpful to developers when it comes to exception messages. If you do something wrong you get detailed and helpful messages that guide you to the problem or even the solution. This is not as great in application servers.

  • Future-proof

    Spring has evolved really well over time. Since its 1.0 release in 2004 spring has continuously been improved and always caught up with important trends and innovations. Even in critical situations, when the company behind it (interface21) was sold, spring went on perfectly. JEE went through a lot of trouble and crises. Just look at the EJB pain stories. This happened often in the past and also recently. See JEE 8 in crisis.

  • Free

    Spring and its ecosystem are free and open-source. It still perfectly integrates with commercial solutions for specific needs. Most application servers are commercial and cost a lot of money. As of today the ROI for this is questionable.

  • Fun

    If you go to conferences or ask developers you will see that spring is popular and fun. If new developers are forced to use an old application server product they will be less motivated or even get frustrated. Especially in today’s agile projects this is a very important aspect. In the end you will get into trouble with maintenance on the long run if you rely on a proprietary application server.

Of course the vendors of application servers will tell you a different story. This is simply because they still make a lot of money from their products. We do not get paid from application servers nor from spring. We are just developers who love to build great systems. A good reason for application servers is that they combine a set of solutions to particular aspects into one product that helps to standardize your IT. However, devonfw fills exactly this gap for the spring ecosystem in a very open and flexible way. Still, there is one important aspect that you need to understand and be aware of:

Some big companies decided for a specific application server as their IT strategy. They may have hundreds of apps running with this application server. All their operators and developers have learned a lot of specific skills for this product and are familiar with it. If you are implementing yet another (small) app in this context it can make sense to stick with this application server. However, also they have to be aware that with every additional app they increase their technical debt.

Messaging Services

Messaging Services provide an asynchronous communication mechanism between applications. Technically this is implemented using Apache Kafka.

Devonfw uses Spring Kafka as its kafka framework.

This guide explains how Spring Kafka is used in devonfw applications. It focuses on aspects that are special to devonfw. If you want to learn about spring-kafka itself, you should refer to Spring's reference documentation.

There is an example of a simple kafka implementation in the devon4j-kafka-employeeapp.

The devon4j-kafka library consists of:

  • Custom message processor with retry pattern

  • Monitoring support

  • Tracing support

  • Logging support

  • Configuration support for Kafka Producers, Consumers, brave tracer and message retry processing including defaults

How to use?

To use devon4j-kafka you have to add the required starter dependency, which is either "starter-kafka-sender" or "starter-kafka-receiver" from devon4j. These two starters take care of the required spring configuration. If you only want to produce messages, "starter-kafka-sender" is enough. For consuming messages you need "starter-kafka-receiver", which also includes "starter-kafka-sender".

To use devon4j-kafka message sender add the below dependency:
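A minimal sketch of this dependency, assuming the usual devon4j starter coordinates (the groupId is an assumption; verify it against your devon4j BOM):

```xml
<dependency>
  <!-- groupId assumed; check your devon4j version -->
  <groupId>com.devonfw.java.starters</groupId>
  <artifactId>devon4j-starter-kafka-sender</artifactId>
</dependency>
```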


It includes the tracer implementation from Spring Cloud Sleuth.

To use the devon4j-kafka message receiver configuration, loggers and the message retry processor for processing messages, add the below dependency:
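A minimal sketch of this dependency, assuming the usual devon4j starter coordinates (the groupId is an assumption; verify it against your devon4j BOM):

```xml
<dependency>
  <!-- groupId assumed; check your devon4j version -->
  <groupId>com.devonfw.java.starters</groupId>
  <artifactId>devon4j-starter-kafka-receiver</artifactId>
</dependency>
```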

Property Parameters

As written before, Kafka producer- and listener-specific configuration is done via properties classes. These classes provide useful defaults; at a minimum the following parameters have to be configured:

messaging.kafka.common.bootstrap-servers=kafka-broker:9092
messaging.kafka.listener.container.concurrency=<number of listener threads for each listener container>

All the configuration beans for devon4j-kafka are annotated with @ConfigurationProperties and use common prefixes to read the property values from application.properties or application.yml.


  @Bean
  @ConfigurationProperties(prefix = "messaging.kafka.producer")
  public KafkaProducerProperties messageKafkaProducerProperties() {
    return new KafkaProducerProperties();
  }
For producer and consumer the prefixes are messaging.kafka.producer and messaging.kafka.consumer, and for retry the prefix is messaging.retry.

We use the same properties defined by Apache Kafka or Spring Kafka. They are simply "mapped" to the above prefixes to allow easy access from your application properties. The JavaDoc provided in each of the devon4j-kafka property classes explains their use and which values have to be passed.

Naming convention for topics

For better management of the Kafka topics in your application portfolio we strongly advise to introduce a naming scheme for your topics. The schema may depend on the actual usage pattern of kafka. For contexts where kafka is used in a 1-to-1 communication scheme (not publish/subscribe) the following schema has proven useful in practice:

<application name>-<service name>-<version>-<service-operation>

To keep things easy and prevent problems we suggest to use only lowercase letters and hyphens, but no other special characters.
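The schema above can be sketched as a small helper (a hypothetical utility, not part of devon4j-kafka):

```java
public class TopicNames {

  /**
   * Builds a topic name following <application name>-<service name>-<version>-<service-operation>,
   * using only lowercase letters and hyphens as suggested above.
   */
  public static String topicName(String application, String service, String version, String operation) {
    return String.join("-", application, service, version, operation).toLowerCase();
  }
}
```

For example, topicName("employeeapp", "employee", "v1", "delete") yields the topic name employeeapp-employee-v1-delete used throughout this guide.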

Send Messages

As mentioned above, the 'starter-kafka-sender' dependency is required to use the MessageSender of devon4j-kafka.


The following example shows how to use the MessageSender and its methods to send a message to the kafka broker:


  @Inject
  private MessageSender<K, V> messageSender;

  public void sendMessageToKafka() {
    ProducerRecord<K, V> producerRecord = new ProducerRecord<>("topic-name", "message");
    this.messageSender.sendMessage(producerRecord); // one of the send methods of MessageSender
  }

There are multiple methods available on the MessageSender of devon4j-kafka. The ProducerListener will log the message sent to the kafka broker.

Receive Messages

To receive messages you have to define a listener. The listener is normally part of the service layer.

Architecture for Kafka services
Figure 9. Architecture for Kafka services

Import the following starter-kafka-receiver dependency to use the listener configurations and loggers from devon4j-kafka.


The listener is defined by implementing and annotating a method like in the following example:

  @KafkaListener(topics = "employeeapp-employee-v1-delete", groupId = "${messaging.kafka.consumer.groupId}", containerFactory = "kafkaListenerContainerFactory")
  public void consumer(ConsumerRecord<Object, Object> consumerRecord, Acknowledgment acknowledgment) {
    // user operation
    // acknowledge the listener after processing
    acknowledgment.acknowledge();
  }
The group id can be configured via the listener properties.


If there are multiple topics and multiple listeners, we suggest to specify the topic names directly on each listener instead of reading them from the property file. The container factory mentioned in @KafkaListener is provided by devon4j-kafka to create a default container factory with acknowledgement.

The default ack-mode is manual_immediate. It can be overridden as shown in the below example:


The other ack-mode values can be found in the spring-kafka documentation.


The retry pattern in devon4j-kafka is invoked when a particular exception (specified by the user in the configuration) is thrown while processing the consumed message. The general idea is to separate messages which could not be processed into dedicated retry-topics to allow fine control on how processing of the messages is retried and to not block newly arriving messages. The following sections describe retry handling in more detail.

Retry pattern in devon4j-kafka
Handling retry in devon4j-kafka

The retry pattern is included in the starter dependency of "starter-kafka-receiver".

The retry pattern is used by calling the method processMessageWithRetry(ConsumerRecord<K, V> consumerRecord, MessageProcessor<K, V> processor). See the below example:

@Inject
private MessageRetryOperations<K, V> messageRetryOperations;

@Inject
private DeleteEmployeeMessageProcessor<K, V> deleteEmployeeMessageProcessor;

@KafkaListener(topics = "employeeapp-employee-v1-delete", groupId = "${messaging.kafka.consumer.groupId}", containerFactory = "kafkaListenerContainerFactory")
public void consumer(ConsumerRecord<K, V> consumerRecord, Acknowledgment acknowledgment) {
  this.messageRetryOperations.processMessageWithRetry(consumerRecord, this.deleteEmployeeMessageProcessor);
  // acknowledge the listener
  acknowledgment.acknowledge();
}

An implementation of the MessageProcessor interface from devon4j-kafka is required to process the ConsumerRecord from the kafka broker. The implementation can look like the following example:

import com.devonfw.module.kafka.common.messaging.retry.api.client.MessageProcessor;

public class DeleteEmployeeMessageProcessor<K, V> implements MessageProcessor<K, V> {

  @Override
  public void processMessage(ConsumerRecord<K, V> message) {
    // process the message
  }
}

It works as follows:

  • The application gets a message from the topic.

  • If an error occurs during processing of the message, the message is written to the redelivery topic.

  • The message is acknowledged in the original topic.

  • The message is processed from the redelivery topic after a delay.

  • If processing fails again, it is retried until the retry count is exceeded.

  • When all retry attempts have failed, the message is logged and the payload of the ProducerRecord is deleted for log compaction, which is explained below.

Retry configuration and naming convention of redelivery topics.

The following properties should be added in the application.properties or application.yml file.

The retry pattern in devon4j-kafka is performed per topic, so it is mandatory to specify the properties for each topic, as in the following example:

# Back off policy properties for employeeapp-employee-v1-delete

# Retry policy properties for employeeapp-employee-v1-delete
messaging.retry.retry-policy.retryableExceptions.employeeapp-employee-v1-delete=<Class names of exceptions for which a retry should be performed>

# Back off policy properties for employeeapp-employee-v1-add

# Retry policy properties for employeeapp-employee-v1-add
messaging.retry.retry-policy.retryableExceptions.employeeapp-employee-v1-add=<Class names of exceptions for which a retry should be performed>

Note that in the above example the retry-policy and back-off-policy properties are repeated, once per topic, since the retry is performed with different values for the two topics. The topic name is appended at the end of the property name.

So the retry will be performed for each topic according to its configuration values.

If you want to provide the same default values for all topics, replace the topic name with default in the properties shown above.

For example,

# Default back off policy properties

# Default retry policy properties
messaging.retry.retry-policy.retryableExceptions.default=<Class names of exceptions for which a retry should be performed>

With properties like these, the same values are used for all topics and retry processing behaves the same for each topic.

All these property values are mapped to the corresponding properties classes and configured by the devon4j-kafka configuration class.

The MessageRetryContext in devon4j-kafka is used to perform the retry pattern with the properties from DefaultBackOffPolicyProperties and DefaultRetryPolicyProperties.

The two main properties of MessageRetryContext are nextRetry and retryUntil. Both are Instant values and are calculated internally using the properties given in DefaultBackOffPolicyProperties and DefaultRetryPolicyProperties.

You may change the behavior of this date calculation by providing your own implementation classes.

The retry topic uses the same name as the topic to which the message was published, with the suffix -retry appended once the message is consumed and handed over to retry processing.

If no topic is found in the consumed record, the default retry topic default-message-retry is used.

Retry topics

Devon4j-kafka uses a separate retry topic for each topic where retries occur. By default this topic is named <topic name>-retry. You may change this behavior by providing your own implementation of KafkaRecordSupport instead of the default implementation DefaultKafkaRecordSupport from devon4j-kafka.
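Applied to the topic names used in this guide, the resulting retry topics would look like:

```
employeeapp-employee-v1-delete  ->  employeeapp-employee-v1-delete-retry
employeeapp-employee-v1-add     ->  employeeapp-employee-v1-add-retry
```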

Devon4j-kafka enqueues a new message for each retry attempt. It is very important to configure your retry topics with log compaction enabled. More or less simplified, if log compaction is enabled Kafka keeps only one message per message key. Since each retry message has the same key, in fact only one message per retry attempt is stored. After the last retry attempt the message payload is removed from the message, so you do not keep unnecessary data in your topics.

Handling retry finally failed

By default, when the final retry attempt fails, the message is just logged and the payload of the ProducerRecord is deleted.

You can change this behavior by providing an implementation class for the corresponding interface, which has the two methods retryTimeout and retryFailedFinal.


We leverage Spring Cloud Sleuth for tracing in devon4j-kafka. It is used to trace the asynchronous process of kafka producing and consuming. In an asynchronous process it is important to maintain an id which is the same across all asynchronous steps. devonfw uses its own correlation-id (UUID) to track the process, but devon4j-kafka additionally uses the Brave tracer protocol.

This is a part of both starter dependencies starter-kafka-receiver and starter-kafka-sender.

There are two important properties which are automatically logged: trace-id and span-id. The trace-id is the same for the whole asynchronous process, while the span-id is unique for each asynchronous step.

How does devon4j-kafka handle tracing?

We inject the trace-id and span-id into the headers of the ProducerRecord that is published to the kafka broker. They are injected with the key traceId for the trace-id and spanId for the span-id. Along with these, the correlation-id (UUID) is injected into the record headers with the key correlationId.

So, when you consume a record from the kafka broker, these values can be found in the headers of the consumed record under these keys.

This makes it easy to track the asynchronous processing of messages.


devon4j-kafka provides multiple support classes to log the published and the consumed messages:

  • The class ProducerLoggingListener, which implements ProducerListener<K,V> from spring-kafka, logs the message as soon as it is published to the kafka broker.

  • The aspect class MessageListenerLoggingAspect, which is annotated with @Aspect and has a method logMessageprocessing annotated with @Around("@annotation(org.springframework.kafka.annotation.KafkaListener)&&args(kafkaRecord,..)"), intercepts the methods annotated with @KafkaListener and logs the message as soon as it is consumed.

  • The class MessageLoggingSupport has multiple methods to log different types of events like MessageReceived, MessageSent, MessageProcessed and MessageNotProcessed.

  • The class LoggingErrorHandler, which implements ErrorHandler from spring-kafka, logs the message when an error occurs while consuming a message. You may change this behavior by creating your own implementation of ErrorHandler.

Kafka health check using Spring Actuator

The spring config class MessageCommonConfig automatically provides a spring health indicator bean for kafka if the corresponding property is enabled. The health indicator checks for all configured topics whether a leader is available. If the topic property is missing, only the broker connection is checked. The timeout for the check (default 60s) may be changed via a property. If an application uses multiple broker(-clusters), a dedicated health indicator bean has to be configured in the spring config for each broker(-cluster).

The properties for the devon4j-kafka health check configure whether the check is enabled (true or false), the health check timeout in seconds and the topics to check (e.g. employeeapp-employee-v1-add).

These properties are provided with default values except topicsToCheck, and the health check will only happen when the corresponding property is enabled.

JSON Web Token (JWT)

devon4j-kafka supports authentication via JSON Web Tokens (JWT) out-of-the-box. To use it add a dependency to the devon4j-starter-security-jwt:
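A minimal sketch of this dependency, assuming the usual devon4j starter coordinates (the groupId is an assumption; verify it against your devon4j BOM):

```xml
<dependency>
  <!-- groupId assumed; check your devon4j version -->
  <groupId>com.devonfw.java.starters</groupId>
  <artifactId>devon4j-starter-security-jwt</artifactId>
</dependency>
```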


The authentication via JWT needs some configuration, e.g. a keystore to verify the token signature. This is explained in the JWT documentation.

To secure a message listener with JWT, add the @JwtAuthentication annotation:

  @JwtAuthentication
  @KafkaListener(topics = "employeeapp-employee-v1-delete", groupId = "${messaging.kafka.consumer.groupId}")
  public void consumer(ConsumerRecord<K, V> consumerRecord, Acknowledgment acknowledgment) {
    // process the record; the spring security context is initialized from the JWT
  }

With this annotation in place, each message will be checked for a valid JWT in a message header with the name Authorization. If a valid token is found, the spring security context will be initialized with the user roles and "normal" authorization, e.g. with @RolesAllowed, may be used. This is also demonstrated in the kafka sample application.

Using Kafka for internal parallel processing

Apart from the use of Kafka as a "communication channel" it is sometimes helpful to use Kafka internally for parallel processing:

Architecture for internal parallel processing with Kafka
Figure 10. Architecture for internal parallel processing with Kafka

This example shows a payment service which allows submitting a list of receipt IDs for payment. We assume that the payment itself takes a long time and should be done asynchronously and in parallel. The general idea is to put a message for each receipt to pay into a topic. This is done in the use case implementation in a first step when a REST call arrives. Also part of the use case is a listener which consumes the messages. For each message (e.g. payment to do) a processor is called, which actually does the payment via the use case. Since Kafka easily supports concurrency for the listeners, the payments will also be done in parallel. All features of devon4j-kafka, like retry handling, can also be used.
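The degree of parallelism for such internal processing can then be controlled via the listener concurrency property shown earlier, e.g.:

```
# number of listener threads, i.e. payments processed in parallel
messaging.kafka.listener.container.concurrency=4
```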


Messaging in Java is done using the JMS standard from JEE.


For messaging you need to choose a JMS provider such as:


As a receiver of messages receives data from other systems, it is located in the service-layer.

JMS Listener

A JmsListener is a class listening and consuming JMS messages. It should carry the suffix JmsListener and implement the MessageListener interface or have its listener method annotated with @JmsListener. This is illustrated by the following example:

public class BookingJmsListener /* implements MessageListener */ {

  @Inject
  private Bookingmanagement bookingmanagement;

  @Inject
  private MessageConverter messageConverter;

  @JmsListener(destination = "BOOKING_QUEUE", containerFactory = "jmsListenerContainerFactory")
  public void onMessage(Message message) {
    try {
      BookingTo bookingTo = (BookingTo) this.messageConverter.fromMessage(message);
      // delegate the received transfer-object to the business logic (bookingmanagement)
    } catch (MessageConversionException | JMSException e) {
      throw new InvalidMessageException(message);
    }
  }
}
The sending of JMS messages is handled like any other sending of data, such as kafka messages or RPC calls via REST using service-client, gRPC, etc. This will typically happen directly from a use-case in the logic-layer. However, the technical complexity of the communication and protocols itself shall be hidden from the use-case and not be part of the logic layer. With spring we can simply use JmsTemplate to do that.


Monitoring is a very comprehensive topic. For devon4j we focus on selected core topics which are most important when developing production-ready applications. On a high level view we strongly suggest to separate the application to be monitored from the monitoring system itself. The monitoring system covers aspects like

  • Collect monitoring information

  • Aggregate, process and visualize data, e.g. in dashboards

  • Provide alarms

  • …​

In distributed systems it is crucial that a monitoring system provides a central overview over all applications in your application landscape. So it is not feasible to provide a monitoring system as part of your application. Your application is responsible for providing information to the monitoring system.

How to provide monitoring information
Java Management Extensions (JMX)

JMX is the official java monitoring solution. It is part of the JDK. Your application may provide monitoring information or receive monitoring related commands via MBeans. There is a huge amount of information about JMX available. A good starting point might be JMX on wikipedia. Traditionally JMX uses RMI for communication. In many environments HTTP(S) is preferred, so be careful on deciding if JMX is the right solution. Alternatively your application could provide a monitoring API via REST.


Since REST APIs are very popular it is quite natural to provide monitoring information via REST. If your application already uses REST and there is no JMX/RMI based monitoring solution in place, it is often a better approach to provide monitoring APIs via REST than introducing a new protocol with JMX.


Some aspects of monitoring are covered by "logging". A typical case might be the requirement to "monitor" the application REST APIs. For that it is perfectly fine to log the performance of all (or some) requests and ship this information to a logging system like Graylog. So please carefully read the logging guide. To allow efficient processing of those logs you should use JSON based logs.

Which monitoring information to provide?

It is not possible to provide a definitive list of the monitoring information required for your application, but we can give general advice on what type of information your application should provide at a minimum.

General information

It is often useful if your application provides basic information about itself. This should cover

  • The name of the application

  • The version (or buildno., or commit-id, …​) of the application

This is especially useful in complex environments, e.g. to verify that deployments went correctly.
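With spring-boot actuator this information can be provided via the info endpoint, e.g. with properties like the following (the keys below are illustrative):

```
info.app.name=employeeapp
info.app.version=1.0.0
```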


Metrics means providing key figures about your application like

  • performance key figures

    • request duration

    • queue lengths

    • duration of database queries

    • ..

  • business related information

    • number of successful / unsuccessful requests

    • size of result sets

    • worth of shopping baskets

    • ..

  • Technical key figures

    • JVM heap usage

    • cache sizes

    • (database) pool usage

    • …​

  • …​

Remember that processing of this data should be done mainly in the monitoring system. You might have noticed that there are different types of metrics: those that represent current values (like JVM heap usage, queue length, …​), and others that are based on (timed) events (like the duration of requests). Handling of the different types of metrics might differ. For handling events you may:

  • Write log statements for each (or a sample of) event. These logs must then be shipped to your monitoring systems.

  • Send data for the event via an API of your monitoring system

  • Provide a REST API (or JMX MBeans) with pre-aggregated key figures, which is periodically polled by your monitoring system. This solution is a bit inferior since the aggregation is part of your application and might not fit to the desired visualization in your monitoring systems.

For actual values you may:

  • Write them periodically to your log. These logs must then be shipped to your monitoring systems.

  • Send them periodically via an API of your monitoring systems

  • Provide a REST API (or JMX MBeans) which is periodically polled by your monitoring system.
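The pre-aggregation option can be sketched in plain Java (a hypothetical example; class and method names are illustrative, and real projects would typically use a metrics library):

```java
import java.util.concurrent.atomic.LongAdder;

/**
 * Pre-aggregates request durations so a monitoring system can
 * periodically poll key figures (e.g. via a REST API or a JMX MBean).
 */
public class RequestMetrics {

  private final LongAdder totalDurationMs = new LongAdder();

  private final LongAdder requestCount = new LongAdder();

  /** Records one (timed) event, e.g. a completed REST request. */
  public void recordRequest(long durationMs) {
    this.totalDurationMs.add(durationMs);
    this.requestCount.increment();
  }

  /** Pre-aggregated key figure: average request duration in milliseconds. */
  public long averageDurationMs() {
    long count = this.requestCount.sum();
    return count == 0 ? 0 : this.totalDurationMs.sum() / count;
  }

  /** Pre-aggregated key figure: total number of requests. */
  public long count() {
    return this.requestCount.sum();
  }
}
```

As noted above, such in-application aggregation is a bit inferior to shipping raw events, since the aggregation might not fit the desired visualization in your monitoring system.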

Health (Watchdog)

For monitoring a complex application landscape it is crucial to have an exact overview if all applications are up and running. So your application should offer an API for the monitoring system which allows to easily check if the application is alive. Often this alive information is polled by the monitoring system with a kind of watchdog. The health check should include checks if the application is working "correctly". For that we suggest to check if all required neighbour systems and infrastructure components are usable:

  • Check if your database can be queried (with a dummy query)

  • Check if you can reach your messaging system

  • Check if you can reach all your neighbour systems, e.g. by querying their info-endpoint

You should be very careful to not cascade those requests, e.g. your system should only test its direct neighbours. These tests should not lead to additional tests in those systems.

The healthcheck should return a simple OK/NOK result for use in dashboards, but additionally include detailed results for each check.
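The OK/NOK aggregation can be sketched as follows (a hypothetical example; class and method names are illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Aggregates the detailed results of the individual checks
 * (database, messaging, neighbour systems, ...) into a simple
 * OK/NOK overall status for use in dashboards.
 */
public class HealthReport {

  private final Map<String, Boolean> details = new LinkedHashMap<>();

  /** Adds the detailed result of one check. */
  public void addCheck(String name, boolean ok) {
    this.details.put(name, ok);
  }

  /** The overall status is OK only if every single check succeeded. */
  public String overallStatus() {
    return this.details.containsValue(false) ? "NOK" : "OK";
  }

  /** The detailed results for each check. */
  public Map<String, Boolean> getDetails() {
    return this.details;
  }
}
```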

Implementation with Spring Boot Actuator

To implement a monitoring API for your systems we suggest to use Spring Boot Actuator. Actuator offers APIs which provide monitoring information including metrics via HTTP and JMX. It also contains a framework to implement health checks. Please consult the original documentation for information about how to use it. Basically to use it, add the following dependency to the pom.xml of your application core:
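With spring-boot this is the well-known actuator starter:

```xml
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
```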


There will be several endpoints with monitoring information available out-of-the-box. We strongly advise to check carefully which information is required in your context; normally this is info, health and metrics. Be careful not to expose any security related information via this mechanism (e.g. by exposing those endpoints externally).

To make the info-endpoint useful you need to provide information to actuator. A good way to achieve this is by using the provided maven module.

For first steps it might be useful to deactivate security for the actuator endpoints (this is just for testing, never release it!). This can be accomplished by implementing the following class:

public class WebSecurityConfig extends BaseWebSecurityConfig {

  @Override
  public void configure(WebSecurity web) throws Exception {
    // for testing only: disable security for the actuator endpoints
    web.ignoring().antMatchers("/actuator/**");
  }
}


Devonfw additions

Devonfw includes the following additions for Spring boot actuator:

Integration of monitoring information

To loadbalance HTTP requests the loadbalancer needs to know which instances of the desired application are available and functioning. Often loadbalancers support reacting on the HTTP status code of an HTTP request to the service. The loadbalancer will periodically poll the service to find out if it is available or not. To configure this you may use the healthcheck of the service to find out if the instance is functioning correctly or not.


Docker supports a healthcheck. You may use a simple local curl to your application here to find out if the service is healthy or not. But be careful: often unhealthy containers are automatically restarted. If you use the health information of your application, this may lead to undesired effects. Since the health check relies on querying all neighbour systems and infrastructure components, applications often become unhealthy because a 3rd system has problems. Restarting the application itself will not heal the problem and is inexpedient. So generally it is better to query the info endpoint of your application to just check if the service itself is up and running.
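A minimal sketch of such a Docker healthcheck, assuming the application exposes the spring-boot actuator info endpoint on port 8080 (path and port are assumptions):

```
# Dockerfile healthcheck: only checks that the service itself is up,
# not the health of its neighbour systems
HEALTHCHECK --interval=30s --timeout=5s \
  CMD curl -f http://localhost:8080/actuator/info || exit 1
```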

Full Text Search

If you want to offer your users fast and simple searches with just a single search field (like in google), you need full text indexing and search support.


Maybe you also want to use native features of your database.

Best Practices


Last updated 2021-07-26 17:24:22 UTC