10. Layers

10.1. Client Layer

There are various technical approaches to building GUI clients. The devonfw proposes rich clients that connect to the server via data-oriented services (e.g. using REST with JSON). In general, we have to distinguish among the following types of clients:

  • web clients

  • native desktop clients

  • (native) mobile clients

Our main focus is on web clients. In our sample application my-thai-star we offer a responsive web client based on Angular following devon4ng that integrates seamlessly with the back ends of my-thai-star available for Java using devon4j as well as .NET/C# using devon4net. For building Angular clients, read the separate devon4ng guide.

10.1.1. JavaScript for Java Developers

In order to get you started with client development as a Java developer, we give you some hints. They can also be helpful if you are an experienced JavaScript developer who wants to learn Java. First, you need to understand that the JavaScript ecosystem is as large as the Java ecosystem and that developing a modern web client requires a lot of knowledge. The following table helps you as an experienced developer to get an overview of the tools, configuration files, and other related aspects of the new world to learn. It also helps you to map concepts between the two ecosystems. Please note that we list the tools recommended by devonfw here (we are aware that there are alternatives not listed here, such as gradle, grunt, bower, etc.).

Table 20. Aspects in JavaScript and Java ecosystem

Aspect                | JavaScript                                         | Java
----------------------|----------------------------------------------------|------------------------------------
Language              | TypeScript (extends JavaScript)                    | Java
Runtime               | nodejs (or web-browser)                            | JVM
Dependency management | yarn (or npm)                                      | maven
Repository            | npm repo                                           | maven central (repo search)
Build tool            | gulp                                               | maven (or more comparable ant)
Build configuration   | gulpfile.js (and gulp/*)                           | pom.xml (or build.xml)
Clean cmd             | gulp clean                                         | mvn clean
Build cmd             | yarn install && gulp build:dist                    | mvn install (see lifecycle)
Test cmd              | gulp test                                          | mvn test
Unit testing          |                                                    | junit / surefire
Browser testing       | karma-*, PhantomJs for browser emulation           | AssertJ, *Unit and spring-test, etc.
Code coverage         | karma-coverage (and remap-istanbul for TypeScript) |
IDE                   | MS VS Code or IntelliJ                             | Eclipse or IntelliJ
Framework             | Angular (etc.)                                     | Spring (etc.)

10.2. Service Layer

The service layer is responsible for exposing functionality made available by the logic layer to external consumers over a network via technical protocols.

10.2.1. Types of Services

We distinguish between the following types of services:

  • External Services
    are used for communication between different companies, vendors, or partners.

  • Internal Services
    are used for communication between different applications in the same application landscape of the same vendor.

    • Back-end Services
      are internal services between Java back-end components typically with different release and deployment cycles (if not Java consider this as external service).

    • JS-Client Services
      are internal services provided by the Java back-end for JavaScript clients (GUI).

    • Java-Client Services
      are internal services provided by the Java back-end for a native Java client (JavaFx, EclipseRcp, etc.).

The choices for technology and protocols will depend on the type of service. The following table gives a guideline for aspects according to the service types.

Table 21. Aspects according to service-type

Aspect               | External Service | Back-end Service | JS-Client Service | Java-Client Service
---------------------|------------------|------------------|-------------------|--------------------
Versioning           | required         | required         | not required      | not required
Interoperability     | mandatory        | not required     | implicit          | not required
Recommended Protocol | SOAP or REST     | REST             | REST + JSON       | REST

10.2.2. Versioning

For services consumed by other applications we use versioning to prevent incompatibilities between applications when deploying updates. This is done by the following conventions:

  • We define a version number and prefix it with v (e.g. v1).

  • If we support previous versions, we use those version numbers as part of the Java package defining the service API (e.g. com.foo.application.component.service.api.v1).

  • We use the version number as part of the service name in the remote URL (e.g. https://application.foo.com/services/rest/component/v1/resource)

  • Whenever breaking changes are made to the API, create a separate version of the service and increment the version (e.g. v1 → v2). The implementations of the different versions of the service contain compatibility code and delegate to the same unversioned use-case of the logic layer whenever possible.

  • For maintenance and simplicity, avoid keeping more than one previous version.
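The URL convention above can be illustrated with a small helper. This is a hypothetical sketch, not part of devon4j; all names are chosen for illustration:

```java
// Hypothetical helper illustrating how the version number becomes part of
// the remote service URL. Not part of devon4j; names are illustrative only.
public class ServiceUrls {

  // Composes the versioned URL for a REST resource of a business component.
  public static String versionedUrl(String baseUrl, String component, int version, String resource) {
    return baseUrl + "/services/rest/" + component + "/v" + version + "/" + resource;
  }

  public static void main(String[] args) {
    // → https://application.foo.com/services/rest/component/v1/resource
    System.out.println(versionedUrl("https://application.foo.com", "component", 1, "resource"));
  }
}
```

When incrementing the version, only this one path segment (and the API package) changes, so old and new clients can coexist during a migration period.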

10.2.3. Interoperability

For services that are consumed by clients with different technology, interoperability is required. This is addressed by selecting the right protocol, following protocol-specific best practices, and adhering to our considerations, especially simplicity.

10.2.4. Service Considerations

The term service is quite generic and therefore easily misunderstood. It is a unit exposing coherent functionality via a well-defined interface over a network. For the design of a service, we consider the following aspects:

  • self-contained
    The entire API of the service shall be self-contained and have no dependencies on other parts of the application (other services, implementations, etc.).

  • idempotence
    E.g. creation of the same master-data entity has no effect (no error)

  • loosely coupled
    Service consumers have minimum knowledge and dependencies on the service provider.

  • normalized
    complete, no redundancy, minimal

  • coarse-grained
    Service provides rather large operations (save entire entity or set of entities rather than individual attributes)

  • atomic
    Process individual entities (for processing large sets of data, use a batch instead of a service)

  • simplicity
    avoid polymorphism, RPC methods with unique name per signature and no overloading, avoid attachments (consider separate download service), etc.
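To make the idempotence aspect above concrete, here is a minimal in-memory sketch (all names are hypothetical) where creating the same master-data entity twice has no effect and causes no error:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of an idempotent save operation for a master-data entity:
// saving the same entity twice has no effect (no error, no duplicate).
public class CountryService {

  private final Map<String, String> countriesByIsoCode = new HashMap<>();

  // Returns true if the entity was newly created, false if it already existed.
  public boolean saveCountry(String isoCode, String name) {
    return this.countriesByIsoCode.putIfAbsent(isoCode, name) == null;
  }

  public int size() {
    return this.countriesByIsoCode.size();
  }
}
```

Calling saveCountry twice with the same data leaves the store unchanged, which is what makes a retry of the service call safe.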

10.2.5. Security

Your services are the major entry point to your application. Hence security considerations are important here.

10.3. Logic Layer

The logic layer is the heart of the application and contains the main business logic. According to our business architecture we divide an application into components. For each component the logic layer defines a component-facade. According to the complexity you can further divide this into individual use-cases. It is very important that you follow the links to understand the concept of component-facade and use-case in order to properly implement your business logic.

10.3.1. Responsibility

The logic layer is responsible for implementing the business logic according to the specified functional demands and requirements. It thereby creates the actual value of the application. The following additional aspects are also part of its responsibility.

10.3.2. Security

The logic layer is the heart of the application. It is also responsible for authorization and hence security is important here. Every method exposed in an interface needs to be annotated with an authorization check, stating what role(s) a caller must provide in order to be allowed to make the call. The authorization concept is described here.

Direct Object References

A common security threat are Insecure Direct Object References. This gives you two options:

  • avoid direct object references at all

  • ensure that direct object references are secure

Especially when using REST, direct object references via technical IDs are common practice. This implies that you have proper authorization in place. This is especially tricky when your authorization does not only rely on the type of the data and corresponding static permissions but also on the data itself. Vulnerabilities for this threat can easily arise from design flaws or inadvertence. Here is an example from our sample application:

We have a generic use-case to manage BLOBs. In the first place it makes sense to write a generic REST service to load and save these BLOBs. However, the permission to read or even update such a BLOB depends on the business object hosting the BLOB. Therefore, such a generic REST service would open the door for this OWASP A4 vulnerability. To solve this in a secure way, you need individual services for each hosting business object to manage the linked BLOB and have to check permissions based on the parent business object. In this example the ID of the BLOB would be the direct object reference and the ID of the business object (and a BLOB property indicator) would be the indirect object reference.
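The following sketch illustrates this pattern with an in-memory stand-in (all names and the permission model are illustrative assumptions, not the actual my-thai-star code): the attachment is only reachable via the ID of its hosting business object, and permissions are checked on that parent object.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of avoiding an insecure direct object reference: the BLOB is only
// accessible via its hosting business object, whose permission is checked.
public class OfferAttachmentService {

  // BLOBs keyed by the ID of the hosting business object (indirect reference);
  // the technical BLOB ID itself is never exposed to the client.
  private final Map<Long, byte[]> blobsByOfferId = new HashMap<>();
  private final Map<Long, String> offerOwners = new HashMap<>();

  public OfferAttachmentService() {
    // illustrative test data
    this.offerOwners.put(4711L, "alice");
    this.blobsByOfferId.put(4711L, new byte[] {1, 2, 3});
  }

  // Permission is checked on the hosting business object, not on the BLOB.
  public byte[] findAttachment(String user, long offerId) {
    if (!user.equals(this.offerOwners.get(offerId))) {
      throw new SecurityException("user " + user + " may not read attachments of offer " + offerId);
    }
    return this.blobsByOfferId.get(offerId);
  }
}
```

A generic "load BLOB by ID" service could not perform this parent-based check and would therefore be vulnerable.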

10.3.3. Component Facade

For each component of the application the logic layer defines a component facade. This is an interface defining all business operations of the component. It carries the name of the component («Component») and has an implementation named «Component»Impl (see implementation).


The component facade interface defines the logic API of the component and has to be business oriented. This means that all parameters and return types of all methods from this API have to be business transfer-objects, datatypes (String, Integer, MyCustomerNumber, etc.), or collections of these. The API may also only access objects of other business components listed in the (transitive) dependencies of the business-architecture.

Here is an example how such an API may look like:

public interface Bookingmanagement {

  BookingEto findBooking(Long id);

  BookingCto findBookingCto(Long id);

  Page<BookingEto> findBookingEtos(BookingSearchCriteriaTo criteria);

  void approveBooking(BookingEto booking);

}


The implementation of an interface from the logic layer (a component facade or a use-case) carries the name of that interface with the suffix Impl and is annotated with @Named. An implementation typically needs access to the persistent data. This is done by injecting the corresponding repository (or DAO). According to data-sovereignty, only repositories of the same business component may be accessed directly. For accessing data from other components the implementation has to use the corresponding API of the logic layer (the component facade). Further, it shall not expose persistent entities from the dataaccess layer and has to map them to transfer objects using the bean-mapper.

@Named
public class BookingmanagementImpl extends AbstractComponentFacade implements Bookingmanagement {

  @Inject
  private BookingRepository bookingRepository;

  @Override
  public BookingEto findBooking(Long id) {

    LOG.debug("Get Booking with id {} from database.", id);
    BookingEntity entity = this.bookingRepository.findOne(id);
    return getBeanMapper().map(entity, BookingEto.class);
  }
  // ... further methods omitted ...
}

As you can see, entities (BookingEntity) are mapped to corresponding ETOs (BookingEto). Further details about this can be found in bean-mapping.
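Conceptually, the bean-mapper performs a property-by-property copy from the persistent entity to the transfer object so that the entity never leaves the logic layer. A simplified manual sketch (with stub classes, not the actual devon4j bean-mapper API) looks like this:

```java
// Simplified sketch of what the bean-mapper does for the example above:
// copying the persistent entity's properties into a transfer object so the
// entity itself never crosses the logic layer's API. Classes are minimal stubs.
public class BeanMappingSketch {

  public static class BookingEntity { // persistent entity (dataaccess layer)
    public Long id;
    public String comment;
  }

  public static class BookingEto { // entity transfer object (logic layer API)
    public Long id;
    public String comment;
  }

  // Copies the entity's properties into a new ETO.
  public static BookingEto map(BookingEntity entity) {
    if (entity == null) {
      return null;
    }
    BookingEto eto = new BookingEto();
    eto.id = entity.id;
    eto.comment = entity.comment;
    return eto;
  }
}
```

The real bean-mapper does this generically via reflection or generated code, but the effect is the same: the caller only ever sees detached transfer objects.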

For complex applications, the component facade may consist of many different methods. For better maintainability in such cases it is recommended to split it into separate use-cases that are then only aggregated by the component facade.

10.3.4. UseCase

A use-case is a small unit of the logic layer responsible for an operation on a particular entity (business object). It is defined by an interface (API) with its according implementation. Following our architecture-mapping, use-cases are named Uc«Operation»«BusinessObject»[Impl]. The prefix Uc stands for use-case and allows to easily find and identify them in your IDE. The «Operation» stands for a verb that is operated on the entity identified by «BusinessObject». For CRUD we use the standard operations Find and Manage that can be generated by CobiGen. This also separates read and write operations (e.g. if you want to do CQRS, or to configure read-only transactions for read operations).


The UcFind«BusinessObject» defines all read operations to retrieve and search the «BusinessObject». Here is an example:

public interface UcFindBooking {

  BookingEto findBooking(Long id);

  BookingCto findBookingCto(Long id);

  Page<BookingEto> findBookingEtos(BookingSearchCriteriaTo criteria);

  Page<BookingCto> findBookingCtos(BookingSearchCriteriaTo criteria);

}


The UcManage«BusinessObject» defines all CRUD write operations (create, update and delete) for the «BusinessObject». Here is an example:

public interface UcManageBooking {

  BookingEto saveBooking(BookingEto booking);

  boolean deleteBooking(Long id);

}


Any other non CRUD operation Uc«Operation»«BusinessObject» uses any other custom verb for «Operation». Typically such custom use-cases only define a single method. Here is an example:

public interface UcApproveBooking {

  void approveBooking(BookingEto booking);

}


For the implementation of a use-case the same rules apply that are described for the component-facade implementation.

However, when following the use-case approach, your component facade simply changes to:

public interface Bookingmanagement extends UcFindBooking, UcManageBooking, UcApproveBooking {
}

Where the implementation only delegates to the use-cases and gets entirely generated by CobiGen:

@Named
public class BookingmanagementImpl implements Bookingmanagement {

  @Inject
  private UcFindBooking ucFindBooking;

  @Inject
  private UcManageBooking ucManageBooking;

  @Inject
  private UcApproveBooking ucApproveBooking;

  @Override
  public BookingEto findBooking(Long id) {
    return this.ucFindBooking.findBooking(id);
  }

  @Override
  public Page<BookingEto> findBookingEtos(BookingSearchCriteriaTo criteria) {
    return this.ucFindBooking.findBookingEtos(criteria);
  }

  @Override
  public BookingEto saveBooking(BookingEto booking) {
    return this.ucManageBooking.saveBooking(booking);
  }

  @Override
  public boolean deleteBooking(Long id) {
    return this.ucManageBooking.deleteBooking(id);
  }

  @Override
  public void approveBooking(BookingEto booking) {
    this.ucApproveBooking.approveBooking(booking);
  }
}

This approach is also illustrated by the following UML diagram:

Component facade with use cases.
Internal use case

Sometimes a component with multiple related entities and many use-cases needs to reuse business logic internally. Of course this can be exposed as an official use-case API, but that implies using transfer-objects (ETOs) instead of entities. In some cases this is undesired, e.g. for better performance by preventing unnecessary mapping of entire collections of entities. In the first place you should try to use abstract base implementations providing reusable methods the actual use-case implementations can inherit from. If your business logic is even more complex and you have multiple aspects of business logic to share and reuse but also run into multi-inheritance issues, you may also create use-cases that have their interface located in the impl scope package right next to the implementation (or you may just skip the interface). In such a case you may define methods that directly take or return entity objects. To avoid confusion with regular use-cases we recommend adding the Internal suffix to the type name, leading to Uc«Operation»«BusinessObject»Internal[Impl].

Injection issues

Technically now you have two implementations of your use-case:

  • the direct implementation of the use-case (Uc*Impl)

  • the component facade implementation («Component»Impl)

When injecting a use-case interface this could cause ambiguities. This is addressed as follows:

  • In the component facade implementation («Component»Impl) Spring is smart enough to resolve the ambiguity, as it assumes that a Spring bean never wants to inject itself (it can already be accessed via this). Therefore only the proper use-case implementation remains as a candidate and the injection works as expected.

  • In all other places simply always inject the component facade interface instead of the use-case.

In unlucky constellations you may hit this exception:

org.springframework.beans.factory.BeanCurrentlyInCreationException: Error creating bean with name 'uc...Impl': Bean with name 'uc...Impl' has been injected into other beans [...Impl] in its raw version as part of a circular reference, but has eventually been wrapped. This means that said other beans do not use the final version of the bean. This is often the result of over-eager type matching - consider using 'getBeanNamesOfType' with the 'allowEagerInit' flag turned off, for example.

To get rid of this error, you need to annotate the according implementation with @Lazy in addition to @Named.

10.4. Data-Access Layer

The data-access layer is responsible for all outgoing connections to access and process data. This is mainly about accessing data from a persistent data-store but also about invoking external services.

10.4.1. Database

You need to make your choice for a database. Options are documented here.

The classical approach is to use a Relational Database Management System (RDBMS). In such a case we strongly recommend following our JPA Guide. Some NoSQL databases are supported by spring-data, so you can consider the repository guide.

10.5. Batch Layer

We understand batch processing as bulk-oriented, non-interactive, typically long running execution of tasks. For simplicity we use the term batch or batch job for such tasks in the following documentation.

devonfw uses Spring Batch as batch framework.

This guide explains how Spring Batch is used in devonfw applications. It focuses on aspects that are special to devonfw; if you want to learn about Spring Batch itself, you should refer to Spring's reference documentation.

There is an example of a simple batch implementation in the my-thai-star batch module.

In this chapter we will describe the overall architecture (especially concerning layering) and how to administer batches.

10.5.1. Layering

Batches are implemented in the batch layer. The batch layer is responsible for batch processes, whereas the business logic is implemented in the logic layer. Compared to the service layer you may understand the batch layer just as a different way of accessing the business logic. From a component point of view each batch is implemented as a subcomponent in the corresponding business component. The business component is defined by the business architecture.

Let’s make an example for that. The sample application implements a batch for exporting ingredients. This ingredientExportJob belongs to the dishmanagement business component. So the ingredientExportJob is implemented in the following package:


Batches should invoke use cases in the logic layer for doing their work. Only "batch specific" technical aspects should be implemented in the batch layer.

Example: For a batch, which imports product data from a CSV file, this means that all code for actually reading and parsing the CSV input file is implemented in the batch layer. The batch calls the use case "create product" in the logic layer for actually creating the products for each line read from the CSV input file.

Directly accessing data access layer

In practice it is not always appropriate to create use cases for every bit of work a batch should do. Instead, the data access layer can be used directly. An example is a typical data-retention batch which deletes outdated data. Often this is done by invoking a single SQL statement. It is appropriate to implement that SQL in a Repository or DAO method and call this method directly from the batch. But be aware that this pattern is a simplification which could lead to business logic scattered across different layers, reducing the maintainability of your application. It is a typical design decision you have to take when designing your specific batches.
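As a sketch of such a data-retention step, the following simulates the bulk delete in memory. In a real repository this would be a single SQL DELETE statement; all names here are illustrative:

```java
import java.time.LocalDate;
import java.util.List;

// In-memory sketch of a data-retention bulk delete. A real Repository/DAO
// method would issue one SQL statement, e.g.
//   DELETE FROM MESSAGE WHERE CREATED < ?
public class MessageRetention {

  public static class Message {
    public final long id;
    public final LocalDate created;

    public Message(long id, LocalDate created) {
      this.id = id;
      this.created = created;
    }
  }

  // Deletes all messages created before the cutoff date; returns the count.
  public static int deleteOutdatedMessages(List<Message> messages, LocalDate cutoff) {
    int before = messages.size();
    messages.removeIf(message -> message.created.isBefore(cutoff));
    return before - messages.size();
  }
}
```

The batch then only logs the returned count; no per-entity business logic is involved, which is what makes the repository shortcut acceptable here.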

10.5.2. Project structure and packaging

Batches should be implemented in a separate Maven module to keep the application core free of batch dependencies. The batch module includes a dependency on the application core-module to allow reuse of the use cases, DAOs, etc. Additionally the batch module has dependencies on the required Spring Batch jars:





To allow an easy start of the batches from the command line it is advised to create a bootified jar for the batch module by adding the following to the pom.xml of the batch module:

      <!-- Create bootified jar for batch execution via command line.
           Your application's spring boot app is used as main-class. -->

10.5.3. Implementation

Most of the details about the implementation of batches are described in the Spring Batch documentation. There is nothing special about implementing batches in devonfw. You will find an easy example in my-thai-star.

10.5.4. Starting from command line

Devonfw advises to start batches via command line. This is most familiar to many ops teams and allows easy integration into existing schedulers. In general batches are started with the following command:

java -jar <app>-batch-<version>-bootified.jar --spring.main.web-application-type=none --spring.batch.job.enabled=true --spring.batch.job.names=<myJob> <params>
Parameter                                | Explanation
-----------------------------------------|-------------------------------------------------------------
--spring.main.web-application-type=none  | This disables the web app (e.g. Tomcat)
--spring.batch.job.names=<myJob>         | This specifies the name of the job to run. If you leave this out, ALL jobs will be executed, which probably does not make too much sense.
<params>                                 | (Optional) additional parameters which are passed to your job

This launches your normal spring boot app, disables the web application part, and runs the designated job via Spring Boot's org.springframework.boot.autoconfigure.batch.JobLauncherCommandLineRunner.


In the real world, scheduling batches is not as simple as it might first look.

  • Multiple batches have to be executed in order to achieve complex tasks. If one of these batches fails, further execution has to be stopped and operations should be notified, for example.

  • Input files or those created by batches have to be copied from one node to another.

  • Scheduling batch execution can easily become complex (quarterly jobs, running a job on the first workday of a month, …)

For devonfw we propose that the batches themselves should not mess around with details of scheduling. Likewise your application should not do so either. This complexity should be externalized to a dedicated batch administration service or scheduler. This could be a complex product or a simple tool like cron. We propose Rundeck as an open source job scheduler.

This gives full control to operations to choose the solution which fits best into existing administration procedures.

10.5.5. Handling restarts

If you start a job with the same parameter set after a failed run (BatchStatus.FAILED), a restart will occur. In many cases your batch should then not reprocess all the items it already processed in the previous runs. For that you need some logic to start at the desired offset. There are different ways to implement such logic:

  • Marking processed items in the database in a dedicated column

  • Write all IDs of items to process in a separate table as an initialization step of your batch. You can then delete IDs of already processed items from that table during the batch execution.

  • Storing restart information in Spring's ExecutionContext (see below)

Using spring batch ExecutionContext for restarts

By implementing the ItemStream interface in your ItemReader or ItemWriter you may store information about the batch progress in the ExecutionContext. You will find an example for that in the CountJob in My Thai Star.

Additional hint: It is important that the bean definition methods of your ItemReader/ItemWriter declare return types implementing ItemStream (and not just ItemReader or ItemWriter alone). For that purpose, the ItemStreamReader and ItemStreamWriter interfaces are provided.
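To illustrate the idea (this is not the actual CountJob code), the following sketch uses a plain Map as a stand-in for Spring Batch's persisted ExecutionContext: the reader restores its offset on creation and records progress after each read, so a restart skips already processed items.

```java
import java.util.List;
import java.util.Map;

// Sketch of restartable batch progress. A plain Map stands in for Spring
// Batch's ExecutionContext, which is persisted between runs. The reader
// stores the index of the last processed item and resumes from there.
public class RestartableReader {

  private final Map<String, Object> executionContext; // stand-in for ExecutionContext
  private final List<String> items;
  private int index;

  public RestartableReader(List<String> items, Map<String, Object> executionContext) {
    this.items = items;
    this.executionContext = executionContext;
    // like ItemStream.open(): restore the offset of a previous (failed) run
    this.index = (int) executionContext.getOrDefault("reader.index", 0);
  }

  public String read() {
    if (this.index >= this.items.size()) {
      return null; // end of input
    }
    String item = this.items.get(this.index++);
    // like ItemStream.update(): record progress so a restart can skip this item
    this.executionContext.put("reader.index", this.index);
    return item;
  }
}
```

In a real job the same pattern lives in the open()/update() methods of an ItemStreamReader; Spring Batch persists the context to its metadata tables after each chunk.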

10.5.6. Exit codes

Your batches should create a meaningful exit code to allow reaction to batch errors, e.g. in a scheduler. For that, spring batch automatically registers an org.springframework.boot.autoconfigure.batch.JobExecutionExitCodeGenerator. To make this mechanism work, your spring boot app's main class has to propagate this exit code to the JVM:

public class SpringBootApp {

  public static void main(String[] args) {
    if (Arrays.stream(args).anyMatch((String e) -> e.contains("--spring.batch.job.names"))) {
      // if executing batch job, explicitly exit jvm to report error code from batch
      System.exit(SpringApplication.exit(SpringApplication.run(SpringBootApp.class, args)));
    } else {
      // normal web application start
      SpringApplication.run(SpringBootApp.class, args);
    }
  }
}

10.5.7. Stop batches and manage batch status

Spring Batch uses several database tables to store the status of batch executions. Each execution may have a different status. You may use this mechanism to gracefully stop batches. Additionally, in some edge cases (e.g. the batch process crashed) the execution status may be in an undesired state: the state might still be running although the process crashed some time ago. In such cases you have to change the status of the execution in the database.


Devonfw provides an easy-to-use CLI tool to manage the execution status of your jobs. The tool is implemented in the devonfw module devon4j-batch-tool. It provides a runnable jar, which may be used as follows:

List the names of all previously executed jobs

java -D'spring.datasource.url=jdbc:h2:~/mts;AUTO_SERVER=TRUE' -jar devon4j-batch-tool.jar jobs list

Stop job named 'countJob'

java -D'spring.datasource.url=jdbc:h2:~/mts;AUTO_SERVER=TRUE' -jar devon4j-batch-tool.jar jobs stop countJob

Show help

java -D'spring.datasource.url=jdbc:h2:~/mts;AUTO_SERVER=TRUE' -jar devon4j-batch-tool.jar

As you can see, each invocation includes the JDBC connection string to your database. This means that you have to make sure that the corresponding DB driver is on the classpath (the prepared jar only contains H2).

10.5.8. Authentication

Most business applications incorporate authentication and authorization. Your spring boot application will implement some kind of security, e.g. integrated login with username+password or in many cases authentication via an existing IAM. For security reasons your batch should also implement an authentication mechanism and obey the authorization implemented in your application (e.g. via @RolesAllowed).

Since there are many different authentication mechanisms, we cannot provide an out-of-the-box solution in devonfw, but we describe a pattern for how this can be implemented in devonfw batches.

We suggest implementing the authentication in a Spring Batch tasklet which runs as the first step of your batch. This tasklet does all the work required to authenticate the batch. A simple example, which authenticates the batch "locally" via username and password, could be implemented like this:

public class SimpleAuthenticationTasklet implements Tasklet {

  @Override
  public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {

    String username = chunkContext.getStepContext().getStepExecution().getJobParameters().getString("username");
    String password = chunkContext.getStepContext().getStepExecution().getJobParameters().getString("password");
    Authentication authentication = new UsernamePasswordAuthenticationToken(username, password);
    SecurityContextHolder.getContext().setAuthentication(authentication);

    return RepeatStatus.FINISHED;
  }
}


The username and password have to be supplied via two CLI parameters -username and -password. This implementation creates an "authenticated" Authentication and sets it in the Spring Security context. This is just for demonstration; normally you should not provide passwords via the command line. The actual authorization will then be enforced automatically via Spring Security as in your "normal" application. If you have a more complex authentication mechanism in your application, e.g. via OpenID Connect, just call it in the tasklet. Naturally you may read authentication parameters (e.g. secrets) from the command line or, more securely, from a configuration file.

In your Job Configuration set this tasklet as the first step:

@Configuration
public class BookingsExportBatchConfig {

  @Inject
  private JobBuilderFactory jobBuilderFactory;

  @Inject
  private StepBuilderFactory stepBuilderFactory;

  @Bean
  public Job myBatchJob() {
    return this.jobBuilderFactory.get("myJob").start(myAuthenticationStep()).next(...).build();
  }

  @Bean
  public Step myAuthenticationStep() {
    return this.stepBuilderFactory.get("myAuthenticationStep").tasklet(myAuthenticatonTasklet()).build();
  }

  @Bean
  public Tasklet myAuthenticatonTasklet() {
    return new SimpleAuthenticationTasklet();
  }
}

10.5.9. Tips & tricks

Identifying job parameters

Spring Batch uses a job's parameters to identify job executions. Parameters starting with "-" are not considered for identifying a job execution.
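This convention can be sketched with a simplified stand-in (not the Spring Batch API itself): only parameters whose names do not start with "-" contribute to the identity of a job execution.

```java
import java.util.Map;
import java.util.TreeMap;

// Simplified stand-in illustrating identifying job parameters: names starting
// with "-" are ignored, so two runs differing only in such parameters count
// as the same job execution (and would trigger restart semantics).
public class JobIdentity {

  public static Map<String, String> identifyingParameters(Map<String, String> parameters) {
    Map<String, String> identifying = new TreeMap<>();
    parameters.forEach((name, value) -> {
      if (!name.startsWith("-")) {
        identifying.put(name, value);
      }
    });
    return identifying;
  }
}
```

So a run with date=2020-03-28 and -verbose=true has the same identity as a run with only date=2020-03-28.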

Last updated 2020-03-28 00:06:08 UTC