Layers
Client Layer
There are various technical approaches to building GUI clients. devonfw proposes rich clients that connect to the server via data-oriented services (e.g. using REST with JSON). In general, we have to distinguish among the following types of clients:
- web clients
- native desktop clients
- (native) mobile clients
Our main focus is on web clients. In our sample application my-thai-star, we offer a responsive web client based on Angular following devon4ng. It integrates seamlessly with the my-thai-star back-ends available for Java using devon4j as well as for .NET/C# using devon4net. For building Angular clients, read the separate devon4ng guide.
JavaScript for Java Developers
To get started with client development as a Java developer, we give you some hints here. If you are an experienced JavaScript developer who wants to learn Java, this can be helpful as well. First, you need to understand that the JavaScript ecosystem is as large as the Java ecosystem, and developing a modern web client requires a lot of knowledge. The following table gives you, as an experienced developer, an overview of the tools, configuration files, and other related aspects of the new world to learn, and helps you map concepts between the two ecosystems. Please note that we only list the tools recommended by devonfw here (we are aware that there are alternatives not listed here, such as gradle, grunt, or bower).
Topic | Aspect | JavaScript | Java |
---|---|---|---|
Programming | Language | TypeScript (extends JavaScript) | Java |
Runtime | VM | nodejs (or web-browser) | JVM |
Build- & Dependency-Management | Tool | npm (or yarn) | maven |
 | Config | package.json | pom.xml |
 | Repository | npm repository | maven central |
 | Build cmd | npm run build | mvn install |
 | Test cmd | npm test | mvn test |
Testing | Test-Tool | jasmine | junit |
 | Test-Runner | karma | maven-surefire-plugin |
 | E2E Testing | protractor | selenium |
Code Analysis | Code Coverage | istanbul | jacoco |
Development | IDE | MS VS Code | Eclipse |
 | Framework | Angular (etc.) | Spring (etc.) |
Service Layer
The service layer is responsible for exposing functionality made available by the logical layer to external consumers over a network via technical protocols.
Types of Services
Before you start creating your services you should consider some general design aspects:
- Do you want to create an RPC service?
- Or is your problem better addressed by messaging or eventing?
- Who will consume your service?
- Do you have one or multiple consumers?
- Do web-browsers have to use your service?
- Will your service be consumed by apps from other vendors or parties that you cannot influence when the service has to change or be extended?

Versioning
For RPC services consumed by other applications, we use versioning to prevent incompatibilities between applications when deploying updates. This is done by the following conventions (a sketch follows the list):

- We define a version number and prefix it with `v` (e.g. `v1`).
- If we support previous versions, we use those version numbers as part of the Java package defining the service API (e.g. `com.foo.application.component.service.api.v1`).
- We use the version number as part of the service name in the remote URL (e.g. `https://application.foo.com/services/rest/component/v1/resource`).
- Whenever breaking changes are made to the API, create a separate version of the service and increment the version (e.g. `v1` → `v2`). The implementations of the different versions of the service contain compatibility code and delegate to the same unversioned use-case of the logic layer whenever possible.
- For maintenance and simplicity, avoid keeping more than one previous version.
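To make the conventions concrete, here is a minimal sketch, assuming JAX-RS annotations as used for devon4j REST services; the `ResourceRestServiceV1` and `ResourceToV1` names are made up for illustration:

```java
// versioned package according to the conventions above
package com.foo.application.component.service.api.v1;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// the version also appears as part of the service name in the remote URL
@Path("/component/v1/resource")
@Produces(MediaType.APPLICATION_JSON)
public interface ResourceRestServiceV1 {

  @GET
  @Path("/{id}")
  ResourceToV1 getResource(@PathParam("id") long id);
}
```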
Interoperability
For services that are consumed by clients built with different technologies, interoperability is required. This is addressed by selecting the right protocol, following protocol-specific best practices, and adhering to our service considerations, especially simplicity.
Service Considerations
The term service is quite generic and therefore easily misunderstood. Here, a service is a unit exposing coherent functionality via a well-defined interface over a network. For the design of a service, we consider the following aspects (a small sketch follows the list):

- self-contained: The entire API of the service shall be self-contained and have no dependencies on other parts of the application (other services, implementations, etc.).
- idempotence: E.g. creation of the same master-data entity has no effect (no error).
- loosely coupled: Service consumers have minimum knowledge of and dependencies on the service provider.
- normalized: Complete, no redundancy, minimal.
- coarse-grained: The service provides rather large operations (save an entire entity or set of entities rather than individual attributes).
- atomic: Process individual entities (for processing large sets of data, use a batch instead of a service).
- simplicity: Avoid polymorphism, use RPC methods with a unique name per signature and no overloading, avoid attachments (consider a separate download service), etc.
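As a small sketch of the coarse-grained and simplicity aspects (all names are hypothetical):

```java
// coarse-grained: one operation transfers the entire entity;
// simple: unique method names, no overloading
public interface Productmanagement {

  ProductEto findProduct(Long id);

  ProductEto saveProduct(ProductEto product);

  // avoid fine-grained operations such as:
  // void setProductName(Long id, String name);
  // void setProductPrice(Long id, BigDecimal price);
}
```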
Security
Your services are the major entry point to your application. Hence, security considerations are important here.
See REST Security.
Service-Versioning
This guide describes the aspects of and details about versioning services.
Motivation
Why versioning of services? First of all, you should only care about this topic if you really have to. Service versioning is complex and requires effort (time and budget). The best way to avoid it is to be smart when designing the service API in the first place. Further, if you are creating services whose only consumer is e.g. the web client that you deploy together with the consumed services, then you can change your service without the overhead of creating new service versions and keeping old ones for compatibility.
However, if the following indicators are given you typically need to do service versioning:
- Your service is part of a complex and distributed IT landscape
- Your service requires incompatible changes
- There are many consumers, or there is at least one (relevant) consumer that can not be updated at the same time or is entirely out of your control (unknown or a totally different party/company)
What are incompatible changes?
- Almost any change when SOAP is used (as it changes the WSDL and breaks the contract). Therefore, we recommend to use REST instead. Then, only the following changes are critical:
- A change where existing properties (attributes) have to change their name
- A change where existing features (properties, operations, etc.) have to change their semantics (meaning)
What changes do not cause incompatibilities?
- Adding new service operations is entirely uncritical with REST.
- Adding new properties is only a problem in the following cases:
  - Adding new mandatory properties to the input of a service causes incompatibilities. This problem can be avoided by contract-design.
  - If a consumer uses a service to read data, modifies it, and then saves it back via a service, and a property was added to the data, then this property might be lost. This is not a problem with dynamic languages such as JavaScript/TypeScript, but it is with strictly typed languages such as Java. In Java you will typically use structured, typed transfer-objects (and not `Map<String, Object>`), so new properties that are not known to the consumer can not be mapped to the transfer-object and will be lost. When saving that transfer-object later, the property will be gone. It might be impossible to determine the difference between a lost property and a property that was removed on purpose. This is a general problem that you need to be aware of and consider in your design (see the sketch below).

Even if you hit an indicator for incompatible changes, you can still think about adding a new service operation instead of changing an existing one (and deprecating the old one). Be creative to simplify and avoid extra effort.
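To illustrate the lost-property pitfall described above, here is a minimal sketch in Java (the `CustomerTo` transfer-object and its properties are made up):

```java
// The service (v2) returns JSON such as {"name": "Alice", "discount": 10},
// but the consumer still uses an old transfer-object without "discount":
public class CustomerTo {

  private String name; // "discount" has no counterpart here and is silently dropped

  public String getName() {
    return this.name;
  }

  public void setName(String name) {
    this.name = name;
  }
}
// When the consumer reads a customer, modifies the name, and saves it back,
// the "discount" property is lost without any error being raised.
```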
Procedure
The procedure when rolling out incompatible changes is illustrated by the following example:
```
+------+  +------+
| App1 |  | App2 |
+---+--+  +--+---+
    |        |
    +---+----+
        |
+-------+--------+
|       Sv1      |
|                |
|      App3      |
+----------------+
```
So, here we see a simple example where `App3` provides a service `S` in version `v1` that is consumed both by `App1` and `App2`.
Now, for some reason, the service `S` has to be changed in an incompatible way to make it future-proof for upcoming demands. However, upgrading all three applications at the same time is not possible in this case for whatever reason. Therefore, service versioning is applied for the changes of `S`.
```
+------+  +------+
| App1 |  | App2 |
+---+--+  +--+---+
    |        |
    +--------+
    |
+---+------------+
|  Sv1  |  Sv2   |
|                |
|      App3*     |
+----------------+
```
Now, `App3` has been upgraded and the new release was deployed. A new version `v2` of `S` has been added, while `v1` is still kept for compatibility reasons and is still used by `App1` and `App2`.
```
+------+  +------+
| App1 |  | App2*|
+---+--+  +--+---+
    |        |
    |        |
    |        |
+---+--------+---+
|  Sv1  |  Sv2   |
|                |
|      App3      |
+----------------+
```
Now, `App2` has been updated and deployed, and it is using the new version `v2` of `S`.
```
+------+  +------+
| App1*|  | App2 |
+---+--+  +--+---+
    |        |
    +--------+
             |
+------------+---+
|  Sv1  |  Sv2   |
|                |
|      App3      |
+----------------+
```
Now, also `App1` has been updated and deployed, and it is using the new version `v2` of `S`. Version `v1` of `S` is not used anymore. This can be verified via logging and monitoring.
```
+------+  +------+
| App1 |  | App2 |
+---+--+  +--+---+
    |        |
    +--------+
             |
+------------+---+
|       Sv2      |
|                |
|      App3*     |
+----------------+
```
Finally, version `v1` of the service `S` was removed from `App3`, and the new release has been deployed.
Versioning Schema
In general, anything can be used to differentiate versions of a service. Possibilities are:

- Code names (e.g. `Strawberry`, `Blueberry`, `Grapefruit`)
- Timestamps (`YYYYMMDD-HHmmSS`)
- Sequential version numbers (e.g. `v1`, `v2`, `v3`)
- Composed version numbers (e.g. `1.0.48-pre-alpha-3-20171231-235959-Strawberry`)
As we are following the KISS principle (see key principles), we propose to use sequential version numbers. These are short, clear, and easy to handle while still making obvious which version comes after another. Especially composed version numbers (even `1.1` vs. `2.0`) lead to decisions and discussions that easily waste more time than they add value. It is still very easy to maintain an Excel sheet or release-notes document that explains the changes for each version (`v1`, `v2`, `v3`) of a particular service.
We suggest to always add the version schema to the service URL in order to be prepared for service versioning, even if service versioning is not (yet) actively used. For simplicity, it is explicitly stated that you may even apply incompatible changes to the current version (typically `v1`) of your service if you can update the according consumers within the same deployment.
Practice
So, assuming you know that you have to do service versioning, the question is how to do it practically in the code. For your devon4j project, in case of code-first, the approach should be as described below:
- Determine which types in the code need to be changed. This is likely to be the API and implementation of the according service, but it may also impact transfer objects and potentially even datatypes.
- Create new packages for all these concerned types containing the current version number (e.g. `v1`).
- Copy all these types to the new packages.
- Rename these copies so they carry the version number as a suffix (e.g. `V1`).
- Increase the version of the service in the unversioned package (e.g. from `v1` to `v2`).
- Now you have two versions of the same service (e.g. `v1` and `v2`), but so far they behave exactly the same.
- You start with your actual changes and modify the original files that have been copied before.
- You also ensure that the links (import statements) of the copied types point to the copies with the version number.
- This will cause incompatibilities (and compile errors) in the copied service. Therefore, you need to fix that service implementation to map from the old API to the new API and behavior. In some cases, this may be easy (e.g. mapping `x.y.z.v1.FooTo` to `x.y.z.FooTo` using bean-mapping with some custom mapping for the incompatible changes); in other cases, this can get very complex. Be aware of this complexity from the start, before you make your decision about service versioning.
- As far as possible, this mapping should be done in the service layer in order not to pollute your business code in the core layer with versioning aspects. If there is no way to handle it in the service layer, e.g. because you need some data from the persistence layer, then implement the "mapping" in the core layer, but do not forget to remove this code when removing the old service version.
- Finally, ensure that the old service behaves as before and the new service works as planned. A sketch of such a compatibility implementation follows below.
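To make the mapping step more tangible, here is a minimal hedged sketch; the `Foo*` names are made up, and the mapping is written by hand instead of using a bean-mapper:

```java
import javax.inject.Inject;

// compatibility code kept in the versioned package of the core module
public class FooRestServiceImplV1 implements FooRestServiceV1 {

  @Inject
  private Foomanagement foomanagement; // delegate to the unversioned logic

  @Override
  public FooToV1 getFoo(long id) {

    FooTo current = this.foomanagement.findFoo(id);
    // map the current API back to the old v1 transfer-object,
    // including custom mapping for the incompatible changes
    FooToV1 result = new FooToV1();
    result.setName(current.getTitle()); // e.g. the property was renamed from "name" to "title"
    return result;
  }
}
```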
Modularization
For modularization, we also follow the KISS principle (see key principles): we suggest to have one `api` module per application that contains the most recent version of your service and gets released with every release-version of the application. The compatibility code with the versioned packages is added to the `core` module and therefore is not exposed via the `api` module (because it has already been exposed in the previous release of the app). This way, you can always determine for sure which version of a service is used by another application just by looking at its maven dependencies.
This KISS approach, with only a single module that may contain multiple services (e.g. one for each business component), will cause problems when you want to have mixed usages of service versions: you can not use an old version of one service and a new version of another service from the same app, as you would then need its api module twice as a dependency, in different versions, which is not possible. However, to avoid complicated overhead, we suggest to always follow this easy approach. Only if you come to the point that you really need this complexity can you still solve it (even afterwards, by publishing another maven artifact). As we are all on our way to build more but smaller applications (SOA, microservices, etc.), we should always start simple and only add complexity when really needed.
The following example gives an idea of the structure:
```
/«my-app»
├──/api
|  └──/src/main/java/
|     └──/«rootpackage»/«application»/«component»
|        ├──/common/api/to
|        |  └──FooTo
|        └──/service/api/rest
|           └──FooRestService
└──/core
   └──/src/main/java/
      └──«rootpackage»/«application»/«component»
         ├──/common/api/to/v1
         |  └──FooToV1
         └──/service
            ├──/api/rest/v1
            |  └──FooRestServiceV1
            └──/impl/rest
               ├──/v1
               |  └──FooRestServiceImplV1
               └──FooRestServiceImpl
```
Logic Layer
The logic layer is the heart of the application and contains the main business logic. According to our business architecture, we divide an application into components. For each component, the logic layer defines different use-cases. Another approach is to define a component-facade, which we do not recommend for future applications. Especially for Quarkus applications, we want to simplify things and strongly suggest omitting the component-facade completely and using use-cases only. It is very important that you follow the links to understand the concept of a use-case in order to properly implement your business logic.
Responsibility
The logic layer is responsible for implementing the business logic according to the specified functional demands and requirements. Thereby, it creates the actual value of the application. It is also responsible for invoking business logic in external systems. The following additional aspect is also included in its responsibility:

- transaction-handling (in addition to the service layer)
Security
The logic layer is the heart of the application. It is also responsible for authorization, and hence security is important here. Every method exposed in an interface needs to be annotated with an authorization check stating which role(s) a caller must provide in order to be allowed to make the call. The authorization concept is described here.
Direct Object References
A security threat is Insecure Direct Object References. This simply gives you two options:

- avoid direct object references
- ensure that direct object references are secure

Especially when using REST, direct object references via technical IDs are common practice. This implies that you have proper authorization in place. This is especially tricky when your authorization does not only rely on the type of the data and static permissions, but also on the data itself. Vulnerabilities for this threat can easily happen through design flaws and inadvertence. Here is an example from our sample application:
We have a generic use-case to manage BLOBs. At first glance, it makes sense to write a generic REST service to load and save these BLOBs. However, the permission to read or even update such a BLOB depends on the business object hosting the BLOB. Therefore, such a generic REST service would open the door for this OWASP A4 vulnerability. To solve this in a secure way, you need individual services for each hosting business object to manage the linked BLOB, and you have to check permissions based on the parent business object. In this example, the ID of the BLOB would be the direct object reference, and the ID of the business object (plus a BLOB property indicator) would be the indirect object reference.
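As a hedged sketch of such a parent-based permission check (all names — `Offer*`, `BlobEto`, `AccessControl`, `BlobMapper` — are made up and not taken from the sample application):

```java
import javax.inject.Inject;
import javax.inject.Named;

import org.springframework.security.access.AccessDeniedException;

@Named
public class UcFindOfferAttachmentImpl implements UcFindOfferAttachment {

  @Inject
  private OfferRepository offerRepository;

  @Inject
  private AccessControl accessControl;

  @Override
  public BlobEto findOfferAttachment(Long offerId) {

    // the BLOB is addressed indirectly via its hosting business object,
    // so the permission check is performed on the offer, not on the BLOB id
    OfferEntity offer = this.offerRepository.find(offerId);
    if (!this.accessControl.mayRead(offer)) {
      throw new AccessDeniedException("Offer " + offerId + " is not accessible for the current user");
    }
    return BlobMapper.toEto(offer.getAttachment()); // map the entity BLOB to a transfer-object
  }
}
```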
Component Facade
Note: Our recommended approach for implementing the logic layer is use-cases.
For each component of the application, the logic layer defines a component facade. This is an interface defining all business operations of the component. It carries the name of the component (`«Component»`) and has an implementation named `«Component»Impl` (see implementation).
API
The component facade interface defines the logic API of the component and has to be business oriented. This means that all parameters and return types of the methods from this API have to be business transfer-objects, datatypes (`String`, `Integer`, `MyCustomerNumber`, etc.), or collections of these. The API may only access objects of other business components listed in the (transitive) dependencies of the business-architecture. Here is an example of how such an API may look:
```java
public interface Bookingmanagement {

  BookingEto findBooking(Long id);

  BookingCto findBookingCto(Long id);

  Page<BookingEto> findBookingEtos(BookingSearchCriteriaTo criteria);

  void approveBooking(BookingEto booking);
}
```
Implementation
The implementation of an interface from the logic layer (a component facade or a use-case) carries the name of that interface with the suffix `Impl` and is annotated with `@Named`. An implementation typically needs access to the persistent data. This is done by injecting the corresponding repository (or DAO). According to data-sovereignty, only repositories of the same business component may be accessed directly. For accessing data from other components, the implementation has to use the corresponding API of the logic layer (the component facade). Further, it shall not expose persistent entities from the domain layer and has to map them to transfer objects using the bean-mapper.
```java
@Named
@Transactional
public class BookingmanagementImpl extends AbstractComponentFacade implements Bookingmanagement {

  private static final Logger LOG = LoggerFactory.getLogger(BookingmanagementImpl.class);

  @Inject
  private BookingRepository bookingRepository;

  @Override
  public BookingEto findBooking(Long id) {

    LOG.debug("Get Booking with id {} from database.", id);
    BookingEntity entity = this.bookingRepository.findOne(id);
    return getBeanMapper().map(entity, BookingEto.class);
  }
}
```
As you can see, entities (`BookingEntity`) are mapped to the corresponding ETOs (`BookingEto`). Further details about this can be found in bean-mapping.
UseCase
A use-case is a small unit of the logic layer responsible for an operation on a particular entity (business object). We leave it up to you to decide whether you want to define an interface (API) for each use-case or provide an implementation directly.
Following our architecture-mapping (for classic and modern projects), use-cases are named `Uc«Operation»«BusinessObject»[Impl]`. The prefix `Uc` stands for use-case and allows to easily find and identify them in your IDE. `«Operation»` stands for a verb that is operated on the entity identified by `«BusinessObject»`.
For CRUD we use the standard operations `Find` and `Manage` that can be generated by CobiGen. This also separates read and write operations (e.g. if you want to do CQRS, or to configure read-only transactions for read operations, as sketched below).
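For example, the read/write separation allows marking the whole `Find` use-case with a read-only transaction; a sketch assuming Spring's `@Transactional`:

```java
import javax.inject.Named;

import org.springframework.transaction.annotation.Transactional;

@Named
@Transactional(readOnly = true) // all operations of this use-case are pure reads
public class UcFindBookingImpl implements UcFindBooking {

  @Override
  public BookingEto findBooking(Long id) {
    // implementation omitted in this sketch; see the Implementation section below
    throw new UnsupportedOperationException("sketch only");
  }
}
```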
In our example, we choose to define an interface for each use-case. We also use `*To` to refer to any type of transfer object. Please follow our guide to understand more about the different types of transfer objects, e.g. ETO, DTO, or CTO.
Find
The `UcFind«BusinessObject»` defines all read operations to retrieve and search the `«BusinessObject»`.
Here is an example:
```java
public interface UcFindBooking {

  // *To = Eto, Dto or Cto
  Booking*To findBooking(Long id);
}
```
Manage
The `UcManage«BusinessObject»` defines all CRUD write operations (create, update, and delete) for the `«BusinessObject»`.
Here is an example:
```java
public interface UcManageBooking {

  // *To = Eto, Dto or Cto
  Booking*To saveBooking(Booking*To booking);

  void deleteBooking(Long id);
}
```
Custom
Any other non-CRUD operation `Uc«Operation»«BusinessObject»` uses any other custom verb for `«Operation»`.
Typically, such custom use-cases only define a single method.
Here is an example:
```java
public interface UcApproveBooking {

  // *To = Eto, Dto or Cto
  void approveBooking(Booking*To booking);
}
```
Implementation
The implementation carries the name of its interface with the suffix `Impl` and is annotated with `@Named` and `@ApplicationScoped`. It will need access to the persistent data, which is done by injecting the corresponding repository (or DAO). Furthermore, it shall not expose persistent entities from the data access layer and has to map them to transfer objects using the bean-mapper. Please refer to our bean mapping, transfer object, and dependency injection documentation for more information.
Here is an example:
```java
@ApplicationScoped
@Named
public class UcManageBookingImpl implements UcManageBooking {

  private static final Logger LOG = LoggerFactory.getLogger(UcManageBookingImpl.class);

  @Inject
  private BookingRepository bookingRepository;

  @Override
  public void deleteBooking(Long id) {

    LOG.debug("Delete Booking with id {} from database.", id);
    this.bookingRepository.deleteById(id);
  }
}
```
The use-cases can then be injected directly into the service.
@Named("BookingmanagementRestService")
@Validated
public class BookingmanagementRestServiceImpl implements BookingmanagementRestService {
@Inject
private UcFindBooking ucFindBooking;
@Inject
private UcManageBooking ucManageBooking;
@Inject
private UcApproveBooking ucApproveBooking;
}
Internal use case
Sometimes, a component with multiple related entities and many use-cases needs to reuse business logic internally.
Of course, this could be exposed as an official use-case API, but that would imply using transfer-objects (ETOs) instead of entities. In some cases, this is undesired, e.g. for better performance, to prevent unnecessary mapping of entire collections of entities.
In the first place, you should try to use abstract base implementations providing reusable methods that the actual use-case implementations can inherit from.
If your business logic is even more complex and you have multiple aspects of business logic to share and reuse, but you also run into multi-inheritance issues, you may instead create use-cases that have their interface located in the `impl` scope package right next to the implementation (or you may just skip the interface). In such a case, you may define methods that directly take or return entity objects.
To avoid confusion with regular use-cases, we recommend to add the `Internal` suffix to the type name, leading to `Uc«Operation»«BusinessObject»Internal[Impl]`.
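A minimal sketch of such an internal use-case (names are illustrative):

```java
// located in the impl scope package right next to the implementations,
// therefore allowed to take and return persistent entities directly
public interface UcFindBookingInternal {

  // returns the entity (not an ETO) for reuse within the same component
  BookingEntity findBookingEntity(Long id);
}
```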
Data-Access Layer
The data-access layer is responsible for all outgoing connections to access and process data. This is mainly about accessing data from a persistent data-store. External systems could also be accessed from the data-access layer if they match this definition, e.g. a MongoDB accessed via REST services.
Database
You need to make your choice for a database. Options are documented here.
The classical approach is to use a Relational Database Management System (RDBMS). In such a case, we strongly recommend to follow our JPA Guide. Some NoSQL databases are supported by spring-data, so you can consider the repository guide.
Batch Layer
We understand batch processing as a bulk-oriented, non-interactive, typically long running execution of tasks. For simplicity, we use the term "batch" or "batch job" for such tasks in the following documentation.
devonfw uses Spring Batch as a batch framework.
This guide explains how Spring Batch is used in devonfw applications. It focuses on aspects which are special to devonfw. If you want to learn about Spring Batch itself, you should consult Spring's reference documentation.
There is an example of a simple batch implementation in the my-thai-star batch module.
In this chapter, we will describe the overall architecture (especially concerning layering) and how to administer batches.
Layering
Batches are implemented in the batch layer. The batch layer is responsible for batch processes, whereas the business logic is implemented in the logic layer. Compared to the service layer, you may understand the batch layer just as a different way of accessing the business logic. From a component point of view, each batch is implemented as a subcomponent in the corresponding business component. The business component is defined by the business architecture.
Let’s make an example of that: the sample application implements a batch for exporting ingredients. This `ingredientExportJob` belongs to the `dishmanagement` business component. So the `ingredientExportJob` is implemented in the following package:

`<basepackage>.dishmanagement.batch.impl.*`
Batches should invoke use cases in the logic layer for doing their work. Only "batch-specific" technical aspects should be implemented in the batch layer.

Example: For a batch which imports product data from a CSV file, this means that all code for actually reading and parsing the CSV input file is implemented in the batch layer. For each line read from the CSV input file, the batch calls the use case "create product" in the logic layer to actually create the products.
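As a sketch of that delegation (assuming Spring Batch 4 and made-up `ProductEto`/`UcManageProduct` types), the batch layer part could look like this:

```java
import java.util.List;

import javax.inject.Inject;
import javax.inject.Named;

import org.springframework.batch.item.ItemWriter;

// batch layer: only batch-specific plumbing, the business logic stays in the use-case
@Named
public class ProductImportWriter implements ItemWriter<ProductEto> {

  @Inject
  private UcManageProduct ucManageProduct;

  @Override
  public void write(List<? extends ProductEto> items) {

    for (ProductEto product : items) {
      // delegate to the "create product" use-case in the logic layer
      this.ucManageProduct.saveProduct(product);
    }
  }
}
```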
Directly accessing data access layer
In practice, it is not always appropriate to create use cases for every bit of work a batch should do. Instead, the data access layer can be used directly. An example of that is a typical batch for data retention which deletes out-of-time data. Often, deleting outdated data is done by invoking a single SQL statement. It is appropriate to implement that SQL in a repository or DAO method and to call this method directly from the batch. But be careful: this pattern is a simplification which could lead to business logic being scattered across different layers, which reduces the maintainability of your application. It is a typical design decision you have to make when designing your specific batches.
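A sketch of such a data-retention statement (the entity name and cut-off column are made up), implemented in a Spring Data repository and called directly from the batch:

```java
import java.time.Instant;

import org.springframework.data.jpa.repository.Modifying;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.Repository;
import org.springframework.data.repository.query.Param;

public interface AuditLogRepository extends Repository<AuditLogEntity, Long> {

  // a single bulk statement for data retention, invoked directly from the batch layer
  @Modifying
  @Query("DELETE FROM AuditLogEntity log WHERE log.createdAt < :cutoff")
  int deleteOutdatedEntries(@Param("cutoff") Instant cutoff);
}
```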
Project structure and packaging
Batches are implemented in a separate Maven module to keep the application core free of batch dependencies. The batch module includes a dependency on the application core-module to allow the reuse of the use cases, DAOs, etc. Additionally, the batch module has dependencies on the required Spring Batch jars:
```xml
<dependencies>
  <dependency>
    <groupId>${project.groupId}</groupId>
    <artifactId>mtsj-core</artifactId>
    <version>dev-SNAPSHOT</version>
  </dependency>

  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-batch</artifactId>
  </dependency>
</dependencies>
```
To allow an easy start of the batches from the command line, it is advised to create a bootified jar for the batch module by adding the following to the `pom.xml` of the batch module:
```xml
<build>
  <resources>
    <resource>
      <directory>src/main/resources</directory>
      <filtering>true</filtering>
    </resource>
  </resources>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-jar-plugin</artifactId>
      <configuration>
        <excludes>
          <exclude>config/application.properties</exclude>
        </excludes>
      </configuration>
    </plugin>
    <!-- Create bootified jar for batch execution via command line.
         Your application's spring boot app is used as main-class. -->
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
      <configuration>
        <mainClass>com.devonfw.application.mtsj.SpringBootApp</mainClass>
        <classifier>bootified</classifier>
      </configuration>
      <executions>
        <execution>
          <goals>
            <goal>repackage</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```
Implementation
Most of the details about the implementation of batches are described in the Spring Batch documentation. There is nothing special about implementing batches in devonfw. You will find an easy example in my-thai-star.
Starting from command line
devonfw advises to start batches via the command line. This is most familiar to ops teams and allows easy integration into existing schedulers. In general, batches are started with the following command:

```
java -jar <app>-batch-<version>-bootified.jar --spring.main.web-application-type=none --spring.batch.job.enabled=true --spring.batch.job.names=<myJob> <params>
```
Parameter | Explanation |
---|---|
`--spring.main.web-application-type=none` | This disables the web app (e.g. Tomcat) |
`--spring.batch.job.names=<myJob>` | This specifies the name of the job to run. If you leave this out, ALL jobs will be executed, which probably does not make too much sense. |
`<params>` | (Optional) additional parameters which are passed to your job |
This will launch your normal Spring Boot app, disable the web application part, and run the designated job via Spring Boot's `org.springframework.boot.autoconfigure.batch.JobLauncherCommandLineRunner`.
Scheduling
In the real world, scheduling of batches is not as simple as it might first look:

- Multiple batches have to be executed in a certain order to achieve complex tasks. If one of those batches fails, the further execution has to be stopped and operations should be notified, for example.
- Input files or files created by batches have to be copied from one node to another.
- Scheduling batch executions can easily get complex (quarterly jobs, running a job on the first workday of a month, …).

For devonfw, we propose that the batches themselves should not mess around with the details of scheduling. Likewise, your application should not do so. This complexity should be externalized to a dedicated batch administration service or scheduler. This service could be a complex product or a simple tool like cron. We propose Rundeck as an open-source job scheduler.
This gives full control to operations to choose the solution which fits best into existing administration procedures.
Handling restarts
If you start a job with the same parameter set after a failed run (`BatchStatus.FAILED`), a restart will occur. In many cases, your batch should then not reprocess all the items it already processed in the previous runs. For that, you need some logic to start at the desired offset. There are different ways to implement such logic:

- Marking processed items in the database in a dedicated column
- Writing all IDs of items to process into a separate table in an initialization step of your batch; you can then delete the IDs of already processed items from that table during the batch execution
- Storing restart information in Spring's `ExecutionContext` (see below)
Using spring batch ExecutionContext for restarts
By implementing the `ItemStream` interface in your `ItemReader` or `ItemWriter`, you may store information about the batch progress in the `ExecutionContext`. You will find an example of that in the CountJob in My Thai Star.
Additional hint: it is important that the bean definition methods of your `ItemReader`/`ItemWriter` declare return types that implement `ItemStream` (and not just `ItemReader` or `ItemWriter` alone). For that, the `ItemStreamReader` and `ItemStreamWriter` interfaces are provided.
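To sketch the mechanism (the key name and item source are made up), a restartable reader could store its position as follows:

```java
import java.util.List;

import org.springframework.batch.item.ExecutionContext;
import org.springframework.batch.item.ItemStreamReader;

// stores the current index in the ExecutionContext so that a restart
// continues where the failed run stopped
public class RestartableListReader implements ItemStreamReader<String> {

  private static final String KEY = "reader.index"; // made-up key name

  private final List<String> items;

  private int index;

  public RestartableListReader(List<String> items) {
    this.items = items;
  }

  @Override
  public void open(ExecutionContext context) {
    this.index = context.containsKey(KEY) ? context.getInt(KEY) : 0;
  }

  @Override
  public void update(ExecutionContext context) {
    context.putInt(KEY, this.index); // persisted with each chunk commit
  }

  @Override
  public void close() {
  }

  @Override
  public String read() {
    return (this.index < this.items.size()) ? this.items.get(this.index++) : null;
  }
}
```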
Exit codes
Your batches should create a meaningful exit code to allow reacting to batch errors, e.g. in a scheduler. For that, Spring Batch automatically registers an `org.springframework.boot.autoconfigure.batch.JobExecutionExitCodeGenerator`. To make this mechanism work, the main class of your Spring Boot app has to propagate this exit code to the JVM:
```java
import java.util.Arrays;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class SpringBootApp {

  public static void main(String[] args) {
    if (Arrays.stream(args).anyMatch((String e) -> e.contains("--spring.batch.job.names"))) {
      // if executing a batch job, explicitly exit the JVM to report the exit code from the batch
      System.exit(SpringApplication.exit(SpringApplication.run(SpringBootApp.class, args)));
    } else {
      // normal web application start
      SpringApplication.run(SpringBootApp.class, args);
    }
  }
}
```
Stop batches and manage batch status
Spring Batch uses several database tables to store the status of batch executions. Each execution may have a different status. You may use this mechanism to gracefully stop batches. Additionally, in some edge cases (e.g. the batch process crashed), the execution status may be in an undesired state: the state will still be "running", even though the process crashed some time ago. In such cases, you have to change the status of the execution in the database.
CLI-Tool
devonfw provides an easy-to-use CLI tool to manage the execution status of your jobs.
The tool is implemented in the devonfw module `devon4j-batch-tool`. It provides a runnable jar, which may be used as follows:
List the names of all previously executed jobs:

```
java -D'spring.datasource.url=jdbc:h2:~/mts;AUTO_SERVER=TRUE' -jar devon4j-batch-tool.jar jobs list
```

Stop the job named 'countJob':

```
java -D'spring.datasource.url=jdbc:h2:~/mts;AUTO_SERVER=TRUE' -jar devon4j-batch-tool.jar jobs stop countJob
```

Show help:

```
java -D'spring.datasource.url=jdbc:h2:~/mts;AUTO_SERVER=TRUE' -jar devon4j-batch-tool.jar
```
As you can see, each invocation includes the JDBC connection string to your database. This means that you have to make sure that the corresponding DB driver is on the classpath (the prepared jar only contains H2).
Authentication
Most business applications incorporate authentication and authorization. Your Spring Boot application will implement some kind of security, e.g. an integrated login with username and password or, in many cases, authentication via an existing IAM. For security reasons, your batch should also implement an authentication mechanism and obey the authorization implemented in your application (e.g. via `@RolesAllowed`).
Since there are many different authentication mechanisms, we cannot provide an out-of-the-box solution in devonfw, but we describe a pattern of how this can be implemented in devonfw batches.
We suggest to implement the authentication in a Spring Batch tasklet which runs as the first step in your batch. This tasklet will do all the work required to authenticate the batch. A simple example which authenticates the batch "locally" via username and password could be implemented like this:
```java
import javax.inject.Named;

import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;
import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;

@Named
public class SimpleAuthenticationTasklet implements Tasklet {

  @Override
  public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {

    String username = chunkContext.getStepContext().getStepExecution().getJobParameters().getString("username");
    String password = chunkContext.getStepContext().getStepExecution().getJobParameters().getString("password");
    Authentication authentication = new UsernamePasswordAuthenticationToken(username, password);
    SecurityContextHolder.getContext().setAuthentication(authentication);
    return RepeatStatus.FINISHED;
  }
}
```
The username and password have to be supplied via two CLI parameters `-username` and `-password`. This implementation creates an "authenticated" `Authentication` object and sets it in the Spring Security context. This is just for demonstration; normally, you should not provide passwords via the command line. The actual authentication will be done automatically via Spring Security, as in your "normal" application.

If you have a more complex authentication mechanism in your application, e.g. via OpenID Connect, just invoke it in the tasklet. Naturally, you may read authentication parameters (e.g. secrets) from the command line or, more securely, from a configuration file.
In your Job Configuration set this tasklet as the first step:
```java
@Configuration
@EnableBatchProcessing
public class BookingsExportBatchConfig {

  @Inject
  private JobBuilderFactory jobBuilderFactory;

  @Inject
  private StepBuilderFactory stepBuilderFactory;

  @Bean
  public Job myBatchJob() {
    return this.jobBuilderFactory.get("myJob").start(myAuthenticationStep()).next(...).build();
  }

  @Bean
  public Step myAuthenticationStep() {
    return this.stepBuilderFactory.get("myAuthenticationStep").tasklet(myAuthenticationTasklet()).build();
  }

  @Bean
  public Tasklet myAuthenticationTasklet() {
    return new SimpleAuthenticationTasklet();
  }

  // ...
}
```
Tips & tricks
Identifying job parameters
Spring uses a job's parameters to identify job executions. Parameters starting with "-" are not considered for identifying a job execution.
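For example (job and parameter names are made up), the following two invocations belong to the same job instance, because the `-timestamp` parameter is prefixed with "-" and therefore not identifying:

```
java -jar <app>-batch-<version>-bootified.jar --spring.batch.job.enabled=true --spring.batch.job.names=myJob inputFile=data.csv -timestamp=2024-01-01
java -jar <app>-batch-<version>-bootified.jar --spring.batch.job.enabled=true --spring.batch.job.names=myJob inputFile=data.csv -timestamp=2024-01-02
```

So, if the first run failed, the second invocation would be treated as a restart of the same job instance.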