Subsystem- and System-Tests

This article focuses on Subsystem- and System-Tests. General test automation good practices can be found here.

General Good Practices

Use BDD to develop test scenarios

Behavior Driven Development (BDD) is a software development method that creates a common understanding through conversations between business, developers and testers about how the system should behave. This prevents problems caused by misunderstandings and helps close gaps in the specification. The desired behavior is captured in scenarios with concrete example data, using the so-called Gherkin syntax. The syntax consists mainly of natural language with business terms in a given-when-then structure. The scenarios are defined before implementation starts, so that the implementation can take the right direction from the beginning. BDD tools allow the scenarios to be automated directly as tests.

Typical Tool: Cucumber

Consider common BDD good and bad practices

Good and bad practices for writing scenarios with Gherkin can be found here.

Most important:

  • Collaborate with business, development and test to create scenarios. Do NOT expect the business or testers to hand ready-made scenarios over to you as a developer.

  • Aim for short scenarios, e.g.

    • leave out irrelevant data; it can be set in the code as defaults

    • combine small UI actions into complete user actions

    • …

  • Do NOT use generic UI steps, but complete user actions: e.g. instead of "when the user enters 'admin' in field 'username' and the user enters 'adminpwd' in field 'password' and the user presses the 'login' button", use "when the user logs in as administrator" (a step definition for such a user action is sketched below).
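
With a BDD tool such as Cucumber, such a complete user action maps to a single step definition. The following sketch assumes a Cucumber/Java setup in which a LoginPage page object is injected via the framework's dependency injection; LoginPage is a hypothetical helper (a page object of this kind is sketched in the UI section below), and UserData is the test data enum shown later in this article.

Example Cucumber step definition for a complete user action (sketch)
import io.cucumber.java.en.When;

public class LoginSteps {

    private final LoginPage loginPage;

    // LoginPage is provided via the dependency injection of the Cucumber setup (hypothetical)
    public LoginSteps(LoginPage loginPage) {
        this.loginPage = loginPage;
    }

    @When("the user logs in as administrator")
    public void theUserLogsInAsAdministrator() {
        // one business-level step hides the individual UI interactions
        loginPage.login(UserData.ADMIN_USER.value(), UserData.ADMIN_USER.password());
    }
}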

Mix API and UI steps

Just because your expected result can only be checked in the UI does not mean your whole test must be executed via UI. You can use the API to create preconditions and then perform some steps in the UI. This makes the complete test faster and more reliable.
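
As an illustration, the following sketch (not from the original project) creates a booking via the API with RestAssured and only checks the visible result via the UI. The RequestSpecification comes from the authorization extension shown later; BookingOverviewPage and webDriver() are hypothetical helpers of the test code, and the status code and the id field in the response are assumptions.

Example test combining API preconditions with a UI check (sketch)
@Test
public void bookingIsShownInOverview(RequestSpecification request) {
    // precondition created via API: fast and reliable
    String bookingId = request
            .body(BookingBuilder.defaultBooking().build())
            .post(TestPaths.createBooking())
            .then()
            .statusCode(200)                 // assumption: creation answers with 200
            .extract().path("id");           // assumption: response contains the booking id
    // only the actual check runs through the (slower) UI
    BookingOverviewPage overviewPage = new BookingOverviewPage(webDriver());
    assertThat(overviewPage.containsBooking(bookingId)).isTrue();
}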

Prevent waiting for an explicit time

Whenever you have asynchronous behavior in your system under test, your test needs to wait until the system has finished reacting to the asynchronous event. This is always the case for UI tests, where the screen reacts to user interaction, but it can also happen in API tests when the system works with asynchronous events. Avoid waiting for a fixed amount of time (don’t use Thread.sleep(5000) or TimeUnit.SECONDS.sleep(5))! It makes your tests slow and often does not even make them more stable.

Typical tools:
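
Instead, wait for a concrete condition, e.g. with Selenium's explicit waits in UI tests or with a polling library such as Awaitility in API tests. The following sketch assumes a webDriver() accessor, an orderId from the test and the ByTestId selector shown later in this article; the data-testid value is made up.

Example of waiting for a condition instead of a fixed time (sketch)
// UI: wait until the expected element is visible (Selenium explicit wait)
new WebDriverWait(webDriver(), Duration.ofSeconds(10))
        .until(ExpectedConditions.visibilityOfElementLocated(ByTestId.testId("order-confirmation")));

// API: poll until the asynchronous processing has finished (Awaitility)
Awaitility.await()
        .atMost(Duration.ofSeconds(10))
        .pollInterval(Duration.ofMillis(500))
        .untilAsserted(() -> when().get("/orders/" + orderId).then().statusCode(200));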

Apply a rerun for failed tests

Even if you wait properly, asynchronous behavior in the system under test might make your tests flaky. So, consider running all failing tests a second time in the pipeline, so that flaky tests do not break your build. But do not ignore these flaky tests! Even if they do not break the build, you should analyze and stabilize them: it helps to keep the test runtimes low, and you might otherwise overlook random errors in your application (not only tests can be flaky).

Typical tools:
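
For Maven-based builds, a sketch of how this could look with the Surefire plugin (assuming the tests are executed by Surefire; the same option exists for Failsafe):

Example Maven Surefire configuration for rerunning failed tests (sketch)
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <!-- rerun each failing test once before marking it as failed -->
        <rerunFailingTestsCount>1</rerunFailingTestsCount>
    </configuration>
</plugin>

Tests that only pass on the rerun are reported as flaky, so they remain visible for analysis.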

Make tests transparent and analyzable by proper reporting

There are three views that need to be addressed by test reporting:

  • Business: If you include the business in test creation (which is highly recommended), they will want to see what is tested and that the tests are successful. But they will not click through Jenkins builds to see this.

  • Developers, testers: If a test fails, the reason for the failure must be easily analyzable.

  • Architects, developers, testers: Aggregated views on test runs are needed to detect general improvement needs like runtime optimizations.

  • Typical tools:

    • Allure reporting

    • Pushing individual metrics to Kibana or Influx + Grafana to get an aggregated view on execution times and stability (recommended for large-scale projects when the number of tests grows)
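
As a small illustration, assuming Allure is used for reporting, actions can be annotated as steps and relevant data can be attached so that a failure is analyzable directly from the report; the class, step and attachment names below are made up.

Example Allure step with an attachment (sketch)
import io.qameta.allure.Allure;
import io.qameta.allure.Step;

public class BookingActions {

    @Step("create booking for customer {customerId}")
    public void createBooking(String customerId) {
        // ... trigger the booking via API or UI ...
        // attach the relevant input so a failure can be analyzed from the report alone
        Allure.addAttachment("booking request", "customerId=" + customerId);
    }
}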

API Test Good Practices

API-test in this section means a functional test that uses API calls to trigger or check the functionality of a fully integrated system under test. This differs from unit testing a method that exposes an endpoint via REST.

Typical Tool: RestAssured

API Test with RestAssured
@Test
@DisplayName("Test for getting an existing product by id")
public void testGetProductByIdEndpoint() {
    // arrange
    UUID productId = UUID.fromString("6c4d8ad9-f544-4cc4-8947-113c80fbed07");
    // act
    ProductDto result = when().get("/products/"+productId.toString())
            .then()
            .assertThat()
            .statusCode(200)
            .extract()
            .as(ProductDto.class);
    // assert
    assertThat(result, is(notNullValue()));
    assertThat(result.getId(), is(productId.toString()));
}

Execute general initializations at a central point

Example JUnit 5 extension for RestAssured (init API URL and date/time mapping)
public class RestassuredConnectionSetup implements BeforeAllCallback {
    @Override
    public void beforeAll(ExtensionContext extensionContext) {
        RestAssured.baseURI = TestConfiguration.getApiUrl();
        RestAssured.config = RestAssuredConfig.config().objectMapperConfig(
                ObjectMapperConfig.objectMapperConfig()
                    .jackson2ObjectMapperFactory((type, s) -> {
                        ObjectMapper objectMapper = new ObjectMapper();
                        objectMapper.registerModule(new JavaTimeModule());
                        objectMapper.configure(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS, false);
                        return objectMapper;
                })
        );
    }
}

Execute API authorization at a central point

You can have API tests that test authorization and authentication, but in most cases you only want to test the functionality. So, keep authorization details out of your functional tests. If you use RestAssured, you can use an extension that provides a RequestSpecification with the authorization details already added. If your test framework supports Spring (like Cucumber), you can intercept requests in the RestTemplate.

Examples:

  • RestAssured JUnit 5 extension for Bearer token

  • Spring Bean for csrf token

public class LoggedInRequestSetup implements ParameterResolver {
    private static String token = null;
    @Override
    public boolean supportsParameter(ParameterContext parameterContext,
        ExtensionContext extensionContext) throws ParameterResolutionException {
        return parameterContext.getParameter().getType() == RequestSpecification.class;
    }
    @Override
    public Object resolveParameter(ParameterContext parameterContext,
        ExtensionContext extensionContext) throws ParameterResolutionException {
        // log in only once and cache the token for all tests
        if (null == token) {
            token = generateLoginToken();
        }
        // provide a RequestSpecification that already carries the authorization header
        return RestAssured.with()
            .contentType("application/json")
            .header("Authorization", "Bearer " + token);
    }
    private String generateLoginToken() {
        RestAssured.baseURI = TestConfiguration.getApiUrl();
        return RestAssured.with()
                .body(User.validUser())
                .post(TestConfiguration.loginUrl())
                .header(HttpHeaders.AUTHORIZATION);
    }
}
@Bean
public TestRestTemplate restTemplate() {
    TestRestTemplate restTemplate = new TestRestTemplate(TestRestTemplate.HttpClientOption.ENABLE_COOKIES);
    restTemplate.setUriTemplateHandler(new DefaultUriBuilderFactory(TestConfiguration.getApiUrl()));
    // login
    ResponseEntity<Void> responseEntity = restTemplate.postForEntity(TestPaths.login(), User.validUser(), Void.class);
    assertThat(responseEntity.getStatusCode()).isEqualTo(HttpStatus.OK);
    // get csrf token
    ResponseEntity<Token> tokenResponse = restTemplate.getForEntity(TestPaths.csrf(), Token.class);
    assertThat(tokenResponse.getStatusCode()).isEqualTo(HttpStatus.OK);
    final Token csrfToken = tokenResponse.getBody();
    assertThat(csrfToken).isNotNull();
    // add csrf token to all REST calls
    restTemplate.getRestTemplate().setInterceptors(Collections.singletonList(
        (request, body, execution) -> {
            request.getHeaders().add(csrfToken.getHeaderName(), csrfToken.getToken());
            return execution.execute(request, body);
        }));
    return restTemplate;
}

Mock external systems that are not in your scope

Your test should only fail if your system under test does not behave as expected. It must not depend on other systems that are not in your test scope. So, if your system under test communicates with another system or microservice, you need to mock the responses. If possible, prepare standard responses for tests that do not test the interaction but need a positive response to continue.

Typical tool: WireMock

Example JUnit 5 extension to start WireMock and set an OK response
public class WiremockSetup implements BeforeAllCallback, AfterAllCallback, BeforeEachCallback {
    private static WireMockServer wireMockServer = null;
    @Override
    public void beforeAll(ExtensionContext extensionContext)  {
        wireMockServer = new WireMockServer(wireMockConfig()
            .port(TestConfiguration.wiremockPort()));
        wireMockServer.start();
        WireMock.configureFor(TestConfiguration.host(), TestConfiguration.wiremockPort());
    }
    @Override
    public void afterAll(ExtensionContext extensionContext)  {
        if (null != wireMockServer) {
            wireMockServer.stop();
        }
    }
    @Override
    public void beforeEach(ExtensionContext extensionContext)  {
        WireMock.resetAllRequests();
        // default stub: the mocked API answers with 200 OK
        WireMock.stubFor(post(urlEqualTo(TestPaths.myApi()))
                .willReturn(aResponse()
                    .withHeader("Content-Type", "application/json")
                    .withStatus(200)));
    }
}
Example Junit test with Wiremock rule
@ExtendWith(LoggedInRequestSetup.class)
@ExtendWith(WiremockSetup.class)
public class BookingConfirmationApiTest {
    @Test
    public void bookingNotSuccessfulForNotWorkingEmail(RequestSpecification request) {
        // prepare mock
        final String errorEmail = "myveryspecialemailforerror@error.de";
        WireMock.stubFor(post(urlEqualTo(TestPaths.myApi()))
                .withRequestBody(matchingJsonPath("$.recipient", equalTo(errorEmail)))
                .willReturn(aResponse().withStatus(400)));
        // call system under test (implicitly calls "myApi") and verify result
        Booking booking = BookingBuilder.defaultBooking().withEmail(errorEmail).build();
        request.body(booking)
                .when()
                .post(TestPaths.createBooking())
                .then().statusCode(500);
        // verify that Wiremock is called during the test
        List<LoggedRequest> requests = WireMock.findAll(
            postRequestedFor(urlEqualTo(TestPaths.myApi())));
        assertThat(requests).hasSize(1);
        ...
    }
}

UI Test Good Practices

UI-test in this section means a functional test that uses UI interaction to trigger or check the functionality of a fully integrated system under test. This differs from unit or unit-integration tests of UI components.

Typical tool: Selenium WebDriver

Encapsulate UI technology specific logic in page objects

When the UI changes, you often need to change your UI tests. But if the change only affects the structure of one page, you don’t want to change all tests that use this page. To prevent this, encapsulate the access to the widgets of a page in a page object. More details can be found here.
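
A minimal sketch of such a page object for a login page, assuming Selenium and reusing the Widget and ByTestId helpers shown further below; the data-testid values are made up.

Example Selenium page object (sketch)
import org.openqa.selenium.WebDriver;

public class LoginPage {

    private final WebDriver webDriver;

    public LoginPage(WebDriver webDriver) {
        this.webDriver = webDriver;
    }

    // tests only call login(); the structure of the page stays encapsulated here
    public void login(String username, String password) {
        new Widget(ByTestId.testId("username"), webDriver).getWebElement().sendKeys(username);
        new Widget(ByTestId.testId("password"), webDriver).getWebElement().sendKeys(password);
        new Widget(ByTestId.testId("login-button"), webDriver).click();
    }
}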

Encapsulate UI components in your test code

Web development often works with libraries or patterns of re-usable UI components that ensure a consistent look and feel. These components change over time. As with the page object pattern, you don’t want to change all tests or all page objects if the internal structure of such a component changes. So, similar to page objects, create "widget objects" that encapsulate the access to common UI components. These widget objects are then used in page objects.

If you use Selenium, you have to expect StaleElementReferenceExceptions. So, you should not store a reference to a WebElement in your widget object. In addition, you can use the widget to handle this exception at a central place.

Example Selenium widget superclass
public class Widget {
    private final WebDriver searchContext;
    private final By selector;
    public Widget(By selector, WebDriver searchContext)  {
        this.searchContext = searchContext;
        this.selector = selector;
    }
    public WebElement getWebElement() {
        return searchContext.findElement(selector);
    }
    public void click() {
        (new WebDriverWait(searchContext, 10)).
                until(ExpectedConditions.elementToBeClickable(getWebElement()));
        try {
            getWebElement().click();
        } catch (StaleElementReferenceException exc) {
            log.info("retry on StaleElementReferenceException", exc);
            getWebElement().click();
        }
    }
    ...
}

Address widgets by data-testid

To perform checks or actions in UI tests, you need to identify elements on a page. The recommended way is to assign custom data-attributes "data-testid" to all relevant elements when developing the page and use them for identifying the element. More details can be found here.

Example Selenium custom By-selector
public class ByTestId extends By  {
    private final String testId;
    public ByTestId(String testId) {
        this.testId = testId;
    }
    @Override
    public List<WebElement> findElements(SearchContext searchContext) {
        return searchContext.findElements(
            By.cssSelector(String.format("*[data-testid='%s']", testId)));
    }
    public static By testId(String testId) {
        return new ByTestId(testId);
    }
}

Add screenshots to the result report

A screenshot taken at the time of a test failure is in many cases enough to understand what went wrong (e.g. is the expected value really not displayed, or is the UI still loading?). When working with Selenium, you can let the WebDriver take a screenshot. When integrated with Cucumber, add the screenshot in an After hook:

Example Cucumber After hook that adds screenshot with Selenium WebDriver
    @After()
    public void afterScenarioUI(Scenario scenario) {
        if (scenario.isFailed() && null != webDriver()) {
            scenario.attach(
                ((TakesScreenshot) webDriver()).getScreenshotAs(OutputType.BYTES),
                "image/png", scenario.getName());
        }
    }

Test Data Good Practices

Use test data builders with defaults for complex structures

Test data are often complex data structures. Usually only some of the attributes are relevant, e.g. you need a person and the only relevant fact is that it is a student. Create builders that provide valid defaults that can be adapted to your test.

Example usage of a test data builder
    // the created object will contain all required information based on default values
    Person student = PersonBuilder.withJob(JobEnum.STUDENT).build();
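
A possible implementation of such a builder could look like the following sketch; Person, JobEnum and the default values are assumptions for illustration.

Example test data builder with valid defaults (sketch)
public class PersonBuilder {

    // valid defaults, only overridden where relevant for the test
    private String name = "Jane Doe";
    private JobEnum job = JobEnum.EMPLOYEE;

    public static PersonBuilder withJob(JobEnum job) {
        PersonBuilder builder = new PersonBuilder();
        builder.job = job;
        return builder;
    }

    public PersonBuilder withName(String name) {
        this.name = name;
        return this;
    }

    public Person build() {
        return new Person(name, job);
    }
}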

Keep references to data in the database at a central place

With complex systems it is not possible for every system test to start with an empty database. Usually there is a defined set of master data that every test can rely on. Tests need to refer to these data, e.g. to set an existing customer reference. Keep those references at a central place in your test code, so that they can be changed easily or configured for test runs against different databases. You can start with a simple set of constants and make it adaptable for different environments later.

Example simple starting point for test data references
@RequiredArgsConstructor
public enum ProductData implements TestDataEnum {
    GENERAL("GEN"),
    AVIATION("AA");
    private final String value;
    @Override
    public String value() {
        return value;
    }
}
@RequiredArgsConstructor
public enum UserData implements TestData {
    STANDARD_USER("standard", "password"),
    ADMIN_USER("admin", "admin");
    private final String value;
    private final String password;
    @Override
    public String value() {
        return value;
    }
    public String password() {
        return password;
    }
}

Test Execution Good Practices

Ensure Fail Fast in pipelines

Subsystem- and System-Tests, especially when executed via the UI, will always take more time than unit tests. When the system is large, you can end up with hours of test execution time. To keep the runtimes acceptable, you can:

  • Execute tests in parallel; Tool Support: Surefire parallel option, Cucumber JUnit 5 options

  • Differentiate between priorities, where lower-priority tests are executed only nightly. Use tags to define the test sets and to configure the different test runs (see the sketch below).
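
A sketch of how test sets can be separated with JUnit 5 tags; the tag name and the test are made up. The commit pipeline can then exclude the nightly set, e.g. via Surefire's groups/excludedGroups options, while the nightly run executes everything.

Example JUnit 5 tag for a nightly-only test (sketch)
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

public class ArchiveExportTest {

    @Test
    @Tag("nightly") // low priority, executed only in the nightly test run
    public void exportOfLargeArchiveSucceeds() {
        // long-running scenario ...
    }
}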

Simplify environment setup for local tests

Local execution of tests is usually possible without any problems for isolated unit tests. In contrast, subsystem tests involve databases, message brokers or other external systems that make local execution difficult.

One possibility is to set up the necessary systems using local Docker containers or to connect to corresponding cloud services (e.g. databases). In both cases, local execution requires manual effort before the tests can be run. In a microservice architecture with very diverse microservice environments, for example different databases and versions, this can become arbitrarily complex.

Testcontainers offers a way to set up the necessary environment on the fly and without further local configuration. Testcontainers uses Docker containers in the background, which are defined directly in the test itself. For example, a Postgres database and a Kafka message broker can be defined in this way and are then automatically provided as Docker containers. Additional local configuration is no longer necessary. After the test has been completed, all resources are removed again, so that a new test execution is based on a clean environment. An example usage can be found here.
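
A minimal sketch with JUnit 5 and the Testcontainers PostgreSQL module; the image tag and the assertion are assumptions, and in a real test the JDBC URL would be passed on to the system under test.

Example Testcontainers setup for a PostgreSQL database (sketch)
import static org.assertj.core.api.Assertions.assertThat;

import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@Testcontainers
public class ProductRepositoryIT {

    // started automatically before the tests and removed again afterwards
    @Container
    static final PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:15-alpine");

    @Test
    public void databaseIsAvailable() {
        // the connection data for the system under test can be read from the running container
        assertThat(postgres.getJdbcUrl()).isNotBlank();
    }
}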

Quarkus goes one step further with DevServices. For example, if a Postgres DB extension is added as a dependency, the corresponding test containers are automatically configured and started in the background when a test is started. In addition to pure test execution, this also applies to starting the application in dev mode. However, this is so far limited to the existing DevServices since it is not possible to have user-defined DevServices.