Microservice Platforms Solutions - Azure Kubernetes Service
Microservice Platforms Introduction
Context & Problem
Microservice orchestration is the automated process of managing and scheduling the work of the individual microservices of an application within a cluster. The platform provides an automated process for managing, scaling, and maintaining the microservices of an application. The container orchestration platform Kubernetes is the current de facto standard (Docker Swarm and others are nowadays negligible compared to Kubernetes). Containers are units that package up code and all its dependencies, such as libraries, so that the application can be started quickly and runs reliably regardless of the infrastructure. Container orchestration tools automate the management of various tasks that software teams encounter in a container's lifecycle, including the following: container deployment, scaling, load balancing and traffic routing, network and container configuration, allocation of resources, gathering of insights, provisioning, scheduling, distribution of containers to physical hosts, service discovery, health monitoring, and cluster management (Link).
Introduction to Kubernetes
To achieve the above goals, a rough understanding of the Kubernetes structure is important. The control plane is responsible for managing your container workload. All actions in Kubernetes go through the api-server, which receives and executes commands. The workload is defined by the target state you define in so-called objects. A Kubernetes object is a "record of intent": once you create the object, the Kubernetes system will constantly work to ensure that the object exists. Those definitions can exist in manifest files or be obtained from the api-server.
Pods are the smallest, most basic deployable objects in Kubernetes. A Pod represents a single instance of a running process in your cluster. Pods contain one or more containers. When a Pod runs multiple containers, the containers are managed as a single entity and share the Pod's resources. However, to make life considerably easier, you don't need to manage each Pod directly. Instead, you can use workload resources that manage a set of pods on your behalf, such as a Deployment. These resources configure controllers that make sure the right number of the right kind of pod is running, to match the state you specified (see here for the concepts implementing workloads).
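As a hedged illustration of such a "record of intent", the minimal Deployment manifest below asks Kubernetes to keep three replicas of a pod running; the names and the image are hypothetical placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # hypothetical name
spec:
  replicas: 3                # target state: three pod replicas
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25    # placeholder image

Once this object is created, the controller continuously reconciles the cluster towards the declared state, e.g. replacing a pod that dies.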
Two major classes of extensions can be distinguished. They overlap to some extent, but each has a different focus: service meshes focus on core container orchestration functionality, while the other class comprises extensions that support typical application patterns such as publish/subscribe. From a programming perspective, the infrastructure falls into two categories:
- Platform orchestration support: provided by Kubernetes and service meshes
- Application platform support: provided by additional tools such as Dapr
The picture below summarizes the major aspects:

Standard Problems
This chapter focuses on problems that are unique due to the microservice focus. Links to documentation of the microservice-agnostic parts are given below. An example of a problem with both microservice-specific and agnostic parts is provisioning: creating the orchestration infrastructure is conceptually no different from other resources that are created with a pipeline. However, the additional container build step is specific to microservices based on a container orchestration platform.
The following standard problems regarding the orchestration platform support will be addressed in subsequent paragraphs:
- Deployment: On the orchestration platform level, special options exist for creating various environments. For general aspects of provisioning see the pattern "Provisioning".
- Scaling: Targets of scaling can be the underlying nodes (= VMs) of the orchestration cluster and the pods per node. Scaling can be manual, by stating a target number of components, or automatic, e.g. depending on load.
- Load balancing/traffic routing: As a core strategy for maximizing availability and scalability, load balancing distributes network traffic efficiently among multiple backend services. A range of options for load balancing external traffic to pods exists in the Kubernetes context, each with its own benefits and tradeoffs. Basic options are:
  - Load balancing with kube-proxy: simple, but not fair if clients send requests at different frequencies
  - Kubernetes Endpoints API: the load balancer uses the Kubernetes API to track the availability of pods
  - Ingress load balancer: the most popular option; allows for sophisticated load balancing rules (see the sketch after this list)
  The above list only shows the major hooks/solutions that are available in Kubernetes. More variations are possible if basic functions are taken into account:
  - Backend discovery
  - Health checking
  - Distribution algorithm, such as round robin
  - Protocol level, e.g. OSI layer 7 or layer 4 only
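As an illustration of the Ingress option above, a minimal sketch of an Ingress resource that routes external HTTP traffic to a backend Service; the host name, Service name, and port are hypothetical:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress            # hypothetical name
spec:
  rules:
  - host: app.example.com      # hypothetical external host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service  # hypothetical backend Service
            port:
              number: 80

An Ingress controller (e.g. NGINX or Application Gateway Ingress Controller) must be installed in the cluster for such rules to take effect.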
- Networking: Networking is a central part of Kubernetes, but it can be challenging to understand exactly how it is expected to work. There are four distinct networking problems to address:
  - Tightly coupled container-to-container communications: this is solved by Pods and localhost communications.
  - Pod-to-Pod communications: this is the primary focus of this document.
  - Pod-to-Service communications: this is covered by services.
  - External-to-Service communications: this is covered by services.
  Kubernetes uses the following model to organize networking: every Pod gets its own IP address. This means you do not need to explicitly create links between Pods, and you almost never need to deal with mapping container ports to host ports. Pods on a node can communicate with all pods on all nodes without NAT. Kubernetes IP addresses exist at the Pod scope: containers within a Pod share their network namespaces, including their IP address and MAC address. This means that containers within a Pod can all reach each other's ports on localhost.
- Configuration: Configuration has various dimensions:
  - Sensitive versus non-sensitive information
  - Orchestration platform versus application settings
  - Automatic versus manual deployment of configuration settings
- Scheduling: In Kubernetes, scheduling refers to making sure that Pods are matched to Nodes so that the kubelet can run them. Preemption is the process of terminating Pods with lower priority so that Pods with higher priority can be scheduled on Nodes. Eviction is the process of terminating one or more Pods on Nodes.
  Factors that need to be taken into account for scheduling decisions include individual and collective resource requirements; hardware, software, and policy constraints; affinity and anti-affinity specifications; data locality; inter-workload interference; and so on.
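As a hedged sketch of how such constraints are expressed, the fragment below combines resource requests with a node affinity rule; the node label key and value are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: scheduled-app          # hypothetical name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype      # hypothetical node label
            operator: In
            values:
            - ssd
  containers:
  - name: app
    image: nginx:1.25          # placeholder image
    resources:
      requests:
        cpu: "250m"            # scheduler only places the pod on nodes with this much free
        memory: "256Mi"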
- Service discovery: Service discovery is the actual process of figuring out how to connect to a service (Link). The approach can be either (1) client-driven or (2) server-driven.
  In the case of client-side discovery, the client is responsible for determining which service instance it should connect to. It does that by contacting a service registry component, which keeps records of all the running services and their endpoints. When a new service gets added or another one dies, the service registry is automatically updated. It is the client's responsibility to load-balance and distribute its request load across the available services.
  In server-side discovery, a load-balancing layer exists in front of the service instances. The client connects to the well-defined URL of the load balancer, and the latter determines which backend service it shall route the request to. Because a Pod can be moved or rescheduled to another Node, any internal IPs that this Pod is assigned can change over time. If we were to connect to this Pod directly to access our application, it would not work after the next re-deployment. To make a Pod reachable to external networks or clusters without relying on any internal IPs, we need another layer of abstraction. Services provide network connectivity to Pods that works uniformly across clusters. Each service exposes an IP address, and may also expose a DNS endpoint, both of which will never change. Internal or external consumers that need to communicate with a set of pods will use the service's IP address, or its more generally known DNS endpoint. In this way, the service acts as the glue for connecting pods with other pods.
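A minimal sketch of such a Service, selecting pods by label and exposing a stable port; the names and labels are hypothetical:

apiVersion: v1
kind: Service
metadata:
  name: web-service        # hypothetical; reachable via the stable DNS name web-service.<namespace>.svc.cluster.local
spec:
  selector:
    app: web               # routes to pods carrying this label, whatever their current IPs
  ports:
  - port: 80               # port exposed by the Service
    targetPort: 8080       # port the container listens on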
- Application services: Standard services on the application level include:
  - Service-to-service invocation
  - State management
  - Publish & subscribe: This pattern allows microservices to communicate with each other using messages. The producer or publisher sends messages to a topic, without knowledge of what application will receive them, by writing them to an input channel. Similarly, a consumer or subscriber subscribes to the topic and receives its messages, without any knowledge of what service produced them, by receiving messages from an output channel. An intermediary message broker is responsible for copying each message from the input channel to the output channels of all subscribers interested in that message. This pattern is especially useful when you need to decouple microservices from one another (see the sketch after this list).
  - Resource & binding triggers: Using bindings, you can trigger your app with events coming in from external systems, or interface with external systems.
  - Secrets
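Since tools such as Dapr provide this application platform support, a hedged sketch of a Dapr pub/sub component backed by Redis follows; the component name and broker address are hypothetical:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: order-pubsub           # hypothetical component name; apps publish/subscribe via this name
spec:
  type: pubsub.redis           # Redis-backed pub/sub building block
  version: v1
  metadata:
  - name: redisHost
    value: redis-master:6379   # hypothetical broker address

The application code then only addresses the component name, so the broker can be swapped without code changes.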
The following standard problems regarding the applications to be deployed will be addressed in subsequent paragraphs:
- Designing: Multiple options exist for how many containers an application consists of. In the simplest case, persistence is achieved by persistent volumes, but hosting entire databases on Kubernetes is also possible.
- Provisioning: Container images need to be built, stored in a registry, and deployed. Additional challenges during the build are, for instance, triggering dependent images if a base image is affected, or enforcing quality gates such as security scans as part of the build pipelines. Deployments might even go beyond Kubernetes if you have, for instance, a rolling update with database changes.
- Compliance: Compliance affects the building of containers and the running application, for example by restricting communication between containers.
- Configuration: Containers need to be configured. An additional challenge might therefore be to inject environment-specific values.
- Monitoring: You can examine application performance in a Kubernetes cluster by examining the containers, pods, services, and the characteristics of the overall cluster. Kubernetes provides detailed information about an application's resource usage at each of these levels. This information allows you to evaluate your application's performance.
Microservice Platforms
Azure
This chapter lists the major features and concrete services for microservice platforms within Azure. A detailed discussion of the services is part of the solution design based on a particular service.
Azure provides the following container platforms:
- Azure Kubernetes Service (AKS): A hosted Kubernetes service. Additional features on the orchestration platform level include the integration with other Azure services, such as Azure Active Directory concepts.
- Azure Container Instances (ACI): This service is intended to run single containers without native orchestration support. However, it can be integrated into Kubernetes.
- Azure Red Hat OpenShift (ARO): Provides highly available, fully managed OpenShift clusters on demand as an orchestration platform, monitored and operated jointly by Microsoft and Red Hat. Kubernetes is at the core of Red Hat OpenShift. OpenShift brings added-value features to complement Kubernetes, making it a turnkey container platform as a service (PaaS) with a significantly improved developer and operator experience.
- Azure Container Apps (in preview as of 07.11.2021): Azure Container Apps allows you to build serverless microservices based on containers. Distinctive features of Container Apps include:
  - Optimized for running general-purpose containers, especially for applications that span many microservices deployed in containers.
  - Powered by Kubernetes and open-source technologies like Dapr, KEDA, and Envoy.
  - Supports Kubernetes-style apps and microservices with features like service discovery and traffic splitting.
  - Enables event-driven application architectures by supporting scale based on traffic and pulling from event sources like queues, including scale to zero.
  - Supports long-running processes and can run background tasks.
  - All Container Apps are Kubernetes-compatible.
  Azure Container Apps doesn't provide direct access to the underlying Kubernetes APIs. If you require access to the Kubernetes APIs and control plane, you should use Azure Kubernetes Service. However, if you would like to build Kubernetes-style applications and don't require direct access to all the native Kubernetes APIs and cluster management, Container Apps provides a fully managed experience based on best practices.
- Azure Spring Cloud: Azure Spring Cloud makes it easy to deploy Spring Boot microservice applications to Azure without any code changes. The service manages the infrastructure of Spring Cloud applications so developers can focus on their code. Azure Spring Cloud provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more. If your team or organization predominantly uses Spring, Azure Spring Cloud is an ideal option.
Regarding the applications to be deployed, Azure also comes with its own container registry, the Azure Container Registry (ACR).
The picture below summarizes major points:

Microservice Platforms Solutions - Azure Kubernetes Service
Infrastructure
Overview
The solution is to use Azure Kubernetes Service together with the platform features described below. The focus of this chapter is to introduce the relevant features; recommendations for a concrete setup are given in the next chapter. The services that (can) complement Kubernetes are:
- Advisory: Proactive and actionable recommendations from Azure Advisor, based on your configuration and usage telemetry, as described here.
- Provisioning: Use Bridge to Kubernetes to iteratively develop, test, and debug microservices targeted for AKS clusters. It is a client-only experience offered through extensions in Visual Studio and Visual Studio Code. See also "Provisioning" for general aspects and for service options for pipelines that create infrastructure.
- Compliance: Use security measures on the networking level to avoid public IPs. Combine AKS with additional services, such as Application Gateway or firewalls, to control ingress and outgoing traffic.
  Enforce compliance rules for your cluster and CI/CD pipeline consistently with Azure Policy. Azure Active Directory provides access control with role-based access control (RBAC) and service principals/managed identities to back RBAC roles. Integration with Azure Security Center can provide security management, intelligent threat detection, and actionable recommendations.
- Disaster recovery: Higher availability using redundancies across availability zones, protecting applications from datacenter failures. Paired-region deployment for disaster recovery.
- Monitoring: For general aspects of infrastructure monitoring see "Monitoring". Specific pages for infrastructure monitoring exist in Azure Monitor, which will be described here.
The picture below summarizes some of the services mentioned above:

[Figure: solution_microservices_azure_aks_infra_detailed_native_setup]
Application
Overview
The solution is to deploy the containerized application to an Azure Kubernetes Service cluster. The focus of this chapter is designing, building, deploying, and monitoring containerized applications. Recommendations for a concrete setup are given in the next chapter.
The services that (can) complement Kubernetes:
- Designing: Each Pod is meant to run a single instance of a given application. If you want to scale your application horizontally (to provide more overall resources by running more instances), you should use multiple Pods, one for each instance. In Kubernetes, this is typically referred to as replication. Replicated Pods are usually created and managed as a group by a workload resource and its controller.
  The "one-container-per-Pod" model is the most common Kubernetes use case. A more advanced use case is running multiple containers in a Pod that need to work together. A Pod can encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources. These co-located containers form a single cohesive unit of service, for example, one container serving data stored in a shared volume to the public, while a separate sidecar container refreshes or updates those files. The Pod wraps these containers, storage resources, and an ephemeral network identity together as a single unit (a sketch of such a sidecar Pod follows below).
  Containers have to store information persistently. Azure provides Azure (managed) disks and Azure Files as storage options for persistent volumes (a persistent volume claim sketch follows this list). AKS can connect to databases via wrapper objects such as Services, or databases can be deployed directly to Kubernetes. Options for deploying a database directly to Kubernetes are given below:
  - SQL Server (Microsoft): Options range from a single SQL Server instance to high availability with failover groups. In both cases, Microsoft provides containers that contain SQL Server.
  - Third-party options such as PostgreSQL
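A minimal sketch of a persistent volume claim backed by an Azure managed disk; it assumes the built-in managed-csi storage class, and the claim name is hypothetical:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim               # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce                # Azure disks attach to a single node at a time
  storageClassName: managed-csi  # built-in AKS storage class (assumption)
  resources:
    requests:
      storage: 10Gi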
- Configuration: ConfigMaps are useful to store non-critical data in key-value format; they can also be used to inject environment variables into pods. Secrets are useful to store sensitive data in key-value format and can likewise be used to inject environment variables into pods. You can optionally specify how much of each resource a container needs; the most common resources to specify are CPU and memory (RAM).
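A hedged sketch of a ConfigMap whose keys are injected into a container as environment variables; all names and values are hypothetical:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config               # hypothetical name
data:
  LOG_LEVEL: "info"              # hypothetical settings
  FEATURE_X_ENABLED: "true"
---
apiVersion: v1
kind: Pod
metadata:
  name: configured-app           # hypothetical name
spec:
  containers:
  - name: app
    image: myapp:1.0             # placeholder image
    envFrom:
    - configMapRef:
        name: app-config         # injects all keys above as env vars

Secrets are injected analogously via secretRef, keeping sensitive values out of the image and the pod spec.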
- Compliance: A security context defines privilege and access control settings for a pod. Examples are (a minimal sketch follows the list):
  - Discretionary access control: permission to access an object, like a file, is based on user ID (UID) and group ID (GID).
  - Security-Enhanced Linux (SELinux): objects are assigned security labels.
  - Running as privileged or unprivileged.
  - Linux capabilities: give a process some privileges, but not all the privileges of the root user.
  - AppArmor: use program profiles to restrict the capabilities of individual programs.
  - allowPrivilegeEscalation: controls whether a process can gain more privileges than its parent process.
  - readOnlyRootFilesystem: mounts the container's root filesystem as read-only.
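A hedged sketch of a hardened pod combining several of the settings above; the pod name, image, and UID are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: secured-app                   # hypothetical name
spec:
  securityContext:                    # pod-level settings
    runAsNonRoot: true
    runAsUser: 1000                   # hypothetical non-root UID
  containers:
  - name: app
    image: myapp:1.0                  # placeholder image
    securityContext:                  # container-level settings
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]                 # start from zero Linux capabilities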
  Disks used in your AKS cluster can be encrypted by using your own keys through Azure Key Vault.
See building for additional security measures when containers are built.
- Building (CI part of provisioning): Building containers includes the following steps:
  - Building the container image(s)
  - Pushing the image(s) to the registry
  Provisioning tools such as Azure DevOps and GitHub Actions provide special Docker tasks/activities to build images; pushing to registries is also supported (a pipeline sketch follows the list below). The following additional features can be used/should be considered from a security perspective:
  - Each time a base image is updated, you should also update any downstream container images. Integrate this build process into validation and deployment pipelines such as Azure Pipelines or Jenkins. These pipelines make sure that your applications continue to run on the updated base images. Once your application container images are validated, the AKS deployments can then be updated to run the latest, secure images. Azure Container Registry Tasks can also automatically update container images when the base image is updated.
  - A container security scan can be included in the pipelines as a quality gate by using tools such as Twistlock or Aqua.
  - The provisioning services support Docker Content Trust (DCT). DCT provides digital signatures for data sent to and received from remote Docker registries. These signatures allow client-side or runtime verification of the integrity and publisher of specific image tags.
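As a hedged sketch of the Docker task mentioned above, an Azure Pipelines step that builds an image and pushes it to a registry; the repository name and service connection are hypothetical:

steps:
- task: Docker@2
  displayName: Build and push image
  inputs:
    command: buildAndPush
    repository: myapp                  # hypothetical image repository in ACR
    containerRegistry: acr-connection  # hypothetical service connection to the registry
    Dockerfile: '**/Dockerfile'
    tags: |
      $(Build.BuildId)                 # tag each build uniquely for traceability

A security scan step would typically be inserted between build and push to act as a quality gate.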
- Deployment (CD part of provisioning): The term "deployment" refers to the process that triggers a deployment in Kubernetes, whereas a Kubernetes Deployment refers to the Kubernetes Deployment resource. A Kubernetes Deployment resource is the standard controller for manipulating pods, which in turn host the container workloads.
  A deployment is triggered by the provisioning pipeline. Depending on the scope, a deployment goes beyond the Kubernetes deployment that results in a Kubernetes Deployment resource. The various steps across various scenarios can be generalized as follows:
  - Pre-Kubernetes deployment steps
  - Kubernetes deployment: Azure provisioning services provide ways to trigger deployments with native Kubernetes means such as manifests, by supporting special tasks/activities. However, this results in quite a number of files you have to maintain. Additional tools like Helm (see variations) provide better support.
  - Post-Kubernetes deployment steps
  The most complex deployment scenario is a rolling update with breaking database changes. In that case, pre and post Kubernetes deployment steps are required to handle the breaking database changes. Such an update requires targeting specific components, e.g. with a certain version. Labels are key/value pairs that are attached to objects, such as pods; they help in filtering out specific objects. Using a selector, the client/user can identify a set of objects. Annotations are used to attach arbitrary non-identifying metadata to objects (a label/selector sketch follows the list below).
  The basic idea is to break down the breaking database change into multiple non-breaking steps. The steps below refer to renaming a column:
  - Add a DB migration that inserts the new column
  - Change the app so that all writes go to both the old and the new column
  - Run a task that copies all values from the old to the new column
  - Change the app so that it reads from the new column
  - Add a migration that removes the old column
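A hedged sketch of the version labels used for such targeting; the label keys and values are hypothetical, and the matching kubectl selector is shown as a comment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders                 # hypothetical name
  labels:
    app: orders                # hypothetical labels used for targeting
    version: "2.0"
spec:
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
        version: "2.0"
    spec:
      containers:
      - name: orders
        image: orders:2.0      # placeholder image
# Select only the new version, e.g.:
#   kubectl get pods -l app=orders,version=2.0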
- Monitoring: Application logs can help in understanding the activities and status of the application. The logs are particularly useful for debugging problems and monitoring cluster activity. Monitoring applications can be done by storing logs and studying the application's metrics.
  Tools like Prometheus and Grafana are popular as they make the management of metrics very easy. Very often, sidecar containers are used as metrics exporters for the main application container.
  By integrating with Azure Monitor, a Prometheus server is not required. You just need to expose the Prometheus metrics endpoint through your exporters or pods (application), and the containerized agent for Container insights can scrape the metrics for you.
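A hedged fragment of the pod metadata that advertises such a metrics endpoint via the commonly used Prometheus annotations; this assumes annotation-based scraping is enabled in the Container insights agent configuration, and the port and path are hypothetical:

metadata:
  annotations:
    prometheus.io/scrape: "true"   # assumption: annotation-based scraping enabled in the agent config
    prometheus.io/port: "8080"     # hypothetical metrics port
    prometheus.io/path: "/metrics" # hypothetical metrics path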
Variations
The following additional tools can be used in conjunction with Kubernetes:
- Deployment: Instead of having to write separate YAML files for each application manually, you can simply create a Helm chart and let Helm deploy the application to the cluster for you. Helm charts contain templates for various Kubernetes resources that combine to form an application.
  A Helm chart can be customized when deploying it on different Kubernetes clusters. Helm charts can be created in such a way that environment- or deployment-specific configuration is extracted to a separate file, so that these values can be specified when the Helm chart is deployed. The snippet below shows a template using placeholders to refer to the values in values.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.postgres.name }}
  labels:
    app: {{ .Values.postgres.name }}
    group: {{ .Values.postgres.group }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.postgres.name }}
...
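The matching values.yaml might look as follows; the key names mirror the template above, and the values are hypothetical:

postgres:
  name: postgres        # hypothetical values consumed by the template
  group: database
replicaCount: 1

Environment-specific overrides can then be supplied at deploy time, e.g. with helm install myrelease ./chart -f values-prod.yaml.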
- Compliance: For security reasons and to improve Helm charts, it is useful to use at least one Helm linting tool to ensure your deployments are valid and versioned correctly.
  Why choose Polaris as the linting tool: For Helm chart linting, several tools such as Polaris, kube-score, or config-lint are available. With Polaris, checks and rules are already given by default, whereas other tools need a lot of custom rule configuration and are therefore more complex to set up. Polaris runs a variety of checks to ensure that Kubernetes pods and controllers are configured using best practices, helping to avoid problems in the future. Polaris can be installed either inside a cluster or as a command-line tool to analyze Kubernetes manifests statically.
- Configuration: See under infrastructure.
When to use
When you want to deploy containerized applications to Azure Kubernetes Service.