GitHub Actions deploying on AWS EKS or Azure AKS
Setup GitHub Actions workspace
Setting up a repository on GitHub
By the end of this guide, you will have created a repository on GitHub in an automated way using a script.
Prerequisites
- A GitHub user.
- GitHub CLI installed.
- Git installed.
Creating a repository using the provided script
The script located at /scripts/repositories/github/create-repo.sh allows you to either:
- Create an empty repository with just a README file and clone it to your computer into the directory you set. Useful when starting a project from scratch.
- Import an already existing directory or Git repository into your project, given a path or a URL. Useful for moving the development of an existing project to GitHub.
create-repo.sh \
-a <action> \
-d <local directory> \
[-n <repository name>] \
[-g <giturl>] \
[-b <branch>] \
[-r] \
[-s <branch strategy>] \
[-f] \
[--subpath <subpath to import>] \
[-u]
-a, --action [Required] Use case to fulfil: create, import.
-d, --directory [Required] Path to the directory where your repository will be cloned or initialized.
-n, --name Name for the GitHub repository. By default, the source repository or directory name (either new or existing, depending on use case) is used.
-g, --source-git-url Source URL of the Git repository to import.
-b, --source-branch Source branch to be used as a basis to initialize the repository on import, as master branch.
-r, --remove-other-branches Removes branches other than the (possibly new) default one.
-s, --setup-branch-strategy Creates branches and policies required for the desired workflow. Requires -b on import. Accepted values: gitflow.
-f, --force Skips any user confirmation.
--subpath When combined with -g and -r, imports only the specified subpath of the source Git repository.
-u, --public Sets repository scope to public. Private otherwise.
This is a non-exhaustive list. Make your own combination of flags if none of the following use cases fits your needs. |
./create-repo.sh -a create -n <repository name> -d <local destination directory>
If the repository name is not specified, the destination directory name will be used.
./create-repo.sh -a create -n <repository name> -d <local destination directory> -s gitflow
./create-repo.sh -a import -g <source git url> -n <repository name> -d <local destination directory>
If the repository name is not specified, the source repository name (in the URL) will be used.
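As an illustration of that default, deriving the repository name from a Git URL is plain shell string handling; a minimal sketch with an illustrative URL (not necessarily how the script itself computes it):

```shell
# Derive a default repository name from a Git URL (illustrative values):
giturl="https://github.com/devonfw/my-thai-star.git"
basename "${giturl%.git}"   # strip the .git suffix, keep the last path segment
# → my-thai-star
```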
./create-repo.sh -a import -g <source git url> -b <source branch> -s gitflow -r -n <repository name> -d <local destination directory>
This will create master (and develop, since a branching strategy is specified) from the <source branch>, removing any other branch (including <source branch>).
./create-repo.sh -a import -d <local source directory> -n <repository name>
If the repository name is not specified, the source directory name will be used.
./create-repo.sh -a import -d <local source directory> -b <source branch> -s gitflow -r -n <repository name>
This will create master (and develop, since a branching strategy is specified) from the <source branch>, removing any other branch (including <source branch>).
This operation is destructive regarding branches on the local repository. |
The same command could also be used with a local directory, but then using -b and -r would be redundant. |
Branching strategies
To ensure the quality of development, it is crucial to keep a clean Git workflow. The following branching strategies are supported (using the -s flag):
This is not an explanation of Gitflow (there are plenty of them on the web), but a description of the actions performed by the script to help you start using this workflow.
- master is the initial (seed) branch.
- develop branch is created from the master branch.
Any other branch that is part of the strategy (feature, release, and hotfix branches) will be created by developers during the lifecycle of the project.
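In plain Git terms, the branch setup performed for gitflow amounts to something like the following local sketch (the repository name is illustrative, and the actual script additionally pushes these branches to GitHub):

```shell
# Minimal local sketch of the gitflow bootstrap (illustrative):
git init demo-repo && cd demo-repo
git checkout -b master                                   # make master the seed branch
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -m "Initial commit"             # seed commit on master
git branch develop master                                # create develop from master
git branch --list                                        # lists develop and master
```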
It is possible to protect important branches against bad practices using branch protection rules.
The following branch protection rules are applied to the master and develop branches:
- Require a pull request before merging: ON
- Require approvals: 1
- Dismiss stale pull request approvals when new commits are pushed: ON
- Require conversation resolution before merging: ON
The above branch protection rules are defined in a configuration file located at /scripts/repositories/common/config/strategy.cfg. Feel free to adapt it to your needs.
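For reference, those rules correspond to fields of GitHub's branch protection REST API (PUT /repos/{owner}/{repo}/branches/{branch}/protection). The following is a sketch of that API's request body mirroring the list above, not the format of strategy.cfg itself:

```json
{
  "required_pull_request_reviews": {
    "required_approving_review_count": 1,
    "dismiss_stale_reviews": true
  },
  "required_conversation_resolution": true,
  "required_status_checks": null,
  "enforce_admins": false,
  "restrictions": null
}
```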
This is the bare minimum standard for any project. |
You can find more information about branch protection rules in the official documentation.
Setup AWS account
Setup AWS account IAM for deployment in EKS
The scope of this section is to prepare an AWS account to be ready for deploying in AWS EKS. By the end of this guide, a new IAM user belonging to a group with the required permissions will be created.
Prerequisites
- An AWS account with IAM full access permission.
If you do not have an account or permission to create new IAM users, request it from your AWS administrator, asking for the following policies to be attached. Then go to [check-iam-user-permissions].
AmazonEC2FullAccess
IAMReadOnlyAccess
AmazonEKSServicePolicy
AmazonS3FullAccess
AmazonEC2ContainerRegistryFullAccess
Find them in /scripts/accounts/aws/eks-custom-policies.json.
Creating an IAM user using the provided script
The script located at /scripts/accounts/aws/create-user.sh will automatically create a user, also enrolling it in a newly created group with the required policies attached.
In case you do not have an AWS access key (needed to authenticate through the API), follow this guide to create it.
create-user.sh \
-u <username> \
-g <group> \
[-p <policies...>] \
[-f <policies file path>] \
[-c <custom policies file path>] \
[-a <AWS access key>] \
[-s <AWS secret key>] \
[-r <region>]
-u [Required] Username for the new user
-g [Required] Group name for the group to be created or used
-p [Optional] Policies to be attached to the group, split by comma
-f [Optional] Path to a file containing the policies to be attached to the group
-c [Optional] Path to a json file containing the custom policies to be attached to the group.
-a [Optional] AWS administrator access key
-s [Optional] AWS administrator secret key
-r [Optional] AWS region
./create-user.sh -u Bob -g DevOps -f ./eks-managed-policies.txt -c ./eks-custom-policies.json -a "myAccessKey" -s "mySecretKey" -r eu-west-1
If the "DevOps" group does not exist, it will be created. |
Required policies for using EKS are located at /scripts/accounts/aws/eks-managed-policies.txt and /scripts/accounts/aws/eks-custom-policies.json. |
On success, the newly created user access data will be shown as output:
Access key ID: <accessKeyID>
Secret access key: <secretAccessKey>
It is mandatory to store the access key ID and the secret access key securely at this point, as they will not be retrievable again. |
Check IAM user permissions
The script located at /scripts/accounts/aws/verify-account-policies.sh will check that the necessary policies are attached to the IAM user.
verify-account-policies.sh \
-u <username> \
[-p <policies...>] \
[-f <policies file path>] \
[-c <custom policies file path>] \
[-a <AWS access key>] \
[-s <AWS secret key>] \
[-r <region>]
-u [Required] Username whose policies will be checked
-p [Optional] Policies to be checked, split by comma
-f [Optional] Path to a file containing the policies to be checked
-c [Optional] Path to a file containing the custom policies to be checked
-a [Optional] AWS administrator access key
-s [Optional] AWS administrator secret key
-r [Optional] AWS region
At least one policies flag (-p, -f or -c) is required. |
./verify-account-policies.sh -u Bob -f ./eks-managed-policies.txt -c ./eks-custom-policies.json -a "myAccessKey" -s "mySecretKey" -r eu-west-1
After execution, the provided policies will be shown preceded by OK or FAILED depending on the attachment status.
Required policies for using EKS are located at /scripts/accounts/aws/eks-managed-policies.txt and /scripts/accounts/aws/eks-custom-policies.json. |
Configure AWS CLI
Once you have been provided with an IAM user with the required policies attached, set up the AWS CLI using the following command:
aws configure
Fill in the prompted fields with your data:
AWS Access Key ID [None]: <accessKeyID>
AWS Secret Access Key [None]: <secretAccessKey>
Default region name [None]: eu-west-1
Default output format [None]: json
Now you have the AWS CLI ready to use.
Setup GitHub Workflows
Setting up a Build workflow on GitHub
In this section we will create a build workflow for compiling project code. This workflow will be configured to be executed as a job inside a CI workflow, regardless of which branch it is made on.
The creation of the GitHub action will follow the project workflow, so a new branch named feature/build-pipeline will be created and the YAML file for the workflow will be pushed to it.
Then, a Pull Request (PR) will be created in order to merge the new branch into the appropriate branch (provided in the -b flag). The PR will be automatically merged if the repository policies are met. If the merge is not possible, either the PR URL will be shown as output, or it will be opened in your web browser if using the -w flag.
The script located at /scripts/pipelines/github/pipeline_generator.sh will automatically create this new branch, create a build workflow based on a YAML template appropriate for the project programming language or framework, create the Pull Request and, if possible, merge this new branch into the specified branch.
Please note that this workflow, although manually triggerable, is designed to be executed as part of a CI workflow, which you can create following this guide.
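That dual behavior maps onto GitHub Actions triggers: workflow_call lets the CI workflow reuse this one as a job, while workflow_dispatch allows manual launch. A generated build workflow can be expected to start roughly like this (a hedged sketch; the actual template content may differ):

```yaml
name: quarkus-project-build   # the value passed with -n
on:
  workflow_dispatch:          # manual trigger from the Actions tab
  workflow_call:              # reusable-workflow trigger, used by the CI workflow
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ...language-specific build and artifact-upload steps from the template...
```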
Prerequisites
This script will commit and push the corresponding YAML template into your repository, so please be sure your local repository is up-to-date (i.e. you have pulled the latest changes with git pull).
pipeline_generator.sh \
-c <config file path> \
-n <workflow name> \
-l <language or framework> \
-d <project local path> \
[-t <target-directory>] \
[-b <branch>] \
[-w]
The config file for the build workflow is located at /scripts/pipelines/github/templates/build/build-pipeline.cfg. |
-c, --config-file [Required] Configuration file containing workflow definition.
-n, --pipeline-name [Required] Name that will be set to the workflow.
-l, --language [Required] Language or framework of the project.
-d, --local-directory [Required] Local directory of your project.
-t, --target-directory Target directory of build process. Takes precedence over the language/framework default one.
-b, --target-branch Name of the branch to which the Pull Request will target. PR is not created if the flag is not provided.
-w Open the Pull Request on the web browser if it cannot be automatically merged. Requires -b flag.
./pipeline_generator.sh -c ./templates/build/build-pipeline.cfg -n quarkus-project-build -l quarkus -d C:/Users/$USERNAME/Desktop/quarkus-project -b develop -w
./pipeline_generator.sh -c ./templates/build/build-pipeline.cfg -n quarkus-project-build -l quarkus-jvm -d C:/Users/$USERNAME/Desktop/quarkus-project -b develop -w
./pipeline_generator.sh -c ./templates/build/build-pipeline.cfg -n node-project-build -l node -d C:/Users/$USERNAME/Desktop/node-project -b develop -w
./pipeline_generator.sh -c ./templates/build/build-pipeline.cfg -n angular-project-build -l angular -d C:/Users/$USERNAME/Desktop/angular-project -b develop -w
Setting up a Test workflow on GitHub
In this section we will create a test workflow on GitHub for running project test cases. This workflow will be configured to be executed as a job inside a CI workflow after the build job, and consumes the artifact produced by the build workflow.
The creation of this GitHub action will follow the project workflow, so a new branch named feature/test-pipeline will be created and the YAML file for the workflow will be pushed to it.
Then, a Pull Request (PR) will be created in order to merge the new branch into the appropriate branch (provided in the -b flag). The PR will be automatically merged if the repository policies are met. If the merge is not possible, either the PR URL will be shown as output, or it will be opened in your web browser if using the -w flag.
The script located at /scripts/pipelines/github/pipeline_generator.sh will automatically create this new branch, create a test workflow based on a YAML template appropriate for the project programming language or framework, create the Pull Request and, if possible, merge this new branch into the specified branch.
Please note that this workflow, although manually triggerable, is designed to be executed as part of a CI workflow, which you can create following this guide.
Prerequisites
- This script will commit and push the corresponding YAML template into your repository, so please be sure your local repository is up-to-date (i.e. you have pulled the latest changes with git pull).
- [Optional] Having some knowledge about the application, in particular knowing if, when tested, it produces a log file or some other blob (e.g. performance profiling data) worth keeping as an artifact.
pipeline_generator.sh \
-c <config file path> \
-n <workflow name> \
-l <language or framework> \
-d <project local path> \
[-a <artifact source path>] \
[-b <branch>] \
[-w]
The config file for the test workflow is located at /scripts/pipelines/github/templates/test/test-pipeline.cfg. |
-c, --config-file [Required] Configuration file containing workflow definition.
-n, --pipeline-name [Required] Name that will be set to the workflow.
-l, --language [Required] Language or framework of the project.
-d, --local-directory [Required] Local directory of your project.
-a, --artifact-path Path to be persisted as an artifact after workflow execution, e.g. where the application stores logs or any other blob on runtime.
-b, --target-branch Name of the branch to which the Pull Request will target. PR is not created if the flag is not provided.
-w Open the Pull Request on the web browser if it cannot be automatically merged. Requires -b flag.
./pipeline_generator.sh -c ./templates/test/test-pipeline.cfg -n quarkus-project-test -l quarkus -d C:/Users/$USERNAME/Desktop/quarkus-project -b develop -w
./pipeline_generator.sh -c ./templates/test/test-pipeline.cfg -n quarkus-project-test -l quarkus-jvm -d C:/Users/$USERNAME/Desktop/quarkus-project -b develop -w
./pipeline_generator.sh -c ./templates/test/test-pipeline.cfg -n node-project-test -l node -d C:/Users/$USERNAME/Desktop/node-project -b develop -w
./pipeline_generator.sh -c ./templates/test/test-pipeline.cfg -n angular-project-test -l angular -d C:/Users/$USERNAME/Desktop/angular-project -b develop -w
Quality pipeline
Setting up a SonarQube instance in AWS
The scope of this section is to deploy an AWS EC2 instance running SonarQube for further usage from a CI pipeline. A set of scripts and a Terraform recipe have been created in order to assist you in the launch of a SonarQube instance with an embedded database.
- Have an SSH keypair for the SonarQube instance. You can use an existing one or create a new one with the following command:
aws ec2 create-key-pair --key-name sonarqube --query 'KeyMaterial' --output text > sonarqube.pem
This will create a public key, directly stored in AWS (current region only), and a private key stored in the sonarqube.pem file, which you will need if you ever have to access the instance, so be sure to store it securely. |
- main.tf: contains the declarative definition, written in HCL, of the AWS infrastructure.
- setup_sonarqube.sh: script to be run on the EC2 instance that installs and deploys a container running SonarQube.
- variables.tf: contains the variable definitions for main.tf.
- terraform.tfvars: contains the (user-changeable) values for the variables defined in variables.tf.
- terraform.tfstate: contains the current state of the created infrastructure. Should be stored securely.
- set-terraform-variables.sh: assists the user in setting the values of terraform.tfvars.
First, you need to initialize the working directory containing the Terraform configuration files (located at /scripts/sonarqube) and install any required plugins:
terraform init
Then, you may need to customize some input variables about the environment. To do so, you can either edit the terraform.tfvars file or take advantage of the set-terraform-variables script, which allows you to create or update values for the required variables, passing them as flags. As a full example:
./set-terraform-variables.sh --aws_region eu-west-1 --vpc_cidr_block 10.0.0.0/16 --subnet_cidr_block 10.0.1.0/24 --nic_private_ip 10.0.1.50 --instance_type t3a.small --keypair_name sonarqube
Unless changed, the keypair name expected by default is sonarqube. |
Finally, deploy the SonarQube instance:
terraform apply --auto-approve
The terraform apply command performs a plan and actually carries out the planned changes to each resource using the relevant infrastructure provider’s API. You can use it to perform changes on the created resources later on. Remember to securely store the terraform.tfstate file, otherwise you will not be able to perform any further changes from Terraform, including destroying the resources. More insights here. |
In particular, this will create an Ubuntu-based EC2 instance in AWS and deploy a Docker container running SonarQube.
You will get the public IP address of the EC2 instance as output. Take note of it, you will need it later on.
After a few minutes, you will be able to access the SonarQube web interface at http://sonarqube_public_ip:9000 (replace with the actual IP) with the following credentials:
- Username: admin
- Password: admin
Change the default password promptly. |
As long as you keep the terraform.tfstate file generated when creating the SonarQube instance, you can easily destroy it and all associated resources by executing:
terraform destroy
Setting up a Quality workflow on GitHub
In this section we will create a quality workflow for analyzing project code with SonarQube. This workflow will be configured to be executed as a job inside a CI workflow after the test (or build, if no test) job, and consumes the artifact produced by the build workflow.
The creation of this GitHub action will follow the project workflow, so a new branch named feature/quality-pipeline will be created and the YAML file for the workflow will be pushed to it.
Then, a Pull Request (PR) will be created in order to merge the new branch into the appropriate branch (provided in the -b flag). The PR will be automatically merged if the repository policies are met. If the merge is not possible, either the PR URL will be shown as output, or it will be opened in your web browser if using the -w flag.
The script located at /scripts/pipelines/github/pipeline_generator.sh will automatically create this new branch, create a quality workflow based on a YAML template appropriate for the project programming language or framework, create the Pull Request and, if possible, merge this new branch into the specified branch.
Please note that this workflow, although manually triggerable, is designed to be executed as part of a CI workflow, which you can create following this guide.
Prerequisites
- This script will commit and push the corresponding YAML template into your repository, so please be sure your local repository is up-to-date (i.e. you have pulled the latest changes with git pull).
- Generate a SonarQube token (just follow the section "Generating a token").
pipeline_generator.sh \
-c <config file path> \
-n <workflow name> \
-l <language or framework> \
--sonar-url <sonarqube url> \
--sonar-token <sonarqube token> \
-d <project local path> \
[-b <branch>] \
[-w]
The config file for the quality workflow is located at /scripts/pipelines/github/templates/quality/quality-pipeline.cfg. |
-c, --config-file [Required] Configuration file containing workflow definition.
-n, --pipeline-name [Required] Name that will be set to the workflow.
-l, --language [Required] Language or framework of the project.
--sonar-url [Required] SonarQube URL.
--sonar-token [Required] SonarQube token.
-d, --local-directory [Required] Local directory of your project.
-b, --target-branch Name of the branch to which the Pull Request will target. PR is not created if the flag is not provided.
-w Open the Pull Request on the web browser if it cannot be automatically merged. Requires -b flag.
./pipeline_generator.sh -c ./templates/quality/quality-pipeline.cfg -n quarkus-project-quality -l quarkus --sonar-url http://1.2.3.4:9000 --sonar-token 6ce6663b63fc02881c6ea4c7cBa6563b8247a04e -d C:/Users/$USERNAME/Desktop/quarkus-project -b develop -w
./pipeline_generator.sh -c ./templates/quality/quality-pipeline.cfg -n node-project-quality -l node --sonar-url http://1.2.3.4:9000 --sonar-token 6ce6663b63fc02881c6ea4c7cBa6563b8247a04e -d C:/Users/$USERNAME/Desktop/node-project -b develop -w
./pipeline_generator.sh -c ./templates/quality/quality-pipeline.cfg -n angular-project-quality -l angular --sonar-url http://1.2.3.4:9000 --sonar-token 6ce6663b63fc02881c6ea4c7cBa6563b8247a04e -d C:/Users/$USERNAME/Desktop/angular-project -b develop -w
Setting up a CI workflow on GitHub
In this section we will create a CI workflow for the project. This workflow will be configured to be triggered every time there is a commit to the GitHub repository, regardless of which branch it is made on. This CI workflow will execute the build workflow, and depending on the flags given, also the test and quality workflows, as its jobs.
The creation of the GitHub action will follow the project workflow, so a new branch named feature/ci-pipeline will be created and the YAML file for the workflow will be pushed to it.
Then, a Pull Request (PR) will be created in order to merge the new branch into the appropriate branch (provided in the -b flag). The PR will be automatically merged if the repository policies are met. If the merge is not possible, either the PR URL will be shown as output, or it will be opened in your web browser if using the -w flag.
The script located at /scripts/pipelines/github/pipeline_generator.sh will automatically create this new branch, create a CI workflow based on a YAML template, create the Pull Request and, if possible, merge this new branch into the specified branch.
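In GitHub Actions, the composition described above is expressed with reusable workflows called via uses: at the job level. A sketch of what the resulting CI workflow could look like (file and job names are placeholders; the generated template may differ):

```yaml
name: ci-pipeline
on:
  push:                                    # any commit, on any branch
jobs:
  build:
    uses: ./.github/workflows/build.yml    # the existing build workflow
  test:
    needs: build                           # runs after the build job
    uses: ./.github/workflows/test.yml
  quality:
    needs: test
    uses: ./.github/workflows/quality.yml
```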
Prerequisites
- This script will commit and push the corresponding YAML template into your repository, so please be sure your local repository is up-to-date (i.e. you have pulled the latest changes with git pull).
- An existing build workflow.
- [Optional] An existing test workflow.
- [Optional] An existing quality workflow.
Creating the workflow using the provided script
pipeline_generator.sh \
-c <config file path> \
-n <workflow name> \
-d <project local path> \
--build-pipeline-name <build workflow name> \
[--test-pipeline-name <test workflow name>] \
[--quality-pipeline-name <quality workflow name>] \
[-b <branch>] \
[-w]
The config file for the CI workflow is located at /scripts/pipelines/github/templates/ci/ci-pipeline.cfg. |
-c, --config-file [Required] Configuration file containing workflow definition.
-n, --pipeline-name [Required] Name that will be set to the workflow.
-d, --local-directory [Required] Local directory of your project.
--build-pipeline-name [Required] Name of the job calling the build workflow.
--test-pipeline-name Name of the job calling the test workflow.
--quality-pipeline-name Name of the job calling the quality workflow.
-b, --target-branch Name of the branch to which the Pull Request will target. PR is not created if the flag is not provided.
-w Open the Pull Request on the web browser if it cannot be automatically merged. Requires -b flag.
./pipeline_generator.sh -c ./templates/ci/ci-pipeline.cfg -n ci-pipeline -d C:/Users/$USERNAME/Desktop/quarkus-project --build-pipeline-name build --test-pipeline-name test --quality-pipeline-name quality -b develop -w
App Package pipeline
The scope of this section is to set up or create a container image registry or repository (depending on the provider) on Docker Hub, AWS, Azure or Google Cloud, allowing the pipeline that will package the application to push the resulting container image. By the end of this guide, we will get as an output the container repository URI and, for some providers, the credentials for accessing the registry.
A container image name generically has the following format:
<registry-url>/<namespace>/<image-name>:<tag>
- <registry-url>: Container registry URL, depending on the registry provider.
- <namespace>: Namespace within which the image is located.
- <image-name>: Repository/image name, which can be from one to n levels deep (depending on the provider).
- <tag>: An alphanumeric tag given as identifier.
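Since the reference is just the concatenation of those parts, it can be assembled mechanically in shell (all values below are illustrative):

```shell
# Assemble a container image reference from its parts (illustrative values):
registry_url="1000000001.dkr.ecr.eu-west-1.amazonaws.com"
namespace="devonfw"
image_name="my-thai-star-angular"
tag="latest"
echo "${registry_url}/${namespace}/${image_name}:${tag}"
# → 1000000001.dkr.ecr.eu-west-1.amazonaws.com/devonfw/my-thai-star-angular:latest
```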
Docker Hub
- A Docker Hub account is required to access the Docker Hub Registry. You can create one here.
- Log in on the Docker Hub website.
- Go to the Repositories tab and click on "Create Repository".
- Provide a Name and Visibility for the repository and click the "Create" button.
When referencing an image in Docker Hub, you do not have to specify the <registry-url>, since it is the default on Docker.
IMPORTANT: Docker Hub does not support multi-level image names. |
<namespace>/<image-name>:<tag>
- <namespace>: Username or Organization on Docker Hub.
- <image-name>: Previously chosen repository name.
- <tag>: An alphanumeric tag given as identifier.
- devonfw/my-thai-star-angular:latest
- devonfw/my-thai-star-java:1.5
- devonfw/devon4quarkus-reference:2.0
AWS ECR
- An AWS account.
- AWS CLI installed.
- Get the AWS Account ID by executing aws sts get-caller-identity.
- Log in to AWS ECR with the following command (an example <region> would be eu-west-1):
aws ecr get-login-password \
--region <region> | docker login \
--username AWS \
--password-stdin <aws-account-id>.dkr.ecr.<region>.amazonaws.com
- Create a repository namespace with the following command:
aws ecr create-repository \
--repository-name <namespace> \
--region <region>
Sample Output
{
    "repository": {
        "registryId": "123456789012",
        "repositoryName": "project-a/nginx-web-app",
        "repositoryArn": "arn:aws:ecr:eu-west-1:123456789012:repository/project-a/nginx-web-app"
    }
}
<registry-url>/<namespace>/<image-name>:<tag>
- <registry-url>: <aws-account-id>.dkr.ecr.<region>.amazonaws.com
- <namespace>: Previously chosen repository name.
- <image-name>: Freely chosen project/image name given by the user.
- <tag>: An alphanumeric tag given as identifier.
That is:
<aws-account-id>.dkr.ecr.<region>.amazonaws.com/<repository-name>/<image-name>:<tag>
- 1000000001.dkr.ecr.eu-west-1.amazonaws.com/devonfw/my-thai-star-angular:latest
- 1000100001.dkr.ecr.us-east-1.amazonaws.com/devonfw/my-thai-star/angular:1.5
- 1000200001.dkr.ecr.ap-south-1.amazonaws.com/devonfw/quarkus/sample/devon4quarkus-reference:2.0
Azure ACR
- An Azure account with an active subscription.
- An Azure resource group.
- Azure CLI installed.
- Log in to Azure using az login.
- Set the Azure subscription using az account set --subscription <mySubscription>.
- Create a registry with the following command:
az acr create --resource-group <resourcegroup-name> --name <registry-name> --sku Basic
Sample Output
{
    "adminUserEnabled": false,
    "creationDate": "2019-01-08T22:32:13.175925+00:00",
    "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.ContainerRegistry/registries/myContainerRegistry007",
    "location": "eastus",
    "loginServer": "mycontainerregistry007.azurecr.io",
    "name": "myContainerRegistry007",
    "provisioningState": "Succeeded",
    "resourceGroup": "myResourceGroup",
    "sku": {
        "name": "Basic",
        "tier": "Basic"
    },
    "status": null,
    "storageAccount": null,
    "tags": {},
    "type": "Microsoft.ContainerRegistry/registries"
}
- Enable user and password authentication on the registry with the following command:
Any authentication option that produces a long-term user and password as a result is valid. The least troublesome one follows. |
az acr update -n <registry-name> --admin-enabled true
- Retrieve the credentials for accessing the registry with the following command:
az acr credential show --name <registry-name>
<registry-url>/<namespace>/<image-name>:<tag>
- <registry-url>: <registry-name>.azurecr.io
- <namespace>/<image-name>: Freely chosen project/image name given by the user.
- <tag>: An alphanumeric tag given as identifier.
That is:
<registry-name>.azurecr.io/<namespace>/<image-name>:<tag>
- devonacr.azurecr.io/devonfw/my-thai-star-angular:latest
- devonacr.azurecr.io/devonfw/my-thai-star/angular:1.5
- devonacr.azurecr.io/devonfw/quarkus/sample/devon4quarkus-reference:2.0
Google Cloud Artifact Registry
- A Google Cloud project already set up.
- Artifact Registry API enabled for the project.
- gcloud CLI installed and configured.
- Log in to Google Cloud using gcloud auth login.
- Create a container image repository with the following command:
gcloud artifacts repositories create <repository-name> --repository-format=docker --location=<repository-location>
Sample Output
Create request issued for: [testdockerrepo]
Waiting for operation [projects/poc-cloudnative-capgemini/locations/europe-southwest1/operations/748b5502-43af-46b9-9f3
a-eb2f5bd4178c] to complete...done.
Created repository [testdockerrepo].
- Enable access to your Artifact Registry repository from your local Docker client using:
gcloud auth configure-docker <location>-docker.pkg.dev
Sample Output
Adding credentials for: europe-west9-docker.pkg.dev
After update, the following will be written to your Docker config file located at
[C:\Users\mcerverc\.docker\config.json]:
{
"credHelpers": {
"europe-west9-docker.pkg.dev": "gcloud"
}
}
<location>-docker.pkg.dev/<project-id>/<repository>/<image-name>:<tag>
- <location>: Regional or multi-regional location of the repository.
- <project-id>: Google Cloud project ID.
- <repository>: Previously chosen repository name.
- <image-name>: Freely chosen project/image name given by the user.
- <tag>: An alphanumeric tag given as identifier.
- europe-southwest1-docker.pkg.dev/poc-cloudnative-capgemini/testdockerrepo/imagendetest:v1
- us-east5-docker.pkg.dev/projecttest/repo123/helloworld:latest
Setting up a Package workflow on GitHub
In this section we will create a package workflow to build and push a container image of the project application into the specified container registry. This workflow will be configured to be triggered every time the CI workflow is executed successfully on a commit to the release/* and develop branches, requiring manual launch for other branches but still enforcing that the CI workflow has passed. It consumes the artifact produced by the build workflow.
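In GitHub Actions terms, "triggered when the CI workflow completes successfully" corresponds to the workflow_run trigger. The top of the generated package workflow can be expected to resemble this sketch (names are placeholders; the exact branch filtering and success check may be implemented differently in the template):

```yaml
name: quarkus-project-package
on:
  workflow_dispatch:                   # manual launch for other branches
  workflow_run:
    workflows: ["quarkus-project-ci"]  # the --ci-pipeline-name value
    types: [completed]
    branches:
      - develop
      - release/*
# jobs would then guard on the CI result, e.g.:
#   if: ${{ github.event.workflow_run.conclusion == 'success' }}
```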
The creation of the GitHub action will follow the project workflow, so a new branch named feature/package-pipeline will be created and the YAML file for the workflow will be pushed to it.
Then, a Pull Request (PR) will be created in order to merge the new branch into the appropriate branch (provided in the -b flag). The PR will be automatically merged if the repository policies are met. If the merge is not possible, either the PR URL will be shown as output, or it will be opened in your web browser if using the -w flag.
The script located at /scripts/pipelines/github/pipeline_generator.sh will automatically create this new branch, create a package workflow based on a YAML template, create the Pull Request and, if possible, merge this new branch into the specified branch.
This script will commit and push the corresponding YAML template into your repository, so please be sure your local repository is up-to-date (i.e. you have pulled the latest changes with git pull).
pipeline_generator.sh \
-c <config file path> \
-n <workflow name> \
-l <language or framework> \
--dockerfile <dockerfile path> \
-d <project local path> \
--ci-pipeline-name <ci workflow name> \
-i <image name> \
[-u <registry user>] \
[-p <registry password>] \
[--aws-access-key <aws access key>] \
[--aws-secret-access-key <aws secret access key>] \
[--aws-region <aws region>] \
[-b <branch>] \
[-w]
The config file for the package workflow is located at /scripts/pipelines/github/templates/package/package-pipeline.cfg .
-c, --config-file [Required] Configuration file containing workflow definition.
-n, --pipeline-name [Required] Name that will be set to the workflow.
-l, --language [Required, if dockerfile not set] Language or framework of the project.
--dockerfile [Required, if language not set] Path from the root of the project to its Dockerfile. Takes precedence over the language/framework default one.
-d, --local-directory [Required] Local directory of your project.
--ci-pipeline-name [Required] CI workflow name.
-i, --image-name [Required] Name (excluding tag) for the generated container image.
-u, --registry-user [Required, unless AWS or GCP] Container registry login user.
-p, --registry-password [Required, unless AWS or GCP] Container registry login password.
--aws-access-key [Required, if AWS] AWS account access key ID. Takes precedence over registry credentials.
--aws-secret-access-key [Required, if AWS] AWS account secret access key.
--aws-region [Required, if AWS] AWS region for ECR.
-b, --target-branch Name of the branch to which the Pull Request will target. PR is not created if the flag is not provided.
-w Open the Pull Request on the web browser if it cannot be automatically merged. Requires -b flag.
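Note that the -i value excludes the tag: the workflow appends one (for example, latest) when pushing. A sketch of how the final image reference is composed, using the Google Artifact Registry path shown earlier as an example:

```shell
# -i provides the image name without a tag; the workflow appends the tag
# (e.g. "latest") at push time. Example composition:
IMAGE_NAME="us-east5-docker.pkg.dev/projecttest/repo123/helloworld"
TAG="latest"
echo "${IMAGE_NAME}:${TAG}"
# → us-east5-docker.pkg.dev/projecttest/repo123/helloworld:latest
```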
./pipeline_generator.sh -c ./templates/package/package-pipeline.cfg -n quarkus-project-package -l quarkus -d C:/Users/$USERNAME/Desktop/quarkus-project -i username/quarkus-project -u username -p password --ci-pipeline-name quarkus-project-ci -b develop -w
./pipeline_generator.sh -c ./templates/package/package-pipeline.cfg -n quarkus-project-package -l quarkus -d C:/Users/$USERNAME/Desktop/quarkus-project -i username/quarkus-project --aws-access-key AKIAIOSFODNN7EXAMPLE --aws-secret-access-key wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY --aws-region eu-west-1 --ci-pipeline-name quarkus-project-ci -b develop -w
./pipeline_generator.sh -c ./templates/package/package-pipeline.cfg -n node-project-package -l node -d C:/Users/$USERNAME/Desktop/node-project -i username/node-project -u username -p password --ci-pipeline-name node-project-ci -b develop -w
./pipeline_generator.sh -c ./templates/package/package-pipeline.cfg -n node-project-package -l node -d C:/Users/$USERNAME/Desktop/node-project -i username/node-project --aws-access-key AKIAIOSFODNN7EXAMPLE --aws-secret-access-key wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY --aws-region eu-west-1 --ci-pipeline-name node-project-ci -b develop -w
./pipeline_generator.sh -c ./templates/package/package-pipeline.cfg -n angular-project-package -l angular -d C:/Users/$USERNAME/Desktop/angular-project --ci-pipeline-name angular-project-ci -i username/angular-project -u username -p password -b develop -w
./pipeline_generator.sh -c ./templates/package/package-pipeline.cfg -n angular-project-package -l angular -d C:/Users/$USERNAME/Desktop/angular-project --ci-pipeline-name angular-project-ci -i username/angular-project --aws-access-key AKIAIOSFODNN7EXAMPLE --aws-secret-access-key wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY --aws-region eu-west-1 -b develop -w
Setting up a library package pipeline on Azure DevOps
In this section we will create a package pipeline to generate a distributable of a library and push it to a feed in Azure DevOps. This pipeline will be triggered every time the quality pipeline completes successfully on a commit to the release/* or develop branches; for any other branch it must be launched manually, but it still enforces that the quality pipeline has passed. Depending on the language, it consumes the artifact produced by the build pipeline.
The creation of this pipeline will follow the project workflow, so a new branch named feature/library-package-pipeline will be created. The YAML file and the bash script for the pipeline will be pushed to it, as well as a generated file containing the required credentials for connecting to the feed.
Then, a Pull Request (PR) will be created in order to merge the new branch into the appropriate branch (provided in the -b flag). The PR will be merged automatically if the repository policies are met. If the merge is not possible, the PR URL will be shown as output, or it will be opened in your web browser if using the -w flag.
The script located at /scripts/pipelines/azure-devops/pipeline_generator.sh will automatically create this new branch, create a library package pipeline based on a YAML template, create the feed, create a PAT token with permission to push to that feed, create the Pull Request and, if possible, merge this new branch into the specified branch.
Prerequisites
This script will commit and push the corresponding YAML template into your repository, so please make sure your local repository is up to date (i.e. you have pulled the latest changes with git pull).
Creating the pipeline using provided script
./pipeline_generator.sh \
-c <config file path> \
-n <pipeline name> \
-l <language or framework> \
-d <project local path> \
--build-pipeline-name <build pipeline name> \
--quality-pipeline-name <quality pipeline name> \
[-b <branch>] \
[-w]
The config file for the library package pipeline is located at /scripts/pipelines/azure-devops/templates/library-package/library-package-pipeline.cfg .
-c, --config-file [Required] Configuration file containing pipeline definition.
-n, --pipeline-name [Required] Name that will be set to the pipeline.
-l, --language [Required] Language or framework of the project.
-d, --local-directory [Required] Local directory of your project.
--build-pipeline-name [Required, if Java/Maven] Build pipeline name.
--quality-pipeline-name [Required] Quality pipeline name.
-b, --target-branch Name of the branch to which the Pull Request will target. PR is not created if the flag is not provided.
-w Open the Pull Request on the web browser if it cannot be automatically merged. Requires -b flag.
./pipeline_generator.sh -c ./templates/library-package/library-package-pipeline.cfg -n library-package-pipeline -l java -d C:/Users/$USERNAME/Desktop/java-library-project --build-pipeline-name java-library-build --quality-pipeline-name java-library-quality
./pipeline_generator.sh -c ./templates/library-package/library-package-pipeline.cfg -n library-package-pipeline -l node -d C:/Users/$USERNAME/Desktop/node-library-project --quality-pipeline-name node-library-quality
Adding Maven feed to library repository list
Go to the Artifacts section in Azure DevOps and select the Maven feed newly created by the script (default name is maven-feed). Then follow the Set up the Maven client guide from the Azure DevOps official documentation to perform the required steps on the library project code side so the library can be pushed from the pipeline.
Ignore the settings.xml part, as it is created by the script. Only the pom.xml modification is needed.
Appendix
Go to Azure Artifacts and click on the artifact of the library. Then copy the contents of the <dependency> element and paste it inside the <dependencies> element of your pom.xml file.
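The pasted block ends up looking like this sketch (the coordinates below are placeholders; copy the real <dependency> element from the library's page in Azure Artifacts):

```xml
<dependencies>
  <!-- Placeholder coordinates: replace with the <dependency> element
       copied from the library's page in Azure Artifacts -->
  <dependency>
    <groupId>com.mycompany</groupId>
    <artifactId>java-library-project</artifactId>
    <version>1.0.0</version>
  </dependency>
</dependencies>
```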
Then, follow the Set up the Maven client guide again to connect the main project to the Maven feed so it can consume the library.
Ignore the settings.xml part, as it is not necessary for consuming an artifact. Only the pom.xml modification is needed.