CI/CD Pipelines with Git, Spinnaker and Kubernetes (GKE)
Overview
Building a continuous integration and continuous delivery (CI/CD) pipeline is an essential part of contemporary software development. By setting up services to build, test, and deploy an application automatically, we create a streamlined and dependable deployment process. The pipeline we design can automatically rebuild, retest, and redeploy an updated version whenever the application's code changes, which lets us deliver high-quality updates to our users quickly and makes the whole development process more efficient.
Objectives
Achieving a reliable deployment process for an application on Kubernetes Engine with Spinnaker takes several steps. First, the environment is set up by creating a Kubernetes Engine cluster and configuring the necessary identity and access management. Then the sample application is obtained, a Git repository is established, and the code is uploaded to Cloud Source Repositories. Using Helm, Spinnaker is deployed on Kubernetes Engine, with a focus on building Docker images and creating triggers so that images are generated automatically whenever the application changes. Finally, a Spinnaker pipeline is configured to roll out changes to production, and the deployment process is monitored for issues or errors. The result is a dependable, continuous deployment process.
CI/CD Architecture (example)
To ensure that our application updates are delivered smoothly and efficiently, we must have a well-structured automated process that can handle building, testing, and updating our software in a reliable manner. This process should include creating artifacts, unit and functional testing, and deploying updates to production through an automated pipeline that seamlessly handles code changes. We can also implement a canary release process that updates a small subset of users before making changes available to everyone, allowing for valuable feedback before a full release. In the case of an unsuccessful canary release, an automated rollback process should be in place to quickly return to a stable and functional state. These measures are crucial to ensure that our software updates are successfully delivered to our users.
By leveraging Kubernetes Engine and Spinnaker, we can establish a continuous delivery process that is both rapid and reliable. However, to ensure that a new application revision meets our standards for deployment to production, it must first pass through multiple automated validation stages, which include building, testing, and deployment. Once the change has successfully completed these automated checks, it can be manually reviewed and subjected to further pre-release testing. After we have thoroughly validated the application and deemed it ready for production, it must be approved for deployment by a member of our team.
Continuous Delivery pipeline (example)
There are various options for implementing a continuous delivery pipeline, and Spinnaker is just one of them. Other dependable tools that can be used include GitLab or Octopus Deploy. However, Spinnaker is an advanced and widely used platform for continuous delivery, having been developed by Netflix and adopted by industry leaders like Google, JPMorgan, Airbnb, and Cisco. Its track record of successful implementation in high-pressure settings makes it a secure and reliable option for building a powerful and efficient continuous delivery process.
1. Set up the environment
To set up the necessary infrastructure and identities, the first step is to set the default compute zone and then create the Kubernetes Engine cluster that will be used to deploy both Spinnaker and the sample application.
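A minimal sketch of these commands in Cloud Shell; the zone (us-central1-f), cluster name (spinnaker-tutorial), and machine type are placeholders:

# Set the default compute zone, then create the cluster in it.
gcloud config set compute/zone us-central1-f
gcloud container clusters create spinnaker-tutorial --machine-type=n1-standard-2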
Once the process is complete, we will be presented with a report that outlines various details about the newly created Kubernetes Engine cluster. The report will include information such as the cluster name, location, version, IP address, machine type, node version, number of nodes, and status. This will indicate that the cluster is up and running.
Identity and access management configuration
To delegate permissions to Spinnaker, allowing it to store data in Cloud Storage, we need to create a Cloud Identity and Access Management (Cloud IAM) service account. This will enable Spinnaker to store its pipeline data in Cloud Storage, ensuring reliability and resiliency. In the event of an unexpected Spinnaker deployment failure, we can quickly create a new deployment with access to the same pipeline data as the original.
We then download the service account key; later, when we install Spinnaker, we will upload this key to Kubernetes Engine.
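A sketch of these steps; the service-account name spinnaker-account and the key file name spinnaker-sa.json are placeholders:

# Create the service account and capture its email address and the project ID.
gcloud iam service-accounts create spinnaker-account --display-name spinnaker-account
export SA_EMAIL=$(gcloud iam service-accounts list --filter="displayName:spinnaker-account" --format='value(email)')
export PROJECT=$(gcloud info --format='value(config.project)')
# Grant it storage.admin so Spinnaker can store pipeline data in Cloud Storage.
gcloud projects add-iam-policy-binding $PROJECT --role roles/storage.admin --member serviceAccount:$SA_EMAIL
# Download the service account key for later use.
gcloud iam service-accounts keys create spinnaker-sa.json --iam-account $SA_EMAIL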
2. Set up Cloud Pub/Sub to trigger Spinnaker pipelines
To receive notifications from Container Registry, we need to create a Cloud Pub/Sub topic.
Then, to enable Spinnaker to receive notifications of new images being pushed to Container Registry, we create a subscription for the Cloud Pub/Sub topic we created and grant the Spinnaker service account the necessary permissions to read from the gcr-triggers subscription.
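A sketch of the Pub/Sub setup; Container Registry publishes its notifications to a topic named gcr, and SA_EMAIL is assumed to hold the Spinnaker service account's email from the previous step:

# Create the topic Container Registry publishes to, and a subscription on it.
gcloud pubsub topics create gcr
gcloud pubsub subscriptions create gcr-triggers --topic gcr
# Let the Spinnaker service account read from the gcr-triggers subscription.
gcloud pubsub subscriptions add-iam-policy-binding gcr-triggers --role roles/pubsub.subscriber --member serviceAccount:$SA_EMAIL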
3. Deploying Spinnaker using Helm
We will utilize Helm (a package manager that simplifies configuring and deploying Kubernetes applications) to deploy Spinnaker from its Helm chart repository.
Configuring Helm
We grant Helm the cluster-admin role in our cluster and also bind Spinnaker to the cluster-admin role so that it can deploy resources across all namespaces.
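Assuming Helm 3 (which runs with our own credentials and needs no in-cluster component), the bindings might look like this; the binding names and the default service account Spinnaker runs under are placeholders:

# Give the current user cluster-admin so Helm can create resources anywhere in the cluster.
kubectl create clusterrolebinding user-admin-binding --clusterrole=cluster-admin --user=$(gcloud config get-value account)
# Bind the service account Spinnaker will run under to cluster-admin as well.
kubectl create clusterrolebinding spinnaker-admin --clusterrole=cluster-admin --serviceaccount=default:default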
Configuring Spinnaker
While still working in Cloud Shell, we create a bucket in which Spinnaker will store its pipeline configuration.
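A sketch of the bucket creation and the chart installation; the bucket suffix and region are placeholders, spinnaker-config.yaml stands in for a values file that points Spinnaker at the bucket and the service-account key, and the chart repository URL is a placeholder since the Spinnaker chart's hosting has moved over time:

# Derive a globally unique bucket name from the project ID and create the bucket.
export PROJECT=$(gcloud info --format='value(config.project)')
export BUCKET=$PROJECT-spinnaker-config
gsutil mb -l us-central1 gs://$BUCKET
# Install the Spinnaker chart (repo URL and release name are assumptions).
helm repo add spinnaker https://example.com/spinnaker-charts
helm install spinnaker spinnaker/spinnaker -f spinnaker-config.yaml --timeout 600s --wait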
4. Building the Docker image
Our objective is to set up Cloud Build so that it can identify and handle any modifications that occur within the application's source code. To do this, we start by obtaining a copy of the sample application's source code and extracting it. Once we have access to the code, we navigate to the relevant directory and configure our Git commits with a desired username and email address. After that, we commit our changes and proceed to create a new repository to host our updated code. We then add this repository as a remote and push our code to the master branch. Finally, we confirm that the source code is visible in the Console by selecting Navigation Menu and accessing the Source Repositories tab.
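The commands below sketch that flow; the archive name, email address, and username are placeholders:

# Extract the sample application and enter its directory.
tar xzf sample-app.tgz
cd sample-app
# Configure Git commit identity.
git config --global user.email "you@example.com"
git config --global user.name "Your Name"
# Commit the code and push it to a new Cloud Source Repositories repo.
git init
git add .
git commit -m "Initial commit"
gcloud source repos create sample-app
git config credential.helper gcloud.sh
git remote add origin https://source.developers.google.com/p/$PROJECT/r/sample-app
git push origin master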
Configuring our build triggers
We set up an automated process with Cloud Build that takes care of building and uploading Docker images to Container Registry. This process is triggered automatically whenever Git tags are pushed to our source repository. To achieve this, we configure Cloud Build to check out the source code from the repository and build a Docker image using the Dockerfile contained within. Once the image has been created, Cloud Build pushes it to Container Registry.
Creating a trigger in Cloud Build:
Visiting the Cloud Build Triggers page in the Cloud Console.
Clicking the Create trigger button.
Setting the trigger Name to sample-app-tags.
Setting the Event to "Push new tag".
Selecting our newly created sample-app repository.
Setting the Tag field to ".*" (to match any tag).
Setting the Configuration type to Cloud Build configuration file (yaml or json).
Setting the Cloud Build configuration file location to "/cloudbuild.yaml".
Clicking the CREATE button.
Every time a Git tag is pushed to the source code repository, the automated process of Cloud Build (Container Builder) is triggered. The application is built and pushed to the Container Registry as a Docker image without any manual intervention.
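Equivalently, the same trigger can be created from the command line; a sketch using gcloud, with flag names assumed from recent gcloud releases:

# Create a trigger that fires on any tag push to the sample-app repository.
gcloud builds triggers create cloud-source-repositories \
  --name="sample-app-tags" \
  --repo="sample-app" \
  --tag-pattern=".*" \
  --build-config="cloudbuild.yaml"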
Preparing our Kubernetes Manifests for use in Spinnaker
Here we create a Cloud Storage bucket to store our Kubernetes manifests, which Spinnaker will use to deploy to our clusters. During the CI process in Cloud Build, we populate this bucket with our manifests. Once the manifests are available in Cloud Storage, Spinnaker can access and apply them during the pipeline's execution.
We create the bucket and enable versioning on it so that we have a history of our manifests. We then set the correct project ID in our Kubernetes deployment manifests and commit the changes to the repository.
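A sketch of these steps; the bucket suffix, region, manifest paths, and the PROJECT placeholder token inside the manifests are assumptions about the sample's layout:

# Create a versioned bucket for the manifests.
export BUCKET=$PROJECT-kubernetes-manifests
gsutil mb -l us-central1 gs://$BUCKET
gsutil versioning set on gs://$BUCKET
# Substitute the project ID into the deployment manifests and commit.
sed -i "s/PROJECT/$PROJECT/g" k8s/deployments/*.yaml
git commit -a -m "Set project ID in the Kubernetes manifests"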
Building our image
To push our first image to the repository, in Cloud Shell, we navigate to the sample-app directory, create a Git tag, and push the tag.
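For example, assuming the first release is tagged v1.0.0:

cd sample-app
git tag v1.0.0
git push --tags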
We confirm that the build has been triggered by visiting the Cloud Build History page in the Cloud Console.
5. Configuring the deployment pipelines
After the automated creation of our images, the next step is to deploy them to the Kubernetes cluster. We do this by deploying them first to a smaller environment for integration testing purposes. In order to deploy them to the production services, approval must be given manually. This approval step is crucial to ensure that the changes meet the necessary standards and requirements for deployment to a live environment. It adds an additional layer of security and reduces the likelihood of unintended consequences or downtime caused by faulty code.
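Spinnaker pipelines are usually assembled in its UI, but as a sketch, a pipeline definition can also be saved with Spinnaker's spin CLI; pipeline.json below is a hypothetical definition file written for (or exported from) Spinnaker:

# Save a pipeline definition to Spinnaker (pipeline.json is a hypothetical file).
spin pipeline save --file pipeline.json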
Viewing a manually triggered pipeline execution
The setup we established uses notifications of newly tagged images to activate a Spinnaker pipeline. In an earlier step, we pushed a tag to Cloud Source Repositories, which prompted Cloud Build to build the image and push it to Container Registry. To confirm that the pipeline is operational, we trigger it manually.
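The manual run can be started from the Spinnaker UI (Start Manual Execution) or, as a sketch, with the spin CLI; the application and pipeline names below are placeholders:

# Kick off a manual execution of the pipeline.
spin pipeline execute --application sample --name "Deploy"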
6. Triggering the pipeline from code changes
To conduct an end-to-end test of our pipeline, we modify the code, push a Git tag, and observe the pipeline execute accordingly. When we create a Git tag that begins with the letter "v," it triggers Cloud Build to construct a new Docker image and push it to Container Registry. Upon detecting that the new image tag starts with "v," Spinnaker initiates a pipeline to deploy the image to canaries, run tests, and ultimately deploy the same image to all pods in the deployment.
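An end-to-end run might look like this; the edited file and the color change are assumptions about the sample application's layout:

# Make a small, visible change to the app (the file path is illustrative).
sed -i 's/orange/blue/g' cmd/gke-info/common-service.go
git commit -am "Change the color to blue"
# Tag with a "v" prefix so Cloud Build and Spinnaker pick it up.
git tag v1.0.1
git push --tags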
7. Observing the canary deployments
While the deployment is halted, pending rollout to the production environment, we open the webpage displaying our running application and refresh it repeatedly. At this point, four of the backends run the previous version of the application, while only one backend runs the canary, so the new version of our application should appear roughly every fifth refresh.
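Instead of refreshing a browser tab, the rollout can also be watched from Cloud Shell; the production service name below is an assumption about the sample:

# Look up the production service's external IP (service name is a placeholder).
export SERVICE_IP=$(kubectl get svc sample-frontend-production -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# Roughly one response in five should come from the canary.
while true; do curl -s http://$SERVICE_IP/version; sleep 1; done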
If required, after our application has been successfully deployed to the entire production environment, we can revert the modification by undoing the previous commit. We then create and push a new tag (v1.0.2), which routes the change through the same pipeline used to deploy v1.0.1.
To accomplish this, we will execute the following commands:
# Revert the commit that the v1.0.1 tag points to.
git revert v1.0.1
# Tag the revert and push it so the pipeline picks it up.
git tag v1.0.2
git push --tags
Once the build and pipeline are complete, we can confirm the rollback by accessing Infrastructure > Load Balancers in the Spinnaker UI, copying the Ingress IP address, and opening it in a new tab. The application should now be reverted to its previous state, and the production version number should be visible.
Review
To ensure that our users receive prompt and seamless application updates, it's crucial to establish an automated process that can reliably build, test, and update our software. This process must cover artifact creation, unit testing, functional testing, and production rollout, with code changes flowing through a pipeline automatically.
The example above provides a practical demonstration of setting up a continuous integration and delivery (CI/CD) pipeline with Kubernetes Engine (GKE), Cloud Source Repositories, Cloud Build, and Spinnaker. The pipeline automatically builds, tests, and deploys a sample application every time the code changes.