Migrating a Monolithic Web Application to Microservices (MACH Architecture) on Google Kubernetes Engine (GKE)





Overview

In this session we explore the advantages and disadvantages of migrating from a monolithic application to a microservices (MACH) architecture. Microservices offer benefits such as independent testing and deployment, the freedom to use different technologies per service, and ownership by separate teams. Microservices can also be designed for failure more easily, and Kubernetes is a platform that facilitates managing, hosting, scaling, and deploying containers, which are well suited to the microservices pattern.

However, microservices can lead to increased system complexity, security concerns, performance issues due to latencies, and difficulty in observing system behaviour. A monolithic application is a single, self-contained unit whereas microservices are a network of different services that interact in ways that may not be immediately apparent. As a result, understanding how the system behaves in production can be challenging. Istio is a solution that can be used to address some of these problems, such as automatically encrypting traffic between microservices and providing better observability.

In this example, an existing monolithic application is deployed to a Google Kubernetes Engine cluster and broken down into microservices using containers. By doing so, it is possible to take advantage of the benefits of a microservices architecture while addressing its potential challenges.

(The following process is similar and adaptable to other major cloud vendors such as AWS and Azure).




Evolutionary Architecture for Microservices Migration  

The idea behind evolutionary architecture is that successful products and services will inevitably require updates and changes throughout their lifecycle to meet evolving needs and demands. One useful pattern for implementing evolutionary architecture is the strangler fig application. This pattern involves gradually replacing a monolithic architecture with a more modular and component-based one through the DevOps process, with new work following the principles of a service-oriented architecture. During this process, the new architecture may delegate to the existing system as needed, gradually taking on more and more functionality until the old system is "strangled" and replaced entirely.

In this example, we break down the monolith into three microservices, namely Orders, Products, and Frontend. The process is initiated by refactoring each microservice out of the monolith one at a time. This involves building a Docker image for each microservice using Cloud Build, followed by deploying and exposing the microservices on Google Kubernetes Engine (GKE) with a Kubernetes service type LoadBalancer. It is imperative that the monolith and the microservices are running simultaneously during this process. This will continue until the final stages, at which point the monolith can be deleted. 

By following this approach, the benefits of a microservices architecture can be realised, including the ability to independently test and deploy services, utilize different technologies, and manage services separately. By utilising containerisation technology with Kubernetes, the complexity of deploying and managing microservices can be minimised, resulting in a more efficient and scalable infrastructure.



Prerequisites:
A basic understanding of Docker and Kubernetes.


1. Cloning the source repository

In this example, we work with an existing monolithic application for an ecommerce website that features a welcome page, a products page, and an order history page. To get started, we'll need to clone the application's source code from the git repository. This will allow us to focus on breaking the application down into microservices and deploying it to Google Kubernetes Engine (GKE).

To clone the git repository and navigate to the appropriate directory, we run a series of commands in the Cloud Shell (SDK) instance. Additionally, we will need to install Node.js dependencies to ensure that the monolith is working correctly before we begin the process of breaking it down into microservices and deploying it to GKE.

cd ~
git clone https://github.com/googlecodelabs/monolith-to-microservices.git
cd ~/monolith-to-microservices
./setup.sh


2. Creating a GKE Cluster

To deploy the monolith and microservices, we need a Kubernetes cluster. Before creating a cluster, ensure that the necessary APIs are enabled. Running the following command enables the Kubernetes Engine API, allowing us to use Google Kubernetes Engine to create a cluster.

gcloud services enable container.googleapis.com

To create a GKE cluster named fancy-cluster with three e2-standard-4 nodes, run the following command (if no default compute zone is configured, add a --zone flag, e.g. --zone us-central1-a):

gcloud container clusters create fancy-cluster --num-nodes 3 --machine-type=e2-standard-4

After creating the GKE cluster, we can use the following command to see the cluster's three worker VM instances:

gcloud compute instances list

Output:














3. Deploying the existing Monolith

To deploy a monolith application to the GKE cluster, we run the script provided below.

cd ~/monolith-to-microservices
./deploy-monolith.sh


Accessing the Monolith

To access the monolith application, we need to use the external IP address of the Load Balancer service. To find the external IP address, we run the following command:

kubectl get service monolith

We should see output similar to the following:


Once we have the external IP address, we can use it to access the monolith application. Simply enter the IP address in our web browser and we should see the welcome page of the application. We now have our monolith fully running on Kubernetes!







4. Migrating Orders to Microservice

After successfully deploying the monolith website on GKE, the next step is to break each service out into smaller microservices. It is recommended to plan the decomposition carefully, identifying which services to carve out around specific parts of the application, such as business domains.

Here we create three microservices: Orders, Products, and Frontend, each representing a specific business domain. The code has already been migrated so we focus on building and deploying these microservices on Kubernetes Engine.


Creating new Microservice for Orders

Our initial step towards microservice decomposition is to break out the Orders service by building a Docker container for it with a continuous integration/continuous delivery (CI/CD) platform. Here we use Cloud Build, a fully managed CI/CD platform in GCP that helps streamline the software delivery process.

Traditionally, building and deploying a Docker container involves a two-step process, which requires you to first build the container and then push it to a registry for storage. However, with Cloud Build, you can simplify this process by using a single command to build the Docker container and store the image in the Container Registry. This means that we no longer have to issue multiple commands to build and move our Docker image to the container registry.

With Cloud Build, the files from the directory are compressed and uploaded to a Cloud Storage bucket. The Cloud Build process then takes all the files from the bucket and uses the Dockerfile to run the Docker build.
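For reference, the single-command build can also be captured in a build configuration file. A minimal sketch of a cloudbuild.yaml that is roughly equivalent to the `gcloud builds submit --tag` command used below (the file name and step are illustrative, not part of this example's repository):

```yaml
# cloudbuild.yaml -- illustrative equivalent of `gcloud builds submit --tag ...`
steps:
  # Build the Docker image from the Dockerfile in the current directory
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/orders:1.0.0', '.']
# Push the built image to Container Registry once the build succeeds
images:
  - 'gcr.io/$PROJECT_ID/orders:1.0.0'
```

Here $PROJECT_ID is a built-in Cloud Build substitution; such a file becomes useful once the build grows beyond a single step (tests, multiple images, and so on).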

We run the following command to build the Docker container and push it to Google Container Registry:

cd ~/monolith-to-microservices/microservices/src/orders
gcloud builds submit --tag gcr.io/${GOOGLE_CLOUD_PROJECT}/orders:1.0.0 .

We should see similar output:



Deploying Container to GKE

Now that we have containerized our website and transferred the container to the Google Container Registry, it's time to deploy it to Kubernetes!

In Kubernetes, applications are represented as Pods, which are the smallest deployable unit in the system. Each Pod represents a container or a group of tightly-coupled containers. In this example, our microservices container will be contained within each Pod.

To deploy and manage applications on a GKE cluster, we need to interact with the Kubernetes cluster management system. This is usually accomplished using the kubectl command-line tool from within Cloud Shell.

To begin, we'll need to create a Deployment resource. The Deployment resource manages multiple copies of our application, known as replicas, and schedules them to run on individual nodes within our cluster. In this case, the Deployment will only run one Pod for our application. Deployments ensure this by creating a ReplicaSet, which is responsible for ensuring that the specified number of replicas are always running.

The following kubectl create deployment command instructs Kubernetes to create a Deployment named orders on our cluster with a single replica:

kubectl create deployment orders --image=gcr.io/${GOOGLE_CLOUD_PROJECT}/orders:1.0.0


Note: It is considered a best practice to use a YAML file to declare any changes to the Kubernetes cluster, such as creating or modifying a deployment or service. Additionally, it is recommended to store these changes in a source control system such as GitHub or Cloud Source Repositories.
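Following that best practice, the imperative command above could instead be expressed as a declarative manifest and applied with `kubectl apply -f`. A minimal sketch whose field values mirror the command (the file name is illustrative, and the project ID placeholder must be substituted):

```yaml
# orders-deployment.yaml -- declarative equivalent of the kubectl create deployment command
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 1                  # one Pod, as in the imperative command
  selector:
    matchLabels:
      app: orders              # must match the Pod template labels below
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: gcr.io/<PROJECT_ID>/orders:1.0.0
```

Checked into source control, this file becomes the single source of truth for the Deployment's desired state.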


Verifying the Deployment

We run the following command to confirm that the Deployment was successfully created:

kubectl get all

Output:


Upon running the command, we should see that the Deployment is up to date, the ReplicaSet has the desired number of Pods (in this case 1), and the Pod is running. This indicates that everything was created successfully!


Exposing GKE Container

After deploying our application on GKE, it is not yet accessible from outside the cluster. By default, containers running on GKE do not have external IP addresses and therefore cannot be reached from the internet. To enable external access, we need a Service resource, which provides networking and IP support for our application's Pods. When we create a Service of type LoadBalancer, GKE provisions an external IP address and a load balancer for our application.

In this example, the process of exposing the service has been simplified. However, in a production environment, it is recommended to use an API gateway to secure public endpoints adhering to microservices best practices.

When we deployed the Orders service, its container was configured to listen internally on port 8081. To expose the service externally, we need to create a Kubernetes Service of type LoadBalancer that routes traffic from external port 80 to internal port 8081.

To expose our website to the internet, we run the following command:

kubectl expose deployment orders --type=LoadBalancer --port 80 --target-port 8081
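As with the Deployment, the expose command can be captured declaratively. A minimal Service manifest sketch that routes external port 80 to container port 8081 (the file name is illustrative):

```yaml
# orders-service.yaml -- declarative equivalent of the kubectl expose command
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  type: LoadBalancer      # provisions an external IP and load balancer on GKE
  selector:
    app: orders           # matches the Pods created by the orders Deployment
  ports:
    - port: 80            # externally exposed port
      targetPort: 8081    # port the Orders container listens on
```

Applying it with `kubectl apply -f orders-service.yaml` produces the same result as the command above.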


Accessing the Service

The external IP address assigned to our application on GKE is managed by the Service resource, not the Deployment resource. To find out the external IP provisioned by GKE for our application, we use the kubectl get service command to inspect the Service:

kubectl get service orders

Output:


Reconfiguring the Monolith

When decomposing a monolith into microservices, code is extracted from a single codebase and deployed as multiple services. Because these services now run at different addresses, we can no longer reference service URLs as relative paths; instead, the monolith must route to the absolute address of the Orders microservice. Updating the URL for each broken-out service requires some downtime for the monolith service, which should be taken into account when planning to move the microservices and monolith to production during the migration.

To point to the new Orders microservice IP address, the configuration file in the monolith needs to be updated using an editor.

cd ~/monolith-to-microservices/react-app
vi .env.monolith

The file should look like this:


We update REACT_APP_ORDERS_URL to the new format, substituting the Orders microservice IP address, so that it matches the following:

REACT_APP_ORDERS_URL=http://<ORDERS_IP_ADDRESS>/api/orders
REACT_APP_PRODUCTS_URL=/service/products


Save the file in the editor.

To validate the configuration changes made to the Orders microservice, access the URL we just set in the configuration file. We should receive a JSON response from the Orders microservice.




After verifying the changes, proceed to rebuild the frontend of the monolith and repeat the build process to create the container image for the monolith. Finally, redeploy the updated container image to the GKE cluster. The following command can be used to rebuild the monolith's configuration files:

npm run build:monolith


Create Docker Container with Cloud Build:

cd ~/monolith-to-microservices/monolith
gcloud builds submit --tag gcr.io/${GOOGLE_CLOUD_PROJECT}/monolith:2.0.0 .


Deploy Container to GKE:

kubectl set image deployment/monolith monolith=gcr.io/${GOOGLE_CLOUD_PROJECT}/monolith:2.0.0
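If the Deployment is managed declaratively per the YAML best-practice note earlier, the same rollout is achieved by bumping the image tag in the manifest and re-applying it; `kubectl set image` is the imperative shortcut. A sketch of the relevant manifest fragment (hypothetical file name):

```yaml
# monolith-deployment.yaml (fragment) -- changing the image tag triggers a rolling update
spec:
  template:
    spec:
      containers:
        - name: monolith
          image: gcr.io/<PROJECT_ID>/monolith:2.0.0  # updated from the previously deployed tag
```

Re-applying with `kubectl apply -f monolith-deployment.yaml` rolls the Pods over to the new image.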


To confirm that the application is now communicating with the Orders microservice, we open the monolith application in our web browser and navigate to the Orders page. We should see that all order IDs now end with the suffix "-MICROSERVICE," as shown below:




5. Migrating Products to Microservice

Creating new Products Microservice

We proceed with the next step of breaking down the monolith by migrating the Products service, following the same process as before: building a Docker container, deploying it, and exposing it via a Kubernetes service.

Run the following commands to create a Docker container using Cloud Build:

cd ~/monolith-to-microservices/microservices/src/products
gcloud builds submit --tag gcr.io/${GOOGLE_CLOUD_PROJECT}/products:1.0.0 .


Deploy Container to GKE:

kubectl create deployment products --image=gcr.io/${GOOGLE_CLOUD_PROJECT}/products:1.0.0


Expose GKE Container:

kubectl expose deployment products --type=LoadBalancer --port 80 --target-port 8082


We obtain the public IP address of the Products service in the same manner as for the Orders service, using the kubectl get service command:

kubectl get service products

Output:


We will need this IP address in the next step, when reconfiguring the monolith to connect to the new Products microservice.


Reconfiguring the monolith

Using the editor, replace the local URL with the IP address of the newly deployed Products microservice:

cd ~/monolith-to-microservices/react-app
vi .env.monolith


Our file should look like this:


Modify the format of REACT_APP_PRODUCTS_URL to match the example below, replacing the placeholder with the IP address of the deployed Products microservice:

REACT_APP_ORDERS_URL=http://<ORDERS_IP_ADDRESS>/api/orders
REACT_APP_PRODUCTS_URL=http://<PRODUCTS_IP_ADDRESS>/api/products


Save the file in the editor.

To test the newly configured microservice, navigate to the URL we set in the file and ensure that it returns a JSON response from the Products microservice.




After that, we rebuild the frontend of the monolith, repeat the build process to create a new container image for the monolith, and redeploy it to the GKE cluster. Use the following commands to complete these steps:

Rebuild Monolith Configuration Files:

npm run build:monolith


Creating Docker Container with Cloud Build:

cd ~/monolith-to-microservices/monolith
gcloud builds submit --tag gcr.io/${GOOGLE_CLOUD_PROJECT}/monolith:3.0.0 .


Deploying Container to GKE:

kubectl set image deployment/monolith monolith=gcr.io/${GOOGLE_CLOUD_PROJECT}/monolith:3.0.0


To confirm that our application is now using the new Products microservice, open the monolith application in the browser and navigate to the Products page. All product names should now be prefixed with "MS-", as shown below:







6. Migrating Frontend to Microservice

The final step in the migration process is to transfer the frontend code to a microservice and shut down the monolith, completing the migration to a microservices architecture.


Creating new Frontend Microservice

To create a new frontend microservice, we follow the same procedure as the previous two steps.

Note that when we rebuilt the monolith, we updated its configuration to point to the Orders and Products microservices. The frontend microservice needs to use that same configuration.

To copy the microservices URL configuration files to the frontend microservice codebase, run the following commands:

cd ~/monolith-to-microservices/react-app
cp .env.monolith .env
npm run build


Once the configuration files have been copied to the frontend microservice codebase, follow the same process as in the previous steps. Use the following commands to build a Docker container, deploy it, and expose it through a Kubernetes service:

Create Docker Container with Google Cloud Build:

cd ~/monolith-to-microservices/microservices/src/frontend
gcloud builds submit --tag gcr.io/${GOOGLE_CLOUD_PROJECT}/frontend:1.0.0 .

Deploying Container to GKE:

kubectl create deployment frontend --image=gcr.io/${GOOGLE_CLOUD_PROJECT}/frontend:1.0.0

Exposing GKE Container:

kubectl expose deployment frontend --type=LoadBalancer --port 80 --target-port 8080


Delete the Monolith

Now that all of the services are functioning as microservices, we may proceed to delete the monolith application. Keep in mind that in an actual migration, this would also involve making DNS changes and other modifications to ensure that our existing domain names are directing traffic to the new frontend microservices for our application.

Use the following commands to delete the monolith:

kubectl delete deployment monolith
kubectl delete service monolith


Test our work

To verify everything is working, check that the old IP address from the monolith service no longer responds and that the new IP address from our frontend service hosts the application.

To see a list of all the services and IP addresses, run the following command:

kubectl get services

Output should look similar to below:



After determining the external IP address of the frontend microservice, copy it and navigate to the URL in a browser (e.g. http://203.0.113.0) to verify that the frontend is accessible. The web application should look identical to how it did before we broke the monolith into microservices!



Review

In this example, we successfully broke down our Monolithic application into Microservices and deployed them on Kubernetes Engine (GKE). By transitioning to a Microservices (MACH) Architecture, we increased the scalability and flexibility of our application, making it easier to maintain and update in the future.