Cloud Container Orchestration - ASK & ACK

Application containerization first emerged on individual operating systems after 2000 (e.g., Sandboxie on Windows, LXC on Linux, Solaris Containers on Solaris), but it wasn't until Docker Inc. introduced Docker that containerization really took off. Among the container runtimes in common use today, Docker dominates the market, while alternatives such as containerd and CRI-O are gradually gaining popularity.

Future Prediction: Cloud will move towards Containerization - ASK & ACK
 

 


What are Application Containers?

Operating-system-level virtualization, also known as containerization, is a technology in which the operating system kernel isolates workloads, allowing users to package applications into independent units that run as isolated instances on a shared kernel. The following diagram illustrates this more clearly.


 

 

Traditional Deployment Era

In the early days, each application ran directly on physical servers. This approach couldn't define resource boundaries for applications, leading to potential resource allocation issues. For example, running multiple applications on a physical server might result in one application monopolizing most of the resources, potentially causing a decrease in performance for other applications.


Virtualization Deployment Era

As a solution, virtualization emerged. Virtualization technology allows you to run multiple virtual machines (VMs) on a single physical server's CPU. Each VM is a complete system, utilizing virtualized hardware resources to run all components, including the operating system. Virtualization allows applications to be isolated between virtual machines, providing a certain level of security. Virtualization technology also enables more efficient utilization of hardware resources on physical servers, and because applications in virtual machines can be easily added or updated, it achieves better scalability and reduces hardware costs.

Container Deployment Era

Containers are similar to virtual machines, but they allow applications to share the same underlying operating system (OS), which makes containers lighter-weight than virtual machines. Like virtual machines, containers have their own file system, share of CPU and memory, and their own process space. Container deployments are also portable across clouds and OS distributions.


Benefits of Containerization

Containers have become popular due to their numerous advantages. Some of the benefits of containers are listed below:

  • Agile application creation and deployment: Container creation typically requires minimal coding, and if there are suitable container images, deploying applications can be as simple as a single command!
  • Continuous development, integration, and deployment: Due to the immutability of images, containers support reliable and frequent container image builds and deployments, providing a fast and simple recovery process.
  • Consistency across development, testing, and production environments: There is portability across clouds and operating system distributions, such that the same container image can run on Ubuntu, RHEL, CoreOS, locally, and on any cloud.
  • Loosely coupled, distributed, elastic microservices: Applications are broken into smaller, independent parts that can be deployed and managed dynamically. Updating these independent, distributed parts is also easier than in traditional deployments, and resource isolation makes application performance more predictable and resource utilization more efficient.

Trends in Containerization

Due to the various benefits of containerization, more and more applications are being containerized. Reports indicate that the proportion of applications containerized is continuously increasing.

Image source: https://www.stackrox.com/post/2020/03/6-container-adoption-trends-of-2020/

On the other hand, the rate of containerization is also high for some commonly used technology applications. This is because vendors have already prepared containers for these applications, significantly reducing the time required to deploy application architectures.

 


Source: Datadog, November 2020

 

Why is Kubernetes Needed?

Using containers is a great way to run applications. In a production environment, you need to manage the containers running your applications and ensure they stay up. For example, if one container fails, another one needs to be started as a backup. It would be easier and faster if the system could handle this automatically.

Kubernetes provides the following features:
  • Service discovery and load balancing: Kubernetes can expose containers using DNS names or their own IP addresses. It can also balance and distribute network traffic if there's heavy traffic to a container, ensuring stable deployments.
  • Automated rollouts and rollbacks: Kubernetes allows you to describe the desired state of your deployed containers, and it then changes the actual state to match the desired state at a controlled rate. For instance, Kubernetes can automatically create new containers for a deployment, remove existing ones, and adopt their resources into the new containers.
  • Automatic scaling of compute resources: Kubernetes lets you specify the CPU and memory (RAM) required for each container. When containers specify resource requests, Kubernetes can make better decisions to manage container resources.
  • Self-healing: Kubernetes can restart failed containers, replace containers, and remove containers that fail to respond to health checks.
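
These behaviors are all driven by declarative manifests. As a minimal sketch (the name `demo-app`, the image, and all values below are illustrative assumptions, not from this article), the hypothetical Deployment declares the replica count Kubernetes keeps running, the resource requests the scheduler uses for placement, and the liveness probe behind self-healing:

```yaml
# Hypothetical Deployment manifest: Kubernetes continuously reconciles the
# cluster toward this declared state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 3                # desired state: keep 3 copies running
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: web
          image: nginx:1.21.0
          resources:
            requests:        # used by the scheduler to place the Pod
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
          livenessProbe:     # self-healing: restart on failed health checks
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 10
```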
 
Alibaba Cloud Kubernetes (ACK)

Alibaba Cloud's Container Service for Kubernetes (ACK) integrates Alibaba Cloud's virtualization, storage, networking, and security features to provide efficient and scalable container application management capabilities, supporting the entire lifecycle management of enterprise-grade containerized applications.

ACK offers both dedicated and managed versions. The dedicated version requires setting up Kubernetes master and worker nodes. The master node manages the underlying infrastructure of Kubernetes, while the worker nodes execute application containers. Setting up the master node requires a deeper understanding of Kubernetes. The managed version, on the other hand, only requires deploying worker nodes for application deployment, making it more suitable for most users.

 

Comparison between ACK and ASK (Serverless Kubernetes) Clusters


Core Advantages of ASK

 

  • Maintenance-free: Quickly create a Serverless cluster with a low barrier to entry and deploy containerized applications rapidly. There is no need to manage Kubernetes nodes or servers, allowing you to focus on your business applications.
  • Flexibility: No need to worry about capacity planning for cluster nodes. Easily and flexibly scale resources required by applications based on application loads.
  • Native compatibility: Supports native Kubernetes applications and ecosystems, including Services, Ingress, Helm, etc., allowing seamless migration of Kubernetes applications.
  • Pay-as-you-go: Only pay for what you use, with no costs incurred for idle resources. Additionally, Serverless brings lower maintenance costs.
 

Proprietary (Dedicated) Kubernetes
  • Key features: You set up both Master and Worker nodes manually. This gives the most control over the cluster infrastructure, but you must plan, maintain, and upgrade the server cluster yourself.
  • Billing: You pay for Master nodes, Worker nodes, and other infrastructure resources.

Managed Kubernetes
  • Key features: You only set up Worker nodes; Master nodes are created and managed by ACK. Simple, low-cost, and highly available, with no Master nodes to manage.
  • Billing: You pay for Worker nodes and other infrastructure resources.

Serverless Kubernetes (ASK)
  • Key features: No Master or Worker nodes to set up or manage; you can launch applications directly.
  • Billing: Billed based on container resource usage and duration.


Cluster Creation


 

ASK Application Scenarios

 

  • Application Hosting: In an ASK cluster, there is no need to manage or maintain nodes, nor is there a need for capacity planning, reducing infrastructure management and operational costs.
  • Flexible Business Workloads: For business workloads with significant fluctuations, such as online education and e-commerce industries, the scalability of ASK clusters can significantly reduce computing costs, minimize idle resource waste, and smoothly handle sudden traffic peaks.
  • Scheduled Tasks: Execute scheduled tasks in ASK clusters, with billing stopping when the tasks end. No need to maintain a fixed resource pool, avoiding resource idle waste.

 

ASK cluster creation is much simpler than ACK cluster creation, and it also offers advantages such as pay-as-you-go billing.

The following demonstrates deploying a website application with ASK, leveraging the immutability of container images for quick and easy rollback.

ASK Container Orchestration Demonstration

 

  1. In Container Service - Kubernetes, click on Cluster Creation.
  2. Choose ASK cluster, enter the cluster name, select the region, specifications, etc., and then click on "Create Cluster".
  3. It takes a few minutes to create the cluster.
  4. In Workloads - Tasks, click on "Create using Image".
  5. Enter the application name, select "Deployment" as the type, and click "Next".
  6. Choose the image name and image tag.
  7. Create the service.


  8. After creation, click on "Services", where you can find the external IP of the load balancer.


   9. Opening the external IP of the load balancer in your browser will display the application's response.


   10. On the other hand, because ASK utilizes Serverless, there won't be any ECS instance resources in the cloud service ECS.

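
For reference, the console steps above correspond roughly to the following declarative manifest. The names (`my-nginx`, `my-nginx-svc`) and the replica count are illustrative assumptions; the Service of type `LoadBalancer` is what provisions the external IP seen in step 8:

```yaml
# Rough declarative equivalent of console steps 4-8 (names are placeholders).
# Apply with: kubectl apply -f app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7       # image name and tag chosen in step 6
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
spec:
  type: LoadBalancer             # provisions the external IP from step 8
  selector:
    app: my-nginx
  ports:
    - port: 80
      targetPort: 80
```

Applying a file like this with `kubectl apply` against the ASK cluster would create the same Deployment and Service as the console wizard.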

 

 

Container Update Demo
Here's how to deploy a new image version to Kubernetes. Note that the currently deployed image version is nginx:1.7.

  1. Click "Edit".
   2. Select the image TAG, choose 1.21.0, and click "Update".


   3. In the container group, you can observe the containers progressively updating to the specified container TAG version.

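
In manifest terms, the update in step 2 amounts to changing a single line, the image tag, and re-applying. The sketch below assumes a hypothetical Deployment named `my-nginx` that was originally created with `nginx:1.7`:

```yaml
# Only the image tag changes; re-applying triggers the rolling update
# observed in step 3 (names are placeholders).
#   kubectl apply -f app.yaml
# Imperative equivalent:
#   kubectl set image deployment/my-nginx nginx=nginx:1.21.0
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.21.0    # was nginx:1.7
```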

 

Container Rollback Demo

  1. Select the Deployment and choose "Roll Back".
  2. Choose the version to roll back to.
  3. In the container group, you can observe the containers rolling back to the specified version in an orderly manner.

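
Because manifests are declarative, this rollback can be reproduced by reverting the image tag and re-applying (again assuming a hypothetical Deployment named `my-nginx`); Kubernetes also records Deployment revisions, so `kubectl rollout undo deployment/my-nginx` achieves the same result imperatively:

```yaml
# Declarative rollback: revert the manifest to the previous tag and re-apply.
#   kubectl apply -f app.yaml
# Imperative equivalent (rolls back to the previous recorded revision):
#   kubectl rollout undo deployment/my-nginx
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7       # reverted from nginx:1.21.0
```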
 

Conclusion
In this demonstration, we saw that container orchestration allows very rapid and simple deployment of application images. Because container images are immutable, deployment and rollback across different environments are also fast. Moreover, with the ability to deploy containers on Serverless infrastructure, the cost and complexity of maintaining containerized applications are reduced even further.


 

 


Author

 

 

Solution Architect
覃永德 Barry Chum