Developers need to ship changes several times a day, while users expect consistent service. Containerized apps run the same way across environments like dev, testing, and prod without any re-tooling. That is the goal containerization helps us reach.
A container is a lightweight, standalone, executable package of a piece of software that includes everything necessary to run it: code, runtime, system settings, libraries, etc. Basically, it wraps up everything the app needs. Over the past few years, developers have come to rely on containerization technology to improve the portability of their code and to ensure the consistency of their apps across a variety of platforms.
Containerization became popular after the release of Docker. Docker was the first to package Linux kernel features together in a way that lets teams create artifacts easily. But later, developers noticed that running hundreds or thousands of workloads inside containers can become an operational challenge. That is where Kubernetes comes in as a sustainable solution.
Kubernetes is an open source project that enables developers to automate the process of deploying, managing, and scaling containerized apps. It began as an internal project at Google, written in Google's Go language, and was later made public. It makes the ongoing orchestration of potentially thousands of containers manageable and helps increase the availability of your code.
Kubernetes contains a number of abstractions that represent the state of your system: deployed containerized applications and workloads, their associated network and disk resources, and other information about what your cluster is doing. These abstractions are represented by objects in the Kubernetes API.
Pods are the basic management unit in Kubernetes; each comprises one or more containers. Every pod has its own IP address and storage namespace. A pod represents a unit of deployment: a single instance of an application in Kubernetes, which might consist of either a single container or a small number of containers that are tightly coupled and that share resources.
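To make that concrete, a minimal Pod manifest might look like the following sketch. The names, labels, image, and port are hypothetical placeholders, not part of the project described here:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api-pod            # hypothetical name
  labels:
    app: state-api          # hypothetical label, used later by a Service
spec:
  containers:
    - name: api
      image: node:18-alpine # any container image works here
      ports:
        - containerPort: 8080
```

Applying this manifest with `kubectl apply -f` asks the cluster to create one pod running one container.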
Kubernetes pods are created and destroyed dynamically over time for different reasons, such as scaling in or out. Since each pod gets its own unique IP address, this leads to a problem when two or more pods need to communicate with each other through their network interfaces: the addresses keep changing. The Kubernetes Service solves this.
A Service is an abstraction which defines a logical set of Pods and a policy by which to access them - sometimes called a micro-service. The set of Pods targeted by a Service is (usually) determined by a label selector.
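As an illustration, here is a sketch of a Service manifest. The name, selector label, and ports are assumptions made up for this example, chosen to match the hypothetical pod above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: state-api-service   # hypothetical name
spec:
  selector:
    app: state-api          # routes traffic to Pods carrying this label
  ports:
    - port: 80              # port exposed by the Service
      targetPort: 8080      # port the Pod's container listens on
```

The Service gets a stable virtual IP and DNS name, so clients never need to know the (changing) IP addresses of the individual pods behind it.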
In addition to the above, there are two more Kubernetes objects called Volumes and Namespaces, along with several controllers such as ReplicaSet and Deployment.
A Kubernetes cluster is made up of the following main components:

- Master nodes
- Worker nodes
- Distributed key-value store
The master node is the entry point for all administrative tasks and manages the cluster. It runs the API server, the Scheduler, which schedules tasks onto worker nodes, and controllers, which watch the desired state of the objects they manage and watch their current state through the API server to make sure the current state matches the desired state.
The nodes in a cluster are the machines (VMs, physical servers, etc.) that run your applications and cloud workflows. The Kubernetes master controls each node; you’ll rarely interact with nodes directly.
All of the above is just theory. Even I didn’t understand it very well until I got my hands dirty with Kubernetes. So, let’s try a simple example. This project is just an API service consisting of two endpoints: given a US state code, one returns the state’s name, and given a name, the other returns the state code.
Here I have two services: an API service and a data service. The API service is implemented in Node.js, and the data service is Redis. The two services run in two different pods.
Now it’s time to host this on GKE and sleep well 😉 😴 💤