Kubernetes

Deepanshu Yadav
10 min read · Sep 26, 2021

WHAT IS KUBERNETES

Kubernetes is a container management platform designed to run enterprise-class, cloud-enabled and web-scalable IT workloads. It builds on the foundation laid by Google, based on 15 years of experience running containerized applications.

For example, Kubernetes helps e-commerce applications handle the huge rush on a big sale day, or Swiggy manage its peak-hour orders. Fast deployment and scaling are directly linked to profitability, so enterprises and startups are excited, and that is creating a huge demand for Kubernetes specialists.

Why Kubernetes?

Docker adoption continues to grow rapidly as more and more companies use it in production. At that scale, an orchestration platform becomes essential for scaling and managing your containers.

Imagine a situation where you have been using Docker for a little while, and have deployed on a few different servers. Your application starts getting massive traffic, and you need to scale up fast; how will you go from 3 servers to 40 servers that you may require? And how will you decide which container should go where? How would you monitor all these containers and make sure they are restarted if they die? This is where Kubernetes comes in.
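To make this concrete, here is a minimal sketch of a Kubernetes Deployment; the name `web` and the image are placeholders, not anything from a real application:

```yaml
# deployment.yaml -- illustrative names; Kubernetes keeps 3 copies running
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # desired number of pod copies
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example.com/web:1.0   # hypothetical container image
          ports:
            - containerPort: 8080
```

Scaling from 3 to 40 replicas is then a one-line change, or a single command (`kubectl scale deployment web --replicas=40`); Kubernetes decides which nodes the new pods land on and restarts any container that dies.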

Key Design Principles

Kubernetes is designed on the principles of scalability, availability, security and portability. It optimizes infrastructure cost by distributing the workload across available resources.

Workload Scalability

Applications deployed in Kubernetes are packaged as microservices, which are composed of multiple containers grouped into pods. Each container is designed to perform a single task, and a pod can be composed of stateless or stateful containers. Stateless pods can easily be scaled on demand or through dynamic auto-scaling. Kubernetes supports horizontal pod auto-scaling, which automatically scales the number of pods in a replication controller or Deployment based on CPU utilization; later releases added custom metrics for defining the auto-scale rules and thresholds.
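As a sketch of what that looks like today (the target Deployment `web` is carried over from the hypothetical example above), a HorizontalPodAutoscaler in the `autoscaling/v2` API scales on CPU utilization:

```yaml
# hpa.yaml -- grows or shrinks the Deployment to hold average CPU near 70%
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 40
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```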

High Availability

Contemporary workloads demand availability at both the infrastructure and application levels. In clusters at scale, everything is prone to failure, which makes high availability for production workloads strictly necessary. While most container orchestration engines and PaaS offerings deliver application availability, Kubernetes is designed to tackle the availability of both infrastructure and applications.
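One small application-level building block is a PodDisruptionBudget, which keeps a minimum number of replicas alive during node drains and other voluntary disruptions. A minimal sketch, assuming pods labeled `app: web` as in the earlier hypothetical Deployment:

```yaml
# pdb.yaml -- never let voluntary disruptions take the app below 2 pods
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: web
```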

Security

Security in Kubernetes is configured at multiple levels. The API endpoints are secured through transport layer security (TLS), and users are authenticated using the most secure mechanism available. Kubernetes clusters have two categories of users: service accounts managed directly by Kubernetes, and normal users assumed to be managed by an independent service. Service accounts are created automatically by the API server. Every operation that manages a process running within the cluster must be initiated by an authenticated user; this mechanism ensures the security of the cluster.

Applications deployed within a Kubernetes cluster can leverage the concept of secrets to securely access data. A secret is a Kubernetes object that contains a small amount of sensitive data, such as a password, token or key, which reduces the risk of accidental exposure.
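A minimal sketch of a Secret and a pod consuming it as an environment variable; the names are placeholders, and the value is just `password` base64-encoded:

```yaml
# secret.yaml -- data values are base64-encoded ("cGFzc3dvcmQ=" is "password")
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  password: cGFzc3dvcmQ=
---
# pod.yaml -- injects the secret at runtime instead of baking it into the image
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: example.com/app:1.0   # hypothetical image
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
```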

Portability

Kubernetes is designed to offer freedom of choice across operating systems, container runtimes, processor architectures, cloud platforms and PaaS. A Kubernetes cluster can be configured on mainstream Linux distributions, including CentOS, CoreOS, Debian, Fedora, Red Hat Enterprise Linux and Ubuntu.

It can be deployed on local development machines; on cloud platforms such as AWS, Azure and Google Cloud; on virtualization environments based on KVM, vSphere and libvirt; and on bare metal.

WHAT TO KNOW WHEN USING KUBERNETES👉

Kubernetes is gaining ground in the container orchestration and cloud-native application management segment. While customers have options in the form of other orchestration engines, PaaS and hosted solutions, the community and ecosystem built around Kubernetes make it a top contender.

STRENGTHS👉

📌Kubernetes has a clear governance model managed by the Cloud Native Computing Foundation (part of the Linux Foundation). Google actively drives product features and the roadmap, while allowing the rest of the ecosystem to participate.

📌A growing and vibrant Kubernetes ecosystem gives enterprises confidence in its long-term viability. Huawei, IBM, Intel and Red Hat are some of the companies making prominent contributions to the project.

📌The commercial viability of Kubernetes makes it an interesting choice for vendors.

📌Kubernetes supports a wide range of deployment options.

📌The design of Kubernetes is more operations-centric than developer-oriented, which makes it a first choice for DevOps teams.

📌Kubernetes is less prescriptive than some other PaaS offerings.

LIMITATIONS👉

✔ Kubernetes support for stateful applications is still evolving; in early releases (around 1.4), running transactional databases and big data workloads was not recommended at all (see the StatefulSet sketch after this list).

✔ Lack of support for Microsoft Windows was another major gap in the Kubernetes ecosystem at the time, with no vendors offering integration with Windows Containers and Hyper-V Containers running within the Microsoft environment (upstream Kubernetes has since added native Windows node support).
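For context on the stateful-workload limitation, later Kubernetes releases added StatefulSets, which give each pod a stable identity and its own persistent volume. A minimal sketch, with illustrative names and sizes:

```yaml
# statefulset.yaml -- a sketch; assumes a headless Service named "db" exists
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db      # headless Service giving pods stable DNS names (db-0, db-1)
  replicas: 2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:13   # illustrative database image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:        # each replica gets its own PersistentVolumeClaim
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```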

Kubernetes vs. Docker

“Kubernetes vs. Docker” is a phrase you hear more and more these days as Kubernetes becomes ever more popular as a container orchestration solution.

However, “Kubernetes vs. Docker” is also a somewhat misleading phrase. When you break it down, these words don’t mean what many people intend them to mean, because Docker and Kubernetes aren’t direct competitors. Docker is a containerization platform, and Kubernetes is a container orchestrator for container platforms like Docker.

This post aims to clear up some common confusion surrounding Kubernetes and Docker, and to explain what people really mean when they talk about “Docker vs. Kubernetes.”

It is impossible to talk about Docker without first exploring containers. Containers solve a critical issue in the life of application development. When developers write code, they work in their own local development environment; problems arise when they are ready to move that code to production. The code that worked perfectly on their machine doesn’t work in production, for varied reasons: a different operating system, different dependencies, different libraries.

Containers solved this critical issue of portability by allowing you to separate code from the underlying infrastructure it runs on. Developers can package up their application, including all of the bins and libraries it needs to run correctly, into a small container image. In production, that container can be run on any computer that has a containerization platform.

Solutions for orchestrating containers soon emerged. Kubernetes, Mesos, and Docker Swarm are some of the more popular options for providing an abstraction to make a cluster of machines behave like one big machine, which is vital in a large-scale environment.

The truth is that containers are not easy to manage at volume in a real-world production environment. Containers at volume need an orchestration system.

ORCHESTRATION SYSTEM

Orchestration is the automated configuration, management, and coordination of computer systems, applications, and services. It helps IT manage complex tasks and workflows more easily.

What does an orchestration system need to do? Among other things, it must:

  • Handle a large volume of containers and users, simultaneously. An application may have thousands of containers and users interacting with each other at the same time; managing and keeping track of these interactions requires a comprehensive overall system designed specifically for that purpose.
  • Manage service discovery and communication between containers and users (a minimal Service example follows this list). How does a user find a container and stay in contact with it? Providing each microservice with its own built-in functions for service discovery would be repetitive and highly inefficient at best; in practice, it would likely lead to intolerable slowdowns (or gridlock) at scale.
  • Balance loads efficiently. In an ad-hoc, un-orchestrated environment, loads at the container level are likely to be based largely on user requirements at the moment, resulting in highly imbalanced loads at the server level, along with logjams resulting from the inefficient allocation and resulting limited availability of containers and system resources. Load-balancing replaces this semi-chaos with order and efficient resource allocation.
  • Handle authentication and security. An orchestration system such as Kubernetes makes it easy to handle authentication and security at the infrastructure (rather than the application) level, and to apply consistent policies across all platforms.
  • Manage multi-platform deployment. Orchestration manages the otherwise very complex task of coordinating container operation, microservice availability, and synchronization in a multi-platform, multi-cloud environment.
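To make the service-discovery and load-balancing points concrete, here is a minimal sketch of a Kubernetes Service; the names are illustrative and assume pods labeled `app: web`:

```yaml
# service.yaml -- clients use one stable name; traffic is spread across pods
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web           # load-balances across every pod carrying this label
  ports:
    - port: 80         # port clients connect to
      targetPort: 8080 # port the container listens on
```

Inside the cluster, other workloads simply connect to `web` (or `web.default.svc.cluster.local`, assuming the default namespace) and never need to know which pod, or which node, answers.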

An orchestration system serves as a dynamic, comprehensive infrastructure for a container-based application, allowing it to operate in a protected, highly organized environment, while managing its interactions with the external world.

Kubernetes is well suited to this task, which is one of the reasons it has become so popular.

INDUSTRY USE CASES

NOKIA

“When people are picking up their phones and making a call on Nokia networks, they are creating containers in the background with Kubernetes.” — GERGELY CSATARI, SENIOR OPEN SOURCE ENGINEER, NOKIA

Challenge

Nokia’s core business is building telecom networks end-to-end; its main products are related to the infrastructure, such as antennas, switching equipment, and routing equipment. “As telecom vendors, we have to deliver our software to several telecom operators and put the software into their infrastructure, and each of the operators have a bit different infrastructure,” says Gergely Csatari, Senior Open Source Engineer. “There are operators who are running on bare metal. There are operators who are running on virtual machines. There are operators who are running on VMware Cloud and OpenStack Cloud. We want to run the same product on all of these different infrastructures without changing the product itself.”

Solution

The company decided that moving to cloud native technologies would allow teams to have infrastructure-agnostic behavior in their products. Teams at Nokia began experimenting with Kubernetes in pre-1.0 versions. “The simplicity of the label-based scheduling of Kubernetes was a sign that showed us this architecture will scale, will be stable, and will be good for our purposes,” says Csatari. The first Kubernetes-based product, the Nokia Telephony Application Server, went live in early 2018. “Now, all the products are doing some kind of re-architecture work, and they’re moving to Kubernetes.”
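The label-based scheduling Csatari mentions lets a pod declare what kind of node it needs. A minimal sketch, with a hypothetical node label:

```yaml
# The scheduler will only place this pod on nodes labeled disktype=ssd
apiVersion: v1
kind: Pod
metadata:
  name: fast-storage-pod
spec:
  nodeSelector:
    disktype: ssd            # illustrative label; set via "kubectl label node"
  containers:
    - name: app
      image: example.com/app:1.0   # hypothetical image
```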

Impact

Kubernetes has enabled Nokia’s foray into 5G. “When you develop something that is part of the operator’s infrastructure, you have to develop it for the future, and Kubernetes and containers are the forward-looking technologies,” says Csatari. The teams using Kubernetes are already seeing clear benefits. “By separating the infrastructure and the application layer, we have less dependencies in the system, which means that it’s easier to implement features in the application layer,” says Csatari. And because teams can test the exact same binary artifact independently of the target execution environment, “we find more errors in early phases of the testing, and we do not need to run the same tests on different target environments, like VMware, OpenStack, or bare metal,” he adds. As a result, “we save several hundred hours in every release.”

NEW YORK TIMES

“Right now, every team is running a small Kubernetes cluster, but it would be nice if we could all live in a larger ecosystem,” says Kapadia. “Then we can harness the power of things like service mesh proxies that can actually do a lot of instrumentation between microservices, or service-to-service orchestration. Those are the new things that we want to experiment with as we go forward.”

Challenge

When the company decided a few years ago to move out of its data centers, its first deployments on the public cloud were smaller, less critical applications managed on virtual machines. “We started building more and more tools, and at some point we realized that we were doing a disservice by treating Amazon as another data center,” says Deep Kapadia, Executive Director, Engineering at The New York Times. Kapadia was tapped to lead a Delivery Engineering Team that would “design for the abstractions that cloud providers offer us.”

Solution

The team decided to use Google Cloud Platform and its Kubernetes-as-a-service offering, Google Kubernetes Engine (GKE).

Impact

Speed of delivery increased. Some of the legacy VM-based deployments took 45 minutes; with Kubernetes, that time was “just a few seconds to a couple of minutes,” says Engineering Manager Brian Balser. Adds Li: “Teams that used to deploy on weekly schedules or had to coordinate schedules with the infrastructure team now deploy their updates independently, and can do it daily when necessary.” Adopting Cloud Native Computing Foundation technologies allows for a more unified approach to deployment across the engineering staff, and portability for the company.

Thanks for reading!
