How to Use Kubernetes in an Industry Environment

Ashutosh Rai
9 min read · Jan 8, 2021

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem, and Kubernetes services, support, and tools are widely available.

The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google’s experience running production workloads at scale with best-of-breed ideas and practices from the community.

History

Kubernetes was founded by Joe Beda, Brendan Burns, and Craig McLuckie, who were quickly joined by other Google engineers including Brian Grant and Tim Hockin; it was first announced by Google in mid-2014. Its development and design are heavily influenced by Google’s Borg system, and many of the top contributors to the project previously worked on Borg. The original codename for Kubernetes within Google was Project 7, a reference to the Star Trek ex-Borg character Seven of Nine; the seven spokes on the wheel of the Kubernetes logo are a reference to that codename. The original Borg project was written entirely in C++, but the rewritten Kubernetes system is implemented in Go.

Kubernetes v1.0 was released on July 21, 2015. Along with the v1.0 release, Google partnered with the Linux Foundation to form the Cloud Native Computing Foundation and offered Kubernetes as a seed technology. In February 2016, the Helm package manager for Kubernetes was released. On March 6, 2018, the Kubernetes project reached ninth place in commits on GitHub, and second place in authors and issues, after the Linux kernel.

Up to v1.18, Kubernetes followed an N-2 support policy, meaning that the three most recent minor versions received security and bug fixes.

From v1.19 onwards, Kubernetes follows an N-3 support policy.

1. Learning Kubernetes by deploying a simple app

The first case where you can make use of Kubernetes may seem controversial, but it is still very useful. Let’s assume that we have a simple three-tier application with a backend written in Python/PHP, a database, and a front-end created in React or Angular. To deploy it, you can use Kubernetes. Yes, from a purely practical point of view this would not be very reasonable: Kubernetes is complex, and creating a Kubernetes cluster to run one simple app would mean doing unnecessary work. Further, you can deploy such an app using other, less expensive solutions. But there is an educational purpose that shouldn’t be overlooked: in undertaking such a deployment, you will learn how to run a Kubernetes cluster and deploy applications on it.
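
To make this concrete, here is a minimal sketch of what such a first deployment could look like: a Deployment that runs the backend and a Service that exposes it inside the cluster. The image name, labels and ports below are hypothetical placeholders, not taken from a real app.

```yaml
# deployment.yaml - two replicas of a hypothetical backend container
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-backend
  template:
    metadata:
      labels:
        app: demo-backend
    spec:
      containers:
      - name: backend
        image: example/demo-backend:1.0   # hypothetical image
        ports:
        - containerPort: 8000
---
# service.yaml - gives the backend a stable address inside the cluster
apiVersion: v1
kind: Service
metadata:
  name: demo-backend
spec:
  selector:
    app: demo-backend
  ports:
  - port: 80
    targetPort: 8000
```

Applying this with kubectl apply -f ., watching the pods come up, and wiring up the front-end and database in the same way is, in a nutshell, the learning exercise this section is about.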

There is one more practical, and more advanced, scenario where we can use Kubernetes to deploy apps. Imagine we work in a creative agency that is developing a marketing webpage for a client in the pharmaceutical industry. Each medicine advertised on the main page requires a separate webpage presenting a leaflet of information about the medicine: its ingredients, dosage, possible adverse effects, and so on. Each medicine would also have a dedicated app. In this scenario, we would be well advised to call on the power of Kubernetes. Thanks to the better resource allocation it affords, it will be cheaper to run one dedicated K8s cluster than a separate server for each website. What’s more, it will be much easier to manage such a cluster than a fleet of separate hosts. So, in this case, using Kubernetes is perfectly reasonable.
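
One plausible way to lay this out on a single cluster is to run each medicine’s site as its own Deployment and Service, and route traffic by hostname with a single Ingress. A sketch, with all names and domains hypothetical:

```yaml
# ingress.yaml - one cluster, many product sites, routed by hostname
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: product-sites
spec:
  rules:
  - host: medicine-a.example.com        # hypothetical domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: medicine-a-site       # one Deployment/Service per medicine
            port:
              number: 80
  - host: medicine-b.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: medicine-b-site
            port:
              number: 80
```

Adding a new medicine then means adding one more Deployment, Service and Ingress rule, not provisioning another server.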

2. Microservices architecture

Deploying a more complicated app with many components that communicate with one another is the classic scenario for Kubernetes. In fact, Kubernetes’ origins go back to Google’s need to deploy, manage and scale apps more efficiently by using containers; that is how the container orchestration platform was born. So, we now have a K8s cluster with one complicated app deployed on it. This app has numerous components that communicate with one another, and Kubernetes helps you manage this communication.

This is closely related to another important trend in software development: microservice architecture, which I’ll explain using the example of an Internet bookstore. In such a store, we have different functionalities: managing users, ordering books, managing order lists, etc. There can be many such functionalities, and each of them is a separate app; this practical realization is what is aptly called microservices. All these apps must communicate with each other, and to enable such communication and coordination, each component must expose a well-defined interface, regardless of the programming language it is written in.

Here you can clearly see the power of Kubernetes in managing microservices. It takes such tasks off developers’ hands as detecting communication problems between the app’s components, managing component behaviour in the event of a failure, and managing the authentication processes between components. What’s more, as a particular component comes to need more or fewer resources, Kubernetes automatically scales it up or down. This is a clear advantage of the microservice architecture: scalability. You can scale a single component rather than the whole app.
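
Take service discovery as a concrete example: each microservice gets a stable in-cluster DNS name through a Service object, so components find one another without hard-coded addresses. A hypothetical sketch for the order-management component of the bookstore:

```yaml
# orders-service.yaml - gives the hypothetical "orders" microservice a stable
# DNS name: orders.bookstore.svc.cluster.local
apiVersion: v1
kind: Service
metadata:
  name: orders
  namespace: bookstore      # hypothetical namespace for the bookstore app
spec:
  selector:
    app: orders
  ports:
  - port: 80
    targetPort: 8080
```

The user-management component can then simply call http://orders.bookstore/ and Kubernetes load-balances the request across the healthy replicas.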

Kubernetes has built-in tools like the Horizontal Pod Autoscaler, which helps ensure that each microservice has the optimal number of replicas. Thanks to this, cluster operators can be sure that the application has enough resources to work smoothly but doesn’t waste valuable resources.
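
As an illustration, an autoscaler for the hypothetical orders microservice might look like this (using the autoscaling/v2beta2 API current at the time of writing; newer clusters use autoscaling/v2 with the same shape):

```yaml
# orders-hpa.yaml - keep between 2 and 10 replicas, targeting 70% average CPU
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: orders
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: orders
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```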

Of course, at the design stage, it has to be decided which architecture is better for a given app, as there are many different approaches to software development. Microservices are not always the best choice. Still, if microservice architecture is chosen, Kubernetes offers a number of advantages. It simplifies the entire process of managing app components and considerably reduces the work needed to get the app up and running.

3. Lift and shift — from servers to cloud

This scenario occurs frequently today, as software is migrated from on-prem infrastructure to cloud solutions. Let’s imagine the following situation. We have an application deployed on physical servers in a classical data center. For practical or economic reasons, it has been decided to move it to the cloud: either to a Virtual Machine or to big pods in Kubernetes. Of course, moving it to big pods in K8s isn’t a cloud-native approach, but it can be treated as an intermediate phase. First, the big app working outside the cloud is moved unchanged into Kubernetes. It is then split into smaller components to become a regular cloud-native app. This methodology is called “lift and shift” and is a good use case where Kubernetes can be used effectively.
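
In manifest terms, the intermediate “big pod” step can be as plain as the sketch below: the unchanged monolith packaged into a single container, with resource requests sized like its old physical server. The image name and sizes are hypothetical:

```yaml
# monolith.yaml - the legacy app lifted into Kubernetes as one big pod
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-monolith
spec:
  replicas: 1                # the monolith is not horizontally scalable yet
  selector:
    matchLabels:
      app: legacy-monolith
  template:
    metadata:
      labels:
        app: legacy-monolith
    spec:
      containers:
      - name: app
        image: registry.example.com/legacy-app:2021.1   # hypothetical image
        resources:
          requests:
            cpu: "4"         # sized like the old server
            memory: 16Gi
          limits:
            cpu: "8"
            memory: 32Gi
```

Splitting this Deployment into per-component Deployments later is the “shift” part of the journey.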

4. Cloud-native Network Functions (CNF)

A few years ago, big telco companies had a problem. Their network services were based on hardware such as firewalls or load balancers provided by specialized hardware companies. Of course, this left them dependent on the hardware providers and gave them little in the way of flexibility. If new functionality was needed, operators had to upgrade the existing hardware; when a device firmware update was not possible, additional hardware had to be purchased. To address this disadvantage, the telcos opted to implement all these network services in software, using Virtual Machines and OpenStack for network function virtualization (NFV). They now want to go a step further and use containers for the same purpose.

This approach is called Cloud-native Network Functions (CNF). R&D projects are now afoot, focusing on moving from VM-based Virtual Network Functions to container-based network functions. In such a scenario, Kubernetes would be responsible not only for orchestrating the containers, but also for directing network traffic to the proper pods. However, this is still a research area: there are as yet no established standards for the various network components, so software providers deliver differing implementations of the same functions. The Cloud Native Computing Foundation (CNCF) and LF Networking (LFN) have joined forces to launch the Cloud Native Network Functions (CNF) Testbed in order to foster the evolution from VNFs to CNFs. We will be keeping abreast of the research in this area and foresee Kubernetes playing an important role here.

5. Machine learning and Kubernetes

Machine learning techniques are now widely used to solve real-life problems. Successes have come in multiple fields — self-driving cars, image recognition, machine translation, speech recognition, game playing (Go or poker). Machine learning models have beaten even humans in games like Go, which was once thought to be too difficult a game for machines to crack. Moreover, AI could lead to real breakthroughs in detecting cancer and drug discovery. The business world has not failed to get in on the technology, either. Google, Microsoft and Amazon, to name three behemoths, have all put machines to good use, while other companies are investing heavily to boost their AI capabilities.

Yet the process of building an effective AI model and using it in production is complicated and time-consuming. Building an app that can reliably recognize whether an image shows a cat or a dog is a case in point. First of all, a large dataset of images tagged “cat” or “dog” must be gathered. Then a machine learning model is trained on this data to classify it, that is, to learn to correctly recognize images that appear in neither the training nor the test dataset. After the model is trained, it is embedded in an app that will be made available to the public.

As you can see, it takes time to use an AI-trained model in an application. Therefore, many companies would like to simplify this process and make the life of data scientists or ML engineers easier by introducing a toolkit to speed up the whole process. In this way, the number of operations necessary to deploy such an app will be significantly reduced, shortening the app’s time-to-market. In this scenario, enterprises can harness the power of Kubernetes, as all the calculations necessary to train the ML model are performed inside the K8s cluster. The data scientist or ML engineer will only need to clean the data and write the code. The rest will be handled by a toolkit based on Kubernetes. Such toolkits are already available on the market: Kubeflow by Google and CodiLime spin-off Neptune both come to mind. The increasing demand for AI-powered solutions will surely further promote the adoption of Kubernetes.
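
Under the hood, a training run on Kubernetes is typically just a batch Job. A hedged sketch for the cats-vs-dogs example; the image and training script are placeholders, and the GPU request assumes the cluster has the NVIDIA device plugin installed:

```yaml
# train-job.yaml - a one-off training run as a Kubernetes Job
apiVersion: batch/v1
kind: Job
metadata:
  name: train-cats-vs-dogs
spec:
  backoffLimit: 2                 # retry a failed run at most twice
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: trainer
        image: example/cats-vs-dogs-trainer:1.0           # hypothetical image
        command: ["python", "train.py", "--epochs", "20"] # hypothetical script
        resources:
          limits:
            nvidia.com/gpu: 1     # schedule onto a GPU node
```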

6. Computing power for resource-hungry tasks

Recently, Emma Haruka Iwao, a Google employee, broke the Guinness World Record in computing Pi by reaching 31.4 trillion digits. Such calculations require huge computing power, so a Kubernetes cluster would be a natural solution here to manage the distribution of the calculations across multiple computers. Were we to follow in Iwao’s footsteps, we would only need to write a program to perform the calculations. Kubernetes would handle the rest. Another computation-heavy case that could make use of the power of K8s is drug discovery.
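
A common pattern for spreading such a computation over a cluster is a parallel Job: many worker pods, each picking up a chunk of the work, typically from a shared work queue. A sketch under those assumptions, with a hypothetical worker image:

```yaml
# workers.yaml - 100 chunks of work, 10 worker pods running at a time
apiVersion: batch/v1
kind: Job
metadata:
  name: compute-workers
spec:
  completions: 100           # total chunks to finish
  parallelism: 10            # pods running concurrently
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: worker
        image: example/compute-worker:1.0   # hypothetical image
        command: ["worker"]   # hypothetical binary that pulls a chunk from a queue
```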

7. CI/CD — software development lifecycle

Kubernetes also brings considerable benefits to the Continuous Integration/Continuous Deployment (or Continuous Delivery) methodology. This is a logical continuation of the use cases presented in points 1 and 2. Once an app is deployed to production, how it behaves must be monitored constantly, in addition to gathering users’ feedback and developing new features. Whether it’s for testing, frequent releases or deploying newer versions of an app, Kubernetes makes everything simpler and more manageable.
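
For instance, a Deployment’s rolling-update strategy lets a pipeline ship a new image with no downtime and revert it with one command if it misbehaves. A sketch of the relevant excerpt, with illustrative values:

```yaml
# Excerpt from a Deployment spec controlling how new versions roll out
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never take a pod down before its replacement is ready
      maxSurge: 1         # allow at most one extra pod during the rollout
```

A pipeline step then boils down to kubectl set image on the Deployment, followed by kubectl rollout status to wait for success; kubectl rollout undo reverts a bad release.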
