Container Clustering with Kubernetes


1. What is orchestration?

Orchestration is the coordination and management of multiple computer systems,
applications, and/or services, stringing together multiple tasks to execute a
larger workflow or process. These workflows can consist of many automated tasks
and may involve several systems.

The goal of orchestration is to streamline and optimize the execution of frequent,
repeatable processes, and thus to help teams more easily manage complex tasks and
workflows. Anytime a process is repeatable and its tasks can be automated,
orchestration can be used to save time, increase efficiency, and eliminate
redundancies.

What exactly is container orchestration?

You may have come across the term “container orchestration” in the context of
application and service orchestration. So, what is container orchestration and why
should we use it?

Container orchestration is the automation of container management and
coordination. Software teams use container orchestration tools to control and
automate tasks such as provisioning and deploying containers, allocating
resources between containers, monitoring container health, and securing
interactions between containers.

How does container orchestration work?

Software teams typically use container orchestration tools like Kubernetes and
Docker Swarm. You start by describing your app's configuration in a file, which
tells the tool where to pull container images from and how to set up networking
between containers.

The tool also schedules deployment of containers into clusters and finds the most
appropriate host based on pre-set constraints such as labels or metadata. It then
manages the container’s lifecycle based on the specifications laid out in the file.

But why do we need container orchestration? And what is the purpose of
automation and orchestration? Well, automating container orchestration enables
you to scale applications with a single command, quickly create new containerized
applications to handle growing traffic, and simplify the installation process. It also
improves security.

 

2. Why Kubernetes is required

Containers are a good way to bundle and run your applications. In a production
environment, you need to manage the containers that run the applications and
ensure that there is no downtime. For example, if a container goes down, another
container needs to start. Wouldn’t it be easier if this behavior was handled by a
system?

That’s how Kubernetes comes to the rescue! Kubernetes provides you with a
framework to run distributed systems resiliently. It takes care of scaling and failover
for your application, provides deployment patterns, and more. For example,
Kubernetes can easily manage a canary deployment for your system.

Kubernetes provides you with:

Service discovery and load balancing: Kubernetes can expose a container using a
DNS name or its own IP address. If traffic to a container is high, Kubernetes can
load balance and distribute the network traffic so that the deployment is stable.

Storage orchestration: Kubernetes allows you to automatically mount a storage
system of your choice, such as local storage, public cloud providers, and more.

 

Automated rollouts and rollbacks: You can describe the desired state for your
deployed containers using Kubernetes, and it can change the actual state to the
desired state at a controlled rate. For example, you can automate Kubernetes to
create new containers for your deployment, remove existing containers, and adopt
all their resources into the new containers.


Automatic bin packing: You provide Kubernetes with a cluster of nodes that it can
use to run containerized tasks. You tell Kubernetes how much CPU and memory
(RAM) each container needs. Kubernetes can fit containers onto your nodes to make
the best use of your resources.

Self-healing: Kubernetes restarts containers that fail, replaces containers, kills
containers that don't respond to your user-defined health check, and doesn't
advertise them to clients until they are ready to serve.

Secret and configuration management: Kubernetes lets you store and manage
sensitive information, such as passwords, OAuth tokens, and SSH keys. You can
deploy and update secrets and application configuration without rebuilding your
container images, and without exposing secrets in your stack configuration.
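For example, a secret can be created from the command line and then consumed by
pods without touching the container image. A minimal sketch (the secret name and
value here are illustrative, not from the text above):

kubectl create secret generic db-credentials --from-literal=password='S3cr3tPass'
kubectl get secret db-credentials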

 

3. Understanding Swarm vs Kubernetes vs Mesos

– Docker Swarm

It is the native Docker clustering solution, so it exposes the standard Docker API.

While we can use familiar Docker tools of our choice, we are bound by the
limitations of the Docker API.

Swarm extends the existing Docker API to make a cluster of machines look like a
single Docker API.

 

– Kubernetes

It is Google's point of view on container orchestration.

We can mount persistent volumes that allow us to move containers without losing
data; it uses flannel to create networking between containers, has an integrated
load balancer, uses etcd for service discovery, and so on. However, Kubernetes
comes at a cost.

We cannot use the Docker CLI, nor can we use Docker Compose to define containers,
since Kubernetes uses a different CLI, a different API, and different YAML
definitions. It has a steep learning curve because everything needs to be done
from scratch exclusively for Kubernetes.

 

 

– Apache Mesos

It is a multi-framework orchestration solution for containers.

Mesos is less focused on running just containers, since it existed prior to the
widespread interest in containers and has been refactored in parts to support
them.

Mesos focuses on scheduling, and on plugging in multiple different schedulers; as
a result, Hadoop and Marathon can co-exist in the same scheduling environment.

4. Understanding Kubernetes Architecture

Kubernetes follows a client-server architecture. It’s possible to have a multi-master
setup (for high availability), but by default there is a single master server which acts
as a controlling node and point of contact. The master server consists of various
components including a kube-apiserver, an etcd storage, a kube-controller-manager,
a cloud-controller-manager, a kube-scheduler, and a DNS server for Kubernetes
services. Node components include kubelet and kube-proxy on top of Docker.

Master Components

Below are the main components found on the master node:

  • etcd cluster  – a simple, distributed key value storage which is used to store the
    Kubernetes cluster data (such as number of pods, their state, namespace, etc),
    API objects and service discovery details. It is only accessible from the API server
    for security reasons. etcd enables notifications to the cluster about
    configuration changes with the help of watchers. Notifications are API requests
    on each etcd cluster node to trigger the update of information in the node’s
    storage.
  • kube-apiserver  – Kubernetes API server is the central management entity that
    receives all REST requests for modifications (to pods, services, replication
    sets/controllers and others), serving as frontend to the cluster. Also, this is the
    only component that communicates with the etcd cluster, making sure data is
    stored in etcd and is in agreement with the service details of the deployed pods.
  • kube-controller-manager  – runs a number of distinct controller processes in the
    background (for example, replication controller controls number of replicas in a
    pod, endpoints controller populates endpoint objects like services and pods, and
    others) to regulate the shared state of the cluster and perform routine tasks.
    When a change in a service configuration occurs (for example, replacing the
    image from which the pods are running, or changing parameters in the
    configuration yaml file), the controller spots the change and starts working
    towards the new desired state.
  • cloud-controller-manager  – is responsible for managing controller processes
    with dependencies on the underlying cloud provider (if applicable). For example,
    when a controller needs to check if a node was terminated or set up routes, load
    balancers or volumes in the cloud infrastructure, all that is handled by the cloud-
    controller-manager.
  • kube-scheduler  – helps schedule the pods (a co-located group of containers
    inside which our application processes are running) on the various nodes based
    on resource utilization. It reads the service’s operational requirements and
    schedules it on the best fit node. For example, if the application needs 1GB of
    memory and 2 CPU cores, then the pods for that application will be scheduled
    on a node with at least those resources. The scheduler runs each time there is a
    need to schedule pods. The scheduler must know the total resources available
    as well as resources allocated to existing workloads on each node.

Node (worker) components

Below are the main components found on a (worker) node:

  • kubelet  – the main service on a node, regularly taking in new or modified pod
    specifications (primarily through the kube-apiserver) and ensuring that pods and
    their containers are healthy and running in the desired state. This component
    also reports to the master on the health of the host where it is running.
  • kube-proxy  – a proxy service that runs on each worker node to deal with
    individual host subnetting and expose services to the external world. It performs
    request forwarding to the correct pods/containers across the various isolated
    networks in a cluster.

Kubectl

kubectl is a command-line tool that interacts with the kube-apiserver and sends
commands to the master node. Each command is converted into an API call.
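For example, listing pods with kubectl corresponds to a GET request against the
API server. You can reproduce the same call yourself through kubectl proxy (these
are the standard Kubernetes API paths):

kubectl get pods --namespace default

# the equivalent raw API call, via a local authenticated proxy:
kubectl proxy --port=8001 &
curl http://localhost:8001/api/v1/namespaces/default/pods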

Kubernetes Concepts

Making use of Kubernetes requires understanding the different abstractions it uses
to represent the state of the system, such as services, pods, volumes, namespaces,
and deployments.

  • Pod  – generally refers to one or more containers that should be controlled as a
    single application. A pod encapsulates application containers, storage resources,
    a unique network ID and other configuration on how to run the containers.
  •  Service  – pods are volatile; Kubernetes does not guarantee that a given
     physical pod will be kept alive (for instance, the replication controller
     might kill and start a new set of pods). Instead, a service represents a
     logical set of pods and acts as a gateway, allowing (client) pods to send
     requests to the service without needing to keep track of which physical pods
     actually make up the service. (A minimal Pod/Service sketch follows this list.)
  • Volume  – similar to a container volume in Docker, but a Kubernetes volume
    applies to a whole pod and is mounted on all containers in the pod. Kubernetes
    guarantees data is preserved across container restarts. The volume will be
    removed only when the pod gets destroyed. Also, a pod can have multiple
    volumes (possibly of different types) associated.
  • Namespace  – a virtual cluster (a single physical cluster can run multiple virtual
    ones) intended for environments with many users spread across multiple teams
    or projects, for isolation of concerns. Resources inside a namespace must be
    unique and cannot access resources in a different namespace. Also, a
    namespace can be allocated a  resource quota  to avoid consuming more than its
    share of the physical cluster’s overall resources.
  • Deployment  – describes the desired state of a pod or a replica set, in a yaml file.
    The deployment controller then gradually updates the environment (for
    example, creating or deleting replicas) until the current state matches the
    desired state specified in the deployment file. For example, if the yaml file
    defines 2 replicas for a pod but only one is currently running, an extra one will
    get created. Note that replicas managed via a deployment should not be
    manipulated directly, only via new deployments.
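To make the Pod and Service abstractions concrete, here is a minimal sketch of a
Pod and a Service that fronts it (the names, labels, and image tag are
illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80

Clients inside the cluster can now reach the application via the stable DNS name
web-svc, regardless of which physical pod is serving.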

 

5. Implementing a Kubernetes Cluster on 4 Servers


Step 1 – Get each server ready to run Kubernetes

We will start by creating four Ubuntu 16.04 servers. To get this four-member
cluster up and running, you will need to select Ubuntu 16.04, 4GB RAM servers and
enable Private Networking.

Create 4 hosts and call them kube-01, kube-02, kube-03 and kube-04.

Set your hostnames for your servers as follows:
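A minimal sketch using hostnamectl (this assumes systemd-based Ubuntu; run each
command on the corresponding host):

hostnamectl set-hostname kube-01   # on the first server
hostnamectl set-hostname kube-02   # on the second server
hostnamectl set-hostname kube-03   # on the third server
hostnamectl set-hostname kube-04   # on the fourth server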


Kubernetes will need to assign specialized roles to each server. We will set up
one server, kube-01, to act as the master; the other three will join as workers
in Step 4.


Step 2 – Set up each server in the cluster to run Kubernetes

SSH to each of the servers you created and execute the following commands as
root. (You can become the root user by running sudo -i after SSH-ing into each
host.)

On each of the four Ubuntu 16.04 servers, run the following commands as root:

apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet=1.15.4-00 kubeadm=1.15.4-00 kubectl=1.15.4-00 docker.io
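Note: kubeadm expects swap to be disabled on each node. If your servers have swap
enabled, turn it off first (and remove the swap entry from /etc/fstab so the
change survives reboots):

swapoff -a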

Step 3 – Setup the Kubernetes Master

On the kube-01 node run the following command:

kubeadm init

This can take a minute or two to run; the result will look like this:

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

So, as instructed by the output, run the following commands on kube-01:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Step 4 – Join your nodes to your Kubernetes cluster

You can now join any number of machines by running the kubeadm join command on
each node as root. The exact command, with a fresh token and discovery hash, is
printed at the end of the kubeadm init output for you to copy and run.

An example of what this looks like is below:

kubeadm join --token 702ff6.bc7aacff7aacab17 174.138.15.158:6443 --discovery-token-ca-cert-hash sha256:68bc22d2c631800fd358a6d7e3998e598deb2980ee613b3c2f1da8978960c8ab

When you join your kube-02, kube-03 and kube-04 nodes, you will see the following
on each node:

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

To check that all nodes are now joined to the master run the following command on
the Kubernetes master kube-01:

kubectl get nodes

The successful result will look like this:

NAME      STATUS   ROLES    AGE   VERSION
kube-01   Ready    master   8m    v1.15.4
kube-02   Ready    <none>   6m    v1.15.4
kube-03   Ready    <none>   6m    v1.15.4
kube-04   Ready    <none>   6m    v1.15.4

 

6. Managing the Docker lifecycle using Kubernetes

A Docker container passes through different stages during its existence, known
collectively as the Docker container lifecycle. Some of the states are listed
below (the CLI sketch after the list shows the matching commands):

  • Created: A container that has been created but not started
  • Running: A container running with all its processes
  • Paused: A container whose processes have been paused
  • Stopped: A container whose processes have been stopped
  • Deleted: A container in a dead state
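These states map directly onto the Docker CLI (a sketch; the container name is
illustrative):

docker create --name web nginx   # Created
docker start web                 # Running
docker pause web                 # Paused
docker unpause web               # Running again
docker stop web                  # Stopped
docker rm web                    # Deleted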

7. Creating a deployment in Kubernetes

Deployments represent a set of multiple, identical  Pods  with no unique identities. A
Deployment runs multiple replicas of your application and automatically replaces
any instances that fail or become unresponsive. In this way, Deployments help
ensure that one or more instances of your application are available to serve user
requests. Deployments are managed by the Kubernetes Deployment controller.

Deployments use a Pod template, which contains a specification for its Pods. The
Pod specification determines what each Pod should look like: which applications
should run inside its containers, which volumes the Pods should mount, its
labels, and more.

When a Deployment's Pod template is changed, new Pods are automatically created
one at a time.

The following is an example of a Deployment. It creates a ReplicaSet to bring up three nginx Pods:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

In this example:

A Deployment named nginx-deployment is created, indicated by
the .metadata.name field.

The Deployment creates three replicated Pods, indicated by the .spec.replicas field.

The .spec.selector field defines how the Deployment finds which Pods to manage. In
this case, you select a label that is defined in the Pod template (app: nginx).
However, more sophisticated selection rules are possible, as long as the Pod
template itself satisfies the rule.

The template field contains the following sub-fields:

  • The Pods are labeled app: nginx using the .metadata.labels field.
  • The Pod template's specification, or .template.spec field, indicates that the
    Pods run one container, nginx, which runs the nginx Docker Hub image at
    version 1.14.2.
  • One container is created and named nginx using the
    .spec.template.spec.containers[0].name field.

Before you begin, make sure your Kubernetes cluster is up and running. Follow the
steps given below to create the above Deployment:

1. Create the Deployment by running the following command:

kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml

2. Run kubectl get deployments to check if the Deployment was created.
If the Deployment is still being created, the output is similar to the following:

NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   0/3     0            0           1s

When you inspect the Deployments in your cluster, the following fields are
displayed:

  • NAME lists the names of the Deployments in the namespace.
  • READY displays how many replicas of the application are available to your
    users. It follows the pattern ready/desired.
  • UP-TO-DATE displays the number of replicas that have been updated to
    achieve the desired state.
  • AVAILABLE displays how many replicas of the application are available to
    your users.
  • AGE displays the amount of time that the application has been running.

 

Notice how the number of desired replicas is 3, according to the .spec.replicas
field.

3. To see the Deployment rollout status, run kubectl rollout status
deployment/nginx-deployment.

4. Run kubectl get deployments again a few seconds later. The output is similar
to this:

NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           18s

Notice that the Deployment has created all three replicas, and all replicas are
up-to-date (they contain the latest Pod template) and available.

5. To see the ReplicaSet (rs) created by the Deployment, run kubectl get rs. The
output is similar to this:

NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-75675f5897   3         3         3       18s

6. To see the labels automatically generated for each Pod, run kubectl get pods
--show-labels. The output is similar to:

NAME                                READY   STATUS    RESTARTS   AGE   LABELS
nginx-deployment-75675f5897-7ci7o   1/1     Running   0          18s   app=nginx,pod-template-hash=3123191453
nginx-deployment-75675f5897-kzszj   1/1     Running   0          18s   app=nginx,pod-template-hash=3123191453
nginx-deployment-75675f5897-qqcnn   1/1     Running   0          18s   app=nginx,pod-template-hash=3123191453

The created ReplicaSet ensures that there are three nginx Pods.

8. Deleting Deployment

A Kubernetes Deployment runs multiple replicas of your application and
automatically replaces any instances that fail or become unresponsive.

When you are practicing Kubernetes, you'll often need to delete Kubernetes
deployments.

Deleting deployments is easy, thanks to the kubectl delete deployments command:

kubectl delete deployment deployment_name

Check whether the deployment was deleted using the command below:

kubectl get deployments

The output lists the deployments that remain; the one you deleted no longer
appears:

NAME     READY   UP-TO-DATE   AVAILABLE   AGE
my-dep   2/2     2            2           4m22s
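Alternatively, if you still have the manifest that created the Deployment, you
can delete by file; for the nginx example from the previous section:

kubectl delete -f https://k8s.io/examples/controllers/nginx-deployment.yaml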

 

9. Scaling of containers on Kubernetes

You can scale a Deployment by using the following command:

kubectl scale deployment/nginx-deployment --replicas=10

The output is similar to this:

deployment.apps/nginx-deployment scaled
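Assuming horizontal Pod autoscaling is enabled in your cluster (it needs a
metrics source such as metrics-server), you can also let Kubernetes choose the
replica count within bounds:

kubectl autoscale deployment/nginx-deployment --min=3 --max=10 --cpu-percent=80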

10. Exploring container logs in Kubernetes

To fetch the logs, use the kubectl logs command, as follows:

kubectl logs deployment/nginx-deployment

For a container that writes a timestamped counter to stdout, the output would
look like this:

0: Mon Jan 1 00:00:00 UTC 2001
1: Mon Jan 1 00:00:01 UTC 2001
2: Mon Jan 1 00:00:02 UTC 2001

You can use kubectl logs --previous to retrieve logs from a previous
instantiation of a container. If your pod has multiple containers, specify which
container's logs you want to access by appending the container name with the -c
flag, like so:

kubectl logs <pod-name> -c <container-name>
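A few other commonly useful variations of kubectl logs (standard flags; the pod
name is a placeholder):

kubectl logs -f <pod-name>           # stream (follow) the logs
kubectl logs --tail=20 <pod-name>    # only the last 20 lines
kubectl logs --since=1h <pod-name>   # only logs from the past hour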

Application Logs

First and foremost are the logs from the applications that run on Kubernetes. The
data stored in these logs consists of the information that your applications output as
they run. Typically, this data is written to stdout inside the container where the
application runs.

We looked at how to access this data with kubectl logs in the section above.

Kubernetes Cluster Logs

Several of the components that form Kubernetes itself generate their own logs:

  • kube-apiserver
  • kube-scheduler
  • etcd
  • kube-proxy
  • kubelet

These logs are usually stored in files under the /var/log directory of the server on
which the service runs. For most services, that server is the Kubernetes master node.
Kubelet, however, runs on worker nodes.

If you’re experiencing a cluster-level problem (as opposed to one that impacts just a
certain container or pod), these logs are a good place to look for insight. For
example, if your applications are having trouble accessing configuration data, you
could look at Etcd logs to see if the problem lies with Etcd. If a worker node is failing
to come online as expected, its Kubelet log could provide insights.
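On systemd-based nodes (an assumption, but true of most modern distributions),
the kubelet runs as a systemd unit, so its logs can also be read with journalctl:

journalctl -u kubelet
journalctl -u kubelet --since "1 hour ago"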

Kubernetes Events

Kubernetes keeps track of what it calls “events,” which can be normal changes to the
state of an object in a cluster (such as a container being created or starting) or errors
(such as the exhaustion of resources).

Events provide only limited context and visibility. They tell you that something
happened, but not much about why it happened. They are still a useful way of
getting quick information about the state of various objects within your cluster.
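Events can be listed cluster-wide or inspected per object; for example:

kubectl get events --sort-by=.metadata.creationTimestamp
kubectl describe pod <pod-name>    # events appear in the Events section at the bottom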

Kubernetes Audit Logs

Kubernetes can be configured to log requests to the Kube-apiserver. These include
requests made by humans (such as requesting a list of running pods) and Kubernetes
resources (such as a container requesting access to storage).

Audit logs record who or what issued the request, what the request was for, and the
result. If you need to troubleshoot a problem related to an API request, audit logs
provide a great deal of visibility. They are also useful for detecting unusual behavior
by looking for requests that are out of the ordinary, like repeated failed attempts by
a user to access different resources in the cluster, which could signal attempted
abuse by someone who is looking for improperly secured resources. (It could also
reflect a problem with your authentication configuration or certificates.)
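Audit logging is enabled by passing a policy file to the kube-apiserver. A
minimal sketch of a policy that records request metadata for everything (the file
path is illustrative; the flags are standard kube-apiserver options):

# /etc/kubernetes/audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata

# handed to the API server via flags such as:
#   --audit-policy-file=/etc/kubernetes/audit-policy.yaml
#   --audit-log-path=/var/log/kubernetes/audit.log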

 

11. Understanding Kubernetes Docker Placements

 

Simply put, the Docker suite and Kubernetes are technologies with different scopes.
You can use Docker without Kubernetes and vice versa, however they work well
together.

From the perspective of a software development cycle, Docker’s home turf is
development. This includes configuring, building, and distributing containers using
CI/CD pipelines and DockerHub as an image registry. On the other hand, Kubernetes
shines in operations, allowing you to use your existing Docker containers while
tackling the complexities of deployment, networking, scaling, and monitoring.

Although Docker Swarm is an alternative in this domain, Kubernetes is the best
choice when it comes to orchestrating large distributed applications with hundreds
of connected  microservices  including databases, secrets and external dependencies.

12. Implementing and Using GUI for Kubernetes

Dashboard is a web-based Kubernetes user interface. You can use Dashboard to
deploy containerized applications to a Kubernetes cluster, troubleshoot your
containerized application, and manage the cluster resources. You can use Dashboard
to get an overview of applications running on your cluster, as well as for creating or
modifying individual Kubernetes resources (such as Deployments, Jobs, DaemonSets,
etc). For example, you can scale a Deployment, initiate a rolling update, restart a pod
or deploy new applications using a deploy wizard.

Dashboard also provides information on the state of Kubernetes resources in your
cluster and on any errors that may have occurred.

 

Deploying the Dashboard UI

The Dashboard UI is not deployed by default. To deploy it, run the following
command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml

 

Accessing the Dashboard UI

To protect your cluster data, Dashboard deploys with a minimal RBAC configuration
by default. Currently, Dashboard only supports logging in with a Bearer Token.
For this demo, you can create a sample user and token as sketched below.
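A minimal sketch of such a sample user (this assumes kubectl v1.24+ for kubectl
create token; binding to cluster-admin is for demo purposes only and far too
broad for production):

kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kubernetes-dashboard:dashboard-admin
kubectl -n kubernetes-dashboard create token dashboard-admin   # prints the Bearer Token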

Command line proxy

You can enable access to the Dashboard using the kubectl command-line tool, by
running the following command:

kubectl proxy

Kubectl will make Dashboard available at
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/.

The UI can only be accessed from the machine where the command is executed.

Welcome view

When you access Dashboard on an empty cluster, you'll see the welcome page. This
page contains a link to the Dashboard documentation as well as a button to deploy
your first application. In addition, you can view which system applications are
running by default in the kube-system namespace of your cluster, for example the
Dashboard itself.

 

13. Introduction to Microservices and relevance of Kubernetes in context

Microservices is a modern software development technique that structures an
application as an assortment of loosely coupled services. Each service is
self-contained and should implement a single business capability. Microservice
architecture is intended to overcome the hurdles, failures, and breakdowns of
bigger applications and thereby increase modularity. It is considered apt for
enterprise software development.

How Can Kubernetes Prove Pertinent For An Effective Microservices Architecture?

  • For an effective microservices architecture, you would need an automated
    CI/CD procedure and artifact registries. Kubernetes can assist very well in
    running and managing this. Of course, there need to be certain computing
    resources and a standardized operating infrastructure managed by a cloud
    service provider.
  • With the help of other specialized software like Jenkins and Docker,
    Kubernetes can assist in managing disparate isolated settings, resources,
    storage distributions, etc. Docker has started supporting and shipping
    Kubernetes with its CE (community edition) and EE (enterprise edition)
    releases.
  • It can help in performing deployments and rollbacks with automatic scheduling,
    service detection, and load balancing.
  • Maintaining resilience and fault tolerance becomes easier and effective with Kubernetes.
  • The resiliency structure of Kubernetes can be combined with other tools like
    Docker for implementing containers.
  • With this, Kubernetes can be of great assistance in dealing with app
    configurations and executing centralized logging systems, metrics gathering and
    tracing, etc.
  • Kubernetes can assist in executing stateful services, scheduled jobs and batch
    jobs with ease and efficiency.
  • Based on the type of microservices, there can be certain specific
    requirements, such as an API management solution for API-based microservices.
  • Getting almost all activities done under one roof frees up time for users to
    try their hands at newer things like auto-replication, auto-scaling, etc.

 

14. Introduction to PaaS and relevance of Kubernetes in context

Kubernetes has made something old new again. You may not want to admit it, but
you’re probably into Kubernetes because it feels like PaaS.

By many measures (and  according  to  many folks ), Platform-as-a-Service, or PaaS,
died long ago. Unlike other types of cloud-based architectures — notably,  IaaS  and
SaaS — PaaS never really caught on. It’s true that most conventional PaaS platforms
have disappeared. Yet if you look at Kubernetes, the massively popular open-source
orchestrator, PaaS is alive and well. In many ways, Kubernetes is basically just PaaS
by another name (and with less vendor lock-in).

What is a PaaS?

Historically, PaaS was a type of cloud computing service that let developers write,
build, and deploy applications at scale on a cloud platform.

PaaS was a big deal in the early days of cloud computing – which is to say, the mid to
late-2000s. Back then, the idea that you could write an app and deploy it on
someone else’s server without having to manage the infrastructure or the
development or deployment environments was a big deal. So was having a unified,
preconfigured toolset for building and deploying apps.

Kubernetes as a PaaS

Many developers today, though, are probably deploying apps using Kubernetes –
which is arguably PaaS by a different name.

After all, the core features of Kubernetes include:

  • The ability to deploy any type of app in a consistent way.
  • Support for running on any infrastructure – on-prem, public clouds, or both.
  • A centralized control plane for managing applications, no matter where they are
    hosted.
  • Some automated management of applications and infrastructure in the form of
    load balancing, automatic container restarts, and so on.

In a lot of ways, these are also the core features of a PaaS. Simple deployment,
automated infrastructure management, and application orchestration are the
reasons why most developers got excited about PaaS platforms more than a decade
ago.

Perhaps the one key feature that Kubernetes lacks, but which is available in a
conventional PaaS, is integrated development tooling. Kubernetes does nothing to
help you actually write or test your code. You need to do that separately.

But in the sense that Kubernetes provides a unified, consistent, developer-friendly
means of deploying applications at scale, it looks a lot like a PaaS.
