Jan 06, 2026
Ariffud M.
13min Read
Kubernetes, often abbreviated as K8s, is a top choice for container orchestration due to its scalability, flexibility, and robustness. Whether you’re a developer or a system administrator, mastering Kubernetes can simplify how you deploy, scale, and manage containerized applications.
In this article, you’ll learn the basic concepts of Kubernetes, explore its key features, and examine its pros and cons. We’ll guide you through setting up a Kubernetes environment, deploying your first application, and troubleshooting common issues.
By the end of this tutorial, you’ll be able to fully leverage Kubernetes for efficient container management.
Kubernetes is a powerful open-source platform for container orchestration. It provides an efficient framework to deploy, scale, and manage applications, ensuring they run seamlessly across a cluster of machines.
Kubernetes’ architecture offers a consistent interface for both developers and administrators. It allows teams to focus on application development without the distraction of underlying infrastructure complexities.
Kubernetes ensures that containerized applications run reliably, managing deployment and scaling while abstracting away the underlying hardware and network configurations.
Kubernetes operates through a control plane and core components, each with specialized roles in managing containerized applications across clusters.
Nodes
Nodes are the individual machines that form the backbone of a Kubernetes cluster. They can be either master (control plane) nodes, which manage the cluster, or worker nodes, which run your containers. Nodes can be physical machines or virtual private servers.
Pods
Pods are the smallest deployable units on this platform, serving as the basic building blocks of Kubernetes applications. A pod can contain one or more containers, which Kubernetes schedules together on the same node so they can share networking and storage.
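As a minimal sketch – the names, labels, and image below are illustrative placeholders, not values from this tutorial – a single-container pod manifest looks like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod        # hypothetical pod name
  labels:
    app: nginx           # label other objects can select on
spec:
  containers:
    - name: nginx
      image: nginx:1.27  # any container image works here
      ports:
        - containerPort: 80
```

You would save this as a file and create the pod with kubectl apply -f pod.yaml.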
Services
Services expose a group of pods behind a stable network address and handle load balancing across them. They provide a consistent way to access containerized applications while abstracting the complexities of network connectivity.
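For illustration – the service name, label selector, and ports are assumptions – a Service that load-balances traffic to pods labeled app: nginx might look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service    # hypothetical service name
spec:
  type: ClusterIP        # use NodePort or LoadBalancer for external access
  selector:
    app: nginx           # routes traffic to pods carrying this label
  ports:
    - port: 80           # port the service listens on
      targetPort: 80     # port the container serves on
```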
API server
The API server is the front end for the Kubernetes control plane, processing both internal and external requests to manage various aspects of the cluster.
ReplicaSets
These maintain a specified number of identical pods to ensure high availability and reliability. If a pod fails, the ReplicaSet automatically replaces it.
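A minimal ReplicaSet sketch – the name, labels, and image are illustrative – keeping three identical pods running:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
spec:
  replicas: 3              # Kubernetes keeps exactly three pods running
  selector:
    matchLabels:
      app: nginx
  template:                # pod template used to create replacements
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.27
```

In practice, you rarely create ReplicaSets directly; a Deployment creates and manages them for you.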
Ingress controllers
Ingress controllers act as gatekeepers for incoming traffic to your Kubernetes cluster, managing access to services within the cluster and facilitating external access.
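As an illustrative sketch – the hostname and backend service name are placeholders – an Ingress resource routing external traffic to a service might look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: app.example.com          # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service  # placeholder service receiving the traffic
                port:
                  number: 80
```

Note that this object only takes effect once an ingress controller, such as ingress-nginx, is installed in the cluster.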
These components collectively enable Kubernetes to manage the complexities of containerized workloads within distributed systems efficiently.
Kubernetes offers a robust set of features designed to meet the needs of modern containerized applications:
Scaling
Kubernetes dynamically adjusts the number of running containers based on demand, ensuring optimal resource utilization. This adaptability helps reduce costs while maintaining a smooth user experience.
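One common way to automate this is a HorizontalPodAutoscaler. As a sketch – the deployment name, replica bounds, and CPU threshold are assumptions – it could look like:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web               # placeholder name of the deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU tops 70%
```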
Load balancing
Load balancing is integral to Kubernetes. It effectively distributes incoming traffic across multiple pods, ensuring high availability and optimal performance and preventing any single pod from becoming overloaded.
Self-healing
Kubernetes’ self-healing capabilities minimize downtime. If a container or pod fails, it is automatically replaced, keeping your application running smoothly and ensuring consistent service delivery.
Service discovery and metadata
Service discovery is streamlined in Kubernetes: services locate pods through labels and selectors, and cluster DNS gives every service a stable name, simplifying communication between application components in a distributed system.
Rolling updates and rollbacks
Kubernetes supports rolling updates to maintain continuous service availability. If an update causes issues, reverting to a previous stable version is quick and effortless.
Resource management
Kubernetes allows for precise resource management by letting you define resource limits and requests for pods, ensuring efficient use of CPU and memory.
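Requests and limits are declared per container in the pod spec. As a sketch – the values here are illustrative, not recommendations:

```yaml
# excerpt from a container definition in a pod spec; values are illustrative
resources:
  requests:
    cpu: 250m        # scheduler guarantees a quarter of a CPU core
    memory: 128Mi    # guaranteed memory
  limits:
    cpu: 500m        # hard cap on CPU usage
    memory: 256Mi    # the container is killed if it exceeds this
```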
ConfigMaps, Secrets, and environment variables
Kubernetes uses ConfigMaps and Secrets for secure configuration management. These tools help store and manage sensitive information such as API keys and passwords securely, protecting them from unauthorized access.
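For illustration – the object names, keys, and values are placeholders – a ConfigMap and a Secret can be defined like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"        # non-sensitive settings go here
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
stringData:
  API_KEY: "replace-me"    # placeholder; Kubernetes stores it base64-encoded
```

Both can then be mounted into pods as files or injected as environment variables.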
Weighing Kubernetes’ strengths and weaknesses is crucial to deciding whether it is the right platform for your container management needs.
Kubernetes offers numerous benefits, making it a preferred option for managing containerized applications. Here’s how it stands out:
Scalability
Kubernetes scales applications horizontally by adding or removing pod replicas as demand changes, letting workloads grow without manual intervention.
High availability
Built-in replication, self-healing, and load balancing keep applications running even when individual pods or nodes fail.
Flexibility and extensibility
Kubernetes runs on-premises and on every major cloud, supports multiple container runtimes, and can be extended through custom resources and a rich plugin ecosystem.
While Kubernetes is a robust platform, it has certain drawbacks you should consider:
Complexity
The steep learning curve of Kubernetes can be a hurdle, particularly for new users. Expertise in managing Kubernetes clusters is essential to unlocking its capabilities.
Resource intensiveness
Kubernetes demands significant server resources, including CPU, memory, and storage. For smaller applications or organizations with limited resources, this can lead to more overhead than benefits.
Lack of native storage solutions
Kubernetes doesn’t include built-in storage solutions, posing challenges for applications requiring persistent or sensitive data storage. As a result, you need to integrate external storage solutions, such as network-attached storage (NAS), storage area networks (SAN), or cloud services.
Setting up Kubernetes is crucial for managing containers effectively. Your hosting environment plays a pivotal role. Hostinger’s VPS plans offer the resources and stability needed for a Kubernetes cluster.

Please note that the following steps apply to all nodes you’ll use to deploy your applications.
Choosing the right environment
We’ll guide you through setting up a Kubernetes environment on Hostinger using an Ubuntu 24.04 64-bit operating system. Follow these steps:
1. Update the package index and upgrade the installed packages:
sudo apt-get update && sudo apt-get upgrade
2. Install Docker and make sure it starts on boot:
sudo apt install docker.io
sudo systemctl enable docker
sudo systemctl start docker
3. Disable swap, which Kubernetes requires, and comment out the swap entry in /etc/fstab so it stays off after a reboot:
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
4. Load the kernel modules Kubernetes networking depends on:
sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
5. Configure the required sysctl settings and apply them:
sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
After setting up the environment, proceed to install containerd, a container runtime that manages the lifecycle of containers and their dependencies on your nodes. Here’s how to do it:
1. Install the prerequisite packages:
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
2. Add Docker’s GPG key and repository, which also provides the containerd packages:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/docker.gpg
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
3. Update the package index and install containerd:
sudo apt update
sudo apt install containerd.io
4. Generate a default containerd configuration and switch it to the systemd cgroup driver:
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
5. Restart containerd and enable it at boot:
sudo systemctl restart containerd
sudo systemctl enable containerd
After preparing the environment, you can begin installing the essential Kubernetes components on your host. Follow these detailed steps:
1. Add the Kubernetes signing key:
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
2. Add the Kubernetes apt repository:
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
3. Install kubelet, kubeadm, and kubectl:
sudo apt-get update
sudo apt-get install kubelet kubeadm kubectl
4. Pin the installed versions to prevent unintended upgrades:
sudo apt-mark hold kubelet kubeadm kubectl
5. Enable and start the kubelet service:
sudo systemctl enable --now kubelet
Now that you’ve successfully installed Kubernetes on all your nodes, the following section will guide you through deploying applications using this powerful orchestration tool.
With all necessary components now installed and configured, let’s deploy your first application on your nodes. Be mindful of which nodes each step is implemented on.
Begin by creating a Kubernetes cluster, which involves setting up your master node as the control plane. This allows it to manage worker nodes and orchestrate container deployments across the system.
sudo kubeadm init
[init] Using Kubernetes version: v1.30.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
...
Your Kubernetes control-plane has initialized successfully!
Take note of the kubeadm join command printed at the end of the output – you’ll need it to connect worker nodes later:
kubeadm join 22.222.222.84:6443 --token i6krb8.8rfdmq9haf6yrxwg \
    --discovery-token-ca-cert-hash sha256:bb9160d7d05a51b82338fd3ff788fea86440c4f5f04da6c9571f1e5a7c1848e3
If the preflight checks block the initialization, you can rerun it with:
sudo kubeadm init --ignore-preflight-errors=all
To manage the cluster as a regular user, copy the admin kubeconfig into your home directory:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes
NAME     STATUS     ROLES           AGE    VERSION
master   NotReady   control-plane   176m   v1.30.0
Switch to the server you wish to add as a worker node. Ensure you have completed all preparatory steps from the How to Set Up Kubernetes section to confirm the node is ready for integration into the cluster.
Follow these steps:
kubeadm join 22.222.222.84:6443 --token i6krb8.8rfdmq9haf6yrxwg --discovery-token-ca-cert-hash sha256:bb9160d7d05a51b82338fd3ff788fea86440c4f5f04da6c9571f1e5a7c1848e3
kubectl get nodes
NAME           STATUS   ROLES           AGE    VERSION
master         Ready    control-plane   176m   v1.30.0
worker-node1   Ready    worker          5m     v1.30.0
A pod network allows different nodes within a cluster to communicate with each other. Several pod network plugins are available, such as Flannel and Calico – install one of them.
To install Flannel:
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
Alternatively, to install Calico:
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml
If you also want to schedule pods on the control plane node – for example, in a single-node cluster – remove its taint:
kubectl taint nodes --all node-role.kubernetes.io/control-plane:NoSchedule-
Now it’s time to deploy your application, which is packaged as a Docker image, using Kubernetes. Follow these steps:
1. Check that the cluster’s system pods are running:
kubectl get pods -n kube-system
NAME                             READY   STATUS              RESTARTS   AGE
coredns-7db6d8ff4d-4cwxd         0/1     ContainerCreating   0          3h32m
coredns-7db6d8ff4d-77l6f         0/1     ContainerCreating   0          3h32m
etcd-master                      1/1     Running             0          3h32m
kube-apiserver-master            1/1     Running             0          3h32m
kube-controller-manager-master   1/1     Running             0          3h32m
kube-proxy-lfhsh                 1/1     Running             0          3h32m
kube-scheduler-master            1/1     Running             0          3h32m
2. Deploy your application. Note that Kubernetes object names must be valid DNS labels, so use hyphens rather than underscores:
kubectl run your-kubernetes-app --image=your-docker-image
3. Confirm that the pod is running:
kubectl get pods
NAME                  READY   STATUS    RESTARTS   AGE
your-kubernetes-app   1/1     Running   0          6m
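kubectl run is handy for quick tests, but for real workloads you would usually describe the application as a Deployment manifest instead, so Kubernetes manages replicas and updates for you. As a sketch – the names and image below are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-kubernetes-app
spec:
  replicas: 2                        # run two identical pods
  selector:
    matchLabels:
      app: your-kubernetes-app
  template:
    metadata:
      labels:
        app: your-kubernetes-app
    spec:
      containers:
        - name: app
          image: your-docker-image   # placeholder image name
```

Apply it with kubectl apply -f deployment.yaml.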
Congratulations on deploying your application on the cluster using Kubernetes. You are now one step closer to scaling and managing your project across multiple environments.
Adhering to best practices developed within the Kubernetes community is crucial for fully leveraging its capabilities.
Optimize resource management
Efficient resource management is crucial to enhancing your applications’ performance and stability. By defining resource limits and requests for different objects, like pods, you establish a stable environment for managing containerized applications effectively.
Resource limits cap the CPU and memory usage to prevent any single application from hogging resources, while resource requests guarantee that your containers have the minimum resources they need.
Finding the right balance between these limits and requests is essential for achieving optimal performance without wasting resources.

Ensure health checks and self-healing
One of Kubernetes’ core principles is maintaining the desired state of applications through automated health checks and self-healing mechanisms.
Readiness probes manage incoming traffic, ensuring a container is fully prepared to handle requests. They also prevent traffic from going to containers that are not ready, enhancing user experience and system efficiency.
Meanwhile, liveness probes monitor a container’s ongoing health. If a liveness probe fails, Kubernetes automatically replaces the problematic container, thus maintaining the application’s desired state without needing manual intervention.
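Both probe types are declared per container in the pod spec. As a sketch – the endpoints, port, and timings are assumptions:

```yaml
# excerpt from a container definition; paths and timings are illustrative
livenessProbe:
  httpGet:
    path: /healthz       # placeholder health endpoint
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 15      # container is restarted if this check keeps failing
readinessProbe:
  httpGet:
    path: /ready         # placeholder readiness endpoint
    port: 8080
  periodSeconds: 5       # traffic is only routed once this check passes
```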
Secure configurations and Secrets
ConfigMaps store non-sensitive configuration data, while Secrets hold sensitive information such as API keys and passwords. Keep in mind that Secrets are only base64-encoded by default, so restrict access to them with role-based access control and consider enabling encryption at rest.
We also recommend you apply these security best practices:
Enable role-based access control (RBAC) and grant each user or service account only the permissions it needs.
Restrict pod-to-pod traffic with network policies.
Keep Kubernetes and its components up to date to receive the latest security patches.
Furthermore, Hostinger offers enhanced security features to protect your VPS. These include a cloud-based firewall solution that helps safeguard your virtual server from potential internet threats.
Additionally, our robust malware scanner provides proactive monitoring and security for your VPS by detecting, managing, and cleaning compromised and malicious files.
You can activate both features via the Security menu on hPanel’s VPS dashboard.
Execute rolling updates and rollbacks
Kubernetes excels with its rolling update strategy, which allows for old containers to be gradually phased out and replaced by new versions.
This approach ensures seamless transitions and zero-downtime deployments, maintaining uninterrupted service and providing a superior user experience even during significant application updates.
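The rollout behavior is configured in the Deployment spec. As an illustrative excerpt – the surge and unavailability values are assumptions, not universal recommendations:

```yaml
# excerpt from a Deployment spec; values are illustrative
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1           # at most one extra pod during the rollout
    maxUnavailable: 0     # never drop below the desired replica count
```

If a rollout misbehaves, kubectl rollout undo deployment/&lt;name&gt; reverts to the previous revision.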
Use Minikube for local development and testing
If you prefer to develop your applications locally before deploying them to the server, consider using Minikube. It’s a tool for setting up Kubernetes single-node clusters on local machines, and it’s perfect for building, testing, and learning.
Follow these steps to get Minikube started on a Debian-based machine:
1. Download and install the Minikube package:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb
sudo dpkg -i minikube_latest_amd64.deb
2. Start your local cluster:
minikube start
3. Create a sample deployment and expose it on port 8080:
kubectl create deployment hello-minikube --image=kicbase/echo-server:1.0
kubectl expose deployment hello-minikube --type=NodePort --port=8080
4. Check the service and open it in your browser:
kubectl get services hello-minikube
minikube service hello-minikube
Now you can access your application through the URL that minikube service prints in the terminal; on most setups, it also opens the page in your browser automatically.
Here, we’ll cover some common Kubernetes issues and how to address them effectively:
Pod failures
Pod failures occur when these components do not function as expected, disrupting the availability and performance of your applications. Common reasons for pod failures include:
Image pull errors, such as a wrong image name or an unreachable registry.
Application crashes that put the pod in a CrashLoopBackOff state.
Insufficient CPU or memory on the nodes.
Failing liveness or readiness probes.
Here are steps to troubleshoot pod failures:
Run kubectl describe pod to inspect the pod’s events and status.
Check the container logs with kubectl logs.
Review resource requests and limits, and confirm the nodes have enough capacity.
Verify the image name and registry credentials in the pod specification.
Networking problems
Networking issues in a Kubernetes cluster can disrupt communication between pods and services, impacting your applications’ functionality. Here are some common network-related problems:
Pods that cannot reach each other across nodes, often due to a misconfigured pod network plugin.
Services that fail to route traffic because their selectors don’t match any pod labels.
DNS resolution failures inside the cluster.
Here’s how to address networking issues:
Confirm the pod network plugin, such as Flannel or Calico, is installed and its pods are running.
Check that each service’s selector matches the intended pod labels, and inspect its endpoints with kubectl get endpoints.
Test DNS from inside a pod and verify the CoreDNS pods in the kube-system namespace are healthy.
Persistent storage challenges
In Kubernetes, managing persistent storage is crucial for running stateful applications. However, improper management can lead to data loss, application disruptions, and degraded performance. Here are some common issues in this area:
Persistent volume claims stuck in a Pending state because no matching persistent volume is available.
Data loss when pods are rescheduled without persistent volumes attached.
Performance bottlenecks caused by slow or overloaded storage backends.
Here’s the guide to solving persistent storage challenges:
Use kubectl describe pvc to see why a claim is not binding.
Define storage classes so volumes are provisioned dynamically on demand.
Choose a storage backend that matches your performance and redundancy needs, such as NAS, SAN, or a cloud provider’s block storage.
Cluster scaling and performance
Scalability is a crucial feature of Kubernetes, enabling applications to adapt to varying workloads. However, as your applications grow, you might encounter scaling challenges and performance bottlenecks, such as:
Slow pod scheduling when the cluster runs out of node capacity.
Uneven resource utilization across nodes.
Control plane components becoming a bottleneck in very large clusters.
Here are key strategies to optimize performance and scaling:
Use the Horizontal Pod Autoscaler to adjust replica counts based on metrics such as CPU usage.
Set accurate resource requests and limits so the scheduler can place pods efficiently.
Monitor cluster metrics and add nodes before capacity runs out.
Kubernetes is a powerful tool that simplifies the management and deployment of applications, ensuring they run smoothly on servers across local and cloud environments.
In this Kubernetes tutorial for beginners, you’ve learned about its core components and key features, set up a clustered environment, and deployed your first application using this platform.
By adhering to best practices and proactively addressing challenges, you can fully leverage this open-source system’s capabilities to meet your container management needs.
This section answers some of the most common questions about Kubernetes.
Kubernetes is primarily used for container orchestration. It automates the deployment, scaling, and management of containerized applications, ensuring efficient resource utilization, enhanced scalability, and simplified lifecycle management in cloud-native environments.
No, they serve different purposes. Docker is a platform for containerization that packages applications into containers. Kubernetes, on the other hand, is a system that manages these containers across a cluster, supporting Docker and other container runtimes.
Begin by understanding the basics with the official Kubernetes tutorial and documentation. Then, set up a Kubernetes cluster, either locally or with a cloud provider. Apply what you’ve learned by deploying and managing applications. Join online forums and communities for support and further guidance.
While Kubernetes offers robust features ideal for managing complex, large-scale applications, it can be overkill for smaller projects due to its complexity in setup and ongoing maintenance. Smaller projects might consider simpler alternatives unless they require the specific capabilities of Kubernetes.