Kubernetes Mastery - Part 1: Introduction to Kubernetes

Begin your Kubernetes journey by understanding what Kubernetes is, why it exists, its evolution from traditional infrastructure, and the problems it solves in modern cloud-native applications.

October 11, 2025
45 minutes
By Prashant Chaturvedi
Kubernetes, Container Orchestration, DevOps, Cloud Native, Microservices


Welcome to this comprehensive Kubernetes tutorial series designed for Ubuntu-based systems. Whether you’re a developer, DevOps engineer, or system administrator, this series will take you from zero to proficient in Kubernetes, covering theory, practical applications, and essential commands.

What is Kubernetes?

Kubernetes (often abbreviated as K8s, where “8” represents the eight letters between “K” and “s”) is an open-source container orchestration platform originally developed by Google and now maintained by the Cloud Native Computing Foundation (CNCF).

Simple Definition

Kubernetes is a system for automating deployment, scaling, and management of containerized applications. Think of it as an operating system for your distributed applications.

Key Characteristics

  • Container Orchestration: Manages containers across multiple hosts
  • Declarative Configuration: You describe the desired state and Kubernetes maintains it (see the sketch after this list)
  • Self-Healing: Automatically restarts failed containers and reschedules pods away from unhealthy nodes
  • Horizontal Scaling: Scale applications up or down based on demand
  • Service Discovery & Load Balancing: Built-in networking and load distribution
  • Automated Rollouts & Rollbacks: Deploy new versions gradually; a failing rollout halts before healthy pods are replaced, and one command rolls it back
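
The declarative model underpins everything else in this list: you hand the cluster a manifest describing what you want, and its controllers continuously reconcile reality toward it. A minimal sketch of that loop, assuming a deployment.yaml you have written:

# Declare desired state in a file, then hand it to the cluster
kubectl apply -f deployment.yaml

# Compare what you declared with what is actually running
kubectl get deployments
kubectl diff -f deployment.yaml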

The Evolution: Why Kubernetes?

Traditional Deployment Era (Pre-2000s)

Physical Server
├── Operating System
├── Application A
├── Application B
└── Application C

Problems:
- Resource allocation issues
- One app could consume all resources
- Expensive and difficult to scale
- Slow deployment cycles

Limitations:

  • No resource boundaries between applications
  • Under-utilization of physical resources
  • High maintenance costs
  • Slow scaling (buy new hardware)

Virtualized Deployment Era (2000s-2010s)

Physical Server
├── Hypervisor (VMware, VirtualBox, KVM)
├── VM 1
│   ├── Guest OS
│   └── Application A
├── VM 2
│   ├── Guest OS
│   └── Application B
└── VM 3
    ├── Guest OS
    └── Application C

Improvements:
- Better resource utilization
- Isolation between applications
- Faster deployment than physical
- Cost savings

Advantages:

  • Applications isolated in VMs
  • Better security
  • Resource limits per VM
  • Multiple VMs on one physical server

Still Had Issues:

  • Heavy resource overhead (each VM needs full OS)
  • Slower boot times (minutes)
  • Complex management at scale

Container Deployment Era (2010s-Present)

Physical/Virtual Server
├── Operating System
├── Container Runtime (Docker, containerd)
├── Container A (App + Dependencies)
├── Container B (App + Dependencies)
└── Container C (App + Dependencies)

Benefits:
- Lightweight (share OS kernel)
- Fast startup (seconds)
- Consistent across environments
- Portable

Containers Solved:

  • Resource efficiency (no full OS per app)
  • Fast deployment and scaling
  • Consistency from dev to production
  • Microservices architecture enabler

The Orchestration Challenge

With containers came new challenges:

# Managing containers manually becomes complex:

# 1. On Server 1
docker run -d myapp:v1

# 2. On Server 2
docker run -d myapp:v1

# 3. On Server 3
docker run -d myapp:v1

# Questions arise:
# - What if Server 2 goes down?
# - How do containers communicate across servers?
# - How to scale from 3 to 300 instances?
# - How to update all containers to v2?
# - How to load balance traffic?
# - How to handle configuration?
# - How to manage secrets?

This is where Kubernetes comes in.
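
To make the contrast concrete, here is roughly how the same workload looks once a cluster exists (a sketch using the same hypothetical myapp image):

# One command replaces the per-server docker runs; Kubernetes picks the hosts
kubectl create deployment myapp --image=myapp:v1 --replicas=3

# Scaling from 3 to 300 is a one-liner, not 297 SSH sessions
kubectl scale deployment myapp --replicas=300

# Updating every instance to v2 is a single rolling update
kubectl set image deployment/myapp myapp=myapp:v2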

Problems Kubernetes Solves

1. High Availability & Fault Tolerance

Problem: Applications crash, servers fail, networks partition.

Kubernetes Solution:

# Kubernetes automatically maintains desired state
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3  # Always maintain 3 running instances
  selector:
    matchLabels: {app: web-app}
  template:
    metadata:
      labels: {app: web-app}
    spec:
      containers:
      - name: web-app
        image: myapp:v1

# If one pod crashes, Kubernetes automatically creates a replacement

Example Scenario:

Time 0: [Pod 1] [Pod 2] [Pod 3]  ← 3 pods running
Time 1: [Pod 1] [Pod 2] [CRASH]  ← Pod 3 crashes
Time 2: [Pod 1] [Pod 2] [Pod 4]  ← Kubernetes auto-creates Pod 4
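
You can watch the self-healing happen. With the Deployment above running, delete a pod by hand and the controller immediately creates a replacement to restore 3 replicas (the pod name below is a placeholder):

# Terminal 1: watch pod events as they happen
kubectl get pods -l app=web-app -w

# Terminal 2: simulate a crash
kubectl delete pod <any-web-app-pod-name>

# Terminal 1 shows the old pod Terminating and a new one ContainerCreating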

2. Automated Scaling

Problem: Traffic varies (peak hours vs off-hours), manual scaling is slow.

Kubernetes Solution:

# Horizontal Pod Autoscaler
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70

# Automatically scales from 2 to 10 pods based on CPU usage

Visual Example:

Normal Traffic:  [Pod 1] [Pod 2]              (2 pods, CPU: 30%)
Peak Traffic:    [Pod 1] [Pod 2] [Pod 3] [Pod 4] [Pod 5]  (5 pods, CPU: 65%)
After Peak:      [Pod 1] [Pod 2]              (scaled back down)
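
For quick experiments, an imperative shorthand creates an equivalent autoscaler without writing YAML. Note that either form needs the metrics-server add-on installed, or the HPA has no CPU numbers to act on:

# Same policy as the manifest above: 2-10 pods, 70% CPU target
kubectl autoscale deployment web-app --min=2 --max=10 --cpu-percent=70

# Compare current utilization against the target
kubectl get hpa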

3. Service Discovery & Load Balancing

Problem: Container IPs change, need automatic discovery and traffic distribution.

Kubernetes Solution:

# Service provides stable endpoint
apiVersion: v1
kind: Service
metadata:
  name: web-app-service
spec:
  selector:
    app: web-app
  ports:
  - port: 80
    targetPort: 8080

# Traffic automatically distributed across all matching pods
# Service DNS name remains constant: web-app-service.default.svc.cluster.local
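
Both properties are easy to verify from inside the cluster with a throwaway pod (a sketch; assumes the web-app pods serve HTTP on port 8080 as declared above):

# Launch a temporary pod and call the Service by its stable DNS name
kubectl run tmp-debug --rm -it --image=busybox:1.36 --restart=Never -- \
  wget -qO- http://web-app-service

# Repeat the request: responses come from different pods matching app: web-app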

4. Automated Rollouts & Rollbacks

Problem: Deployments are risky, rollbacks are manual and error-prone.

Kubernetes Solution:

# Deploy new version
kubectl set image deployment/web-app web-app=myapp:v2

# Kubernetes performs a rolling update:
# - Creates new pods with v2
# - Waits for health checks to pass
# - Terminates old v1 pods
# - If new pods fail their health checks, the rollout halts
#   (old pods keep serving; nothing is rolled back automatically)

# Roll back manually if needed
kubectl rollout undo deployment/web-app

Rolling Update Process:

Initial:  [v1] [v1] [v1]
Step 1:   [v1] [v1] [v1] [v2]      ← Create v2 pod
Step 2:   [v1] [v1] [v2]           ← Terminate one v1 pod
Step 3:   [v1] [v1] [v2] [v2]      ← Create another v2 pod
Step 4:   [v1] [v2] [v2]           ← Terminate another v1 pod
Step 5:   [v1] [v2] [v2] [v2]      ← Create last v2 pod
Step 6:   [v2] [v2] [v2]           ← Terminate last v1 pod
Final:    [v2] [v2] [v2]           ← All updated, zero downtime
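
A few companion subcommands make rollouts observable and reversible:

# Follow a rolling update as it progresses
kubectl rollout status deployment/web-app

# List previous revisions, then return to a specific one
kubectl rollout history deployment/web-app
kubectl rollout undo deployment/web-app --to-revision=1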

5. Storage Orchestration

Problem: Containers are ephemeral, need persistent storage for databases.

Kubernetes Solution:

# Persistent Volume Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

# Storage is provisioned dynamically through a StorageClass (cloud disk, NFS, local volume, etc.)
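
A claim only does something once a pod mounts it. A minimal sketch of a Postgres pod consuming the claim above (image tag, password, and mount path are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: postgres
spec:
  containers:
  - name: postgres
    image: postgres:16
    env:
    - name: POSTGRES_PASSWORD
      value: example            # demo only; use a Secret in practice
    volumeMounts:
    - name: data
      mountPath: /var/lib/postgresql/data  # data survives container restarts
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: postgres-pvc   # binds to the PVC defined above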

6. Secret & Configuration Management

Problem: Hard-coded configs, exposed credentials, different configs per environment.

Kubernetes Solution:

# ConfigMap for configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  database_url: "postgres://db.example.com:5432"
  log_level: "info"

---
# Secret for sensitive data
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  username: YWRtaW4=      # base64-encoded "admin" (encoding, not encryption)
  password: cGFzc3dvcmQ=  # base64-encoded "password"

# Injected into containers as environment variables or files
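
From the pod side, the injection can look like this: one key pulled from each object into an environment variable (a sketch reusing the names defined above):

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: myapp:v1
    env:
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: app-config      # the ConfigMap above
          key: log_level
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials  # the Secret above
          key: password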

7. Resource Management

Problem: Applications compete for resources, “noisy neighbor” problem.

Kubernetes Solution:

# Resource requests and limits
apiVersion: v1
kind: Pod
metadata:
  name: web-app
spec:
  containers:
  - name: app
    image: myapp:v1
    resources:
      requests:  # Guaranteed minimum
        memory: "256Mi"
        cpu: "250m"
      limits:    # Maximum allowed
        memory: "512Mi"
        cpu: "500m"

Kubernetes vs. Docker: Clearing the Confusion

Many beginners confuse Kubernetes and Docker. Let’s clarify:

Docker

What it is: Container runtime and platform

What it does:

  • Builds container images
  • Runs individual containers
  • Manages single-host networking

Analogy: Docker is like a single ship

# Docker manages individual containers
docker run -d nginx
docker stop container_id
docker rm container_id

Kubernetes

What it is: Container orchestration platform

What it does:

  • Manages multiple containers across multiple hosts
  • Handles scaling, networking, storage
  • Provides high availability and self-healing

Analogy: Kubernetes is like a fleet management system

# Kubernetes manages container workloads at scale
kubectl create deployment nginx --image=nginx --replicas=3
kubectl scale deployment nginx --replicas=10
kubectl delete deployment nginx

They Work Together

┌─────────────────────────────────────┐
│         Kubernetes Cluster          │
│  ┌───────────────────────────────┐  │
│  │         Control Plane         │  │
│  │    (Management & Scheduling)  │  │
│  └───────────────────────────────┘  │
│                                     │
│  ┌─────────────┐  ┌─────────────┐   │
│  │   Node 1    │  │   Node 2    │   │
│  │ ┌─────────┐ │  │ ┌─────────┐ │   │
│  │ │ Docker  │ │  │ │ Docker  │ │   │
│  │ │ Engine  │ │  │ │ Engine  │ │   │
│  │ └─────────┘ │  │ └─────────┘ │   │
│  │   [Pod 1]   │  │   [Pod 3]   │   │
│  │   [Pod 2]   │  │   [Pod 4]   │   │
│  └─────────────┘  └─────────────┘   │
└─────────────────────────────────────┘

Kubernetes orchestrates; Docker (or another container runtime) runs the containers.

Real-World Use Cases

1. Microservices Architecture

Scenario: E-commerce platform with multiple services

Traditional Monolith:
┌────────────────────────┐
│   Single Application   │
│  ┌──────────────────┐  │
│  │ User Management  │  │
│  │ Product Catalog  │  │
│  │ Shopping Cart    │  │
│  │ Payment          │  │
│  │ Notifications    │  │
│  └──────────────────┘  │
└────────────────────────┘
Problem: Update one part, deploy everything

Kubernetes Microservices:
┌─────────────┐  ┌─────────────┐  ┌─────────────┐
│    User     │  │   Product   │  │  Shopping   │
│   Service   │  │   Service   │  │    Cart     │
│  (3 pods)   │  │  (5 pods)   │  │  (4 pods)   │
└─────────────┘  └─────────────┘  └─────────────┘
┌─────────────┐  ┌─────────────┐
│   Payment   │  │Notification │
│   Service   │  │   Service   │
│  (2 pods)   │  │  (2 pods)   │
└─────────────┘  └─────────────┘

Benefits:
- Independent scaling (Product service needs more pods during sales)
- Independent deployments (update Payment without touching others)
- Better fault isolation (if Cart crashes, User service still works)

2. CI/CD Pipelines

Scenario: Automated testing and deployment

# Jenkins running in Kubernetes:
# each build gets an isolated environment

# Build process:
# 1. Code commit triggers webhook
# 2. Kubernetes spawns build pod
# 3. Tests run in isolated container
# 4. Build completes, pod is destroyed
# 5. If successful, deploy to staging cluster
# 6. Automated tests run
# 7. If they pass, promote to production

# Benefits:
# - Consistent build environments
# - Parallel builds (scale based on load)
# - Resource efficiency (pods deleted after use)

3. Batch Processing & Jobs

Scenario: Daily report generation

# Kubernetes Job for batch processing
apiVersion: batch/v1
kind: Job
metadata:
  name: daily-report
spec:
  parallelism: 5      # Run 5 pods in parallel
  completions: 100    # Process 100 tasks total
  template:
    spec:
      containers:
      - name: report-generator
        image: report-app:v1
      restartPolicy: OnFailure

# Kubernetes handles:
# - Distributing 100 tasks across 5 parallel workers
# - Retrying failed tasks
# - Cleaning up completed pods
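
Because the scenario is a daily report, the natural wrapper is a CronJob, which stamps out a Job like the one above on a schedule (a sketch; "0 2 * * *" means 02:00 every day):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: daily-report
spec:
  schedule: "0 2 * * *"   # standard cron syntax
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: report-generator
            image: report-app:v1
          restartPolicy: OnFailure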

4. Machine Learning Model Serving

Scenario: ML model inference API

# GPU-enabled pods for ML inference
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ml-model-api
spec:
  replicas: 3
  selector:
    matchLabels: {app: ml-model-api}
  template:
    metadata:
      labels: {app: ml-model-api}
    spec:
      containers:
      - name: model-server
        image: tensorflow/serving:latest
        resources:
          limits:
            nvidia.com/gpu: 1  # Request one GPU (needs the NVIDIA device plugin)

# Benefits:
# - Automatic scaling based on inference requests
# - GPU resource management
# - A/B testing with multiple model versions
# - Blue-green deployments for model updates

Kubernetes Ecosystem & CNCF

Kubernetes is part of a larger cloud-native ecosystem:

CNCF Projects Commonly Used with Kubernetes

Container Runtimes:
- containerd (most common)
- CRI-O
- Docker Engine (removed as a Kubernetes runtime in v1.24; Docker-built images still run on containerd and CRI-O)

Service Mesh:
- Istio
- Linkerd

Monitoring:
- Prometheus (metrics)
- Grafana (visualization)

Logging:
- Fluentd
- Loki

CI/CD:
- ArgoCD (GitOps)
- Flux
- Tekton

Package Management:
- Helm (Kubernetes package manager)
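
Helm is likely the first of these you will touch: it packages whole application stacks as versioned "charts". A typical session looks like this (bitnami/nginx is a public chart, used here purely as an example):

# Add a chart repository, then install a release from it
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-web bitnami/nginx

# Upgrades and rollbacks are first-class operations
helm upgrade my-web bitnami/nginx --set replicaCount=3
helm rollback my-web 1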

When to Use Kubernetes

Good Fits

YES - Use Kubernetes when:

  • Running microservices architecture
  • Need auto-scaling and self-healing
  • Multi-cloud or hybrid cloud deployment
  • Team has container experience
  • Application requires high availability
  • Multiple environments (dev, staging, prod) with consistency

Example Perfect Scenario:

SaaS Platform Requirements:
- 20+ microservices
- Need to scale 10x during peak hours
- Deploy 50+ times per day
- Multi-region deployment
- 99.99% uptime requirement

→ Kubernetes is ideal

Not Always Necessary

MAYBE - Consider alternatives when:

  • Simple monolithic application
  • Small team with no Kubernetes experience
  • Cost-sensitive (Kubernetes has overhead)
  • Application doesn’t need scaling
  • Regulatory requirements prevent containerization

Example Where Simpler Solution Works:

Simple Blog/CMS:
- Single application
- Predictable traffic
- 1-2 deployments per month
- Small team

→ Traditional VM with Docker Compose might be better
→ Or managed PaaS like Heroku, Railway
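
At that scale the entire "orchestration" layer can be one Compose file (a minimal sketch; my-blog:latest is a hypothetical image):

# docker-compose.yml
services:
  blog:
    image: my-blog:latest   # hypothetical application image
    ports:
      - "80:8080"
    restart: unless-stopped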

What You’ll Learn in This Series

This tutorial series is structured to take you from beginner to proficient:

Part 1: Introduction to Kubernetes (Current)

  • Understanding Kubernetes fundamentals
  • Evolution and problems solved
  • Real-world use cases

Part 2: Installing Kubernetes on Ubuntu

  • System requirements and preparation
  • Installing Docker/containerd
  • Setting up kubectl
  • Installing Minikube (single-node)
  • Installing kubeadm (multi-node)
  • Cluster verification

Part 3: Kubernetes Architecture and Core Concepts

  • Control Plane components
  • Worker Node components
  • etcd, API Server, Scheduler, Controller Manager
  • kubelet, kube-proxy
  • How components communicate

Part 4: Working with Pods

  • Pod lifecycle
  • Creating pods with YAML
  • Multi-container pods
  • Init containers
  • Pod networking
  • Debugging pods

Part 5: Deployments and ReplicaSets

  • Declarative deployments
  • Rolling updates
  • Rollback strategies
  • Scaling applications
  • Deployment strategies (blue-green, canary)

Part 6: Services and Networking

  • ClusterIP, NodePort, LoadBalancer
  • Ingress controllers
  • Network policies
  • DNS in Kubernetes
  • External service integration

Part 7: ConfigMaps and Secrets

  • Externalizing configuration
  • Managing secrets securely
  • Environment variables vs volumes
  • Updating configurations

Part 8: Persistent Storage

  • Volumes and Volume Mounts
  • PersistentVolumes (PV)
  • PersistentVolumeClaims (PVC)
  • StorageClasses
  • StatefulSets for stateful apps

Part 9: Advanced Topics

  • Ingress controllers
  • StatefulSets
  • DaemonSets
  • Jobs and CronJobs
  • Resource quotas
  • LimitRanges

Part 10: Monitoring, Logging, and Best Practices

  • Prometheus and Grafana setup
  • Centralized logging
  • Health checks (liveness, readiness)
  • Security best practices
  • Cost optimization
  • Production readiness checklist

Key Takeaways

What is Kubernetes?

  • Container orchestration platform for automating deployment, scaling, and management

Why Kubernetes?

  • Evolved from physical → virtual → containers → orchestration needs
  • Solves high availability, scaling, service discovery, rollouts, storage, and configuration challenges

Kubernetes vs Docker

  • Docker: Container runtime (runs individual containers)
  • Kubernetes: Orchestration platform (manages containers at scale)
  • They complement each other

When to Use

  • Microservices architecture
  • Need auto-scaling and high availability
  • Multiple deployments per day
  • Multi-environment consistency
  • Consider alternatives for simple applications with small teams

What’s Next

  • In Part 2, we’ll get hands-on by installing Kubernetes on Ubuntu, setting up both single-node (Minikube) and multi-node (kubeadm) clusters

Prerequisites for Next Tutorial

Before Part 2, ensure you have:

  • Ubuntu 20.04 or 22.04 LTS (physical or VM)
  • At least 2 CPU cores
  • 4GB RAM (8GB recommended)
  • 20GB free disk space
  • Stable internet connection
  • Basic Linux command line knowledge
  • Understanding of this introduction
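
The hardware requirements can be checked up front with three standard commands:

nproc     # CPU cores; should print 2 or more
free -h   # total memory; look for 4Gi or more
df -h /   # free space on the root filesystem; 20G or more free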

Additional Resources

Official Documentation:

  • Kubernetes Docs: kubernetes.io/docs
  • Kubernetes Blog: kubernetes.io/blog

Interactive Learning:

  • Play with Kubernetes: labs.play-with-k8s.com
  • Killercoda Kubernetes scenarios: killercoda.com

Community:

  • Kubernetes Slack: kubernetes.slack.com
  • Stack Overflow [kubernetes] tag
  • Reddit: r/kubernetes

Ready to dive deeper? In Part 2, we’ll install Kubernetes on Ubuntu and get your first cluster running!