Kubernetes for Beginners - Complete Guide
From basics to production deployments. Learn Kubernetes, master kubectl, understand architecture, and start orchestrating containers like a pro.

Kubernetes is a container orchestration system that changed how we build and deploy applications. If you've heard about it but never had a chance to use it, this guide is for you.
In this article, you'll learn Kubernetes from scratch – no unnecessary theory, just practical examples and real case studies. We'll walk through the key concepts, see what day-to-day work with kubectl looks like, cover best practices, and find out when Kubernetes is the right choice (and when it's not).
After reading this guide, you'll be ready to create your first cluster and deploy a production application.
What is Kubernetes and why is it important?
Kubernetes (often abbreviated as K8s) is an open-source platform for automating the deployment, scaling, and management of containerized applications. Originally created by Google, it is now maintained by the Cloud Native Computing Foundation (CNCF).
Why is Kubernetes revolutionary?
Before Kubernetes
You have an application in 10 Docker containers. One container crashes – you need to restart it manually. Traffic increases 3x – you need to manually start more instances. You want to deploy a new version – downtime for a few minutes.
With Kubernetes
You define the desired state: "I want 10 instances of this container". Kubernetes automatically maintains that state – crashing containers are automatically restarted, auto-scaling responds to load, deployment of new versions happens without downtime.
Key Kubernetes concepts
Kubernetes uses several basic objects (called resources). Understanding them is key to working effectively with K8s.
📦 Pod
Pod is the smallest unit in Kubernetes. It's a group of one or more Docker containers that share storage and network. In 99% of cases: 1 Pod = 1 container.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
  labels:
    app: my-app
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      ports:
        - containerPort: 80

🚀 Deployment
Deployment manages Pods. You define how many replicas you want, and Kubernetes automatically creates and maintains the appropriate number of Pods. Handles rolling updates, rollback, and self-healing.
Example: Deployment with 3 replicas
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3   # 3 copies of the application
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-app:v1.0
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 512Mi

🌐 Service
Service is a stable endpoint for Pods. Pods have dynamic IPs that change with every restart. Service provides a stable DNS name and load balancing.
Example: LoadBalancer Service
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  type: LoadBalancer   # Public IP
  selector:
    app: my-app        # Routes traffic to Pods with this label
  ports:
    - port: 80         # External port
      targetPort: 8080 # Port in container

ClusterIP (default)
Internal IP, accessible only in cluster. For service-to-service communication.
NodePort
Public port (30000-32767) on each node. Rarely used.
LoadBalancer
Public IP from cloud provider (Azure LB, AWS ELB). For web applications.
⚙️ ConfigMap & Secret
ConfigMap
Stores configuration (non-sensitive data): API URLs, feature flags, settings.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  API_URL: "https://api.example.com"
  LOG_LEVEL: "info"
Secret
Stores sensitive data (base64-encoded): passwords, API keys, certificates.
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  DB_PASSWORD: cGFzc3dvcmQxMjM=   # base64("password123")
  API_KEY: c2VjcmV0a2V5NDU2       # base64("secretkey456")
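To wire both into a container, a common pattern is envFrom, which injects every key as an environment variable. A minimal sketch, assuming the app-config and app-secrets objects above:

apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
spec:
  containers:
    - name: app
      image: my-app:v1.0
      envFrom:
        - configMapRef:
            name: app-config    # injects API_URL, LOG_LEVEL
        - secretRef:
            name: app-secrets   # injects DB_PASSWORD, API_KEY (decoded)

Remember that Secret values must be base64-encoded before you put them in the manifest, e.g. echo -n 'password123' | base64.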
Kubernetes Architecture
A Kubernetes cluster consists of Control Plane (the cluster's brain) and Worker Nodes (worker machines).
Control Plane Components
kube-apiserver
Control Plane frontend. Everything goes through API server – kubectl, CI/CD, internal components. RESTful API for cluster management.
etcd
Distributed key-value store. Stores the entire cluster state: manifests, secrets, config. Highly available (HA setup = 3 or 5 etcd instances).
kube-scheduler
Decides which node to run a new Pod on. Takes into account: available resources (CPU, RAM), affinity rules, taints/tolerations.
kube-controller-manager
Runs controllers (ReplicaSet, Deployment, StatefulSet, DaemonSet). Each controller monitors its objects and maintains desired state.
Worker Node Components
kubelet
Agent running on each node. Communicates with Control Plane, starts Pods, monitors health checks (liveness/readiness probes).
kube-proxy
Network proxy running on each node. Handles Service networking – routes traffic to appropriate Pods (via iptables or IPVS).
Container Runtime
Runs the containers. Standard: containerd (lightweight, CNCF project). Alternatives: CRI-O; Docker Engine support via dockershim was removed in Kubernetes 1.24, but it still works through the cri-dockerd adapter.
How does it work in practice?
- You send a Deployment manifest via kubectl → kube-apiserver
- etcd saves desired state (3 Pod replicas)
- kube-controller-manager detects that 0/3 Pods exist → creates 3 Pods
- kube-scheduler decides which nodes to run Pods on (node A, B, C)
- kubelet on nodes A, B, C receives instructions → starts containers
- kube-proxy configures networking so Service routes traffic to these Pods
- If a Pod crashes → kubelet restarts it; if a node dies → the controllers recreate its Pods and the scheduler places them on healthy nodes
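You can watch this control loop in action yourself, assuming a manifest like the Deployment example above:

# Apply the manifest and observe the cluster react
kubectl apply -f deployment.yaml
kubectl get events --sort-by=.metadata.creationTimestamp   # scheduler & kubelet activity
kubectl get pods -o wide -w                                # watch Pods land on nodes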
Managed Kubernetes: AKS vs EKS vs GKE
You can install Kubernetes self-hosted (kubeadm, kops), but 95% of companies use managed Kubernetes from cloud providers. Control Plane is managed by the provider – you only pay for worker nodes.
| Feature | Azure AKS | AWS EKS | Google GKE |
|---|---|---|---|
| Control Plane | Free ($0) | $0.10/hour ($73/mo) | Free (1 cluster), then $0.10/h |
| Setup time | 5-10 min | 15-20 min | 5 min (fastest) |
| Upgrade | Auto-upgrade optional | Manual (via eksctl) | Auto-upgrade by default |
| Monitoring | Azure Monitor + Container Insights | CloudWatch + EKS Observability | Cloud Monitoring (best-in-class) |
| Networking | Azure CNI, Kubenet | VPC CNI, Calico | GKE native (best performance) |
| Auto-scaling | Cluster Autoscaler, KEDA | Cluster Autoscaler, Karpenter | Node Auto-Provisioning (best) |
| Windows containers | Yes (best support) | Yes | Limited |
| Best for | Azure ecosystem, .NET apps | AWS ecosystem, most services | Best Kubernetes experience |
Let's be honest about costs
Big cloud providers are convenient, but they're NOT the only option. For a comparable 3-node production cluster, Hetzner and DigitalOcean typically come in 4-5x cheaper per month than the big cloud providers. For small-medium workloads with predictable traffic, they're often the better choice.
Hetzner Cloud Kubernetes – best price/performance for EU
Hetzner doesn't sell a big-cloud-style managed control plane, but running Kubernetes on Hetzner Cloud (e.g., with k3s or community tooling) comes at unbeatable prices. Perfect for EU customers (GDPR compliant, low latency).
- €36/month for production 3-node cluster (vs €150-200 on Azure/AWS/GCP)
- Data centers in Germany and Finland (EU data residency)
- Fast NVMe SSDs, 20TB free traffic per month
- Simple pricing – no hidden costs or complex billing
- Great for: startups, SMEs, cost-sensitive projects
Cloud vs Dedicated Servers – the truth nobody tells you
Cloud marketing tells you cloud is always better. That's not true. Here's when dedicated servers beat cloud:
Cloud makes sense for:
- Variable, unpredictable traffic
- Fast scaling requirements (seasonal spikes)
- Need for managed services (databases, AI/ML)
- Global presence (multi-region deployments)
- Development/testing environments
Dedicated servers make sense for:
- Stable, predictable workloads
- High constant resource usage (databases, caches)
- Cost-sensitive projects (50-70% cheaper)
- Full control over hardware
- Privacy/compliance requirements
Reality check: A Hetzner dedicated server (AMD Ryzen 9, 64GB RAM, 2x NVMe SSD) costs €40/month. The same specs on AWS would cost $300-400/month. If your load is stable, dedicated wins.
My honest recommendation
Azure AKS: If you're already using Azure (VMs, SQL Database, Functions) – AKS is a no-brainer. Free Control Plane, excellent integration with Azure services. → Full AKS production guide
AWS EKS: If you're on AWS and need mature ecosystem – EKS + ECS/Fargate is a powerful combo. Pay $73/mo for Control Plane.
Google GKE: If you want the best Kubernetes experience and don't care about cloud lock-in – GKE is Kubernetes done right (Google invented K8s).
Hetzner/DigitalOcean/Civo: If budget matters and you don't need big cloud managed services – these offer 80% of the value at 20% of the cost.
kubectl Basics – Command Line Interface

kubectl is your main tool for working with Kubernetes. Here are the most important commands you'll use daily.
Basic commands
Cluster info
kubectl cluster-info

List Pods
kubectl get pods
kubectl get pods -A          # all namespaces

Pod details
kubectl describe pod <pod-name>

Pod logs
kubectl logs <pod-name>
kubectl logs -f <pod-name>   # follow (tail -f)

Deployment & Configuration
Apply manifest (declarative)
kubectl apply -f deployment.yaml
kubectl apply -f ./manifests/   # entire folder

Create resources (imperative)
kubectl create deployment my-app --image=nginx:1.25
kubectl expose deployment my-app --port=80 --type=LoadBalancer

Scale deployment
kubectl scale deployment my-app --replicas=5

Updates & Rollbacks
Update image
kubectl set image deployment/my-app app=my-app:v2.0

Rollout status & history
kubectl rollout status deployment/my-app
kubectl rollout history deployment/my-app

Rollback
kubectl rollout undo deployment/my-app
kubectl rollout undo deployment/my-app --to-revision=2

Debugging & Troubleshooting
Execute command in Pod
kubectl exec -it <pod-name> -- /bin/bash

Port forwarding (local testing)
kubectl port-forward pod/my-app 8080:80
Access: http://localhost:8080

Resource usage
kubectl top nodes
kubectl top pods

Cleanup
Delete resources
kubectl delete pod <pod-name>
kubectl delete deployment my-app
kubectl delete -f deployment.yaml

Delete all resources in namespace
kubectl delete all --all -n <namespace>

Pro tips for kubectl
- Alias: alias k=kubectl → instead of kubectl, type just "k"
- kubectl cheat sheet: kubernetes.io/docs/reference/kubectl/cheatsheet
- kubectl plugins: krew (plugin manager), kubens/kubectx (switch contexts), stern (multi-pod logs)
- Bash completion: source <(kubectl completion bash) – add this line to your ~/.bashrc to make it permanent
Deployment Strategies
Kubernetes offers several deployment strategies. The choice depends on business requirements, risk tolerance, and application type.
Rolling Update (default)
Gradual replacement of Pods from old version to new. Kubernetes removes old Pods and creates new ones in batches, maintaining minimum running replicas.
Pros
- ✓Zero downtime – application runs continuously
- ✓Automatic rollback if new version fails
- ✓Default strategy – no configuration needed
Cons
- ✗2 versions running simultaneously (for a few minutes)
- ✗Difficult for breaking changes (DB schema migrations)
Configuration example:
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # Max 1 Pod offline
      maxSurge: 2         # Max 2 extra Pods during update

Blue/Green Deployment
Two identical environments: Blue (current production) and Green (new version). Deploy to Green, test, switch traffic from Blue to Green (instant switch).
Pros
- ✓Instant rollback – switch Service label
- ✓Testing on full production environment
- ✓Only one version active – no compatibility issues
Cons
- ✗2x resources (Blue + Green simultaneously)
- ✗Database migrations can be problematic
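A minimal sketch of the switch mechanism, assuming both versions run behind a single Service and carry a version label (the blue/green names are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    version: blue    # change to "green" to cut all traffic over instantly
  ports:
    - port: 80
      targetPort: 8080

The cutover (and the rollback) is then a one-liner:
kubectl patch service my-app -p '{"spec":{"selector":{"app":"my-app","version":"green"}}}'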
Canary Deployment
New version first goes to a small group of users (e.g., 5%). If everything is OK, gradually increase percentage (10%, 25%, 50%, 100%).
Pros
- ✓Minimal risk – limited blast radius
- ✓A/B testing – compare metrics (errors, latency)
- ✓Progressive rollout – stop if something goes wrong
Cons
- ✗Requires advanced routing (Istio, Linkerd, Flagger)
- ✗Slower rollout (days vs minutes)
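Without a mesh, you can approximate a canary with two Deployments sharing one Service selector; the traffic split roughly follows the replica ratio. A sketch, assuming hypothetical Deployments my-app-stable and my-app-canary that both carry the app: my-app label:

# ~10% canary: 9 stable replicas + 1 canary replica behind the same Service
kubectl scale deployment my-app-stable --replicas=9
kubectl scale deployment my-app-canary --replicas=1

# metrics look good? shift the ratio gradually
kubectl scale deployment my-app-canary --replicas=5   # ~50%
kubectl scale deployment my-app-stable --replicas=5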
Which strategy to choose?
- Rolling Update: Default for 90% of applications. Safe enough, zero downtime, no extra costs.
- Blue/Green: High-risk releases, e-commerce peak seasons, banking apps. You need instant rollback.
- Canary: Major releases, new features, A/B testing. You have time and monitoring infrastructure.
Auto-scaling in Kubernetes
Kubernetes offers two types of auto-scaling: Horizontal (more Pods) and Vertical (more CPU/RAM per Pod).
Horizontal Pod Autoscaler (HPA)
Automatically scales the number of Pods based on CPU, memory, or custom metrics. Example: 10% CPU → 2 Pods, 80% CPU → 10 Pods.
Example: HPA based on CPU
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # Scale if CPU > 70%

Vertical Pod Autoscaler (VPA)
Automatically adjusts CPU/RAM requests and limits. Useful when you don't know optimal resource allocation.
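VPA is not part of core Kubernetes – it requires installing the autoscaler CRDs and controllers first. Once installed, a minimal sketch looks like this:

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Auto"   # "Off" = recommendations only, no automatic changes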
Cluster Autoscaler (Node-level)
Adds/removes worker nodes when Pods can't be scheduled due to lack of resources. Example: 10 Pods need 20GB RAM, cluster has 16GB → Cluster Autoscaler adds new node.
How it works:
- HPA scales Deployment from 3 to 8 Pods
- Cluster has capacity for 6 Pods only – 2 Pods are pending
- Cluster Autoscaler detects pending Pods → adds new node
- New node ready → scheduler places pending Pods
Best practices for auto-scaling
- Always set resource requests/limits – HPA needs them to work correctly
- Use HPA for stateless apps (web servers, APIs), not for databases
- Set reasonable min/maxReplicas – avoid cost explosion
- Monitor scaling events via kubectl get hpa -w
- Test auto-scaling under load before production (use k6, Locust, or JMeter)
Networking in Kubernetes
Kubernetes networking may seem complicated, but in practice it comes down to a few basic concepts.
Kubernetes networking basics
1. Pod-to-Pod communication
Each Pod has its own IP (internal cluster IP). Pods can communicate directly without NAT – flat network model.
2. Service networking
Service is a stable DNS name and IP. Traffic to Service is load-balanced between Pods.
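Every Service gets a predictable DNS name of the form <service>.<namespace>.svc.cluster.local. A quick way to test it from inside the cluster with a throwaway Pod (the debug name is illustrative):

kubectl run -it --rm debug --image=busybox:1.36 --restart=Never \
  -- wget -qO- http://my-app-service.default.svc.cluster.local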
3. Ingress (HTTP/HTTPS routing)
Ingress is a layer 7 (HTTP/HTTPS) load balancer with routing based on hostname/path. Note that Ingress objects only take effect if an Ingress controller (e.g., NGINX Ingress, Traefik) is installed in the cluster.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-service
                port:
                  number: 80

CNI (Container Network Interface)
CNI plugin implements the Kubernetes networking model. CNI choice affects performance and features.
Calico (most popular)
Network policies, BGP routing, eBPF dataplane. Production-ready.
Cilium (modern)
eBPF-based, high performance, advanced observability, service mesh.
Azure CNI / AWS VPC CNI
Native cloud integration, Pods get IPs from VNet/VPC.
Network Policies (firewall rules)
Network Policies are firewalls for Pods. By default all Pods can communicate with all. Network Policy restricts communication.
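A common baseline is a default-deny policy for the namespace, after which you allow traffic explicitly. A minimal sketch (the policy name is illustrative):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}    # selects every Pod in the namespace
  policyTypes:
    - Ingress        # no ingress rules defined = all inbound traffic blocked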
Example: only Pods labeled app: frontend can connect to the Backend – everything else is blocked
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # Only frontend can connect to backend

When to use Kubernetes (and when not)?
Kubernetes is powerful, but not for every project. Here's an honest assessment of when it's worth it, and when it's better to choose a simpler solution.
Use Kubernetes if:
- ✓Microservices architecture: 5+ services, you need orchestration, service discovery, load balancing (compare microservices vs monolith)
- ✓High availability requirements: Zero-downtime deployment, self-healing, multi-zone redundancy
- ✓Variable traffic: Auto-scaling based on CPU/RAM/custom metrics, daily/weekly traffic patterns
- ✓Multi-cloud or cloud portability: Deploy the same app on Azure, AWS, GCP, on-premise
- ✓DevOps team available: Someone knows Kubernetes, can manage infrastructure, troubleshoot
- ✓Complex deployment scenarios: Canary deployments, blue/green, feature flags, A/B testing (see GitHub Actions vs Azure DevOps)
DON'T use Kubernetes if:
- ✗Simple monolithic app: 1 service, simple architecture → use Azure App Service, AWS Elastic Beanstalk, Heroku
- ✗Small team, no DevOps: Kubernetes requires knowledge and maintenance. Use PaaS (Platform as a Service)
- ✗Stable traffic, no scaling needs: If traffic is constant (100 req/s always) → no need for auto-scaling
- ✗Very low budget: If you can't afford even Hetzner (€36/month), use serverless (Functions, Lambda) or PaaS (Heroku, Render)
- ✗Startup MVP: Early stage, need fast iteration → use simpler stack, migrate to K8s later if needed
- ✗Stateful databases: Kubernetes is great for stateless apps. Databases → managed services (Azure SQL, RDS, Cloud SQL)
My honest recommendation
Start simple – if your app works on Azure App Service or AWS Elastic Beanstalk, stay there. Migrate to Kubernetes when you actually need its features (auto-scaling, multi-service orchestration).
Don't fall into hype – Kubernetes is popular, but that doesn't mean it's right for every project. Many startups waste time on K8s instead of building product.
Use managed Kubernetes – if you decide on K8s, use AKS/EKS/GKE. Never self-host in production unless you have dedicated DevOps team.
Getting Started – Your First Kubernetes Cluster
Theory is great, but let's get practical. Here's how to create your first Kubernetes cluster and deploy an application.
Option 1: Local Development (minikube)
Best for learning and local testing. Single-node cluster on your laptop.
Installation & Setup:
# Install minikube (macOS)
brew install minikube

# Start cluster
minikube start --driver=docker

# Verify
kubectl cluster-info
kubectl get nodes
Option 2: Azure AKS (Production-ready)
Managed Kubernetes from Azure. Free Control Plane, pay only for nodes.
Create AKS cluster:
# Create resource group
az group create --name my-k8s-rg --location westeurope

# Create AKS cluster (2 nodes, Standard_D2s_v3)
az aks create \
  --resource-group my-k8s-rg \
  --name my-aks-cluster \
  --node-count 2 \
  --node-vm-size Standard_D2s_v3 \
  --enable-cluster-autoscaler \
  --min-count 2 \
  --max-count 5 \
  --generate-ssh-keys

# Get credentials
az aks get-credentials --resource-group my-k8s-rg --name my-aks-cluster

# Verify
kubectl get nodes
Option 3: AWS EKS
Managed Kubernetes from AWS. $73/month for Control Plane + node costs.
Create EKS cluster with eksctl:
# Install eksctl
brew install eksctl

# Create cluster (2 t3.medium nodes)
eksctl create cluster \
  --name my-eks-cluster \
  --region us-east-1 \
  --nodegroup-name standard-workers \
  --node-type t3.medium \
  --nodes 2 \
  --nodes-min 2 \
  --nodes-max 5 \
  --managed

# Verify
kubectl get nodes
Real-world Example: Deploy Node.js API to Kubernetes

Let's deploy a real application: Node.js REST API with PostgreSQL database, auto-scaling, and production-ready configuration.
Step 1: Prepare Docker Image
First, containerize your Node.js app and push to container registry (Docker Hub, Azure ACR, AWS ECR).
Dockerfile:
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
Step 2: Kubernetes Deployment
Create Deployment with 3 replicas, resource limits, health checks.
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nodejs-api
  template:
    metadata:
      labels:
        app: nodejs-api
    spec:
      containers:
        - name: api
          image: myregistry.azurecr.io/nodejs-api:v1.0
          ports:
            - containerPort: 3000
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: url
            - name: NODE_ENV
              value: "production"
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 512Mi
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 5

Step 3: Service & Ingress
Expose API via LoadBalancer Service and configure Ingress for HTTPS.
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nodejs-api-service
spec:
  type: LoadBalancer
  selector:
    app: nodejs-api
  ports:
    - port: 80
      targetPort: 3000

ingress.yaml (with TLS):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nodejs-api-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - api.example.com
      secretName: api-tls-cert
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nodejs-api-service
                port:
                  number: 80

Step 4: Auto-scaling Configuration
hpa.yaml:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nodejs-api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nodejs-api
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70

Step 5: Deploy to Kubernetes
# Create namespace
kubectl create namespace production

# Create secret for database
kubectl create secret generic db-secret \
  --from-literal=url="postgresql://user:pass@db.example.com/mydb" \
  -n production

# Apply all manifests
kubectl apply -f deployment.yaml -n production
kubectl apply -f service.yaml -n production
kubectl apply -f ingress.yaml -n production
kubectl apply -f hpa.yaml -n production

# Watch deployment
kubectl get pods -n production -w

# Get LoadBalancer IP
kubectl get service nodejs-api-service -n production
What did we achieve?
- 3 replicas for high availability – if one Pod crashes, 2 others keep serving traffic
- Auto-scaling from 3 to 10 Pods based on CPU – handles traffic spikes automatically
- Health checks – Kubernetes restarts unhealthy Pods automatically
- Resource limits – prevents one app from consuming entire node resources
- Secrets management – database credentials stored securely
- LoadBalancer Service – public IP for external access
- Zero-downtime deployment – rolling updates without service interruption
Kubernetes Best Practices
Production Kubernetes requires more than just deploying Pods. Here are battle-tested best practices from real projects.
Security
- Never run containers as root: use runAsNonRoot: true in the Pod spec (see the sketch after this list)
- Use Network Policies: restrict Pod-to-Pod communication to necessary services only
- Secrets management: use Azure Key Vault, AWS Secrets Manager, or HashiCorp Vault instead of K8s Secrets
- Image scanning: scan container images for vulnerabilities (Trivy, Snyk, Azure Defender)
- RBAC: use Role-Based Access Control – developers shouldn't have cluster-admin rights
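A minimal sketch of the non-root settings from the first point (UID and image name are illustrative):

spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000              # any non-zero UID that exists in the image
  containers:
    - name: app
      image: my-app:v1.0
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true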
Resource Management
- Always set requests & limits: CPU/RAM requests for scheduling, limits to prevent resource hogging
- Use QoS classes wisely: Guaranteed (requests = limits) for critical apps, Burstable for most workloads
- ResourceQuotas per namespace: prevent one team from consuming all cluster resources (sketch below)
- Monitor resource usage: use Prometheus + Grafana or cloud-native monitoring
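A minimal ResourceQuota sketch for the per-namespace point above (names and numbers are illustrative):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "10"       # total CPU requests across the namespace
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi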
Reliability
- Health checks are mandatory: both liveness (restart unhealthy Pods) and readiness (don't send traffic to starting Pods)
- PodDisruptionBudget: ensure a minimum number of Pods during voluntary disruptions like node upgrades (sketch below)
- Multi-zone deployment: spread Pods across availability zones for fault tolerance
- Graceful shutdown: handle SIGTERM properly, finish in-flight requests before exit
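For the PodDisruptionBudget point, a minimal sketch that keeps at least 2 Pods up during node drains (assuming the app: my-app label from earlier examples):

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2          # evictions are blocked if they would drop below 2 Pods
  selector:
    matchLabels:
      app: my-app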
Deployment Strategy
- Use GitOps: ArgoCD or Flux for declarative deployments from a Git repo
- Rolling updates: default strategy, gradual Pod replacement without downtime
- Canary deployments: deploy the new version to 10% of traffic, verify, then 100%
- Rollback plan: always test rollback before a production deployment
Cost Optimization
- Right-size nodes: use node pools with different VM sizes for different workloads
- Spot/Preemptible instances: save 60-80% for fault-tolerant workloads
- Cluster Autoscaler: scale down nodes during low traffic (nights, weekends)
- Monitor costs: use Kubecost or cloud-native cost management tools
Common Kubernetes Mistakes to Avoid
1. No resource requests/limits
Problem: Pods can consume all node resources, causing OOM kills and cluster instability.
Solution: Always set resources.requests and resources.limits for every container.
2. Ignoring health checks
Problem: Kubernetes sends traffic to unhealthy Pods, causing 5xx errors.
Solution: Implement livenessProbe and readinessProbe for every Pod.
3. Running as root
Problem: Security risk – if container is compromised, attacker has root access.
Solution: Set securityContext.runAsNonRoot: true and use non-root user in Dockerfile.
4. Single replica for critical services
Problem: If the only Pod crashes or node dies, service is down.
Solution: Use replicas: 3 minimum for production services.
5. Storing secrets in Git
Problem: Secrets in manifests are visible in Git history – security breach.
Solution: Use external secret managers (Azure Key Vault, AWS Secrets Manager) or Sealed Secrets.
6. No monitoring & logging
Problem: When something breaks, you're blind – no idea what went wrong.
Solution: Use Prometheus + Grafana for metrics, ELK/Loki for logs, Jaeger for tracing.
Summary & Next Steps
What you've learned
- What Kubernetes is and why it matters for modern applications
- Key concepts: Pods, Deployments, Services, ConfigMaps, Secrets
- Kubernetes architecture: Control Plane and Worker Nodes
- Working with kubectl – essential commands for daily work
- Auto-scaling: HPA, VPA, Cluster Autoscaler
- Networking: CNI, Ingress, Network Policies
- When to use Kubernetes (and when to avoid it)
- Real-world deployment example with best practices
Next steps to master Kubernetes
- Set up local cluster with minikube – practice basic commands and deployments
- Deploy your own application – containerize an app and deploy to Kubernetes
- Learn advanced concepts: StatefulSets (databases), DaemonSets (monitoring agents), Jobs & CronJobs
- Master Helm – package manager for Kubernetes (like npm for K8s)
- Implement GitOps – ArgoCD or Flux for automated deployments from Git
- Study service mesh: Istio or Linkerd for advanced traffic management and observability
- Get certified: CKA (Certified Kubernetes Administrator) or CKAD (Application Developer)
Useful resources
- Official Kubernetes documentation – comprehensive reference
- kubectl cheat sheet – quick command reference
- Cloud Native Computing Foundation – K8s ecosystem projects
- Kubernetes The Hard Way – deep understanding by manual setup
Need help with Kubernetes deployment?
We help companies migrate to Kubernetes, optimize existing clusters, and build cloud-native architectures. From strategy to implementation – see our cloud solutions.
📧 Email: hello@wojciechowski.app · I respond within 24h