Kubernetes (K3s)¶
K3s is a lightweight Kubernetes distribution running in the homelab to orchestrate containerized workloads that need advanced scheduling, scaling, and high-availability features beyond what Docker Compose provides.
Cluster Architecture¶
K3s Setup¶
- Lightweight Distribution – K3s is optimized for resource-constrained environments
- Single Binary – Easy installation and updates (install sketch after this list)
- Built-in Components – Includes Traefik, CoreDNS, local-path provisioner
- HA Capable – Multi-master configuration for production workloads
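A minimal install sketch for a single node, assuming a fresh Linux host with internet access; the official script sets up the systemd service and the bundled kubectl:

# Install K3s as a single-node server (control plane and worker in one binary)
curl -sfL https://get.k3s.io | sh -

# Kubeconfig is written to /etc/rancher/k3s/k3s.yaml
sudo k3s kubectl get nodes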
Cluster Configuration¶
- Control Plane Nodes – 3 master nodes for HA (bootstrap sketch after this list)
- Worker Nodes – 4 worker nodes for workload distribution
- Embedded etcd – Distributed key-value store for cluster state
- Storage Integration – TrueNAS CSI for persistent volumes
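A sketch of bootstrapping the three-server, embedded-etcd topology above; the hostnames and shared token are placeholders:

# First server initializes the embedded etcd cluster
curl -sfL https://get.k3s.io | sh -s - server --cluster-init --token <shared-token>

# Remaining servers join the existing control plane
curl -sfL https://get.k3s.io | sh -s - server --server https://k3s-server-1:6443 --token <shared-token>

# Workers join as agents
curl -sfL https://get.k3s.io | K3S_URL=https://k3s-server-1:6443 K3S_TOKEN=<shared-token> sh -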
Key Features¶
Automated Deployment & Scaling¶
- Deployments – Declarative updates and rollbacks
- StatefulSets – Ordered, persistent workloads (databases)
- DaemonSets – Run on every node (monitoring agents)
- Horizontal Pod Autoscaling – Scale based on CPU/memory metrics (example below)
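K3s bundles metrics-server, so CPU/memory-based autoscaling works out of the box. A sketch of a HorizontalPodAutoscaler for the web-app Deployment shown later on this page; the replica bounds and 70% CPU target are illustrative:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70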
Service Discovery & Load Balancing¶
- ClusterIP Services – Internal service discovery (example after this list)
- NodePort Services – External access on specific ports
- LoadBalancer Services – Integration with MetalLB for bare-metal
- Ingress Controllers – HTTP/HTTPS routing with Traefik
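A minimal ClusterIP Service fronting the web-app Deployment defined below; other pods reach it via cluster DNS at web-app.<namespace>.svc.cluster.local:

apiVersion: v1
kind: Service
metadata:
  name: web-app
  labels:
    app: web
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
    - name: http
      port: 80
      targetPort: 80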
Storage Management¶
- Persistent Volumes – Abstracted storage provisioning
- TrueNAS CSI Driver – Dynamic NFS volume creation
- Storage Classes – Different performance tiers (SSD, HDD)
- Volume Snapshots – Backup and restore capabilities (example below)
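A VolumeSnapshot sketch for the app-data claim defined later on this page; it assumes the external-snapshotter CRDs are installed and that a VolumeSnapshotClass named truenas-snapshots exists (not shown here):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: app-data-snap
spec:
  volumeSnapshotClassName: truenas-snapshots
  source:
    persistentVolumeClaimName: app-data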
Workload Examples¶
Stateless Applications¶
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
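Typical lifecycle commands for this Deployment (the manifest file name is assumed):

kubectl apply -f web-app.yaml                  # create or declaratively update
kubectl scale deployment web-app --replicas=5  # manual scale-out
kubectl rollout undo deployment/web-app        # revert a bad rollout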
Stateful Applications¶
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:15
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: password
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: truenas-nfs
        resources:
          requests:
            storage: 20Gi
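The serviceName field above expects a matching headless Service, which gives each replica a stable DNS name such as postgres-0.postgres; a minimal sketch:

apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  clusterIP: None   # headless: per-pod DNS records instead of a virtual IP
  selector:
    app: postgres
  ports:
    - port: 5432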
Helm Package Manager¶
Chart Repositories¶
# Add common chart repos
helm repo add stable https://charts.helm.sh/stable
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
Application Deployment¶
# Install Grafana via Helm
helm install grafana bitnami/grafana \
  --namespace monitoring \
  --set service.type=LoadBalancer \
  --set persistence.enabled=true \
  --set persistence.size=10Gi
Installed Charts¶
- Monitoring Stack – Prometheus, Grafana, Alertmanager
- Logging Stack – Loki, Promtail
- CI/CD – ArgoCD for GitOps
- Databases – PostgreSQL, Redis
Ingress & Networking¶
Traefik Ingress Controller¶
Built into K3s, Traefik handles HTTP/HTTPS routing:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  rules:
    - host: app.k8s.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-app
                port:
                  number: 80
  tls:
    - hosts:
        - app.k8s.local
      secretName: app-tls
MetalLB Load Balancer¶
Provides LoadBalancer IPs for services on bare metal:
apiVersion: v1
kind: Service
metadata:
  name: web-app-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
  loadBalancerIP: 10.0.20.100  # Reserved IP from pool
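MetalLB also needs an address pool to allocate from; a sketch using its CRDs in layer-2 mode, where the pool name and range are assumptions based on the reserved IP above:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: homelab-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.0.20.100-10.0.20.150
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: homelab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - homelab-pool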
Storage Integration¶
TrueNAS CSI Driver¶
Dynamic provisioning of NFS volumes from TrueNAS:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: truenas-nfs
provisioner: nfs.csi.k8s.io
parameters:
  server: truenas.local
  share: /mnt/pool/k8s-volumes
mountOptions:
  - nfsvers=4
  - nolock
reclaimPolicy: Retain
volumeBindingMode: Immediate
Persistent Volume Claims¶
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: truenas-nfs
  resources:
    requests:
      storage: 10Gi
Monitoring & Observability¶
Prometheus + Grafana¶
- Metrics Collection – Scrapes all pods and nodes
- Custom Dashboards – Cluster health, resource usage
- Alerting – PagerDuty/Slack integration
- Service Monitors – Auto-discover annotated services (example below)
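A ServiceMonitor sketch, assuming the Prometheus Operator CRDs (e.g. from kube-prometheus-stack) are installed, the target Service carries the app: web label with a port named http, and the workload exposes a /metrics endpoint:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: web-app
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: web
  namespaceSelector:
    matchNames:
      - default
  endpoints:
    - port: http
      path: /metrics
      interval: 30s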
Logging with Loki¶
- Log Aggregation – Centralized logs from all pods
- Label-Based Querying – Filter by namespace, pod, container
- Grafana Integration – Unified metrics + logs view
GitOps with ArgoCD¶
Continuous Deployment¶
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: web-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitea.local/homelab/k8s-manifests
    targetRevision: HEAD
    path: apps/web
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
Benefits¶
- Declarative Config – All manifests in Git
- Auto-Sync – Changes in Git automatically deployed
- Rollback – Easy revert to previous Git commits
- Multi-Cluster – Manage multiple clusters from one ArgoCD
Secrets Management¶
Sealed Secrets¶
Encrypt secrets for safe Git storage:
# Create sealed secret
kubectl create secret generic db-secret \
  --from-literal=password=supersecret \
  --dry-run=client -o yaml | \
  kubeseal -o yaml > sealed-secret.yaml
# Commit sealed-secret.yaml to Git
# Sealed Secrets controller decrypts in cluster
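The generated file looks roughly like the sketch below; only the in-cluster controller holds the private key, so the ciphertext is safe to commit (the value here is a placeholder):

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-secret
  namespace: default
spec:
  encryptedData:
    password: AgBy3k...   # placeholder ciphertext emitted by kubeseal
  template:
    metadata:
      name: db-secret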
External Secrets Operator¶
Sync secrets from Vaultwarden/HashiCorp Vault:
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
spec:
  secretStoreRef:
    name: vault-backend
    kind: SecretStore
  target:
    name: db-secret
  data:
    - secretKey: password
      remoteRef:
        key: database/postgres
        property: password
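The vault-backend store referenced above must exist in the same namespace; a sketch for a HashiCorp Vault KV v2 backend, where the server address and the vault-token Secret are assumptions:

apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: vault-backend
spec:
  provider:
    vault:
      server: https://vault.local:8200
      path: secret
      version: v2
      auth:
        tokenSecretRef:
          name: vault-token
          key: token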
Best Practices¶
- Namespaces – Isolate workloads by environment (dev, staging, prod)
- Resource Limits – Set CPU/memory requests and limits
- Health Checks – Define liveness and readiness probes (both shown in the sketch after this list)
- RBAC – Use Role-Based Access Control for security
- Network Policies – Restrict pod-to-pod communication
- GitOps – All config in Git, deployed via ArgoCD
- Monitoring – Instrument all services with Prometheus
- Backups – Regular etcd snapshots and PV backups
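A sketch applying the resource-limit and health-check practices above to a single container; the thresholds are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: web-probe-demo
spec:
  containers:
    - name: nginx
      image: nginx:1.27
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
        limits:
          cpu: 500m
          memory: 256Mi
      livenessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
      readinessProbe:
        httpGet:
          path: /
          port: 80
        periodSeconds: 5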
Use Cases in Homelab¶
Production Workloads¶
- Databases – PostgreSQL, Redis with HA
- Monitoring – Prometheus/Grafana stack
- CI/CD – ArgoCD, Tekton pipelines
Development & Testing¶
- Staging Environments – Isolated namespaces per project
- Feature Testing – Temporary deployments for PRs
- Load Testing – Scale replicas for performance tests
Learning & Experimentation¶
- Cloud-Native Patterns – Service mesh (Istio), serverless (Knative)
- CKA/CKAD Prep – Hands-on practice for certifications
- GitOps Workflows – ArgoCD, FluxCD comparisons
Troubleshooting¶
Common Commands¶
# Get cluster info
kubectl cluster-info
# List all resources
kubectl get all -A
# Describe pod for events
kubectl describe pod <pod-name> -n <namespace>
# View logs
kubectl logs <pod-name> -n <namespace> -f
# Execute in pod
kubectl exec -it <pod-name> -n <namespace> -- /bin/bash
# Port forward for debugging
kubectl port-forward svc/<service-name> 8080:80 -n <namespace>
Debugging Deployments¶
# Check deployment status
kubectl rollout status deployment/<name> -n <namespace>
# View rollout history
kubectl rollout history deployment/<name> -n <namespace>
# Rollback to previous version
kubectl rollout undo deployment/<name> -n <namespace>