Sunday, August 10, 2025

Kubernetes Best Practices for Production Support

Got it ✅. I'll prepare 50 Kubernetes Best Practices for Production Support, using this structure for each point:

1. Best Practice (clear, short title)
2. What will happen if followed / if ignored (impact)
3. How to implement (step-by-step)
4. Commands (imperative)
5. YAML Example (if applicable)

Here are the first five points in the new format. Once you confirm this is what you want, I'll continue with all 50:


---

1. Use Resource Requests & Limits for All Pods

What will happen:

If followed: Prevents noisy neighbor problems, ensures fair resource distribution, avoids node overload.

If ignored: Pods may consume excessive CPU/memory, causing other workloads to crash or get evicted.


How to implement:

1. Define CPU & memory requests (minimum) and limits (maximum) for each container.
2. Apply them in all deployments.

Command:

kubectl set resources deployment my-app --limits=cpu=500m,memory=512Mi --requests=cpu=200m,memory=256Mi

YAML Example:

resources:
  requests:
    cpu: "200m"
    memory: "256Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"
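The distinction matters because the scheduler places pods by summing *requests* (not limits) against a node's allocatable capacity; limits are enforced later at runtime. A minimal Python sketch of that scheduling check, using hypothetical node and pod figures:

```python
def fits_on_node(node_allocatable_mcpu, scheduled_requests_mcpu, pod_request_mcpu):
    """A pod is schedulable on a node only if its CPU request fits in what
    remains after subtracting the requests of pods already on the node.
    Limits play no role here; they are enforced at runtime via cgroups."""
    remaining = node_allocatable_mcpu - sum(scheduled_requests_mcpu)
    return pod_request_mcpu <= remaining

# A 2-core node (2000m) already hosting pods that request 1600m in total:
print(fits_on_node(2000, [800, 800], 200))  # True: 400m still free
print(fits_on_node(2000, [800, 800], 500))  # False: only 400m free
```

This is why a pod with no requests can land on an already-busy node and starve its neighbors: the scheduler counts it as needing nothing.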


---

2. Set Liveness & Readiness Probes

What will happen:

If followed: Detects unhealthy containers and restarts them automatically, and ensures only ready pods receive traffic.

If ignored: Traffic may be routed to unhealthy pods, causing errors or downtime.


How to implement:

1. Add livenessProbe to check container health.
2. Add readinessProbe to delay traffic until the pod is ready.

Command:

kubectl edit deployment my-app # Add probes under containers

YAML Example:

livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
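On the application side, the two probe paths above (/health and /ready are example paths, not Kubernetes requirements) typically return 200 when healthy and a non-2xx status otherwise; Kubernetes treats any HTTP status from 200 to 399 as probe success. A minimal Python sketch of such handler logic:

```python
def probe_status(path, alive=True, ready=True):
    """HTTP status code a probe endpoint might return.
    /health answers the livenessProbe; /ready answers the readinessProbe.
    Kubernetes counts any status in the 200-399 range as success."""
    if path == "/health":
        return 200 if alive else 503
    if path == "/ready":
        return 200 if ready else 503
    return 404

print(probe_status("/health"))               # 200
print(probe_status("/ready", ready=False))   # 503 -> pod removed from endpoints
```

Keeping the two checks separate is the point: a pod can be alive (do not restart it) yet not ready (do not send it traffic), for example while warming a cache.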


---

3. Use Pod Disruption Budgets (PDB)

What will happen:

If followed: Prevents too many pods from being evicted during maintenance, ensuring app availability.

If ignored: A node upgrade or drain may remove all pods at once, causing downtime.


How to implement:

1. Create a PDB specifying min pods available or max unavailable.

Command:

kubectl create pdb my-app-pdb --selector=app=my-app --min-available=2

YAML Example:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: my-app
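Under the hood, the eviction API permits a voluntary disruption (such as a node drain) only while the number of healthy pods stays above minAvailable. A sketch of that arithmetic (an illustration, not the actual controller code):

```python
def allowed_disruptions(healthy_pods, min_available):
    """How many pods a drain may voluntarily evict right now:
    evictions are permitted only down to the minAvailable floor."""
    return max(0, healthy_pods - min_available)

# With 3 healthy replicas and minAvailable: 2, one eviction is allowed:
print(allowed_disruptions(3, 2))  # 1
print(allowed_disruptions(2, 2))  # 0: kubectl drain would block/retry here
```

Note the corollary: a PDB with minAvailable equal to the replica count permits zero evictions and can stall node maintenance indefinitely.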


---

4. Enable Logging & Centralized Log Collection

What will happen:

If followed: Easy debugging with all logs in one place (e.g., ELK, Loki, or Splunk).

If ignored: Troubleshooting requires manually checking each pod’s logs.


How to implement:

1. Ensure apps log to stdout & stderr.
2. Use Fluentd/Fluent Bit/Logstash to collect logs.

Command:

kubectl logs -f my-pod

(Central logging tools handle multi-pod aggregation.)

YAML Example (Fluent Bit DaemonSet snippet):

containers:
  - name: fluent-bit
    image: fluent/fluent-bit:latest
    volumeMounts:
      - name: varlog
        mountPath: /var/log
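The snippet above mounts a varlog volume that must also be declared under volumes for the manifest to be valid. A fuller, still illustrative DaemonSet skeleton (names and namespace are assumptions; a real deployment would also pin the image tag and mount a Fluent Bit config):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
spec:
  selector:
    matchLabels:
      app: fluent-bit
  template:
    metadata:
      labels:
        app: fluent-bit
    spec:
      containers:
        - name: fluent-bit
          image: fluent/fluent-bit:latest
          volumeMounts:
            - name: varlog
              mountPath: /var/log
      volumes:
        - name: varlog
          hostPath:
            path: /var/log   # node's log directory, read by the collector
```

A DaemonSet is used (rather than a Deployment) so one collector runs on every node and picks up every pod's container logs from the node filesystem.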


---

5. Use Network Policies for Traffic Control

What will happen:

If followed: Limits which pods/services can communicate, enhancing security.

If ignored: Any pod can connect to any other pod, creating a risk of lateral movement during an attack.


How to implement:

1. Define a NetworkPolicy to allow only required ingress/egress.

Command:

kubectl apply -f allow-frontend-to-backend.yaml

YAML Example:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      role: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
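The allow decision in this policy reduces to label matching: once a pod is selected by the policy (role=backend), ingress is permitted only from pods matching the from selector (role=frontend). A simplified sketch of that evaluation (real policies also consider namespaces, IP blocks, ports, and the union of multiple policies):

```python
def matches(selector, labels):
    """matchLabels semantics: every selector key/value must appear
    in the pod's labels."""
    return all(labels.get(k) == v for k, v in selector.items())

def ingress_allowed(dst_labels, src_labels,
                    pod_selector={"role": "backend"},
                    from_selector={"role": "frontend"}):
    """Traffic reaches the destination pod if no policy selects it
    (default-allow), or if the source matches an allowed 'from' selector."""
    if not matches(pod_selector, dst_labels):
        return True   # pod not selected by any policy: default-allow
    return matches(from_selector, src_labels)

print(ingress_allowed({"role": "backend"}, {"role": "frontend"}))  # True
print(ingress_allowed({"role": "backend"}, {"role": "db"}))        # False
```

The default-allow branch is the key gotcha: pods become isolated only once some NetworkPolicy selects them, so an "allow frontend" policy implicitly denies everything else to the backend pods.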


---

If you confirm, I’ll prepare all 50 best practices in this format so you have a complete production-ready reference with impact, implementation, commands, and YAML examples.

Do you want me to continue with the remaining 45 in this same detailed style?

