Avoiding Resource Chaos in Kubernetes: A Practical Guide to LimitRange and ResourceQuota


It started with a single rogue pod.
The DevOps team had just wrapped up a sprint and gone home for the weekend. By Monday, alerts were screaming. Nodes were throttling, memory usage had spiked, and several apps were unresponsive.
All thanks to one pod, launched with no memory or CPU limits.

The cluster was shared between several teams, each with their own namespaces, but no guardrails. Kubernetes, being generous by default, allowed the pod to consume as much as it wanted.

That’s when the DevOps lead stepped in and introduced two lifesaving Kubernetes policies: LimitRange and ResourceQuota.

LimitRange and ResourceQuota

What is LimitRange?

Think of it like a speed limit for each car on the highway.

With LimitRange, you can:
  • Set minimum and maximum CPU/memory per container.
  • Define default limits for when developers forget to set them.
  • Ensure every pod has guardrails.

Here's a LimitRange that gives the dev namespace sensible defaults and hard caps:
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-defaults
  namespace: dev
spec:
  limits:
  - default:            # applied as the limit when a container sets none
      memory: 512Mi
      cpu: 500m
    defaultRequest:     # applied as the request when a container sets none
      memory: 256Mi
      cpu: 250m
    max:                # no container may exceed these limits
      memory: 1Gi
      cpu: 1
    type: Container

Now, even if someone forgot to set limits, Kubernetes wouldn’t.
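
To see the cap in action, try submitting a container that asks for more than the max. Here's a hypothetical manifest (the pod name and image are placeholders); with the LimitRange above in place, the API server should refuse it with a Forbidden error rather than schedule it:

apiVersion: v1
kind: Pod
metadata:
  name: memory-hog      # hypothetical name, for illustration only
  namespace: dev
spec:
  containers:
  - name: app
    image: nginx        # any image works; the pod is rejected before it runs
    resources:
      limits:
        memory: 2Gi     # exceeds the 1Gi max in limit-defaults
        cpu: 500m

The rejection happens at admission time, so nothing ever lands on a node.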

What is ResourceQuota?

If LimitRange is a speed limit per car, ResourceQuota is the total number of lanes your team can use.

With ResourceQuota, you can:
  • Set a hard cap on total resources per namespace.
  • Prevent one team from overwhelming the cluster.
  • Enforce fair usage across all services.

Here's a quota that boxes in the dev team's total footprint:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-dev-quota
  namespace: dev
spec:
  hard:
    pods: "10"          # at most 10 pods in the namespace
    requests.cpu: "2"   # total CPU requests across all pods
    requests.memory: 2Gi
    limits.cpu: "4"     # total CPU limits across all pods
    limits.memory: 4Gi

Now the dev team couldn't run more than 10 pods or claim more than 4Gi of memory in limits, even if they tried. One catch: once a quota tracks compute resources, every new pod must declare requests and limits or it is rejected, which is exactly why pairing it with the LimitRange defaults above matters.
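
A quick back-of-the-envelope check, assuming both manifests above are applied to dev: the LimitRange gives every defaulted container requests of 256Mi/250m and limits of 512Mi/500m, and each of the quota's four compute dimensions divides into exactly eight such containers (for example, 4Gi of limits.memory / 512Mi = 8). So the quota runs out after 8 single-container pods, before the 10-pod cap is ever reached. A bare pod like this one (name and image are placeholders) needs no resources stanza at all:

apiVersion: v1
kind: Pod
metadata:
  name: defaulted-app   # hypothetical name, for illustration only
  namespace: dev
spec:
  containers:
  - name: app
    image: nginx
    # no resources block: the LimitRange injects the defaults
    # at admission, and the quota charges those injected values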

The Balance Restored

With LimitRange and ResourceQuota in place:

  • Pods had sane defaults and safe boundaries.
  • Teams could no longer starve each other of resources.
  • Cluster usage became predictable and stable.

Developers kept their freedom, but within guardrails. No one had to micromanage every pod; the policies did it for them.

Final Thoughts on Kubernetes Resource Management

If you’re running Kubernetes in production, especially a shared cluster, ignoring resource limits is a time bomb.

  • Use LimitRange to protect your pods.
  • Use ResourceQuota to protect your cluster.

Together, they form the resource safety net your team (and future self) will thank you for.
