PerconaDB Pods are Restarting Frequently due to OOMKilled Errors
Problem
Frequent PerconaDB pod restarts are observed due to OOMKilled errors. The `kubectl describe` output for the **percona-db-pxc-db-pxc**
pod shows:
Containers:
  ...
  pxc:
    Image:          percona/percona-xtradb-cluster:8.0.39-30.1
    Ports:          3306/TCP, 4444/TCP, 4567/TCP, 4568/TCP, 33062/TCP, 33060/TCP
    Host Ports:     0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
    Command:
      /var/lib/mysql/pxc-entrypoint.sh
    Args:
      mysqld
    State:          Running
      Started:      Tue, 03 Jun 2025 10:58:46 +0000
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137
      Started:      Tue, 03 Jun 2025 06:54:25 +0000
      Finished:     Tue, 03 Jun 2025 10:58:43 +0000
    Ready:          True
    Restart Count:  73
    Limits:
      cpu:     2
      memory:  6Gi
    Requests:
      cpu:     400m
      memory:  3Gi
Environment
- Self-Hosted Private Cloud Director Virtualization - v2025.4 and Higher
- Self-Hosted Private Cloud Director Kubernetes - v2025.4 and Higher
Cause
The memory limit currently set on the **percona-db-pxc-db-pxc-<0-2>**
pods is insufficient for their workload.
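As an optional check, the pods' current memory consumption can be compared against the 6Gi limit. The command below is a minimal sketch and assumes metrics-server is available in the cluster:
$ kubectl top pods -n <REGION_NAMESPACE> | grep percona-db-pxc-db-pxc
Usage close to the configured limit, together with the OOMKilled terminations shown above, indicates the limit is too low for the workload.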
Diagnostics
Commands used to identify the issue:
$ kubectl describe pod -n <REGION_NAMESPACE> percona-db-pxc-db-pxc-<0-2>
$ kubectl events -n <REGION_NAMESPACE>
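To confirm that the restarts are caused by the OOM killer without reading the full describe output, the last terminated state of the pxc container can also be queried directly. This is a sketch; adjust the pod ordinal as appropriate:
$ kubectl get pod percona-db-pxc-db-pxc-0 -n <REGION_NAMESPACE> -o jsonpath='{.status.containerStatuses[?(@.name=="pxc")].lastState.terminated.reason}'
The command prints OOMKilled for pods that were terminated by the OOM killer.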
Resolution
Increase the memory limit of the pods by editing the **percona-db-pxc-db-pxc** StatefulSet:
$ kubectl edit statefulset percona-db-pxc-db-pxc -n <REGION_NAMESPACE>
...
    resources:
      limits:
        cpu: "2"
        memory: 8Gi   # Default value: 6Gi
...
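Alternatively, the same change can be applied non-interactively with a strategic merge patch. This is a sketch that assumes the container is named pxc (as in the describe output above) and that 8Gi is the desired new limit:
$ kubectl patch statefulset percona-db-pxc-db-pxc -n <REGION_NAMESPACE> --type strategic -p '{"spec":{"template":{"spec":{"containers":[{"name":"pxc","resources":{"limits":{"memory":"8Gi"}}}]}}}}'
A strategic merge patch matches list entries by name, so only the pxc container's memory limit is changed while the rest of the pod template is preserved.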
Validation
Make sure the pods are running as expected and that the new memory limit is reflected in the pod spec:
$ kubectl describe pod -n <REGION_NAMESPACE> percona-db-pxc-db-pxc-<0-2>
$ kubectl get pods -n <REGION_NAMESPACE>
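The updated limit can also be read directly from the pod spec; this is a sketch, with the pod ordinal adjusted as needed:
$ kubectl get pod percona-db-pxc-db-pxc-0 -n <REGION_NAMESPACE> -o jsonpath='{.spec.containers[?(@.name=="pxc")].resources.limits.memory}'
The command should print the new value (8Gi in the example above), and the restart count reported by kubectl get pods should stop increasing once the pods are no longer being OOMKilled.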
Additional Information
If the PerconaDB pods are down, other pods that depend on the database, such as pf9-nginx and keystone, might also be impacted.
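Once the PerconaDB pods are healthy again, the status of those dependent pods can be checked as well; the grep pattern below simply matches the example pod names mentioned above:
$ kubectl get pods -n <REGION_NAMESPACE> | grep -E 'pf9-nginx|keystone'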