Pf9-kube-prometheus helm chart upgrade
Upgrading the kube-prometheus-stack chart version from v52.1.0 / v52.1.1 (monitoring version v0.68.0) to v62.7.1 (monitoring version v0.76.1) requires the Prometheus monitoring add-on to be disabled first.
Note: Please make sure that the Prometheus monitoring add-on is not enabled on the cluster and that in-cluster monitoring is being configured via the pf9-kube-prometheus helm chart.
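One way to confirm that in-cluster monitoring is managed by the helm chart rather than the add-on is to check for the chart release and its workloads. A minimal sketch, assuming the pf9-monitoring namespace used in the examples below:
# The chart release (under whatever name it was installed) should be listed here if monitoring is chart-managed.
helm list -n pf9-monitoring
# Review the monitoring workloads before making any changes.
kubectl get deploy,sts,ds -n pf9-monitoring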
1. Delete the prometheus-node-exporter daemonset.
root@test-pf9-qbert-bare-os-u20-3375226-630-2:~# k get ds -n pf9-monitoring
NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
proms-prometheus-node-exporter   2         2         2       2            2           kubernetes.io/os=linux   15m
root@test-pf9-qbert-bare-os-u20-3375226-630-2:~# k delete ds proms-prometheus-node-exporter -n pf9-monitoring
daemonset.apps "proms-prometheus-node-exporter" deleted
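Before moving on, you can re-run the get command to confirm the daemonset has been removed (assuming the same pf9-monitoring namespace):
kubectl get ds -n pf9-monitoring
# proms-prometheus-node-exporter should no longer appear in the output.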
2. Run the following commands to update the CRDs before applying the upgrade.
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.76.0/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagerconfigs.yaml --force-conflicts
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.76.0/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagers.yaml --force-conflicts
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.76.0/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml --force-conflicts
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.76.0/example/prometheus-operator-crd/monitoring.coreos.com_probes.yaml --force-conflicts
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.76.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheusagents.yaml --force-conflicts
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.76.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml --force-conflicts
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.76.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheusrules.yaml --force-conflicts
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.76.0/example/prometheus-operator-crd/monitoring.coreos.com_scrapeconfigs.yaml --force-conflicts
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.76.0/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml --force-conflicts
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.76.0/example/prometheus-operator-crd/monitoring.coreos.com_thanosrulers.yaml --force-conflicts
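After the apply commands complete, you can verify that the monitoring CRDs are present and updated. A quick check (the operator.prometheus.io/version annotation is assumed to be set by the upstream CRD manifests and may vary by release):
# All monitoring.coreos.com CRDs should be listed.
kubectl get crds | grep monitoring.coreos.com
# Optionally confirm the version annotation on one of the CRDs.
kubectl get crd prometheuses.monitoring.coreos.com -o yaml | grep operator.prometheus.io/version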
3. Add the additional scrape configs to the values.yaml file before upgrading the chart.
Make sure the additionalScrapeConfigs key in the values.yaml file being used is updated with the following values.
additionalScrapeConfigs:
  - job_name: kubelet
    scrape_interval: 5m
    honor_labels: false
    scheme: https
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    kubernetes_sd_configs:
      - role: node
    relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - source_labels:
          - __metrics_path__
        target_label: metrics_path
        replacement: /metrics
  - job_name: cadvisor
    scrape_interval: 2m
    honor_labels: false
    metrics_path: /metrics/cadvisor
    scheme: https
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    kubernetes_sd_configs:
      - role: node
    relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - source_labels:
          - __metrics_path__
        target_label: metrics_path
        replacement: /metrics/cadvisor
  - job_name: apiserver
    scrape_interval: 5m
    honor_labels: false
    scheme: https
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      insecure_skip_verify: true
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    kubernetes_sd_configs:
      - role: endpoints
    relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
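Optionally, before running the actual upgrade, you can do a dry run against the updated values to catch YAML or templating errors early (a sketch; <chart-name> and the pf9-plus repo alias follow the upgrade command in the next step):
helm upgrade <chart-name> pf9-plus/kube-prometheus-stack -f values.yaml --dry-run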
4. Upgrade the helm chart.
helm upgrade <chart-name> pf9-plus/kube-prometheus-stack -f values.yaml
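After the upgrade completes, verify that the release and its workloads are healthy. A minimal sketch, assuming the pf9-monitoring namespace used earlier:
# The CHART column should now show kube-prometheus-stack-62.7.1.
helm list -n pf9-monitoring
# All monitoring pods, including the re-created node-exporter daemonset, should reach Running.
kubectl get pods -n pf9-monitoring
kubectl get ds -n pf9-monitoring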