How to Deploy Prometheus on Kubernetes Using Helm: Part 2

Introduction

In the previous post, we explored setting up Prometheus and Grafana on a local Kubernetes cluster using the kube-prometheus stack Helm chart. Building on that foundation, this guide will take you through deploying the same kube-prometheus stack on a cloud-managed Kubernetes cluster. We’ll focus on configuring persistent storage to ensure no critical monitoring data is lost during pod restarts or rescheduling, and we’ll also cover how to modify the Grafana admin password for enhanced security. These steps are essential for maintaining a reliable and robust monitoring system across your cloud infrastructure.

Prerequisites

To effectively follow this guide, ensure you have:

  • A Kubernetes cluster: Any managed cloud Kubernetes cluster, set up and ready. This guide uses a DigitalOcean cluster.
  • kubectl: Configured to communicate with your Kubernetes cluster, enabling command-line management of Kubernetes resources.
  • Helm 3: Installed on your workstation, serving as your Kubernetes package manager to deploy the kube-prometheus stack.

Step 1: Preparing Your Kubernetes Cluster

Ensure your cluster is in an optimal state:
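
For example, the following standard checks confirm that your nodes are Ready and that kubectl can reach the API server:

kubectl get nodes
kubectl cluster-info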

These commands help verify that your Kubernetes nodes are ready and that kubectl is properly configured to interact with your cluster, establishing a foundational check before proceeding with deployments.

Step 2: Helm Chart Preparation

Adding the Prometheus Community Helm Repository

Integrate the Prometheus community’s repository into Helm to gain access to their curated charts:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

This step ensures you have the latest chart versions at your disposal, crucial for accessing up-to-date features and security patches.

Downloading the Default values.yaml

Download the default values file so you can customize it:

helm show values prometheus-community/kube-prometheus-stack > values.yaml

To download the entire kube-prometheus-stack Helm chart to your machine, fetch it with the command below:

helm fetch <repo-name>/<chart-name> --untar

helm fetch prometheus-community/kube-prometheus-stack --untar

This gives you a chart folder containing the chart and all of its dependent charts.

Step 3: Understand the Storage Requirements

Before setting up PVCs, it’s important to know how much storage each application typically needs:

  • Prometheus: This application needs quick and reliable storage because it constantly writes time-series data. The necessary storage size varies based on how long you keep data and how much data it collects.
  • Grafana: This application mainly uses storage for saving dashboards and user settings, which generally requires less space than Prometheus.
  • Alertmanager: This requires the least amount of storage, mainly for storing its configuration and managing alert silences.

What is a PV?

A Persistent Volume (PV) is a piece of storage within your Kubernetes cluster that an administrator can provision in advance or that can be provisioned automatically using Storage Classes. It’s a resource in the cluster, much like how a node is a resource. PVs have a lifecycle independent of any individual pod that uses them, which means they can persist even when the pods that use them are deleted. The PV contains details about the type of storage used, whether it’s NFS, iSCSI, or a specific cloud-provider storage solution. To learn more, see Persistent Volumes.

What is a PVC?

A Persistent Volume Claim (PVC) is essentially a storage request made by a user, similar to how a pod requests resources like CPU and memory from a node. PVCs are used to allocate storage from the pool of available PVs based on size and access requirements. For example, PVCs can specify how the storage should be accessed with modes like ReadWriteOnce (mounted read-write by a single node), ReadOnlyMany (readable by many nodes), or ReadWriteMany (writable by many nodes), adapting to different needs for storage access. To learn more, check Access Modes.
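
To make this concrete, here is a minimal standalone PVC manifest; the name example-pvc is illustrative, and do-block-storage is DigitalOcean's block storage class, so substitute values that match your own cluster:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc                    # illustrative name
spec:
  accessModes:
    - ReadWriteOnce                    # mountable read-write by a single node
  resources:
    requests:
      storage: 5Gi                     # requested capacity
  storageClassName: do-block-storage   # assumed class; list yours with kubectl get sc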

Step 4: Configure Persistent Volumes via Helm

You’ll need to edit the values.yaml file of the kube-prometheus-stack Helm chart to define the PVCs for Prometheus, Grafana, and Alertmanager.

Prometheus Storage Configuration

Locate or add the following section in the values.yaml file under the Prometheus configuration:
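
A typical configuration looks like the sketch below. It follows the chart's prometheus.prometheusSpec.storageSpec layout; the do-block-storage class and 10Gi size are assumptions to adapt to your cluster:

prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: do-block-storage  # assumed; check kubectl get sc -A
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 10Gi                   # size per your retention and ingest volume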

  • Storage Class Name: Make sure that the storageClassName matches one of the Storage Classes available in your cluster. You can find the available Storage Classes by running the following command: kubectl get sc -A
  • Access Modes: Set accessModes to ReadWriteOnce if the storage needs to be accessed by only one pod at a time.
  • Storage Size: Adjust the storage size according to how much data you expect to retain over time.

You can find the storageClassName values available in your cluster with the kubectl get sc -A command mentioned above.

Alertmanager Persistent Volume Configuration

For Alertmanager, add the following under the Alertmanager configuration section. Its storage needs are typically lower, so 2Gi should be sufficient:
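
This sketch mirrors the Prometheus configuration, using the chart's alertmanager.alertmanagerSpec.storage key; again, the storage class is an assumption:

alertmanager:
  alertmanagerSpec:
    storage:
      volumeClaimTemplate:
        spec:
          storageClassName: do-block-storage  # assumed; use your cluster's class
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 2Gi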

Grafana Persistent Volume Configuration

For Grafana, add the following under the Grafana configuration section:
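
A minimal sketch using the Grafana subchart's persistence block; the class and size here are assumptions:

grafana:
  persistence:
    enabled: true                        # must be true for persistent storage
    storageClassName: do-block-storage   # assumed; use your cluster's class
    accessModes:
      - ReadWriteOnce
    size: 5Gi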

Ensure persistence.enabled is set to true to enable persistent storage. Like Prometheus, configure the storageClassName and size according to your cluster setup and needs.

Step 5: Modifying Default Grafana Admin Password

Enhance security by changing the Grafana admin password:
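
In the same values.yaml, the chart exposes a grafana.adminPassword value; the password below is a placeholder to replace with your own:

grafana:
  adminPassword: "MyStr0ngP@ssw0rd"   # placeholder; set a strong password of your own

For production use, consider storing the password in a Kubernetes Secret instead (the Grafana chart supports referencing an existing secret) so it never lands in version control.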

This step is crucial to protect your Grafana dashboard from unauthorized access, a common vulnerability in default configurations.

You can find the modified values.yaml here: https://github.com/rjshk013/cldop-blog/blob/main/prod-values.yaml

Step 6: Deploy or Update the Helm Chart

Apply the changes to your cluster to set up or update the persistent storage configurations:

helm upgrade --install kube-prom-stack prometheus-community/kube-prometheus-stack -f values.yaml --namespace monitoring --create-namespace

Step 7: Verify the Deployment

After waiting approximately 2–5 minutes, verify the installation:

kubectl get po -n monitoring

and you should see all of the stack’s pods in the Running state.

Check the status of the deployed Persistent Volume Claims to ensure they are correctly bound and functional:
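
kubectl get pvc -n monitoring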

This command lists all PVCs in the monitoring namespace, verifying their successful deployment and status.

Also check the PV status to confirm that each volume is Bound:
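
kubectl get pv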

Now that kube-prometheus-stack is installed, the next steps involve exposing its services and accessing them from localhost.

Exposing the Prometheus UI

After installing Kube-Prometheus, various services are set up in your cluster. You can view these by running:

kubectl get svc -n monitoring

and the result should list the services created by the stack.

To access the Prometheus UI, we need to expose the kube-prom-stack-kube-prome-prometheus service, since it is a ClusterIP-type service and not directly reachable by URL. It can be exposed via a NodePort, a LoadBalancer, or port forwarding; for this tutorial we use the port-forward method:

kubectl port-forward svc/kube-prom-stack-kube-prome-prometheus -n monitoring 9090 &

Now, visit http://localhost:9090 to access the Prometheus UI.

Within the UI, you’ll find:

  • Graph Section: Allows running PromQL queries and interacting with stored data.
  • Alerts Section: Shows all system-detected alerts. Navigate to the Alerts tab to explore further.

Exposing the Grafana UI

The Grafana Kubernetes service is also of type ClusterIP, so it is only accessible from within the Kubernetes cluster. To reach it from outside the cluster, expose the service using the port-forward method:

kubectl port-forward svc/kube-prom-stack-grafana -n monitoring 3000:80 &

The service will now be reachable at http://localhost:3000.

Now, log in to the Grafana dashboard using the default username ‘admin’ and the password configured in Step 5.


To learn more about Grafana dashboards, please check our previous blog.

Exposing the Alertmanager UI

Finally, we can expose the Alertmanager service as we did for Prometheus:

kubectl port-forward svc/kube-prom-stack-kube-prome-alertmanager -n monitoring 9093 &

Now, visit http://localhost:9093 to access the Alertmanager UI.

We’ve now covered how to set up a monitoring stack with the kube-prometheus-stack Helm chart on any managed Kubernetes cloud cluster and how to configure persistent volumes. In upcoming posts, we’ll dive into configuring custom alerts in Prometheus, setting up notifications with Alertmanager, and exploring Grafana dashboards.

Conclusion

Setting up persistent storage is crucial for a reliable and scalable monitoring system in Kubernetes. This guide has shown you how to protect your monitoring data to keep your system stable and performing well. If you need a refresher on the basics, check out our previous blog on setting up Prometheus and Grafana on a local Kubernetes cluster. With these steps, you’re ready to handle your Kubernetes monitoring needs with confidence.
