DEV Community

Sindhuja N.S
Configuring OpenShift Cluster Services to Use OpenShift Data Foundation

As enterprises adopt container platforms like Red Hat OpenShift, ensuring that critical services have reliable, scalable, and persistent storage becomes essential. This is where OpenShift Data Foundation (ODF) comes into play — providing integrated, Kubernetes-native storage for OpenShift clusters.

In this blog, we’ll walk through how key OpenShift services like the internal image registry, monitoring, and logging can be connected to ODF, working mainly from the web console rather than hand-editing long configuration files.

💡 What Is OpenShift Data Foundation?
ODF is a software-defined storage solution integrated directly into OpenShift. It supports:

Block, file, and object storage

High availability and data protection

Seamless scaling and dynamic provisioning

It makes managing storage in cloud-native environments simple, resilient, and highly available.

🔍 Why Configure Cluster Services to Use ODF?
OpenShift includes built-in services that benefit significantly from persistent storage:

Image Registry – Stores container images

Monitoring Stack – Collects metrics and alerts

Logging – Captures system and application logs

By default, some of these services use ephemeral (temporary) storage, which can be lost during reboots or failures. Configuring them with ODF ensures that your data remains safe, durable, and recoverable.

⚙️ What’s the Configuration Process Like?
Little to no command-line work is required. The configuration can be done from the OpenShift web console by following these high-level steps:

🔹 1. Verify ODF is Installed
Go to Operators > Installed Operators in the OpenShift console

Look for OpenShift Data Foundation

Ensure it shows as “Succeeded” or “Healthy”

🔹 2. Storage Classes Overview
Navigate to Storage > StorageClasses

Identify the ones created by ODF (such as ocs-storagecluster-ceph-rbd for block, ocs-storagecluster-cephfs for file, and openshift-storage.noobaa.io for object storage)

These classes allow OpenShift services to dynamically request and attach storage
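Although this post stays console-focused, it may help to see what a dynamic storage request against one of these classes looks like. The claim below is a hypothetical example (the name, namespace, and size are illustrative):

```yaml
# Hypothetical PVC requesting block storage from the
# ODF-provided RBD StorageClass; ODF provisions the
# matching PersistentVolume automatically on creation.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-odf-claim
  namespace: demo-project
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ocs-storagecluster-ceph-rbd
  resources:
    requests:
      storage: 10Gi
```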

🔹 3. Configure the Internal Image Registry
Go to Administration > Cluster Settings > Configuration > Image Registry

Set the storage to “Persistent”

Choose an appropriate ODF StorageClass (a registry running multiple replicas needs ReadWriteMany access, so a CephFS-backed class such as ocs-storagecluster-cephfs is typically recommended)

Define the storage size (e.g., 100Gi)
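For reference, the console edit above is reflected in the cluster-scoped image registry Config resource. A rough sketch of the resulting spec, assuming the operator’s default PVC name:

```yaml
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  managementState: Managed
  storage:
    pvc:
      # The operator creates a PVC with this name in the
      # openshift-image-registry namespace if it does not exist.
      claim: image-registry-storage
```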

🔹 4. Enable Monitoring Persistence
Create or edit the cluster-monitoring-config ConfigMap in the openshift-monitoring project (Workloads > ConfigMaps in the console)

Add volume claim templates to enable persistent storage for Prometheus and Alertmanager

Select the ODF StorageClass and set your desired size (e.g., 50Gi for Prometheus)
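These settings ultimately live in the cluster-monitoring-config ConfigMap in the openshift-monitoring project. A minimal sketch using the Prometheus size mentioned above (the Alertmanager size is illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    # Persist Prometheus metrics on ODF block storage.
    prometheusK8s:
      volumeClaimTemplate:
        spec:
          storageClassName: ocs-storagecluster-ceph-rbd
          resources:
            requests:
              storage: 50Gi
    # Persist Alertmanager state as well.
    alertmanagerMain:
      volumeClaimTemplate:
        spec:
          storageClassName: ocs-storagecluster-ceph-rbd
          resources:
            requests:
              storage: 20Gi
```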

🔹 5. (Optional) Set Up Logging with ODF
If using OpenShift Logging (EFK or Loki):

Go to Operators > OpenShift Logging

Set up or edit the logging stack

Assign persistent storage using the same method — select ODF-backed storage
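As an example, on the Elasticsearch-based (EFK) stack the log store’s storage is set in the ClusterLogging resource; a trimmed sketch with an illustrative node count and size:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      storage:
        # Back the Elasticsearch data nodes with ODF block storage.
        storageClassName: ocs-storagecluster-ceph-rbd
        size: 200G
```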

✅ How to Confirm Everything’s Working
You can check storage usage visually:

Go to Storage > PersistentVolumeClaims

See if the key services (registry, monitoring, logging) are listed

Confirm the claims show a “Bound” status and that the pods consuming them are “Running”

🔐 Benefits of Using ODF for Cluster Services
No data loss during pod restarts or node failures

Centralized management of storage within OpenShift

High performance and scalability

Resilient logging and monitoring for better observability

Ready for hybrid and multi-cloud environments

📘 Conclusion
Configuring OpenShift services like the internal registry, monitoring, and logging to use OpenShift Data Foundation doesn’t require coding knowledge. Through the intuitive OpenShift web console, you can assign persistent, highly available storage to critical services — ensuring your platform remains robust and production-ready.

This low-code approach is ideal for system administrators, DevOps engineers, or platform teams looking to streamline storage management in OpenShift without getting into YAML or CLI commands.

For more info, kindly follow: Hawkstack Technologies
