Modern applications need reliable and scalable storage for everything from logs and backups to images and videos. OpenShift Data Foundation (ODF) makes this possible inside Red Hat OpenShift by providing block, file, and object storage that integrates seamlessly with your workloads.
This article explains, in simple terms, how to configure applications to use ODF object storage, which is built on Ceph and offers an S3-compatible interface for storing unstructured data.
🔹 Why Object Storage?
Object storage is different from traditional file or block storage. It’s designed to handle large volumes of unstructured data, making it ideal for:
Applications that store files like images, videos, or logs.
Cloud-native workloads that need flexible, scalable storage.
Teams that want to use the familiar Amazon S3 API, but on OpenShift.
Scenarios where resilience, replication, and portability are important.
🔹 What You Need Before You Start
To configure applications with ODF object storage, you’ll need:
An OpenShift cluster with ODF installed.
The ODF object storage service already running (provided by the Multicloud Object Gateway, and optionally Ceph RGW).
Access to your cluster via OpenShift’s web console or CLI.
🔹 How Applications Connect to ODF Object Storage
1. Create a storage bucket
Think of this like creating a folder in the cloud where your application can save and retrieve files.
In ODF, this is done using something called an Object Bucket Claim (OBC), which automatically creates the bucket, along with access credentials.
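As a minimal sketch, an OBC is a small Kubernetes manifest. The claim name and namespace below are hypothetical, and the storage class varies by cluster: openshift-storage.noobaa.io is the usual Multicloud Object Gateway class, while RGW deployments typically expose ocs-storagecluster-ceph-rgw.

```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: my-app-bucket        # hypothetical claim name
  namespace: my-app          # hypothetical application namespace
spec:
  # Ask ODF to generate a unique bucket name with this prefix.
  generateBucketName: my-app-bucket
  # Object storage class backing the bucket; verify what your
  # cluster offers with: oc get storageclass
  storageClassName: openshift-storage.noobaa.io
```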
2. Credentials are provided automatically
When the bucket is created, OpenShift generates the details your application will need:
Access keys (like a username and password).
The endpoint address (like a web link where the bucket can be reached).
The bucket name.
These are stored securely in OpenShift: the access keys land in a Secret, and the endpoint and bucket name in a ConfigMap, both named after the claim.
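For a claim named my-app-bucket, ODF creates a Secret and a ConfigMap with that same name in the claim's namespace. Roughly, they carry these keys (values abbreviated here; Secret values are base64-encoded in the real object):

```yaml
# Secret/my-app-bucket
data:
  AWS_ACCESS_KEY_ID: ...
  AWS_SECRET_ACCESS_KEY: ...
---
# ConfigMap/my-app-bucket
data:
  BUCKET_HOST: ...   # in-cluster S3 endpoint hostname
  BUCKET_PORT: ...   # usually 443
  BUCKET_NAME: ...   # the generated bucket name
```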
3. Expose credentials to the application
Applications need these details to connect to the bucket.
This is usually done by passing the access information as environment variables or configuration settings so the application knows where to send and retrieve data.
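A common pattern, sketched below as a hypothetical Deployment fragment, is to inject both generated objects with envFrom so the variables appear directly in the container's environment:

```yaml
# Fragment of a Deployment's pod template (names are hypothetical)
spec:
  containers:
    - name: app
      image: quay.io/example/my-app:latest   # hypothetical image
      envFrom:
        # ConfigMap created by the OBC: BUCKET_HOST, BUCKET_PORT, BUCKET_NAME
        - configMapRef:
            name: my-app-bucket
        # Secret created by the OBC: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY
        - secretRef:
            name: my-app-bucket
```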
4. The application stores and retrieves data
With the credentials in place, the application can now upload, download, and manage data in the bucket using an S3-compatible client library.
This works the same way as connecting to Amazon S3 but stays fully within your OpenShift cluster.
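Here is a minimal Python sketch using boto3 (any S3-compatible SDK works the same way), assuming the environment variables injected by the Deployment fragment above:

```python
import os

import boto3

# BUCKET_* and AWS_* variables come from the OBC's ConfigMap and Secret.
endpoint = f"https://{os.environ['BUCKET_HOST']}:{os.environ['BUCKET_PORT']}"

s3 = boto3.client(
    "s3",
    endpoint_url=endpoint,
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
    # In-cluster endpoints are often signed by the cluster's service CA;
    # pass verify="<path to CA bundle>" if the default trust store rejects it.
)

bucket = os.environ["BUCKET_NAME"]
s3.put_object(Bucket=bucket, Key="hello.txt", Body=b"hello from ODF")
print(s3.get_object(Bucket=bucket, Key="hello.txt")["Body"].read())
```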
🔹 How to Verify It’s Working
Check that the bucket has been created in the OpenShift console.
Confirm that a Secret and ConfigMap with the connection details exist for the application.
Validate that your application can write data (e.g., upload a file) and read it back.
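From the CLI, a quick check (names assume the my-app-bucket claim from earlier) might look like:

```bash
oc get objectbucketclaim my-app-bucket -n my-app   # PHASE should be Bound
oc get secret,configmap my-app-bucket -n my-app    # generated credentials and endpoint
```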
🔹 Best Practices
One bucket per application → keeps data secure and organized.
Rotate access keys → for security, update them regularly.
Set quotas → prevent applications from using unlimited storage (see the sketch after this list).
Use TLS encryption → protect data in transit.
Monitor usage → track storage consumption and performance.
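Quota support depends on the backing store. For RGW-backed buckets, Rook's OBC options let you request limits on the claim itself via additionalConfig; the fragment below is a sketch, and the exact options may vary by ODF version (the Multicloud Object Gateway manages quotas differently):

```yaml
# Hypothetical OBC spec fragment for an RGW-backed storage class.
spec:
  generateBucketName: my-app-bucket
  storageClassName: ocs-storagecluster-ceph-rgw
  additionalConfig:
    maxObjects: "10000"   # cap on number of objects
    maxSize: "5Gi"        # cap on total bucket size
```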
🔹 Conclusion
By using OpenShift Data Foundation Object Storage, applications gain a scalable, cloud-native, S3-compatible storage solution directly within OpenShift. Developers can focus on building features, while operators ensure data security, reliability, and governance.
This makes ODF object storage an excellent choice for enterprises running data-intensive workloads in OpenShift.
For more info, follow Hawkstack.