santoshjpawar

Use S3 compatible Object Storage in OpenShift

We often come across situations where we need to feed custom files to applications deployed in OpenShift. These files are then consumed by the application for further operations.

Consider a scenario: an application allows users to create custom jar files using a provided SDK and feed them to the application to execute those customizations. These custom jars should be made available to the application through the Java CLASSPATH by copying them to the appropriate path.

OpenShift has two options to handle such scenarios.

Using Persistent Volume

  • Persistent Volumes (PVs) allow file storage to be shared between application pods and the outside world. Users can copy files to a PV to make them available to the pods (for example, configuration files), or pods can create files to make them accessible outside the OpenShift cluster (for example, log files). This sounds like a feasible approach for sharing the files described above. However, some organizations have restrictions on using or accessing PVs.
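As a sketch of the PV approach, a PVC could be created and mounted into the application's pods like this (the names, size, access mode, and mount path below are illustrative, not from any real deployment):

```yaml
# Hypothetical PVC; name, access mode, and size are illustrative.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: custom-files-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
# Matching fragment of the Deployment's pod template:
#   spec:
#     volumes:
#       - name: custom-files
#         persistentVolumeClaim:
#           claimName: custom-files-pvc
#     containers:
#       - name: app
#         volumeMounts:
#           - name: custom-files
#             mountPath: /work/custom-jars
```

Files copied into the PV then appear under the mount path inside every pod that mounts the claim.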

Using ConfigMap

  • ConfigMaps can be mounted as volumes to make the custom files available to the pods.
  • ConfigMaps have a 1 MiB size limit.
  • Using a ConfigMap to share custom files requires access to the OpenShift cluster to create the ConfigMap containing them.
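For the ConfigMap route, a sketch of the pod-template fragment looks like this (names and paths are illustrative; the ConfigMap itself could be created with `oc create configmap custom-jars --from-file=custom-settlement-extention.jar`):

```yaml
# Pod template fragment mounting the ConfigMap as a volume;
# names and the mount path are illustrative.
volumes:
  - name: custom-jars
    configMap:
      name: custom-jars
containers:
  - name: app
    volumeMounts:
      - name: custom-jars
        mountPath: /work/custom-jars
```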

Both approaches are tightly coupled to the OpenShift platform. They may work just fine for some users, but we might want a more generic approach, and that's what we will discuss here.

Using S3 Compatible Object Storage

Here we will see how to use S3 compatible storage to allow users to share the custom files with pods.

Setup S3 compatible storage

If you have an AWS account, you can use the AWS S3 service as object storage. If not, you can use any other S3 compatible storage. Here we will use MinIO (https://min.io/) object storage. You can check the MinIO documentation for all the supported installation scenarios. In this example, I am installing it on a CentOS 7 VM.

Installation

mkdir /opt/minio
cd /opt/minio
wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio

Start MinIO server

export MINIO_REGION_NAME="us-east-1"
export MINIO_ROOT_USER=admin
export MINIO_ROOT_PASSWORD=password
./minio server /mnt/data --console-address ":9001" &

If you don't set the MINIO_ROOT_USER and MINIO_ROOT_PASSWORD environment variables, MinIO uses the default credentials minioadmin:minioadmin.

It is important to set the region using MINIO_REGION_NAME, as we will need to specify the same region when running the AWS CLI commands later in the process.

The directory /mnt/data will be used as storage space. Make sure the directory you use as the data directory has enough space for your requirements.
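Backgrounding the server with & stops it when the shell exits. To keep MinIO running across logins and reboots, one option is a minimal systemd unit; the paths and credentials below match the example above, but the unit itself is a sketch, not taken verbatim from the MinIO docs:

```ini
# /etc/systemd/system/minio.service (sketch)
[Unit]
Description=MinIO object storage server
After=network-online.target

[Service]
Environment=MINIO_REGION_NAME=us-east-1
Environment=MINIO_ROOT_USER=admin
Environment=MINIO_ROOT_PASSWORD=password
ExecStart=/opt/minio/minio server /mnt/data --console-address ":9001"
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After creating the file, enable it with `systemctl daemon-reload && systemctl enable --now minio`.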

Create an API user to access the S3 storage

  • Open http://[minio-server]:9001 in the browser and log in using the credentials provided while starting the server.
  • Click on the Users link in LHN.
  • Click on the + Create User button.
  • Specify the Access Key and Secret Key values. You can use the sample values below:
Access Key: Q3AM3UQ867SPQDA43P2G
Secret Key: zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG
  • Check the readwrite checkbox in the policies list in the Assign Policies section and click on the Save button.

Create S3 bucket to keep the custom files

  • Click on the Buckets link in LHN.
  • Click on the Create Bucket + button.
  • Specify the bucket name, for example santosh-bucket-1. Keep the other values at their defaults and click on the Save button.
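If you prefer scripting these console steps, the MinIO client `mc` can create the same user and bucket from the command line. This is a sketch: it assumes `mc` is installed and the server is reachable, the alias name `myminio` is arbitrary, and the `mc admin policy set` form shown is the older syntax (newer mc releases use `mc admin policy attach`).

```shell
MINIO_URL="http://minio-server:9000"   # MinIO API endpoint (not the console port)
ACCESS_KEY="Q3AM3UQ867SPQDA43P2G"
SECRET_KEY="zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG"
BUCKET="santosh-bucket-1"

# Only attempt the calls if mc is actually installed; the commands mirror
# the console steps: register the server, create the user, assign the
# readwrite policy, and create the bucket.
if command -v mc >/dev/null 2>&1; then
  mc alias set myminio "$MINIO_URL" admin password
  mc admin user add myminio "$ACCESS_KEY" "$SECRET_KEY"
  mc admin policy set myminio readwrite user="$ACCESS_KEY"
  mc mb "myminio/$BUCKET"
fi
```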

Using S3 object storage from pods

  • To use the S3 object storage from the pods, you can install the generally available AWS CLI v2 by following the AWS documentation.
  • Configure the AWS CLI with the Access Key and Secret Key used while creating the API user in MinIO. You can use an OpenShift secret to hold the Access Key and Secret Key. Please check my other post https://dev.to/santoshjpawar/how-to-use-openshift-secret-securely-597c for using OpenShift secrets securely.
  • In the pods, use the AWS CLI S3 command below to copy the custom files from the S3 object storage to the container storage. In this example, the custom jar file custom-settlement-extention.jar is kept in the S3 bucket santosh-bucket-1. Note that the MinIO API port (9000) is different from the MinIO console port (9001):
aws --endpoint-url http://[minio-server]:9000 s3 cp s3://santosh-bucket-1/custom-settlement-extention.jar /work/custom-jars

The above command can be added to the container startup script to copy the custom jar file from the S3 object storage to the container storage.
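For the credential configuration, the AWS CLI also reads these standard environment variables, so in OpenShift the values can be injected into the pod from the secret rather than stored with `aws configure`; they are hard-coded below only as a sketch:

```shell
# Credentials of the MinIO API user created earlier; in a real pod these
# should come from an OpenShift secret, not be written into the script.
export AWS_ACCESS_KEY_ID="Q3AM3UQ867SPQDA43P2G"
export AWS_SECRET_ACCESS_KEY="zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG"
# Must match the MINIO_REGION_NAME the server was started with.
export AWS_DEFAULT_REGION="us-east-1"
```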

If multiple files need to be copied, you can list the files in the bucket first and then copy them:

aws --endpoint-url http://[minio-server]:9000 s3 ls santosh-bucket-1
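Putting the pieces together, a container entrypoint sketch could look like the following. The paths, bucket, and the application's jar and main class are hypothetical; the `aws` call is guarded and the final `java` launch is left commented out so the sketch stays runnable anywhere:

```shell
#!/bin/sh
S3_ENDPOINT="http://minio-server:9000"   # MinIO API endpoint (assumed)
S3_BUCKET="santosh-bucket-1"
JAR_DIR="/tmp/custom-jars"               # e.g. /work/custom-jars inside the pod

mkdir -p "$JAR_DIR"

# Pull every custom jar from the bucket before the application starts.
if command -v aws >/dev/null 2>&1; then
  aws --endpoint-url "$S3_ENDPOINT" s3 cp "s3://$S3_BUCKET/" "$JAR_DIR" --recursive
fi

# Put the downloaded jars on the classpath and start the application
# (hypothetical application jar and main class).
CLASSPATH="/app/app.jar:$JAR_DIR/*"
echo "Starting application with classpath: $CLASSPATH"
# exec java -cp "$CLASSPATH" com.example.Main
```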
