Deploy HarperDB on ROSA (Red Hat OpenShift Service on AWS)

In this post, we will deploy HarperDB on ROSA (Red Hat OpenShift Service on AWS). Let's begin with the steps.

Before you begin, ensure you have installed and configured the AWS CLI; see the AWS CLI documentation for instructions.
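As a quick sanity check, you can confirm the CLI is configured and pointing at the account you intend to use:

$ aws configure list
$ aws sts get-caller-identity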

Enable ROSA

ROSA should be enabled on the AWS console as shown in the screenshot below.
[Screenshot: Enable ROSA]

And then, click on Continue to Red Hat at the bottom of the page.
[Screenshot: Continue to Red Hat]

This should take you to a page where you can view and accept the terms and conditions, with a Red Hat login.

Download the binaries

Go to the OpenShift downloads page and download the rosa CLI.
[Screenshot: Download rosa CLI]

Extract the contents of the archive, then remove the archive.

$ cd Downloads 
$ tar xvf rosa-macosx.tar.gz
$ rm rosa-macosx.tar.gz

Move the extracted binary to one of the directories in PATH. I am using /usr/local/bin.

$ sudo mv rosa /usr/local/bin/.

Check if it's installed properly.

$ rosa version
1.2.22
I: Your ROSA CLI is up to date.

Similarly, download the oc CLI.
[Screenshot: Download oc CLI]

$ cd ~/Downloads

$ tar xvf openshift-client-mac.tar.gz 
x README.md
x oc
x kubectl

I already have kubectl on my system, so I will only move the oc binary.

$ rm README.md 
$ rm kubectl 
$ rm openshift-client-mac.tar.gz 
$ sudo mv oc /usr/local/bin/.

If you're on a Mac, you may need to go to the Security settings and allow oc to run.
[Screenshot: Allow oc CLI on Mac]

Check the version to see if it's installed properly.

$ oc version
Client Version: 4.13.1
Kustomize Version: v4.5.7
Kubernetes Version: v1.25.2

API token

Go to the URL https://console.redhat.com/openshift/token/aws and load the token.
[Screenshot: Load API token for rosa]

Copy the token from the next step on that page and set it as a shell variable. Note that you paste the token on the line after the read command (the -s flag keeps it hidden).

$ read -s ROSA_TOKEN
$

Log in with this token.

$ rosa login --token $ROSA_TOKEN

Note that just setting the ROSA_TOKEN variable didn't work for me (likely because read creates an unexported shell variable, which the rosa process can't see), hence I had to log in with the --token option.

Validate

Verify permissions and quota in AWS.

$ rosa verify permissions
I: Verifying permissions for non-STS clusters
I: Validating SCP policies...
I: AWS SCP policies ok

$ rosa verify quota
I: Validating AWS quota...
E: Insufficient AWS quotas
E: Service quota is insufficient for the following service quota codes:
- Service ec2 quota code L-1216C47A Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances not valid, expected quota of at least 100, but got 64

The quota is insufficient, so we can request an increase. Go to Service Quotas > AWS services > EC2 and select Running On-Demand Standard instances as shown below.
[Screenshot: Search for EC2 quotas]

On the next page, change the quota to 100 and request an increase.
[Screenshot: Increase EC2 quota]

A support case should be automatically created, and it might take up to 30 minutes for the new quota to reflect as per the message below.
[Screenshot: Confirmation message for EC2 quota increase]
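If you prefer the CLI, the same increase can be requested through the Service Quotas API, reusing the quota code from the error above:

$ aws service-quotas request-service-quota-increase \
    --service-code ec2 \
    --quota-code L-1216C47A \
    --desired-value 100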

Once the quota is granted, rosa verify quota should be successful.

$ rosa verify quota
I: Validating AWS quota...
I: AWS quota ok. If cluster installation fails, validate actual AWS resource usage against https://docs.openshift.com/rosa/rosa_getting_started/rosa-required-aws-service-quotas.html

Provision the cluster

You can run rosa init first to validate your AWS account setup; then create the cluster.

$ rosa create cluster --cluster-name='hdb-rosa-clstr'
E: Failed to create cluster: The maximum number of VPCs has been reached

I first got an error that the maximum number of VPCs had been reached (there were already 5 in the region), so I deleted the unwanted VPCs and ran the command again.

$ rosa create cluster --cluster-name='hdb-rosa-clstr'
Details Page:               https://console.redhat.com/openshift/details/s/2R2oUbagDXdMV1pWbSftXIzclea

Also, note that the cluster name must not contain more than 15 characters.
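Cluster provisioning takes a while (typically around 30 to 40 minutes). You can watch the installation logs or check the status while you wait:

$ rosa logs install -c hdb-rosa-clstr --watch
$ rosa describe cluster -c hdb-rosa-clstr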

Once the installation completes, our cluster is ready and we can add the admin user.

$ rosa create admin -c hdb-rosa-clstr

Copy and run the oc login command from the output.

$ oc login https://api.hdb-rosa-clstr.0h87.p1.openshiftapps.com:6443 --username cluster-admin --password Uzrkx-IJE3a-xky3s-SodfM
Login successful.

You have access to 103 projects, the list has been suppressed. You can list all projects with 'oc projects'

Using project "default".
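You can confirm the identity you're logged in with:

$ oc whoami
cluster-admin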

You can see the list of nodes with oc.

$ oc get nodes
NAME                                          STATUS   ROLES                  AGE     VERSION
ip-10-0-135-59.ap-south-1.compute.internal    Ready    worker                 7h22m   v1.26.3+b404935
ip-10-0-143-8.ap-south-1.compute.internal     Ready    infra,worker           7h9m    v1.26.3+b404935
ip-10-0-164-209.ap-south-1.compute.internal   Ready    infra,worker           7h9m    v1.26.3+b404935
ip-10-0-213-140.ap-south-1.compute.internal   Ready    worker                 7h25m   v1.26.3+b404935
ip-10-0-221-178.ap-south-1.compute.internal   Ready    control-plane,master   7h32m   v1.26.3+b404935
ip-10-0-228-159.ap-south-1.compute.internal   Ready    control-plane,master   7h32m   v1.26.3+b404935
ip-10-0-251-121.ap-south-1.compute.internal   Ready    control-plane,master   7h32m   v1.26.3+b404935

You can check this on the EC2 console too, which shows the instance names.
[Screenshot: List of EC2 ROSA instances]

Deploy HarperDB

We will first add a new project.

$ oc new-project harperdb

We can use Kubernetes-native manifests to deploy with oc, so let's clone the manifests repo.

$ git clone https://github.com/HarperDB-Add-Ons/harperdb-deployments.git

In OpenShift, the user set in the Dockerfile is not honored by default; the container runs with a project-specific UID from a much higher range, something like 1000610000. The HarperDB image, meanwhile, uses the non-root user harperdb with UID 1000 in Docker.

$ docker run -it harperdb/harperdb bash
harperdb@6b19c2abbebc:~$ id
uid=1000(harperdb) gid=1000(harperdb) groups=1000(harperdb)

So we can let the container in OpenShift also run with the same UID as set in the Dockerfile.

$ oc adm policy add-scc-to-group anyuid system:authenticated
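Note that this grants anyuid to every authenticated user cluster-wide, which is quite permissive. A tighter alternative (not what I used here, but worth considering) is to grant the SCC only to the service account the deployment runs as, for example the project's default service account:

$ oc adm policy add-scc-to-user anyuid -z default -n harperdb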

We can now apply the Kubernetes manifests with oc.

$ cd harperdb-deployments/kubernetes-manifests
$ oc apply -f .

The pod should be running.


$ oc get po
NAME                        READY   STATUS    RESTARTS   AGE
harperdb-5597447d8b-fqzg7   1/1     Running   0          2m58s
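The manifests also create a LoadBalancer-type Service named harperdb, which we use in the next step; you can see the provisioned hostname with:

$ oc get svc harperdb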

Retrieve the HarperDB API endpoint.

$ HDB_SVC_HOSTNAME=$(oc get svc harperdb -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

$ HDB_SVC_PORT=$(oc get svc harperdb -o jsonpath='{.spec.ports[0].port}')

$ HDB_API_ENDPOINT=$HDB_SVC_HOSTNAME:$HDB_SVC_PORT
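A quick check that both values resolved (the hostname will differ in your cluster):

$ echo $HDB_API_ENDPOINT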

We can now test schema creation with curl.

$ curl --location --request POST $HDB_API_ENDPOINT \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic YWRtaW46cGFzc3dvcmQxMjM0NQ==' \
--data-raw '{
    "operation": "create_schema",
    "schema": "hdb_rosa_schema" 
}'

{"message":"schema 'hdb_rosa_schema' successfully created"}
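The Authorization header is just the Base64 encoding of the user:password pair configured for HarperDB in the manifests; the one above decodes to admin:password12345. You can generate it yourself with:

$ echo -n 'admin:password12345' | base64
YWRtaW46cGFzc3dvcmQxMjM0NQ==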

Persistence

Let's test persistence: we'll delete the pod, and when the new pod comes up, we'll see if the schema we created still exists.

$ oc delete po --all
pod "harperdb-5597447d8b-fqzg7" deleted

$ oc get po
NAME                        READY   STATUS    RESTARTS   AGE
harperdb-5597447d8b-cc4jt   1/1     Running   0          54s

A new pod is now running; we can check the list of schemas with curl.

$ curl --location $HDB_API_ENDPOINT \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic YWRtaW46cGFzc3dvcmQxMjM0NQ==' \
--data '{
    "operation": "describe_all"
}'
{"hdb_rosa_schema":{}}
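The data survives the pod restart because the manifests mount persistent storage for the HarperDB data directory. Assuming your copy of the repo defines a PersistentVolumeClaim for this, you can confirm it is bound with:

$ oc get pvc -n harperdb   # claim names will vary by repo version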

OpenShift Image

So far we have tested the HarperDB deployment on OpenShift with the harperdb/harperdb image from Docker Hub. We can now try the harperdb/harperdb-openshift image; its Docker Hub page has more details about it.

Just change the image section of the deployment manifest and apply it again with oc.
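If you'd rather script the edit, a one-liner like this should work (the pattern assumes the image line in deploy.yaml looks like the one shown below; adjust it if your copy differs):

$ sed -i.bak 's|image: harperdb/harperdb.*|image: harperdb/harperdb-openshift:4.1.0|' deploy.yaml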

$ cat deploy.yaml | grep image:     
        image: harperdb/harperdb-openshift:4.1.0

$ oc apply -f .

The new harperdb pod with the OpenShift image should be running.

$ oc get pods   
NAME                        READY   STATUS    RESTARTS      AGE
harperdb-757579fb58-l46sc   1/1     Running   2 (39s ago)   88s

This image has a slightly different user than the previous image.

$ oc rsh harperdb-757579fb58-l46sc     
sh-5.1$ id
uid=1001(default) gid=0(root) groups=0(root),1000

The schema should still exist here.

$ curl --location $HDB_API_ENDPOINT \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic YWRtaW46cGFzc3dvcmQxMjM0NQ==' \
--data '{
    "operation": "describe_all"
}'
{"hdb_rosa_schema":{}}

We can add a table to this schema.

$ curl --location $HDB_API_ENDPOINT \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic YWRtaW46cGFzc3dvcmQxMjM0NQ==' \
--data '{ "operation": "create_table", "schema": "hdb_rosa_schema", "table": "hdb_schema_table", "hash_attribute": "id" }'
{"message":"table 'hdb_rosa_schema.hdb_schema_table' successfully created."}

Delete the pod again and test persistence, this time with describe_all.
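This is the same command as in the earlier persistence test:

$ oc delete po --all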

$ curl --location $HDB_API_ENDPOINT \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic YWRtaW46cGFzc3dvcmQxMjM0NQ==' \
--data '{
    "operation": "describe_all"
}'
{"hdb_rosa_schema":{"hdb_schema_table":{"__createdtime__":1686484996114.1133,"__updatedtime__":1686484996114.1133,"hash_attribute":"id","id":"a71a415d-199b-41ce-86a8-3c336850ea67","name":"hdb_schema_table","residence":null,"schema":"hdb_rosa_schema","attributes":[{"attribute":"__createdtime__"},{"attribute":"__updatedtime__"},{"attribute":"id"}],"clustering_stream_name":"1e79fdbec80f4ab386755f827fb8863b","record_count":0}}}

Clean up

Delete the cluster with the rosa CLI.

$ rosa delete cluster -c hdb-rosa-clstr
? Are you sure you want to delete cluster hdb-rosa-clstr? Yes
I: Cluster 'hdb-rosa-clstr' will start uninstalling now
I: To watch your cluster uninstallation logs, run 'rosa logs uninstall -c hdb-rosa-clstr --watch'

Summary

We covered the rosa and oc CLIs, setting them up, and getting the prerequisites ready before launching the cluster. We then launched the cluster, deployed HarperDB on it with both the standard HarperDB image and the OpenShift-specific one, and tested the endpoints. Thank you for reading!
