As an IT and cloud team manager with 18 years of experience with InterSystems technologies, I recently led our team in the transformation of our traditional on-premises ERP system to a cloud-based solution. We embarked on deploying InterSystems IRIS within a Kubernetes environment on AWS EKS, aiming to achieve a scalable, performant, and secure system. Central to this endeavor was the utilization of the AWS Application Load Balancer (ALB) as our ingress controller.
However, our challenge extended beyond the initial cluster and application deployment; we needed to establish an efficient and secure method to manage the various IRIS instances, particularly when employing mirroring for high availability.
This post will focus on the centralized management solution we implemented to address this challenge. By leveraging the capabilities of AWS EKS and ALB, we developed a robust architecture that allowed us to effectively manage and monitor the IRIS cluster, ensuring seamless accessibility and maintaining the highest levels of security.
In the following sections, we will delve into the technical details of our implementation, sharing the strategies and best practices we employed to overcome the complexities of managing a distributed IRIS environment on AWS EKS. Through this post, we aim to provide valuable insights and guidance to assist others facing similar challenges in their cloud migration journeys with InterSystems technologies.
Configuration Summary
Our configuration capitalized on the scalability of AWS EKS, the automation of the InterSystems Kubernetes Operator (IKO) 3.6, and the routing proficiency of AWS ALB. This combination provided a robust and agile environment for our ERP system's web services.
Mirroring Configuration and Management Access
We deployed mirrored IRIS data servers to ensure high availability. These servers, alongside a single application server, were each equipped with a Web Gateway sidecar pod. Establishing secure access to these management portals was paramount, achieved by meticulous network and service configuration.
Detailed Configuration Steps
Initial Deployment with IKO:
- Leveraging IKO 3.6, we deployed the IRIS instances, ensuring they adhered to our high-availability requirements.
Web Gateway Management Configuration:
- We created server access profiles within the Web Gateway Management interface. These profiles, named data00 and data01, were crucial in establishing direct and secure connectivity to the respective Web Gateway sidecar pods associated with each IRIS data server.
- To achieve precise routing of incoming traffic to the appropriate Web Gateway, we used the DNS pod names of the IRIS data servers. By configuring the server access profiles with the fully qualified DNS pod names, such as iris-svc.app.data-0.svc.cluster.local and iris-svc.app.data-1.svc.cluster.local, we ensured that requests were accurately directed to the designated Web Gateway sidecar pods.
https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GCGI_config_serv
IRIS Terminal Commands:
- To align the CSP settings with the newly created server profiles, we executed the following commands in the IRIS terminal:
d $System.CSP.SetConfig("CSPConfigName","data00") # on data00
d $System.CSP.SetConfig("CSPConfigName","data01") # on data01
https://docs.intersystems.com/healthconnectlatest/csp/docbook/DocBook.UI.Page.cls?KEY=GCGI_remote_csp
NGINX Configuration:
- The NGINX configuration was updated to respond to the /data00 and /data01 paths, followed by creating Kubernetes services and ingress resources that interfaced with the AWS ALB, completing our secure and unified access solution.
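As a rough illustration, the added location blocks in the Web Gateway's NGINX configuration can look like the sketch below. This is illustrative only: the surrounding file layout varies by installation, and the CSP directive shown assumes the InterSystems NGINX Web Gateway module is compiled in and loaded.

```nginx
# Sketch of the location blocks added to the Web Gateway's nginx.conf.
# Assumes the InterSystems CSP module for NGINX is loaded; exact
# directives and layout depend on your installation.
location /data00 {
    CSP On;   # hand requests under /data00 to the Web Gateway,
              # which routes them via the data00 server access profile
}

location /data01 {
    CSP On;   # likewise, routed via the data01 server access profile
}
```

The server access profiles created earlier do the actual backend selection, so the NGINX side only needs to mark which paths are handled by the gateway.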
Creating Kubernetes Services:
- I initiated the setup by creating Kubernetes services for the IRIS data servers and the SAM:
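A minimal sketch of one such service is shown below for the first data server; data01 and the SAM follow the same pattern. The names, namespace, selector labels, and ports here are assumptions for illustration, not our exact manifests.

```yaml
# Illustrative ClusterIP Service targeting the Web Gateway sidecar of
# the first IRIS data server. Names, labels, and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: iris-data00-svc
  namespace: app
spec:
  type: ClusterIP
  selector:
    statefulset.kubernetes.io/pod-name: iris-data-0   # pin to one pod
  ports:
    - name: http
      port: 80
      targetPort: 80   # Web Gateway sidecar listen port (assumption)
```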
Ingress Resource Definition:
- Next, I defined the ingress resources, which route traffic to the appropriate paths using annotations to secure and manage access.
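A skeletal version of such an ingress manifest is sketched below. The service names, paths, and certificate ARN are placeholders; the annotations are the ones explained in the following section.

```yaml
# Skeletal ALB ingress manifest (service names and certificate ARN are
# placeholders; annotations mirror those discussed in this post).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: iris-ingress
  namespace: app
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/certificate-arn: <your-acm-certificate-arn>
spec:
  rules:
    - http:
        paths:
          - path: /data00
            pathType: Prefix
            backend:
              service:
                name: iris-data00-svc   # placeholder service name
                port:
                  number: 80
          - path: /data01
            pathType: Prefix
            backend:
              service:
                name: iris-data01-svc   # placeholder service name
                port:
                  number: 80
```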
Explanations for the Annotations in the Ingress YAML Configuration:
- alb.ingress.kubernetes.io/scheme: internal
  - Specifies that the Application Load Balancer should be internal, not accessible from the internet.
  - This ensures that the ALB is only reachable within the private network and not exposed publicly.
- alb.ingress.kubernetes.io/subnets: subnet-internal, subnet-internal
  - Specifies the subnets where the Application Load Balancer should be provisioned.
  - In this case, the ALB will be deployed in the specified internal subnets, ensuring it is not accessible from the public internet.
- alb.ingress.kubernetes.io/target-type: ip
  - Specifies that the target type for the Application Load Balancer should be IP-based.
  - This means that the ALB will route traffic directly to the IP addresses of the pods, rather than using instance IDs or other target types.
- alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true
  - Enables sticky sessions (session affinity) for the target group.
  - When enabled, the ALB will ensure that requests from the same client are consistently routed to the same target pod, maintaining session persistence.
- alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
  - Specifies the ports and protocols that the Application Load Balancer should listen on.
  - In this case, the ALB is configured to listen for HTTPS traffic on port 443.
- alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:il-
  - Specifies the Amazon Resource Name (ARN) of the SSL/TLS certificate to use for HTTPS traffic.
  - The ARN points to a certificate stored in AWS Certificate Manager (ACM), which will be used to terminate SSL/TLS connections at the ALB.
These annotations provide fine-grained control over the behavior and configuration of the AWS Application Load Balancer when used as an ingress controller in a Kubernetes cluster. They allow you to customize the ALB's networking, security, and routing settings to suit your specific requirements.
After configuring NGINX with location settings to respond to the paths for our data servers, the final step was to extend this setup to the SAM by defining its service and adding its route to the ingress file.
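In practice, the SAM route amounts to one more path entry appended under the ingress rule. The service name below is a placeholder, and the port is an assumption based on SAM's default web port:

```yaml
# Additional entry appended under spec.rules[0].http.paths in the
# ingress manifest. Service name is a placeholder; port 8080 is SAM's
# default web port and should be adjusted if yours differs.
- path: /sam
  pathType: Prefix
  backend:
    service:
      name: sam-svc
      port:
        number: 8080
```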
Security Considerations:
We meticulously aligned our approach with cloud security best practices, particularly the principle of least privilege, ensuring that only the access rights necessary to perform a given task are granted.
DATA00:
DATA01:
SAM:
Conclusion:
This article shared our journey of migrating our application to the cloud using InterSystems IRIS on AWS EKS, focusing on creating a centralized, accessible, and secure management solution for the IRIS cluster. By leveraging security best practices and innovative approaches, we achieved a scalable and highly available architecture.
We hope that the insights and techniques shared in this article prove valuable to those embarking on their own cloud migration projects with InterSystems IRIS. If you apply these concepts to your work, we'd be interested to learn about your experiences and any lessons you discover along the way.