Integrations are nothing new
As long as there is a need to connect different programs, software, or data systems, there is also a need for integrations. An integration system can act as a sender, translator, or creator.
However, everything starts with a need - what needs to be connected and how.
And there are plenty of these needs. In the world, there are countless different programs and systems, as well as companies that utilize them. When companies collaborate with each other, the incompatibility of different systems can become a hindrance: this is where integration comes into play. Integration ensures that systems can communicate and work together, allowing companies to focus their time and resources on developing and maintaining their core business operations.
So, what do modern integrations require today? How have contemporary trends in the IT industry shaped the anatomy of integrations? What does AWS offer for building modern integrations? How have we at Skillwell solved this issue? In this article, we'll answer these questions.
Cloud Services Bring Agility to Integrations
One of the modern trends in application development, and in the IT industry as a whole, is agility. In project work it has become clear that a requirements specification rarely captures every relevant aspect up front, at least not within a reasonable timeframe. If a project cannot adapt to changes and gaps as they emerge, it can quickly dig itself into a hole that is hard to escape.
Cloud services themselves are also a rapidly growing field. They drive agility because they have moved resource and infrastructure management from data centers to web browsers, source code, and command lines. Setting up a new server no longer requires buying hardware, clearing space, and installing equipment; configuring settings and clicking buttons in a browser is now enough.
This concept is taken even further with the serverless computing model, where provisioning the underlying resources is reduced to a very simple process, with much of the associated work hidden beneath the surface. Despite the name, serverless services still run on servers - application developers just don't need to configure or maintain them.
Agility and flexibility have also become important in the world of integrations. The keyword here is reactivity: instead of handling events between systems in large batches either manually or on a schedule, events can be reacted to as they happen. Major cloud service providers often offer various services that enable this type of event-driven architecture.
Reactivity, however, is not a one-size-fits-all solution; it still depends on the customer's needs. Sometimes it's more sensible to perform larger batch processing when system usage is low. It's also possible to implement a hybrid solution where events are collected and stored for processing as they occur, but instead of a single large batch job, multiple jobs can be scheduled at regular intervals.
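The hybrid model described above can be sketched in a few lines: events accumulate in a buffer as they occur, and a scheduled job drains them in bounded batches. This is a simplified, stand-alone illustration - the in-memory deque here merely stands in for a real queue service, and all names are made up:

```python
from collections import deque

def drain_in_batches(buffer, batch_size):
    """Drain up to batch_size events from the buffer, oldest first.

    Sketch of the hybrid model: events are collected as they occur, and
    a scheduled job drains them in bounded batches instead of one large
    nightly run. Names are illustrative only.
    """
    batch = []
    while buffer and len(batch) < batch_size:
        batch.append(buffer.popleft())
    return batch

# Events accumulate between runs...
events = deque(range(25))
# ...and each scheduled run handles at most 10 of them.
first_run = drain_in_batches(events, 10)
```

Running the scheduled job more or less often then tunes the system anywhere between fully reactive and fully batch-oriented.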
In addition to agility, flexibility, and reactivity, integrations also require other commonly desired features from software and systems: scalability, fault tolerance, cost-effectiveness, internationalization, and security. The importance of fault tolerance is particularly emphasized in integrations: messages between systems should reliably reach their destinations, and in case of disruptions, they should be retained until the system is operational again and ready to receive and process them.
AWS's Capabilities for Building Agile Integrations
So, how does Amazon Web Services (AWS) fit into this picture? What does AWS offer, and how does it enable the building of modern integrations? Flexibility in running programs is one good example.
Elastic Compute Cloud (EC2), one of AWS's oldest services alongside Simple Storage Service (S3), allows customers to launch virtual machine instances where their own programs or software can be installed and served to the outside world.
Elastic Container Service (ECS), on the other hand, enables running Docker containers on EC2 instances. ECS can also be used serverlessly with AWS Fargate: Fargate provisions and manages the underlying compute capacity behind the scenes.
Another important serverless computing service provided by AWS is AWS Lambda. It is a Function-as-a-Service (FaaS) that allows code execution without the need to define the underlying infrastructure in detail. Lambda is particularly designed for event-driven architecture, as it typically triggers in response to some event. Lambda supports multiple programming languages and provides runtime environments for applications developed in those languages.
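As a rough illustration, a Python Lambda handler is just a function that receives an event and a context object. The event shape below (an order with an id and a list of items) is entirely hypothetical, not any real integration's format:

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler: reacts to a single incoming event.

    The payload shape here (an order with an 'id' and an 'items' list)
    is a hypothetical example for illustration only.
    """
    order = event.get("detail", {})
    total = sum(item.get("quantity", 0) for item in order.get("items", []))
    # In a real integration the result would be forwarded to another
    # system; here we simply return an HTTP-style response body.
    return {
        "statusCode": 200,
        "body": json.dumps({"orderId": order.get("id"), "itemCount": total}),
    }
```

The trigger (a schedule, a queue, an HTTP request) determines what the `event` argument contains; the function itself stays a plain, easily testable piece of code.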
As of this writing, AWS offers over 200 different services. Not all of them matter for integrations: some provide added value, while others are critical components of almost any system. So which of these services are most essential to our view of modern integrations?
The anatomy of Skillwell's Integration Platform
AWS Lambda is the muscle memory of the integration platform
Lambda functions form the backbone of the integration platform - they are the pieces that actually perform integration tasks.
Lambda functions are triggered by some event, which can be a schedule or an event in another service. When triggered, Lambda executes the program code assigned to it, performing the specific integration task required for that function. We build the "muscle memory" or the operational logic of these functions based on the customer's needs.
Amazon Simple Queue Service (SQS) and API Gateway provide sensory inputs to the integration platform
Simple Queue Service is Amazon's message queue service, and API Gateway is a service for creating, publishing, and managing web protocol-based programming interfaces (e.g., REST). These services together function like the peripheral nervous system and senses. They bring information to different parts of the system, enabling actions to be taken based on that information.
API Gateway can pass incoming HTTP requests straight into an SQS queue. In this picture, API Gateway serves as the eyes and ears that perceive the outside world and transmit what they detect forward through the peripheral nervous system, SQS. SQS stores this information in a queue, where other components can retrieve and process it, effectively serving as a form of memory.
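When Lambda consumes from SQS, each invocation receives a batch of messages under the `Records` key. The sketch below parses such a batch; the outer shape (`Records`, `body`, `messageId`) is the standard SQS-to-Lambda event format, while the JSON payload inside each body is an assumption of this example:

```python
import json

def sqs_handler(event, context):
    """Handle a batch of SQS messages delivered to a Lambda function.

    'Records', 'body' and 'messageId' follow the standard SQS event
    format; the JSON payload inside each body is a hypothetical
    convention of this sketch.
    """
    handled = []
    for record in event.get("Records", []):
        payload = json.loads(record["body"])
        # Real integration logic would transform and forward `payload`;
        # here we only record which messages were handled.
        handled.append(record["messageId"])
    return {"batchSize": len(handled), "messageIds": handled}
```

If processing a record raises an error, SQS makes the message visible again and it is retried, which ties into the fault-tolerance discussion later in this article.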
Amazon EventBridge acts as both the central and the autonomic nervous system
EventBridge can act as both the central and the autonomic nervous system of the integration platform. It controls events and actions by routing events from AWS services and some third-party services onward to other services.
EventBridge can also regulate what data is sent to various systems and services, and how often. This ability to act as both the central and the autonomic nervous system is important for reactivity.
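EventBridge decides where to route an event by matching it against JSON event patterns. The toy matcher below mimics only the basic idea (exact-value matching) in plain Python; real EventBridge patterns also support prefix, numeric, and anything-but filters, and the rule pattern shown is just an example:

```python
def matches(pattern, event):
    """Very simplified take on EventBridge pattern matching: every key
    in the pattern must be present in the event, and the event's value
    must appear in the pattern's list of allowed values. Real
    EventBridge patterns support far richer filters than this."""
    for key, allowed in pattern.items():
        if isinstance(allowed, dict):
            if not isinstance(event.get(key), dict) or not matches(allowed, event[key]):
                return False
        elif event.get(key) not in allowed:
            return False
    return True

# A rule pattern (in EventBridge's JSON syntax) that would route
# S3 object-created events to a target such as a Lambda function:
rule_pattern = {"source": ["aws.s3"], "detail-type": ["Object Created"]}
```

In a real rule the matching happens inside EventBridge itself; the target (a Lambda function, an SQS queue, another bus) only ever sees the events that passed the pattern.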
Amazon DynamoDB stores data like a brain
Data is put into DynamoDB when direct access and fast retrieval are needed
Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. DynamoDB acts as the central nervous system's short-term memory, where data that needs to be accessed and retrieved quickly is put. Lambda functions use DynamoDB to store and retrieve data required for decision-making and to serve end-users through the Skillwell integration platform's API.
Data stored in DynamoDB can be accessed through the API and can be used for both reading and writing, enabling real-time interactivity with the platform.
Let's look at the benefits
So how do this architecture and these AWS services support the customer's goals? We can assess this by viewing the integration platform through the good qualities mentioned earlier: agility, flexibility, reactivity, fault tolerance, internationalization, cost-effectiveness, and security.
1. Agility and scalability
An important way of thinking when working in an AWS environment is IaC, i.e. Infrastructure as Code. The idea is to manage resources and infrastructure as code: resources are defined in template files, which the services that interpret them use to set up the resources exactly as the file describes. AWS's own IaC service is CloudFormation, which lets both AWS and third-party resources be modeled as JSON or YAML documents.
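To give a flavor of what such a template looks like, here is a minimal sketch of a CloudFormation YAML document defining a single SQS queue. The logical and queue names are made up, and a real template would typically set more properties:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  # Hypothetical logical name; a real template would usually also
  # configure settings such as a RedrivePolicy for a dead-letter queue.
  OrderEventQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: order-events
```

Deploying the same template to another account or Region reproduces the same resources, which is exactly what makes IaC repeatable.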
With AWS and CloudFormation, the system can be defined entirely within the code base, and that code base can be versioned with Git inside AWS using AWS's own Git repository service, CodeCommit. Combined with AWS DevOps services such as CodeBuild and CodePipeline, this helps build agile development environments, which in turn translates into a better ability to adapt to change.
Scalability is also one of the traditional challenges of information systems – growth is difficult to predict accurately, and expanding a traditional IT room to meet increased needs takes a lot of time and resources. One of the biggest advantages of infrastructure moved to the cloud is its scalability.
Services and infrastructures can be set up virtually on top of AWS's own, massive infrastructure.
Lambda functions scale automatically: each event triggers its own execution of the Lambda function.
S3 offers unlimited storage space for any object data.
SQS queues can store an unlimited number of messages, and depending on the queue type, from 20,000 (FIFO) to 120,000 (standard) messages can be in flight, i.e. being processed, at the same time.
So there is no need to worry about business growth - the system grows with it.
2. Reliable stability
In addition to easy scalability, the AWS infrastructure supports fault tolerance and internationalization. AWS has spread its infrastructure geographically into several different areas, i.e. Regions, which in turn are divided into several different Availability Zones, which are separate data centers (or clusters of data centers) within the regions. The Availability Zones are also isolated from each other so that the crash of one AZ does not crash the entire Region or other Availability Zones. Some services also enable the replication of data to another Availability Zone or Region, improving the security of applications and data.
AWS infrastructure can be found around the world, enabling systems to be built and served where the customers are. In addition to Regions and Availability Zones, there are so-called Edge locations, also spread around the world. They are often placed in large cities close to users, and they are used only by certain AWS services, most importantly CloudFront, AWS's own CDN (content delivery network) service.
CloudFront can use Edge locations as a cache: for example, it can store frequently requested files in these Edge locations and distribute them from there instead of fetching them from the actual source deeper in the AWS infrastructure. This can speed up the loading of web pages when their static resources, such as images, HTML templates, and CSS stylesheets, are served from the cache of a nearby Edge location.
3. Cloud Security first
Of course, speed and stability are important in terms of the system's usability and thus its desirability, but one of the most important things in general in information systems is information security. It is easy to agree with this statement. Unfortunately, however, information security often lags behind or is completely neglected, both in large and small organizations.
In the AWS world, information security is divided into two camps: security of the cloud and security in the cloud. Security of the cloud means the security of the infrastructure maintained by AWS: AWS makes sure the infrastructure behind its cloud services stays up to date against security threats, the data centers follow strict security protocols, and the software needed to provide the services is kept current.
Security of the cloud therefore concerns AWS's own resources, which it must maintain and monitor in order to offer its services. Security in the cloud is in turn the responsibility of the AWS customer, and it covers everything that is done with AWS services and stored in the AWS cloud: services must be configured correctly, credentials and secrets must be stored securely, programs installed in the cloud must be kept up to date, and in software development projects the customer must take care of the security of the software being built.
We take care of the security and functionality of our integration platform by following good security practices when building and maintaining it. We use many services and features provided by AWS, for example for safely storing and using secrets, encrypting data, and monitoring resources. This facilitates our own work in securing and monitoring the platform and reduces the customer's burden of taking care of information security.
4. But at what price?
Unfortunately, the reality is that beautiful ideas only go as far as the budget allows. The framework of economic realities limits the possibilities of building and maintaining a business, so cost-effectiveness is always a topical issue in the design of systems.
Fortunately, AWS pricing models offer flexibility depending on the situation. Perhaps the most common AWS pricing model is pay-as-you-go billing – a certain fee is paid for services based on how much the service in question has been used. For example, in the Lambda service, you pay for the execution time of the function in milliseconds and the number of calls. In addition, the price is affected by the amount of memory assigned to the Lambda function, the processor architecture and the Region where the service is used.
AWS also has a so-called Free Tier, which gives a certain amount of free usage of some services. Some Free Tier offers are only valid for 12 months after the AWS account is created, but others are valid indefinitely. For example, in the Lambda service the first million invocations and 400,000 GB-seconds of compute time each month are always free. In addition, DynamoDB always includes 25 GB of free storage, and SQS gives you one million free requests every month.
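To make the pay-per-use model concrete, the sketch below estimates a monthly Lambda bill after the always-free tier. The unit prices are approximate us-east-1 x86 figures at the time of writing and vary by Region and architecture, so treat all numbers as illustrative:

```python
# Approximate us-east-1 x86 Lambda prices; these vary by Region and
# architecture, so treat them as example figures only.
PRICE_PER_REQUEST = 0.20 / 1_000_000   # USD per invocation
PRICE_PER_GB_SECOND = 0.0000166667     # USD per GB-second
FREE_REQUESTS = 1_000_000              # always-free tier per month
FREE_GB_SECONDS = 400_000

def monthly_lambda_cost(invocations, avg_ms, memory_mb):
    """Estimate one function's monthly bill in USD after the free tier.

    Compute cost is billed in GB-seconds: execution time multiplied by
    the memory allocated to the function.
    """
    gb_seconds = invocations * (avg_ms / 1000.0) * (memory_mb / 1024.0)
    billable_requests = max(0, invocations - FREE_REQUESTS)
    billable_gb_seconds = max(0.0, gb_seconds - FREE_GB_SECONDS)
    return (billable_requests * PRICE_PER_REQUEST
            + billable_gb_seconds * PRICE_PER_GB_SECOND)
```

For example, a million 100 ms invocations at 128 MB stay entirely within the free tier, while five million 200 ms invocations at 512 MB cost only a couple of dollars a month at these rates.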
AWS's pay-as-you-go pricing model and Free Tier offers provide an affordable starting point. AWS costs during the initial development phase may well be close to zero. As the business and the number of customers grow, the invoices will of course start to rise, but there are solutions for that situation as well: Savings Plans and Reserved Instances.
Savings Plans and Reserved Instances both work with similar logic: the customer commits to a certain amount of usage for one or three years. With Savings Plans the commitment is to a certain amount of resource use, measured in dollars per hour, while Reserved Instances reserve a certain type of EC2 instance for the duration of the contract. In return, the customer can use the resources covered by the contract at a significantly lower price - in the best case roughly 60-70 percent cheaper.
All of these pricing models let you start small and help you save as your usage grows much larger. AWS's flexibility thus extends beyond services to costs: whatever your situation, not only the system but also the costs adapt to changes in your business.
To sum it up
Modern integrations need certain characteristics: agility, flexibility, reactivity, fault tolerance, internationalization, cost-efficiency, and security. In the AWS world, these are problems with ready solutions.
Many services, such as Lambda and SQS, which the platform uses for integration tasks and data transmission, scale easily out of the box. DevOps and infrastructure-as-code services, such as CodeCommit, CodeBuild, CodePipeline, and CloudFormation, enable agile development and let resources be set up quickly and repeatably in different environments and geographical areas.
EventBridge, as well as many AWS services' events and/or features or services that mediate them (for example, DynamoDB Streams), enable the construction of reactive action chains, so that events can be reacted to as soon as they occur.
Integration cases where a quick response matters are just as feasible as larger batch runs or hybrid implementations where, for example, an SQS queue is polled at short intervals.
SQS queues are also part of fault tolerance and failure management. Messages in SQS queues can be moved to so-called dead-letter queues once processing has been attempted too many times; this limit is configurable. AWS's physical infrastructure is built to withstand failures in server hardware and data centers, so attention can be focused on error handling in the source code and AWS services.
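The dead-letter behavior can be illustrated with a toy model: a message that fails processing the configured number of times ends up in the dead-letter queue instead of being retried forever. This is a plain-Python simulation, not how SQS actually works internally (real SQS drives retries via visibility timeouts):

```python
class QueueWithDLQ:
    """Toy model of SQS's redrive policy: once a message has been
    received maxReceiveCount times without successful processing, it is
    moved to the dead-letter queue instead of being retried again.
    Real SQS retries are driven by visibility timeouts; this sketch
    simply loops."""

    def __init__(self, max_receive_count):
        self.max_receive_count = max_receive_count
        self.dead_letters = []

    def process(self, message, handle):
        for _ in range(self.max_receive_count):
            try:
                handle(message)
                return "processed"
            except Exception:
                continue  # message becomes visible again and is retried
        self.dead_letters.append(message)
        return "dead-lettered"
```

Messages parked in the dead-letter queue can later be inspected and, once the receiving system is healthy again, redriven back for processing, so nothing is silently lost.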
As an AWS partner and an operator focused on the AWS ecosystem, we have been looking for the best services and ways to properly implement all the different features for our integration platform.