What is computing? Where does the cloud fit into this universe? While these questions may seem basic, revisiting them can provide fresh insights for professionals and organizations. After all, cloud computing technology has been around for less than 20 years within our broader computing landscape, which has been marked by research and innovation since the late 19th century.
A fitting starting point for this story would be the 1890s, when American inventor Herman Hollerith developed the punched card and a set of machines for the U.S. Census Bureau. These machines could sort, retrieve, count, and perform simple calculations on data stored in punched cards.
(Source: https://ethw.org/Inventing_the_Computer)
I love content like this, which takes us back in time to uncover historical novelties that enhance our understanding of modern concepts. This apparent paradox — discovering “new” aspects of the past — makes sense. After all, we can always uncover fresh perspectives and fascinating facts about the tools and systems we use daily but rarely explore deeply.
Computing has been a part of human history for centuries. The drive to find more efficient ways to perform tasks is age-old. Milestones such as the harnessing of electricity revolutionized lighting, replacing whale oil, candles, and gas lamps, and transformed how we generated power, which until then had come from mechanical sources like windmills and fossil-fuel engines.
Another pivotal invention was the transistor, which revolutionized electronics and enabled the creation of key devices like calculators and computers. By replacing vacuum tubes, transistors reduced energy consumption and allowed for more compact and efficient equipment.
These advancements and others have transformed how businesses and individuals access and use technology, shaping the world we know today.
But what exactly is the cloud, and how does it work? How do these historical breakthroughs relate to this topic?
This article delves into the fundamentals of cloud computing, focusing on services offered by AWS, their benefits, trends, and innovations. At the same time, it offers nostalgic moments and reflections for all of us!
Computing: The Foundation of Digital Transformation
Before diving into the world of cloud computing, it’s worth reflecting on how computing has evolved over the decades. From early analog computers to modern networks, computing has been the engine driving technological innovation.
It has fueled revolutions like the internet and mobile devices, profoundly transforming how we live and work. Today, computing transcends hardware and software, connecting people, systems, and data on a global scale.
The cloud represents the next frontier in this transformation, introducing new consumption models and limitless possibilities. Let’s explore how this technology is shaping the future and how you can harness its potential.
Below is a slide from my presentation on digital transformation at AWS Summit São Paulo 2024, providing a brief overview of computing advancements over the past 84 years.
Cloud Computing: The Pillar of Digital Transformation
Clearly, computing is not a recent phenomenon. Its roots go back thousands of years, and during my research I discovered several advances in the field of computing, albeit analog ones, dating from the 1890s onward. At first this seemed incorrect, but upon reflection I realized it marked the beginning of more than 130 years of technological evolution. Although this timeline covers a long period, it essentially represents barely two human lifetimes of continuous research. Fascinating!
Now, let’s define cloud computing. I find the NIST (National Institute of Standards and Technology) definition particularly interesting because it provides a clear framework for comparing services:
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model is composed of five essential characteristics, three service models, and four deployment models.
(Source: NIST Special Publication 800-145, https://csrc.nist.gov/pubs/sp/800/145/final)
Cloud computing is an ever-evolving paradigm. The NIST definition highlights fundamental aspects of this technology, offering a solid foundation for comparing services, understanding deployment models, and using it efficiently.
The cloud involves delivering computing services — including servers, storage, databases, networking, software, analytics, and AI — over the internet (“the cloud”). This is made possible by software that defines and provisions resources through APIs (Application Programming Interfaces).
Exposing resources through APIs promotes decoupling and interoperability among resources, services, and systems. However, those topics warrant a separate discussion, don’t they?
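To make this concrete, here is a minimal sketch of provisioning a resource through an API, using Python and the boto3 SDK. The AMI ID is a placeholder, and the region shown is South America (São Paulo); in a real account you would supply your own values and credentials.

```python
import boto3

# Create an EC2 client; credentials come from the environment or ~/.aws.
ec2 = boto3.client("ec2", region_name="sa-east-1")

# Provision a virtual server with a single API call -- no hardware purchase,
# no data center visit, no interaction with the provider's staff.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched instance: {instance_id}")
```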
Instead of owning and managing physical infrastructure, businesses can rent resources as needed, paying only for what they use. This method of providing and consuming computing resources emerged around 2006.
For instance, during a sudden spike in website traffic, a company can rapidly scale up its resources (Rapid Elasticity).
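As a hedged sketch of Rapid Elasticity, the snippet below assumes an existing EC2 Auto Scaling group (the name web-asg is hypothetical) and adjusts its desired capacity around a traffic spike. In practice, a target-tracking scaling policy would usually make these adjustments automatically.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="sa-east-1")

# Traffic spike: ask the Auto Scaling group for more instances.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="web-asg",  # hypothetical group name
    DesiredCapacity=10,
    HonorCooldown=False,
)

# Spike over: scale back in and stop paying for idle capacity.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="web-asg",
    DesiredCapacity=2,
)
```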
Leading global tech companies that emerged in the 1990s began using cloud computing internally before expanding their services to startups and enterprises.
The cloud market has continued to evolve since its inception, with projections indicating growth in Brazil through 2030. According to a study by AWS and Accenture, Brazilian small and medium-sized enterprises (SMEs) adopting cloud technology are expected to generate 8.4 million jobs in sectors like healthcare, education, and agriculture by 2030.
AWS: 18 Years of Democratizing Access to the Cloud
We’ve previously mentioned that the rise of the “cloud” — a colloquial term widely used in the market — began around 2006. Today, we’d like to focus on the development of AWS services. AWS is the global leader in cloud computing according to Gartner, a renowned IT research and consulting firm. Gartner evaluates the strengths and weaknesses of the major cloud providers from the customer’s perspective, analyzing what matters most for businesses, which providers align best with immediate and long-term cloud goals, and other aspects.
I particularly appreciate using Gartner’s insights to highlight which providers lead the market and their strengths and weaknesses based on customer profiles. This can assist in defining non-functional requirements for projects and choosing the most appropriate services.
AWS has several strengths, including:
Operational excellence, with a track record of delivering robust and reliable services.
Highly distributed architecture with documented fault isolation limits.
At least three independent Availability Zones (AZs) per region, which we’ll discuss in detail later.
Efficient supply chain management, enabling global demand fulfillment, including needs related to GenAI (Generative AI).
This insight is invaluable for IT professionals developing cloud solutions and for IT managers overseeing organizational strategy. In my view, it is essential to choose a comprehensive technology with a broad range of functionalities to serve customers, while still specializing effectively.
AWS launched its first cloud services on March 14, 2006, and since then it has been driving global business transformation with a positive impact worldwide. In Brazil, the story is no different. Since arriving in 2011, AWS has invested over $3.8 billion (approximately R$19.2 billion) to establish, connect, operate, and maintain its data centers in the South America (São Paulo) Region.
The adoption of AWS services in Brazil and the growth of local operations — AWS’s 8th global region — occurred gradually, solidifying Brazil as a key reference within AWS.
Cloud Computing Characteristics According to NIST
Let’s revisit the NIST definitions, which are particularly useful in understanding how cloud computing works:
On-Demand Self-Service: Users can automatically provision computing resources, such as servers and storage, without requiring direct interaction with the service provider.
Broad Network Access: Resources are available over the network and can be accessed from various devices, including mobile phones, tablets, laptops, and workstations, using standard access protocols.
Resource Pooling: Provider resources are shared among multiple customers through a multi-tenant model, dynamically allocated as needed without the user knowing their physical location.
Rapid Elasticity: Resources can be quickly scaled up or down, often automatically, based on demand. To users, resources appear unlimited and readily available at any time.
Measured Service: Resource usage is automatically monitored, controlled, and measured. This ensures transparency for both the provider and the consumer, optimizing resource usage and ensuring fair billing based on actual consumption.
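Measured Service is easy to observe in practice: every resource emits usage metrics that can be queried programmatically. Below is a minimal sketch that reads an instance’s average CPU utilization from Amazon CloudWatch; the instance ID is a placeholder.

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch", region_name="sa-east-1")

# Query the average CPU utilization of one instance over the last hour.
now = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,  # one data point per 5 minutes
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2), "%")
```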
These NIST definitions highlight the key aspects of cloud computing, providing a common foundation for comparing services and deployment strategies and fostering effective discussions about cloud adoption.
This knowledge is essential for system planners, program managers, technologists, and other professionals adopting cloud computing as either consumers or service providers.
Service Models and Deployment Models
Understanding cloud service models and deployment models is crucial for businesses and technology professionals, as these choices directly impact business needs, budgets, and technology strategies.
Do you know what IaaS, SaaS, and PaaS models entail? How do they differ, and which one is right for you? And among deployment models, which suits your organization’s strategic goals best?
NIST provides important references for these models:
1. Infrastructure as a Service (IaaS)
Consumers can provision essential computing resources such as processing, storage, and networking.
They can install and run operating systems, applications, and other software as needed.
While they don’t control the underlying physical infrastructure, they have control over OS, storage, deployed applications, and in some cases, specific network components like host firewalls.
2. Software as a Service (SaaS)
Consumers use applications provided by the cloud provider, accessible via web interfaces or dedicated programs.
Users don’t manage or control the underlying infrastructure (network, servers, OS, or storage).
Their responsibility is limited to application-specific configurations like user preferences.
3. Platform as a Service (PaaS)
Consumers can deploy applications on cloud infrastructure using programming languages, libraries, and tools provided by the provider.
They don’t control the underlying infrastructure, but they can manage their deployed applications and configurations.
AWS Service Examples by Model:
IaaS: Amazon EC2 for computing power, Amazon EBS for storage.
SaaS: Amazon WorkDocs for file collaboration.
PaaS: AWS Elastic Beanstalk for web application deployment.
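To illustrate the difference in abstraction level, here is a hedged sketch of the PaaS model with AWS Elastic Beanstalk: you declare an application and an environment, and the platform provisions the underlying servers, load balancing, and scaling for you. The application name is illustrative, and solution stack names change over time (current ones can be listed with list_available_solution_stacks).

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="sa-east-1")

# PaaS: we declare an application and an environment; Elastic Beanstalk
# provisions and manages the underlying EC2 instances for us.
eb.create_application(ApplicationName="demo-app")  # illustrative name

eb.create_environment(
    ApplicationName="demo-app",
    EnvironmentName="demo-env",
    # Illustrative stack name; list the current ones before using.
    SolutionStackName="64bit Amazon Linux 2023 v4.0.0 running Python 3.11",
)
```

With IaaS, by contrast, you would launch, configure, and patch the EC2 instances yourself.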
Deployment Models (Including the New Multi-Cloud Model)
NIST defines four deployment models; a fifth, multi-cloud, has since emerged in the market:
Public Cloud: Resources are shared and managed by an external provider (e.g., AWS). Ideal for scalability and low initial costs.
Private Cloud: Infrastructure is dedicated to a single organization, offering greater control and security.
Hybrid Cloud: Combines public and private clouds, allowing data and applications to move between them. This offers flexibility for diverse workloads.
Community Cloud: Shared infrastructure among organizations with common interests (e.g., security, policies, compliance).
Multi-Cloud: Uses multiple cloud providers to avoid vendor lock-in, enhance redundancy, and optimize costs.
Below are some use cases for each deployment model.
For the public cloud model, we can use AWS Cloud to support a mobile app development startup that requires scalable and cost-effective infrastructure to launch its product in the market. Since it does not have a data center and wants to focus on product development, the public cloud is the ideal choice.
The startup can use services such as EC2 instances to host its virtual servers and provide application computing, S3 buckets to store app data and backups, and CloudFront to distribute content globally.
The benefits include low upfront investment and speed in building the solution.
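As a minimal sketch of the storage piece of that architecture, the snippet below creates an S3 bucket for app data and uploads a backup file. S3 bucket names are globally unique, so my-startup-app-data is a placeholder.

```python
import boto3

s3 = boto3.client("s3", region_name="sa-east-1")

# Bucket names are globally unique; this one is a placeholder.
bucket = "my-startup-app-data"
s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={"LocationConstraint": "sa-east-1"},
)

# Store app data and backups as objects.
s3.upload_file(
    "backup-2025-01-01.tar.gz",          # local file (assumed to exist)
    bucket,
    "backups/backup-2025-01-01.tar.gz",  # object key
)
```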
For the private model, consider a financial institution with critical workloads that must ensure security and regulatory compliance to process sensitive financial data and meet local regulations. It could, of course, build its own data center for full management. By using AWS, however, it can address security and regulatory compliance with a range of security, monitoring, and management services. For example, it can use AWS Outposts, which brings AWS infrastructure into private data centers, and AWS Direct Connect, which provides a dedicated, secure connection between the local data center and the AWS cloud, along with other services for additional security layers. The institution benefits from full control over its infrastructure and data while ensuring compliance with regulatory standards.
For the hybrid cloud, the customer can use AWS cloud resources in addition to internal workloads in a private model. Many organizations choose to reduce the size of local data centers by migrating large volumes of data to the public cloud, cutting costs, and transitioning smoothly while experimenting with scalable resources in the public cloud. For instance, consider a healthcare company. A hospital stores highly sensitive patient information but also needs scalability to manage online scheduling systems and remote access for doctors. It can use AWS Outposts for secure storage of critical data locally and Amazon EC2 for processing less sensitive data in the public cloud. The benefits include not only secure storage for critical information but also scalability for online applications.
The community cloud model, which I haven’t seen widely discussed in my research, can be particularly relevant in the public sector. Imagine several public universities needing to share technological infrastructure for academic management systems while adhering to strict data privacy and government regulations. In this case, Amazon WorkSpaces, shared virtual desktops for remote access, and multiple Amazon VPCs (Virtual Private Cloud) for securely segmenting communication between different universities could be used. The benefits include efficient sharing of technological resources and compliance with educational regulations.
Finally, we see a new cloud model emerging: multi-cloud. This model allows organizations to leverage the best features of each provider in a complex, highly available, and customized environment. However, cost and operational management can also grow in complexity. A prime example might be a large e-commerce company needing high availability, redundancy, and cost optimization. To achieve this, it uses different cloud providers for different functions. For example, it could use AWS (Amazon S3) to store product images and videos and Google Cloud (BigQuery) to analyze large volumes of data. The benefits include risk reduction by diversifying providers and the freedom to choose the best service for each specific need.
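A hedged sketch of what multi-cloud looks like in code: the same application reads media from Amazon S3 and runs analytics on Google BigQuery, each through its own SDK and credentials. The bucket, object key, and dataset names are all hypothetical.

```python
import boto3
from google.cloud import bigquery  # pip install google-cloud-bigquery

# Provider 1 (AWS): serve product media from S3.
s3 = boto3.client("s3")
media = s3.get_object(Bucket="shop-media", Key="products/sku-123.jpg")  # hypothetical
print("Image size:", media["ContentLength"], "bytes")

# Provider 2 (Google Cloud): analyze sales data in BigQuery.
bq = bigquery.Client()
query = """
    SELECT product_id, SUM(quantity) AS units_sold
    FROM `shop_analytics.orders`  -- hypothetical dataset.table
    GROUP BY product_id
    ORDER BY units_sold DESC
    LIMIT 10
"""
for row in bq.query(query).result():
    print(row.product_id, row.units_sold)
```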
Do you now see the importance of understanding these concepts?
AWS Global Infrastructure and Its Key Services
The cloud is based on an on-demand service delivery model: organizations configure their systems through service customizations and pay only for what they use (pay-as-you-go). This approach simplifies cost management as well as the business planning of lean companies. In fact, all businesses, not just startups and mid-sized companies, are focused on cost efficiency, a critical factor for the sustainability of any enterprise.
We can therefore recognize the cloud’s ability to provide essential services as a crucial factor for organizations. The physical infrastructure, including data centers and network connectivity, serves as the essential foundation for any cloud application. In AWS, this structure is represented by the AWS Global Infrastructure, which consists of Regions and Availability Zones (AZs).
The concepts of data redundancy and availability are made possible by an infrastructure with multiple available servers. Events such as natural disasters and local accidents are increasingly likely, especially given the climate conditions of the last decade. To address this, AWS organizes its data centers into Availability Zones and Regions connected by dedicated links, ensuring low latency and high connection speeds.
An Availability Zone (AZ) consists of one or more discrete data centers, each equipped with redundant power, networking, and connectivity. Locally, this redundant architecture performs very well, but localized disasters can still occur. Therefore, the concept of Regions expands protection by connecting multiple AZs via dedicated links. This more robust cluster is referred to as an AWS Region.
Organizations can design architectures by selecting a Region and AZs. Additionally, a multi-AZ approach is often applied across multiple resources to enhance protection against downtime and other potential issues.
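Here is a minimal sketch of that multi-AZ idea: one VPC with a subnet in each of two Availability Zones of the South America (São Paulo) Region, so an application tier can survive the loss of a single AZ. The CIDR blocks are illustrative.

```python
import boto3

ec2 = boto3.client("ec2", region_name="sa-east-1")

# One VPC spanning the Region...
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# ...with a subnet in each of two Availability Zones, so instances can be
# spread across independent failure domains.
for az, cidr in [("sa-east-1a", "10.0.1.0/24"), ("sa-east-1b", "10.0.2.0/24")]:
    subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock=cidr, AvailabilityZone=az)
    print("Created subnet", subnet["Subnet"]["SubnetId"], "in", az)
```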
To choose the right Region based on business and functional requirements, AWS recommends evaluating four key aspects (AWS Technical Essentials — Part 1):
Compliance: Where should I store my data according to my country’s regulations?
Latency: Which Region makes the most sense for my data? What is the latency tolerance of my workload?
Cost: Consider the regional pricing differences, as costs may vary depending on local taxes and fees.
Service Availability: Does the desired Region offer the AWS resources and services required for my workload? Some services may not be available in specific Regions.
Currently, AWS operates 34 Regions with 108 Availability Zones distributed worldwide, in addition to Points of Presence (PoPs) and Edge Caches.
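The service availability aspect can even be checked programmatically: the boto3 SDK ships with endpoint data describing which Regions each service is offered in. A quick sketch (the SDK's endpoint list is a reasonable proxy, though the AWS Regional Services page remains the authoritative source):

```python
import boto3

session = boto3.session.Session()

# Which Regions offer a given service in the standard AWS partition?
for service in ["ec2", "s3", "sagemaker"]:
    regions = session.get_available_regions(service)
    print(f"{service}: available in {len(regions)} Regions")

# Is my target Region on the list?
print("sa-east-1 has SageMaker:",
      "sa-east-1" in session.get_available_regions("sagemaker"))
```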
It’s Cloud, but Shared Responsibilities Are Essential
In the AWS Cloud, managing security and compliance follows a shared responsibility model between AWS and the customer. This model clearly defines which responsibilities belong to AWS and which fall to the customer. Typically, this division is explained as security of the cloud (AWS’s responsibility) and security in the cloud (customer’s responsibility).
In summary, the model divides responsibilities as follows:
AWS: Responsible for the security of the cloud (physical infrastructure, hardware, software, network).
Customer: Responsible for security in the cloud (permission configuration, user management, stored data).
The idea is that both AWS and the customer share responsibility for managing their hosted solutions. AWS is accountable for the physical and virtualization layers as the data center provider, while the customer must monitor and manage their data.
Regarding security, the physical infrastructure where the cloud operates, the global backbones, and the software powering AWS services — including databases, compute, storage, and networking — are AWS’s responsibility. Virtualization itself also falls under AWS’s purview. AWS must ensure that machines and software remain up-to-date, eliminating concerns for customers on this front.
On the other hand, customers are responsible for security in the cloud. This includes activities such as:
Applying patches to the operating systems (OS) of their virtual machines.
Configuring security resources, such as firewalls and Anti-DDoS protections.
Segmenting networks using VPCs (Virtual Private Clouds).
Implementing access controls.
Setting up data encryption both at rest and in transit.
Everything requiring manual configuration by customers needs to be carefully monitored, and unused ports must be closed, to ensure effective cloud security.
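Two of those customer-side responsibilities, sketched in code: restricting a security group (an instance-level firewall) to HTTPS only, and enabling default encryption at rest on an S3 bucket. The security group ID and bucket name are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="sa-east-1")
s3 = boto3.client("s3")

# Security in the cloud, part 1: allow only HTTPS into this security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS only"}],
    }],
)

# Part 2: encrypt all new objects in this bucket at rest by default.
s3.put_bucket_encryption(
    Bucket="my-app-data",  # placeholder bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}
        }]
    },
)
```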
See the diagram below for a visual representation of these aspects.
The Impact of 2025 Technology Trends on AWS and the Cloud Ecosystem
According to Gartner, in its report Top 10 Strategic Technology Trends for 2025, the ten trends are: Agentic AI, Post-Quantum Cryptography, Spatial Computing, AI Governance Platforms, Ambient Invisible Intelligence, Polyfunctional Robots, Disinformation Security, Energy-Efficient Computing, Neurological Enhancement, and Hybrid Computing.
These trends are organized into three themes: AI Imperatives and Risks, New Frontiers of Computing, and Human-Machine Synergy.
The key trends involving artificial intelligence, such as the use of AI agents and AI governance platforms, along with energy-efficient computing, have direct connections to Cloud Computing. The cloud will become an enabler for many of these trends. Through cloud computing and the resource-sharing model, green cloud initiatives make it possible today to optimize energy consumption and physical resource usage across organizations, adding sustainability to this technology. It’s no coincidence that Sustainability has become the 6th pillar of the AWS Well-Architected Framework. This sustainability pillar aims to reduce the environmental impact of cloud workloads. It addresses shared responsibility, understanding environmental impact, and optimizing resource usage to minimize waste and reduce harm to the environment.
AWS has played a crucial role in enabling emerging technology trends, and predictions for 2025 promise to accelerate this synergy. Some key areas of focus include:
Artificial Intelligence and AI Governance: With the rise of AI agents and governance platforms, services such as Amazon SageMaker and AWS AI & Machine Learning become essential. They provide scalability for building and training models, along with features to ensure compliance and ethical standards in AI applications.
Energy-Efficient Computing: AWS leads sustainable computing initiatives with the Sustainability pillar of the Well-Architected Framework. This ensures that companies can operate environmentally responsibly, aligning with global ESG (Environmental, Social, and Governance) goals.
Spatial and Hybrid Computing: Technologies such as AWS Wavelength and Outposts bridge local and cloud computing, enabling augmented reality (AR) and virtual reality (VR) applications to deliver low-latency immersive experiences.
AWS continues to drive innovation and sustainability, ensuring its services remain aligned with global technological advancements and the growing needs of modern businesses.
Practical Tips for Small and Medium-Sized Businesses Starting Their Cloud Journey
Small and medium-sized businesses (SMBs) can greatly benefit from cloud computing, but they often hesitate to adopt this technology due to a lack of knowledge or fear of initial costs. To ease the transition, here are some practical tips:
Start Small: Identify a low-criticality application or process to move to the cloud. This allows you to understand the benefits without risking your core operations.
Leverage AWS Free Tier: Use free-tier services to explore the platform and test ideas without additional costs. Services like Amazon S3, EC2, and RDS offer free-tier options ideal for prototyping.
Implement Backup and Recovery: Set up a backup plan using Amazon S3 or S3 Glacier. This ensures your data is protected and ready for recovery in case of failures (a sketch of a lifecycle rule follows this list).
Educate Your Team: Invest in basic cloud training for your employees. Platforms like AWS Skill Builder offer free courses for beginners. It’s also important to develop technical and cultural skills in DevOps and FinOps to automate pipelines and optimize cost management as part of your company culture.
Monitor Costs: Use tools like AWS Cost Explorer to track spending and avoid surprises. Set up alerts and budgets to stay in control. Once again, FinOps plays a significant role in this area.
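Hedged sketches for tips 3 and 5: an S3 lifecycle rule that moves backups to the Glacier storage class after 30 days, and a Cost Explorer query that reports one month's spend. The bucket name and dates are illustrative.

```python
import boto3

s3 = boto3.client("s3")
ce = boto3.client("ce")  # Cost Explorer

# Tip 3 (backup): transition objects under backups/ to Glacier after 30 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-smb-backups",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-backups",
            "Status": "Enabled",
            "Filter": {"Prefix": "backups/"},
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }]
    },
)

# Tip 5 (cost monitoring): total unblended cost for one month.
result = ce.get_cost_and_usage(
    TimePeriod={"Start": "2025-01-01", "End": "2025-02-01"},  # illustrative
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
)
for period in result["ResultsByTime"]:
    print(period["TimePeriod"], period["Total"]["UnblendedCost"]["Amount"], "USD")
```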
By following these steps, your company will be well-prepared to harness the benefits of the cloud seamlessly, paving the way for innovation and growth.
An Invitation to Share Knowledge!
Cloud Computing is more than just a trend — it’s a technological revolution shaping the present and defining the future of business and innovation. With AWS, the possibilities are nearly limitless, ranging from the digital transformation of companies to empowering individuals to master new tools and skills.
Studying and diving deeper into this field not only opens doors in the job market but also positions professionals and organizations ahead in an increasingly competitive and dynamic environment. The demand for cloud specialists is growing exponentially, with opportunities in areas such as security, application development, data analysis, and artificial intelligence.
For those who still have their systems hosted on-premises, transitioning to the cloud isn’t just a technological evolution, but a strategic decision that can reduce costs, increase efficiency, and enable scalability.
If you already understand the importance of this transformation, share this knowledge! Encourage young talents to explore the possibilities offered by the cloud and support traditional businesses in taking this crucial step towards digital transformation.
Whether you’re starting out or expanding your cloud journey, understanding the fundamentals and leveraging available innovations can be the key to success.
Want to learn more or share your experience? Let’s connect and build a more connected, sustainable, and innovative future together!
Purple Pills:
Access the AWS Technical Essentials course on the AWS Skill Builder portal to learn all the cloud details:
AWS Technical Essentials Course
Check out Gartner’s report on the Top 10 Tech Trends for 2025:
Gartner Top Tech Trends 2025