When I set out to build my gas emissions monitoring system, I faced a choice. I could build the system the "textbook" way using AWS IoT Core, MQTT protocols, and device shadows. Or, I could build it the "Ship It" way using AWS API Gateway, Lambda, and Amplify.
The textbook approach is robust. However, I had strict time constraints and only one gas sensing device to configure. I chose the "Ship It" way.
In this guide, I will show you how to build a serverless architecture to monitor IoT devices. We will skip the complex protocols and create a frontend dashboard using standard web tools.
Prerequisites:
- A configured gas sensing unit: This guide assumes your ESP32 controller is already reading the sensor and is ready to transmit data.
- An AWS Account: You will need complete CLI and console access.
- Basic Knowledge: You should understand APIs and how serverless functions work.
Architecture Diagram
Lane 1: The Machine Lane
First, we need to build the backend brain of our application. To reduce maintenance and keep costs low, we will use AWS Lambda. When this function receives data from our API, it will write it to our NoSQL database in DynamoDB. It will also serve that data back to our frontend application.
Step 1: Create the Database
Log in to the AWS Management Console. Type “DynamoDB” in the search bar and click on the service.
On the right side of the dashboard, click Create table. You will see a table creation interface. DynamoDB is schemaless, but you must specify a primary key. Enter the following details:
- Partition Key: device_id (String). This groups your data by device.
- Sort Key: timestamp (String). This orders your data by time.
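To make these keys concrete, here is a hypothetical item shaped the way the Lambda code in Step 3 will write it; the device ID, readings, and location are made up:

// A hypothetical item; the field names match the Lambda code in Step 3
const sampleItem = {
  device_id: "esp32-unit-01",            // partition key: groups readings by sensor
  timestamp: "2025-01-15T10:32:00.000Z", // sort key: orders readings chronologically
  CO: 4.2,
  CO2: 412.7,
  location: "Lagos"                      // made-up location
};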
Even though DynamoDB handles unstructured data, defining these keys makes it cheaper to query your table later. This guide from the AWS documentation explains how choosing the right keys can save you money on query costs.
Leave all other settings as default and click “Create table.” Take note of the region your table is in because you will need it later. After a few minutes, your table will appear with an Active status.
Step 2: Create a Lambda Function
AWS Lambda is responsible for the ingestion and read logic of your architecture. It allows us to send and query data without managing servers. To create a function, search for “Lambda” in the search bar and open it in a new tab.
You will be redirected to the Functions page. Click on “Create function” in the top-right corner of that page.
You can author your function from scratch, use a blueprint, or use a container image. We will keep things simple and write our function from scratch, so keep “Author from scratch” selected.
Next, fill in the basic information for your function.
When naming your Lambda function, choose a name that accurately describes what the code does; in this guide, the function is called EmissionDataProcessor. We will use Node.js as our language, although AWS Lambda supports many other programming languages.
For the architecture, we will stick with the default x86_64 option, which is the safest standard choice. If you are curious about the difference between arm64 and x86, this article by AWS explains the use cases perfectly.
Leave the durable execution setting unchecked and click “Create function.” You should now see your function listed on the Functions page.
Step 3: Write the Code to Run the Function
Enter the console of your function and navigate to the “Code” tab. Replace the existing code with the following block:
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, ScanCommand, PutCommand } from "@aws-sdk/lib-dynamodb";

const client = new DynamoDBClient({ region: "eu-north-1" });
const docClient = DynamoDBDocumentClient.from(client);

export const handler = async (event) => {
  const tableName = "VehicleEmissionData";

  // 1. Identify what kind of request this is (GET or POST)
  const method = event.httpMethod || (event.requestContext && event.requestContext.http ? event.requestContext.http.method : null);
  console.log("Incoming Request Method:", method);

  const headers = {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "Content-Type",
    "Access-Control-Allow-Methods": "GET, POST, OPTIONS"
  };

  try {
    // SCENARIO 1: WRITING DATA (POST) - For ESP32
    if (method === 'POST') {
      if (!event.body) {
        throw new Error("No body found in the request");
      }

      const body = typeof event.body === 'string' ? JSON.parse(event.body) : event.body;
      console.log("Received Data to Write:", body);

      const newItem = {
        device_id: body.device_id || "UNKNOWN_DEVICE",
        timestamp: new Date().toISOString(), // Server-side timestamp is safer
        CO: Number(body.CO),
        CO2: Number(body.CO2),
        location: body.location || "Unknown"
      };

      const command = new PutCommand({
        TableName: tableName,
        Item: newItem
      });
      await docClient.send(command);

      return {
        statusCode: 200,
        headers: headers,
        body: JSON.stringify({ message: "Data saved successfully!", item: newItem })
      };
    }
    // SCENARIO 2: READING DATA (GET) - For React App
    else {
      const command = new ScanCommand({ TableName: tableName });
      const data = await docClient.send(command);

      return {
        statusCode: 200,
        headers: headers,
        body: JSON.stringify(data.Items)
      };
    }
  } catch (err) {
    console.error("Error:", err);
    return {
      statusCode: 500,
      headers: headers,
      body: JSON.stringify({ error: err.message })
    };
  }
};
In this block of code, we use a simple if/else statement to control the architectural flow. This acts like a traffic cop, sending POST requests one way and GET requests another.
When we get a POST request, we read the data and add our own timestamp. We do this because the clocks on small devices are often wrong. This guide on IoT time synchronization explains why you should not trust them. Once the data is ready, we save it to the database.
The else block handles GET requests from our frontend dashboard. Here, we grab every record in the table and send it back as a list. It is a simple way to dump all your data straight to the user.
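Once the permissions from Step 4 are in place, you can check this branching from the Lambda console’s Test tab without any API in front of it. A minimal test event like this sketch (the reading is made up) exercises the POST path; deleting the requestContext block sends the same event down the GET branch instead:

{
  "requestContext": { "http": { "method": "POST" } },
  "body": "{\"device_id\":\"esp32-unit-01\",\"CO\":4.2,\"CO2\":412.7,\"location\":\"Lagos\"}"
}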
Step 4: Grant Read and Write Permissions to Lambda
For our sensor data to be stored in our database, we need to give the Lambda function permission to write to and read from DynamoDB.
To change the permission settings on your function, click the “Configuration” tab in your Lambda function and select “Permissions” in the left navigation bar.
Click the role name under “Execution role”. This redirects you to the AWS IAM console. Click the “Add permissions” dropdown and select “Attach policies.”
Search for AmazonDynamoDBFullAccess in the new tab that opens, check the box next to the policy name, and click “Add permissions”. Your Lambda function now has full access to DynamoDB.
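AmazonDynamoDBFullAccess is the quickest option for a prototype, but it grants far more access than this function needs. If you later want to tighten it, a custom policy scoped to just our table would look roughly like this; the account ID is a placeholder:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["dynamodb:PutItem", "dynamodb:Scan", "dynamodb:Query"],
      "Resource": "arn:aws:dynamodb:eu-north-1:123456789012:table/VehicleEmissionData"
    }
  ]
}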
Step 5: Create the API with API Gateway
APIs allow different software to communicate and exchange data. Here, the API we create will let our IoT device send readings to our Lambda function and let the web dashboard read that data back.
To begin, search for API Gateway in the console. In the API Gateway interface, click “Create an API.”
After clicking that, you will see a list of API types that AWS offers. Click “Build” on the HTTP API, as that is what we will be using here.
Next, name your API and leave the endpoint type as IPv4. As good practice, your API name should describe what you will use it for.
In the Integrations section, click “Add integration” and select Lambda. Choose your function (EmissionDataProcessor) from the list.
Now, we need to add routes to tell our API which operations it supports.
Click “Next” to open the Configure routes interface. Click “Add route” and use the method dropdown to configure two routes pointing at your Lambda integration: a GET route and a POST route.
The GET route gives web users access to the sensor data from the frontend dashboard. The POST route allows the device to write to our database. The Resource Path is essentially the street address for a specific set of actions within your API.
It tells API Gateway which resource the client wants to interact with. For example, in the path /EmissionDataProcessor, the resource is the sensor data itself. Click “Next” to review and create your API.
To connect your IoT device, you need the Invoke URL. You can find this URL on the main API page under the "Stages" section. Copy this URL and add it to your Arduino code.
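Before wiring up the device, you can smoke-test the API from a Node.js (18+) script. This is a sketch, not part of the project: swap in your own Invoke URL, and note that the sample reading is made up.

// Your Invoke URL plus the resource path you configured in the routes step
const API_URL = "https://your-api-id.execute-api.eu-north-1.amazonaws.com/EmissionDataProcessor";

// POST a fake reading, the way the ESP32 will
const postRes = await fetch(API_URL, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ device_id: "esp32-unit-01", CO: 4.2, CO2: 412.7, location: "Lagos" })
});
console.log(await postRes.json());

// GET every stored reading, the way the dashboard will
const getRes = await fetch(API_URL);
console.log(await getRes.json());

If both calls print sensible JSON, the whole machine lane (API Gateway, Lambda, and DynamoDB) is working end to end.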
Lane 2: The Human Lane (Velocity via Amplify)
The end goal of this project is to create a human-readable dashboard available to users via the web. This dashboard will read data live from our database and display it beautifully with the help of our API.
To create and deploy this dashboard, we will use the React app in this GitHub repository. Clone the repository to your local machine. I recommend using a Linux shell with the AWS CLI configured. This article by AWS teaches you how to securely configure the AWS CLI.
Understanding the Project Layout
Before we deploy, it helps to understand what we are building. Here is the layout of our application:
emissions-dashboard/
├── public/
│   ├── index.html
│   └── favicon.ico
├── src/
│   ├── components/
│   │   ├── Header.js
│   │   ├── SummaryCards.js
│   │   ├── Charts.js
│   │   ├── ForecastCard.js
│   │   └── DataTable.js
│   ├── services/
│   │   └── dataService.js        # DynamoDB connection logic
│   ├── utils/
│   │   ├── statusCalculator.js   # Emission status logic
│   │   └── constants.js          # Thresholds, colors, etc.
│   ├── styles/
│   │   └── styles.js             # Shared styles
│   ├── App.js                    # Main component
│   ├── index.js
│   └── aws-exports.js            # Auto-generated by Amplify
├── amplify/
│   └── (Amplify configuration files)
├── .env.local                    # Local environment variables
├── .gitignore
├── package.json
└── README.md
Two files here are critical: dataService.js handles fetching the emission data from our API, and statusCalculator.js determines whether the emission levels are safe.
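For context, here is a minimal sketch of what the fetch logic in dataService.js looks like. The environment variable name REACT_APP_API_URL is an assumption for illustration; check the repository for the exact name the build expects:

// src/services/dataService.js (sketch)
// REACT_APP_API_URL is an assumed name; Create React App inlines
// variables with the REACT_APP_ prefix at build time.
const API_URL = process.env.REACT_APP_API_URL;

export async function fetchEmissionData() {
  const response = await fetch(API_URL);
  if (!response.ok) {
    throw new Error(`API request failed: ${response.status}`);
  }
  // The Lambda GET branch returns the list of DynamoDB items as JSON
  return response.json();
}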
After cloning the application, install its dependencies and confirm it runs locally with the commands below. You should see the dashboard in your browser.
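npm install
npm start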
Step 6: Deploy with AWS Amplify
Now we need to get this application online. We will use the AWS Amplify CLI to host our frontend. This tool does more than just host files. It also distributes your site across a global content delivery network (CDN), Amazon CloudFront, ensuring your dashboard loads quickly for users anywhere in the world.
To begin, open your terminal and install the Amplify CLI globally. Follow this Amplify documentation to install and configure it on your local machine.
Once it is installed, initialize the project in your terminal:
amplify init
The CLI will ask you a series of questions to understand your project. You can answer them as follows:
? Enter a name for the project: emissionsdashboard
? Initialize the project with the above configuration? No
? Enter a name for the environment: dev
? Choose your default editor: Visual Studio Code (or your preference)
? Choose the type of app that you're building: javascript
? What javascript framework are you using: react
? Source Directory Path: src
? Distribution Directory Path: build
? Build Command: npm run build
? Start Command: npm start
? Select the authentication method you want to use: AWS profile
? Please choose the profile you want to use: default
After that, you should see:
✔ Successfully created initial AWS cloud resources for deployments.
✔ Initialized provider successfully.
Next, we tell Amplify how we want to host the site. Run the following command:
amplify add hosting
Select Hosting with Amplify Console (Managed hosting with custom domains, Continuous Deployment) and choose Manual deployment for now.
Finally, push your application to the cloud:
amplify publish
This command runs your build script, uploads the files to an S3 bucket, and deploys them to the content delivery network. After a few minutes, the terminal will display your live URL.
After waiting 2-5 minutes, you should see:
✔ Deployment complete!
https://dev.xxxxxxxx.amplifyapp.com
View your project in the Amplify Console with the command:
amplify console
Your live application needs to know where to find your API. In local development, we used a .env file, but in the cloud, we must set this manually. Open your app in the Amplify Console and click “Environment variables” in the side menu.
Click “Manage variables” and add a new variable for your API endpoint.
For the value, paste your API Gateway Invoke URL. The variable name must match exactly what your code reads (in the dataService.js sketch above we assumed REACT_APP_API_URL). This ensures that when Amplify builds your app in the cloud, it knows exactly where to send your API requests.
Step 7: Set Up Continuous Deployment With Git
Manually running amplify publish every time you change a line of code is tedious. It is also error-prone. In a professional environment, we use Continuous Deployment (CD). This means that every time you save changes to your code repository, AWS automatically detects the update, builds the new version, and deploys it for you.
This guide on CI/CD best practices explains why automating your deployment pipeline is the single best thing you can do for your team's velocity.
First, initialize a Git repository in your project folder:
git init
git add .
git commit -m "<commit message>"
Next, create a new repository on GitHub. Then, link your local project to GitHub and push your code:
git remote add origin https://github.com/yourusername/emissions-dashboard.git
git branch -M main
git push -u origin main
Step 8: Connect GitHub to Amplify Console
Now we will tell AWS to listen to that GitHub repository.
- Go back to the AWS Amplify Console.
- Find your app (emissionsdashboard) and open it.
- Navigate to the “Hosting environments” tab and click “Connect branch.”
- Select GitHub as your source code provider and authorize AWS to access your account.
Amplify usually detects the settings automatically, but you should double-check the Build settings section. Ensure the Build output directory is set to build. If it is set to /, your deployment will fail because Amplify won't know which folder contains your website files.
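For reference, the auto-generated build spec for a React app usually looks roughly like the sketch below. Treat it as an illustration of the defaults, not the exact file in your console; the important line is baseDirectory:

version: 1
frontend:
  phases:
    preBuild:
      commands:
        - npm ci
    build:
      commands:
        - npm run build
  artifacts:
    baseDirectory: build
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*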
Click “Save and deploy.” AWS will now trigger a build. You can watch the progress bars as it provisions a server, downloads your code, builds the React app, and deploys it. Once the circles turn green, your automated pipeline is live.
How I Would Optimize This for Production
I built this system to work fast, not to last forever. The "Ship It" approach is perfect for prototypes or hackathons, but if I were tasked with scaling this for a paying client, I would make three major changes.
- Closing the Open Door
In the Lambda code, I set the CORS header to *. This is the digital equivalent of leaving your front door wide open: any website on the internet can send requests to my API. In a real production environment, I would lock this down to allow requests only from my specific Amplify domain (see the sketch after this list).
- Checking ID Cards
Right now, my API is public. Anyone with the URL can send fake data to my database or read my sensor history. To secure this, I would implement AWS IAM authentication or API Keys. This ensures that only trusted devices and users can interact with the system. For an extra layer of defense, I might even add a Web Application Firewall (WAF) to block bots and malicious traffic, but checking IDs comes first.
- Escaping the "Scan" Trap
I used ScanCommand to read my data. This works fine for a few hundred records, but it reads every single item in the database every time the dashboard loads. If this sensor ran for a year, that table would grow to millions of records, and a Scan on a table that size would be incredibly slow and expensive. This breakdown of DynamoDB pricing models highlights why I would switch to QueryCommand and use proper indexes for a long-term solution (see the sketch after this list).
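To make the first and third fixes concrete, here is a minimal sketch of how they would land in the Step 3 handler. The Amplify domain, device ID, and date cutoff are placeholders, not values from this project:

import { QueryCommand } from "@aws-sdk/lib-dynamodb";

// Fix 1: allow only the deployed dashboard's origin instead of "*"
const headers = {
  "Access-Control-Allow-Origin": "https://dev.xxxxxxxx.amplifyapp.com", // placeholder domain
  "Access-Control-Allow-Headers": "Content-Type",
  "Access-Control-Allow-Methods": "GET, POST, OPTIONS"
};

// Fix 3: query one device's recent readings instead of scanning the whole table.
// This works because device_id is the partition key and timestamp is the sort key.
// "timestamp" is a DynamoDB reserved word, so it needs an attribute-name alias.
const command = new QueryCommand({
  TableName: "VehicleEmissionData",
  KeyConditionExpression: "device_id = :id AND #ts >= :since",
  ExpressionAttributeNames: { "#ts": "timestamp" },
  ExpressionAttributeValues: {
    ":id": "esp32-unit-01",              // placeholder device
    ":since": "2025-01-15T00:00:00.000Z" // e.g. only readings from a given day
  },
  ScanIndexForward: false // newest readings first
});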
For now, though, the dashboard is live. I built something from scratch that actually works, and that is the most important step in engineering.