
Self-Hosted Vessel Email Alerts with AWS Lambda and SES

The thing you actually want is an email.

Not a dashboard you have to remember to open. Not a webhook you have to write a server for. An email — the kind your bank sends when a charge clears, the kind your airline sends when the gate changes — that quietly arrives in your inbox and tells you the Ever Given just entered Suez, or that one of your chartered tankers' ETA has shifted by six hours.

VesselAPI sends notifications as webhooks and WebSocket messages. That's the right contract for software, the wrong contract for humans. Webhooks are how systems talk. Email is how systems talk to people. The translation layer between them — the thing that turns a webhook stream into vessel port arrival and departure email notifications, the sort of thing your inbox already knows how to thread, sort, and search — is small enough to read in one sitting, which is what this post is.

  1. The Architecture in One Diagram
  2. Step 1: Verify an Email Address in SES
  3. Step 2: Scaffold the Project
  4. The Two Non-Obvious Parts
  5. Step 3: Deploy with SAM
  6. Step 4: Register the Webhook
  7. Step 5: Send a Test Event
  8. When It Doesn't Work the First Time
  9. What This Costs
  10. Going to Production
  11. What Comes Next

You'll want these installed before starting:

  • AWS CLI — for SES identity verification and CloudWatch log reads.
  • AWS SAM CLI — builds and deploys the stack.
  • Python 3.12 — matches the Lambda runtime; needed if you want to run the example repo's test suite locally.
  • AWS credentials configured for your target region (aws configure or env vars).
  • A VesselAPI key on a plan that includes notifications (see the pricing CTA below).

🤖 Setup with Claude Code

Or paste this into a Claude Code session and let it sort the local toolchain out:

check whether the AWS CLI, AWS SAM CLI, and Python 3.12 are installed, and that my AWS credentials are configured for us-east-1 — install or fix anything that's missing

Full source for this post: vessel-api/vessel-email-alerts-aws-sam on GitHub →

The Architecture in One Diagram

vesselapi notification
   |  POST (HMAC-signed JSON)
   v
API Gateway (HTTP API)
   |
   v
Lambda
   |-- verify HMAC signature
   |-- DynamoDB: have we seen this delivery_id before?
   |-- render email (per event type)
   `-- SES: SendEmail
   |
   v
Your inbox

Five components:

  • API Gateway HTTP API is the public URL VesselAPI POSTs to. The HTTP API flavor (not REST API) is roughly a third of the price and supports everything we need for one route.
  • Lambda does the work: verify, deduplicate, render, send.
  • DynamoDB stores delivery IDs we've already processed so retries from VesselAPI don't double-send. One row per event, 24-hour TTL.
  • SES sends the mail.
  • IAM glues the function's permissions together.

The function only runs when a webhook arrives, and at idle the whole thing costs zero.

A note before we start: the events VesselAPI emits — port arrival, port departure, ETA changed, geofence enter/exit — are derived from AIS, not ground truth. AIS-reported destinations and ETAs are entered by the bridge and are sometimes stale, abbreviated ("FOR ORDERS"), or misspelled. Geofence events apply hysteresis to avoid edge-flapping, but you'll still see the occasional false positive — a vessel drifting past a polygon edge in a busy anchorage, a duplicate broadcast from two coastal stations, an ETA tweak that flips back two minutes later. Worth knowing if you're tuning thresholds for an operational workflow; for "tell me when this ship enters that port," the defaults are fine.
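If you do tune, the cheapest guardrail is a pre-filter in the handler, applied before rendering, with an early 200 return for filtered events. A minimal sketch; the threshold, its value, and the worth_emailing name are illustrative assumptions, not anything VesselAPI ships:

MIN_ETA_SHIFT_MINUTES = 30  # assumption -- tune to your workflow


def worth_emailing(evt: dict) -> bool:
    # Hypothetical guard: skip ETA-change emails below the floor.
    # Field names match the webhook payload rendered later in this post.
    if evt["type"] == "eta.eta_changed":
        return abs(evt["data"]["etaChange"]["shiftMinutes"]) >= MIN_ETA_SHIFT_MINUTES
    return True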

Need a VesselAPI key? See the pricing & plans

Step 1: Verify an Email Address in SES

🤖 Doing it with Claude Code

Skip the console. Ask:

verify the SES identity me@example.com in us-east-1, then poll until it's confirmed

Claude runs aws ses verify-email-identity, then loops on aws ses get-identity-verification-attributes until the status flips to Success.

In the SES console, pick your region (we'll use us-east-1 throughout — change to taste), open Verified identities, click Create identity, and create an Email address identity. AWS sends a confirmation email; click the link.

That's it. We're using a verified email identity rather than a domain because it's instant — no DNS to wait on. For production you'll want a domain identity with DKIM, but for alerts to your own inbox an email identity is fine.
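If you'd rather script it than click through the console, the same verification is two boto3 calls. A sketch, assuming us-east-1 and your own address:

import time

import boto3

ses = boto3.client("ses", region_name="us-east-1")
email = "me@example.com"  # your address

# Sends the confirmation email to the address above.
ses.verify_email_identity(EmailAddress=email)

# Poll until you've clicked the link and the status flips to Success.
while True:
    attrs = ses.get_identity_verification_attributes(Identities=[email])
    status = attrs["VerificationAttributes"].get(email, {}).get("VerificationStatus")
    print("status:", status)
    if status == "Success":
        break
    time.sleep(10)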

Step 2: Scaffold the Project

The project layout:

vessel-email-alerts/
├── template.yaml
└── src/
    ├── handler.py
    ├── render.py
    └── requirements.txt

requirements.txt is empty — boto3 ships with the Lambda Python runtime, and we're not pulling in a templating library. Plain f-strings handle this.

Here's handler.py in full:

import os
import json
import hmac
import hashlib
import base64
import logging
import boto3
from datetime import datetime, timezone, timedelta
from botocore.exceptions import ClientError

from render import render_email

logger = logging.getLogger()
logger.setLevel(logging.INFO)

# Module-scope clients are reused across warm invocations -- one TLS
# handshake per container, not one per request.
ses = boto3.client("ses")
ddb = boto3.client("dynamodb")

WEBHOOK_SECRET = os.environ["WEBHOOK_SECRET"].encode()
TO_ADDRESS = os.environ["TO_ADDRESS"]
FROM_ADDRESS = os.environ["FROM_ADDRESS"]
IDEMPOTENCY_TABLE = os.environ["IDEMPOTENCY_TABLE"]


def lambda_handler(event, _context):
    headers = {k.lower(): v for k, v in (event.get("headers") or {}).items()}
    raw = event.get("body") or ""
    body: bytes = (
        base64.b64decode(raw) if event.get("isBase64Encoded") else raw.encode("utf-8")
    )

    if not _verify_signature(body, headers.get("x-signature-256", "")):
        return _resp(401, "invalid signature")

    delivery_id = headers.get("x-delivery-id")
    if not delivery_id:
        return _resp(400, "missing X-Delivery-ID")

    if not _claim_delivery(delivery_id):
        # Already processed -- ack so vesselapi stops retrying.
        return _resp(200, "duplicate")

    payload = json.loads(body)
    evt = payload["event"]
    subject, html, text = render_email(evt)

    try:
        ses.send_email(
            Source=FROM_ADDRESS,
            Destination={"ToAddresses": [TO_ADDRESS]},
            Message={
                "Subject": {"Data": subject},
                "Body": {
                    "Html": {"Data": html},
                    "Text": {"Data": text},
                },
            },
        )
    except Exception:
        # Release the claim so the next retry can try again. Without this,
        # any SES failure would silently lose the email.
        _release_delivery(delivery_id)
        raise

    return _resp(200, "sent")


def _verify_signature(body: bytes, sig_header: str) -> bool:
    if not sig_header.startswith("sha256="):
        return False
    expected = sig_header[len("sha256="):]
    mac = hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)


def _claim_delivery(delivery_id: str) -> bool:
    ttl = int((datetime.now(timezone.utc) + timedelta(hours=24)).timestamp())
    try:
        ddb.put_item(
            TableName=IDEMPOTENCY_TABLE,
            Item={
                "delivery_id": {"S": delivery_id},
                "expires_at": {"N": str(ttl)},
            },
            ConditionExpression="attribute_not_exists(delivery_id)",
        )
        return True
    except ClientError as e:
        if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False
        raise


def _release_delivery(delivery_id: str) -> None:
    try:
        ddb.delete_item(
            TableName=IDEMPOTENCY_TABLE,
            Key={"delivery_id": {"S": delivery_id}},
        )
    except ClientError as e:
        logger.error("failed to release claim (delivery_id=%s): %s", delivery_id, e)


def _resp(status: int, body: str):
    return {"statusCode": status, "body": body}

About a hundred lines, abridged a little for the post — the version in the example repo also has structured logging on every branch and a try/except around the JSON parse that returns 400 for malformed bodies. Two of the lines above are load-bearing: the compare_digest call and the _release_delivery call. They're the difference between a pipeline that's fine on the happy path and one that survives the unhappy paths. We'll come back to both.
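For reference, here's the abridged shape of the API Gateway (HTTP API, payload v2) event the handler receives, trimmed to the only keys it touches, values elided:

# What lambda_handler sees; everything else in the event is ignored.
sample_event = {
    "headers": {
        "x-signature-256": "sha256=…",  # hex HMAC of the raw body
        "x-delivery-id": "…",           # stable across retries -- the dedup key
    },
    "body": '{"event": {…}}',           # raw JSON exactly as vesselapi sent it
    "isBase64Encoded": False,
}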

And render.py: one renderer per event type, plus a small dispatcher. Three of the seven are shown here, for shape — port arrival, ETA change, and the generic fallback:

from datetime import datetime


def render_email(evt: dict):
    handler = _RENDERERS.get(evt["type"], _render_generic)
    return handler(evt)


def _vessel_label(v: dict) -> str:
    name = v.get("vesselName") or "Unknown vessel"
    imo = v.get("imo")
    return f"{name} (IMO {imo})" if imo else name


def _fmt_time(iso: str) -> str:
    try:
        return datetime.fromisoformat(iso.replace("Z", "+00:00")).strftime("%Y-%m-%d %H:%M UTC")
    except Exception:
        return iso


def _render_port_arrival(evt):
    v = _vessel_label(evt["vessel"])
    port = evt["data"]["portEvent"]["port"]
    when = _fmt_time(evt["timestamp"])
    subject = f"{v} arrived at {port['name']}"
    text = f"{v} arrived at {port['name']}, {port.get('country', '')} on {when}."
    html = f"<p><strong>{v}</strong> arrived at <strong>{port['name']}</strong>.</p><p>Reported {when}.</p>"
    return subject, html, text


def _render_eta_changed(evt):
    v = _vessel_label(evt["vessel"])
    change = evt["data"]["etaChange"]
    prev = _fmt_time(change["previousEta"])
    cur = _fmt_time(change["currentEta"])
    shift = change["shiftMinutes"]
    subject = f"{v} ETA shifted by {shift} min"
    text = f"{v} ETA changed: {prev} -> {cur} (shift: {shift} minutes)."
    html = f"<p><strong>{v}</strong> ETA shifted by {shift} minutes.</p><p>Previous: {prev}<br>Current: {cur}</p>"
    return subject, html, text


def _render_generic(evt):
    v = _vessel_label(evt["vessel"])
    subject = f"{v}: {evt['type']}"
    body = f"{evt['type']} event for {v} at {_fmt_time(evt['timestamp'])}."
    return subject, f"<p>{body}</p>", body


# The four renderers elided from this excerpt -- departure, destination
# changed, draught changed, geofence -- live in the example repo. Alias
# them to the generic renderer here so the file runs as-is.
_render_port_departure = _render_generic
_render_destination_changed = _render_generic
_render_geofence = _render_generic

_RENDERERS = {
    "port.arrival": _render_port_arrival,
    "port.departure": _render_port_departure,
    "eta.eta_changed": _render_eta_changed,
    "eta.destination_changed": _render_destination_changed,
    "position.geofence_enter": _render_geofence,
    "position.geofence_exit": _render_geofence,
}

The four renderers stubbed above (departure, destination changed, draught changed, geofence) get their full versions in the repo. Each pulls the relevant fields from evt["data"], formats them, and returns a (subject, html, text) triple. The structure that scales is the dispatcher dict; adding a Slack or Teams channel later is the same pattern with a different output format.

[Image: a sample vessel draught-changed alert email rendered in a mail client]

The Two Non-Obvious Parts

It would be tempting to skim past two of those lines. They're the lines that decide whether this thing works or quietly turns into a problem in three months.

HMAC Verification, and Why hmac.compare_digest

Every VesselAPI webhook arrives with an X-Signature-256 header: sha256= followed by the hex of HMAC-SHA256(webhook_secret, raw_request_body).

The instinct is to skip this. The Lambda is on HTTPS. The URL is unguessable. Why bother? Because the URL leaks. It ends up in CloudWatch logs, in someone's terminal scrollback, in screenshots, in an AWS console somebody screen-shared on a call. Anyone who knows the URL can POST a fake event to it. Without verification, your inbox is now a free email-sending service for whoever finds it.

The verification is two lines:

mac = hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
return hmac.compare_digest(mac, expected)

The reason it's hmac.compare_digest and not == is the load-bearing detail of this whole post. A naive == short-circuits at the first byte mismatch — meaning the comparison is faster when the strings differ early, slower when they differ late. That timing difference is enough for an attacker to recover the signature one byte at a time, given enough requests. compare_digest always takes the same amount of time regardless of where the strings diverge. Use it.

One byte-handling note: keep body as bytes end-to-end. hmac.new wants bytes, json.loads accepts bytes, and skipping the decode()/encode() round-trip saves you from a class of bugs that only show up when the webhook source ever sends non-UTF-8 content.
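A useful corollary: you can forge a valid header for any test body, which makes it easy to curl your own endpoint before VesselAPI is wired up. The same two calls, run sender-side (secret and body are placeholders):

import hashlib
import hmac

secret = b"<the same secret you passed to SAM>"  # placeholder
body = b'{"event": {"type": "port.arrival"}}'    # any raw bytes

# The value to send in the X-Signature-256 header for this exact body.
print("sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest())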

Idempotency, and What Happens When SES Fails

VesselAPI retries failed webhook deliveries with exponential backoff. That's a feature: a transient Lambda timeout doesn't lose the event. It's also a problem: the same event can arrive at your Lambda more than once, and without protection you'll send the same email twice. Or three times. Or thirteen.

Every webhook arrives with a unique X-Delivery-ID. We use it as the primary key in a small DynamoDB table with a conditional put: if the row already exists, the put fails atomically, we return 200, and no email goes out. The 24-hour TTL means the table never grows.

We return 200 for duplicates rather than an error. If we returned an error, VesselAPI would retry, the put would fail again, we'd return another error, and we'd loop forever. Returning 200 says "I've handled this," and the chain stops.

The trap, which is easy to fall into and tempting to call "good enough": claim the delivery ID and then call SES. If SES throws — throttle, transient 5xx, suppression-list hit — the claim is now sitting in DynamoDB with no email behind it. The next retry from VesselAPI looks up the delivery ID, finds the row, returns 200 duplicate, and the email is silently lost forever. That's the failure mode _release_delivery exists for: on any SES exception, delete the claim row and re-raise so the retry reaches a fresh slate.

The release itself can also fail (DynamoDB throttling, tiny window, very unlucky). When that happens we log loudly and let the original SES error propagate — the retry will be ack'd as a duplicate this once, but the failure is visible in CloudWatch instead of vanishing into the gap between two AWS services. The unit test for the whole sequence lives in test_handler.py in the example repo — it's the case that catches the bug if you ever refactor the order of those two calls.
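If you want the shape of that test without opening the repo, here's a minimal sketch (not the repo's actual test; the event carries just enough fields to satisfy the arrival renderer). It stubs both AWS clients, makes SES fail, and asserts that the claim is released while the error still propagates:

# test_ordering_sketch.py -- run from src/ so handler and render import.
import hashlib
import hmac
import json
import os
from unittest import mock

import pytest

# handler.py reads its config at import time, so set env first.
os.environ.update(WEBHOOK_SECRET="testsecret", TO_ADDRESS="to@example.com",
                  FROM_ADDRESS="from@example.com", IDEMPOTENCY_TABLE="idem",
                  AWS_DEFAULT_REGION="us-east-1")
import handler  # noqa: E402


def _signed_event() -> dict:
    body = json.dumps({"event": {
        "type": "port.arrival",
        "vessel": {"vesselName": "Test Vessel"},
        "timestamp": "2026-01-01T00:00:00Z",
        "data": {"portEvent": {"port": {"name": "Rotterdam"}}},
    }}).encode()
    sig = "sha256=" + hmac.new(b"testsecret", body, hashlib.sha256).hexdigest()
    return {"headers": {"x-signature-256": sig, "x-delivery-id": "d-1"},
            "body": body.decode(), "isBase64Encoded": False}


def test_claim_released_when_ses_fails():
    with mock.patch.object(handler, "ddb") as ddb, \
         mock.patch.object(handler, "ses") as ses:
        ses.send_email.side_effect = RuntimeError("boom")
        with pytest.raises(RuntimeError):
            handler.lambda_handler(_signed_event(), None)
        ddb.delete_item.assert_called_once()  # the claim was released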

Step 3: Deploy with SAM

🤖 Doing it with Claude Code

In the example repo, ask:

deploy this stack with my SES identity me@example.com and a fresh webhook secret

Claude generates the secret with openssl rand -hex 32, runs sam build and sam deploy --guided, and reads the WebhookUrl back to you.

Drop template.yaml next to src/:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Self-hosted vessel email alerts (vesselapi -> Lambda -> SES)

Parameters:
  WebhookSecret:
    Type: String
    NoEcho: true
    Description: The secret you'll configure on the vesselapi notification.
  ToAddress:
    Type: String
    Description: Verified SES recipient (the inbox the alerts go to).
  FromAddress:
    Type: String
    Description: Verified SES sender identity.

Resources:
  IdempotencyTable:
    Type: AWS::DynamoDB::Table
    Properties:
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: delivery_id
          AttributeType: S
      KeySchema:
        - AttributeName: delivery_id
          KeyType: HASH
      TimeToLiveSpecification:
        AttributeName: expires_at
        Enabled: true

  EmailSenderFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: python3.12
      Handler: handler.lambda_handler
      CodeUri: ./src
      Timeout: 10
      MemorySize: 256
      Environment:
        Variables:
          WEBHOOK_SECRET: !Ref WebhookSecret
          TO_ADDRESS: !Ref ToAddress
          FROM_ADDRESS: !Ref FromAddress
          IDEMPOTENCY_TABLE: !Ref IdempotencyTable
      Policies:
        - DynamoDBCrudPolicy:
            TableName: !Ref IdempotencyTable
        - Statement:
            - Effect: Allow
              Action: ses:SendEmail
              Resource: !Sub 'arn:aws:ses:${AWS::Region}:${AWS::AccountId}:identity/${FromAddress}'
      Events:
        Webhook:
          Type: HttpApi
          Properties:
            Path: /webhook
            Method: post

Outputs:
  WebhookUrl:
    Description: Paste this into your vesselapi notification's webhook_url.
    Value: !Sub 'https://${ServerlessHttpApi}.execute-api.${AWS::Region}.amazonaws.com/webhook'

Function, API, table, IAM role, log group — about fifty lines of YAML. The equivalent Terraform is noticeably more verbose because it has to express each resource individually instead of leaning on AWS::Serverless::Function's built-in defaults. NoEcho: true keeps the secret out of CloudFormation outputs and the AWS console; the env-var value is still readable to anyone with lambda:GetFunctionConfiguration, which is the usual tradeoff and the reason the production checklist below moves it to Secrets Manager. The ses:SendEmail policy is scoped to the verified FromAddress identity ARN — a compromised function role can't be used to send mail from other identities in the account.

[Image: the SAM template.yaml open in a code editor]

Build and deploy:

sam build
sam deploy --guided \
  --parameter-overrides \
    WebhookSecret=<some-long-random-string> \
    ToAddress=you@example.com \
    FromAddress=you@example.com

--guided walks you through stack name, region, and the IAM confirmation. Pick us-east-1, accept the IAM prompt, and let it run. About 90 seconds later:

Outputs
-----------------------------------------
Key           WebhookUrl
Value         https://abc123xyz.execute-api.us-east-1.amazonaws.com/webhook

Save the URL. For the secret, openssl rand -hex 32 is a sensible default — use the same value for both the SAM parameter and the VesselAPI notification config below.

[Image: the deployed EmailSenderFunction in the AWS Lambda console]

Step 4: Register the Webhook with VesselAPI

🤖 Doing it with Claude Code

register a vesselapi webhook for IMO 9811000 pointing at the WebhookUrl from the last deploy, use the same secret

Claude pulls the URL out of the SAM stack output, reuses the secret, and POSTs the notification config.

curl -X POST https://api.vesselapi.com/v1/notifications \
  -H "Authorization: Bearer $VESSELAPI_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "lambda-email-alerts",
    "imos": [9811000],
    "event_types": ["port.arrival", "port.departure", "eta.eta_changed"],
    "webhook_url": "https://abc123xyz.execute-api.us-east-1.amazonaws.com/webhook",
    "webhook_secret": "<the same secret you passed to SAM>"
  }'

9811000 is the IMO of the Ever Given itself — handy if you want a vessel that moves often and is easy to recognise in your inbox while you verify the pipeline. Swap in the IMOs you actually want to watch. The event_types filter is optional; omit it to get every event type for the watched vessels.
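The same registration from Python, if you'd rather not shell out. This assumes the requests package is installed and that VESSELAPI_KEY and WEBHOOK_SECRET are set in your environment:

import os

import requests

resp = requests.post(
    "https://api.vesselapi.com/v1/notifications",
    headers={"Authorization": f"Bearer {os.environ['VESSELAPI_KEY']}"},
    json={
        "name": "lambda-email-alerts",
        "imos": [9811000],
        "event_types": ["port.arrival", "port.departure", "eta.eta_changed"],
        "webhook_url": "https://abc123xyz.execute-api.us-east-1.amazonaws.com/webhook",
        "webhook_secret": os.environ["WEBHOOK_SECRET"],  # same value as the SAM parameter
    },
    timeout=10,
)
resp.raise_for_status()
print(resp.json())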

Step 5: Send a Test Event

🤖 Doing it with Claude Code

fire a test event at the lambda-email-alerts notification and tail the CloudWatch logs until it reports sent

Claude hits the test endpoint, follows the log stream, and stops when it sees the success line — or the error line, with a translation of what's actually wrong.

VesselAPI exposes a test endpoint that fires a synthetic event through your full delivery path:

curl -X POST https://api.vesselapi.com/v1/notifications/<id>/test \
  -H "Authorization: Bearer $VESSELAPI_KEY"

A few seconds later, the email should be in your inbox.

When It Doesn't Work the First Time

🤖 Doing it with Claude Code

read the last 5 minutes of CloudWatch logs for the EmailSenderFunction and tell me what's failing

Claude pulls the log stream and surfaces the relevant error — usually one of the two below, before you've finished asking.

It usually doesn't. Two things go wrong on first deploy, in roughly this order:

  1. MessageRejected: Email address is not verified — SES is in sandbox and either the FromAddress or ToAddress isn't a verified identity. Verify both, retry. If you used the same email for both, you only need to verify it once.
  2. invalid signature in the Lambda logs — the secret in the SAM parameter doesn't match the one in the VesselAPI notification. They have to be byte-for-byte identical, including no trailing newlines.

Check the CloudWatch log group at /aws/lambda/<stack-name>-EmailSenderFunction-<hash>.
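Or pull the last few minutes programmatically. A boto3 sketch; paste your real log group name in place of the placeholder:

import time

import boto3

logs = boto3.client("logs", region_name="us-east-1")
group = "/aws/lambda/<stack-name>-EmailSenderFunction-<hash>"  # from the console

resp = logs.filter_log_events(
    logGroupName=group,
    startTime=int((time.time() - 300) * 1000),  # last five minutes, in ms
)
for event in resp["events"]:
    print(event["message"], end="")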

What This Costs

For a single user with ten watched vessels and a few events a day, in us-east-1 as of April 2026:

Service                    Volume                   Cost
API Gateway (HTTP API)     ~100 req/mo              $0.0001
Lambda                     well inside free tier    $0
DynamoDB (pay-per-request) ~100 writes/mo           $0.0001
SES                        100 emails/mo            $0.01
CloudWatch logs            small                    pennies

Total: a few cents to a dollar a month, dominated by SES if you send a lot. Idle cost is zero. AWS prices drift; check the current rates if you scale this beyond personal use.

Going to Production

The deploy above is the right shape for "alerts to my own inbox." For anything wider, four changes — each incremental, none of which break the pipeline above:

  • Move out of SES sandbox. Request production access in the SES console; AWS Support's initial response usually arrives within 24 hours, full approval can take longer.
  • Verify a domain identity with DKIM. Send From: alerts@yourdomain.com and have it display as authenticated rather than as a generic AWS noreply.
  • Wire up bounce and complaint handling. Attach an SES Configuration Set and route Bounce and Complaint events to an SNS topic. A single typo'd ToAddress can otherwise quietly land you on the SES suppression list.
  • Move the webhook secret to Secrets Manager. A small SAM change; the function reads it on cold start and caches it (a sketch of that read follows this list).
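The read-and-cache pattern from that last bullet is only a few lines. A sketch, with a hypothetical secret name:

import boto3

_secrets = boto3.client("secretsmanager")
_cache = {}


def webhook_secret() -> bytes:
    # First call (cold start) hits Secrets Manager; warm invocations
    # reuse the cached value for the life of the container.
    if "webhook" not in _cache:
        resp = _secrets.get_secret_value(
            SecretId="vessel-email-alerts/webhook-secret"  # hypothetical name
        )
        _cache["webhook"] = resp["SecretString"].encode()
    return _cache["webhook"]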

A future post will go deep on each of these. For now, the pipeline you have is sufficient.

Common Questions

How can I receive automatic email alerts when a vessel enters or leaves a port?

Subscribe to VesselAPI's port-event webhook, point it at an AWS Lambda function, and have Lambda format the event and send it via Amazon SES. The result is an email in your inbox each time a tracked vessel arrives at or departs from a port — no dashboard polling and no extra server.

What's the difference between maritime email alerts and API-based vessel monitoring?

API-based monitoring gives your software a stream of events to process programmatically — useful for dashboards, fleet analytics, or automated triggers. Email alerts translate that same stream into something a human reads in their inbox, and are best when one or two people need passive awareness of specific vessels rather than a full dashboard. The two are layers, not alternatives: the API is what your systems react to, the email is what a human reads at 2am.

How much does it cost to run vessel email alerts on AWS Lambda?

For a typical fleet of a few dozen vessels generating tens of port events per day, the cost is essentially zero — well within the AWS Lambda and SES free tiers. Even at 1,000 alerts per month, the total monthly cost is well under one US dollar. See the cost breakdown above for the line items.

Do I need a server to receive vessel email alerts?

No. The pipeline runs entirely on AWS Lambda (serverless) with SES for email delivery and DynamoDB for idempotency. There is no long-running server to provision or maintain.

How do I prevent duplicate emails when AWS retries a webhook?

Use DynamoDB with the webhook event ID as the partition key and a conditional PutItem to deduplicate. If the conditional put fails because the row already exists, return success to the webhook caller without sending the email.

Can I set up alerts for multiple vessels at once?

Yes. Pass a list of IMOs in the notification's imos field, exactly as in Step 4 above, or create additional notifications through the VesselAPI dashboard. The same Lambda and SES pipeline serves all of them; the per-vessel cost is negligible.

What Comes Next

This is part 1 of a series on building your own notification consumers on top of VesselAPI webhooks. Part 2 keeps the same Lambda, the same HMAC verify, the same DynamoDB idempotency — and swaps the email renderer for Slack Block Kit, Microsoft Teams Adaptive Cards, or Discord embeds. The dispatcher pattern in render.py is the seam to plug those in: one function per channel, one config flag to pick which one runs.
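In code terms, that seam could look like the sketch below: part 2's shape, with the non-email renderers left as hypothetical placeholders.

import os

from render import render_email  # the (subject, html, text) triple from this post

# Hypothetical part-2 dispatch: one renderer per channel, picked by env.
_CHANNEL_RENDERERS = {
    "email": render_email,
    # "slack": render_slack_blocks,   # part 2 -- not written yet
    # "teams": render_teams_card,     # part 2 -- not written yet
}


def render_for_channel(evt: dict):
    channel = os.environ.get("ALERT_CHANNEL", "email")
    return _CHANNEL_RENDERERS[channel](evt)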

People often treat maritime email alerts vs API-based vessel monitoring as a choice. They aren't — they're layers. The API integration is the thing your services react to. The email pipeline above it is what a human reads at 2am. Now your inbox knows when your ships move.
