AI agents are everywhere these days, but if you’ve tried to get one out of the demo and into production, you’ve probably hit the same wall: production reality. A demo has to work once; production has to work every time, under shifting UIs, flaky network conditions, and messy edge cases that never appear in the examples. Nowhere is that gap clearer than in browser and UI automation, where workflows remain painfully manual because traditional approaches are brittle, break almost continuously as interfaces evolve, and are expensive to maintain.
Even modern setups usually devolve into fragile Selenium scripts that break when a selector changes, endless maintenance just to keep tests passing, and UI updates that quietly turn automation into someone’s full-time job. The real opportunity isn’t a better demo; it’s something far more practical: dependable agents that automate repetitive UI work at scale. That’s the gap Amazon is targeting with a product called Amazon Nova Act, built around #NormcoreAgents: the mission-critical automation that teams actually need in production.
In this article, we are going to cover:
- Why browser agents fail in production
- How Nova Act approaches dependable UI automation
- A real developer workflow from Playground to IDE and then production
- Practical use cases, including QA testing and business automation
This article is sponsored by Amazon.
What Is Amazon Nova Act?
Amazon Nova Act is a service for creating and managing robust AI agents that automate UI workflows over a browser. Think of it as "use the computer", but production-grade.
At a high level, Nova Act is:
- A developer-first way to create UI automation agents
- Trained and built for actual workflows (QA testing, data entry, extraction, checkout flows)
- Built for success at scale, not occasional one-off wins
- Powered by a custom Amazon Nova 2 Lite model
And the important point is that Nova Act is not just a model. It’s a complete workflow from prototype to build/debug to deploy/operate, centred around the fact that UI automation is hard because reliability is hard.
Why Reliability Is The Hardest Problem In AI Agent Development
If you have ever played around with agentic browser automation, you are probably familiar with the common failure modes: flaky behaviour that succeeds once and fails on the next run, workflows that break as soon as a layout or button label changes, poor integration with logging and observability tooling, and the persistent challenge of turning a promising tech demo into something deployable and observable. What looks impressive in isolation is all too often exposed as fragile under real-world conditions.
At the heart of it is a simple problem: intelligence alone doesn’t ship. What actually ships is a system that’s debuggable, observable, repeatable, and safe to run in real workflows. Nova Act’s positioning follows from that: stop stitching random components together and hoping they behave, and instead build UI automation on a foundation designed explicitly for production safeguards.
Developer Workflow: From Playground to Production
Nova Act’s builder experience is meant to feel like the workflow developers actually want:
- Prove it quickly
- Move into an IDE
- Ship when ready
Step 1: Prototype In The Nova Act Playground
The Playground is where you can go from idea to working workflow without bootstrapping a framework.
A typical flow could look like:
- Enter the URL you want to automate
- Describe the workflow in natural language
- Run it in a live browser session
- Iterate until you have something reliable enough to refine
Example prompt you might try in the Playground (QA smoke test):
In the example below, we use the agent to browse the Amazon website. The workflow prompt looks like this:
Go to https://www.amazon.com/
Navigate to the Amazon Basics page
Verify that the page loaded and that the "Electronics" button section is visible
Take a screenshot and return a short pass/fail summary
And in the screenshot, we can see what that looks like in the Nova Act Playground right before we initiate the first action.
In this screenshot, we can see the Nova Act Playground browsing the Amazon e-commerce website.
The key here isn’t "wow, it clicked buttons." It’s "can I get a workflow that behaves predictably and gives me useful output?" That’s the real value of AI agents: they complete tasks on our behalf, letting us offload repetitive work and freeing up time for more pressing matters.
Step 2: Export And Refine In Your IDE
Once your Playground workflow is done, the intended path is:
- Export the generated Python script
- Create an API key on the Nova Act Dev Tools page
- Install the Nova Act IDE extension (VS Code, Cursor, or Kiro)
- Import the workflow into Builder Mode
- Run step-by-step with a live browser preview and logs
Here you can see what the Download Python script option in Playground looks like:
A Shop_on_Amazon.py file was exported with the following code, which is what the agent runs behind the scenes as it follows our commands:
from nova_act import NovaAct
import os

# Browser args enable browser debugging on port 9222.
os.environ["NOVA_ACT_BROWSER_ARGS"] = "--remote-debugging-port=9222"

# Get your API key from https://nova.amazon.com/act
# Set it using the Set API Key command (CMD/Ctrl+Shift+P) or pass it directly to the constructor below.

# Initialize Nova Act with your starting page.
nova = NovaAct(
    starting_page="https://www.amazon.com/",
    headless=True,
    tty=False,
    nova_act_api_key="<your_api_key>"  # Replace with your actual API key
)

# Running nova.start() will launch a new browser instance.
# Only one nova.start() call is needed per Nova Act session.
nova.start()

# To learn about the difference between nova.act and nova.act_get, visit
# https://github.com/aws/nova-act?tab=readme-ov-file#extracting-information-from-a-web-page
nova.act_get("Navigate to the Amazon Basics page. Verify that the page loaded and that the 'Electronics' button section is visible. Take a screenshot and return a short pass/fail summary.")

# Leaving nova.stop() commented out keeps the NovaAct session running.
# Uncomment nova.stop() to stop a NovaAct instance. Note this also shuts down the
# browser instantiated by NovaAct, so subsequent nova.act() calls will fail.
# nova.stop()
The code is self-explanatory thanks to the comments, and it essentially follows the workflow we created in the Playground.
This is the Amazon Nova Act Dev Tools page, which shows API key creation:
And this is what the Nova Act IDE extension looks like when installing in VS Code/Cursor/Kiro:
Security best practice: don’t leak your API key
Amazon Nova’s docs emphasise a familiar rule: never expose API keys client-side.
Here’s the safe baseline for different OS and environments:
# Mac/Linux
export NOVA_API_KEY="your_key_here"
# Windows PowerShell
$env:NOVA_API_KEY="your_key_here"
And in Python:
import os

api_key = os.getenv("NOVA_API_KEY")
if not api_key:
    raise RuntimeError("Set NOVA_API_KEY in your environment")
Using the Amazon Nova Act VS Code Extension
This is what the VS Code extension looks like once you have installed it. For authentication to work, you must either create an API key or use IAM authentication; then you will have full access to Builder Mode.
We can mirror the Playground experience locally in our IDE, which is how most developers are likely to use the platform. We will use API key authentication to run our Shop on Amazon Playground example locally.
Before you start, make sure you have:
- Your development environment set up with an IDE
- Python installed on your computer
- The Amazon Nova extension installed in your IDE
- An Amazon Nova account and API key
Then, create a folder for the project on your computer, such as "Amazon Nova Demo". Only two files are needed: a Shop_on_Amazon.py file and a .env file. The Python file contains the code to run the build, and the environment file stores your Amazon Nova API key.
Both files should go inside the Amazon Nova Demo folder. See the code below:
We modified and updated the Shop_on_Amazon.py file from earlier:
import os
from dotenv import load_dotenv
from nova_act import NovaAct

load_dotenv()

api_key = os.getenv("NOVA_ACT_API_KEY")
if not api_key:
    raise ValueError("NOVA_ACT_API_KEY not found")

os.environ["NOVA_ACT_BROWSER_ARGS"] = "--remote-debugging-port=9222"

nova = NovaAct(
    starting_page="https://www.amazon.com/",
    headless=False,  # better for interactive debugging in Builder Mode
    tty=True,
    nova_act_api_key=api_key
)

nova.start()

result = nova.act_get("""
Navigate to the Amazon Basics page.
Verify that the page loaded and that the 'Electronics' button section is visible.
Take a screenshot and return a short pass/fail summary.
""")

print(result)
nova.stop()
This version works better in Builder Mode and loads the API key from the environment instead of hard-coding it.
Our .env file contains only one key: our Amazon Nova API key.
NOVA_ACT_API_KEY="PUT_YOUR_AMAZON_NOVA_API_KEY_HERE"
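As an aside: if you would rather not add the python-dotenv dependency, a minimal .env loader takes only a few lines of standard-library Python. This sketch only handles simple KEY="value" lines, not the full dotenv syntax:

```python
import os

def load_env_file(path=".env"):
    """Parse simple KEY=value lines from a file into os.environ."""
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, and malformed lines
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip().strip('"').strip("'")

# Demo: write a throwaway env file and load it.
with open(".env.demo", "w") as f:
    f.write('NOVA_ACT_API_KEY="demo-key-123"\n')
load_env_file(".env.demo")
print(os.environ["NOVA_ACT_API_KEY"])  # demo-key-123
```

For real projects, python-dotenv is still the safer choice, since it handles quoting, interpolation, and edge cases this sketch ignores.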
Now, click the button that lets us open a Python file (it looks like an open folder icon), select the Shop_on_Amazon.py file, and load it. You should see the file loaded in Build mode:
Now hit the "Run all cells" button to run the Python script. Running it gave me a live view of the Amazon website inside VS Code, and it also opened my Chrome browser as the agent ran through the process. Of course, this can be configured so you only get the live view, or have the agent run in an external browser.
This screenshot shows the build process right at the beginning as the Amazon website is opened:
In this screenshot, we can see the build going through the steps:
The text with the purple background is for the captions, which can be turned on or off.
Example: What the exported script refinement often looks like
Exact Nova Act SDK details vary by environment/version, and the extension can generate its own structure, so think of this snippet as illustrative. The idea is to turn natural language steps into something you can version, test, and maintain:
"""
Example structure for refining an exported Nova Act workflow script.

Goal:
- keep steps explicit
- add checkpoints (assertions)
- capture artifacts (screenshots/logs)
"""
from dataclasses import dataclass

@dataclass
class SmokeTestResult:
    passed: bool
    reason: str

def assert_visible(page, selector: str, label: str):
    # Replace with Nova Act / browser tool assertion helpers from the SDK/extension.
    if not page.is_visible(selector):
        raise AssertionError(f"Expected '{label}' to be visible ({selector})")

def run_smoke_test(agent):
    """
    'agent' represents your Nova Act runtime/driver from the generated script.
    Replace 'agent.page' access patterns with what your generated script provides.
    """
    page = agent.page

    # 1) Navigate
    page.goto("https://example.com")

    # 2) Log in (illustrative)
    page.fill("input[name='email']", agent.secrets["EMAIL"])
    page.fill("input[name='password']", agent.secrets["PASSWORD"])
    page.click("button[type='submit']")

    # 3) Navigate to Pricing
    page.click("a[href='/pricing']")

    # 4) Assertions (critical for QA)
    assert_visible(page, "h1", "Pricing page header")
    assert_visible(page, "button:has-text('Get Started')", "Get Started button")

    # 5) Capture artifact
    page.screenshot(path="artifacts/pricing-page.png")

    return SmokeTestResult(passed=True, reason="Pricing page loaded and key UI elements visible")
This is important because assertions transform "it seemed to work" into "it definitely worked," artifacts (screenshots, logs, and so on) make failures actionable rather than mysterious, and explicit checkpoints eliminate flakiness by specifying exactly what success looks like at each step in the workflow.
Step 3: Deploy And Run Agents In Production
Once your workflow is stable locally, Nova Act can deploy and operate it using AWS services and tooling. Before you can deploy the agent, make sure you have these prerequisites:
- An Amazon/AWS account
- The AWS CLI is installed on your computer
- Docker is installed and running
Also, remember that Amazon Nova Act currently only works in the US, so configure your region with the AWS CLI using this command: aws configure set region us-east-1.
Now, in your IDE, use the Nova Act extension and go to the deploy tab as shown here:
Create an AWS Workflow Definition name; I used "shop-amazon". You might see a yellow warning about a workflow not being present. Clicking the Convert to Deployment Format button will create a workflow for you.
Alternatively, you can run this command to create a workflow using the CLI:
aws nova-act create-workflow-definition \
--name shop-amazon \
--region us-east-1
And then confirm the workflow and region with this command:
aws nova-act list-workflow-definitions --region us-east-1
Before you hit the Deploy Your Workflow button, ensure that Docker is up and running on your computer. Assuming you did everything correctly, you should now be able to deploy your agent as shown here:
Now, when you go to Amazon Nova Act on your AWS account, you should see the workflow you created as I have here:
Real-World Use Cases: What You Can Build With Nova Act
Nova Act is best when the job at hand is repetitive, browser-based, significant enough that automation is warranted, and annoying enough that humans shouldn’t waste their time clicking through it manually.
This is easiest to see through a few concrete use cases. Three examples:
- Automated QA testing
- Automating repetitive business workflows
- Automating developer workflows
Use case 1: Automated QA testing
This is probably the most immediately practical #NormcoreAgents scenario. QA and UI testing are ideal for trusted browser automation for a few reasons: QA often becomes the bottleneck for shipping velocity; UI regressions are painful and often only detected late; traditional scripts tend to be brittle, carrying high upkeep costs while still missing issues in production; and API tests fail to catch what real users actually experience (or don’t) in the interface. Teams need automation that represents the full end-to-end journey rather than only backend responses.
Nova Act-style UI testing can fill those gaps with post-deploy automated smoke tests, checkout flow validation, login flow validation, and critical-path regression checks across environments. Instead of keeping brittle selectors alive across dozens of scripts, teams can define high-value workflows that run consistently and provide a reliable pass/fail signal before problems reach users.
Here we can see what that looks like in an Audible automated test example:
Example: Audible Smoke test agent spec (human-readable)
Run after each deploy (or nightly):
1) Open https://www.amazon.com/audible
2) Verify the Audible homepage loads:
- Hero headline is visible ("Love books? You'll Love Audible." OR similar)
- Primary CTA (e.g., "Explore Membership" or "Start free trial") is visible
3) Navigate to pricing:
- Click "See all plans & pricing" OR scroll to pricing section
4) Verify pricing cards render:
- "Standard" plan card exists
- "Premium Plus" plan card exists
- Each plan shows a "Try ... Free" CTA button
5) Validate pricing details:
- Confirm trial text exists (e.g., "30-day trial")
- Confirm post-trial pricing text exists
6) Test search functionality:
- Use the top search bar
- Search for "Harry Potter"
- Verify search results load
- Confirm at least one result contains "Harry Potter"
7) Click a result (optional deeper check):
- Verify book detail page loads
- Confirm title, author, and trial CTA are visible
8) Capture artifacts:
- Screenshot: Homepage hero
- Screenshot: Pricing section
- Screenshot: Search results page
- Screenshot: Book detail page
9) Return a structured pass/fail report
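A spec like this maps naturally onto a step runner that records exactly where a run failed. Here is a minimal, library-agnostic sketch of that pattern; the step callables are stubs standing in for whatever Nova Act calls your generated script actually makes:

```python
import json
from datetime import datetime, timezone

def run_workflow(name, steps):
    """Run (step_name, callable) pairs in order; any exception fails the run."""
    report = {
        "workflow": name,
        "status": "passed",
        "failed_step": None,
        "reason": None,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    for step_name, step in steps:
        try:
            step()
        except Exception as exc:
            report.update(status="failed", failed_step=step_name, reason=str(exc))
            break
    return report

def verify_pricing_cards():
    # Stub standing in for a real agent check; fails deliberately for the demo.
    raise AssertionError("Premium Plus card not found")

steps = [
    ("open_homepage", lambda: None),                 # stub: pretend this passed
    ("verify_pricing_cards", verify_pricing_cards),  # fails here
    ("search_results", lambda: None),                # never reached
]
report = run_workflow("audible-smoke-test", steps)
print(json.dumps(report, indent=2))
```

The payoff is that a failure names its step instead of surfacing as a generic error, which is exactly what you want from a smoke test.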
It's also worth noting that your agent should not just return "failed"; it should return structured data like the JSON below, which makes the result usable in CI pipelines, Slack alerts, GitHub checks, and QA dashboards.
{
  "workflow": "audible-smoke-test",
  "status": "failed",
  "failed_step": "verify_pricing_cards",
  "reason": "Premium Plus card not found",
  "last_url": "https://www.amazon.com/audible",
  "artifacts": {
    "homepage": "artifacts/homepage.png",
    "pricing": "artifacts/pricing.png"
  },
  "timestamp": "2026-03-03T19:05:22Z"
}
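Once the report is structured JSON, gating a CI pipeline on it is trivial. A small sketch (the field names follow the example report above; adapt them to whatever your agent actually emits):

```python
import json

def ci_summary(report_json: str):
    """Turn an agent's JSON report into a CI exit code and a one-line summary."""
    report = json.loads(report_json)
    if report["status"] == "failed":
        line = (f"{report['workflow']} failed at {report['failed_step']}: "
                f"{report['reason']} (last URL: {report['last_url']})")
        return 1, line
    return 0, f"{report['workflow']} passed"

example = json.dumps({
    "workflow": "audible-smoke-test",
    "status": "failed",
    "failed_step": "verify_pricing_cards",
    "reason": "Premium Plus card not found",
    "last_url": "https://www.amazon.com/audible",
})
exit_code, message = ci_summary(example)
print(exit_code, message)
```

In a pipeline you would finish with sys.exit(exit_code) and push the message to Slack or a GitHub check.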
Use case 2: Automating repetitive business workflows
Much real work still occurs inside browser tools where automation is clunky: either the system has no API, the API is limited/unreliable, or you’re forced through manual UI steps regardless (uploads, exports, approvals, ugly dashboards). That’s why even deeply technical teams still waste hours each week manually doing simple tasks, not because the tasks are difficult, but because they exist in the UI.
Nova Act shines when the work is UI-centric and would otherwise need a human to click around: updating CRM records, reconciling tickets between systems, copying data from internal dashboards into other systems, or pulling reports from third-party portals. Instead of writing one-off brittle scripts, you can build #NormcoreAgents that follow the same workflow every time, store artifacts when something changes between runs, and escalate to a human only when real judgement is needed.
Agent workflow: Google Sheets monthly client income tracker (business-focused, human-readable)
This example creates a brand-new spreadsheet and sets up a simple but useful Monthly Client Income Tracker for one month, including basic totals and a clean structure. It uses a Google Sheet.
Run on the 1st of each month (or on-demand):
1) Open https://docs.google.com/spreadsheets/u/0/?pli=1
2) Create a new blank spreadsheet
3) Rename the file:
"Client Income Tracker — <Month YYYY>" (e.g., "Client Income Tracker — March 2026")
4) Set up the header row in Sheet1 (row 1):
Date | Client | Project | Description | Invoice # | Status | Currency | Amount | Payment Method | Paid Date | Notes
5) Apply formatting:
- Freeze row 1
- Bold header row
- Turn on filter for header row
- Set Amount column to currency format
- Add data validation for Status: Draft, Sent, Paid, Overdue
- Add data validation for Currency: GBP, USD, EUR
6) Create a second sheet named "Summary"
7) In "Summary", generate monthly rollups:
- Total Invoiced (sum of Amount where Status is Sent or Paid)
- Total Paid (sum of Amount where Status is Paid)
- Outstanding (Total Invoiced - Total Paid)
- Optional: Total by Client table (pivot-like summary)
8) Capture artifacts:
- Screenshot: spreadsheet created + file name visible
- Screenshot: Sheet1 headers + formatting
- Screenshot: Summary tab with totals visible
9) Return a pass/fail report with:
- Link to the created spreadsheet (URL)
- What was created (sheets + columns + validations)
- Any issues encountered (e.g., permissions, popups)
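The Summary rollups in step 7 are plain arithmetic over the tracker rows, which makes them easy to sanity-check outside the spreadsheet. The same logic in Python (the tuple layout is a simplified subset of the header row above):

```python
# (Client, Status, Amount) rows — a simplified subset of the tracker columns
rows = [
    ("Acme Ltd", "Paid",  1200.00),
    ("Acme Ltd", "Sent",   800.00),
    ("Beta Co",  "Draft",  500.00),  # drafts are not invoiced yet
    ("Beta Co",  "Paid",   300.00),
]

# Total Invoiced: everything with Status of Sent or Paid
total_invoiced = sum(amount for _, status, amount in rows if status in ("Sent", "Paid"))
# Total Paid: everything with Status of Paid
total_paid = sum(amount for _, status, amount in rows if status == "Paid")
# Outstanding: invoiced but not yet paid
outstanding = total_invoiced - total_paid

print(total_invoiced, total_paid, outstanding)  # 2300.0 1500.0 800.0
```

In the sheet itself these become SUMIF-style formulas over the Status and Amount columns, but the definitions are the same.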
Use case 3: Automating developer workflows
This is where things get exciting for developers. A robust UI automation agent can take over several routine checks that developers are forced to do day-to-day: post-deploy smoke testing, checking feature flags or settings in admin panels, validating critical pages after a frontend change, or producing consistent bug reports with screenshots and reproducible steps.
Even if you don’t automate your entire workflow, having an agent run the boring checks is already a massive reduction in manual work. It frees developers from clicking through the same pages over and over, hands that repetitive validation to an agent that reports clear results, and saves hours each week that engineers can spend building and debugging actual features.
Agent workflow: Post-deploy frontend validation
In many teams, developers manually verify a few critical things after every deployment: the homepage loads, key pages render correctly, feature flags are enabled, and nothing obvious is broken. An agent can reliably run those checks and produce a structured report with screenshots, as in this example workflow.
Run after each deployment:
1) Open the production application URL
2) Verify the homepage loads successfully
- Page title is present
- Main navigation renders
- No obvious error messages appear
3) Navigate to key pages
- /dashboard
- /settings
- /pricing
4) Validate page integrity
- Confirm important UI components render
- Check that expected headings or CTAs exist
- Verify navigation links work
5) Check feature flags or admin settings
- Navigate to the admin/settings panel
- Confirm the latest feature flag is enabled
- Verify the new feature toggle appears in the UI
6) Capture artifacts
- Screenshot: homepage
- Screenshot: dashboard page
- Screenshot: feature flag/settings page
- Screenshot: pricing page
7) Detect errors
- Look for console error indicators
- Check for visible error banners or “500” pages
8) Generate a structured report
- Pass/fail status
- Failed step (if any)
- URLs visited
- Attached screenshots
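The page-validation portion of that report doesn't have to stop at the first failure. A sketch of a checker that visits every key path and records a result for each (the base URL and the check_page callable are placeholders for whatever your agent actually provides):

```python
from urllib.parse import urljoin

def validate_pages(base_url, paths, check_page):
    """Visit each key path and record pass/fail without stopping at the first failure.

    check_page is a placeholder for however your agent loads and inspects a
    page; it should return an error string on failure, or None on success.
    """
    pages = []
    for path in paths:
        url = urljoin(base_url, path)
        error = check_page(url)
        pages.append({"url": url, "passed": error is None, "error": error})
    status = "passed" if all(p["passed"] for p in pages) else "failed"
    return {"status": status, "pages": pages}

# Stub checker: pretend /settings shows an error banner after the deploy.
report = validate_pages(
    "https://app.example.com",
    ["/dashboard", "/settings", "/pricing"],
    lambda url: "500 banner visible" if url.endswith("/settings") else None,
)
print(report["status"])  # failed
```

Collecting every page result in one run means a single deploy report tells you everything that broke, not just the first thing.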
Human-in-the-loop: keeping humans in control
One reason teams shy away from running UI agents in production is obvious: some decisions should not be automated blindly. That's where Human-in-the-Loop (HITL) comes in. Instead of making the agent guess when it hits something unusual or sensitive, the workflow can pause and escalate that step to a human who can review it and decide how to proceed.
Nova Act’s HITL approach lets agents keep performing routine automation while offloading decision-making moments such as authentication flows, CAPTCHAs, critical approvals, and sensitive actions like payments and account modifications. The idea is, as ever, simple: automation should be confident when it can be, and smart enough to ask for help when it can't.
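The exact HITL API will depend on the Nova Act SDK, but the control flow is simple to sketch generically: routine steps run automatically, while flagged actions block on a human decision. Everything below (the action names, the ask_human callback) is illustrative:

```python
SENSITIVE_ACTIONS = {"payment", "account_modification", "captcha", "login"}

def run_step(action, execute, ask_human):
    """Run routine steps automatically; escalate sensitive ones to a human.

    execute performs the step; ask_human returns True if a reviewer approves
    (in a real system this would block on a review queue, not a dict lookup).
    """
    if action in SENSITIVE_ACTIONS and not ask_human(f"Approve sensitive step: {action}?"):
        return "skipped: human declined"
    return execute()

# Stub reviewer: approves the login step but declines the payment step.
approvals = {
    "Approve sensitive step: login?": True,
    "Approve sensitive step: payment?": False,
}
print(run_step("login", lambda: "done", approvals.get))         # done
print(run_step("payment", lambda: "done", approvals.get))       # skipped: human declined
print(run_step("click_button", lambda: "done", approvals.get))  # done
```

The important design property is that the agent never guesses on the sensitive path: it either gets an explicit approval or the step simply doesn't run.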
Why Nova Act Fits Modern AI Agent Architectures
Modern agent systems typically use composed architectures, where different parts of the system handle specific concerns instead of everything being built into a single tool. Nova Act provides a high-level model in which each Act is an isolated piece of UI automation that can be reused across different pipelines and needs; it's a specialised, safety-focused capability that fits neatly into the developer experience from prototype to production.
Instead of forcing teams to choose between a quick demo and a fully engineered system, Nova Act aims to provide a path that looks like: prototype fast, then engineer properly, then ship.
The main features of this approach include:
- One component handling orchestration
- Purpose-built tools for specialised tasks
- Strong observability and debugging capabilities
- Patterns for safely executing production workflows
- Seamless integration into the developer workflow (Playground to IDE)
- Interoperability with other tools (e.g. Python functions, external integration)
- A focus on working in production, not just clever reasoning
Getting Started With Amazon Nova Act
Onboarding is straightforward. Visit the Amazon Nova Act website and follow the flow outlined below:
- Visit the Nova Act playground and prototype a workflow in natural language (Agent, Voice Agent, UI Agent)
- Export the generated Python script
- Create your API key and set it via environment variables
- Install the IDE extension (VS Code, Cursor, or Kiro)
- Import the script into Builder Mode, run/debug step-by-step
- When ready, deploy and run in production
Give Nova Act a try in the playground and share your feedback on social media. You can find more information on the What is Nova Act page on AWS and the Getting started with Nova Act guide.
To learn more, just refer to the Amazon Nova Act Documentation.
Gotchas to know up front:
- Nova Act is currently ONLY available in the AWS Region US East (N. Virginia)
- You must have a US-based Amazon account to access
The Bigger Picture: From Automation Tools To Autonomous Workflows
We are seeing a renaissance across software automation: from brittle scripts to intelligent agents, from one-off automations to repeatable workflows, and from cool demos to workhorse systems that developers actually deploy. The real unlock is robustness: once agents work reliably in production, they stop being a novelty and start becoming infrastructure, the sort that quietly saves teams hundreds of hours while improving product quality. That's the idea behind #NormcoreAgents: not flashy magic assistants, but solid teammates that take care of the annoying tasks that keep real products functioning.
Conclusion
Amazon Nova Act targets one of the hardest problems in agent tooling today: building UI automation that actually holds up in production. The challenge isn't getting a demo to work; lots of tools can do that. The real value is in supporting the full lifecycle of reliable automation: quickly prototyping an agent in the Playground, iterating on and debugging that workflow in your IDE, and then deploying and operating it in production.
The use cases are useful right now: automated QA testing, repetitive business workflows, and developer productivity automations that offload hours of manual work each week. If you're adventurous, play with Nova Act in the Playground and build a simple #NormcoreAgent for something you already do manually, and save yourself the time.
A big thank you to Amazon for sponsoring this article and making it possible to benefit the developer community.
Stay up to date with AI, tech, productivity, and personal growth
If you enjoyed these articles, connect and follow me across social media, where I share content related to all of these topics 🔥
I also have a newsletter where I share my thoughts and knowledge on AI, tech, productivity, and personal growth.