Made a New Year's resolution to learn something new? We've got you covered!
Back in December, our team turned the holidays into a developer advent calendar, unwrapping a different Amazon tool each day leading up to Christmas. We covered 12 different Amazon tools and services in 12 LinkedIn posts with real code snippets and working demos. Whether you're curious about AI voice generation or custom Kiro agents, consider it your New Year's resolution starter pack for building with Amazon's developer tools.
So grab your coffee, and let's see what Amazon Developer gave to theee 🎵
🎵 On the 1st day of Christmas 🎄, Amazon Developer gave to me… voice generation with Polly 🎵
Okay, I’ll admit it, I also yell at Alexa, but I’ll also admit her voice is oddly comforting. So when I needed a voice for my AI assistant app, I turned to Amazon Polly (➡️ https://lnkd.in/d7kNn4UR). Polly is AWS’s text-to-speech service that turns text into audio with neural voices across 30+ languages, the same underlying tech that powers Alexa’s voice.
The setup was surprisingly simple. Polly handles all the heavy lifting, so with just a few lines of code (➡️ https://lnkd.in/dJjKDT4H)
```javascript
import { PollyClient, SynthesizeSpeechCommand } from "@aws-sdk/client-polly";

// Create the Polly client (pass a region or let your AWS config supply one)
const pollyClient = new PollyClient({ region: "us-east-1" });

const result = await pollyClient.send(
  new SynthesizeSpeechCommand({
    OutputFormat: "mp3",
    Text: "Welcome!",
    VoiceId: "Matthew", // swap the voice to change accent or language
    Engine: "neural",
  })
);
```
I had my app 'talking' in different accents. The British version even wanted to banter with me 😂
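If you're wondering what to actually do with that response: the command hands back an AudioStream you can write straight to disk or pipe into a player. Here's a minimal sketch for saving the MP3 in Node, assuming the same client and `result` as above:

```javascript
import { writeFile } from "node:fs/promises";

// The SDK wraps the audio in a stream helper; pull it into bytes and save the MP3
const audioBytes = await result.AudioStream.transformToByteArray();
await writeFile("welcome.mp3", audioBytes);
```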
➕ Follow along as today is just Day 1 of our 𝟭𝟮 𝗗𝗮𝘆𝘀 𝗼𝗳 𝗔𝗺𝗮𝘇𝗼𝗻 𝗗𝗲𝘃𝗲𝗹𝗼𝗽𝗲𝗿 🎄
What would you build with Polly? Drop your ideas below 👇
🎵 On the 2nd day of Christmas 🎄, Amazon Developer gave to me… Kiro Powers 🎵
In the AI agents world, context overload is a problem! I love how Kiro approaches this with Kiro Powers (➡️ https://lnkd.in/e99fDAfR).
Now you might be asking, what is a Kiro Power? Think of it as Santa's list for your AI agent: it gives instant access to specialized knowledge, tools, and best practices for one technology (like multi-platform TV 📺 builds), loading it only when you need it for maximum efficiency and speed.
I put this power to the test, creating one and using it to successfully extend my VegaOS TV app (built in React Native) to other TV platforms. It automatically managed the complex, chilly build systems, leaving us with warm, clean React Native code.
Check it out here: 👉 https://lnkd.in/eAHA3gha
Day 2 of 12 Days of Amazon Developer 🎄
🎵 On the 3rd day of Christmas, Amazon Developer gave to me… 3 CLI Custom Agents 🎵
Every time I switched tasks, I used to re-explain my whole world to an agentic sidekick: My team structure. The acronyms. My writing style. Over and over. Blaaargggh!
AWS Developers' custom agents for Kiro CLI fixed that (➡️ https://lnkd.in/d5zhHmbj).
A custom agent is a JSON config that defines:
• Which tools the agent can access
• What files/docs auto-load as context
• A custom system prompt for the persona you need
I've set up a few for different modes:
trag – auto-loads my teammates, work glossary, and style guide
social-media-lead – writes channel-specific posts for LinkedIn, etc.
pair-programmer – gets me the feedback I need while coding
Setup: `/agent generate`
Swap anytime: `/agent swap`
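To make that concrete, here's roughly what a config like my trag agent could look like. Treat the field names as illustrative rather than the exact Kiro CLI schema (the docs linked above have the real format):

```json
{
  "name": "trag",
  "description": "Day-to-day assistant that already knows my team, glossary, and style",
  "prompt": "You are my writing and planning sidekick. Follow my style guide and use my team's terminology.",
  "tools": ["fs_read", "fs_write", "execute_bash"],
  "resources": [
    "file://docs/team-structure.md",
    "file://docs/work-glossary.md",
    "file://docs/style-guide.md"
  ]
}
```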
Now when I start a session, the agent already knows who I am and what I'm working on. No preamble. Just work.
Follow along for more Amazon Developer content as today is just Day 3 of our 12 Days of Amazon Developer 🎄
🎵 On the 4th day of Christmas, Amazon Developer gave to me… Kiro CLI (https://kiro.dev/cli/) 🎵
Everyone reading this has probably used an AI chatbot, but have you used one on the CLI (command-line interface)? It’s a whole different ballgame.
Using an LLM on the CLI is a lot like using ChatGPT or Claude in your browser, except you chat with it directly from the terminal.
What’s so great about it is how convenient it is for vibe coding an app or testing one.
When I was testing React Native libraries on Fire OS (https://lnkd.in/ghvNFv-K), it made the process so much easier and faster. I would just give it the URL of the library and tell it to create an app and test it on a Fire device. That’s it, just one step. No need to manually create a new app, no need to download the repo myself or integrate it or copy-paste code from a browser-based LLM, and no need to set up adb or manually run the app. It took care of everything.
Whether you’re a veteran vibe-coder or thinking about dipping your toes into the water for the first time, check out Kiro CLI: https://kiro.dev/cli/
🎵 On the 5th day of Christmas, Amazon Developer gave to me… Amazon Bedrock 🎵
Ever wondered what powers the AI behind Prime Video & Amazon MGM Studios' personalized recaps, Ring's smart video search, or Alexa+'s conversational intelligence? Meet Amazon Bedrock (➡️ https://lnkd.in/grzc8ptt)!
Bedrock is a fully managed service that makes leading foundation models from Amazon, Anthropic, AI21 Labs, and more accessible through a single API. This allows you to build and scale generative AI applications without managing infrastructure.
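That single API really is the selling point: once you've enabled a model in your account, calling it looks the same whichever provider it comes from. A minimal sketch using the JavaScript SDK's Converse API (the model ID and region are just examples):

```javascript
import { BedrockRuntimeClient, ConverseCommand } from "@aws-sdk/client-bedrock-runtime";

const client = new BedrockRuntimeClient({ region: "us-east-1" });

// Swap the modelId to try a different foundation model; the call shape stays the same
const response = await client.send(
  new ConverseCommand({
    modelId: "amazon.nova-lite-v1:0",
    messages: [{ role: "user", content: [{ text: "Write a short holiday haiku about shipping code." }] }],
  })
);

console.log(response.output.message.content[0].text);
```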
🎬 Prime Video & Amazon MGM Studios uses Bedrock (➡️ https://lnkd.in/gEE_a99R) to power conversational and personalized interactions across tens of thousands of services and devices with agentic capabilities. X-Ray Recaps also uses Bedrock to understand storylines, emotions, and character relationships.
🔔 Ring uses Bedrock (➡️ https://lnkd.in/gErQRQxS) for video understanding and search, making it easier to find specific moments and identify patterns in your footage.
🏀 Live Sports uses Bedrock (➡️ https://lnkd.in/gCDDC8CW) to detect and capture slam dunks, three-pointers, and key plays in real time, then generates instant highlight clips.
🗣️ Alexa+ uses Bedrock (➡️ https://lnkd.in/gV494Sia) to route requests to specialized models for more natural conversations.
🛍️ Rufus uses Bedrock (➡️ https://lnkd.in/gQe-eqeN) to combine multiple foundation models with Amazon's product knowledge, reviews, and Q&A data to deliver sub-second responses to millions of shoppers.
Ready to transform your AI journey? Start sleigh-ing your AI goals today with this rock-solid solution! 🪨🛷
➕ Follow along as today is Day 5 of our 𝟭𝟮 𝗗𝗮𝘆𝘀 𝗼𝗳 Amazon Developer 🎄
🎵 On the 6th day of Christmas 🎄, Amazon Developer gave to me… browser automation with Amazon Nova Act 🎵
One of the most exciting things this year has been the shift from AI that just talks to AI that acts. Amazon Nova Act certainly knocks it out of the park; it's the AI agent service turning browsers into autonomous coworkers. If you’ve ever dreamed of “set it and forget it” automation for complex UI workflows, this is your new best friend. 👇
Nova Act lets developers build, deploy, and manage fleets of reliable AI agents for automating browser-based tasks at enterprise scale. And it’s trained specifically to act – not just chat – driving browsers, filling forms, and clicking buttons with >90% task reliability in production. Think of it as a “digital intern” for automating business processes that never gets distracted.
🔧 Why this matters:
• Reliability at Scale: While most agentic tools struggle at ~50% accuracy, Nova Act achieves >90% success rates on tricky UI elements (date pickers, popups, dropdowns) thanks to reinforcement learning on 1000s of simulated web environments.
• Speed to Value: Go from natural-language prototype → production in hours (not months). The new Nova Act Playground lets you refine workflows visually in minutes, while the Python SDK supports advanced deployments.
• Native AWS Integration: Seamlessly ties into Amazon Bedrock, CloudWatch, and IAM. No “glue code” needed – just secure, scalable automation.
• Multi-Agent Orchestration: Pair with the Strands Agents framework to coordinate complex, cross-domain workflows (e.g., QA → data extraction → API calls).
🧪 See it in action: Here’s a simple script (https://lnkd.in/g97HBuyq) where Nova Act navigates to the Amazon.com website, looks for a board game about everyone’s favorite Blue Heeler, ensures it can get here in time, and adds it to my cart!
🌍 How customers are winning:
• QA Testing: Tyler Technologies cut test-suite creation time from weeks to minutes by converting manual test plans into automated scenarios with natural language prompts (https://lnkd.in/g2gKxP-t).
• Internal Tools: Amazon Leo uses Nova Act to validate 1000s of test cases across web/mobile for its upcoming satellite internet launch (https://lnkd.in/gidrHN92).
💬 Let’s Talk Automation!
What repetitive browser tasks are you tired of doing manually? 🤔
👇 Comment below with your biggest automation headache – let’s brainstorm how Nova Act could solve it!
🔗 Dive deeper: Nova Act Home https://lnkd.in/gVzN59xk
➕ Follow along as today is Day 6 of our 𝟭𝟮 𝗗𝗮𝘆𝘀 𝗼𝗳 Amazon Developer 🎄
#AWS #AI #Developers #Automation #GenAI #AmazonNova #NovaAct
🎵 On the 8th day of Christmas 🎄, Amazon Developer gave to me ... one prompt UI magic with Kiro ✏️ 📺 🎵
by giolaq
From idea → wireframe → UI → working app… it usually takes too many steps.
I love how Kiro changes this with a single simple prompt.
This time, I pushed it further:
👉 I gave Kiro one prompt and a simple pencil sketch…
👉 and asked it to turn that into a full 10-foot TV UI web app.
Kiro understood the TV navigation patterns (D-pad, focus states) and the 10-foot UI guidelines, and produced a great TV layout 📺
And the code is available here 👉 https://lnkd.in/eUFb9p5p
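If you've never built for the living room, the "D-pad navigation" part just means moving a single focused element around with the remote's arrow keys. Here's a bare-bones version of that pattern in plain JavaScript, purely as an illustrative sketch (it assumes a row of `.tile` elements and a `.focused` CSS class, and it is not code from the generated app):

```javascript
// Minimal D-pad focus handling for a horizontal row of tiles
const tiles = Array.from(document.querySelectorAll(".tile"));
let focusedIndex = 0;
tiles[focusedIndex].classList.add("focused");

document.addEventListener("keydown", (event) => {
  // TV remotes surface D-pad presses as arrow keys in the browser
  if (event.key !== "ArrowRight" && event.key !== "ArrowLeft") return;

  tiles[focusedIndex].classList.remove("focused");
  const step = event.key === "ArrowRight" ? 1 : -1;
  focusedIndex = Math.min(Math.max(focusedIndex + step, 0), tiles.length - 1);
  tiles[focusedIndex].classList.add("focused"); // the .focused style is your visible focus state
});
```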
Follow along for more Amazon Developer content as today is just Day 8 of our 12 Days of Amazon Developer 🎄
#Kiro #AIAgents #TVDevelopment #UX #UIDesign #10FootUI #AmazonDeveloper #BuildInPublic
🎵 On the 9th day of Christmas, Amazon Developer gave to me… AWS Lightsail! 🎵
by trag
I needed a secure cloud box to run Kiro CLI, Codex, Claude Code, and batch scripts without exposing my home network. Lightsail from Amazon Web Services (AWS) got me up and running in minutes for under $5/month.
Here's my workflow:
📱 iSH (https://ish.app/) – SSH terminal for iOS
📂 Textastic (https://lnkd.in/gKee8EC5) – secure file transfer and SSH
☁️ Lightsail – my Ubuntu instance with an SSH alias of `tragbox` for quick access
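If you haven't set up an SSH alias before, `tragbox` is just an entry in my ~/.ssh/config, something like this (the IP, user, and key path below are placeholders for your own instance):

```
# ~/.ssh/config
Host tragbox
    HostName 203.0.113.10              # your Lightsail instance's static IP
    User ubuntu                        # default user on Ubuntu Lightsail blueprints
    IdentityFile ~/.ssh/lightsail-key.pem
```

After that, `ssh tragbox` drops you straight onto the box.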
I SSH in, run either kiro-cli / claude / codex depending on the project, and my custom agents are live from my phone. Plus, I have a few MCPs running including GitHub and Context7 for extended capabilities.
My top use cases:
- Pull conference speaker data and build tables of mutual connections
- Run batch image cleanup and CSV processing when I'm away from my laptop
- Execute long-running scripts without tying up my local machine
Why Lightsail?
I considered EC2, Fargate, and other options, but they were all too much setup for my use case. Lightsail gave me an Ubuntu box with a straightforward console, flat monthly pricing (no surprise bills), and I can bump RAM or storage when I need it. It's firewalled away from my home network, so I'm not worried about exposing internal endpoints.
It really just works - I spin up agents on demand, run what I need, and move on.
Check it out: https://lnkd.in/gqQyZ35t
Follow along for more Amazon Developer content as today is just Day 9 of our 12 Days of Amazon Developer 🎄
🎵 On the 10th day of Christmas, Amazon Developer gave to me… Kiro specs! 🎵
by mosesroth
To paraphrase Forrest Gump, vibe coding is like a box of chocolates: you never know what you're gonna get.
Unfortunately, that’s not such a good thing. So what are you supposed to do?
Meet spec-driven development with Kiro: https://kiro.dev/
With specs, when you give Kiro a prompt, instead of building the app right away, it gives you three docs: 1. requirements, 2. design, and 3. tasks. These documents list exactly what Kiro intends to do when building your app. You can then review them, confirm the specs follow your vision, and either edit them or give Kiro a new prompt.
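In practice the three docs land in your repo as plain Markdown you can read, diff, and edit like anything else. The layout looks roughly like this (shown as a typical example; check the Kiro docs for the exact structure):

```
.kiro/
└── specs/
    └── recipe-sharing-app/        # one folder per feature you spec out (name is just an example)
        ├── requirements.md        # user stories and acceptance criteria
        ├── design.md              # architecture, data models, key decisions
        └── tasks.md               # the ordered implementation checklist Kiro works through
```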
That way your vibe coded app actually conforms to your vision.
For more info, check out Eric Fahsl’s talk: https://lnkd.in/gRhYubiV
🎵 On the 11th day of Christmas, Amazon Developer gave to me… Checkpointing in Kiro 🎵
by knmeiss
Have you ever let an AI agent refactor your code, only to realize you want to try a different approach? Kiro's checkpointing feature (➡️ https://lnkd.in/gRKCxyvZ) lets you rewind to any point in your session with one click.
✨ Automatic checkpoint markers are added to your session as Kiro modifies code.
🔄 Easily test different approaches. If one doesn't pan out, you're one click from reverting to any point in your session to try another.
💬 Retain your conversation context while reverting changes.
✔️ Available in both the Kiro IDE and CLI.
While it doesn’t replace version control, it does provide an experimentation playground while Git handles the permanent record.
The real gift? The confidence to experiment without fear 🎄🎁
➕ Follow along as today is Day 11 of our 𝟭𝟮 𝗗𝗮𝘆𝘀 𝗼𝗳 𝗔𝗺𝗮𝘇𝗼𝗻 𝗗𝗲𝘃𝗲𝗹𝗼𝗽𝗲𝗿 🎄
🎵 On the 12th day of Christmas, Amazon Developer gave to me… Alexa+ 🎵
by emersonsklar
If the past year has proven anything, it’s that conversational AI and voice assistants are very much back in the spotlight. Our friends in the industry have absolutely reinvigorated excitement in this space, and, candidly, their very visible rough edges have made one thing clear: this is still hard to get right. Which is exactly why I’m so excited about Alexa+.
🚀 What’s different this time?
Alexa+ represents a step-change from the original Alexa experience:
- More natural, contextual conversations
- Better reasoning and follow-through, not just command → response
- Deeper integration across devices and services
- Designed to feel less like a skill invocation and more like an assistant that actually gets what you’re trying to do
🤖 Why now?
Let’s be honest. Many of us have had recent moments with other assistants where we thought: “Wow… this should be better than this by now.” The good news? That bar (currently lying on the floor 😅) makes it even easier to be genuinely excited about what Alexa+ brings to the table.
🌟 Why developers should care
This isn’t just a UI refresh—it’s a rethinking of how voice, AI, and agents come together. Alexa+ opens up new possibilities for building experiences that feel more human, more useful, and more embedded in everyday life. Our developer tools aren't publicly available yet, but stay tuned - we have some extraordinary new solutions currently in private beta that make it easier than ever to build incredibly engaging, useful, and functional experiences.
I’m thrilled to see where this goes, and even more excited about what builders will create on top of it.
And with that, we would like to leave you with a cheeky little tune created by Alexa and Suno: https://lnkd.in/gnDvzW_m 🎵