<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ed Miller</title>
    <description>The latest articles on DEV Community by Ed Miller (@bluevalhalla).</description>
    <link>https://dev.to/bluevalhalla</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F837885%2Fd37cb029-424a-42f5-8514-64271fabf6b4.jpeg</url>
      <title>DEV Community: Ed Miller</title>
      <link>https://dev.to/bluevalhalla</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bluevalhalla"/>
    <language>en</language>
    <item>
      <title>Meet Stache Forcache, a Movember-themed AI character created using Meta AI Studio</title>
      <dc:creator>Ed Miller</dc:creator>
      <pubDate>Tue, 31 Dec 2024 01:27:52 +0000</pubDate>
      <link>https://dev.to/bluevalhalla/meet-stache-forcache-a-movember-themed-ai-character-created-using-meta-ai-studio-18kh</link>
      <guid>https://dev.to/bluevalhalla/meet-stache-forcache-a-movember-themed-ai-character-created-using-meta-ai-studio-18kh</guid>
      <description>&lt;p&gt;&lt;em&gt;Note: AI Studio is currently only available in the United States.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We’re a little over a week into Movember. On the 1st of the month, I wrote a &lt;a href="https://www.linkedin.com/pulse/how-ill-mo-my-own-way-movember-ed-miller-51h4c/?trk=article-ssr-frontend-pulse_little-text-block" rel="noopener noreferrer"&gt;blog on LinkedIn about how I will Mo My Own Way&lt;/a&gt; by building a men’s health chatbot. For my first exploration, I used Meta AI Studio. &lt;a href="https://ai.meta.com/ai-studio/" rel="noopener noreferrer"&gt;AI Studio&lt;/a&gt; enables you to quickly create AI characters and share them across Messenger, Instagram and WhatsApp.&lt;/p&gt;

&lt;p&gt;In less than an hour using AI Studio, I was able to create and customize an AI character: a hipster, Australian barber with a passion for men’s health and Movember. Before I walk through how I did this, let me first introduce you to &lt;a href="https://aistudio.instagram.com/ai/8961958923866535/" rel="noopener noreferrer"&gt;Stache Forcache&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7j5hn5uc0pn683art32d.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7j5hn5uc0pn683art32d.jpeg" alt="Stache Forcache" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;AI Character: Stache Forcache&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Getting started with AI Studio
&lt;/h3&gt;

&lt;p&gt;To create your own AI character, head over to &lt;a href="https://aistudio.instagram.com/" rel="noopener noreferrer"&gt;https://aistudio.instagram.com/&lt;/a&gt; (you can also do this in the apps, but I’m using a web browser on a laptop). You should see something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxr3ljjj0cdkcsp2srgup.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxr3ljjj0cdkcsp2srgup.png" width="800" height="520"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;AI Studio front page&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If you have already developed AI characters, you can find them under &lt;strong&gt;Your AI characters&lt;/strong&gt;. You can use the &lt;strong&gt;Discover&lt;/strong&gt; search box to find other people’s characters. For now, let’s get started by clicking &lt;strong&gt;Create an AI&lt;/strong&gt;, which will let you create a custom AI character or get started from examples.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6rpky74pojf9vc4b0ifx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6rpky74pojf9vc4b0ifx.png" width="800" height="520"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Create an AI character&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Creating an AI character
&lt;/h3&gt;

&lt;p&gt;You start by describing what your AI character is and what it does. At this stage, a high-level description is enough; we’ll get into the details later. Cycle through the examples for ideas. You can modify one of the examples or write your own under “Custom AI character”. Once you are happy with your description, click Next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbxamovgtby76fmddnv7c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbxamovgtby76fmddnv7c.png" width="800" height="520"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Add details about your AI character&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;AI Studio will generate a name, tagline and some options for an avatar. You can edit these now or wait until later. Once you’re ready, click Create AI. This will create your character with prefilled details and take you to AI Studio’s main editing page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzoc0cvg7bx3skxgn6bx9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzoc0cvg7bx3skxgn6bx9.png" width="800" height="520"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Edit details page&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Example Chat Panel
&lt;/h3&gt;

&lt;p&gt;On the right side of the page, you will see an example chat where you can talk to your AI while you are adding information.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fatowr4v5aeg07y7uuw6d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fatowr4v5aeg07y7uuw6d.png" width="743" height="781"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Test chat panel&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You can also edit the character’s avatar, name and tagline by clicking the respective pencil icon. The name and tagline are simple text fields. Editing the avatar will pop up a dialog box.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe2o8dos9jpbq3gwdnyxv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe2o8dos9jpbq3gwdnyxv.png" width="715" height="751"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Stache’s avatar&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Click the pencil in the description field to update the description and generate new images. Pick one that you like and click &lt;strong&gt;Save&lt;/strong&gt;. You can be quite detailed in the description. Here’s what I have for Stache (some of this was suggested by AI Studio and some modified by me):&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;A warm, medium shot of the hipster barber character standing in front of a vintage, wooden barber chair, with a friendly smile on his face. He has a bushy, handlebar mustache and no beard. He is wearing a crisp, white apron with a black mustache embroidered on the chest. His sleeves are rolled up, and he’s holding a straight razor in one hand, while the other is gently stroking the back of the chair. The background is a warm, golden color, with subtle, muted tones of green and brown, evoking a sense of a classic, Australian barbershop. The overall style is reminiscent of a 35mm film still, with a slight, nostalgic flair.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;
  
  
  Refining your character’s knowledge
&lt;/h3&gt;

&lt;p&gt;On the left side of the page is where you can really customize your character’s behavior. There are two tabs, the first of which is &lt;strong&gt;Knowledge&lt;/strong&gt;.&lt;/p&gt;
&lt;h4&gt;
  
  
  What does your AI character do?
&lt;/h4&gt;

&lt;p&gt;Here you will describe what your character does and who it is. Add some details about what your character knows about and give it a personality. Here’s what I have for Stache:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1yn9mukdiamttwthuc0k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1yn9mukdiamttwthuc0k.png" width="730" height="192"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Description of Stache&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Full text:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Stache is a hipster barber with a bushy mustache and a passion for men's health. He's always ready to share his expertise and advice on a wide range of men's health concerns, especially mental health, prostate cancer and testicular cancer. With a friendly and approachable demeanor, Stache makes it easy for men to open up about their health concerns and get the support they need. Whether you're looking for advice on how to improve your mental health or seeking guidance on how to talk to your friends about health issues, Stache is here to help.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Instructions
&lt;/h4&gt;

&lt;p&gt;In this section you can define how your character behaves. You can have up to a dozen instructions for your character. It’s best to write the instructions in the third person. For example:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Stache is knowledgeable about men’s health issues, especially prostate cancer, testicular cancer and mental health.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To help promote Movember, I included:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Stache participates in the Movember movement and encourages others to get involved.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Since Stache is Australian (Movember started in Australia), I have included:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Stache speaks with a friendly, Aussie accent and uses colloquialisms like “mate” and “g’day” to make users feel at ease.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Example dialogue
&lt;/h4&gt;

&lt;p&gt;To really bring your character’s voice to life, you can add example dialogue. You can use this to amplify personality traits, specify formatting for responses, and so on. I haven’t added any example dialogue for Stache yet.&lt;/p&gt;

&lt;h3&gt;
  
  
  Editing introduction and capabilities
&lt;/h3&gt;

&lt;p&gt;In the &lt;strong&gt;Introduction and Capabilities&lt;/strong&gt; tab, you can define how your AI starts conversations and enable specific capabilities.&lt;/p&gt;

&lt;h4&gt;
  
  
  How does your AI character greet new people?
&lt;/h4&gt;

&lt;p&gt;Here you define your AI character’s welcome message. This is your character’s first impression. Stache always starts with:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;G’day mate! What’s on your mind? Need some advice on men’s health or just want to chat?&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Prompts for people to start the conversation
&lt;/h4&gt;

&lt;p&gt;These are suggestions to get the conversation started. You can use these to indicate what the AI is capable of or nudge users in the right direction.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwir10nt1q8ggtizzj9ns.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwir10nt1q8ggtizzj9ns.png" width="732" height="268"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Conversation prompts&lt;/em&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  What your AI character can do
&lt;/h4&gt;

&lt;p&gt;Here you enable additional capabilities for your AI character. There are three capabilities enabled by default:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Image generation: generate images when asked by the user.&lt;/li&gt;
&lt;li&gt;Long-term memory: remembers previous conversations.&lt;/li&gt;
&lt;li&gt;Reels sharing: shares reels when specifically asked by the user.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are two optional capabilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dynamic image generation: can generate images without being asked.&lt;/li&gt;
&lt;li&gt;Search: can search the internet and share links.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For Stache, I have search turned on, hoping to get access to the latest information on Movember and men’s health topics. I have dynamic image generation turned off.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbuuu4k64ihdvryc1kdcx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbuuu4k64ihdvryc1kdcx.png" alt="AI character capabilities" width="731" height="188"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;AI character capabilities&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Publishing your AI character
&lt;/h3&gt;

&lt;p&gt;Once you’re happy with your AI character, you can publish. First, select the audience: everyone, followers, close friends, or only me. Under discoverability, you can set in which apps your AI character will be visible. You can also choose if you want your AI character to appear on your Instagram profile.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9zzsshygcqf9n70skqkm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9zzsshygcqf9n70skqkm.png" alt="Publish your AI character" width="623" height="645"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Publish your AI character&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Once you’re ready, click &lt;strong&gt;Publish&lt;/strong&gt;. If you share publicly, your AI will be reviewed to ensure it meets the &lt;a href="https://www.linkedin.com/redir/redirect?url=https%3A%2F%2Faistudio%2Einstagram%2Ecom%2Fpolicies%2F&amp;amp;urlhash=1_EC&amp;amp;trk=article-ssr-frontend-pulse_little-text-block" rel="noopener noreferrer"&gt;AI Studio policies&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sharing and insights
&lt;/h3&gt;

&lt;p&gt;When your AI character is ready, you can find it on your AI Studio dashboard under Your AI characters.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1r44oqd4226y2tv9m4mw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1r44oqd4226y2tv9m4mw.png" alt="AI Studio characters" width="800" height="520"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;AI Studio characters&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You can see I have two public AI characters: &lt;a href="https://www.linkedin.com/redir/redirect?url=https%3A%2F%2Faistudio%2Einstagram%2Ecom%2Fai%2F8961958923866535%2F%3Futm_source%3Dshare&amp;amp;urlhash=LV7I&amp;amp;trk=article-ssr-frontend-pulse_little-text-block" rel="noopener noreferrer"&gt;Stache Forcache&lt;/a&gt; and &lt;a href="https://aistudio.instagram.com/ai/1074883010913918/?utm_source=share" rel="noopener noreferrer"&gt;Andino&lt;/a&gt; (an Andean bear that likes to chat about the environment) as well as a private one: GrAIzer (a brown bear in Alaska). Select a character to start a chat. Click the pencil to resume editing a character. Under the &lt;strong&gt;···&lt;/strong&gt; menu, you can &lt;strong&gt;Copy share link&lt;/strong&gt;, &lt;strong&gt;See insights&lt;/strong&gt;, and &lt;strong&gt;Delete AI&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;For example, the share link for Stache Forcache is &lt;a href="https://aistudio.instagram.com/ai/8961958923866535/?utm_source=share" rel="noopener noreferrer"&gt;https://aistudio.instagram.com/ai/8961958923866535/?utm_source=share&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Under See insights, you can see statistics like total chats, average message sends, total messages, and positive feedback (as a percentage). I don’t have any data for my AI character so far.&lt;/p&gt;

&lt;p&gt;Depending on your discoverability settings, you may also be able to find your AI character in Messenger, Instagram and WhatsApp. Here’s an example of Stache in each app:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxdbbdydg93rf108ksfb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxdbbdydg93rf108ksfb.png" alt="Stache Forcache in Messenger, Instagram, and WhatsApp" width="800" height="458"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Stache Forcache in Messenger, Instagram, and WhatsApp&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That’s it! Anyone can create a custom AI character using AI Studio. For additional information, check out the &lt;a href="https://www.linkedin.com/redir/redirect?url=https%3A%2F%2Fstatic%2Exx%2Efbcdn%2Enet%2Frsrc%2Ephp%2FyX%2Fr%2FjMmNa83q89C%2Epdf&amp;amp;urlhash=K6vJ&amp;amp;trk=article-ssr-frontend-pulse_little-text-block" rel="noopener noreferrer"&gt;AI Studio Handbook&lt;/a&gt; (or click Creation guide in AI Studio).&lt;/p&gt;

&lt;p&gt;If you want to learn more about Movember or want to chat about men’s health, give &lt;a href="https://aistudio.instagram.com/ai/8961958923866535/?utm_source=share" rel="noopener noreferrer"&gt;Stache&lt;/a&gt; a try. You can also search for &lt;em&gt;Stache Forcache&lt;/em&gt; in Messenger, Instagram and WhatsApp.&lt;/p&gt;

&lt;p&gt;Give AI Studio a try and let me know what you create!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at&lt;/em&gt; &lt;a href="https://www.linkedin.com/pulse/meet-stache-forcache-movember-themed-ai-character-created-ed-miller-bfy7c/" rel="noopener noreferrer"&gt;&lt;em&gt;https://www.linkedin.com&lt;/em&gt;&lt;/a&gt;&lt;em&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>movember</category>
      <category>metaai</category>
      <category>chatbots</category>
      <category>generativeaitools</category>
    </item>
    <item>
      <title>Meet Stache Forcache, a Movember-themed AI created using Amazon PartyRock</title>
      <dc:creator>Ed Miller</dc:creator>
      <pubDate>Wed, 27 Nov 2024 01:21:09 +0000</pubDate>
      <link>https://dev.to/aws-heroes/meet-stache-forcache-a-movember-themed-ai-created-using-amazon-partyrock-4lc3</link>
      <guid>https://dev.to/aws-heroes/meet-stache-forcache-a-movember-themed-ai-created-using-amazon-partyrock-4lc3</guid>
      <description>&lt;p&gt;We're nearing the end of Movember. On the 1st of the month, I wrote a &lt;a href="https://www.linkedin.com/pulse/how-ill-mo-my-own-way-movember-ed-miller-51h4c/" rel="noopener noreferrer"&gt;blog about how I will Mo My Own Way&lt;/a&gt; by building a men's health chatbot. For my first exploration, I used Meta AI Studio (read about it &lt;a href="https://www.linkedin.com/pulse/meet-stache-forcache-movember-themed-ai-character-created-ed-miller-bfy7c/" rel="noopener noreferrer"&gt;here&lt;/a&gt;). In this post, I will describe building a similar chatbot using &lt;a href="https://partyrock.aws/" rel="noopener noreferrer"&gt;Amazon PartyRock Playground&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;PartyRock is an Amazon playground for building AI-generated apps. It's powered by &lt;a href="https://aws.amazon.com/bedrock/" rel="noopener noreferrer"&gt;Amazon Bedrock&lt;/a&gt;, so it has powerful generative AI foundation models, but it also uses GenAI to help you create your app without any heavy lifting. Get started by heading to &lt;a href="https://partyrock.aws/" rel="noopener noreferrer"&gt;https://partyrock.aws/&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F877zat7kh90eg6x0bplv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F877zat7kh90eg6x0bplv.png" alt="PartyRock Welcome page" width="800" height="520"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Generate app&lt;/strong&gt; to begin. You will get a popup where you can define what your app will do in plain text.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foz1noj0djydgnlyp2w2b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foz1noj0djydgnlyp2w2b.png" alt="Generate app dialog" width="727" height="466"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For example, I have started with:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Stache is a hipster barber with a bushy mustache and a passion for men's health. He's always ready to share his expertise and advice on a wide range of men's health concerns, especially mental health, prostate cancer and testicular cancer. With a friendly and approachable demeanor, Stache makes it easy for men to open up about their health concerns and get the support they need. Whether you're looking for advice on how to improve your mental health or seeking guidance on how to talk to your friends about health issues, Stache is here to help.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;PartyRock will generate an app, including a set of widgets with various functionality:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb72284myh5je0lmzlfea.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb72284myh5je0lmzlfea.png" alt="Generate app" width="800" height="630"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Edit&lt;/strong&gt; to modify widget locations, layout and content:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgvfcxjjigf629ar1l5xq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgvfcxjjigf629ar1l5xq.png" alt="Edit" width="800" height="635"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For example, I removed a few of the widgets, leaving only &lt;strong&gt;Chat with Stache&lt;/strong&gt;. For a chat widget, you can set the prompt, labels and model. For the &lt;strong&gt;Prompt&lt;/strong&gt;, I described how I want Stache to talk and defined what he knows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6b3o7c3zli10zn2u8sr4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6b3o7c3zli10zn2u8sr4.png" alt="Chat prompt" width="449" height="628"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Labels&lt;/strong&gt; tab includes the widget title, some placeholder text and the initial message:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fejqcyelew0ypmitc2j7j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fejqcyelew0ypmitc2j7j.png" alt="Chat labels" width="444" height="605"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Model&lt;/strong&gt; tab lets you select a model:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyziitkeq5rpvmf8apukn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyziitkeq5rpvmf8apukn.png" alt="Chat model select" width="411" height="530"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I selected Claude 3.5 Sonnet. You can also set the model's Temperature and Top P, which control how random and creative the responses are (see the sketch below). Since I am looking for a more factual chatbot, I have set both to 0.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1kpymkkd06guhkgy2963.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1kpymkkd06guhkgy2963.png" alt="Chat model" width="442" height="446"&gt;&lt;/a&gt; &lt;/p&gt;
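
&lt;p&gt;For context, here is a minimal sketch of how temperature and top-p (nucleus) sampling typically work in an LLM decoder. This shows the standard mechanism in generic Python, not PartyRock's actual implementation:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import numpy as np

def sample_next_token(logits, temperature=1.0, top_p=1.0):
    """Illustrative temperature + top-p sampling (not PartyRock internals)."""
    # Temperature 0 is treated as greedy decoding: always pick the most
    # likely token, which makes the output deterministic.
    if temperature == 0:
        return int(np.argmax(logits))
    # Temperature rescales the logits; values below 1 sharpen the
    # distribution, values above 1 flatten it (more randomness).
    # Subtract the max logit for numerical stability.
    z = (logits - np.max(logits)) / temperature
    probs = np.exp(z)
    probs = probs / probs.sum()
    # Top P keeps the smallest set of tokens whose cumulative probability
    # reaches top_p, then renormalizes before sampling.
    order = np.argsort(probs)[::-1]
    cutoff = int(np.searchsorted(np.cumsum(probs[order]), top_p)) + 1
    keep = order[:cutoff]
    return int(np.random.choice(keep, p=probs[keep] / probs[keep].sum()))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With both settings at 0, the sampler always takes the single most likely next token, which is why the responses come out consistent rather than creative.&lt;/p&gt;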

&lt;p&gt;Once you have everything set up, you can click &lt;strong&gt;Leave edit&lt;/strong&gt; and try it out:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5pdnfyagmnm4l3cxhwa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5pdnfyagmnm4l3cxhwa.png" alt="Stache's Health Cache example" width="800" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By default, all apps are private. If you want to share, click &lt;strong&gt;Share&lt;/strong&gt; and set the sharing level (public, shared or private):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa3t2pjdavcbwzf3wb2b5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa3t2pjdavcbwzf3wb2b5.png" alt="Share dialog" width="771" height="525"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have an especially interesting interaction with your app, you can click &lt;strong&gt;Snapshot&lt;/strong&gt; to save the session. You can find all your snapshots in the &lt;strong&gt;Snapshots&lt;/strong&gt; section. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ilpzov4jlf0sx61tann.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ilpzov4jlf0sx61tann.png" alt="Snapshots" width="800" height="321"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here you can get a URL to share or delete the snapshot. For example, you can see the snapshot from a chat with Stache here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://partyrock.aws/u/bluevalhalla/b2ofzFpv1/Stache's-Health-Cache/snapshot/bKOgCILud" rel="noopener noreferrer"&gt;https://partyrock.aws/u/bluevalhalla/b2ofzFpv1/Stache's-Health-Cache/snapshot/bKOgCILud&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's all it takes to create an AI chatbot app using PartyRock. Give it a try and drop a comment below to let me know what you created.&lt;/p&gt;

&lt;p&gt;If you liked this post, please consider donating to my Movember fundraising page here:&lt;br&gt;
&lt;a href="https://movember.com/m/bluevalhalla" rel="noopener noreferrer"&gt;https://movember.com/m/bluevalhalla&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's Mo!&lt;/p&gt;

</description>
      <category>partyrock</category>
      <category>genai</category>
      <category>aws</category>
    </item>
    <item>
      <title>Intro to Llama on Graviton</title>
      <dc:creator>Ed Miller</dc:creator>
      <pubDate>Wed, 28 Aug 2024 20:19:35 +0000</pubDate>
      <link>https://dev.to/aws-heroes/intro-to-llama-on-graviton-1dc</link>
      <guid>https://dev.to/aws-heroes/intro-to-llama-on-graviton-1dc</guid>
      <description>&lt;p&gt;&lt;em&gt;Note: Banner image generated by AI.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Are you curious about how to supercharge your application with AI while cutting costs? Discover how running Large Language Models on AWS Graviton can offer you the necessary performance at a fraction of the price.&lt;/p&gt;

&lt;p&gt;It has been less than two years since ChatGPT changed the virtual face of AI. Since then, large language models (LLMs) have been all the rage. Adding a chatbot to your application may dramatically increase user interaction, but LLMs require complicated and costly infrastructure. Or do they?&lt;/p&gt;

&lt;p&gt;After watching the “&lt;a href="https://www.twitch.tv/videos/2212333167?t=0h40m8s" rel="noopener noreferrer"&gt;Generative AI Inference using AWS Graviton Processors&lt;/a&gt;” session from the AWS AI Infrastructure Day, I was inspired to share how you can run an LLM using the same Graviton processors as the rest of your application.&lt;/p&gt;

&lt;p&gt;In this post, we will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set up a Graviton instance.&lt;/li&gt;
&lt;li&gt;Follow the steps (with some modifications) in “Deploy a Large Language Model (LLM) chatbot on Arm servers” from the Arm Developer Hub to:

&lt;ul&gt;
&lt;li&gt;Download and compile llama.cpp&lt;/li&gt;
&lt;li&gt;Download a Meta Llama 3.1 model using huggingface-cli&lt;/li&gt;
&lt;li&gt;Re-quantize the model using llama-quantize to optimize it for the target Graviton platform&lt;/li&gt;
&lt;li&gt;Run the model using llama-cli&lt;/li&gt;
&lt;li&gt;Evaluate performance&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Compare different instances of Graviton and discuss the pros and cons of each&lt;/li&gt;

&lt;li&gt;Point to resources for getting started&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Subsequent posts will dive deeper into application use cases, costs, and sustainability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Set up a Graviton instance
&lt;/h2&gt;

&lt;p&gt;First, let’s focus on the Graviton3-based r7g.16xlarge, a memory-optimized instance with 64 vCPUs and 512 GiB of memory. I’ll be running it in us-west-2. Using the console, navigate to EC2 Instances and select “Launch instances”. There are only a few fields necessary for a quick test:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Name: this is up to you; I have called mine ed-blog-r7g-16xl&lt;/li&gt;
&lt;li&gt;Application and OS Images

&lt;ul&gt;
&lt;li&gt;AMI: I am using Ubuntu Server 24.04 LTS (the default if you select Ubuntu)&lt;/li&gt;
&lt;li&gt;Architecture: Choose 64-bit (Arm)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Instance type: r7g.16xlarge&lt;/li&gt;

&lt;li&gt;Key pair: Select an existing one or create a new one&lt;/li&gt;

&lt;li&gt;Configure storage: I’m bumping this up to 32 GiB to make sure I have room for the code and Meta Llama models.&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuauhkt2vmeqc1217b7k8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuauhkt2vmeqc1217b7k8.png" alt="AWS Console EC2 Launch Settings" width="800" height="1165"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can leave the defaults for the rest; just click “Launch instance” after the Summary.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F84hhtsybua9g476zb824.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F84hhtsybua9g476zb824.png" alt="AWS Console EC2 Launch Summary" width="800" height="554"&gt;&lt;/a&gt;&lt;/p&gt;
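
&lt;p&gt;If you would rather script the launch than click through the console, here is a rough boto3 equivalent of the settings above. This is a sketch only: the AMI ID and key pair name are placeholders to replace with the current Ubuntu Server 24.04 LTS 64-bit (Arm) AMI for your region and your own key pair.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

response = ec2.run_instances(
    ImageId="ami-XXXXXXXXXXXXXXXXX",  # placeholder: Ubuntu 24.04 LTS Arm AMI
    InstanceType="r7g.16xlarge",
    KeyName="my-key-pair",            # placeholder: an existing EC2 key pair
    MinCount=1,
    MaxCount=1,
    # Bump the root volume to 32 GiB for the code and the Meta Llama models.
    BlockDeviceMappings=[{"DeviceName": "/dev/sda1", "Ebs": {"VolumeSize": 32}}],
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "ed-blog-r7g-16xl"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;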

&lt;p&gt;Once the instance has started, you can connect using your favorite method. For simplicity, I will use the EC2 Instance Connect method, which will provide a terminal in your browser window:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxc6x3kj0tsni8a45o8u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxc6x3kj0tsni8a45o8u.png" alt="AWS Web Terminal" width="800" height="610"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Build and Run Meta Llama 3.1
&lt;/h2&gt;

&lt;p&gt;To build and run Meta Llama 3.1, we will follow the steps (with some modifications) in “&lt;a href="https://learn.arm.com/learning-paths/servers-and-cloud-computing/llama-cpu/" rel="noopener noreferrer"&gt;Deploy a Large Language Model (LLM) chatbot on Arm servers&lt;/a&gt;” from the Arm Developer Hub to:&lt;/p&gt;

&lt;h3&gt;
  
  
  Download and compile llama.cpp
&lt;/h3&gt;

&lt;p&gt;First, we install the prerequisites:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
sudo apt install make cmake -y
sudo apt install gcc g++ -y
sudo apt install build-essential -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we clone llama.cpp and build it (the &lt;code&gt;-j$(nproc)&lt;/code&gt; flag will use all available vCPU cores to speed up compilation):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make -j$(nproc)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we can test it using the help flag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./llama-cli -h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Download Meta Llama 3.1
&lt;/h3&gt;

&lt;p&gt;Next, we’ll set up a virtual environment for Python packages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install python-is-python3 python3-pip python3-venv -y
python -m venv venv
source venv/bin/activate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now install the Hugging Face Hub package and use its CLI to download a 4-bit quantized version of Meta Llama 3.1:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install huggingface_hub
huggingface-cli download cognitivecomputations/dolphin-2.9.4-llama3.1-8b-gguf dolphin-2.9.4-llama3.1-8b-Q4_0.gguf --local-dir . --local-dir-use-symlinks False
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Re-quantize the model
&lt;/h3&gt;

&lt;p&gt;The model we downloaded is already 4-bit quantized (half-byte per weight). This gives us a 4x improvement in model size compared with the original bfloat16 (2-byte per weight). However, the width of the Scalable Vector Extension (SVE) is different for Graviton3 (2x256-bit &lt;a href="https://developer.arm.com/documentation/102476/0100/Introducing-SVE" rel="noopener noreferrer"&gt;SVE&lt;/a&gt;) and Graviton4 (4x128-bit &lt;a href="https://developer.arm.com/documentation/102340/0100/Introducing-SVE2" rel="noopener noreferrer"&gt;SVE2&lt;/a&gt;). Graviton2 does not have SVE but will use 2x128-bit &lt;a href="https://developer.arm.com/Architectures/Neon" rel="noopener noreferrer"&gt;Arm Neon&lt;/a&gt; technology. To maximize the throughput for each generation, you should re-quantize the model with the following block layouts: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Graviton2: 4x4 (Q4_0_4_4)&lt;/li&gt;
&lt;li&gt;Graviton3: 8x8 (Q4_0_8_8)&lt;/li&gt;
&lt;li&gt;Graviton4: 4x8 (Q4_0_4_8)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For the Graviton3 instance, we will re-quantize the model using &lt;code&gt;llama-quantize&lt;/code&gt; as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./llama-quantize --allow-requantize dolphin-2.9.4-llama3.1-8b-Q4_0.gguf dolphin-2.9.4-llama3.1-8b-Q4_0_8_8.gguf Q4_0_8_8
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Run the model
&lt;/h3&gt;

&lt;p&gt;Finally, we can run the model using llama-cli. There are a few arguments we will use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Model (-m): The optimized model for Graviton3, dolphin-2.9.4-llama3.1-8b-Q4_0_8_8.gguf&lt;/li&gt;
&lt;li&gt;Prompt (-p): As a test prompt, we’ll use “Building a visually appealing website can be done in ten simple steps”&lt;/li&gt;
&lt;li&gt;Response length (-n): We’ll ask for 512 tokens&lt;/li&gt;
&lt;li&gt;Thread count (-t): We want to use all 64 of the vCPUs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here’s the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./llama-cli -m dolphin-2.9.4-llama3.1-8b-Q4_0_8_8.gguf -p "Building a visually appealing website can be done in ten simple steps:" -n 512 -t 64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you run the command, you should see several parameters printed out, followed by the generated text (starting with the prompt) and finally performance statistics:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4z77wnfd3slvr7k6bnlp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4z77wnfd3slvr7k6bnlp.png" alt="llama-cli Terminal Output" width="800" height="486"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Evaluate Performance
&lt;/h3&gt;

&lt;p&gt;The two lines highlighted above are the prompt evaluation time and the text generation time. These are two of the key metrics for user experience with LLMs. The prompt evaluation time reflects how long it takes the LLM to process the prompt and start to respond. The text generation time is how long it takes to generate the output. In both cases, the metric can be viewed in terms of tokens per second (T/s). For our run we see:&lt;/p&gt;

&lt;p&gt;Evaluation: 278.2 T/s&lt;br&gt;
Generation: 47.7 T/s&lt;/p&gt;

&lt;p&gt;At 47.7 T/s, the 512 tokens we requested take roughly 11 seconds to generate.&lt;/p&gt;

&lt;p&gt;If you run the standard Q4_0 quantization with everything else the same, as in this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./llama-cli -m dolphin-2.9.4-llama3.1-8b-Q4_0.gguf -p "Building a visually appealing website can be done in ten simple steps:" -n 512 -t 64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will see a decrease in performance:&lt;/p&gt;

&lt;p&gt;Evaluation: 164.6 T/s&lt;br&gt;
Generation: 28.1 T/s&lt;/p&gt;

&lt;p&gt;Using the correct quantization format (Q4_0_8_8, in this case), you get close to a 70% improvement in both evaluation (278.2 vs. 164.6 T/s) and generation (47.7 vs. 28.1 T/s)!&lt;/p&gt;

&lt;p&gt;When you are done with your tests, don’t forget to stop the instance!&lt;/p&gt;

&lt;h2&gt;
  
  
  Comparing Graviton-based instances
&lt;/h2&gt;

&lt;p&gt;Using the process above, we can run the same model on similarly equipped Graviton2- and Graviton4-based instances. Using the optimal quantization format for each, we can see an increase in performance from generation to generation:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Generation&lt;/th&gt;
&lt;th&gt;Instance&lt;/th&gt;
&lt;th&gt;Quant&lt;/th&gt;
&lt;th&gt;Eval (T/s)&lt;/th&gt;
&lt;th&gt;Gen (T/s)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Graviton2&lt;/td&gt;
&lt;td&gt;r6g.16xlarge&lt;/td&gt;
&lt;td&gt;Q4_0_4_4&lt;/td&gt;
&lt;td&gt;175.4&lt;/td&gt;
&lt;td&gt;25.1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Graviton3&lt;/td&gt;
&lt;td&gt;r7g.16xlarge&lt;/td&gt;
&lt;td&gt;Q4_0_8_8&lt;/td&gt;
&lt;td&gt;278.2&lt;/td&gt;
&lt;td&gt;42.7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Graviton4&lt;/td&gt;
&lt;td&gt;r8g.16xlarge&lt;/td&gt;
&lt;td&gt;Q4_0_4_8&lt;/td&gt;
&lt;td&gt;341.8&lt;/td&gt;
&lt;td&gt;65.6&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The performance differences are due to vectorization extensions, caching, clock speed, and memory bandwidth. You may see some variation at lower vCPU/thread counts and when using different instance types: general purpose (M), compute optimized (C), etc. Graviton4 also has more cores per chip, with instances available up to 192 vCPUs!&lt;/p&gt;

&lt;p&gt;Determining which instances meet your needs depends on your application. For interactive applications, you may want low evaluation latency and a text generation speed of more than 10 T/s. Any of the 64 vCPU instances can easily meet the generation requirement, but you may need to consider the expected size of prompts to determine evaluation latency. Graviton2 performance shows that serverless solutions using AWS Lambda may be possible, especially for non-time-critical applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get Started!
&lt;/h2&gt;

&lt;p&gt;As you can see, running Meta Llama models on AWS Graviton is straightforward. This is an easy way to test out models for your own applications. In many cases, Graviton may be the most cost-effective way of integrating LLMs with your application. I’ll explore this further in the coming months.&lt;/p&gt;

&lt;p&gt;In the meantime, here are some resources to help you get started:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://learn.arm.com/learning-paths/servers-and-cloud-computing/llama-cpu/" rel="noopener noreferrer"&gt;Deploy a Large Language Model (LLM) chatbot on Arm servers&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/ec2/graviton/" rel="noopener noreferrer"&gt;AWS Graviton Processors&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/aws/aws-graviton-getting-started" rel="noopener noreferrer"&gt;AWS Graviton Getting Started&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/aws/aws-graviton-getting-started/blob/main/machinelearning/llama.cpp.md" rel="noopener noreferrer"&gt;Large Language Model (LLM) inference on Graviton CPUs with llama.cpp&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Have fun!&lt;/p&gt;

</description>
      <category>llm</category>
      <category>arm</category>
      <category>tutorial</category>
      <category>aws</category>
    </item>
    <item>
      <title>AWS ML Heroes in 15: Amazon Rekognition for Wildlife Conservation</title>
      <dc:creator>Ed Miller</dc:creator>
      <pubDate>Mon, 13 Nov 2023 00:54:44 +0000</pubDate>
      <link>https://dev.to/aws-heroes/aws-ml-heroes-in-15-amazon-rekognition-for-wildlife-conservation-4gnf</link>
      <guid>https://dev.to/aws-heroes/aws-ml-heroes-in-15-amazon-rekognition-for-wildlife-conservation-4gnf</guid>
      <description>&lt;p&gt;On August 4 I presented an AWS Heroes in 15 session, "Amazon Rekognition for Wildlife Conservation". Here's the abstract:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In this webinar you'll learn how we used Amazon Rekognition as a bear detector to quickly implement the first stage in our Bearcam Companion machine learning pipeline. After collecting data for a season and engaging the Bearcam viewers to help label the data, we trained an improved bear detection model using Amazon Rekognition Custom Labels.&lt;br&gt;
Learning Objectives:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Objective 1: Learn how we integrated Amazon Rekognition with an Amplify application.&lt;/li&gt;
&lt;li&gt;Objective 2: Learn how we use a serverless Lambda function with Amazon Rekognition to automate bear detection.&lt;/li&gt;
&lt;li&gt;Objective 3: Learn how we engaged bearcam viewers and Amazon Rekognition Custom Labels to improve the bear detector.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;Check it out here:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/1FUEvwUw-4I"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;You can also read a related blog &lt;a href="https://dev.to/aws-heroes/amazon-rekognition-custom-labels-1dj5"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>machinelearning</category>
      <category>tech4wildlife</category>
      <category>video</category>
    </item>
    <item>
      <title>Amazon Rekognition Custom Labels for Bears</title>
      <dc:creator>Ed Miller</dc:creator>
      <pubDate>Tue, 25 Jul 2023 01:18:24 +0000</pubDate>
      <link>https://dev.to/aws-heroes/amazon-rekognition-custom-labels-1dj5</link>
      <guid>https://dev.to/aws-heroes/amazon-rekognition-custom-labels-1dj5</guid>
      <description>&lt;p&gt;In previous blogs about the &lt;a href="https://app.bearid.org/" rel="noopener noreferrer"&gt;Bearcam Companion&lt;/a&gt; application, I wrote about using &lt;a href="https://dev.to/aws-builders/bearcam-companion-github-user-groups-and-rekognition-3kdk"&gt;Amazon Rekognition as a bear detector&lt;/a&gt; and &lt;a href="https://dev.to/aws-builders/bearcam-companion-my-first-lambda-5931"&gt;automating it with a Lambda function&lt;/a&gt;. Rekognition saved me a lot of time in getting the application running. It can detect bears (and nearly 300 other objects), but it may not be a perfect fit for your application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faf7irzoo61ck91j53h4a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faf7irzoo61ck91j53h4a.png" alt="Bear image with bad labels" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see in the image above, sometimes Rekognition mis-labels bears as dogs, kangaroos, people, etc. I expect “bears” are not a big portion of the training data, especially in all the various poses we see on the bearcam. These mislabels need to be fixed before users can identify the bears in the application. &lt;/p&gt;

&lt;p&gt;I don’t really want to spend so much time manually adjusting labels. For most machine learning, the next step would be to fine-tune your model. You can essentially fine-tune &lt;a href="https://aws.amazon.com/rekognition/" rel="noopener noreferrer"&gt;Amazon Rekognition&lt;/a&gt; by using &lt;a href="https://aws.amazon.com/rekognition/custom-labels-features/?nc=sn&amp;amp;loc=3&amp;amp;dn=4" rel="noopener noreferrer"&gt;Custom Labels&lt;/a&gt;. You can do this to make it better at detecting specific objects (like bears) or to train it to detect new objects like your product or logo. It really depends on your application needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Custom Labels
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ui1wzg6aziy274zuaj8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ui1wzg6aziy274zuaj8.png" alt="Amazon Rekognition Custom Labels diagram" width="800" height="578"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are 6 steps for Amazon Rekognition Custom Labels:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create project&lt;/li&gt;
&lt;li&gt;Create dataset&lt;/li&gt;
&lt;li&gt;Label images&lt;/li&gt;
&lt;li&gt;Train model&lt;/li&gt;
&lt;li&gt;Evaluate&lt;/li&gt;
&lt;li&gt;Use model (see the sketch after this list)&lt;/li&gt;
&lt;/ol&gt;
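
&lt;p&gt;As a quick preview of the last step, calling a trained Custom Labels model from code takes only a few lines with boto3. The sketch below is illustrative: the project version ARN, bucket and image key are placeholders, not my actual resources.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

rekognition = boto3.client("rekognition")

# Placeholder ARN for your trained Custom Labels model version.
MODEL_ARN = "arn:aws:rekognition:us-west-2:123456789012:project/bears/version/bears.1/1"

# The model must be running before you can call it (a running model incurs
# cost). In practice, poll describe_project_versions until the status is
# RUNNING before detecting.
rekognition.start_project_version(ProjectVersionArn=MODEL_ARN, MinInferenceUnits=1)

# Detect bears in an image stored on S3 (placeholder bucket and key).
result = rekognition.detect_custom_labels(
    ProjectVersionArn=MODEL_ARN,
    Image={"S3Object": {"Bucket": "my-bucket", "Name": "frames/bear.jpg"}},
    MinConfidence=50,
)
for label in result["CustomLabels"]:
    print(label["Name"], label["Confidence"])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;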

&lt;h3&gt;
  
  
  Step 1: Create project
&lt;/h3&gt;

&lt;p&gt;You can create a Custom Labels project in the AWS console:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzx215poonsvrumade48m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzx215poonsvrumade48m.png" alt="Create project" width="800" height="309"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Create dataset
&lt;/h3&gt;

&lt;p&gt;Next, create a dataset. You can pre-split the dataset into train and test, or you can let Custom Labels do it for you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F90m101nxw53z0q8t9iw8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F90m101nxw53z0q8t9iw8.png" alt="Create dataset" width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are several ways to import the images; in my case, I imported them as a manifest file. The manifest can come from SageMaker Ground Truth, or you can create your own. Since I’ve been collecting and updating image labels in Bearcam Companion, I generated my own manifest using a Python script to extract data from the DynamoDB Object and Image tables, with links to the images on S3. I uploaded the manifest file to S3 and provided the link in the form:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5fz5ywnlyel3v264cx73.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5fz5ywnlyel3v264cx73.png" alt="Dataset form" width="800" height="782"&gt;&lt;/a&gt;&lt;/p&gt;
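
&lt;p&gt;For reference, here is a simplified sketch of the kind of script I used. The table names, attribute names, bucket and image size are illustrative rather than the real Bearcam Companion schema; each output line follows the SageMaker Ground Truth object-detection manifest format:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
from collections import defaultdict

import boto3

dynamodb = boto3.resource("dynamodb")

# Placeholder table names (scan() is paginated; a full script would
# follow LastEvaluatedKey to read every item).
images = dynamodb.Table("Images").scan()["Items"]
objects = dynamodb.Table("Objects").scan()["Items"]

# Group the bounding boxes by the image they belong to.
boxes_by_image = defaultdict(list)
for obj in objects:
    boxes_by_image[obj["imageId"]].append({
        "class_id": 0,
        "left": int(obj["left"]),
        "top": int(obj["top"]),
        "width": int(obj["width"]),
        "height": int(obj["height"]),
    })

with open("manifest.jsonl", "w") as manifest:
    for image in images:
        boxes = boxes_by_image[image["id"]]
        entry = {
            "source-ref": f"s3://my-bucket/{image['key']}",  # placeholder bucket
            "bounding-box": {
                "image_size": [{"width": 1280, "height": 720, "depth": 3}],  # placeholder size
                "annotations": boxes,
            },
            "bounding-box-metadata": {
                "objects": [{"confidence": 1} for _ in boxes],
                "class-map": {"0": "bear"},
                "type": "groundtruth/object-detection",
                "human-annotated": "yes",
                "creation-date": "2023-07-01T00:00:00",  # placeholder
                "job-name": "bearcam-labels",            # placeholder
            },
        }
        manifest.write(json.dumps(entry) + "\n")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;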

&lt;p&gt;I had a pre-split dataset: 80% for training and 20% for testing. This resulted in 2486 training images and 605 test images. Many images have multiple bears, which increases the total number of labels. You can view these in the console:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb6knm6jxuhbj4ba3thaq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb6knm6jxuhbj4ba3thaq.png" alt="Training dataset" width="800" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Label images
&lt;/h3&gt;

&lt;p&gt;My manifest file already included labels from Bearcam Companion. If your data is not labeled, you can add labels directly in the Custom Labels interface. You can also use Ground Truth for larger labeling jobs and farm them out to a broader workforce. I did find some errors in my dataset, like in the image below, where the bear on the left is missing a bounding box. I adjusted these labels in Custom Labels directly:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0pvlwpn2pg4x0rmwsr9f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0pvlwpn2pg4x0rmwsr9f.png" alt="Relabel data" width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Train model
&lt;/h3&gt;

&lt;p&gt;When the dataset is ready, you can start training. This may take several hours. When completed, the model is ready to run. But first, let’s look at the model performance:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc2knd2osgngvdu5h3m6f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc2knd2osgngvdu5h3m6f.png" alt="Training completed" width="800" height="359"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Evaluate
&lt;/h3&gt;

&lt;p&gt;Clicking on the model link will bring you to the evaluation page. Here you will find per-label performance metrics, such as F1 score, precision and recall, as well as the assumed threshold used for evaluation. My model has only one label (bear), but yours may have more.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0cay3jbuukgqz27jaxt8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0cay3jbuukgqz27jaxt8.png" alt="Model evaluation" width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;View the test results to see performance on each image in the test set. You can quickly filter on errors to see where your model may need improvement.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F70dki1tbf6odde6lixyo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F70dki1tbf6odde6lixyo.png" alt="Test performance" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is a false negative example where the model missed one of the bears (correct in green; incorrect in red):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F029kxqrtgx6u88ja09iv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F029kxqrtgx6u88ja09iv.png" alt="False negative" width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is a false positive example where the model labeled branches in the water as a bear:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F54q71ml7txd7q4equ1bv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F54q71ml7txd7q4equ1bv.png" alt="False positive" width="800" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You may even find errors in the test set, like this image where the boxes shown in red are actually correct.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzl6cboy8xf1l4p9b8jl9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzl6cboy8xf1l4p9b8jl9.png" alt="Bad false positive" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can fix the labels and retrain the model.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 6: Use model
&lt;/h3&gt;

&lt;p&gt;Once you are happy with the model, you can start (and stop) it in the console:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fynj1krztmuwjkrioul02.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fynj1krztmuwjkrioul02.png" alt="Use model" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Example code for calling the model is provided. You can choose how many instances (inference units) of your model to run; more instances increase throughput. Keep in mind that you are charged in instance-hours, that is, the number of instances running multiplied by the hours they run.&lt;/p&gt;
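
&lt;p&gt;The pattern boils down to start, detect, stop. Here is a minimal sketch using boto3; the ARNs, bucket and key are placeholders, not real resources:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Sketch: start the model, detect bears in one image, then stop the model
import boto3

rekognition = boto3.client("rekognition")

# Placeholder ARNs; copy the real ones from the Custom Labels console
project_arn = "arn:aws:rekognition:us-west-2:111122223333:project/bears/1234567890"
model_arn = project_arn + "/version/bears.2023-06-01/0987654321"

# Start the model; this can take several minutes
rekognition.start_project_version(
    ProjectVersionArn=model_arn,
    MinInferenceUnits=1,  # more units give more throughput, at higher cost
)
rekognition.get_waiter("project_version_running").wait(
    ProjectArn=project_arn, VersionNames=["bears.2023-06-01"]
)

# Detect bears in an image stored on S3
response = rekognition.detect_custom_labels(
    ProjectVersionArn=model_arn,
    Image={"S3Object": {"Bucket": "bearcam-images", "Name": "frame.jpg"}},
    MinConfidence=50,
)
for label in response["CustomLabels"]:
    print(label["Name"], label["Confidence"], label.get("Geometry"))

# Stop the model when done, since billing accrues while it runs
rekognition.stop_project_version(ProjectVersionArn=model_arn)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;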

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this post, we learned how to create a specialized detector using Amazon Rekognition Custom Labels. We did this all in the console (other than creating our dataset). The process is simple and cost-effective for many applications.&lt;/p&gt;

&lt;p&gt;It turns out Custom Labels may not be a good fit for Bearcam Companion. We only need 12 inferences per hour, and I'm not sure we can spin Custom Labels up and down frequently enough to keep the costs down. For our low usage, we would prefer an on-demand model for bear detection. So one of the next projects is to train a bear detector using Amazon SageMaker.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4s0gsyz1t4x4zpuffqwi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4s0gsyz1t4x4zpuffqwi.png" alt="Bear detector with Amazon SageMaker" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We'll save that for another time.&lt;/p&gt;

&lt;p&gt;For more on Amazon Rekognition and Custom Labels, check out my &lt;a href="https://www.linkedin.com/events/7078058962075471872/" rel="noopener noreferrer"&gt;AWS ML Heroes in 15: Amazon Rekognition for Wildlife Conservation&lt;/a&gt; talk on August 4, 2023.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>machinelearning</category>
      <category>tech4wildlife</category>
      <category>nocode</category>
    </item>
    <item>
      <title>AWS Heroes in 15: Bear Conservation with ML, Serverless and Citizen Science</title>
      <dc:creator>Ed Miller</dc:creator>
      <pubDate>Sun, 14 May 2023 18:40:02 +0000</pubDate>
      <link>https://dev.to/aws-heroes/aws-heroes-in-15-bear-conservation-with-ml-serverless-and-citizen-science-5cia</link>
      <guid>https://dev.to/aws-heroes/aws-heroes-in-15-bear-conservation-with-ml-serverless-and-citizen-science-5cia</guid>
      <description>&lt;p&gt;On April 7 I presented an AWS Heroes in 15 session, "Bear Conservation with ML, Serverless and Citizen Science". In the session I covered the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Developing a web application using AWS Amplify Studio&lt;/li&gt;
&lt;li&gt;Using serverless Lambda functions with Amazon Rekognition to automate a web application&lt;/li&gt;
&lt;li&gt;Leveraging Amazon Rekognition and Amazon SageMaker to engage Explore.org viewers for human-in-the-loop machine learning&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Check it out here:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/nHNP3-IHTkQ"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>aws</category>
      <category>machinelearning</category>
      <category>tech4wildlife</category>
      <category>video</category>
    </item>
    <item>
      <title>Interview: Using Machine Learning to Build Bear Identification Technology</title>
      <dc:creator>Ed Miller</dc:creator>
      <pubDate>Sun, 14 May 2023 18:29:02 +0000</pubDate>
      <link>https://dev.to/aws-heroes/interview-using-machine-learning-to-build-bear-identification-technology-31ai</link>
      <guid>https://dev.to/aws-heroes/interview-using-machine-learning-to-build-bear-identification-technology-31ai</guid>
      <description>&lt;p&gt;I spoke with Linda Haviv about the BearID Project at AWS re:Invent in 2022. See the full interview here:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/sKGau7c53go"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>aws</category>
      <category>machinelearning</category>
      <category>tech4wildlife</category>
      <category>video</category>
    </item>
    <item>
      <title>2022: A Year in Review</title>
      <dc:creator>Ed Miller</dc:creator>
      <pubDate>Wed, 04 Jan 2023 07:52:49 +0000</pubDate>
      <link>https://dev.to/aws-heroes/2022-a-year-in-review-npd</link>
      <guid>https://dev.to/aws-heroes/2022-a-year-in-review-npd</guid>
      <description>&lt;p&gt;2022 started off no different from other years (well, at least compared with years in the COVID-era). However, things changed in early March when I was accepted into the &lt;a href="https://aws.amazon.com/developer/community/community-builders/" rel="noopener noreferrer"&gt;AWS Community Builders&lt;/a&gt; program.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Community Builder
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ubpe1urp3cz2uim5um1.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ubpe1urp3cz2uim5um1.jpeg" alt="AWS Community Builder Banner" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The AWS Community Builders program offers technical resources, education, and networking opportunities to AWS technical enthusiasts and emerging thought leaders who are passionate about sharing knowledge and connecting with the technical community.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I was interested in the AWS Community Builder program as it related to my role at Arm, where I lead many of our technical engagements with Amazon (including AWS). Arm and AWS have a deep relationship around the Graviton EC2 family, which is built on Arm compute cores. However, I am more involved in IoT and Machine Learning, so I ended up in the Machine Learning community.&lt;/p&gt;

&lt;p&gt;I planned to use this opportunity to share my experience in IoT and Machine Learning while expanding my knowledge of AWS services. I set up a new blog on &lt;a href="https://dev.to/bluevalhalla"&gt;dev.to&lt;/a&gt; and wrote my first &lt;a href="https://dev.to/aws-builders/aws-community-builders-my-first-step-2b07"&gt;post&lt;/a&gt; describing my goals. More on my involvement with the AWS Community Builders soon…&lt;/p&gt;

&lt;h3&gt;
  
  
  Multi-species BearID
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwtsqxw4u9od86c8om3oo.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwtsqxw4u9od86c8om3oo.jpeg" alt="Multi-species Bear Face Detector" width="800" height="260"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In April, our second paper from the BearID Project, &lt;a href="https://link.springer.com/article/10.1007/s42991-021-00168-5" rel="noopener noreferrer"&gt;Multispecies facial detection for individual identification of wildlife: a case study across ursids&lt;/a&gt;, was published in Mammalian Biology. For this paper we collaborated with Russ Van Horn of San Diego Zoo Wildlife Alliance to train a multispecies bear face detector using images of bears living under human care. We were then able to use this multispecies detector to build a full pipeline for identifying Andean Bears.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Summit San Francisco
&lt;/h3&gt;

&lt;p&gt;Also in April, I attended the AWS Summit in San Francisco. Although I have been to many AWS events, this was my first as an AWS Community Builder. The AWS CB team hosted a great reception one evening, where I had a chance to meet my fellow community builders and the AWS team.&lt;/p&gt;

&lt;h3&gt;
  
  
  BearID at the Edge
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs4rmjt77on4oe32rjcdv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs4rmjt77on4oe32rjcdv.png" alt="Azure Percept Screen Capture" width="800" height="646"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In May, I completed an experimental edge deployment of BearID models trained in Azure Custom Vision using &lt;a href="https://azure.microsoft.com/en-us/products/azure-percept/" rel="noopener noreferrer"&gt;Azure Percept&lt;/a&gt;. The blog post on the Microsoft IoT Blog, &lt;a href="https://techcommunity.microsoft.com/t5/internet-of-things-blog/wildlife-monitoring-and-conservation-with-azure-percept/ba-p/3390910" rel="noopener noreferrer"&gt;Wildlife Monitoring and Conservation with Azure Percept&lt;/a&gt;, details the training and deployment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon re:MARS
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frq4sb2cyoldzqt6ztr39.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frq4sb2cyoldzqt6ztr39.jpeg" alt="Jetbot and Magic Leap" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In June I attended the &lt;a href="https://remars.amazonevents.com/" rel="noopener noreferrer"&gt;Amazon re:MARS&lt;/a&gt; conference in Las Vegas. This was my second time attending this event which focuses on Machine Learning, Automation, Robotics and Space. I attended a lot of great sessions, including a workshop combining robotics and augmented reality (see the photo of the Jetbot and Magic Leap above). I managed to connect with other AWS Community Builders and a couple AWS Heroes, including &lt;a href="https://twitter.com/petehanssens" rel="noopener noreferrer"&gt;Peter Hanssens&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1539455887517130753-106" src="https://platform.twitter.com/embed/Tweet.html?id=1539455887517130753"&gt;
&lt;/iframe&gt;

  // Detect dark theme
  var iframe = document.getElementById('tweet-1539455887517130753-106');
  if (document.body.className.includes('dark-theme')) {
    iframe.src = "https://platform.twitter.com/embed/Tweet.html?id=1539455887517130753&amp;amp;theme=dark"
  }



&lt;/p&gt;

&lt;h3&gt;
  
  
  Bearcam Companion
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1q62d8tnkvvm9mvfzg5d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1q62d8tnkvvm9mvfzg5d.png" alt="Bearcam Companion Screen Capture" width="800" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From April through September, I was working on an AWS-based web application, the &lt;a href="https://app.bearid.org/" rel="noopener noreferrer"&gt;Bearcam Companion&lt;/a&gt;. I documented the development from start to finish in the following 8-part blog series:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://dev.to/aws-builders/bearcam-companion-585i"&gt;Part 1&lt;/a&gt;: provide the background and define the M❤️P&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dev.to/godspowercuche/bearcam-companion-amplify-studio-5edk-temp-slug-3395732"&gt;Part 2&lt;/a&gt;: use Amplify Studio to define the data model&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dev.to/aws-builders/bearcam-companion-amplify-and-react-ok3"&gt;Part 3&lt;/a&gt;: develop the frontend with Amplify and React&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dev.to/aws-builders/bearcam-companion-ui-improvements-authentication-and-identifications-3h4i"&gt;Part 4&lt;/a&gt;: add authentication and improve the UX&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dev.to/aws-builders/bearcam-companion-github-user-groups-and-rekognition-3kdk"&gt;Part 5&lt;/a&gt;: version control with GitHub and ML with Rekognition&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dev.to/godspowercuche/bearcam-companion-amplify-storage-2pib-temp-slug-2348641"&gt;Part 6&lt;/a&gt;: storage with S3&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dev.to/aws-builders/bearcam-companion-my-first-lambda-5931"&gt;Part 7&lt;/a&gt;: Lamdas and Lanbda Layers&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dev.to/aws-builders/bearcam-companion-hosting-with-amplify-and-github-5bpm"&gt;Part 8&lt;/a&gt;: Deployment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can see a demo of the application here:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/bkUBLLvfvV0"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;In August, the Bearcam Companion blogs got some ❤️ from the &lt;a href="https://twitter.com/awsdevelopers" rel="noopener noreferrer"&gt;AWS Developers&lt;/a&gt; account on Twitter:&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1555643305207160833-910" src="https://platform.twitter.com/embed/Tweet.html?id=1555643305207160833"&gt;
&lt;/iframe&gt;

  // Detect dark theme
  var iframe = document.getElementById('tweet-1555643305207160833-910');
  if (document.body.className.includes('dark-theme')) {
    iframe.src = "https://platform.twitter.com/embed/Tweet.html?id=1555643305207160833&amp;amp;theme=dark"
  }



&lt;/p&gt;

&lt;h3&gt;
  
  
  IoT Builders Live
&lt;/h3&gt;

&lt;p&gt;In September, I represented Arm as the guest on the first ever &lt;a href="https://www.youtube.com/@iotbuilders" rel="noopener noreferrer"&gt;AWS IoT Builders&lt;/a&gt; live stream. I had a great time talking about IoT development with hosts &lt;a href="https://twitter.com/dangross" rel="noopener noreferrer"&gt;Dan Gross&lt;/a&gt; and &lt;a href="https://twitter.com/nenadilic84" rel="noopener noreferrer"&gt;Nenad Ilic&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/JqoHIt7wm7E"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Imagine
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://www.edgeimpulse.com/imagine" rel="noopener noreferrer"&gt;Edge Impulse Imagine&lt;/a&gt; conference was also in September. I attended the first day, which was live at the &lt;a href="https://computerhistory.org/" rel="noopener noreferrer"&gt;Computer History Museum&lt;/a&gt; in Mountain View, CA. The event featured presentation and demos of the latest innovations in edge machine learning. During the keynote, &lt;a href="https://twitter.com/EdgeImpulse" rel="noopener noreferrer"&gt;Edge Impulse&lt;/a&gt; announced “the ultimate AI for nature camera” which they are developing with &lt;a href="https://twitter.com/conservationx" rel="noopener noreferrer"&gt;Conservation X Labs&lt;/a&gt; and the &lt;a href="https://twitter.com/arribada_i" rel="noopener noreferrer"&gt;Arribada Initiative&lt;/a&gt;. There was also a great Impact Panel on conservation and AI ethics, moderated by &lt;a href="https://www.wildlabs.net/" rel="noopener noreferrer"&gt;WILDLAB.NET&lt;/a&gt;’s &lt;a href="https://twitter.com/Steph_ODonnell" rel="noopener noreferrer"&gt;Stephanie O’Donnell&lt;/a&gt; (there was even a BearID cameo in her intro slides):&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1575182474346319872-707" src="https://platform.twitter.com/embed/Tweet.html?id=1575182474346319872"&gt;
&lt;/iframe&gt;

  // Detect dark theme
  var iframe = document.getElementById('tweet-1575182474346319872-707');
  if (document.body.className.includes('dark-theme')) {
    iframe.src = "https://platform.twitter.com/embed/Tweet.html?id=1575182474346319872&amp;amp;theme=dark"
  }



&lt;/p&gt;

&lt;h3&gt;
  
  
  October Accolades
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xcknx6x2ip1oizlsj4x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xcknx6x2ip1oizlsj4x.png" alt="Amplify-ing Bears Blog" width="800" height="512"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;October brought more accolades for BearID, starting with an article in &lt;a href="https://www.thedailybeast.com/bearid-projects-facial-recognition-ai-will-help-save-the-katmai-national-park-fat-bears" rel="noopener noreferrer"&gt;The Daily Beast&lt;/a&gt; about the project and the Bearcam Companion application. The article coincided with &lt;a href="https://explore.org/fat-bear-week" rel="noopener noreferrer"&gt;Fat Bear Week&lt;/a&gt;. I was then interviewed on the AWS Twitch channel during a DeepRacer live stream &lt;em&gt;[Edit: the stream is no longer available]&lt;/em&gt;. The Bearcam Companion blogs were promoted by the &lt;a href="https://www.linkedin.com/feed/update/urn:li:activity:6993300636926713856/" rel="noopener noreferrer"&gt;AWS Machine Learning account on LinkedIn&lt;/a&gt;. Finally, I was recognized as a runner-up in the &lt;a href="https://townhall.hashnode.com/aws-amplify-x-hashnode-hackathon-winners" rel="noopener noreferrer"&gt;AWS Amplify x Hashnode Hackathon&lt;/a&gt; for the Bearcam Companion, as detailed in &lt;a href="https://bluevalhalla.hashnode.dev/amplify-ing-bears" rel="noopener noreferrer"&gt;Amplify-ing Bears&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Arm DevSummit
&lt;/h3&gt;

&lt;p&gt;Also in October, I published a blog titled &lt;a href="https://dev.to/godspowercuche/accelerate-iot-development-with-arm-virtual-hardware-on-aws-ej6-temp-slug-3246660"&gt;Accelerate IoT Development with Arm Virtual Hardware on AWS&lt;/a&gt;, leading up to the &lt;a href="https://devsummit.arm.com/" rel="noopener noreferrer"&gt;Arm DevSummit&lt;/a&gt;. &lt;em&gt;Note: you can still watch content on-demand!&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Machine Learning Hero
&lt;/h3&gt;

&lt;p&gt;In one of the top highlights of the year, I was selected as an &lt;a href="https://aws.amazon.com/developer/community/heroes/?community-heroes-all.sort-by=item.additionalFields.sortPosition&amp;amp;community-heroes-all.sort-order=asc&amp;amp;awsf.filter-hero-category=heroes%23ml&amp;amp;awsf.filter-location=*all&amp;amp;awsf.filter-year=*all&amp;amp;awsf.filter-activity=*all" rel="noopener noreferrer"&gt;AWS Machine Learning Hero&lt;/a&gt; in the final cohort of 2022. This came as a surprise, as I had only been selected for the &lt;a href="https://aws.amazon.com/developer/community/community-builders/" rel="noopener noreferrer"&gt;AWS Community Builders&lt;/a&gt; program in February.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuy4gzwfeoxv636uo6klw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuy4gzwfeoxv636uo6klw.png" alt="Hero Card" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I believe my work on the Bearcam Companion, the blogs and interviews, and most importantly, the amazing support from the AWS Machine Learning channel team, is what put me on the fast track to becoming an AWS Machine Learning Hero. But what is an AWS Machine Learning Hero? According to the &lt;a href="https://aws.amazon.com/developer/community/heroes/" rel="noopener noreferrer"&gt;AWS Heroes webpage&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;AWS Machine Learning Heroes are developers and academics who are proficient with deep learning frameworks and are passionate enthusiasts of emerging Amazon ML technologies. They enjoy helping developers of all machine learning proficiencies learn and apply ML, at speed and scale, through Hero blog posts, videos, and technical sessions, as well as direct engagement.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You can read about me and my 6 fellow AWS Hero inductees in this &lt;a href="https://aws.amazon.com/blogs/aws/introducing-our-final-aws-heroes-of-the-year-november-2022/" rel="noopener noreferrer"&gt;blog post&lt;/a&gt;. You can also find my card (shown above) &lt;a href="https://aws.amazon.com/developer/community/heroes/ed-miller/?did=dh_card&amp;amp;trk=dh_card" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS re:Invent
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvw3e5772yvclaz35ruzi.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvw3e5772yvclaz35ruzi.jpeg" alt="AWS re:Invent CTO Keynote" width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The last major activities for 2022 took place in Las Vegas at AWS re:Invent. This was my 4th re:Invent, but the first as an AWS Hero. I must say, being a Hero at re:Invent comes with some serious perks. From the Heroes dinner to special lounge access and front-row seats for a shout-out from &lt;a href="https://twitter.com/Werner" rel="noopener noreferrer"&gt;Werner Vogels&lt;/a&gt; during his keynote, it was quite the experience.&lt;/p&gt;

&lt;p&gt;I had also been selected as a &lt;a href="https://reinvent.awsevents.com/peertalk/experts/" rel="noopener noreferrer"&gt;PeerTalk Expert&lt;/a&gt;. &lt;a href="https://reinvent.awsevents.com/peertalk/" rel="noopener noreferrer"&gt;PeerTalk&lt;/a&gt; is a new onsite networking program for AWS re:Invent attendees. From the AWS Events app, you can connect with other attendees and set up face-to-face meetings. As an expert, I was available for connections and meetings and helped with promotion. I had a few great meetings through this tool.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhgp55aebrqfemgltzrqb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhgp55aebrqfemgltzrqb.png" alt="Arm Booth at re:Invent" width="800" height="295"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My main function at re:Invent was representing my employer, &lt;a href="https://arm.com/" rel="noopener noreferrer"&gt;Arm&lt;/a&gt;. Our team manned a small booth providing information about Graviton and our IoT solutions. I helped set up a Software Defined Camera demo, which showed the same containerized software stack (a vision-based ML model communicating with AWS IoT Core through Greengrass) running on 2 different physical boards and on &lt;a href="https://www.arm.com/products/development-tools/simulation/virtual-hardware" rel="noopener noreferrer"&gt;Arm Virtual Hardware&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3bn83ztv88dvrdjmvzxs.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3bn83ztv88dvrdjmvzxs.jpeg" alt="Dev Chat at re:Invent" width="768" height="482"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Photo by Reed Hinkel&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As the culmination of my &lt;a href="https://app.bearid.org/" rel="noopener noreferrer"&gt;Bearcam Companion&lt;/a&gt; work for this year, I presented a DevChat in the Developer Lounge in the re:Invent Expo. The presentation covered background on the &lt;a href="http://bearid.org/" rel="noopener noreferrer"&gt;BearID Project&lt;/a&gt;, the &lt;a href="https://explore.org/livecams/brown-bears/brown-bear-salmon-cam-brooks-falls" rel="noopener noreferrer"&gt;Explore.org bearcams&lt;/a&gt; and the application I built using various AWS services. Sadly the presentation wasn’t recorded (perhaps I will record something similar and post it at a later date). The following day I was interviewed on the AWS Build On Live stream by the amazing &lt;a href="https://twitter.com/lindavivah" rel="noopener noreferrer"&gt;Linda Haviv&lt;/a&gt;. This stream is still available (for now):&lt;/p&gt;

&lt;p&gt;&lt;iframe src="https://player.twitch.tv/?video=1671969054&amp;amp;parent=dev.to&amp;amp;autoplay=false" height="399" width="710"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Migrating (Back) to Medium
&lt;/h3&gt;

&lt;p&gt;I plan to use &lt;a href="https://bluevalhalla.medium.com/" rel="noopener noreferrer"&gt;Medium&lt;/a&gt; as my primary blog moving forward. I have been posting and cross-posting on Medium for years, so you can find many of my posts and follow me at &lt;a href="https://bluevalhalla.medium.com/" rel="noopener noreferrer"&gt;bluevalhalla.medium.com&lt;/a&gt;. I will cross-post on my &lt;a href="https://dev.to/bluevalhalla"&gt;dev.to blog&lt;/a&gt; when relevant.&lt;/p&gt;

&lt;h3&gt;
  
  
  Looking Ahead to 2023
&lt;/h3&gt;

&lt;p&gt;Overall, 2022 was a big year for me, but I’m already looking forward to next year. The BearID Project team will be hard at work extending our application to work with trail camera video clips (look for a new paper in the coming year). I will continue to extend the Bearcam Companion by using the 2022 data to build models to automatically identify the bears on camera in 2023 (with corrections from the bearcam community!). At Arm I will continue to focus on IoT and Machine Learning at the edge and hope to publish more content utilizing AWS services (who knows, maybe I’ll even tie it in with BearID).&lt;/p&gt;

&lt;p&gt;Wishing you all a happy and prosperous 2023!&lt;/p&gt;

&lt;p&gt;Follow me at &lt;a href="https://bluevalhalla.medium.com/" rel="noopener noreferrer"&gt;bluevalhalla.medium.com&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>arm</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Accelerate IoT Development with Arm Virtual Hardware on AWS</title>
      <dc:creator>Ed Miller</dc:creator>
      <pubDate>Thu, 13 Oct 2022 17:20:20 +0000</pubDate>
      <link>https://dev.to/aws-builders/accelerate-iot-development-with-arm-virtual-hardware-on-aws-3ndd</link>
      <guid>https://dev.to/aws-builders/accelerate-iot-development-with-arm-virtual-hardware-on-aws-3ndd</guid>
      <description>&lt;p&gt;For more than a decade, the Internet of Things, or IoT, has been growing in both magnitude and complexity. The complexity derives from the requirements of an IoT device, including connectivity, security, cloud service clients, over-the-air updates and, increasingly, machine learning. Embedded developers building IoT devices face challenges with developing and testing these applications at scale. DevOps teams, now responsible for managing these device and service integrations at scale, face challenges with incorporating IoT devices into cloud-native flows like Continuous Integration and Continuous Delivery (CI/CD) and Infrastructure as Code (IaC).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyjbkwgefmour1r15vgsr.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyjbkwgefmour1r15vgsr.jpg" alt="Board Farm" width="260" height="261"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The use of “board farms” can address some of these issues. However, they incur significant setup and maintenance costs. Testability can also be compromised due to limited access to peripherals and public cloud services. A new paradigm for IoT software development and testing is needed, and Arm Virtual Hardware is the solution.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Arm Virtual Hardware?
&lt;/h2&gt;

&lt;p&gt;Arm Virtual Hardware (AVH) is a family of functionally accurate representations of Arm-based processors, systems, and third-party hardware. AVH enables embedded and IoT developers to build and test software using modern agile software practices without the need for hardware. &lt;/p&gt;

&lt;p&gt;There are three main classes of AVH: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AVH for Cortex-M Processors&lt;/strong&gt;: Software models of individual Cortex processors provided in containers, along with relevant development tools, which run in the cloud. An Amazon Machine Image (AMI) is available on AWS Marketplace and can be run on various Elastic Compute Cloud (EC2) instances. You can find it &lt;a href="https://aws.amazon.com/marketplace/pp/prodview-urbpq7yo5va7g" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AVH for Corstone&lt;/strong&gt;: Software models of Corstone subsystems which are available in the same AMI as listed above.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AVH for 3rd Party Hardware&lt;/strong&gt;: Partial or complete reference platforms, including CPUs, sensors and connectivity modules. They are available as a Software as a Service (SaaS) solution from Arm. Under the hood, they leverage the Arm instruction set architecture of EC2 instances powered by Graviton. You can sign up for the private beta program &lt;a href="https://www.arm.com/resources/contact-us/virtual-hardware-boards" rel="noopener noreferrer"&gt;here&lt;/a&gt;. &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Use Cases
&lt;/h2&gt;

&lt;p&gt;One valuable scenario for AVH is developing software before silicon is available. For example, the &lt;a href="https://developer.arm.com/Processors/Corstone-310" rel="noopener noreferrer"&gt;Corstone-310&lt;/a&gt; combines the latest Cortex-M85 processor with the Ethos-U55 neural processing unit. You can develop and test tinyML applications with this AVH while waiting for the first silicon to land. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fblj3q41by7pn3w09332o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fblj3q41by7pn3w09332o.png" alt="Cloud-native software development" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another use case for AVH is CI/CD. A great example of this is AVH integration with GitHub Actions. You can set up your repository to build and run your test suite on AVH after every merge. Here’s a video that shows how to get started: &lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/BZRGb0GHRjg"&gt;
&lt;/iframe&gt;
 &lt;/p&gt;

&lt;p&gt;You can find the example &lt;a href="https://github.com/ARM-software/AVH-GetStarted" rel="noopener noreferrer"&gt;here&lt;/a&gt;. There are integrations available for other leading CI/CD solutions such as &lt;a href="https://github.com/james-crowley/AVH-TFLmicrospeech/tree/circleci" rel="noopener noreferrer"&gt;CircleCI&lt;/a&gt;, &lt;a href="https://bit.ly/AVH-On-GitLab" rel="noopener noreferrer"&gt;GitLab&lt;/a&gt; and &lt;a href="https://github.com/ARM-software/AVH-GetStarted" rel="noopener noreferrer"&gt;Jenkins&lt;/a&gt;. For MLOps, have a look at &lt;a href="https://docs.qeexo.com/guides/userguides/arm-virtual-hardware" rel="noopener noreferrer"&gt;this guide from Qeexo&lt;/a&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  Arm Virtual Hardware and AWS IoT
&lt;/h2&gt;

&lt;p&gt;As noted, all classes of Arm Virtual Hardware are available in the AWS cloud. However, the real value of AVH comes to light when developing with AWS software and services. &lt;/p&gt;

&lt;p&gt;For Cortex-M based systems, you can develop software leveraging FreeRTOS and AWS IoT services. The AVH Corstone-300 is qualified for AWS IoT Core and is listed in the &lt;a href="https://devices.amazonaws.com/detail/a3G8a000000U8RbEAK/Arm-Virtual-Hardware-Corstone-300" rel="noopener noreferrer"&gt;AWS Partner Device Catalog&lt;/a&gt;. For an in-depth workshop from AWS, check out &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/30043722-0362-4859-bc6f-c28836a2d7ac/en-US" rel="noopener noreferrer"&gt;Develop AWS IoT projects on Arm Virtual Hardware with FreeRTOS and CMSIS packs&lt;/a&gt;. This workshop explores more advanced AWS IoT concepts like Device Shadows and Device Jobs. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgswll89jzmhazkt9r0xy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgswll89jzmhazkt9r0xy.png" alt="AWS IoT workshop for AVH" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For Cortex-A based systems running Linux, try one of the AVH 3rd Party Hardware platforms in the private beta. You can easily get AWS IoT Greengrass running on the virtual Raspberry Pi 4. If you are working with machine learning, check out the &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/8d1c3528-8abb-4674-a2b9-d15fa593c392/en-US" rel="noopener noreferrer"&gt;Machine Learning Operations with AWS IoT Greengrass v2 and Amazon SageMaker Edge Manager&lt;/a&gt; workshop. You can run much of it on the AVH i.MX8M Arm Cortex Complex. &lt;/p&gt;

&lt;h2&gt;
  
  
  Call to Action
&lt;/h2&gt;

&lt;p&gt;Join the upcoming AVH workshops and masterclasses at &lt;a href="https://devsummit.arm.com/flow/arm/devsummit22/home/page/lp/?utm_source=dev_to_publication&amp;amp;utm_medium=referral&amp;amp;utm_campaign=2210_armdevsummit_mk17_publication_blog_na_ondemand&amp;amp;utm_content=blog" rel="noopener noreferrer"&gt;Arm DevSummit&lt;/a&gt; and ask experts your questions: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://devsummit.arm.com/flow/arm/devsummit22/sessions-catalog/page/sessions/session/16540936519630014eiA?utm_source=dev_to_publication&amp;amp;utm_medium=referral&amp;amp;utm_campaign=2210_armdevsummit_mk17_publication_blog_na_ondemand&amp;amp;utm_content=blog" rel="noopener noreferrer"&gt;Accelerated Development of Cloud Applications for AWS IoT Connected Devices&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href="https://devsummit.arm.com/flow/arm/devsummit22/sessions-catalog/page/sessions/session/1655822846744001CDON?utm_source=dev_to_publication&amp;amp;utm_medium=referral&amp;amp;utm_campaign=2210_armdevsummit_mk17_publication_blog_na_ondemand&amp;amp;utm_content=blog" rel="noopener noreferrer"&gt;Simplify embedded software development and Continuous Integration (CI) at scale&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://devsummit.arm.com/flow/arm/devsummit22/sessions-catalog/page/sessions/session/1664974508601001eqcQ?utm_source=dev_to_publication&amp;amp;utm_medium=referral&amp;amp;utm_campaign=2210_armdevsummit_mk17_publication_blog_na_ondemand&amp;amp;utm_content=blog" rel="noopener noreferrer"&gt;Build, Train, Test, and Deploy your Machine Learning Applications on Virtual Hardware&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Register for a free pass &lt;a href="https://devsummit.arm.com/flow/arm/devsummit22/home/page/lp/?utm_source=dev_to_publication&amp;amp;utm_medium=referral&amp;amp;utm_campaign=2210_armdevsummit_mk17_publication_blog_na_ondemand&amp;amp;utm_content=blog" rel="noopener noreferrer"&gt;here&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;For further information and resources on Arm Virtual Hardware, visit &lt;a href="https://www.arm.com/products/development-tools/simulation/virtual-hardware" rel="noopener noreferrer"&gt;https://www.arm.com/products/development-tools/simulation/virtual-hardware&lt;/a&gt;. &lt;/p&gt;

</description>
      <category>arm</category>
      <category>iot</category>
      <category>aws</category>
      <category>devops</category>
    </item>
    <item>
      <title>Bearcam Companion: Demo Video</title>
      <dc:creator>Ed Miller</dc:creator>
      <pubDate>Sat, 01 Oct 2022 02:58:29 +0000</pubDate>
      <link>https://dev.to/aws-builders/bearcam-companion-demo-video-2gne</link>
      <guid>https://dev.to/aws-builders/bearcam-companion-demo-video-2gne</guid>
      <description>&lt;p&gt;I have created a demo video for the Bearcam Companion web application:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/bkUBLLvfvV0"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>aws</category>
      <category>amplify</category>
      <category>tech4wildlife</category>
      <category>video</category>
    </item>
    <item>
      <title>Bearcam Companion: Hosting with Amplify and GitHub</title>
      <dc:creator>Ed Miller</dc:creator>
      <pubDate>Sun, 04 Sep 2022 06:29:10 +0000</pubDate>
      <link>https://dev.to/aws-builders/bearcam-companion-hosting-with-amplify-and-github-5bpm</link>
      <guid>https://dev.to/aws-builders/bearcam-companion-hosting-with-amplify-and-github-5bpm</guid>
      <description>&lt;p&gt;The Bearcam Companion application was pretty much ready to go after my &lt;a href="https://dev.to/aws-builders/bearcam-companion-my-first-lambda-5931"&gt;last post on Lambdas&lt;/a&gt;. The final step (at least for a minimum viable product) was to publish the site. Not surprisingly, I chose to use &lt;a href="https://aws.amazon.com/amplify/hosting/" rel="noopener noreferrer"&gt;AWS Amplify Hosting&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Amplify Hosting
&lt;/h2&gt;

&lt;p&gt;There is an overview on how to use Amplify to set up hosting for your site in the &lt;a href="https://docs.aws.amazon.com/amplify/latest/userguide/getting-started.html" rel="noopener noreferrer"&gt;AWS Amplify User Guide&lt;/a&gt;. I used the Amplify Console to get started.&lt;/p&gt;

&lt;h3&gt;
  
  
  Connect to GitHub
&lt;/h3&gt;

&lt;p&gt;Since I already have my code in a GitHub repository (as described &lt;a href="https://dev.to/aws-builders/bearcam-companion-github-user-groups-and-rekognition-3kdk"&gt;here&lt;/a&gt;), I decided to use a continuous integration flow. In the setup, I selected GitHub and connected Amplify to &lt;a href="https://github.com/hypraptive/bearcam-companion" rel="noopener noreferrer"&gt;my repository&lt;/a&gt;. I only have one branch and one back end (staging) so far, so I selected those. The build settings were automatically detected for both the front end and back end, so I confirmed them. After clicking Save and Deploy, the build process started.&lt;/p&gt;

&lt;h3&gt;
  
  
  Auto-build Process
&lt;/h3&gt;

&lt;p&gt;The build process goes through 4 steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Provision - Sets up the build environment on a default host.&lt;/li&gt;
&lt;li&gt;Build - Clones the repo, deploys the backend and builds the front end.&lt;/li&gt;
&lt;li&gt;Deploy - Deploys artifacts to a managed hosting environment.&lt;/li&gt;
&lt;li&gt;Verify - Screenshots of the application are rendered.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The first build failed because the &lt;code&gt;aws-amplify&lt;/code&gt; package wasn't a dependency in the GitHub repo. I ran&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;npm install aws-amplify
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and committed the change. Once I pushed it to GitHub, the Amplify build process started automatically.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxf2lku1974qblstz35ps.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxf2lku1974qblstz35ps.png" alt="Build Process" width="800" height="314"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This time the build was successful, and I was able to view the application at the provided URL:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://master.d7ijzylsc7qfm.amplifyapp.com/" rel="noopener noreferrer"&gt;https://master.d7ijzylsc7qfm.amplifyapp.com/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fusokbuq99squkk7azz10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fusokbuq99squkk7azz10.png" alt="Website with default URL" width="800" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The build history, and all the details for each step, are available in the console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft71a479mpcmr7el2pn4d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft71a479mpcmr7el2pn4d.png" alt="Build History" width="800" height="249"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Every time I push commits to the main repo, the above process kicks off, ending with a fresh new deployment of the site.&lt;/p&gt;

&lt;h2&gt;
  
  
  Custom URL
&lt;/h2&gt;

&lt;p&gt;The default URL is not very user friendly. I want the web application to appear as part of the BearID Project (&lt;a href="http://bearid.org/" rel="noopener noreferrer"&gt;http://bearid.org/&lt;/a&gt;). Specifically, I want to set it up at &lt;strong&gt;app.bearid.org&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;To achieve this, I followed the &lt;a href="https://docs.aws.amazon.com/amplify/latest/userguide/custom-domains.html" rel="noopener noreferrer"&gt;Set up custom domains&lt;/a&gt; section of the Amplify User Guide. Essentially, you need to add some DNS records wherever your domain is managed. In my case, I had to add 2 CNAME records. The first record maps the subdomain (app) to the default Amplify app URL above. The second record points to the AWS Certificate Manager (ACM) validation server, which enables TLS and, therefore, HTTPS. Finally, I set up a subdomain forward so that all accesses to my URL go to the Amplify application.&lt;/p&gt;

&lt;p&gt;Once I completed the setup, I had to wait a few hours for all the records to propagate and for the SSL configuration to complete:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7mua6a5qrxibaplahjfv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7mua6a5qrxibaplahjfv.png" alt="Custom Domain Setup" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the Domain Activation step above completed, I was able to view the Bearcam Companion at &lt;a href="https://app.bearid.org/" rel="noopener noreferrer"&gt;https://app.bearid.org/&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9te9ie6jxdnowdisv5cy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9te9ie6jxdnowdisv5cy.png" alt="Website with custom URL" width="800" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this post I discussed hosting my application with Amplify, connecting it to my GitHub repo for automatic deployment and setting up a custom URL. Once again, Amplify made the process extremely easy. &lt;/p&gt;

&lt;p&gt;If you find yourself watching the &lt;a href="https://explore.org/livecams/brown-bears/brown-bear-salmon-cam-brooks-falls" rel="noopener noreferrer"&gt;Brooks Falls Brown Bears cam on Explore.org&lt;/a&gt; and you are interested in learning who is who, sign up and log in to the &lt;a href="https://app.bearid.org/" rel="noopener noreferrer"&gt;Bearcam Companion&lt;/a&gt;. If you already know the bears of Brooks River, log in and help us label the images!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fijz06vqk8sq7w2gf6iow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fijz06vqk8sq7w2gf6iow.png" alt="Bearcam Companion Example" width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>amplify</category>
      <category>beginners</category>
      <category>webdev</category>
      <category>github</category>
    </item>
    <item>
      <title>Bearcam Companion: My First Lambda</title>
      <dc:creator>Ed Miller</dc:creator>
      <pubDate>Tue, 23 Aug 2022 02:34:30 +0000</pubDate>
      <link>https://dev.to/aws-builders/bearcam-companion-my-first-lambda-5931</link>
      <guid>https://dev.to/aws-builders/bearcam-companion-my-first-lambda-5931</guid>
      <description>&lt;p&gt;I have been making progress on the Bearcam Companion web application. I have implemented most of the main React frontend components with the associated Amplify backends. However, some of the functionality which I had implemented in the UI should really be automated. This calls for one of the staples of serverless, &lt;a href="https://aws.amazon.com/lambda/" rel="noopener noreferrer"&gt;AWS Lambda&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Lambda
&lt;/h2&gt;

&lt;p&gt;What is AWS Lambda? Here's what the &lt;a href="https://aws.amazon.com/lambda/" rel="noopener noreferrer"&gt;AWS Lambda page&lt;/a&gt; says:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. You can trigger Lambda from over 200 AWS services and software as a service (SaaS) applications, and only pay for what you use.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Creating a Lambda with Amplify CLI
&lt;/h3&gt;

&lt;p&gt;The first thing I wanted to automate was running the object detection machine learning models on every new image. In a &lt;a href="https://dev.to/aws-builders/bearcam-companion-github-user-groups-and-rekognition-3kdk"&gt;previous post&lt;/a&gt;, I described how I accomplished this using &lt;a href="https://aws.amazon.com/rekognition/" rel="noopener noreferrer"&gt;Amazon Rekognition&lt;/a&gt; from the UI. In my &lt;a href="https://dev.to/aws-builders/bearcam-companion-amplify-storage-cak"&gt;most recent post&lt;/a&gt;, I described how I upload images to S3 and update the &lt;strong&gt;Images&lt;/strong&gt; table. Now I want to use the &lt;strong&gt;Images&lt;/strong&gt; table update to trigger a Lambda to run Rekognition on the image and save the object detection results to the &lt;strong&gt;Objects&lt;/strong&gt; table.&lt;/p&gt;

&lt;p&gt;I created the Lambda using the &lt;a href="https://docs.amplify.aws/cli/function/" rel="noopener noreferrer"&gt;Amplify CLI to add a function&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;amplify add function
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are numerous options for setting up the Lambda, so read the documentation carefully. For my needs, here are some key settings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Function Name: bcOnImagesFindObjects&lt;/li&gt;
&lt;li&gt;Runtime: NodeJS&lt;/li&gt;
&lt;li&gt;Function template: CRUD function for Amazon DynamoDB, since I will be reading from the &lt;strong&gt;Images&lt;/strong&gt; table and saving the Rekognition results in the &lt;strong&gt;Objects&lt;/strong&gt; table&lt;/li&gt;
&lt;li&gt;Resource access: GraphQL endpoints for &lt;strong&gt;Images&lt;/strong&gt; and &lt;strong&gt;Objects&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Trigger: &lt;a href="https://docs.amplify.aws/cli/usage/lambda-triggers/#dynamodb-lambda-triggers" rel="noopener noreferrer"&gt;DynamoDB Lambda Trigger&lt;/a&gt; for &lt;strong&gt;Images&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Developing the Lambda
&lt;/h3&gt;

&lt;p&gt;After creation, the function template appears in your project under:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;amplify/backend/function/&amp;lt;function-name&amp;gt;/src/index.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The template provides a basic structure from which to build. The trigger data comes in an event stream (multiple events can be batched for efficiency). The first thing I did was to parse the event records. I only care about INSERT events. From those I pull out the S3 information for the image. Here's my &lt;code&gt;parseRecords()&lt;/code&gt; function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;parseRecords&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;records&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;inserts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;
  &lt;span class="nx"&gt;records&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;forEach&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;record&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eventName&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;INSERT&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;// get image info&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;imageS3obj&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dynamodb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NewImage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;M&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;insert&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;imageID&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dynamodb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NewImage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;S&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;Bucket&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;imageS3obj&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;S&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;Region&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;imageS3obj&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;region&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;S&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;Key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;public/&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;imageS3obj&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;S&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;inserts&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
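&lt;p&gt;For reference, here is a trimmed sketch (with placeholder values) of the DynamoDB stream record shape that &lt;code&gt;parseRecords()&lt;/code&gt; walks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Records": [
    {
      "eventName": "INSERT",
      "dynamodb": {
        "NewImage": {
          "id":   { "S": "&amp;lt;image-id&amp;gt;" },
          "file": { "M": {
            "bucket": { "S": "&amp;lt;bucket-name&amp;gt;" },
            "region": { "S": "&amp;lt;region&amp;gt;" },
            "key":    { "S": "&amp;lt;object-key&amp;gt;" }
          } }
        }
      }
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;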



&lt;p&gt;Next I loop through the new images, calling &lt;code&gt;processImage()&lt;/code&gt;, which sends each image to Rekognition for object detection via &lt;code&gt;rekognition.detectLabels&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;processImage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;imageInfo&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;params&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;Image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;S3Object&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;Bucket&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;imageInfo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Bucket&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;Name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;imageInfo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Key&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;MinConfidence&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;MinimumConfidence&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;rekognition&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;detectLabels&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;params&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;promise&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
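&lt;p&gt;The response is a list of labels, and labels for concrete objects also carry per-instance bounding boxes (coordinates are fractions of the image dimensions). Abridged, with illustrative values, it looks something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Labels": [
    {
      "Name": "Bear",
      "Confidence": 99.1,
      "Instances": [
        {
          "BoundingBox": { "Width": 0.12, "Height": 0.25, "Left": 0.40, "Top": 0.50 },
          "Confidence": 98.7
        }
      ],
      "Parents": [ { "Name": "Wildlife" }, { "Name": "Animal" } ]
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;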



&lt;p&gt;For each result, I call &lt;code&gt;parseDetections()&lt;/code&gt; to pull out the relevant bounding box information from the JSON response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;parseDetections&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;detections&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;boxes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;labels&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;detections&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Labels&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nx"&gt;labels&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;forEach&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;object&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Instances&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;forEach&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;instance&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;bb&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;instance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;BoundingBox&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;box&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;Name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;Confidence&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;instance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Confidence&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;Width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;bb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Width&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;Height&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;bb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Height&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;Left&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;bb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Left&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;Top&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;bb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Top&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="nx"&gt;boxes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;box&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;boxes&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, I save each box to the &lt;strong&gt;Objects&lt;/strong&gt; table by using &lt;code&gt;fetch()&lt;/code&gt; to POST the data to the appropriate GraphQL endpoint. The main handler looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;exports&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;callback&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="c1"&gt;// Parse DynamoDB Images Records&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;inserts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;parseRecords&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Records&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;insert&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;inserts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;// Call Rekognition on every new image&lt;/span&gt;
      &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;detections&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;processImage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;insert&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;boxes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;parseDetections&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;detections&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;box&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;boxes&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// Save each bounding box to Objects&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;options&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;getFetchOptions&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;box&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;insert&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;imageID&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;GRAPHQL_ENDPOINT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;errors&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;GraphQL error&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;GraphQL success&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;callback&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;complete&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
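&lt;p&gt;The &lt;code&gt;getFetchOptions()&lt;/code&gt; helper isn't shown above. Here is a minimal sketch of what such a helper might look like, assuming API-key authorization and the auto-generated &lt;code&gt;createObjects&lt;/code&gt; mutation; &lt;code&gt;API_KEY&lt;/code&gt; stands in for the environment variable Amplify actually generates (its real name includes the API name), and your auth mode and field names may differ:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Hypothetical sketch: build fetch() options for an AppSync mutation
const query = /* GraphQL */ `
  mutation CreateObjects($input: CreateObjectsInput!) {
    createObjects(input: $input) { id }
  }
`;

function getFetchOptions(box, imageID) {
  return {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-api-key': process.env.API_KEY  // placeholder name; assumes API-key auth
    },
    body: JSON.stringify({
      query,
      variables: { input: { ...box, imageID } }  // assumes Objects links to Images via imageID
    })
  };
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;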



&lt;p&gt;Once complete, you can deploy the Lambda with &lt;code&gt;amplify push&lt;/code&gt;. Of course it didn't work at first!&lt;/p&gt;

&lt;h3&gt;
  
  
  Testing the Lambda Locally
&lt;/h3&gt;

&lt;p&gt;There are multiple ways to debug Lambdas. You can start by testing locally with &lt;code&gt;amplify mock function&lt;/code&gt;, which runs the Lambda on your machine and feeds it event data from a JSON file. I captured a DynamoDB stream event from CloudWatch and used that as my test JSON.&lt;/p&gt;
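
&lt;p&gt;For example, assuming the captured event is saved as &lt;code&gt;src/event.json&lt;/code&gt; in the function directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;amplify mock function bcOnImagesFindObjects --event src/event.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;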

&lt;p&gt;One of my main problems, and not for the first time, had to do with asynchronous functions. I still have some trouble with awaits and promises: I am mainly using &lt;code&gt;await&lt;/code&gt; inside of &lt;code&gt;async&lt;/code&gt; functions, but sometimes I find no data is coming back because I have somehow returned from the function before the data arrived.&lt;/p&gt;
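
&lt;p&gt;A classic trap that matches this symptom: &lt;code&gt;Array.forEach()&lt;/code&gt; does not wait for &lt;code&gt;async&lt;/code&gt; callbacks, so the enclosing function can return before any of the work finishes. A &lt;code&gt;for...of&lt;/code&gt; loop awaits each step:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Bug: forEach kicks off the async callbacks and returns immediately,
// so nothing waits for the promises they create
images.forEach(async (image) =&amp;gt; {
  await processImage(image);  // this await only pauses the callback
});

// Fix: for...of awaits each iteration before moving on
for (const image of images) {
  await processImage(image);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;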

&lt;p&gt;Another problem I encountered was writing data directly to DynamoDB. This works, but it bypasses AppSync and doesn't fill in the automatic fields Amplify manages (such as &lt;code&gt;createdAt&lt;/code&gt;, &lt;code&gt;updatedAt&lt;/code&gt; and &lt;code&gt;__typename&lt;/code&gt;). Instead, write through the GraphQL endpoints so AppSync handles those fields.&lt;/p&gt;

&lt;h3&gt;
  
  
  Testing the Lambda in the Console
&lt;/h3&gt;

&lt;p&gt;One of the first problems I ran into when I did an &lt;code&gt;amplify push&lt;/code&gt; to deploy the Lambda was a missing module. The following line was failing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;fetch&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;node-fetch&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Not surprisingly, node-fetch is not part of the standard NodeJS runtime, so I needed to bundle the package myself. I could either install it in the Lambda function's src directory or use a &lt;a href="https://docs.amplify.aws/cli/function/layers/" rel="noopener noreferrer"&gt;Lambda Layer&lt;/a&gt;. I chose the latter; more on that in a bit.&lt;/p&gt;

&lt;p&gt;Once the Lambda is loading properly, you can test and modify code in the Lambda console:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz39oept3t2gt8nd0hgr0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz39oept3t2gt8nd0hgr0.png" alt="Lambda Code Panel" width="800" height="268"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can test with pre-defined event JSON files, much as you can with &lt;code&gt;amplify mock&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69v4bdkfysp8hku3hkc2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69v4bdkfysp8hku3hkc2.png" alt="Lambda Test Panel" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From this console, you can also access monitoring metrics and logs:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm45muz6qb403awcibp34.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm45muz6qb403awcibp34.png" alt="Lambda Monitor Panel" width="800" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the monitoring logs you can jump to the CloudWatch log stream:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4x39uh37gndnxaffrdba.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4x39uh37gndnxaffrdba.png" alt="CoudWatch Log" width="800" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Lambda Layers
&lt;/h2&gt;

&lt;p&gt;Lambda Layers provide a means to share common libraries across multiple Lambdas. Here's a diagram from the &lt;a href="https://docs.amplify.aws/cli/function/layers/" rel="noopener noreferrer"&gt;Amplify docs on layers&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxa439013magqjiv8vez9.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxa439013magqjiv8vez9.gif" alt="CloudWatch Log" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With Amplify, you add a Lambda Layer much like you add a Lambda function (the CLI asks whether you want a function or a layer):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;amplify add function
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once I have the layer, I can add packages with the appropriate package manager from the layer's &lt;code&gt;lib/nodejs&lt;/code&gt; folder; in my case that's npm, since the layer targets NodeJS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm i node-fetch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When I'm done setting up the Lambda Layer, I need to update the Lambda function to have it use the layer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;amplify update function
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When everything is ready, I can deploy the updated function and the new layer with &lt;code&gt;amplify push&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;I still had an error related to JavaScript module formats: node-fetch 3.x is ESM-only, so it can't be loaded with &lt;code&gt;require()&lt;/code&gt;, and I had to downgrade to the 2.x line.&lt;/p&gt;
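
&lt;p&gt;Pinning the 2.x line from the layer's package directory fixed the &lt;code&gt;require()&lt;/code&gt; error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm i node-fetch@2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once I did, I redeployed the Lambda Layer and updated the Lambda function to use the new layer version. I can see the trigger and layer information in the Lambda function overview:&lt;/p&gt;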

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk6ycjj4w4d125txpnyal.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk6ycjj4w4d125txpnyal.png" alt="Image description" width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this post I described:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creating a Lambda function triggered by a change in a DynamoDB table&lt;/li&gt;
&lt;li&gt;Testing the Lambda function locally and in the console&lt;/li&gt;
&lt;li&gt;Implementing a Lambda Layer for common libraries&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Overall, Amplify continues to impress by making it easy to deploy backend functionality. I was able to deploy a serverless function written in the same language as my frontend code. I still have some challenges with asynchronous functions, but that's more to do with my own inexperience with NodeJS/JavaScript.&lt;/p&gt;

&lt;p&gt;Next time I will write about publishing my shiny new website. Follow along here and on Twitter (&lt;a href="https://twitter.com/bluevalhalla" rel="noopener noreferrer"&gt;bluevalhalla&lt;/a&gt;).&lt;/p&gt;

</description>
      <category>amplify</category>
      <category>serverless</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
