<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Artur Schneider</title>
    <description>The latest articles on DEV Community by Artur Schneider (@arturschneider).</description>
    <link>https://dev.to/arturschneider</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1057877%2Ff1e20777-bedf-400c-a41d-8150ed47ef01.jpg</url>
      <title>DEV Community: Artur Schneider</title>
      <link>https://dev.to/arturschneider</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/arturschneider"/>
    <language>en</language>
    <item>
      <title>Why sovereignty fails when it isn’t measurable and what AWS tries to fix with ESC-SRF</title>
      <dc:creator>Artur Schneider</dc:creator>
      <pubDate>Fri, 16 Jan 2026 17:44:54 +0000</pubDate>
      <link>https://dev.to/aws-builders/why-sovereignty-fails-when-it-isnt-measurable-and-what-aws-tries-to-fix-with-esc-srf-20km</link>
      <guid>https://dev.to/aws-builders/why-sovereignty-fails-when-it-isnt-measurable-and-what-aws-tries-to-fix-with-esc-srf-20km</guid>
      <description>&lt;h2&gt;
  
  
  The underestimated innovation of the AWS European Sovereign Cloud: Sovereignty as evidence (not slides)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;As of 15 January 2026, the AWS European Sovereign Cloud (ESC) is officially live.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In Europe, “digital sovereignty” shows up everywhere: procurement requirements, regulatory conversations, risk committees, architecture boards. And yet the debate still often stalls at vague statements like &lt;strong&gt;“EU-only”&lt;/strong&gt; or &lt;strong&gt;“fully sovereign,”&lt;/strong&gt; without a consistent way to prove what those claims mean in practice.&lt;/p&gt;

&lt;p&gt;That’s why I think the most interesting part of the &lt;strong&gt;AWS European Sovereign Cloud (ESC)&lt;/strong&gt; isn’t the headline of “a new EU-based cloud,” but something much more operational:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;AWS is trying to turn sovereignty into an auditable control model and not a marketing label.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;AWS calls this approach the &lt;strong&gt;European Sovereign Cloud – Sovereign Reference Framework (ESC-SRF)&lt;/strong&gt;, published via &lt;strong&gt;AWS Artifact&lt;/strong&gt;, and positioned as the basis for a dedicated &lt;strong&gt;SOC 2 attestation&lt;/strong&gt; for the European Sovereign Cloud. [S1]&lt;/p&gt;

&lt;p&gt;This post goes deeper than the usual headlines and focuses on the part that’s still not widely discussed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Sovereignty as a control model (ESC-SRF), not a label&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Customer-created metadata residency (a surprisingly big deal)&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Operational autonomy: EU-only operations boundary + dedicated EU SOC&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;How to turn all of this into your own “Sovereignty Control Map” and evidence pipeline&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  1) First, define the problem correctly: sovereignty ≠ residency
&lt;/h2&gt;

&lt;p&gt;In practice, teams mix three different concepts:&lt;/p&gt;

&lt;h3&gt;
  
  
  a) Data residency
&lt;/h3&gt;

&lt;p&gt;Where your &lt;strong&gt;customer content&lt;/strong&gt; is stored and processed.&lt;/p&gt;

&lt;h3&gt;
  
  
  b) Operational control
&lt;/h3&gt;

&lt;p&gt;Who can &lt;strong&gt;operate&lt;/strong&gt;, &lt;strong&gt;support&lt;/strong&gt;, and &lt;strong&gt;access&lt;/strong&gt; systems (especially during incidents).&lt;/p&gt;

&lt;h3&gt;
  
  
  c) Governance + jurisdiction exposure
&lt;/h3&gt;

&lt;p&gt;How the provider is structured, and what legal and organizational boundaries exist.&lt;/p&gt;

&lt;p&gt;ESC tries to address all three explicitly through technical measures, operational constraints, and governance structure, &lt;em&gt;and then ties them together with an auditable reference framework (ESC-SRF)&lt;/em&gt;. [S1][S2]&lt;/p&gt;

&lt;p&gt;That last part is the underappreciated innovation: a provider saying, “Here are the sovereignty criteria, here’s the control mapping, and here’s how it will be validated.”&lt;/p&gt;




&lt;h2&gt;
  
  
  2) What AWS publicly says is different about ESC (the “what”)
&lt;/h2&gt;

&lt;p&gt;AWS describes ESC as an independent cloud for Europe that is &lt;strong&gt;separate and independent&lt;/strong&gt; from existing Regions, with infrastructure located wholly within the EU. [S3]  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyllq07f2d3pwex5oo26l.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyllq07f2d3pwex5oo26l.JPG" alt="Architecture_1" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The main differentiators include:&lt;/p&gt;

&lt;h2&gt;
  
  
  2.1 EU-restricted operations (“Qualified Staff” boundary)
&lt;/h2&gt;

&lt;p&gt;The ESC whitepaper introduces &lt;strong&gt;Qualified AWS European Sovereign Cloud Staff&lt;/strong&gt;, located within the EU, controlling day-to-day operations, including support and customer service. [S2][S3]&lt;/p&gt;

&lt;p&gt;It also describes controlled consultation mechanisms with global specialists, &lt;em&gt;but with policies and monitoring explicitly designed to protect the boundary&lt;/em&gt;. [S2]&lt;/p&gt;

&lt;h2&gt;
  
  
  2.2 Dedicated European Security Operations Center (SOC)
&lt;/h2&gt;

&lt;p&gt;AWS describes a &lt;strong&gt;dedicated European SOC&lt;/strong&gt; for the ESC, supported by a dedicated security leader who is an EU citizen residing in the EU. [S2]&lt;/p&gt;

&lt;h2&gt;
  
  
  2.3 Isolation beyond “normal” Regions
&lt;/h2&gt;

&lt;p&gt;AWS Regions are designed to be isolated, but the ESC whitepaper explicitly positions ESC as a distinct cloud partition and calls out that it is designed to go further (e.g., independent core systems such as billing/account/identity). [S2]&lt;/p&gt;

&lt;h2&gt;
  
  
  2.4 “Customer-created metadata” residency (this is the sleeper topic)
&lt;/h2&gt;

&lt;p&gt;AWS explicitly includes &lt;strong&gt;customer-created metadata&lt;/strong&gt; in its sovereignty scope (e.g., roles, permissions, tags/labels, configurations) and states it will remain in the EU. [S2][S6]&lt;/p&gt;

&lt;h2&gt;
  
  
  2.5 Dedicated European trust &amp;amp; certificate services
&lt;/h2&gt;

&lt;p&gt;AWS describes establishing a dedicated &lt;strong&gt;European trust service provider&lt;/strong&gt; to autonomously operate CA key material and perform certificate issuance functions within ESC. [S4]&lt;br&gt;&lt;br&gt;
The ESC whitepaper also mentions a dedicated root European Certificate Authority. [S2]&lt;/p&gt;

&lt;h2&gt;
  
  
  2.6 Dedicated Route 53 routing and autonomous Direct Connect
&lt;/h2&gt;

&lt;p&gt;AWS describes sovereign DNS/routing properties (dedicated Route 53 routing, European TLD-based nameserver naming) and “sovereign” Direct Connect PoPs. [S2]&lt;/p&gt;




&lt;h2&gt;
  
  
  3) The 4 sovereignty pillars AWS is emphasizing — and what they &lt;em&gt;actually&lt;/em&gt; mean in practice
&lt;/h2&gt;

&lt;p&gt;Most “sovereign cloud” articles stop at geography: &lt;em&gt;“data stays in the EU.”&lt;/em&gt;&lt;br&gt;&lt;br&gt;
AWS’ European Sovereign Cloud (ESC) narrative goes further and breaks sovereignty into &lt;strong&gt;four pillars&lt;/strong&gt; that are much closer to an &lt;strong&gt;assurance model&lt;/strong&gt; than a marketing statement:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data sovereignty (including &lt;strong&gt;customer content&lt;/strong&gt; &lt;em&gt;and&lt;/em&gt; &lt;strong&gt;customer-created metadata&lt;/strong&gt;)&lt;/li&gt;
&lt;li&gt;Corporate governance under EU law&lt;/li&gt;
&lt;li&gt;Operational autonomy&lt;/li&gt;
&lt;li&gt;A defined approach to law enforcement requests&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Below is the detailed context behind each pillar — plus the engineering implications and the “questions auditors will ask” version.&lt;/p&gt;




&lt;h2&gt;
  
  
  3.1 Data sovereignty: it’s not just &lt;em&gt;where your data sits&lt;/em&gt; — it’s also your &lt;strong&gt;control-plane&lt;/strong&gt; artifacts
&lt;/h2&gt;

&lt;p&gt;AWS draws a clear line between different data categories:&lt;/p&gt;

&lt;h3&gt;
  
  
  A) Customer content
&lt;/h3&gt;

&lt;p&gt;This is the “obvious” part: objects, database rows, files, messages—your business data.&lt;br&gt;&lt;br&gt;
AWS states ESC is designed so &lt;strong&gt;customer content is stored and processed within the ESC boundary&lt;/strong&gt;, unless you explicitly choose otherwise (for example, by configuring cross-boundary access or integrations). [S2]&lt;/p&gt;

&lt;h3&gt;
  
  
  B) Customer-created metadata (the sleeper topic)
&lt;/h3&gt;

&lt;p&gt;This is the part many posts ignore. AWS explicitly says ESC also keeps &lt;strong&gt;customer-created metadata&lt;/strong&gt; in the EU and defines it as metadata customers create to manage/configure resources—examples include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;IAM &lt;strong&gt;roles&lt;/strong&gt; and &lt;strong&gt;permissions&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Resource labels/tags&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Resource &lt;strong&gt;configurations&lt;/strong&gt; [S2][S6]&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AWS even provides a concrete mental model (e.g., for S3):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Images stored in S3 → &lt;strong&gt;customer content&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Bucket name, object names, permissions, tags → &lt;strong&gt;customer-created metadata&lt;/strong&gt; [S2]&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why metadata residency matters more than people think:&lt;/strong&gt;&lt;br&gt;
In regulated environments, metadata often reveals &lt;strong&gt;as much&lt;/strong&gt; as the data itself:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;IAM roles can expose privileged access patterns&lt;/li&gt;
&lt;li&gt;Tags may unintentionally include sensitive business context (and sometimes PII)&lt;/li&gt;
&lt;li&gt;Resource names can leak internal project names, customer names, locations&lt;/li&gt;
&lt;li&gt;Network configuration metadata can reveal segmentation and trust boundaries&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, by explicitly treating &lt;strong&gt;customer-created metadata&lt;/strong&gt; as part of sovereignty scope, AWS is addressing a class of audit and threat-model concerns that typically show up &lt;em&gt;late&lt;/em&gt; (right before go-live or during the first big audit).&lt;/p&gt;

&lt;h3&gt;
  
  
  C) The nuance you must not hide: AWS operational data
&lt;/h3&gt;

&lt;p&gt;AWS also states that some information that is &lt;strong&gt;neither&lt;/strong&gt; customer content &lt;strong&gt;nor&lt;/strong&gt; customer-created metadata may leave the EU (examples include &lt;strong&gt;internal service metrics&lt;/strong&gt; used for capacity management, performance monitoring, and certain security support functions). [S2]&lt;/p&gt;

&lt;p&gt;This is important because it forces a grown-up sovereignty discussion:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Are those operational signals acceptable under your regulator or customer contracts?&lt;/li&gt;
&lt;li&gt;If yes: under which protections and transparency expectations?&lt;/li&gt;
&lt;li&gt;If no: what compensating controls are required?&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;The core idea:&lt;/strong&gt; sovereignty isn’t “nothing ever leaves the EU,” it’s “define scope precisely, control it, and prove it.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Practical engineering takeaways (customer-side controls):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Define a hard policy: &lt;strong&gt;no sensitive data in tags/names&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Classify “customer-created metadata” explicitly in your data classification scheme&lt;/li&gt;
&lt;li&gt;Add automated checks (policy-as-code) to block risky metadata patterns, e.g., tag value regex rules; a sketch follows this list&lt;/li&gt;
&lt;li&gt;Treat observability as a data surface: avoid logging payloads that create accidental “content in logs”&lt;/li&gt;
&lt;/ul&gt;
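&lt;p&gt;To make the last item concrete, here is a minimal policy-as-code sketch in Python. It assumes boto3 and the Resource Groups Tagging API; the deny patterns are illustrative placeholders you would replace with your own classification rules:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import re

import boto3

# Illustrative deny-patterns for tag values: emails, IBAN-like strings,
# obvious credential hints. Adapt to your own data classification rules.
DENY_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),           # email addresses
    re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),  # IBAN-like strings
    re.compile(r"(?i)(password|secret|token)"),       # credential hints
]

def risky_tags(resource_arn, tags):
    """Return the tags on a resource whose values match a deny pattern."""
    findings = []
    for tag in tags:
        value = tag.get("Value", "")
        if any(p.search(value) for p in DENY_PATTERNS):
            findings.append((resource_arn, tag["Key"], value))
    return findings

# Scan every tagged resource in the account via the Tagging API.
tagging = boto3.client("resourcegroupstaggingapi")
for page in tagging.get_paginator("get_resources").paginate():
    for res in page["ResourceTagMappingList"]:
        for finding in risky_tags(res["ResourceARN"], res.get("Tags", [])):
            print("Risky tag:", finding)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Run on a schedule, a check like this turns “no sensitive data in tags” from a policy statement into recurring evidence.&lt;/p&gt;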




&lt;h2&gt;
  
  
  3.2 Corporate governance under EU law: sovereignty includes &lt;em&gt;who is in charge&lt;/em&gt; and &lt;em&gt;who can authorize exceptions&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;ESC is not only described as infrastructure in the EU; AWS describes a distinct &lt;strong&gt;governance model&lt;/strong&gt; under EU/German corporate law for the first region. [S2][S3]&lt;/p&gt;

&lt;p&gt;From the published structure, AWS outlines multiple entities (e.g., staffing, infrastructure, trust certificates, holding structure) and describes leadership and oversight constraints. [S2]&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What this means in real life (not just legal diagrams):&lt;/strong&gt;&lt;br&gt;
In sovereignty programs, governance matters because it determines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Who can approve&lt;/strong&gt; operational exceptions (e.g., emergency access paths)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Who owns&lt;/strong&gt; sovereignty-related policy decisions (and can be held accountable)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;How oversight works&lt;/strong&gt; (advisory/board mechanisms, escalation paths)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Whether sovereignty commitments&lt;/strong&gt; have institutional guardrails beyond technical controls&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AWS describes an advisory board mechanism with sovereignty-related oversight responsibilities and EU-resident constraints. [S2]&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Audit version of this topic:&lt;/strong&gt;&lt;br&gt;
Auditors don’t only ask “Where is the data?”&lt;br&gt;&lt;br&gt;
They ask “Who can make an exception—and under what authority?”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Practical takeaways for your architecture + governance docs:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Document your “exception path” (break-glass, escalations, emergency changes)&lt;/li&gt;
&lt;li&gt;Map who approves what (RACI) and align it with your sovereignty narrative&lt;/li&gt;
&lt;li&gt;Ensure your contracting and support model is consistent with your sovereignty scope [S2]&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  3.3 Operational autonomy: sovereignty is tested during incidents, not in PowerPoint
&lt;/h2&gt;

&lt;p&gt;AWS positions operational autonomy as a combination of:&lt;/p&gt;

&lt;h3&gt;
  
  
  A) EU-restricted operations (“Qualified Staff” boundary)
&lt;/h3&gt;

&lt;p&gt;AWS describes restricting ESC operations (including support and data center operations) to &lt;strong&gt;Qualified AWS European Sovereign Cloud Staff&lt;/strong&gt; located in the EU, with controlled consultation mechanisms to access global expertise—but under policies designed to preserve the boundary. [S2][S3]&lt;/p&gt;

&lt;p&gt;Why this matters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your highest-risk sovereignty moment is &lt;strong&gt;incident response&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Next is &lt;strong&gt;support escalation&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Then &lt;strong&gt;forensics&lt;/strong&gt; and post-incident evidence handling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If operational autonomy isn’t designed for these moments, it’s not autonomy.&lt;/p&gt;

&lt;h3&gt;
  
  
  B) Dedicated European Security Operations Center (SOC)
&lt;/h3&gt;

&lt;p&gt;AWS also describes a &lt;strong&gt;dedicated European SOC&lt;/strong&gt; and EU-resident security leadership for ESC. [S2]&lt;/p&gt;

&lt;p&gt;Why this matters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Incident command structure is part of sovereignty&lt;/li&gt;
&lt;li&gt;Who can access telemetry and make containment decisions is part of sovereignty&lt;/li&gt;
&lt;li&gt;Who interacts with external parties (including authorities) is part of sovereignty&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  C) Autonomy at the platform layer (partition-level separation)
&lt;/h3&gt;

&lt;p&gt;AWS describes ESC as “more than another Region”—it’s positioned as a distinct cloud partition with &lt;strong&gt;independent billing, account, and identity systems&lt;/strong&gt;, and “no critical dependencies” on non-EU infrastructure. [S2][S3]&lt;/p&gt;

&lt;p&gt;This implies: it’s not only about “EU workloads” but also about &lt;strong&gt;reducing non-EU dependencies for core control functions&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Practical engineering takeaways:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Write a sovereignty-focused incident runbook:

&lt;ul&gt;
&lt;li&gt;Who can execute break-glass actions?&lt;/li&gt;
&lt;li&gt;Where are approvals logged?&lt;/li&gt;
&lt;li&gt;Who can access logs and snapshots?&lt;/li&gt;
&lt;li&gt;How is evidence preserved?&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Run “sovereignty game days”:

&lt;ul&gt;
&lt;li&gt;Simulate an incident escalation&lt;/li&gt;
&lt;li&gt;Validate the boundary: who can access what, from where, with what approvals&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;




&lt;h2&gt;
  
  
  3.4 Approach to law enforcement requests: the real sovereignty question is &lt;em&gt;what happens when pressure arrives&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;AWS explicitly frames ESC’s approach to law enforcement requests as a combination of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;technical measures&lt;/li&gt;
&lt;li&gt;operational measures&lt;/li&gt;
&lt;li&gt;legal and contractual measures [S2]&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This topic is often the most politically charged—so it’s worth being very precise.&lt;/p&gt;

&lt;h3&gt;
  
  
  A) Technical measures (constrain what’s possible)
&lt;/h3&gt;

&lt;p&gt;AWS describes design elements intended to limit access to ESC-restricted data from outside the EU and emphasizes control mechanisms plus logging/visibility. [S2]&lt;/p&gt;

&lt;p&gt;Your takeaway as a customer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Strong encryption + key management strategy matters more than ever&lt;/li&gt;
&lt;li&gt;Evidence (logs, access records) is not optional—it’s your safety net&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  B) Operational measures (define the decision chain)
&lt;/h3&gt;

&lt;p&gt;AWS describes operational handling that involves the appropriate EU-qualified staff, training, and locally aligned processes for request handling. [S2]&lt;/p&gt;

&lt;p&gt;Your takeaway:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If your own IR process doesn’t define “who is allowed to respond to external requests,” you’ll improvise under stress—which is exactly what auditors don’t want.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  C) Legal + contractual measures (define the posture)
&lt;/h3&gt;

&lt;p&gt;AWS states it reviews requests individually, aims to redirect requests to customers where possible, notifies customers when allowed, and challenges requests that conflict with applicable law, plus points to transparency reporting. [S2]&lt;/p&gt;

&lt;p&gt;Your takeaway:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your sovereignty narrative should include:

&lt;ul&gt;
&lt;li&gt;notification expectations&lt;/li&gt;
&lt;li&gt;customer ownership of disclosures&lt;/li&gt;
&lt;li&gt;how requests are routed and handled&lt;/li&gt;
&lt;li&gt;what evidence you keep&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Practical takeaways for a “sovereignty-by-design” program:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Decide what your stance is on third-party requests (and document it)&lt;/li&gt;
&lt;li&gt;Ensure your encryption key strategy supports your sovereignty posture&lt;/li&gt;
&lt;li&gt;Make transparency and auditability part of your operating model (not “as needed”)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  3.5 Turning the four pillars into an assurance model: Requirements → Controls → Evidence
&lt;/h2&gt;

&lt;p&gt;If you want sovereignty to survive procurement, audits, and real incidents, treat it exactly like security compliance:&lt;/p&gt;

&lt;h3&gt;
  
  
  1) Requirements
&lt;/h3&gt;

&lt;p&gt;Example: “Customer-created metadata must remain within the EU boundary.” [S2]&lt;/p&gt;

&lt;h3&gt;
  
  
  2) Controls
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Policy-as-code guardrails (tag conventions, naming constraints)&lt;/li&gt;
&lt;li&gt;Access controls (least privilege, break-glass approvals)&lt;/li&gt;
&lt;li&gt;Data egress controls (explicit allowlists for cross-boundary integration; a region-allowlist sketch follows this list)&lt;/li&gt;
&lt;li&gt;Operational runbooks (incident response, escalation)&lt;/li&gt;
&lt;/ul&gt;
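&lt;p&gt;As one hedged example of an egress guardrail, here is a region allowlist expressed as a Service Control Policy and created via boto3. The regions and policy name are assumptions for illustration; a production SCP usually needs exemptions for global services:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import json

import boto3

# Sketch of a region allowlist as an SCP (names and regions illustrative).
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideAllowedRegions",
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {
                "aws:RequestedRegion": ["eu-central-1", "eu-west-1"]
            }
        },
    }],
}

orgs = boto3.client("organizations")
orgs.create_policy(
    Name="eu-region-allowlist",
    Description="Deny API calls outside approved EU regions",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
&lt;/code&gt;&lt;/pre&gt;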

&lt;h3&gt;
  
  
  3) Evidence
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;AWS Config / CloudTrail exports and conformance snapshots (an evidence-pull sketch follows this list)&lt;/li&gt;
&lt;li&gt;Break-glass approval logs + incident reports&lt;/li&gt;
&lt;li&gt;Architecture Decision Records (ADRs) documenting boundary choices&lt;/li&gt;
&lt;li&gt;Provider artifacts (ESC-SRF via AWS Artifact; assurance reports when available) [S1]&lt;/li&gt;
&lt;/ul&gt;
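&lt;p&gt;Evidence collection can be scripted too. The boto3 sketch below (the break-glass role name is a hypothetical placeholder) pulls a Config compliance snapshot and recent CloudTrail role-assumption events in one pass:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import datetime

import boto3

config = boto3.client("config")
trail = boto3.client("cloudtrail")

# 1) Compliance snapshot across all Config rules.
for page in config.get_paginator("describe_compliance_by_config_rule").paginate():
    for rule in page["ComplianceByConfigRules"]:
        print(rule["ConfigRuleName"], rule["Compliance"]["ComplianceType"])

# 2) Who assumed a role in the last 30 days? Filter for the (hypothetical)
#    break-glass role and keep the output as audit evidence.
start = datetime.datetime.utcnow() - datetime.timedelta(days=30)
events = trail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "AssumeRole"}],
    StartTime=start,
)
for e in events["Events"]:
    if "break-glass" in e.get("CloudTrailEvent", ""):
        print(e["EventTime"], e.get("Username", "?"), e["EventName"])
&lt;/code&gt;&lt;/pre&gt;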

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;This is why ESC-SRF is interesting:&lt;/strong&gt; it pushes the conversation toward a reusable mapping between sovereignty criteria and verifiable controls. [S1]&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  3.6 Questions that force real answers
&lt;/h2&gt;

&lt;p&gt;If you want to pressure-test your sovereignty program, end with questions people can’t dodge:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What do we classify as &lt;strong&gt;customer-created metadata&lt;/strong&gt; in our organization—and how do we prevent sensitive leakage into tags, names, or logs?&lt;/li&gt;
&lt;li&gt;What data do we accept as “AWS operational data,” and how do we document that risk decision? [S2]&lt;/li&gt;
&lt;li&gt;Who can authorize sovereignty exceptions—and how is that approval audited?&lt;/li&gt;
&lt;li&gt;In an incident, &lt;strong&gt;who&lt;/strong&gt; can access &lt;strong&gt;what&lt;/strong&gt;, &lt;strong&gt;from where&lt;/strong&gt;, and &lt;strong&gt;how is it evidenced&lt;/strong&gt;?&lt;/li&gt;
&lt;li&gt;What is our documented approach to third-party requests, and how does encryption/key custody support it?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These questions transform “sovereignty” from a slogan into a system.&lt;/p&gt;




&lt;h2&gt;
  
  
  4) The under-discussed part: ESC-SRF turns sovereignty into a control model (the “how”)
&lt;/h2&gt;

&lt;p&gt;Here’s the key: AWS doesn’t just list features. It positions ESC sovereignty as criteria aligned across domains like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;governance independence&lt;/li&gt;
&lt;li&gt;operational control&lt;/li&gt;
&lt;li&gt;data residency&lt;/li&gt;
&lt;li&gt;technical isolation [S1]&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then it publishes the reference framework in &lt;strong&gt;AWS Artifact&lt;/strong&gt; and states it forms the basis for a dedicated ESC &lt;strong&gt;SOC 2 attestation&lt;/strong&gt;. [S1]&lt;/p&gt;

&lt;h3&gt;
  
  
  What’s new about that?
&lt;/h3&gt;

&lt;p&gt;In most sovereignty programs, teams build a messy “evidence story” from scratch:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;vendor PDFs + scattered attestations&lt;/li&gt;
&lt;li&gt;internal architecture decisions&lt;/li&gt;
&lt;li&gt;custom risk narratives&lt;/li&gt;
&lt;li&gt;painful audit cycles&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;ESC-SRF is trying to standardize the provider side into an auditable mapping you can reuse.&lt;/p&gt;

&lt;p&gt;AWS also states ESC controls are undergoing independent third-party audit, and that ESC-SRF can be used as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;an &lt;strong&gt;assurance model&lt;/strong&gt; (end-to-end traceability from criteria to implementation)&lt;/li&gt;
&lt;li&gt;a &lt;strong&gt;design reference framework&lt;/strong&gt; (how customers build sovereignty controls on top) [S1]&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  5) Deep dive: the most important ESC design nuances (and what they imply for architects)
&lt;/h2&gt;

&lt;h2&gt;
  
  
  5.1 Understand the boundary: content vs metadata vs “AWS operational data”
&lt;/h2&gt;

&lt;p&gt;AWS draws a line between:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;customer content&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;customer-created metadata&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;some AWS operational data&lt;/strong&gt; (neither content nor customer-created metadata) that may leave the EU (e.g., internal metrics) [S2]&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What this means for your risk model&lt;/strong&gt;&lt;br&gt;
If your sovereignty program requires strict “nothing leaves the EU,” you must explicitly address:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;what counts as operational data&lt;/li&gt;
&lt;li&gt;how it’s protected&lt;/li&gt;
&lt;li&gt;whether it’s acceptable under your regulator / customer contracts&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  5.2 Governance isn’t optional
&lt;/h2&gt;

&lt;p&gt;The ESC whitepaper describes EU-law governance constraints for the first region, including oversight mechanisms like an advisory board. [S2]&lt;br&gt;&lt;br&gt;
Even if you’re “just an engineer,” your architecture can fail the sovereignty conversation if governance and exception authority aren’t explainable.&lt;/p&gt;

&lt;h2&gt;
  
  
  5.3 Operational autonomy: plan your incident mechanics
&lt;/h2&gt;

&lt;p&gt;If you’re adopting ESC (or designing with it in mind), define:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Break-glass policy&lt;/li&gt;
&lt;li&gt;Support engagement model&lt;/li&gt;
&lt;li&gt;Forensics flow&lt;/li&gt;
&lt;li&gt;Evidence preservation workflow&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;ESC describes EU-only operational control and a dedicated EU SOC; your job is mapping that into your org’s runbooks. [S2][S3]&lt;/p&gt;




&lt;h2&gt;
  
  
  6) Service reality: “sovereign” doesn’t mean “all services on day one”
&lt;/h2&gt;

&lt;p&gt;AWS states ESC will launch with a set of core services across categories (compute, storage, database, networking, security, and also AI/ML) and then expand based on demand. [S3][S7]&lt;/p&gt;

&lt;p&gt;AWS also published roadmap updates (example items include IAM Identity Center expected in Q1 2026; CloudFront expected by end of 2026). [S7]&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Practical implication:&lt;/strong&gt; plan for service gaps and design substitution patterns (auth, DNS, observability, delivery edge).&lt;/p&gt;




&lt;h2&gt;
  
  
  7) Make sovereignty measurable: your “Sovereignty Control Map” template
&lt;/h2&gt;

&lt;p&gt;Copy/paste and adapt this table. This is where your sovereignty program becomes operational.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sovereignty Control Map (SCM) — starter template
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Domain&lt;/th&gt;
&lt;th&gt;Requirement (your wording)&lt;/th&gt;
&lt;th&gt;ESC capability (provider)&lt;/th&gt;
&lt;th&gt;Your control (customer)&lt;/th&gt;
&lt;th&gt;Evidence you produce&lt;/th&gt;
&lt;th&gt;Cadence / Owner&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Data residency&lt;/td&gt;
&lt;td&gt;Customer content stays in EU&lt;/td&gt;
&lt;td&gt;Customer content stored/processed within boundary unless customer chooses otherwise [S2]&lt;/td&gt;
&lt;td&gt;Data classification + deny policies for replication/export&lt;/td&gt;
&lt;td&gt;Config + CloudTrail + ADRs&lt;/td&gt;
&lt;td&gt;Monthly / Platform Sec&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Metadata residency&lt;/td&gt;
&lt;td&gt;Customer-created metadata stays in EU&lt;/td&gt;
&lt;td&gt;Explicitly kept in EU [S2][S6]&lt;/td&gt;
&lt;td&gt;Tagging/IAM naming rules + policy-as-code&lt;/td&gt;
&lt;td&gt;Policy repo, Config rules, Access Analyzer reports&lt;/td&gt;
&lt;td&gt;Monthly / IAM Owner&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Operational control&lt;/td&gt;
&lt;td&gt;Only EU-resident staff operate day-to-day&lt;/td&gt;
&lt;td&gt;EU “Qualified Staff” model [S2][S3]&lt;/td&gt;
&lt;td&gt;Break-glass workflow + approvals + ticket SOP&lt;/td&gt;
&lt;td&gt;Ticket evidence + access logs + PIRs&lt;/td&gt;
&lt;td&gt;Quarterly / SOC Lead&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Incident response&lt;/td&gt;
&lt;td&gt;EU-operated SOC &amp;amp; escalation&lt;/td&gt;
&lt;td&gt;Dedicated EU SOC [S2]&lt;/td&gt;
&lt;td&gt;IR runbooks + tabletop exercises&lt;/td&gt;
&lt;td&gt;IR exercise reports&lt;/td&gt;
&lt;td&gt;Quarterly / Security&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Trust anchors&lt;/td&gt;
&lt;td&gt;CA operations controlled in EU&lt;/td&gt;
&lt;td&gt;EU trust service provider + EU root CA [S2][S4]&lt;/td&gt;
&lt;td&gt;Certificate lifecycle policy + key custody policy&lt;/td&gt;
&lt;td&gt;CA policy docs + issuance logs&lt;/td&gt;
&lt;td&gt;Quarterly / PKI Owner&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Isolation&lt;/td&gt;
&lt;td&gt;Strong separation from other Regions&lt;/td&gt;
&lt;td&gt;Separate/independent partition; independent core systems [S2][S3]&lt;/td&gt;
&lt;td&gt;Network/org boundary + egress controls&lt;/td&gt;
&lt;td&gt;Network configs + firewall policies + egress logs&lt;/td&gt;
&lt;td&gt;Monthly / Network Sec&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Assurance&lt;/td&gt;
&lt;td&gt;Sovereignty controls auditable&lt;/td&gt;
&lt;td&gt;ESC-SRF via Artifact; SOC2 basis [S1]&lt;/td&gt;
&lt;td&gt;Integrate provider evidence into your GRC&lt;/td&gt;
&lt;td&gt;Artifact exports + mapping doc&lt;/td&gt;
&lt;td&gt;Quarterly / Compliance&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  8) Common misconceptions (and better questions to ask)
&lt;/h2&gt;

&lt;h2&gt;
  
  
  “We’re already in an EU Region, so we’re sovereign.”
&lt;/h2&gt;

&lt;p&gt;EU Regions help with residency, but sovereignty often requires deeper answers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Who can operate/support?&lt;/li&gt;
&lt;li&gt;What about metadata?&lt;/li&gt;
&lt;li&gt;What evidence exists?&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  “Sovereignty means nothing ever leaves the EU.”
&lt;/h2&gt;

&lt;p&gt;Then explicitly define how you treat:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;operational metrics / telemetry&lt;/li&gt;
&lt;li&gt;managed service signals&lt;/li&gt;
&lt;li&gt;support workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AWS explicitly distinguishes these categories in its ESC materials. [S2]&lt;/p&gt;

&lt;h2&gt;
  
  
  “Sovereignty is a procurement problem.”
&lt;/h2&gt;

&lt;p&gt;Sovereignty becomes a production problem the first time you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a major incident&lt;/li&gt;
&lt;li&gt;a regulator question&lt;/li&gt;
&lt;li&gt;a customer audit&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Treat it like engineering.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion: the real story is auditability
&lt;/h2&gt;

&lt;p&gt;The AWS European Sovereign Cloud will be summarized by many as “an EU-based cloud with EU operations.” But the more interesting story—still not widely reflected—is the move toward &lt;strong&gt;sovereignty as an assurance model&lt;/strong&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Criteria → Controls → Independent validation → Evidence (ESC-SRF in Artifact; SOC 2 basis).&lt;/strong&gt; [S1]&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If sovereignty matters to you, treat it like security:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Requirements → Controls → Evidence.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Everything else is just vocabulary.&lt;/p&gt;




&lt;h2&gt;
  
  
  Sources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;[S1] Exploring the new AWS European Sovereign Cloud: Sovereign Reference Framework (ESC-SRF)&lt;br&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/blogs/security/exploring-the-new-aws-european-sovereign-cloud-sovereign-reference-framework/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/security/exploring-the-new-aws-european-sovereign-cloud-sovereign-reference-framework/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;[S2] Overview of the AWS European Sovereign Cloud (whitepaper PDF)&lt;br&gt;&lt;br&gt;
&lt;a href="https://d1.awsstatic.com/onedam/marketing-channels/website/aws/en_US/whitepapers/compliance/Overview_of_the_AWS_European_Sovereign_Cloud.pdf" rel="noopener noreferrer"&gt;https://d1.awsstatic.com/onedam/marketing-channels/website/aws/en_US/whitepapers/compliance/Overview_of_the_AWS_European_Sovereign_Cloud.pdf&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;[S3] European Digital Sovereignty FAQ (AWS)  &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgwc5fymyvr9uzpp73tjf.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgwc5fymyvr9uzpp73tjf.JPG" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;[S4] Establishing a European trust service provider for the AWS European Sovereign Cloud (AWS Security Blog)&lt;br&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/blogs/security/establishing-a-european-trust-service-provider-for-the-aws-european-sovereign-cloud/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/security/establishing-a-european-trust-service-provider-for-the-aws-european-sovereign-cloud/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;[S5] AWS plans to invest €7.8 billion into the AWS European Sovereign Cloud (AboutAmazon EU)&lt;br&gt;&lt;br&gt;
&lt;a href="https://www.aboutamazon.eu/news/aws/aws-plans-to-invest-7-8-billion-into-the-aws-european-sovereign-cloud" rel="noopener noreferrer"&gt;https://www.aboutamazon.eu/news/aws/aws-plans-to-invest-7-8-billion-into-the-aws-european-sovereign-cloud&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;[S6] Design approach (AWS Docs – ESC whitepaper page)&lt;br&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/whitepapers/latest/overview-aws-european-sovereign-cloud/design-approach.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/whitepapers/latest/overview-aws-european-sovereign-cloud/design-approach.html&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;[S7] Announcing initial services available in the AWS European Sovereign Cloud (incl. roadmap updates)&lt;br&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/blogs/security/announcing-initial-services-available-in-the-aws-european-sovereign-cloud-backed-by-the-full-power-of-aws/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/security/announcing-initial-services-available-in-the-aws-european-sovereign-cloud-backed-by-the-full-power-of-aws/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>security</category>
      <category>sovereign</category>
    </item>
    <item>
      <title>Architecting your GenAI data pipeline with AWS native services</title>
      <dc:creator>Artur Schneider</dc:creator>
      <pubDate>Sun, 22 Jun 2025 18:40:11 +0000</pubDate>
      <link>https://dev.to/aws-builders/architecting-your-genai-data-pipeline-with-aws-native-services-2nnm</link>
      <guid>https://dev.to/aws-builders/architecting-your-genai-data-pipeline-with-aws-native-services-2nnm</guid>
      <description>&lt;p&gt;When I started building GenAI solutions, I felt confident working with models, prompts, and architecture — but &lt;strong&gt;the data part always felt like a black box&lt;/strong&gt;. I kept asking myself: Where do I even begin if I want to use my own data? Every time I looked into it, I found a pile of scattered advice, incomplete setups, or tools that didn’t quite fit together. It reminded me of moving into a new house and opening the garage — only to find it packed with boxes from ten different people. You know there’s valuable stuff in there, but it’s all mixed up, mislabeled, and overwhelming.&lt;/p&gt;

&lt;p&gt;This post is what I wish someone had handed me back then — a clear, hands-on walkthrough of how to turn that chaotic garage into a well-organized workshop. If you’re comfortable with AWS and GenAI but still wondering how to structure, clean, and prepare your own data properly, this is for you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fevezdjaant7bi28l8m7i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fevezdjaant7bi28l8m7i.png" alt="Image 1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Getting Started
&lt;/h2&gt;

&lt;p&gt;First things first: &lt;strong&gt;what do I need before I even begin?&lt;/strong&gt; Beyond an AWS account and some basic AWS skills, start by &lt;strong&gt;taking inventory of your data&lt;/strong&gt;. Ask yourself: What types of data do I have? (CSVs, JSON logs, PDFs of documents, etc.) Where is it coming from? (On-prem systems, databases, S3 buckets, etc.) This will inform your pipeline design. Honestly, at first I felt paralyzed: “Should I use Glue? Athena? Lambda? Everything?” I was like a newbie chef staring at a pantry full of ingredients. The answer is: &lt;strong&gt;you don’t need magic&lt;/strong&gt; – just a plan. &lt;/p&gt;

&lt;p&gt;Start small: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create or identify an S3 bucket (or a couple) for raw data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you have existing sources (RDS, on-prem, SaaS APIs, etc.), consider how to move that data in – AWS Glue has connectors, and AWS DataSync or Transfer Family can help with files. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enable AWS CloudTrail for your S3 bucket so events (like new file uploads) can trigger processing. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Honestly, the first thing I realized was: &lt;strong&gt;you don’t have to nail everything at once&lt;/strong&gt;. Get one data source flowing into S3, see how it goes, then expand. It’s like testing a new recipe by cooking a small batch first.&lt;/p&gt;
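&lt;p&gt;If you prefer code to console clicks, that first step can be as small as this boto3 sketch (bucket and file names are placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import boto3

s3 = boto3.client("s3")

# Get one source flowing: push a local CSV into the raw bucket.
bucket = "myorg-data-raw"
s3.upload_file("orders.csv", bucket, "sales/2025/06/orders.csv")

# Emit object-level events (e.g. "Object Created") to EventBridge so new
# uploads can trigger processing. Object-level CloudTrail data events are
# the alternative route mentioned above.
s3.put_bucket_notification_configuration(
    Bucket=bucket,
    NotificationConfiguration={"EventBridgeConfiguration": {}},
)
&lt;/code&gt;&lt;/pre&gt;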

&lt;h2&gt;
  
  
  2. Structuring Your S3 Data Lake
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhth5zxkq0ljhxkdxg9cn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhth5zxkq0ljhxkdxg9cn.png" alt="Image2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you’re ready to store data, &lt;strong&gt;structure is key&lt;/strong&gt;. AWS best practices recommend &lt;strong&gt;multiple S3 “zones”&lt;/strong&gt; or layers, typically separate buckets for &lt;strong&gt;raw, stage/processing&lt;/strong&gt;, and &lt;strong&gt;analytics/curated&lt;/strong&gt; data. Think of it as organizing your garage: one rack for “just moved in – untouched stuff” (raw), one workbench for “mid-cleanup – in progress” (stage), and one shelf for “ready-to-use” (analytics). For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Raw layer&lt;/strong&gt; – store files exactly as you received them (CSV, JSON, PDF, etc.). Enable versioning here so you never lose the original. This is your time-capsule.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Stage layer&lt;/strong&gt; – place intermediate, cleaned-up data here. Convert formats (e.g. CSV→Parquet), perform initial transformations, and catalog the schema with AWS Glue.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Analytics layer&lt;/strong&gt; – put your fully processed, query-ready tables (often Parquet or Iceberg) here. This is what your models or analysts will ultimately consume.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You will likely use separate S3 buckets named by layer (e.g. myorg-data-raw, myorg-data-stage, myorg-data-analytics), possibly including environment or account info for clarity. Good naming makes governance and cost-tracking easier. &lt;/p&gt;

&lt;p&gt;A handy checklist:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Use at least &lt;strong&gt;3 layers (raw, stage, analytics)&lt;/strong&gt;, each in its own S3 bucket.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Keep originals intact in raw (no manual edits!).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Plan a folder structure/prefixes inside each bucket (date-based partitions can help).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enable encryption (S3 or via AWS KMS) and versioning on raw and stage buckets.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By thinking of your S3 lake as a well-labeled garage, you will save countless headaches later.&lt;/p&gt;
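&lt;p&gt;If you want to script the three-layer setup, a minimal boto3 sketch could look like this (bucket names and region are placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import boto3

s3 = boto3.client("s3", region_name="eu-central-1")

# Placeholder names; adapt the org prefix and region to your environment.
layers = ["myorg-data-raw", "myorg-data-stage", "myorg-data-analytics"]

for bucket in layers:
    s3.create_bucket(
        Bucket=bucket,
        CreateBucketConfiguration={"LocationConstraint": "eu-central-1"},
    )
    # Default encryption with a KMS-managed key.
    s3.put_bucket_encryption(
        Bucket=bucket,
        ServerSideEncryptionConfiguration={
            "Rules": [{
                "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}
            }]
        },
    )

# Versioning on raw and stage, so originals and intermediates survive mistakes.
for bucket in layers[:2]:
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )
&lt;/code&gt;&lt;/pre&gt;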

&lt;h2&gt;
  
  
  3. Ingesting Mixed Data Types
&lt;/h2&gt;

&lt;p&gt;Your data recipe likely has &lt;strong&gt;all sorts of ingredients&lt;/strong&gt;: relational tables, CSVs, JSON logs, and even &lt;strong&gt;PDFs or images&lt;/strong&gt;. AWS provides tools for each:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;For &lt;strong&gt;batch files&lt;/strong&gt; (CSV, JSON, images, PDFs in S3): you can simply upload them to S3 (via CLI, SDK, or GUI) or use AWS DataSync/Transfer for large/migrated datasets. Once in S3, set up AWS Glue Crawlers to auto-detect schema on CSV/JSON and populate the Glue Data Catalog. Crawlers are diligent little librarians that automatically say, “Hey, new data here!” and create tables you can query with Athena.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For &lt;strong&gt;streaming&lt;/strong&gt; or &lt;strong&gt;real-time data&lt;/strong&gt; (e.g. logs, IoT, social feeds): use Amazon Kinesis Data Firehose to ingest streams directly into S3. Firehose can even &lt;strong&gt;convert JSON to Parquet&lt;/strong&gt; on the fly for you, and it supports invoking Lambda for custom transforms (e.g. turning CSV logs into JSON via a blueprint). It’s like having a smart conveyor belt that batches, compresses, and drops your data into S3.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For &lt;strong&gt;databases&lt;/strong&gt; (on-prem or RDS): consider AWS Database Migration Service (DMS) or Glue’s JDBC connections to pull data into S3 or directly into Redshift/Athena as needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For &lt;strong&gt;documents and images (PDFs, scanned docs)&lt;/strong&gt;: Amazon Textract is your friend. It’s an AI OCR service that can extract text and structured data from scanned files. For example, drop a PDF of a report into S3 and trigger a Lambda that calls Textract to get the text. Save that output (say as plain text or JSON) back into S3 so it can join the rest of the lake. Bedrock Knowledge Bases (KBs) can even ingest common document formats like PDF, Word, HTML, and CSV directly. (Just keep each source file ≤50 MB.)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key points&lt;/strong&gt;: Your pipeline will likely be a mix of the above. You might set an EventBridge rule (via S3 events/CloudTrail) so that any new S3 upload triggers a Glue or Lambda job to process it. For example, new CSVs trigger a Glue ETL that cleans and moves data into the stage bucket; new PDFs trigger a Textract process that spits out text to S3. Remember, AWS is great at handling heterogeneous data – just string the right services together. &lt;/p&gt;
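&lt;p&gt;To illustrate the PDF branch of that pattern, here is a hedged sketch of a Lambda handler that starts an asynchronous Textract job for each new PDF (the ARNs are hypothetical placeholders; multi-page PDFs require the async API):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import boto3

textract = boto3.client("textract")

def handler(event, context):
    """On each S3 object-created event, start an async Textract job for PDFs."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        if not key.lower().endswith(".pdf"):
            continue
        job = textract.start_document_text_detection(
            DocumentLocation={"S3Object": {"Bucket": bucket, "Name": key}},
            # A completion notification lets a second function collect the
            # results and write plain text back to the stage bucket.
            NotificationChannel={
                "SNSTopicArn": "arn:aws:sns:eu-central-1:123456789012:textract-done",
                "RoleArn": "arn:aws:iam::123456789012:role/textract-sns",
            },
        )
        print("Started Textract job", job["JobId"], "for", key)
&lt;/code&gt;&lt;/pre&gt;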

&lt;h2&gt;
  
  
  4. Cataloging and Schema Harmonization (Glue &amp;amp; DataBrew)
&lt;/h2&gt;

&lt;p&gt;Once data is landing in S3, you need a &lt;strong&gt;catalog&lt;/strong&gt; (an index) and some cleaning. AWS Glue handles the catalog and ETL magic. Glue Crawlers will scan your raw S3 folders and register tables in the Glue Data Catalog. Think of the Data Catalog as a library card catalog for all your datasets. You can then use AWS Glue jobs (PySpark) or &lt;strong&gt;AWS Glue DataBrew&lt;/strong&gt; (no-code GUI) to clean and harmonize. &lt;/p&gt;

&lt;p&gt;DataBrew is a visual data-prep tool – no coding needed – that can profile and transform your data. It has 250+ built-in functions (filtering, renaming columns, converting formats, handling missing values, etc.). It is like having a friendly spreadsheet on steroids: you load a dataset, click to apply fixes (e.g. fix date formats, standardize field names, remove duplicates), and output a clean file. For example, if one source has “FirstName” and another has “first_name”, you can use DataBrew to unify them. &lt;/p&gt;

&lt;p&gt;Meanwhile, Glue ETL jobs can join, enrich, or further transform data. The advantage: everything can be automated in Glue Workflows. Glue (with DataBrew) lets you “&lt;strong&gt;clean and normalize without writing code&lt;/strong&gt;” – perfect for teams where not everyone is a developer. After transformation, write outputs to the “stage” or “analytics” bucket, and update the Data Catalog with the new schema. &lt;/p&gt;

&lt;p&gt;In practice, you can do:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Crawlers&lt;/strong&gt; auto-detect schema on raw data and log errors for missing or invalid values (a minimal crawler setup follows this list).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run a &lt;strong&gt;DataBrew recipe&lt;/strong&gt; to fix common issues across datasets (format dates, fill nulls, etc.).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use a &lt;strong&gt;Glue job&lt;/strong&gt; (Spark) to join datasets or convert everything to an analytics-friendly format (like Parquet/Apache Iceberg tables).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
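&lt;p&gt;Here is the crawler from the first bullet as a minimal boto3 sketch (role, database, and path names are placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import boto3

glue = boto3.client("glue")

glue.create_crawler(
    Name="raw-zone-crawler",
    Role="GlueCrawlerRole",  # IAM role with access to the raw bucket
    DatabaseName="raw_zone",
    Targets={"S3Targets": [{"Path": "s3://myorg-data-raw/sales/"}]},
    # Optional: run nightly instead of on demand.
    # Schedule="cron(0 3 * * ? *)",
)

glue.start_crawler(Name="raw-zone-crawler")
&lt;/code&gt;&lt;/pre&gt;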

&lt;p&gt;This process is like cleaning that messy garage: Dust off the data, sort it onto shelves, and create a manifest (the Glue catalog) so you can find what you need. All those crawlers and DataBrew steps mean your files go from “&lt;strong&gt;hot mess&lt;/strong&gt;” to “&lt;strong&gt;whew, this is actually usable&lt;/strong&gt;”.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Normalizing Documents for GenAI (Bedrock KBs, Textract)
&lt;/h2&gt;

&lt;p&gt;Text documents are a special case. For Generative AI (especially Retrieval-Augmented Generation), you often feed documents into a knowledge base or query engine. AWS’s Amazon Bedrock Knowledge Bases let you ingest text-based files (see supported formats above) and then query them with LLMs. But first you may need to &lt;strong&gt;normalize and chunk&lt;/strong&gt; the text. &lt;/p&gt;

&lt;p&gt;For example, if you have PDFs of manuals or reports, use &lt;strong&gt;Textract&lt;/strong&gt; to extract the text and tables. Once you have raw text, you might run a Glue job or Lambda to split it into logical sections or “chunks” of a few hundred words each. This is because Bedrock KBs often work best when content is broken into pieces (like paragraphs). Think of it as chopping a long document into bite-size pieces.&lt;/p&gt;
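&lt;p&gt;At its simplest, chunking needs no special service at all. This naive word-based chunker (sizes are arbitrary defaults, not Bedrock requirements) shows the idea:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;def chunk_text(text, max_words=300, overlap=30):
    """Split extracted text into overlapping pieces of a few hundred words."""
    words = text.split()
    chunks = []
    step = max_words - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
    return chunks

# Example: turn Textract output into bite-size .txt chunks for S3 upload.
with open("manual.txt") as f:
    for i, chunk in enumerate(chunk_text(f.read())):
        with open(f"manual.chunk-{i:03d}.txt", "w") as out:
            out.write(chunk)
&lt;/code&gt;&lt;/pre&gt;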

&lt;p&gt;Advanced note: Bedrock now supports things like &lt;strong&gt;semantic and hierarchical chunking&lt;/strong&gt; and even using foundation models to parse tricky PDFs. But at a beginner level, start simple: extract text, remove any scanned gibberish, and put everything in plain &lt;code&gt;.txt&lt;/code&gt; or &lt;code&gt;.md&lt;/code&gt; files in S3. Then connect that S3 as the data source for your Bedrock KB. The service will parse and index it for RAG queries. &lt;/p&gt;

&lt;p&gt;One more tip – normalizing terminology: Different documents might use “DOB”, “Date of Birth”, and “Birth Date”. You can use Glue or even LLM prompts to standardize these keys (so your GenAI sees them as the same concept). In intelligent document processing terms, this is “template and normalization” (defining aliases for different terms). It’s like telling the AI, “hey, whenever you see DOB or Birthdate, they all mean the same thing.” This ensures your knowledge base is clean and consistent.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Securing and Governing the Data Lake (IAM, Lake Formation, LF-Tags)
&lt;/h2&gt;

&lt;p&gt;Now that you have valuable data, lock it up properly. Start with &lt;strong&gt;IAM&lt;/strong&gt;: enforce the principle of least privilege, use IAM roles for your Glue jobs and Lambda functions (they get just the S3/Glue permissions they need), and consider AWS KMS for key management. But IAM alone can get tedious for per-table or per-column access. That’s where &lt;strong&gt;AWS Lake Formation&lt;/strong&gt; comes in. &lt;/p&gt;

&lt;p&gt;Lake Formation lets you set &lt;strong&gt;fine-grained access controls&lt;/strong&gt; on your data catalogs and S3 tables. A powerful feature is &lt;strong&gt;LF-Tags&lt;/strong&gt;: attribute-based tags you attach to tables/columns (like &lt;code&gt;department=finance&lt;/code&gt; or &lt;code&gt;sensitivity=PII&lt;/code&gt;), and then grant user roles those same tag values. Think of parking passes in a corporate garage with zones: each dataset has a parking sticker (LF-Tag) and each data analyst has a matching pass. Only matching stickers let you “drive through” to access the data. LF-Tags scale better than hand-granting each table to each user one by one.&lt;/p&gt;

&lt;p&gt;(Important note: LF-Tags are not the same as IAM tags. LF-Tags gate Lake Formation table access; IAM tags control IAM policies. Don’t confuse the two – one is for data access, the other for service permissions.) &lt;/p&gt;

&lt;p&gt;Also use Lake Formation to enforce column- or row-level filtering if needed (e.g. mask emails or only show Europe-region rows). And don’t forget S3 bucket policies or Access Points for cross-account sharing. In short, treat your data lake like Fort Knox: multi-layer security, logging everything. &lt;/p&gt;

&lt;p&gt;Whichever way you picture it, matching tags to passes is far easier than juggling dozens of individual user policies as your lake grows.&lt;/p&gt;
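&lt;p&gt;In boto3 terms, a minimal LF-Tag setup could look like the sketch below (tag names, table names, and the role ARN are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import boto3

lf = boto3.client("lakeformation")

# 1) Define the tag and its allowed values.
lf.create_lf_tag(TagKey="sensitivity", TagValues=["public", "pii"])

# 2) Stick the "parking sticker" on a catalog table.
lf.add_lf_tags_to_resource(
    Resource={"Table": {"DatabaseName": "analytics", "Name": "sales_stats"}},
    LFTags=[{"TagKey": "sensitivity", "TagValues": ["public"]}],
)

# 3) Hand out the matching "parking pass": this principal may SELECT from
#    any table tagged sensitivity=public.
lf.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/analysts"},
    Resource={"LFTagPolicy": {
        "ResourceType": "TABLE",
        "Expression": [{"TagKey": "sensitivity", "TagValues": ["public"]}],
    }},
    Permissions=["SELECT"],
)
&lt;/code&gt;&lt;/pre&gt;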

&lt;h2&gt;
  
  
  7. Automating the Pipeline (EventBridge, Glue Workflows, Step Functions)
&lt;/h2&gt;

&lt;p&gt;Manual clicks are error-prone. Automate your pipeline so new data &lt;strong&gt;flows&lt;/strong&gt; on its own. AWS EventBridge is your event router: for example, set it to catch all S3 &lt;code&gt;PutObject&lt;/code&gt; events (via CloudTrail) and trigger AWS Glue Workflows. Glue Workflows let you chain multiple Glue jobs, crawlers, and triggers as a single pipeline. &lt;/p&gt;

&lt;p&gt;A common pattern: &lt;strong&gt;S3 → CloudTrail → EventBridge rule → Glue workflow&lt;/strong&gt;. In practice I do this: when ten files land or after a 5-minute window, EventBridge starts the Glue workflow. Glue then runs a crawler (to detect new schema), runs an ETL job (to clean/convert data), and maybe another job to move data to analytics zone. This is event-driven ETL. The nice part is you don’t have to poll or schedule useless jobs – it reacts to real events. &lt;/p&gt;
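&lt;p&gt;On the Glue side, the “ten files or a 5-minute window” behavior is an event trigger with a batching condition. A hedged sketch (workflow, trigger, and crawler names are placeholders; the EventBridge rule that routes S3 events to the workflow is set up separately):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import boto3

glue = boto3.client("glue")

# With Type="EVENT", the workflow starts when EventBridge forwards matching
# S3 events; the batching condition fires after 10 events or 300 seconds,
# whichever comes first.
glue.create_trigger(
    Name="start-on-new-files",
    WorkflowName="sales-pipeline",
    Type="EVENT",
    EventBatchingCondition={"BatchSize": 10, "BatchWindow": 300},
    Actions=[{"CrawlerName": "raw-zone-crawler"}],
)
&lt;/code&gt;&lt;/pre&gt;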

&lt;p&gt;You could also use AWS Step Functions for orchestration (especially if you need branching logic or parallel tasks). Step Functions can call Glue, Lambda, SageMaker, etc., with built-in retries. I’ve used it to coordinate complex flows: e.g. after Glue finishes, step through custom quality checks (via Lambda) before loading data. &lt;/p&gt;

&lt;p&gt;In short, trigger-on-upload or scheduled rules in EventBridge start your pipeline; Glue workflows or Step Functions manage the steps; and the pipeline runs itself. It’s like setting up a line of dominoes – once you tip the first piece (new data arrived), everything else happens automatically. &lt;/p&gt;

&lt;h2&gt;
  
  
  8. Making Your Data Discoverable (Amazon DataZone &amp;amp; SageMaker)
&lt;/h2&gt;

&lt;p&gt;Lastly, once your data lake is humming, &lt;strong&gt;make it easy for your teams to find and use the data&lt;/strong&gt;. Amazon DataZone is a data catalog and governance service – think of it as a Google for your company’s data assets. You can register your S3 data (via Glue tables) in DataZone, tag them (e.g. &lt;code&gt;sales&lt;/code&gt;, &lt;code&gt;raw-data&lt;/code&gt;), and write descriptions. Users across the org can then &lt;strong&gt;search&lt;/strong&gt; or &lt;strong&gt;browse&lt;/strong&gt; for the datasets they need. This is golden for collaboration. &lt;/p&gt;

&lt;p&gt;The best part: SageMaker now integrates directly with DataZone. Data scientists and ML engineers can search the DataZone catalog inside SageMaker Studio or Canvas, and even pull data into notebooks or Canvas by clicking “Subscribe” to a dataset. It’s like having a built-in data shopping mall – shop for datasets (or features/models) right in your IDE. &lt;/p&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
        &lt;div class="c-embed__cover"&gt;
          &lt;a href="https://aws.amazon.com/de/blogs/machine-learning/amazon-sagemaker-now-integrates-with-amazon-datazone-to-streamline-machine-learning-governance/" class="c-link align-middle" rel="noopener noreferrer"&gt;
            &lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fd2908q01vomqb2.cloudfront.net%2Ff1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59%2F2024%2F05%2F01%2FML-16496-image001-1260x608.png" height="auto" class="m-0"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="c-embed__body"&gt;
        &lt;h2 class="fs-xl lh-tight"&gt;
          &lt;a href="https://aws.amazon.com/de/blogs/machine-learning/amazon-sagemaker-now-integrates-with-amazon-datazone-to-streamline-machine-learning-governance/" rel="noopener noreferrer" class="c-link"&gt;
            Amazon SageMaker now integrates with Amazon DataZone to streamline machine learning governance | Artificial Intelligence
          &lt;/a&gt;
        &lt;/h2&gt;
        &lt;div class="color-secondary fs-s flex items-center"&gt;
            &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fa0.awsstatic.com%2Fmain%2Fimages%2Fsite%2Ffav%2Ffavicon.ico"&gt;
          aws.amazon.com
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;For example, you might publish a cleaned Parquet table to DataZone as a “SalesStats” asset. Anyone on the ML team can now discover SalesStats from SageMaker and add it to their Jupyter environment. After training a model, they could even publish the model back to DataZone for others to reuse. &lt;/p&gt;

&lt;p&gt;In short, leverage &lt;strong&gt;DataZone&lt;/strong&gt; (linked to your Lake Formation / Glue Catalog) to catalog everything, and enable SageMaker’s data search/subscribe. That way, your data becomes a first-class citizen in the ML workflow. Your messy garage is now a public library where everyone can check out books (i.e. datasets) they need.&lt;/p&gt;

&lt;h2&gt;
  
  
  Putting It All Together
&lt;/h2&gt;

&lt;p&gt;Phew! That was a lot of detail, but remember: &lt;strong&gt;start simple and iterate&lt;/strong&gt;. The steps are roughly: get your AWS setup, organize S3, ingest all your data sources, clean and catalog with Glue/DataBrew, extract text from documents (Textract) for Bedrock, secure everything with IAM/LF-Tags, automate with EventBridge/Glue/Step Functions, and finally plug it into DataZone/SageMaker for others to find. &lt;/p&gt;

&lt;p&gt;At first it might feel like there are a million AWS services to learn – trust me, I’ve been there. But take it one step at a time. Each piece you set up (a crawler, a Tag, a DataBrew recipe) is like cleaning one corner of that garage. Eventually you’ll stand back and say, “Wow, my GenAI application now actually sees my data!” &lt;/p&gt;

&lt;p&gt;Good luck with your GenAI data adventure. I am still learning every day, and I will admit sometimes I screw up a bucket policy or mix up LF-Tag names. But as long as we keep our data organized, secure, and discoverable, our AI projects have the fuel they need.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>genai</category>
      <category>data</category>
      <category>dataengineering</category>
    </item>
    <item>
      <title>MCP explained: How the Model Context Protocol transforms AI in the Cloud</title>
      <dc:creator>Artur Schneider</dc:creator>
      <pubDate>Mon, 07 Apr 2025 11:28:52 +0000</pubDate>
      <link>https://dev.to/aws-builders/mcp-explained-how-the-model-context-protocol-transforms-ai-in-the-cloud-4hba</link>
      <guid>https://dev.to/aws-builders/mcp-explained-how-the-model-context-protocol-transforms-ai-in-the-cloud-4hba</guid>
      <description>&lt;p&gt;When I first read about MCP, I wasn't exactly sure what it was. But once I understood the Model Context Protocol (MCP), it was like one of those rare "aha!" moments you sometimes have in tech. Suddenly, it clicked why developers and cloud experts were so excited about this technology. Imagine giving your AI assistant superpowers without needing to fully retrain it — that's precisely what MCP makes possible!&lt;/p&gt;

&lt;p&gt;In today's fast-paced tech world, we're constantly looking for ways to accelerate development processes while improving quality. Especially in the AWS environment, where complexity increases with each new service, we need smarter tools that make our work easier. This is where MCP comes into play – a protocol that fundamentally changes the way we interact with AI models.&lt;/p&gt;

&lt;p&gt;But what exactly is MCP? Imagine having a brilliant colleague who has no access to your company data or systems. Without this information, their abilities are limited. MCP is like a bridge that suddenly gives this colleague access to all your data sources, tools, and systems – in a secure, controlled way. The result? An AI assistant that not only has general knowledge but also deep, specialized expertise in exactly the areas that are relevant to you.&lt;/p&gt;

&lt;p&gt;In this article, I want to show you why MCP is a real game-changer – especially for beginners who are just diving into the world of AI and cloud development. Together, we'll discover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How MCP works and why it's revolutionizing the AI landscape&lt;/li&gt;
&lt;li&gt;What concrete benefits MCP offers for developers and companies&lt;/li&gt;
&lt;li&gt;How you can immediately recognize the value of MCP through practical examples&lt;/li&gt;
&lt;li&gt;How you can take your first steps with MCP without being an AI expert&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After this article, you'll not only understand what MCP is, but also why it's so important for modern cloud applications. You'll see how MCP can help you save time, improve quality, and ensure the security of your data.&lt;/p&gt;

&lt;p&gt;So buckle up – we're embarking on an exciting journey into the world of Model Context Protocols that will show you what the future of AI integration in the cloud looks like!&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the Model Context Protocol (MCP)?
&lt;/h2&gt;

&lt;p&gt;If you've ever worked with AI assistants like ChatGPT or Claude, you know their impressive capabilities – but also their limitations. They can help you with many tasks, but as soon as it comes to specialized knowledge or accessing your own data, they quickly reach their limits. This is exactly where the Model Context Protocol (MCP) comes in.&lt;/p&gt;

&lt;h3&gt;
  
  
  MCP Simply Explained
&lt;/h3&gt;

&lt;p&gt;At its core, MCP is a standardized, open protocol that enables seamless interaction between large language models (LLMs), data sources, and tools. Think of MCP as a universal translator that mediates between your AI model and the outside world.&lt;/p&gt;

&lt;p&gt;Without getting lost in technical detail, here's a simple analogy: Think of your LLM as a brilliant advisor sitting in a soundproof room. This advisor has learned a lot during their training, but cannot access current information or interact with systems outside their room. MCP is like a communication system that suddenly allows this advisor to speak with the outside world, access your company data, and even perform actions in your systems – all while sensitive data remains securely in place.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Was MCP Developed?
&lt;/h3&gt;

&lt;p&gt;The development of MCP was a response to a fundamental problem: How can we expand the capabilities of AI models without constantly having to retrain them?&lt;/p&gt;

&lt;p&gt;Before MCP, there were essentially two approaches:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Complete retraining of the model&lt;/strong&gt; with specialized data – expensive, time-consuming, and inefficient&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prompt Engineering&lt;/strong&gt; – embedding context in requests, which is limited by token limits and lack of up-to-date information&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;MCP offers an elegant third way: It extends the capabilities of the model by giving it access to external knowledge sources and tools without changing the model itself.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Does MCP Work Technically?
&lt;/h3&gt;

&lt;p&gt;Without overwhelming you with too many technical details, here's a simplified look under the hood:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Request&lt;/strong&gt;: You ask a question or give a task to your LLM&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Recognition&lt;/strong&gt;: The LLM recognizes that it needs external information or tools&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Communication&lt;/strong&gt;: Via the MCP protocol, the LLM communicates with specialized servers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Retrieval&lt;/strong&gt;: The MCP servers access relevant data sources or tools&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration&lt;/strong&gt;: The obtained information is integrated into the context of the LLM&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Response&lt;/strong&gt;: The LLM generates a response that is now enriched with specialized knowledge&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The special thing about this: The sensitive data remains local and is not integrated into the model itself. This is an enormous advantage for data security.&lt;/p&gt;
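&lt;p&gt;To make steps 3–5 tangible, here's a minimal sketch of an MCP server built with the official Python SDK. I'm assuming the &lt;code&gt;mcp&lt;/code&gt; package and its FastMCP helper here; treat the exact API surface as illustrative and double-check the SDK documentation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Minimal sketch of an MCP server using the Python SDK's FastMCP helper.
# The LLM discovers this tool over the protocol and calls it when it needs
# current order data - the model itself is never retrained.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("order-lookup")

@mcp.tool()
def get_order_status(order_id: str) -&amp;gt; str:
    """Return the status of an order from an internal system (stubbed here)."""
    # In a real server you would query your database or ticket system here
    return f"Order {order_id}: shipped"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;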

&lt;h3&gt;
  
  
  MCP in the AWS World
&lt;/h3&gt;

&lt;p&gt;AWS recognized the potential of MCP early on and released a suite of specialized MCP servers, the "AWS MCP Servers for code assistants." These servers bring AWS best practices directly into your development workflow.&lt;/p&gt;

&lt;p&gt;Imagine you're working on a complex AWS project. Without MCP, you would have to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Spend hours reading documentation&lt;/li&gt;
&lt;li&gt;Research best practices&lt;/li&gt;
&lt;li&gt;Understand and implement security policies&lt;/li&gt;
&lt;li&gt;Manually incorporate cost optimizations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With AWS MCP Servers, you can simply ask your AI assistant: "How do I implement a secure, cost-optimized Amazon Bedrock Knowledge Base?" and immediately receive code that follows AWS best practices, with built-in security controls and optimized resource configurations.&lt;/p&gt;

&lt;h3&gt;
  
  
  MCP Visualized
&lt;/h3&gt;

&lt;p&gt;To make the concept more tangible, here's a simplified representation of the MCP architecture:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+---------------+       +----------------+       +-------------------+
|               |       |                |       |                   |
|  User         |------&amp;gt;|  LLM with MCP  |&amp;lt;-----&amp;gt;|  MCP Server       |
|  (You)        |       |  Integration   |       |  (Specialized)    |
|               |       |                |       |                   |
+---------------+       +----------------+       +------+------------+
                                                        |
                                                        v
                                               +------------------+
                                               |                  |
                                               |  Data Sources    |
                                               |  &amp;amp; Tools         |
                                               |                  |
                                               +------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this model, the LLM remains unchanged but gains access to specialized knowledge and capabilities through the MCP connection.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why is MCP a Breakthrough?
&lt;/h3&gt;

&lt;p&gt;MCP fundamentally changes how we can work with AI models:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Extensibility&lt;/strong&gt;: Models can acquire new capabilities without being retrained&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Freshness&lt;/strong&gt;: Access to the latest information, not just stale training data&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Specialization&lt;/strong&gt;: General models can become domain experts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Protection&lt;/strong&gt;: Sensitive data remains local and secure&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agentic AI&lt;/strong&gt;: Enables AI assistants that can actually perform actions in your systems&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In the next section, we'll look at the concrete benefits of MCP for developers and companies – with practical examples that show you how MCP can revolutionize your daily work.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Benefits of MCP for Developers and Companies
&lt;/h2&gt;

&lt;p&gt;Now that we understand what MCP is and how it works, let's look at what concrete benefits it brings for you as a developer or for your company. And don't worry – I'll explain everything with practical examples that are easy to understand even for beginners.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Access to Specialized Knowledge Without Retraining Models
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What does this mean?&lt;/strong&gt; Imagine being able to transform a general practitioner into a heart specialist within seconds – without them having to study for years. That's exactly what MCP enables for AI models.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Practical example:&lt;/strong&gt; &lt;br&gt;
Let's say you're working on an AWS project and need help implementing a secure Amazon Bedrock Knowledge Base. Without MCP, you would either:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Spend hours in the AWS documentation&lt;/li&gt;
&lt;li&gt;Hire an expensive AWS specialist&lt;/li&gt;
&lt;li&gt;Experiment with trial and error and hope not to overlook anything&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With AWS MCP Servers, you can simply ask: "How do I implement a secure Amazon Bedrock Knowledge Base for my company data?" and immediately receive:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# AWS CDK Code with Best Practices for Amazon Bedrock Knowledge Base
&lt;/span&gt;&lt;span class="n"&gt;const&lt;/span&gt; &lt;span class="n"&gt;knowledgeBase&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;BedrockKnowledgeBase&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CompanyKB&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;embeddingModel&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;BedrockFoundationModel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;TITAN_EMBED_TEXT_V1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;vectorStore&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;OpenSearchServerlessVectorStore&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;VectorStore&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;encryption&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;OpenSearchEncryption&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;KMS&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;ebs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;OpenSearchEbsOptions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;provisioned&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;OpenSearchVolumeType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GP3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="o"&gt;//&lt;/span&gt; &lt;span class="n"&gt;Automatically&lt;/span&gt; &lt;span class="n"&gt;generated&lt;/span&gt; &lt;span class="n"&gt;security&lt;/span&gt; &lt;span class="n"&gt;controls&lt;/span&gt;
&lt;span class="n"&gt;cdk&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;nag&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NagSuppressions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addResourceSuppressions&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;knowledgeBase&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
  &lt;span class="p"&gt;[{&lt;/span&gt; &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;AwsSolutions-IAM4&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;reason&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Managed policy used only for Bedrock service role&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}]&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What's special: This code already contains all AWS best practices, security controls, and optimizations – without you having to be an AWS expert yourself.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Data Security Through Local Data Storage
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What does this mean?&lt;/strong&gt; Your sensitive company data doesn't need to be uploaded to the AI model or used for training. It remains secure in your environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Practical example:&lt;/strong&gt;&lt;br&gt;
Imagine you have confidential customer data that must not leave your company under any circumstances. With MCP, you can:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set up a local MCP server that accesses your internal database&lt;/li&gt;
&lt;li&gt;Connect your AI assistant to this server via MCP&lt;/li&gt;
&lt;li&gt;Make requests like: "Summarize the sales figures for the last quarter"&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The process then looks like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The request goes to the LLM&lt;/li&gt;
&lt;li&gt;The LLM recognizes that it needs sales data&lt;/li&gt;
&lt;li&gt;Via MCP, the request is forwarded to your local server&lt;/li&gt;
&lt;li&gt;The server retrieves the data from your database (the data never leaves your network)&lt;/li&gt;
&lt;li&gt;The summary is generated and returned&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The sensitive sales data remains in your secure environment the entire time – an enormous advantage over conventional approaches.&lt;/p&gt;
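&lt;p&gt;Here's what step 1 could look like as a minimal sketch, again assuming the Python SDK's FastMCP helper; the database file and table are invented for illustration. The key point: only the aggregated answer ever leaves your machine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hedged sketch of a local MCP server: the raw rows stay on your machine,
# and the tool returns only the aggregate the LLM needs for its summary.
import sqlite3

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("sales-data")

@mcp.tool()
def quarterly_sales_total(quarter: str) -&amp;gt; float:
    """Aggregate sales for a quarter, e.g. '2025-Q1', from a local database."""
    conn = sqlite3.connect("sales.db")  # hypothetical local database file
    try:
        row = conn.execute(
            "SELECT SUM(amount) FROM sales WHERE quarter = ?", (quarter,)
        ).fetchone()
        return row[0] or 0.0
    finally:
        conn.close()

if __name__ == "__main__":
    mcp.run()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;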
&lt;h3&gt;
  
  
  3. Reduction of Development Time
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What does this mean?&lt;/strong&gt; Tasks that would normally take days or weeks can be completed in minutes with MCP.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Practical example:&lt;/strong&gt;&lt;br&gt;
Let's say you need to create an AWS Lambda function that reads data from a DynamoDB table, processes it, and writes it to an S3 bucket. Traditionally, you would:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Read the documentation for Lambda, DynamoDB, and S3 (2-3 hours)&lt;/li&gt;
&lt;li&gt;Understand and configure IAM roles and policies (1-2 hours)&lt;/li&gt;
&lt;li&gt;Write and test the code (3-4 hours)&lt;/li&gt;
&lt;li&gt;Fix bugs and optimize (2-3 hours)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Total time: 8-12 hours&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With AWS MCP Servers:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You describe your project: "Create a Lambda function that reads data from DynamoDB and writes to S3"&lt;/li&gt;
&lt;li&gt;The MCP server generates:

&lt;ul&gt;
&lt;li&gt;The complete Lambda code&lt;/li&gt;
&lt;li&gt;The IAM roles with least-privilege principle&lt;/li&gt;
&lt;li&gt;CloudFormation/CDK for the infrastructure&lt;/li&gt;
&lt;li&gt;Logging and monitoring configuration&lt;/li&gt;
&lt;li&gt;Error handling and retry logic&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Total time: 10-15 minutes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is not an exaggeration – I've experienced first-hand how MCP can drastically reduce development time, especially for AWS implementations.&lt;/p&gt;
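&lt;p&gt;To demystify what "the complete Lambda code" boils down to, here's a heavily stripped-down sketch of such a function. The table and bucket names are placeholders, and real generated code would add pagination, retries, and proper error handling:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Stripped-down sketch of the described Lambda: read items from DynamoDB,
# transform them, and write the result to S3. All names are placeholders.
import json

import boto3

dynamodb = boto3.resource("dynamodb")
s3 = boto3.client("s3")

def lambda_handler(event, context):
    table = dynamodb.Table("orders-table")   # placeholder table name
    items = table.scan().get("Items", [])    # real code should paginate

    # DynamoDB returns Decimal values, so stringify numbers before JSON dumping
    processed = [
        {"id": item["id"], "total": str(item.get("total", 0))}
        for item in items
    ]

    s3.put_object(
        Bucket="processed-orders-bucket",    # placeholder bucket name
        Key="exports/orders.json",
        Body=json.dumps(processed),
    )
    return {"statusCode": 200, "count": len(processed)}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;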
&lt;h3&gt;
  
  
  4. Automatic Application of Best Practices
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What does this mean?&lt;/strong&gt; You no longer need to know and manually implement all best practices – MCP does this automatically for you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Practical example:&lt;/strong&gt;&lt;br&gt;
Imagine you're developing a new web application on AWS and need to ensure it meets security standards. Without MCP, you would have to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Study the AWS Well-Architected Framework&lt;/li&gt;
&lt;li&gt;Understand Security Hub recommendations&lt;/li&gt;
&lt;li&gt;Research compliance requirements&lt;/li&gt;
&lt;li&gt;Manually implement and verify everything&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With AWS MCP Servers, you automatically receive:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Automatically generated CloudFormation template with integrated best practices&lt;/span&gt;
&lt;span class="na"&gt;Resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;WebAppBucket&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::S3::Bucket&lt;/span&gt;
    &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;BucketEncryption&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;ServerSideEncryptionConfiguration&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;ServerSideEncryptionByDefault&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;SSEAlgorithm&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AES256&lt;/span&gt;
      &lt;span class="na"&gt;PublicAccessBlockConfiguration&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;BlockPublicAcls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;BlockPublicPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;IgnorePublicAcls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;RestrictPublicBuckets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;LoggingConfiguration&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;DestinationBucketName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;LoggingBucket&lt;/span&gt;
        &lt;span class="na"&gt;LogFilePrefix&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;webapp-access-logs/&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The MCP server has automatically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Activated encryption&lt;/li&gt;
&lt;li&gt;Blocked public access&lt;/li&gt;
&lt;li&gt;Configured logging&lt;/li&gt;
&lt;li&gt;Implemented other best practices&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You no longer need to research and implement these things individually – an enormous time saver and security boost.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Scalability and Flexibility
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What does this mean?&lt;/strong&gt; MCP grows with your requirements and can be adapted to different use cases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Practical example:&lt;/strong&gt;&lt;br&gt;
Imagine you start with a simple application that only uses basic AWS services. Over time, your project grows and you need:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Integration with Amazon Bedrock for AI functions&lt;/li&gt;
&lt;li&gt;Complex data processing with AWS Glue&lt;/li&gt;
&lt;li&gt;Serverless architecture with AWS Lambda and API Gateway&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With MCP, you don't have to become an expert for each new service. You can simply add new MCP servers that specialize in these services:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Core MCP Server&lt;/strong&gt;: Basic AWS functions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS CDK MCP Server&lt;/strong&gt;: Infrastructure as code&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bedrock Knowledge Bases MCP Server&lt;/strong&gt;: AI integration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost MCP Server&lt;/strong&gt;: Cost optimization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Your AI assistant can then seamlessly switch between these servers depending on which expertise is needed – like a team of specialists working perfectly together.&lt;/p&gt;
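&lt;p&gt;In practice, "adding a specialist" is usually just one more entry in your assistant's MCP configuration. As a hedged illustration, a client configuration along these lines registers two of the servers above (the package names follow the awslabs/mcp repository's naming convention – verify the current names there before use):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "mcpServers": {
    "awslabs.core-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.core-mcp-server@latest"]
    },
    "awslabs.cdk-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.cdk-mcp-server@latest"]
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;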

&lt;h3&gt;
  
  
  Real Example: From Days to Minutes
&lt;/h3&gt;

&lt;p&gt;Let me share a real example from my own experience:&lt;/p&gt;

&lt;p&gt;Recently, I was tasked with developing a solution that integrates company documents into an Amazon Bedrock Knowledge Base and makes them accessible via a chatbot. Traditionally, this project would have taken about two weeks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Research on Amazon Bedrock Knowledge Bases (2-3 days)&lt;/li&gt;
&lt;li&gt;Development of the document ingestor (3-4 days)&lt;/li&gt;
&lt;li&gt;Implementation of the chatbot with Bedrock integration (3-4 days)&lt;/li&gt;
&lt;li&gt;Testing and optimization (2-3 days)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With AWS MCP Servers, I could:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Describe my project&lt;/li&gt;
&lt;li&gt;Adapt the generated code&lt;/li&gt;
&lt;li&gt;Deploy the solution&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Total time: Under 4 hours instead of 2 weeks!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The generated code already contained:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Optimal vector database configuration&lt;/li&gt;
&lt;li&gt;Secure IAM roles and policies&lt;/li&gt;
&lt;li&gt;Efficient document processing&lt;/li&gt;
&lt;li&gt;Cost-optimized resource configuration&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How MCP Differs from Current Approaches with LLMs and Knowledge Bases
&lt;/h2&gt;

&lt;p&gt;To truly understand the revolutionary nature of MCP, it's important to compare it with current approaches for enhancing LLMs with specialized knowledge. Let's explore how MCP differs from and improves upon existing methods:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Traditional RAG (Retrieval-Augmented Generation) vs. MCP
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Traditional RAG:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creates a vector database of documents&lt;/li&gt;
&lt;li&gt;When a query arrives, it searches for relevant documents&lt;/li&gt;
&lt;li&gt;Inserts these documents into the prompt&lt;/li&gt;
&lt;li&gt;Limited by context window size (typically 8K-128K tokens)&lt;/li&gt;
&lt;li&gt;Data must be pre-processed and indexed&lt;/li&gt;
&lt;li&gt;Static knowledge that requires manual updates&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;MCP Approach:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provides a standardized protocol for LLM-to-tool communication&lt;/li&gt;
&lt;li&gt;Dynamically accesses data sources as needed, not just pre-indexed documents&lt;/li&gt;
&lt;li&gt;Can perform complex queries and transformations on data&lt;/li&gt;
&lt;li&gt;Not limited by context window size since data is processed externally&lt;/li&gt;
&lt;li&gt;Can access real-time information and live systems&lt;/li&gt;
&lt;li&gt;Automatically stays up-to-date with source systems&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key Difference:&lt;/strong&gt; RAG is like giving the LLM a relevant book to read before answering your question. MCP is like giving the LLM the ability to use a computer, search databases, and run specialized tools to find and process information.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Fine-tuning vs. MCP
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Fine-tuning:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Requires collecting specialized training data&lt;/li&gt;
&lt;li&gt;Modifies the model's weights through additional training&lt;/li&gt;
&lt;li&gt;Knowledge is "baked in" and static&lt;/li&gt;
&lt;li&gt;Expensive and time-consuming process&lt;/li&gt;
&lt;li&gt;Requires specialized ML expertise&lt;/li&gt;
&lt;li&gt;Knowledge becomes outdated as the world changes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;MCP Approach:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Leaves the base model unchanged&lt;/li&gt;
&lt;li&gt;Extends capabilities through external connections&lt;/li&gt;
&lt;li&gt;Knowledge remains in original systems, always current&lt;/li&gt;
&lt;li&gt;Quick to implement with minimal setup&lt;/li&gt;
&lt;li&gt;Requires minimal ML expertise&lt;/li&gt;
&lt;li&gt;Automatically incorporates the latest information&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key Difference:&lt;/strong&gt; Fine-tuning is like teaching a student everything they might need to know in advance. MCP is like teaching them how to use a library, internet, and specialized tools to find information when needed.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Function Calling vs. MCP
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Function Calling:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Allows LLMs to call predefined functions&lt;/li&gt;
&lt;li&gt;Functions must be registered in advance&lt;/li&gt;
&lt;li&gt;Limited to the specific API endpoints defined&lt;/li&gt;
&lt;li&gt;Often requires custom implementation for each use case&lt;/li&gt;
&lt;li&gt;Typically lacks standardization across different systems&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;MCP Approach:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provides a standardized protocol for tool use&lt;/li&gt;
&lt;li&gt;Enables discovery and use of tools not defined at design time&lt;/li&gt;
&lt;li&gt;Creates an ecosystem of compatible tools and services&lt;/li&gt;
&lt;li&gt;Implements a consistent interface across different systems&lt;/li&gt;
&lt;li&gt;Allows for complex workflows across multiple tools&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key Difference:&lt;/strong&gt; Function calling is like giving an LLM a specific set of tools with instruction manuals. MCP is like giving it the ability to discover, learn about, and use any tool in an entire workshop, even ones that didn't exist when it was created.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Knowledge Bases vs. MCP
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Traditional Knowledge Bases:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Static repositories of information&lt;/li&gt;
&lt;li&gt;Require manual updates and maintenance&lt;/li&gt;
&lt;li&gt;Often siloed from other systems&lt;/li&gt;
&lt;li&gt;Limited to the information explicitly added&lt;/li&gt;
&lt;li&gt;Typically text-based question and answer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;MCP Approach:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dynamic connections to live data sources&lt;/li&gt;
&lt;li&gt;Automatically updated as source systems change&lt;/li&gt;
&lt;li&gt;Integrated with multiple systems and tools&lt;/li&gt;
&lt;li&gt;Can access any information available in connected systems&lt;/li&gt;
&lt;li&gt;Enables not just answers but actions and complex workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key Difference:&lt;/strong&gt; A knowledge base is like a comprehensive encyclopedia. MCP is like having access to the entire internet, live databases, and specialized tools that can perform actions based on the information.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Agent Frameworks vs. MCP
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Agent Frameworks:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Often proprietary implementations&lt;/li&gt;
&lt;li&gt;Typically designed for specific use cases&lt;/li&gt;
&lt;li&gt;May lack standardization&lt;/li&gt;
&lt;li&gt;Can be complex to set up and maintain&lt;/li&gt;
&lt;li&gt;Often require significant customization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;MCP Approach:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open, standardized protocol&lt;/li&gt;
&lt;li&gt;Designed for general-purpose use&lt;/li&gt;
&lt;li&gt;Consistent implementation across platforms&lt;/li&gt;
&lt;li&gt;Simplified setup with standardized interfaces&lt;/li&gt;
&lt;li&gt;Works out of the box with compatible systems&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key Difference:&lt;/strong&gt; Agent frameworks are like custom-built robots designed for specific tasks. MCP is like a universal standard that allows any AI to connect with any compatible tool or data source.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Fundamental Shift: From Static to Dynamic Intelligence
&lt;/h3&gt;

&lt;p&gt;The most profound difference between MCP and traditional approaches is the shift from static to dynamic intelligence:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Traditional Approaches:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Knowledge is fixed at training or fine-tuning time&lt;/li&gt;
&lt;li&gt;Updates require retraining or reindexing&lt;/li&gt;
&lt;li&gt;Limited by what was anticipated during design&lt;/li&gt;
&lt;li&gt;Intelligence is contained within the model&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;MCP Approach:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Knowledge is accessed dynamically when needed&lt;/li&gt;
&lt;li&gt;Updates happen automatically as source systems change&lt;/li&gt;
&lt;li&gt;Can adapt to unanticipated needs through tool discovery&lt;/li&gt;
&lt;li&gt;Intelligence is distributed across the model and connected systems&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This shift represents a fundamental evolution in how we think about AI systems - moving from isolated, static models to connected, dynamic systems that can leverage specialized tools and real-time information.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: MCP as the Key to the AI Revolution in the Cloud
&lt;/h2&gt;

&lt;p&gt;We've taken an exciting journey through the world of Model Context Protocols, and I hope you can now see why I'm so enthusiastic about this technology. MCP is not just another acronym in the already crowded tech world – it's a fundamental paradigm shift in the way we work with AI models.&lt;/p&gt;

&lt;p&gt;Let's briefly summarize the key insights:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;MCP bridges the gap&lt;/strong&gt; between general AI models and specialized use cases by enabling seamless access to external data sources and tools.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;The benefits are impressive&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Access to specialized knowledge without retraining&lt;/li&gt;
&lt;li&gt;Improved data security through local data storage&lt;/li&gt;
&lt;li&gt;Drastic reduction in development time&lt;/li&gt;
&lt;li&gt;Automatic application of best practices&lt;/li&gt;
&lt;li&gt;High scalability and flexibility&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Practical applications&lt;/strong&gt; range from AWS infrastructure development to knowledge base integration to complex automation scenarios.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The entry barrier is low&lt;/strong&gt; – you don't need to be an AI expert to benefit from MCP.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;MCP represents a fundamental evolution&lt;/strong&gt; from traditional approaches:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dynamic vs. static knowledge access&lt;/li&gt;
&lt;li&gt;Distributed vs. centralized intelligence&lt;/li&gt;
&lt;li&gt;Standardized vs. custom implementations&lt;/li&gt;
&lt;li&gt;Real-time vs. pre-processed information&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I'm convinced that MCP will significantly shape the future of AI integration in the cloud. We're just at the beginning of this development, and the possibilities that arise from it are nearly limitless. Imagine how your daily work could change if you had an AI assistant that not only has general knowledge but also deep understanding of your specific domain, your company data, and your technical environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  My Personal Assessment
&lt;/h3&gt;

&lt;p&gt;As someone who works with AWS and cloud technologies daily, I can say from personal experience: MCP has revolutionized my productivity. Tasks that used to take days, I now complete in hours or even minutes. And the best part: The quality of my work has improved because I can access specialized knowledge and best practices without having to be an expert in every area myself.&lt;/p&gt;

&lt;p&gt;I see MCP as a decisive step toward a future where AI is not just a tool, but a real partner in development – a partner that understands you, knows your requirements, and helps you develop better solutions faster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Your Next Steps
&lt;/h3&gt;

&lt;p&gt;If you've become curious about MCP after this article (and I hope you have!), here are some recommendations for your next steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Experiment with AWS MCP Servers&lt;/strong&gt;: The MCP servers provided by AWS are an excellent entry point. They are well-documented and easy to use.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Connect your preferred AI assistant with MCP&lt;/strong&gt;: Whether you use Claude, Amazon Q, or other tools – many modern AI assistants already support MCP integration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Identify use cases in your environment&lt;/strong&gt;: Consider which recurring tasks in your daily work could benefit from MCP.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Share your experiences&lt;/strong&gt;: The MCP community is growing rapidly, and your insights could help others.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Further Resources
&lt;/h3&gt;

&lt;p&gt;If you want to dive deeper into the subject, here are some resources I can recommend:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/awslabs/mcp/" rel="noopener noreferrer"&gt;AWS MCP Servers on GitHub&lt;/a&gt; – Open-source code and documentation&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/de/blogs/machine-learning/introducing-aws-mcp-servers-for-code-assistants-part-1/" rel="noopener noreferrer"&gt;Amazon MCP Blog&lt;/a&gt; – Introducing AWS MCP Servers for code assistants (Part 1)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The AI revolution is in full swing, and with MCP, you have a powerful tool in hand to actively shape this revolution. I'm excited to see what innovative solutions you'll develop with it!&lt;/p&gt;

&lt;p&gt;Have you already worked with MCP, or do you have questions about it? I'd love to hear from you – let me know in the comments or reach out directly on LinkedIn.&lt;/p&gt;

&lt;p&gt;Until next time – happy coding and good luck with MCP!&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>ai</category>
      <category>aws</category>
      <category>llm</category>
    </item>
    <item>
      <title>How I built my AWS AI assistant: Integrating Amazon Bedrock Agents with Slack</title>
      <dc:creator>Artur Schneider</dc:creator>
      <pubDate>Tue, 01 Apr 2025 14:47:43 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-i-built-my-aws-ai-assistant-integrating-amazon-bedrock-agents-with-slack-3e8n</link>
      <guid>https://dev.to/aws-builders/how-i-built-my-aws-ai-assistant-integrating-amazon-bedrock-agents-with-slack-3e8n</guid>
      <description>&lt;p&gt;Ever felt like Dr. Frankenstein, creating a digital assistant that follows your every command? That's exactly what happened when I integrated Amazon Bedrock Agent into Slack! "IT'S ALIVE!" was literally my reaction when my AWS AI assistant sent its first response. &lt;/p&gt;

&lt;p&gt;Unlike Mary Shelley's monster, this creation won't turn against you (as long as you set those IAM permissions right! 😅). It's the perfect colleague - never complains about overtime, doesn't steal your lunch from the fridge, and actually reads your documentation!&lt;/p&gt;

&lt;p&gt;In this article, I'll show you how to build your own agentic AI assistant for AWS that you have FULL control over. The power is intoxicating! "Restart that EC2 instance, my faithful servant!" &lt;em&gt;evil laugh&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;But before we dive into the technical details, let's talk about something crucial: &lt;strong&gt;security&lt;/strong&gt;. When building AI assistants that can interact with your AWS environment, security isn't just a feature—it's a fundamental requirement. Throughout this guide, we'll emphasize secure implementation practices, from properly scoped IAM permissions to secure API endpoints and authentication mechanisms. Remember, with great power comes great responsibility, and we want our AI assistant to be helpful without creating security vulnerabilities.&lt;/p&gt;

&lt;p&gt;I'll demonstrate the capabilities of this integration using a simple example of describing EC2 instances, but the potential use cases extend far beyond this. At the end of this article, I'll share additional use cases that leverage this integration while maintaining a strong security posture. Whether you're looking to streamline operations, enhance monitoring, or improve incident response, this integration can be adapted to meet various needs—all with security at its core.&lt;/p&gt;

&lt;p&gt;So, let's kick off this journey to create our own AWS AI assistant that's both powerful and secure. By the end, you'll have a fully functional integration between Amazon Bedrock Agents and Slack that you can customize to your specific requirements.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we dive into creating our monster... I mean, assistant, let's make sure you have everything you need:&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account with appropriate permissions&lt;/li&gt;
&lt;li&gt;Access to Amazon Bedrock service (check regional availability)&lt;/li&gt;
&lt;li&gt;IAM permissions to create and manage:

&lt;ul&gt;
&lt;li&gt;Lambda functions&lt;/li&gt;
&lt;li&gt;API Gateway resources&lt;/li&gt;
&lt;li&gt;Amazon Bedrock Agents&lt;/li&gt;
&lt;li&gt;IAM roles and policies&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Slack Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;A Slack workspace where you have permissions to create apps&lt;/li&gt;
&lt;li&gt;Admin or appropriate permissions to add apps to your workspace&lt;/li&gt;
&lt;li&gt;Ability to create channels where your bot will post messages&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Development Tools
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Basic understanding of AWS services&lt;/li&gt;
&lt;li&gt;Familiarity with JSON and REST APIs&lt;/li&gt;
&lt;li&gt;Access to a code editor for Lambda function development&lt;/li&gt;
&lt;li&gt;Basic understanding of Python (for Lambda functions)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Optional Prerequisites (depending on approach)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;For Slack Bot approach: Slack Bot token with appropriate scopes&lt;/li&gt;
&lt;li&gt;For Webhook approach: No additional requirements beyond Slack app creation&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Understanding Amazon Bedrock Agent
&lt;/h2&gt;

&lt;p&gt;Amazon Bedrock Agent is like the brain of our Frankenstein monster. It's a fully managed service that allows you to create agentic AI assistants powered by foundation models. These agents can be connected to your enterprise systems and data sources, allowing them to take actions on your behalf.&lt;/p&gt;

&lt;p&gt;The key components of Amazon Bedrock Agent include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Foundation Models&lt;/strong&gt;: The underlying AI models that power your agent (Claude, Llama, etc.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Knowledge Bases&lt;/strong&gt;: Connect your data sources for the agent to reference&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Action Groups&lt;/strong&gt;: Define specific actions your agent can perform&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API Schema&lt;/strong&gt;: Define the structure of your API calls&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security Controls&lt;/strong&gt;: IAM permissions to control what your agent can do&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For our AWS AI assistant, we'll create an agent that can perform common AWS operations like checking EC2 instance status. The example is deliberately simple – you can adapt the code to your own requirements.&lt;/p&gt;
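&lt;p&gt;Before we wire in Slack, it helps to see the programming model once: after an agent is prepared and published behind an alias, invoking it from Python is a single streaming call. In this sketch the agent and alias IDs are placeholders for your own values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch: invoke a prepared Bedrock Agent and collect its streamed reply.
# agentId / agentAliasId are placeholders for your own agent.
import boto3

client = boto3.client("bedrock-agent-runtime")

response = client.invoke_agent(
    agentId="AGENT_ID_PLACEHOLDER",
    agentAliasId="ALIAS_ID_PLACEHOLDER",
    sessionId="demo-session-1",       # reuse the ID to keep conversation context
    inputText="How many EC2 instances are running in eu-central-1?",
)

answer = ""
for event in response["completion"]:  # the reply arrives as an event stream
    chunk = event.get("chunk")
    if chunk:
        answer += chunk["bytes"].decode("utf-8")

print(answer)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;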

&lt;h2&gt;
  
  
  Architecture Overview
&lt;/h2&gt;

&lt;p&gt;Let's take a look at the overall architecture of our integration:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgy6xt7w2b322xupgdcc6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgy6xt7w2b322xupgdcc6.png" alt="Basic Architecture showing the flow from Slack App through API Gateway and Lambda to Amazon Bedrock" width="800" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The architecture consists of four main components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Slack App&lt;/strong&gt;: This is the interface through which users interact with our AI assistant. Users send messages to the bot, and the bot responds with information or performs actions based on those messages.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon API Gateway&lt;/strong&gt;: Acts as the secure entry point for requests from Slack. It receives webhook events from Slack and forwards them to our Lambda function. The API Gateway provides a layer of security by validating requests and controlling access.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Lambda&lt;/strong&gt;: The SlackHandlerLambda function processes incoming messages from Slack, invokes the Bedrock Agent, and sends responses back to Slack. This Lambda function is the bridge between Slack and Amazon Bedrock.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon Bedrock&lt;/strong&gt;: The brain of our operation. The Bedrock Agent processes user requests, understands their intent, and executes the appropriate actions through the EC2 Operations Lambda function.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;From a security perspective, this architecture implements several important safeguards:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;API Gateway Authentication&lt;/strong&gt;: Validates incoming requests from Slack using signature verification (see the sketch after this list)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IAM Role-Based Access&lt;/strong&gt;: Each Lambda function has a specific IAM role with only the permissions it needs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secure Communication&lt;/strong&gt;: All communication between components uses HTTPS&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Request Validation&lt;/strong&gt;: The SlackHandlerLambda validates incoming requests before processing them&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Least Privilege Principle&lt;/strong&gt;: The EC2 Operations Lambda has only the specific permissions needed to describe EC2 instances&lt;/li&gt;
&lt;/ul&gt;
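&lt;p&gt;The signature verification mentioned in the list is worth seeing once: Slack signs every request with your app's signing secret, and the handler recomputes and compares that signature before trusting the payload. Here's a minimal sketch (in the real function, load the secret from an environment variable or Secrets Manager, and also reject requests with stale timestamps to prevent replay attacks):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Minimal sketch of Slack request verification (Slack's "v0" signing scheme).
import hashlib
import hmac

def is_valid_slack_request(signing_secret: str, timestamp: str,
                           body: str, received_signature: str) -&amp;gt; bool:
    # Slack signs the string "v0:&amp;lt;timestamp&amp;gt;:&amp;lt;raw request body&amp;gt;"
    basestring = f"v0:{timestamp}:{body}"
    expected = "v0=" + hmac.new(
        signing_secret.encode(), basestring.encode(), hashlib.sha256
    ).hexdigest()
    # Constant-time comparison against the X-Slack-Signature header
    return hmac.compare_digest(expected, received_signature)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;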

&lt;p&gt;Now that we understand the architecture, let's dive into creating each component, starting with our Amazon Bedrock Agent.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating Your Amazon Bedrock Agent
&lt;/h2&gt;

&lt;p&gt;Let's start by creating our agent in Amazon Bedrock:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the Amazon Bedrock console&lt;/li&gt;
&lt;li&gt;Select "Agents" from the left navigation&lt;/li&gt;
&lt;li&gt;Click "Create agent"&lt;/li&gt;
&lt;li&gt;Provide a name for your agent (e.g., "AWS-AI-Assistant")&lt;/li&gt;
&lt;li&gt;Select a foundation model (I recommend Claude for this use case)&lt;/li&gt;
&lt;li&gt;Configure basic settings and click "Next"&lt;/li&gt;
&lt;li&gt;Add instructions for the Agent - Your agent needs clear instructions to function properly. Without instructions, you'll get the dreaded "Agent Instruction cannot be null" error when trying to prepare your agent.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here's an example of good instructions for our AWS AI Assistant:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You are an AWS AI Assistant designed to help users manage their AWS environment. 
Your primary functions include:
1. Providing information about AWS resources like EC2 instances, S3 buckets, and RDS databases
2. Performing basic operations like restarting EC2 instances
3. Monitoring AWS resources and providing status updates

You should be helpful, concise, and security-conscious. Always confirm before taking actions that modify resources. If you're unsure about a request, ask for clarification rather than guessing.

When users ask about AWS services you don't have direct access to, explain that you can only interact with services configured in your action groups.

Maintain a professional but friendly tone. You can use technical AWS terminology but explain concepts when needed for clarity.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These instructions serve as the personality and purpose guide for your agent. Without them, your agent won't know what it's supposed to do or how to behave, resulting in the preparation error.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating Action Groups
&lt;/h3&gt;

&lt;p&gt;Now, let's define what our monster... I mean, assistant can do:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the Agent Builder, select "Action groups"&lt;/li&gt;
&lt;li&gt;Click "Add action group"&lt;/li&gt;
&lt;li&gt;Name your action group (e.g., "EC2Operations")&lt;/li&gt;
&lt;li&gt;Define your API schema. Here's the example for EC2 operations:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;openapi&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3.0.1"&lt;/span&gt;
&lt;span class="na"&gt;info&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;EC2&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Management&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;API"&lt;/span&gt;
  &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1.0.0"&lt;/span&gt;
&lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;/instances&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;summary&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Operations&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;to&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;list&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;EC2&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;instances"&lt;/span&gt;
    &lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;operationId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;listInstances&lt;/span&gt;
      &lt;span class="na"&gt;summary&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;List&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;running&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;EC2&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;instances&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;in&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;a&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;region"&lt;/span&gt;
      &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Describe&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;EC2&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;instances"&lt;/span&gt;
      &lt;span class="na"&gt;parameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;in&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;query&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;region&lt;/span&gt;
          &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
          &lt;span class="na"&gt;schema&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
          &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;AWS&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;region&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;to&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;check&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;(e.g.,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;us-east-1)"&lt;/span&gt;
      &lt;span class="na"&gt;responses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;200'&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Success"&lt;/span&gt;
          &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;application/json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;schema&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;object&lt;/span&gt;
                &lt;span class="na"&gt;properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                  &lt;span class="na"&gt;count&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;integer&lt;/span&gt;
                    &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Number&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;of&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;running&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;EC2&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;instances&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;in&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;the&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;region"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This schema defines a simple API endpoint that allows our agent to list EC2 instances in a specified region. The schema is intentionally kept simple for this example, but you can expand it to include more operations like starting, stopping, or describing specific instances.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lambda Functions Implementation
&lt;/h2&gt;

&lt;p&gt;Now, let's create the Lambda functions that will power our integration. We'll need two Lambda functions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;EC2 Operations Lambda&lt;/strong&gt;: Handles AWS operations requested by the Bedrock Agent&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Slack Handler Lambda&lt;/strong&gt;: Processes incoming Slack messages and communicates with the Bedrock Agent&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Creating the EC2 Operations Lambda
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the AWS Lambda console&lt;/li&gt;
&lt;li&gt;Click "Create function"&lt;/li&gt;
&lt;li&gt;Select "Author from scratch"&lt;/li&gt;
&lt;li&gt;Name your function (e.g., "EC2OperationsLambda")&lt;/li&gt;
&lt;li&gt;Select Python 3.9 as the runtime&lt;/li&gt;
&lt;li&gt;Create a new execution role with basic Lambda permissions&lt;/li&gt;
&lt;li&gt;Click "Create function"&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now, let's add the code for our Lambda function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;traceback&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;lambda_handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Received event: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Extract required fields from the event
&lt;/span&gt;    &lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;agent&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;actionGroup&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;actionGroup&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;apiPath&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;apiPath&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;httpMethod&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;httpMethod&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;parameters&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;parameters&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[])&lt;/span&gt;
    &lt;span class="n"&gt;requestBody&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;requestBody&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{})&lt;/span&gt;

    &lt;span class="c1"&gt;# Convert parameters list to dictionary if needed
&lt;/span&gt;    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;isinstance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;parameters&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Parameters is a list, converting to dictionary&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;parameters_dict&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;param&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;parameters&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;isinstance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;param&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;param&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;value&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;param&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="n"&gt;parameters_dict&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;param&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;param&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;value&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="n"&gt;parameters&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;parameters_dict&lt;/span&gt;

    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;API Path: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;apiPath&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Parameters (processed): &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;parameters&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# Handle different API paths based on the OpenAPI schema
&lt;/span&gt;        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;apiPath&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/instances&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Calling describe_instances function&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;describe_instances&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;parameters&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;describe_instances response: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

            &lt;span class="c1"&gt;# Format response according to Bedrock Agents expected structure
&lt;/span&gt;            &lt;span class="c1"&gt;# Convert result to JSON string to match the example format
&lt;/span&gt;            &lt;span class="n"&gt;result_json&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="n"&gt;responseBody&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;application/json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;body&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;result_json&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Unsupported API path: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;apiPath&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="n"&gt;error_message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Unsupported API path: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;apiPath&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;

            &lt;span class="c1"&gt;# Format error response according to Bedrock Agents expected structure
&lt;/span&gt;            &lt;span class="n"&gt;error_json&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
                &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;error&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;error_message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;_source&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;lambda_error&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;_warning&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;DO NOT FABRICATE OR MODIFY THIS DATA&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
            &lt;span class="p"&gt;})&lt;/span&gt;

            &lt;span class="n"&gt;responseBody&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;application/json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;body&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;error_json&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;

            &lt;span class="c1"&gt;# Create the action response in the expected format
&lt;/span&gt;            &lt;span class="n"&gt;action_response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;actionGroup&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;actionGroup&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;apiPath&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;apiPath&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;httpMethod&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;httpMethod&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;httpStatusCode&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;responseBody&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;responseBody&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;

            &lt;span class="c1"&gt;# Return the final response with messageVersion
&lt;/span&gt;            &lt;span class="n"&gt;api_response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;response&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;action_response&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;messageVersion&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;messageVersion&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;

            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Response: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;api_response&lt;/span&gt;

        &lt;span class="c1"&gt;# Create the action response in the expected format
&lt;/span&gt;        &lt;span class="n"&gt;action_response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;actionGroup&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;actionGroup&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;apiPath&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;apiPath&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;httpMethod&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;httpMethod&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;httpStatusCode&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;responseBody&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;responseBody&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="c1"&gt;# Return the final response with messageVersion
&lt;/span&gt;        &lt;span class="n"&gt;api_response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;response&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;action_response&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;messageVersion&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;messageVersion&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Response: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;api_response&lt;/span&gt;

    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Error: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;traceback&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;format_exc&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;

        &lt;span class="c1"&gt;# Format error response according to Bedrock Agents expected structure
&lt;/span&gt;        &lt;span class="n"&gt;error_json&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;error&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;_source&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;lambda_error&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;_warning&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;DO NOT FABRICATE OR MODIFY THIS DATA&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
        &lt;span class="p"&gt;})&lt;/span&gt;

        &lt;span class="n"&gt;responseBody&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;application/json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;body&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;error_json&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="c1"&gt;# Create the action response in the expected format
&lt;/span&gt;        &lt;span class="n"&gt;action_response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;actionGroup&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;actionGroup&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;apiPath&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;apiPath&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;httpMethod&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;httpMethod&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;httpStatusCode&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;responseBody&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;responseBody&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="c1"&gt;# Return the final response with messageVersion
&lt;/span&gt;        &lt;span class="n"&gt;api_response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;response&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;action_response&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;messageVersion&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;messageVersion&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Response: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;api_response&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;describe_instances&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;parameters&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# Extract parameters - with additional error handling
&lt;/span&gt;    &lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;parameters&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;region&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;us-east-1&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;isinstance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;parameters&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;us-east-1&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
    &lt;span class="n"&gt;instance_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;parameters&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;instanceId&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;isinstance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;parameters&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;

    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Describing instances in region &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;, instance_id filter: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;instance_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Initialize EC2 client
&lt;/span&gt;    &lt;span class="n"&gt;ec2&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ec2&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;region_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Prepare filters
&lt;/span&gt;    &lt;span class="n"&gt;filters&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;instance_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;filters&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Name&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;instance-id&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Values&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;instance_id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="p"&gt;})&lt;/span&gt;

    &lt;span class="c1"&gt;# Describe instances with pagination to avoid timeouts
&lt;/span&gt;    &lt;span class="n"&gt;instances&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# Use paginator for better handling of large results
&lt;/span&gt;        &lt;span class="n"&gt;paginator&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ec2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_paginator&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;describe_instances&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Set a small page size to process results faster
&lt;/span&gt;        &lt;span class="n"&gt;page_iterator&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;paginator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;paginate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;Filters&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;filters&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;filters&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;[],&lt;/span&gt;
            &lt;span class="n"&gt;PaginationConfig&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;MaxItems&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;PageSize&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Process each page of results
&lt;/span&gt;        &lt;span class="n"&gt;instance_count&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;page&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;page_iterator&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Processing page of EC2 results with &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;page&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Reservations&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; reservations&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

            &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;reservation&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;page&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Reservations&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
                &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;instance&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;reservation&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Instances&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
                    &lt;span class="n"&gt;instance_count&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
                    &lt;span class="n"&gt;instance_info&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;InstanceId&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;instance&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;InstanceId&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
                        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;InstanceType&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;instance&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;InstanceType&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
                        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;State&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;instance&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;State&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Name&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
                        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;LaunchTime&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;instance&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;LaunchTime&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;isoformat&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
                        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;PublicIpAddress&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;instance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;PublicIpAddress&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;N/A&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
                        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;PrivateIpAddress&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;instance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;PrivateIpAddress&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;N/A&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                    &lt;span class="p"&gt;}&lt;/span&gt;
                    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Found instance &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;instance_count&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;instance_info&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;InstanceId&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                    &lt;span class="n"&gt;instances&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;instance_info&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

                    &lt;span class="c1"&gt;# Limit the number of instances to return to avoid response size issues
&lt;/span&gt;                    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;instance_count&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Reached maximum instance count (50), stopping pagination&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                        &lt;span class="k"&gt;break&lt;/span&gt;

                &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;instance_count&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                    &lt;span class="k"&gt;break&lt;/span&gt;

            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;instance_count&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="k"&gt;break&lt;/span&gt;

    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Error calling EC2 API: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;raise&lt;/span&gt;

    &lt;span class="c1"&gt;# Format the response according to the OpenAPI schema
&lt;/span&gt;    &lt;span class="c1"&gt;# Use current timestamp instead of credentials expiry time
&lt;/span&gt;    &lt;span class="n"&gt;current_time&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;isoformat&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Instances&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;instances&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Count&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;instances&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Region&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;DataTimestamp&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;current_time&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;_source&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;lambda_actual_data&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;_warning&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;DO NOT FABRICATE OR MODIFY THIS DATA&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Returning &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;instances&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; instances&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Lambda function handles requests from the Bedrock Agent to describe EC2 instances. Let's break down what it does:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The &lt;code&gt;lambda_handler&lt;/code&gt; function processes incoming events from the Bedrock Agent, extracting the API path and parameters.&lt;/li&gt;
&lt;li&gt;It handles the &lt;code&gt;/instances&lt;/code&gt; API path by calling the &lt;code&gt;describe_instances&lt;/code&gt; function.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;describe_instances&lt;/code&gt; function uses the AWS SDK to query EC2 instances in the specified region.&lt;/li&gt;
&lt;li&gt;It formats the response according to the expected structure for Bedrock Agents.&lt;/li&gt;
&lt;li&gt;Errors are caught at every stage and returned to the agent as structured 400/500 responses rather than unhandled exceptions. A sample test event for exercising the handler follows this list.&lt;/li&gt;
&lt;/ol&gt;
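
&lt;p&gt;Before wiring the function up to the agent, it helps to test it in isolation. The event below is a minimal sketch based on the fields the handler actually reads (&lt;code&gt;agent&lt;/code&gt;, &lt;code&gt;actionGroup&lt;/code&gt;, &lt;code&gt;apiPath&lt;/code&gt;, &lt;code&gt;httpMethod&lt;/code&gt;, &lt;code&gt;parameters&lt;/code&gt;, &lt;code&gt;messageVersion&lt;/code&gt;); the IDs are placeholders, and a real Bedrock Agent event carries additional fields:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Minimal test event covering the fields lambda_handler reads.
# All IDs are placeholders; a real Bedrock Agent event has more fields.
test_event = {
    "agent": {"name": "ec2-ops-agent", "id": "AGENT_ID_PLACEHOLDER"},
    "actionGroup": "ec2-operations",
    "apiPath": "/instances",
    "httpMethod": "GET",
    "messageVersion": "1.0",
    "parameters": [
        {"name": "region", "type": "string", "value": "eu-central-1"}
    ],
}

# Paste this as a test event in the Lambda console, or call directly:
# lambda_handler(test_event, None)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;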

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;: Make sure to extend the Lambda timeout beyond the default 3 seconds. For the EC2 Operations Lambda, I recommend setting it to at least 30 seconds, as querying EC2 instances across regions can take time, especially with pagination.&lt;/p&gt;
&lt;/blockquote&gt;
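
&lt;p&gt;If you prefer to script that change instead of clicking through the console, a short boto3 call does it (function name as used in the creation steps above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import boto3

# Raise the timeout from the 3-second default to 30 seconds.
boto3.client("lambda").update_function_configuration(
    FunctionName="EC2OperationsLambda",
    Timeout=30,
)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;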

&lt;h3&gt;
  
  
  Creating the Slack Handler Lambda
&lt;/h3&gt;

&lt;p&gt;Now, let's create the Lambda function that will handle Slack interactions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the AWS Lambda console&lt;/li&gt;
&lt;li&gt;Click "Create function"&lt;/li&gt;
&lt;li&gt;Select "Author from scratch"&lt;/li&gt;
&lt;li&gt;Name your function (e.g., "SlackHandlerLambda")&lt;/li&gt;
&lt;li&gt;Select Python 3.9 as the runtime&lt;/li&gt;
&lt;li&gt;Create a new execution role with basic Lambda permissions&lt;/li&gt;
&lt;li&gt;Click "Create function"&lt;/li&gt;
&lt;/ol&gt;
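
&lt;p&gt;The handler code below reads its configuration from environment variables (&lt;code&gt;BEDROCK_AGENT_ID&lt;/code&gt;, &lt;code&gt;BEDROCK_AGENT_ALIAS&lt;/code&gt;, &lt;code&gt;SLACK_BOT_TOKEN&lt;/code&gt;, &lt;code&gt;SLACK_SIGNING_SECRET&lt;/code&gt;, &lt;code&gt;BEDROCK_REGION&lt;/code&gt;). Set them on the function before testing; here is a minimal sketch with placeholder values (you can set the same values in the Lambda console instead):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import boto3

# Placeholder values - substitute your agent ID, alias ID, and Slack secrets.
boto3.client("lambda").update_function_configuration(
    FunctionName="SlackHandlerLambda",
    Environment={
        "Variables": {
            "BEDROCK_AGENT_ID": "YOUR_AGENT_ID",
            "BEDROCK_AGENT_ALIAS": "YOUR_AGENT_ALIAS_ID",
            "SLACK_BOT_TOKEN": "xoxb-YOUR-BOT-TOKEN",
            "SLACK_SIGNING_SECRET": "YOUR_SIGNING_SECRET",
            "BEDROCK_REGION": "eu-central-1",
        }
    },
)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;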

&lt;p&gt;Now, let's add the code for our Slack Handler Lambda:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;hmac&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hashlib&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;base64&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;timedelta&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;botocore.exceptions&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ClientError&lt;/span&gt;

&lt;span class="c1"&gt;# Read important constants from environment or configuration
&lt;/span&gt;&lt;span class="n"&gt;BEDROCK_AGENT_ID&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;BEDROCK_AGENT_ID&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;       &lt;span class="c1"&gt;# e.g., "agent-1234567890abcdef"
&lt;/span&gt;&lt;span class="n"&gt;BEDROCK_AGENT_ALIAS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;BEDROCK_AGENT_ALIAS&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# e.g., the alias ID (not the name)
&lt;/span&gt;&lt;span class="n"&gt;SLACK_BOT_TOKEN&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SLACK_BOT_TOKEN&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;         &lt;span class="c1"&gt;# xoxb- token
&lt;/span&gt;&lt;span class="n"&gt;SLACK_SIGNING_SECRET&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SLACK_SIGNING_SECRET&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# Slack signing secret
&lt;/span&gt;&lt;span class="n"&gt;BEDROCK_REGION&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;BEDROCK_REGION&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;eu-central-1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# Default to eu-central-1 if not specified
&lt;/span&gt;
&lt;span class="c1"&gt;# Initialize the Bedrock client - Using bedrock-agent-runtime for agent invocation with region from environment variable
&lt;/span&gt;&lt;span class="n"&gt;bedrock_client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;bedrock-agent-runtime&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;region_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;BEDROCK_REGION&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# In-memory cache for event deduplication
&lt;/span&gt;&lt;span class="n"&gt;processed_events&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;lambda_handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# If coming via API Gateway HTTP API, the actual Slack payload will be in 'body'
&lt;/span&gt;    &lt;span class="n"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;body&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# The body might be URL-encoded from Slack, but Slack can send JSON if configured.
&lt;/span&gt;        &lt;span class="c1"&gt;# Assume JSON for simplicity:
&lt;/span&gt;        &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;slack_event&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;loads&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;JSONDecodeError&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="c1"&gt;# If the body was URL encoded (application/x-www-form-urlencoded), parse it differently
&lt;/span&gt;            &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;urllib.parse&lt;/span&gt;
            &lt;span class="n"&gt;slack_event&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;loads&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;urllib&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse_qs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;payload&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;payload&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;body&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="n"&gt;urllib&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse_qs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# If running in a Flask app, you might directly get JSON
&lt;/span&gt;        &lt;span class="n"&gt;slack_event&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;

    &lt;span class="c1"&gt;# 1. Verification challenge handshake
&lt;/span&gt;    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;slack_event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;url_verification&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;challenge&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;slack_event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;challenge&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;""&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Respond with the challenge token to verify endpoint
&lt;/span&gt;        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;statusCode&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;headers&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Content-Type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text/plain&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;body&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;challenge&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c1"&gt;# 2. Verify Slack request signature (to ensure request is from Slack)
&lt;/span&gt;    &lt;span class="n"&gt;headers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;headers&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{})&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;SLACK_SIGNING_SECRET&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;timestamp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;x-slack-request-timestamp&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;X-Slack-Request-Timestamp&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;signature&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;x-slack-signature&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;X-Slack-Signature&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="nf"&gt;verify_slack_signature&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;SLACK_SIGNING_SECRET&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;signature&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;statusCode&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;401&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;body&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Invalid signature&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;  &lt;span class="c1"&gt;# reject if signature check fails
&lt;/span&gt;
    &lt;span class="c1"&gt;# 3. Process event callbacks
&lt;/span&gt;    &lt;span class="c1"&gt;# Slack might batch events; we handle one event at a time for simplicity.
&lt;/span&gt;    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;event&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;slack_event&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;event_info&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;slack_event&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;event&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="n"&gt;user_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;event_info&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;event_info&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;""&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;channel_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;event_info&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;channel&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Ignore events that are from the bot itself to prevent loops
&lt;/span&gt;        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;event_info&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;bot_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="n"&gt;user_id&lt;/span&gt; &lt;span class="ow"&gt;is&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;statusCode&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;  &lt;span class="c1"&gt;# nothing to do
&lt;/span&gt;
        &lt;span class="c1"&gt;# Event deduplication - Check if we've seen this event recently
&lt;/span&gt;        &lt;span class="n"&gt;event_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;slack_event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;event_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="n"&gt;slack_event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;event_time&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;event_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="c1"&gt;# Create a synthetic ID if none exists
&lt;/span&gt;            &lt;span class="n"&gt;event_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;channel_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;time&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

        &lt;span class="c1"&gt;# Check if we've seen this event recently (within last 5 minutes)
&lt;/span&gt;        &lt;span class="n"&gt;now&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;event_id&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;processed_events&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;last_processed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;processed_events&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;event_id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;now&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;last_processed&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nf"&gt;timedelta&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;minutes&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
                &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;statusCode&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;  &lt;span class="c1"&gt;# Acknowledge but don't process
&lt;/span&gt;
        &lt;span class="c1"&gt;# Mark this event as processed
&lt;/span&gt;        &lt;span class="n"&gt;processed_events&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;event_id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;now&lt;/span&gt;

        &lt;span class="c1"&gt;# Clean up old entries to prevent memory growth
&lt;/span&gt;        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;old_id&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;list&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;processed_events&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;keys&lt;/span&gt;&lt;span class="p"&gt;()):&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;now&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;processed_events&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;old_id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;timedelta&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;minutes&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
                &lt;span class="k"&gt;del&lt;/span&gt; &lt;span class="n"&gt;processed_events&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;old_id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

        &lt;span class="c1"&gt;# (Optional auth) Check if user_id is allowed to use the bot
&lt;/span&gt;        &lt;span class="n"&gt;allowed_users&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ALLOWED_USER_IDS&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;allowed_users&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;allowed_list&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;u&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;strip&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;u&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;allowed_users&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;,&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;user_id&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;allowed_list&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="c1"&gt;# Notify user they are not authorized
&lt;/span&gt;                &lt;span class="nf"&gt;send_slack_message&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;channel_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;:x: You are not authorized to use this bot.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;statusCode&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="c1"&gt;# Validate Bedrock agent configuration
&lt;/span&gt;        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;BEDROCK_AGENT_ID&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;BEDROCK_AGENT_ALIAS&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;error_msg&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Bedrock agent configuration is incomplete. Please check BEDROCK_AGENT_ID and BEDROCK_AGENT_ALIAS environment variables.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="nf"&gt;send_slack_message&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;channel_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;⚠️ Configuration Error: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;error_msg&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;statusCode&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="c1"&gt;# 4. Invoke the Bedrock agent with the user's text
&lt;/span&gt;        &lt;span class="n"&gt;session_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;channel_id&lt;/span&gt;  &lt;span class="c1"&gt;# use channel (or user) as session identifier for continuity
&lt;/span&gt;        &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="c1"&gt;# Using the correct method with the bedrock-agent-runtime client
&lt;/span&gt;            &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bedrock_client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                &lt;span class="n"&gt;agentId&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;BEDROCK_AGENT_ID&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;agentAliasId&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;BEDROCK_AGENT_ALIAS&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;sessionId&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;inputText&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;
            &lt;span class="p"&gt;)&lt;/span&gt;

            &lt;span class="c1"&gt;# The response is streamed in chunks. Combine them to get full answer.
&lt;/span&gt;            &lt;span class="n"&gt;answer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;""&lt;/span&gt;
            &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;chunk&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;completion&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[]):&lt;/span&gt;
                &lt;span class="n"&gt;chunk_text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;chunk&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;chunk&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;bytes&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;utf-8&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;errors&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ignore&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                &lt;span class="n"&gt;answer&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="n"&gt;chunk_text&lt;/span&gt;

            &lt;span class="c1"&gt;# 5. Send the agent's answer back to Slack
&lt;/span&gt;            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="nf"&gt;send_slack_message&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;channel_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="c1"&gt;# If no answer (empty completion), send a default reply
&lt;/span&gt;                &lt;span class="nf"&gt;send_slack_message&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;channel_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;🤖 (No response)&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="n"&gt;ClientError&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;error_code&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Error&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{}).&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Code&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;''&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="n"&gt;error_message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Error&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{}).&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Message&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;error_code&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ResourceNotFoundException&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="nf"&gt;send_slack_message&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;channel_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Sorry, I couldn&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;t process that request. Error: Resource not found. Please check your Bedrock agent configuration.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="nf"&gt;send_slack_message&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;channel_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Sorry, I couldn&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;t process that request. Error: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;error_message&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="c1"&gt;# Log error and reply with a failure message
&lt;/span&gt;            &lt;span class="nf"&gt;send_slack_message&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;channel_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Sorry, I couldn&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;t process that request. Error: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Return 200 to acknowledge the event was received
&lt;/span&gt;    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;statusCode&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Utility to verify Slack signatures (optional but recommended)
&lt;/span&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;verify_slack_signature&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;signing_secret&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;request_body&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;signature&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;signature&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;
    &lt;span class="c1"&gt;# Slack sends 'v0=' prefix in signature
&lt;/span&gt;    &lt;span class="n"&gt;req&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;v0:&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;request_body&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 
    &lt;span class="n"&gt;secret&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;signing_secret&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;hash_hex&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;v0=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;hmac&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;secret&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hashlib&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sha256&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;hexdigest&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;hmac&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;compare_digest&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;hash_hex&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;signature&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Function to send messages to Slack without using requests library
&lt;/span&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;send_slack_message&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;channel_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
    Send a message to a Slack channel using Python&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s built-in urllib
    instead of the external requests library (no extra dependencies needed).
    &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# Prepare the request payload
&lt;/span&gt;        &lt;span class="n"&gt;payload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;channel&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;channel_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;text&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="c1"&gt;# Convert payload to JSON string
&lt;/span&gt;        &lt;span class="n"&gt;payload_json&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Create a presigned URL for the Slack API
&lt;/span&gt;        &lt;span class="n"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://slack.com/api/chat.postMessage&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

        &lt;span class="c1"&gt;# Use AWS SDK to make the HTTP request
&lt;/span&gt;        &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;urllib.request&lt;/span&gt;

        &lt;span class="c1"&gt;# Create a request object
&lt;/span&gt;        &lt;span class="n"&gt;req&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;urllib&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Add headers
&lt;/span&gt;        &lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_header&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Content-Type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;application/json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_header&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Authorization&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Bearer &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;SLACK_BOT_TOKEN&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Send the request
&lt;/span&gt;        &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;urllib&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;urlopen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;payload_json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;utf-8&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;response_body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;utf-8&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

            &lt;span class="c1"&gt;# Parse the response
&lt;/span&gt;            &lt;span class="n"&gt;response_json&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;loads&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response_body&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;response_json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ok&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
                &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Error sending message to Slack: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;response_json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;error&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Unknown error&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Error sending message to Slack: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Lambda function handles the communication between Slack and the Bedrock Agent. Let's break down what it does:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It processes incoming webhook events from Slack.&lt;/li&gt;
&lt;li&gt;It verifies the Slack signature to ensure the request is legitimate.&lt;/li&gt;
&lt;li&gt;It extracts the user's message and invokes the Bedrock Agent with that message.&lt;/li&gt;
&lt;li&gt;It receives the response from the Bedrock Agent and sends it back to the user in Slack.&lt;/li&gt;
&lt;li&gt;It includes error handling and event deduplication to prevent processing the same message multiple times. (Note that the deduplication cache lives in the Lambda's memory, so it only persists while a container stays warm; for stronger guarantees you could back it with a store such as DynamoDB.)&lt;/li&gt;
&lt;/ol&gt;
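
&lt;p&gt;For orientation, here is an abbreviated example of the kind of event-callback payload the handler parses. The shape follows Slack's Events API; all field values are illustrative placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "type": "event_callback",
  "event_id": "Ev0123456789",
  "event": {
    "type": "app_mention",
    "user": "U0123456789",
    "text": "Show me the ec2 instances in eu-central-1",
    "channel": "C0123456789"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;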

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;: The SlackHandlerLambda requires several environment variables to function properly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;BEDROCK_AGENT_ID&lt;/code&gt;: The ID of your Bedrock Agent (e.g., "agent-1234567890abcdef")&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;BEDROCK_AGENT_ALIAS&lt;/code&gt;: The alias ID of your Bedrock Agent (not the name)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;SLACK_BOT_TOKEN&lt;/code&gt;: Your Slack Bot token (starts with "xoxb-")&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;SLACK_SIGNING_SECRET&lt;/code&gt;: Your Slack signing secret for request verification&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;BEDROCK_REGION&lt;/code&gt;: The AWS region where your Bedrock Agent is deployed (e.g., "eu-central-1")&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ALLOWED_USER_IDS&lt;/code&gt; (optional): Comma-separated list of Slack user IDs allowed to use the bot&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Also, make sure to extend the Lambda timeout beyond the default 3 seconds. For the SlackHandlerLambda, I recommend setting it to at least 30 seconds, as the Bedrock Agent invocation can take time, especially for complex queries.&lt;/p&gt;
&lt;/blockquote&gt;
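
&lt;p&gt;If you prefer to script that timeout change rather than click through the console, a minimal boto3 sketch could look like this (the function name is a placeholder for whatever you called your Lambda):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import boto3

lambda_client = boto3.client("lambda")

# Raise the timeout of the Slack handler beyond the 3-second default.
# "SlackHandlerLambda" is a placeholder - use your actual function name.
lambda_client.update_function_configuration(
    FunctionName="SlackHandlerLambda",
    Timeout=30,  # seconds; at least 30, as recommended above
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;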

&lt;h3&gt;
  
  
  Connecting the Lambda Function to the Action Group
&lt;/h3&gt;

&lt;p&gt;This is a crucial step that connects your Lambda function to your action group:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the Amazon Bedrock console, navigate back to your agent's action group&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In the "Action group invocation" section, you have three options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Quick create a new Lambda function&lt;/strong&gt; - Amazon Bedrock creates a basic Lambda function for you&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Select an existing Lambda function&lt;/strong&gt; - Choose the Lambda function you created earlier&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Return control&lt;/strong&gt; - The agent returns control to your application without invoking a Lambda&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select "Select an existing Lambda function"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose your Lambda function from the dropdown (e.g., "EC2OperationsLambda")&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the appropriate function version (usually $LATEST)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click "Save" to connect your Lambda function to the action group&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Additionally, you need to grant Amazon Bedrock permission to invoke your Lambda function:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the Lambda console and select your function&lt;/li&gt;
&lt;li&gt;Go to the "Permissions" tab&lt;/li&gt;
&lt;li&gt;Under "Resource-based policy statements", click "Add permissions"&lt;/li&gt;
&lt;li&gt;Choose "AWS service" as the principal&lt;/li&gt;
&lt;li&gt;For "Service", select "bedrock.amazonaws.com"&lt;/li&gt;
&lt;li&gt;For "Action", select "lambda:InvokeFunction"&lt;/li&gt;
&lt;li&gt;Click "Save" to add the permission&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Sidenote&lt;/strong&gt;: If you create the Lambda through the Bedrock console's action group view, the resource-based policy statement is added automatically; otherwise, you need to add it to your Lambda manually.&lt;/p&gt;
&lt;/blockquote&gt;
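
&lt;p&gt;If you created the Lambda yourself, that resource-based policy statement can also be added from code. A minimal boto3 sketch; the function name, statement ID, and agent ARN are placeholders, and the &lt;code&gt;SourceArn&lt;/code&gt; scoping is optional:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import boto3

lambda_client = boto3.client("lambda")

# Allow Amazon Bedrock to invoke the action group Lambda.
# All identifiers below are placeholders - substitute your own.
lambda_client.add_permission(
    FunctionName="EC2OperationsLambda",
    StatementId="AllowBedrockInvoke",
    Action="lambda:InvokeFunction",
    Principal="bedrock.amazonaws.com",
    # Optional: scope the permission to one specific agent
    SourceArn="arn:aws:bedrock:eu-central-1:123456789012:agent/YOUR_AGENT_ID",
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;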

&lt;h3&gt;
  
  
  Setting Up IAM Permissions
&lt;/h3&gt;

&lt;p&gt;Remember Dr. Frankenstein's mistake? He gave his creation too much freedom. Learn from his error and implement proper IAM permissions! Your monster should only have exactly the access it needs - no more wandering into the village terrifying the EC2 instances.&lt;/p&gt;

&lt;p&gt;Let's create a proper IAM policy for our EC2 Operations Lambda function:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the IAM console&lt;/li&gt;
&lt;li&gt;Select "Policies" and click "Create policy"&lt;/li&gt;
&lt;li&gt;Use the JSON editor to create a policy:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"ec2:DescribeInstances"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"*"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"logs:CreateLogGroup"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"logs:CreateLogStream"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"logs:PutLogEvents"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:logs:*:*:*"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Name your policy (e.g., "BedrockAgentEC2Operations")&lt;/li&gt;
&lt;li&gt;Create the policy&lt;/li&gt;
&lt;li&gt;Attach this policy to your Lambda function's execution role&lt;/li&gt;
&lt;/ol&gt;
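
&lt;p&gt;If you manage IAM as code, the same policy can be created and attached with boto3. A sketch, assuming the Lambda's execution role already exists (the role and policy names are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import json

import boto3

iam = boto3.client("iam")

# Same policy document as the JSON shown above.
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:DescribeInstances"],
            "Resource": "*",
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
            ],
            "Resource": "arn:aws:logs:*:*:*",
        },
    ],
}

policy = iam.create_policy(
    PolicyName="BedrockAgentEC2Operations",
    PolicyDocument=json.dumps(policy_doc),
)

# "EC2OperationsLambdaRole" is a placeholder for your execution role name.
iam.attach_role_policy(
    RoleName="EC2OperationsLambdaRole",
    PolicyArn=policy["Policy"]["Arn"],
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;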

&lt;p&gt;For the SlackHandlerLambda, you'll need a different policy that allows it to invoke the Bedrock Agent:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"bedrock:InvokeAgent"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:bedrock:*:*:agent/*"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"logs:CreateLogGroup"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"logs:CreateLogStream"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"logs:PutLogEvents"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:logs:*:*:*"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Attach this policy to your SlackHandlerLambda's execution role.&lt;/p&gt;

&lt;h2&gt;
  
  
  API Gateway Setup
&lt;/h2&gt;

&lt;p&gt;Now that we have our Lambda functions ready, we need to create an API Gateway to receive webhook events from Slack. This will serve as the secure entry point for our integration.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating the API Gateway
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the Amazon API Gateway console&lt;/li&gt;
&lt;li&gt;Click "Create API"&lt;/li&gt;
&lt;li&gt;Select "HTTP API" (not REST API) for a simpler, more cost-effective solution&lt;/li&gt;
&lt;li&gt;Click "Build"&lt;/li&gt;
&lt;li&gt;In the "Integrations" section, select "Lambda"&lt;/li&gt;
&lt;li&gt;Choose your SlackHandlerLambda function&lt;/li&gt;
&lt;li&gt;For "API name", enter a descriptive name (e.g., "slack-bedrock-integration")&lt;/li&gt;
&lt;li&gt;Click "Next"&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Configuring Routes
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;In the "Configure routes" section, click "Add route"&lt;/li&gt;
&lt;li&gt;Set the method to "POST"&lt;/li&gt;
&lt;li&gt;Set the path to "/" (or any path you prefer)&lt;/li&gt;
&lt;li&gt;Select your SlackHandlerLambda as the integration target&lt;/li&gt;
&lt;li&gt;Click "Next"&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Configuring Stages
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;In the "Configure stages" section, keep the default stage name "$default"&lt;/li&gt;
&lt;li&gt;Enable automatic deployments&lt;/li&gt;
&lt;li&gt;Click "Next"&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Reviewing and Creating
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Review your API configuration&lt;/li&gt;
&lt;li&gt;Click "Create"&lt;/li&gt;
&lt;li&gt;Once created, note the API endpoint URL (e.g., &lt;code&gt;https://abcdef123.execute-api.us-east-1.amazonaws.com&lt;/code&gt;)&lt;/li&gt;
&lt;/ol&gt;
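
&lt;p&gt;The console flow above can also be scripted with boto3's HTTP API "quick create", which wires up the default stage and route in one call. A sketch with a placeholder Lambda ARN; note that when scripting it this way you may also need to grant API Gateway invoke permission on the Lambda (the same &lt;code&gt;add_permission&lt;/code&gt; pattern as before, with principal &lt;code&gt;apigateway.amazonaws.com&lt;/code&gt;):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import boto3

apigw = boto3.client("apigatewayv2")

# Quick-create an HTTP API with a single "POST /" route targeting the Lambda.
# The Lambda ARN below is a placeholder.
api = apigw.create_api(
    Name="slack-bedrock-integration",
    ProtocolType="HTTP",
    RouteKey="POST /",
    Target="arn:aws:lambda:eu-central-1:123456789012:function:SlackHandlerLambda",
)

# This is the URL you will later give to Slack as the Request URL.
print(api["ApiEndpoint"])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;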

&lt;h3&gt;
  
  
  Enhancing API Gateway Security
&lt;/h3&gt;

&lt;p&gt;For a production environment, you should implement additional security measures for your API Gateway:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Add Request Validation&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to your API in the API Gateway console&lt;/li&gt;
&lt;li&gt;Select "Routes" and then your POST route&lt;/li&gt;
&lt;li&gt;Click "Edit"&lt;/li&gt;
&lt;li&gt;Under "Request validator", select "Validate body"&lt;/li&gt;
&lt;li&gt;Click "Save"&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Add Throttling&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to your API in the API Gateway console&lt;/li&gt;
&lt;li&gt;Select "Stages" and then your stage&lt;/li&gt;
&lt;li&gt;Click "Edit"&lt;/li&gt;
&lt;li&gt;Set appropriate rate and burst limits (e.g., 10 requests per second)&lt;/li&gt;
&lt;li&gt;Click "Save"&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Add CORS Configuration&lt;/strong&gt; (if needed):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to your API in the API Gateway console&lt;/li&gt;
&lt;li&gt;Select "CORS"&lt;/li&gt;
&lt;li&gt;Configure appropriate CORS settings&lt;/li&gt;
&lt;li&gt;Click "Save"&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Add WAF Integration&lt;/strong&gt; (optional but recommended for production):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to the AWS WAF console&lt;/li&gt;
&lt;li&gt;Create a web ACL with appropriate rules&lt;/li&gt;
&lt;li&gt;Associate the web ACL with your API Gateway&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Add CloudWatch Logging&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to your API in the API Gateway console&lt;/li&gt;
&lt;li&gt;Select "Logging"&lt;/li&gt;
&lt;li&gt;Enable CloudWatch logging&lt;/li&gt;
&lt;li&gt;Set an appropriate log level (e.g., ERROR for production)&lt;/li&gt;
&lt;li&gt;Click "Save"&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These security enhancements will help protect your API Gateway from common threats and ensure it operates reliably in a production environment.&lt;/p&gt;
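
&lt;p&gt;Of these, throttling is the easiest to automate. A boto3 sketch for the &lt;code&gt;$default&lt;/code&gt; stage; the API ID is a placeholder, and the limits should match your expected traffic:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import boto3

apigw = boto3.client("apigatewayv2")

# Apply default route throttling to the stage.
# "a1b2c3d4" is a placeholder API ID.
apigw.update_stage(
    ApiId="a1b2c3d4",
    StageName="$default",
    DefaultRouteSettings={
        "ThrottlingRateLimit": 10.0,  # steady-state requests per second
        "ThrottlingBurstLimit": 20,   # short burst allowance
    },
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;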

&lt;h3&gt;
  
  
  Testing the API Gateway
&lt;/h3&gt;

&lt;p&gt;Before proceeding to the Slack integration, let's test our API Gateway to ensure it's working correctly:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use a tool like curl or Postman to send a POST request to your API endpoint&lt;/li&gt;
&lt;li&gt;Include a simple JSON payload in the request body&lt;/li&gt;
&lt;li&gt;Check the CloudWatch logs for your SlackHandlerLambda to ensure it received and processed the request&lt;/li&gt;
&lt;/ol&gt;
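
&lt;p&gt;For example, here is a quick Python check that mimics Slack's URL-verification handshake, reusing &lt;code&gt;urllib&lt;/code&gt; just like the Lambda itself (the endpoint URL is a placeholder). If the handler is wired up correctly, the response body should echo the challenge value back:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import json
import urllib.request

# Placeholder - replace with your actual API endpoint URL.
url = "https://abcdef123.execute-api.us-east-1.amazonaws.com/"

payload = json.dumps({
    "type": "url_verification",
    "challenge": "test-challenge-123",
}).encode("utf-8")

req = urllib.request.Request(url, data=payload,
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as response:
    print(response.status, response.read().decode("utf-8"))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;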

&lt;p&gt;If everything is working correctly, you should see logs indicating that your Lambda function was invoked and processed the request. Now we're ready to integrate with Slack!&lt;/p&gt;

&lt;h2&gt;
  
  
  Slack Integration
&lt;/h2&gt;

&lt;p&gt;Now that we have our Bedrock Agent and API Gateway set up, it's time to integrate with Slack. This is where our creation truly comes to life!&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating a Slack App
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Go to the &lt;a href="https://api.slack.com/apps" rel="noopener noreferrer"&gt;Slack API website&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Click "Create New App"&lt;/li&gt;
&lt;li&gt;Choose "From scratch"&lt;/li&gt;
&lt;li&gt;Enter a name for your app (e.g., "AWS AI Assistant")&lt;/li&gt;
&lt;li&gt;Select the workspace where you want to install the app&lt;/li&gt;
&lt;li&gt;Click "Create App"&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Configuring Bot Token Scopes
&lt;/h3&gt;

&lt;p&gt;For our Slack Bot to function properly, we need to configure the appropriate permissions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the left sidebar, click on "OAuth &amp;amp; Permissions"&lt;/li&gt;
&lt;li&gt;Scroll down to the "Scopes" section&lt;/li&gt;
&lt;li&gt;Under "Bot Token Scopes", click "Add an OAuth Scope"&lt;/li&gt;
&lt;li&gt;Add the following scopes:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;app_mentions:read&lt;/code&gt; - View messages that directly mention @AWS AI Assistant in conversations that the app is in&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;chat:write&lt;/code&gt; - Send messages as @AWS AI Assistant&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;im:history&lt;/code&gt; - View messages and other content in direct messages that AWS AI Assistant has been added to&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;im:read&lt;/code&gt; - View basic information about direct messages that AWS AI Assistant has been added to&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;im:write&lt;/code&gt; - Start direct messages with people&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;mpim:write&lt;/code&gt; - Start group direct messages with people&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These scopes allow your bot to read messages, send responses, and interact with users in direct messages and group conversations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Activating Messages Tab in App Home
&lt;/h3&gt;

&lt;p&gt;An important step that's easy to miss is activating the Messages Tab in your App Home:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the left sidebar, click on "App Home"&lt;/li&gt;
&lt;li&gt;Scroll down to the "Show Tabs" section&lt;/li&gt;
&lt;li&gt;Enable the "Messages Tab"&lt;/li&gt;
&lt;li&gt;Save your changes&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This step is crucial for allowing users to send direct messages to your bot. Without it, users won't be able to initiate conversations with your assistant.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting Up Event Subscriptions
&lt;/h3&gt;

&lt;p&gt;Now, we need to configure Slack to send events to our API Gateway:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the left sidebar, click on "Event Subscriptions"&lt;/li&gt;
&lt;li&gt;Toggle "Enable Events" to On&lt;/li&gt;
&lt;li&gt;In the "Request URL" field, enter your API Gateway URL&lt;/li&gt;
&lt;li&gt;Slack will send a verification challenge to your endpoint - if your Lambda function is set up correctly, it should automatically respond and verify the URL&lt;/li&gt;
&lt;li&gt;Under "Subscribe to bot events", click "Add Bot User Event"&lt;/li&gt;
&lt;li&gt;Add the following events:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;message.im&lt;/code&gt; - Subscribe to direct messages sent to your bot&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;app_mention&lt;/code&gt; - Subscribe to mentions of your bot in channels&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Click "Save Changes"&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Installing the App to Your Workspace
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;In the left sidebar, click on "Install App"&lt;/li&gt;
&lt;li&gt;Click "Install to Workspace"&lt;/li&gt;
&lt;li&gt;Review the permissions and click "Allow"&lt;/li&gt;
&lt;li&gt;Copy the "Bot User OAuth Token" (starts with &lt;code&gt;xoxb-&lt;/code&gt;) - you'll need this for your SlackHandlerLambda environment variable&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Updating the SlackHandlerLambda Environment Variables
&lt;/h3&gt;

&lt;p&gt;Now that we have our Slack app set up, we need to update the environment variables in our SlackHandlerLambda:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the AWS Lambda console&lt;/li&gt;
&lt;li&gt;Select your SlackHandlerLambda function&lt;/li&gt;
&lt;li&gt;Scroll down to the "Environment variables" section&lt;/li&gt;
&lt;li&gt;Add the following environment variables:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;BEDROCK_AGENT_ID&lt;/code&gt;: Your Bedrock Agent ID&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;BEDROCK_AGENT_ALIAS&lt;/code&gt;: Your Bedrock Agent Alias ID&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;SLACK_BOT_TOKEN&lt;/code&gt;: The Bot User OAuth Token you copied earlier&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;SLACK_SIGNING_SECRET&lt;/code&gt;: Found in the "Basic Information" section of your Slack app under "App Credentials"&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;BEDROCK_REGION&lt;/code&gt;: The AWS region where your Bedrock Agent is deployed&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Click "Save"&lt;/li&gt;
&lt;/ol&gt;
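
&lt;p&gt;As with the timeout earlier, these variables can also be set from code. A boto3 sketch with placeholder values (note that this call replaces the function's entire environment, so include every variable you need):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import boto3

lambda_client = boto3.client("lambda")

# All values below are placeholders - substitute your own.
lambda_client.update_function_configuration(
    FunctionName="SlackHandlerLambda",
    Environment={
        "Variables": {
            "BEDROCK_AGENT_ID": "YOUR_AGENT_ID",
            "BEDROCK_AGENT_ALIAS": "YOUR_AGENT_ALIAS_ID",
            "SLACK_BOT_TOKEN": "xoxb-your-token",
            "SLACK_SIGNING_SECRET": "your-signing-secret",
            "BEDROCK_REGION": "eu-central-1",
        }
    },
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;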

&lt;h3&gt;
  
  
  Testing the Integration
&lt;/h3&gt;

&lt;p&gt;Now it's time to test our integration! There are two ways to test:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Testing in the Bedrock Agent Console&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to the Amazon Bedrock console&lt;/li&gt;
&lt;li&gt;Select your agent&lt;/li&gt;
&lt;li&gt;Click "Test"&lt;/li&gt;
&lt;li&gt;Try asking about your EC2 instances: "Show me the ec2 instances in eu-central-1"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ko8y4woya0srzuc78vi.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ko8y4woya0srzuc78vi.jpg" alt="Test Agent showing EC2 instance query results in the Bedrock console" width="533" height="272"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Testing in Slack&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open your Slack workspace&lt;/li&gt;
&lt;li&gt;Find your bot in the Apps section or direct messages&lt;/li&gt;
&lt;li&gt;Send a message like "Show me the ec2 instances in eu-central-1"&lt;/li&gt;
&lt;li&gt;Your bot should respond with a list of instances&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7aqhg9mznzt67w8xnvj3.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7aqhg9mznzt67w8xnvj3.jpg" alt="Slack Integration Success showing EC2 instance details in Slack" width="800" height="684"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If everything is set up correctly, you should see your bot responding with information about your EC2 instances. Congratulations! Your AWS AI Assistant is alive and working!&lt;/p&gt;

&lt;h3&gt;
  
  
  Troubleshooting Common Issues
&lt;/h3&gt;

&lt;p&gt;If you encounter issues with your integration, here are some common problems and solutions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Bot not responding in Slack&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check that your SlackHandlerLambda environment variables are set correctly&lt;/li&gt;
&lt;li&gt;Verify that your API Gateway endpoint is correctly configured in Slack's Event Subscriptions&lt;/li&gt;
&lt;li&gt;Check the CloudWatch logs for your SlackHandlerLambda for error messages&lt;/li&gt;
&lt;li&gt;Ensure the Messages Tab is activated in App Home&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Lambda timeout errors&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Increase the timeout for both Lambda functions beyond the default 3 seconds&lt;/li&gt;
&lt;li&gt;For SlackHandlerLambda, set it to at least 60 seconds&lt;/li&gt;
&lt;li&gt;For EC2OperationsLambda, set it to at least 30 seconds&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Permission errors&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verify that your Lambda functions have the correct IAM permissions&lt;/li&gt;
&lt;li&gt;Check that Bedrock has permission to invoke your EC2OperationsLambda&lt;/li&gt;
&lt;li&gt;Ensure your Slack app has all the required scopes&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Bedrock Agent errors&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Make sure your agent is properly prepared and working in the test console&lt;/li&gt;
&lt;li&gt;Verify that your action group schema matches the implementation in your Lambda function&lt;/li&gt;
&lt;li&gt;Check that your agent has clear instructions&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By following these troubleshooting steps, you should be able to resolve most common issues with your integration.&lt;/p&gt;
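&lt;p&gt;Two of these checks are easy to script with the AWS CLI (v2). A small sketch, again assuming the function names used in this guide:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Tail the Slack handler's logs while you send a test message in Slack
aws logs tail /aws/lambda/SlackHandlerLambda --follow

# Raise the Slack handler's timeout to 60 seconds
aws lambda update-function-configuration \
  --function-name SlackHandlerLambda \
  --timeout 60
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
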

&lt;h2&gt;
  
  
  Security Considerations
&lt;/h2&gt;

&lt;p&gt;Throughout this guide, we've touched on various security aspects, but let's take a moment to dive deeper into the security considerations for this integration. When building an AI assistant that can interact with your AWS environment, security should be your top priority.&lt;/p&gt;

&lt;h3&gt;
  
  
  IAM Permissions and Least Privilege
&lt;/h3&gt;

&lt;p&gt;The principle of least privilege is crucial when setting up IAM permissions for your Lambda functions. Your EC2OperationsLambda should only have the specific permissions it needs to perform its tasks - in our example, that's just &lt;code&gt;ec2:DescribeInstances&lt;/code&gt;. If you expand the functionality to include other operations, add only the specific permissions needed for those operations.&lt;/p&gt;

&lt;p&gt;For example, if you want to allow your assistant to start and stop instances, you would add only these specific permissions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"ec2:StartInstances"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"ec2:StopInstances"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:ec2:*:*:instance/*"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Never give your Lambda functions broad permissions like &lt;code&gt;ec2:*&lt;/code&gt;, as this would allow the function to perform any EC2 operation, including deleting instances or creating new ones.&lt;/p&gt;

&lt;h3&gt;
  
  
  Slack App Security
&lt;/h3&gt;

&lt;p&gt;Your Slack app is the interface through which users interact with your AWS environment, so it's important to secure it properly:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Request Verification&lt;/strong&gt;: Always verify that requests are coming from Slack using the signing secret and signature verification process.&lt;br&gt;
&lt;strong&gt;User Authorization&lt;/strong&gt;: Consider implementing user-based authorization to control who can use your bot. The SlackHandlerLambda includes an optional &lt;code&gt;ALLOWED_USER_IDS&lt;/code&gt; environment variable for this purpose.&lt;br&gt;
&lt;strong&gt;Token Security&lt;/strong&gt;: Store your Slack tokens securely. In our example, we use environment variables, but for production, consider using AWS Secrets Manager.&lt;br&gt;
&lt;strong&gt;Scoped Permissions&lt;/strong&gt;: Only request the specific Slack scopes your bot needs. Avoid requesting broad scopes like &lt;code&gt;channels:read&lt;/code&gt; if you only need to interact with direct messages.&lt;/p&gt;
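&lt;p&gt;To illustrate the first point: Slack signs every request by computing an HMAC-SHA256 over the string &lt;code&gt;v0:{timestamp}:{raw_body}&lt;/code&gt; with your signing secret and sends the result in the &lt;code&gt;X-Slack-Signature&lt;/code&gt; header. A minimal verification sketch in Python (if you use Slack's Bolt framework, it performs this check for you):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import hashlib
import hmac
import time

def verify_slack_request(signing_secret, timestamp, body, signature):
    # Reject requests older than five minutes to limit replay attacks
    if abs(time.time() - int(timestamp)) &gt; 60 * 5:
        return False
    # Slack signs "v0:{timestamp}:{raw_body}" with the signing secret
    basestring = f"v0:{timestamp}:{body}".encode("utf-8")
    digest = hmac.new(signing_secret.encode("utf-8"), basestring, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels
    return hmac.compare_digest(f"v0={digest}", signature)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
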

&lt;h3&gt;
  
  
  API Gateway Security
&lt;/h3&gt;

&lt;p&gt;Your API Gateway is the entry point to your AWS environment, so it needs to be properly secured:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Request Validation&lt;/strong&gt;: Validate incoming requests to ensure they match the expected format.&lt;br&gt;
&lt;strong&gt;Throttling&lt;/strong&gt;: Implement throttling to prevent abuse and denial-of-service attacks.&lt;br&gt;
&lt;strong&gt;WAF Integration&lt;/strong&gt;: Consider integrating with AWS WAF to protect against common web exploits.&lt;br&gt;
&lt;strong&gt;Logging and Monitoring&lt;/strong&gt;: Enable detailed logging and set up monitoring to detect unusual activity.&lt;/p&gt;
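&lt;p&gt;As a concrete example of the throttling point, stage-level limits can be applied with a patch operation. A hedged sketch with placeholder IDs and values (check the current API Gateway CLI reference against your setup):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws apigateway update-stage \
  --rest-api-id YOUR_API_ID \
  --stage-name prod \
  --patch-operations 'op=replace,path=/*/*/throttling/rateLimit,value=50' \
                     'op=replace,path=/*/*/throttling/burstLimit,value=100'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
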

&lt;h3&gt;
  
  
  Data Handling and Privacy
&lt;/h3&gt;

&lt;p&gt;When handling user data and AWS resource information, consider these privacy aspects:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Minimization&lt;/strong&gt;: Only collect and process the data you need.&lt;br&gt;
&lt;strong&gt;Data Retention&lt;/strong&gt;: Don't store sensitive information longer than necessary.&lt;br&gt;
&lt;strong&gt;Secure Transmission&lt;/strong&gt;: Ensure all data is transmitted securely using HTTPS.&lt;br&gt;
&lt;strong&gt;Response Sanitization&lt;/strong&gt;: Be careful about what information you return to users. For example, you might want to redact or mask certain sensitive information like IP addresses.&lt;/p&gt;
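&lt;p&gt;As a trivial illustration of response sanitization, you could mask anything that looks like an IPv4 address before the text is returned to Slack. A hypothetical helper, not part of the code in this guide:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import re

IPV4_PATTERN = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def redact_ips(text):
    # Replace anything that looks like an IPv4 address with a placeholder
    return IPV4_PATTERN.sub("[REDACTED-IP]", text)

print(redact_ips("Instance i-0abc123 has public IP 203.0.113.42"))
# Instance i-0abc123 has public IP [REDACTED-IP]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
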

&lt;h2&gt;
  
  
  Additional Use Cases
&lt;/h2&gt;

&lt;p&gt;While our example focused on describing EC2 instances, this integration can be extended to support a wide range of use cases. Here are some ideas, all with security in mind:&lt;/p&gt;

&lt;h3&gt;
  
  
  Security-Focused Use Cases
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Security Posture Checking&lt;/strong&gt;: Ask your assistant to check for security issues like open security groups, public S3 buckets, or IAM users without MFA.&lt;/p&gt;

&lt;p&gt;Example: "Check for S3 buckets with public access in all regions"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Compliance Verification&lt;/strong&gt;: Verify that resources comply with your organization's security policies.&lt;/p&gt;

&lt;p&gt;Example: "Show me EC2 instances that don't have encryption enabled"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security Incident Response&lt;/strong&gt;: Use your assistant to help with security incident response.&lt;/p&gt;

&lt;p&gt;Example: "Show me all CloudTrail events for user john.doe in the last 24 hours"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security Notifications&lt;/strong&gt;: Set up your assistant to notify you about security events.&lt;/p&gt;

&lt;p&gt;Example: "Alert me when there are failed login attempts to the AWS console"&lt;/p&gt;

&lt;h3&gt;
  
  
  Operational Efficiency Use Cases
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Resource Monitoring&lt;/strong&gt;: Monitor the status and health of your AWS resources.&lt;/p&gt;

&lt;p&gt;Example: "Show me all RDS instances with high CPU usage"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cost Optimization&lt;/strong&gt;: Identify opportunities to reduce costs.&lt;/p&gt;

&lt;p&gt;Example: "Find unused EBS volumes across all regions"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automated Remediation&lt;/strong&gt;: Automate common remediation tasks.&lt;/p&gt;

&lt;p&gt;Example: "Stop all development environment instances outside of business hours"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Documentation Access&lt;/strong&gt;: Access and search through your documentation.&lt;/p&gt;

&lt;p&gt;Example: "What's our process for deploying to production?"&lt;/p&gt;

&lt;h3&gt;
  
  
  DevOps Use Cases
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Deployment Status&lt;/strong&gt;: Check the status of your deployments.&lt;/p&gt;

&lt;p&gt;Example: "What's the status of the latest deployment to production?"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Infrastructure Management&lt;/strong&gt;: Manage your infrastructure through natural language.&lt;/p&gt;

&lt;p&gt;Example: "Scale the web server auto-scaling group to 5 instances"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Log Analysis&lt;/strong&gt;: Search and analyze logs.&lt;/p&gt;

&lt;p&gt;Example: "Show me error logs from the payment service in the last hour"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Environment Management&lt;/strong&gt;: Manage different environments.&lt;/p&gt;

&lt;p&gt;Example: "Compare the configuration between staging and production"&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this guide, we've built a powerful AWS AI assistant by integrating Amazon Bedrock Agents with Slack. We've created a secure, scalable architecture that allows users to interact with AWS resources using natural language.&lt;/p&gt;

&lt;p&gt;The integration we've built demonstrates the power of agentic AI assistants in simplifying AWS operations. By leveraging Amazon Bedrock's capabilities and connecting them to Slack, we've created a tool that can significantly improve productivity and accessibility of AWS resources.&lt;/p&gt;

&lt;p&gt;Remember that security should always be at the forefront when building tools that interact with your AWS environment. By following the principle of least privilege, implementing proper authentication and authorization, and being mindful of data privacy, you can create a powerful assistant without compromising security.&lt;/p&gt;

&lt;p&gt;As you expand your assistant's capabilities beyond the EC2 instance example we've covered, continue to apply these security principles to each new feature. With the right balance of functionality and security, your AWS AI assistant can become an invaluable tool for your team.&lt;/p&gt;

&lt;p&gt;So go ahead, bring your AWS AI assistant to life! Just remember to give it the right permissions, unlike Dr. Frankenstein's creation. Your monster should be helpful, not terrifying! 😉&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ai</category>
      <category>agentaichallenge</category>
      <category>agenticai</category>
    </item>
    <item>
      <title>Securing and enhancing LLM prompts &amp; outputs: A guide using Amazon Bedrock Guardrails and open-source solutions</title>
      <dc:creator>Artur Schneider</dc:creator>
      <pubDate>Tue, 01 Oct 2024 13:53:18 +0000</pubDate>
      <link>https://dev.to/aws-builders/securing-and-enhancing-llm-prompts-outputs-a-guide-using-amazon-bedrock-and-open-source-solutions-3akc</link>
      <guid>https://dev.to/aws-builders/securing-and-enhancing-llm-prompts-outputs-a-guide-using-amazon-bedrock-and-open-source-solutions-3akc</guid>
      <description>&lt;p&gt;In my conversations with customers, I frequently encounter the same critical questions: "How do you secure your LLMs?" and "How do you ensure the quality of the answers?" These concerns highlight a common hesitation among businesses to provide solutions with LLM intelligence to their own customers due to the significant risks. These risks include not only the possibility of incorrect outputs, which can damage a business's reputation and adversely affect their customers, but also security threats such as prompt injection attacks that could lead to the loss of proprietary data or other sensitive information.&lt;/p&gt;

&lt;p&gt;Given these concerns, this blog post aims to address the twin challenges of securing LLMs and ensuring their output quality. Leveraging my AWS background, we will explore Amazon Bedrock's native capabilities as well as various open-source tools, and examine strategies to mitigate these risks and enhance the reliability of your LLM-powered applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the risks
&lt;/h2&gt;

&lt;p&gt;As technology evolves, so do the security threats associated with it. The rapid advancement of Large Language Models (LLMs) has brought significant benefits, but it has also introduced new challenges. Ensuring the security and quality of LLM outputs is crucial to prevent potential harm and misuse. Before we dive into the solutions, let’s first understand the key security concerns and quality issues that businesses need to address when offering LLM solutions to their customers.&lt;/p&gt;

&lt;p&gt;For example, in 2023, OpenAI faced scrutiny when a bug allowed users to see snippets of other users' chat histories, raising serious data privacy concerns &lt;a href="https://securityintelligence.com/articles/chatgpt-confirms-data-breach/" rel="noopener noreferrer"&gt;ChatGPT confirms data breach, raising security concerns&lt;/a&gt;. Similarly, Google's AI chat tool Bard sparked controversy due to biased and factually incorrect responses during its debut, demonstrating the importance of maintaining output quality &lt;a href="https://theweek.com/google/959623/googles-bard-ai-chatbot-makes-100bn-mistake" rel="noopener noreferrer"&gt;Google’s Bard: AI chatbot makes $100bn mistake&lt;/a&gt;. In this section, we will explore the various security concerns and quality issues that businesses must address when making LLM solutions available to their customers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Security Concerns
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data Privacy:&lt;/strong&gt; Sensitive data used for training can be inadvertently exposed. LLMs, if not properly managed, can leak information contained in the training data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prompt Injection Attacks:&lt;/strong&gt; Malicious inputs can be crafted to manipulate the model's behavior, potentially causing it to reveal sensitive information or perform unintended actions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Model Theft:&lt;/strong&gt; Unauthorized access to the model can lead to intellectual property theft, compromising the proprietary techniques and data used in its development.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Adversarial Attacks:&lt;/strong&gt; Adversarial inputs designed to deceive the model can cause it to generate harmful or incorrect outputs, impacting the reliability of the LLM.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unauthorized Access:&lt;/strong&gt; Insufficient access controls can result in unauthorized personnel gaining access to the model, data, or infrastructure, leading to potential misuse.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inference Attacks:&lt;/strong&gt; Attackers can infer sensitive attributes about the training data by analyzing the model's outputs, posing a privacy risk.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Poisoning:&lt;/strong&gt; Attackers can corrupt the training data with malicious content to compromise the integrity of the model's outputs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Model Evasion:&lt;/strong&gt; Attackers may develop techniques to bypass security measures and exploit vulnerabilities in the model.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Quality Issues
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Bias and Fairness:&lt;/strong&gt; LLMs can perpetuate and amplify biases present in training data, leading to outputs that are unfair or discriminatory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Accuracy:&lt;/strong&gt; Generated content can be factually incorrect or misleading, which is critical to address, especially in domains where incorrect information can have serious consequences.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Coherence and Relevance:&lt;/strong&gt; Outputs need to be contextually appropriate and coherent. Inconsistent or irrelevant responses can undermine the usefulness and trustworthiness of the LLM.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ethical Considerations:&lt;/strong&gt; Ensuring that the model's outputs adhere to ethical standards and do not produce harmful, offensive, or inappropriate content.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Robustness:&lt;/strong&gt; Ensuring that the model can handle a wide range of inputs without producing errors or undesirable outputs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transparency and Explainability:&lt;/strong&gt; Making sure that the model’s decision-making process is transparent and its outputs are explainable, which helps in building trust with users.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How to secure and ensure high-quality inputs and outputs for LLMs with Amazon Bedrock Guardrails
&lt;/h2&gt;

&lt;p&gt;Amazon Bedrock provides built-in capabilities to secure and validate Large Language Model (LLM) inputs and outputs through its native guardrails feature. These guardrails are essential for ensuring that LLMs behave according to predefined security, ethical, and operational standards. With Guardrails, you can set up filtering mechanisms to protect against harmful outputs, inappropriate inputs, sensitive information leakage, and more. In this section, we will dive into the details of how you can implement these features using Amazon Bedrock, walking through the setup process with visual aids.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 1: Provide Guardrail Details
&lt;/h4&gt;

&lt;p&gt;In the first step, you can define the basic information for your guardrail setup:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fypcxszoul228rbhtxwmd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fypcxszoul228rbhtxwmd.png" alt="Bedrock Guardrail_1" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Name and Description&lt;/strong&gt;: Here, you define a name (e.g., "DemoGuardrail") and a description of what the guardrail is designed to do (e.g., "Guardrail for demos").&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Messaging for Blocked Prompts&lt;/strong&gt;: If a prompt is blocked by the guardrail, you can customize the message shown to users, such as “Sorry, the model cannot answer this question.”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;KMS Key Selection (Optional)&lt;/strong&gt;: Optionally, you can select a KMS (Key Management Service) key to encrypt sensitive information within this guardrail.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This provides a foundation for guardrail implementation, allowing you to define how the model responds to blocked content.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 2: Configure Content Filters
&lt;/h4&gt;

&lt;p&gt;Content filters allow you to detect and block harmful user inputs and model responses across predefined categories like &lt;strong&gt;Hate, Insults, Sexual Content, Violence, and Misconduct&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszvrjw4nn0r5qw2ur954.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszvrjw4nn0r5qw2ur954.png" alt="Bedrock Guardrail_2" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Harmful Categories&lt;/strong&gt;: For each category, you can adjust the sensitivity to "None", "Low", "Medium", or "High". This flexibility allows you to fine-tune how strictly the model filters content.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Prompt Attacks&lt;/strong&gt;: Enabling prompt attack filters helps detect and block user inputs attempting to override the system's instructions. You can adjust the sensitivity for prompt attacks to ensure robust protection against injection attacks.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These filters are crucial for preventing harmful or unwanted content from being generated by the model or entered by users.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 3: Add Denied Topics
&lt;/h4&gt;

&lt;p&gt;You can define specific topics that should not be discussed by the model, ensuring the LLM doesn’t respond to sensitive or restricted queries.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi70g83i9o4fnhucz25v8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi70g83i9o4fnhucz25v8.png" alt="Bedrock Guardrail_3" width="800" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Denied Topics&lt;/strong&gt;: You can add up to 30 denied topics (e.g., "Investment"), and provide sample phrases that the model should block related to that topic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Customization&lt;/strong&gt;: For each topic, you define a clear explanation and add up to five phrases (e.g., “Where should I invest my money?”) to ensure the model avoids restricted discussions.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This helps prevent the LLM from engaging in specific conversations, such as offering financial or medical advice.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 4: Add Word Filters
&lt;/h4&gt;

&lt;p&gt;With word filters, you can further refine the model's behavior by blocking certain words or phrases from being used in inputs and outputs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1gan1dceqmv9f2y4785v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1gan1dceqmv9f2y4785v.png" alt="Bedrock Guardrail_4" width="800" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Profanity Filter&lt;/strong&gt;: This built-in filter allows you to block profane words globally across inputs and outputs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Custom Word List&lt;/strong&gt;: You can manually add specific words or phrases, upload them from a local file, or use an S3 object. This lets you block specific terminology that may be inappropriate for your use case.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These word filters ensure that sensitive or inappropriate terms do not appear in the LLM's responses or user inputs.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 5: Add Sensitive Information Filters
&lt;/h4&gt;

&lt;p&gt;This step focuses on safeguarding personally identifiable information (PII). You can specify which types of PII should be masked or blocked in LLM responses.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fthljxjxn0pg48lt96b6t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fthljxjxn0pg48lt96b6t.png" alt="Bedrock Guardrail_5" width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;PII Types&lt;/strong&gt;: The system lets you add specific PII types such as Phone Numbers, Email Addresses, and Credit/Debit Card Numbers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Guardrail Behavior&lt;/strong&gt;: For each PII type, you can choose to either mask or block it completely, ensuring that sensitive information is not inadvertently exposed.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This ensures robust data protection and compliance with privacy regulations like GDPR.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 6: Add Contextual Grounding Check
&lt;/h4&gt;

&lt;p&gt;One of the key features for ensuring output quality is the contextual grounding check. This validates whether model responses are grounded in factual information and are relevant to the user’s query.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp7942lel326mwup8xg97.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp7942lel326mwup8xg97.png" alt="Bedrock Guardrail_6" width="800" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Grounding Check&lt;/strong&gt;: The grounding check ensures that model responses are factually correct and based on provided reference sources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Relevance Check&lt;/strong&gt;: This feature validates whether model responses are relevant to the query and blocks outputs that fall below a defined threshold for relevance and accuracy.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These checks are particularly useful in preventing hallucinations, where the model generates incorrect or irrelevant responses.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 7: Review and Create Guardrails
&lt;/h4&gt;

&lt;p&gt;After configuring all necessary filters and checks, you can review your setup and create the guardrail. Once activated, the system allows you to immediately test the guardrail directly within the console by entering prompts to see how it blocks or modifies responses according to your settings. Additionally, you can attach the guardrail to your specific use case to ensure it functions correctly in real-world scenarios, providing a seamless way to protect and enhance your LLM application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyp4cmvoih7xnrszbczam.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyp4cmvoih7xnrszbczam.png" alt="Bedrock Guardrail_7" width="337" height="768"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazon Bedrock provides powerful native capabilities to secure and ensure high-quality outputs for your LLMs, allowing you to protect against harmful content, ensure data privacy, and prevent model hallucinations. By configuring content filters, denied topics, sensitive information filters, and grounding checks, you can fine-tune your model to meet security, ethical, and operational standards. Now, let's explore some open-source solutions that can be implemented to secure your LLM independently of Amazon Bedrock.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to secure and ensure high-quality inputs and outputs for LLMs with open-source solutions
&lt;/h2&gt;

&lt;p&gt;When it comes to securing and ensuring high-quality outputs from LLMs, there are numerous open-source solutions available. The choice of solution depends on your specific demands and use case, such as the level of security required, the sensitivity of the data being processed, and the quality standards your business must meet.&lt;/p&gt;

&lt;p&gt;In this section, we will focus on two of the most popular and widely adopted open-source tools for securing LLMs and maintaining output quality:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;LLM Guard&lt;/strong&gt; by &lt;a href="https://protectai.com/" rel="noopener noreferrer"&gt;Protect AI&lt;/a&gt; - A robust solution designed to provide security for LLMs, protecting against various input and output vulnerabilities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;DeepEval&lt;/strong&gt; by &lt;a href="https://docs.confident-ai.com/" rel="noopener noreferrer"&gt;Confident AI&lt;/a&gt; - A leading tool for evaluating and maintaining the quality of LLM outputs, ensuring accuracy, coherence, and relevance in responses.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Both solutions offer extensive features to enhance the security and quality of your LLM applications. Let's take a closer look at how they work and the benefits they offer.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to secure your LLM input and output with LLM Guard
&lt;/h3&gt;

&lt;p&gt;LLM Guard, developed by Protect AI, is an open-source solution that acts as a proxy between your application and the LLM, filtering inputs and outputs in real time. Sitting in this position, LLM Guard ensures that sensitive data, inappropriate content, or malicious prompt injections are intercepted and handled before they reach or leave the model. This makes LLM Guard highly adaptable to any environment where LLMs are integrated, allowing seamless deployment without directly modifying your LLM's architecture. The tool can easily be integrated into existing LLM workflows, serving as a middle layer to secure the entire interaction cycle between users and the model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7kpx91puf03nc3u3kkd1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7kpx91puf03nc3u3kkd1.png" alt="llm-guard_1" width="800" height="466"&gt;&lt;/a&gt;&amp;gt; &lt;em&gt;Image source:&lt;a href="https://llm-guard.com/" rel="noopener noreferrer"&gt;https://llm-guard.com/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Key features of LLM Guard include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Input and Output Filtering&lt;/strong&gt;: LLM Guard applies a wide range of filters, including anonymization, topic banning, regex matching, and more, to ensure that inputs and outputs comply with security protocols.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Prompt Injection Protection&lt;/strong&gt;: The tool is designed to detect and block prompt injection attempts that could lead to unwanted or harmful behaviors from the LLM.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;PII Detection and Redaction&lt;/strong&gt;: LLM Guard automatically identifies and redacts sensitive information, such as names, phone numbers, and email addresses, ensuring that private data is not exposed in outputs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Customizable Scanners&lt;/strong&gt;: LLM Guard allows users to define specific "scanners" that monitor for different types of sensitive or inappropriate content, giving flexibility in controlling the behavior of the LLM.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;LLM Guard can be easily integrated into your infrastructure as it functions as a proxy, ensuring that all inputs and outputs go through a comprehensive security check before and after interacting with the LLM. You can find more details on the project’s code and features on the &lt;a href="https://github.com/protectai/llm-guard" rel="noopener noreferrer"&gt;LLM Guard GitHub repository&lt;/a&gt;. To test the tool interactively, Protect AI has provided a &lt;a href="https://huggingface.co/spaces/protectai/llm-guard-playground" rel="noopener noreferrer"&gt;playground hosted on Hugging Face&lt;/a&gt;, where you can try different filters and configurations.&lt;/p&gt;
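&lt;p&gt;Before moving to the playground, here is roughly what the library-level integration looks like, adapted from the project's README; the scanner selection and the example prompt are assumptions for illustration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from llm_guard import scan_prompt
from llm_guard.input_scanners import Anonymize, PromptInjection, TokenLimit, Toxicity
from llm_guard.vault import Vault

vault = Vault()  # Stores the originals so outputs can be de-anonymized later
input_scanners = [Anonymize(vault), Toxicity(), TokenLimit(), PromptInjection()]

prompt = "Write a polite rejection email to john.doe@example.com."
sanitized_prompt, results_valid, results_score = scan_prompt(input_scanners, prompt)

if not all(results_valid.values()):
    raise ValueError(f"Prompt was flagged by a scanner: {results_score}")

# sanitized_prompt now has the email address redacted and is safe to send to the LLM
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
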

&lt;p&gt;Let’s now walk through how LLM Guard functions using the Hugging Face playground and real-world examples of processing inputs and outputs.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 1: Setting Up Input Filters
&lt;/h4&gt;

&lt;p&gt;When configuring LLM Guard, you have the flexibility to apply filters to either prompts (inputs) or outputs, ensuring that both the data being sent to the model and the data generated by the model are secure and compliant. The range of scanners allows for thorough customization, and each scanner can be individually adjusted to meet specific security and compliance needs.&lt;/p&gt;

&lt;p&gt;You can activate multiple scanners based on your requirements. Additionally, each scanner offers fine-tuned control, allowing you to modify thresholds, sensitivity, or specific filter behaviors. For example, you can set the strictness of the BanCode scanner or configure the Anonymize scanner to target specific entities such as credit card numbers or email addresses.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsjd5nv55rk1vcawzmfr3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsjd5nv55rk1vcawzmfr3.png" alt="llm-guard_prompt" width="498" height="842"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompt (Input) scanners&lt;/strong&gt;:&lt;br&gt;
Anonymize, BanCode, BanCompetitors, BanSubstrings, BanTopics, Code, Gibberish, Language, PromptInjection, Regex, Secrets, Sentiment, TokenLimit, Toxicity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgxaur0zruq6prvbuogpk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgxaur0zruq6prvbuogpk.png" alt="llm-guard_output" width="526" height="692"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Output scanners&lt;/strong&gt;:&lt;br&gt;
BanCode, BanCompetitors, BanSubstrings, BanTopics, Bias, Code, Deanonymize, JSON, Language, LanguageSame, MaliciousURLs, NoRefusal, NoRefusalLightResponse, FactualConsistency, Gibberish, Regex, Relevance, Sensitive, Sentiment, Toxicity, URLReachability.&lt;/p&gt;
&lt;h4&gt;
  
  
  Step 2: Processing and Sanitizing the Prompt
&lt;/h4&gt;

&lt;p&gt;Once the input filters are configured, LLM Guard processes the prompt. In this example, a detailed resume containing PII is passed through the system. The tool identifies and sanitizes the sensitive information, including names, addresses, phone numbers, and employment details, ensuring that the LLM only receives sanitized input.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb9ym4otjf2kz5bm14j0e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb9ym4otjf2kz5bm14j0e.png" alt="llm-guard_2" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Step 3: Viewing the Results
&lt;/h4&gt;

&lt;p&gt;After LLM Guard processes the prompt, you can view the sanitized results. In this case, all personal information such as full names, phone numbers, and email addresses have been redacted. The output is clean and complies with privacy standards. Additionally, a detailed breakdown of each filter's performance is provided, indicating whether the input passed or was flagged by each active scanner.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fijr7mv7z2mtrdibvz36z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fijr7mv7z2mtrdibvz36z.png" alt="llm-guard_3" width="800" height="235"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
Wrap-Up
&lt;/h4&gt;

&lt;p&gt;LLM Guard offers robust security and content quality control for large language models by acting as a proxy between your application and the LLM. With its extensive range of customizable scanners for both input and output, it provides granular control over what passes through the model, ensuring compliance with privacy, security, and ethical standards.&lt;/p&gt;

&lt;p&gt;In addition to its powerful filtering capabilities, LLM Guard integrates seamlessly into existing workflows. It can be deployed as a middleware layer in your AI infrastructure without needing to modify the LLM itself. This proxy-style deployment allows you to enforce security rules and quality checks transparently across various applications using LLMs. Whether you are working with APIs, cloud-native architectures, or on-premise models, LLM Guard can be integrated with minimal friction. It also supports real-time scanning and protection, ensuring your LLMs are secured and monitored continuously.&lt;/p&gt;
&lt;h3&gt;
  
  
  How to ensure high-quality input and output of your LLM with DeepEval
&lt;/h3&gt;

&lt;p&gt;DeepEval, developed by Confident AI, is an open-source framework that automates the evaluation of LLM responses based on customizable metrics, ensuring high-quality inputs and outputs. It offers various features to measure LLM performance, helping users improve and maintain model reliability across different applications.&lt;/p&gt;

&lt;p&gt;Key Features of DeepEval include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Customizable Metrics&lt;/strong&gt;: Define specific evaluation metrics, such as relevance, consistency, and correctness, based on your use case.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automated Test Runs&lt;/strong&gt;: Automate the evaluation of test cases, providing detailed insights into LLM performance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Experiments and Hyperparameters&lt;/strong&gt;: Compare test runs across various hyperparameter settings, allowing for optimal fine-tuning.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Monitoring &amp;amp; Observability&lt;/strong&gt;: Track LLM performance in real-time, identifying areas for improvement.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Human Feedback Integration&lt;/strong&gt;: Incorporate human feedback into the evaluation cycle for deeper insights into model behavior.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can get started with DeepEval by simply downloading it from the &lt;a href="https://github.com/confident-ai/deepeval" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;. Once installed, you can create custom evaluation metrics, run test cases, and analyze results directly on your system. DeepEval provides flexibility, making it suitable for anyone looking to test LLMs without additional overhead or setup requirements.&lt;/p&gt;

&lt;p&gt;While the tool can be used independently, creating an account on Confident AI’s platform offers additional benefits. By registering, you gain access to centralized storage for your test results and the ability to manage multiple experiments in one place. This feature can be particularly useful for teams working on larger projects, where tracking and overseeing various iterations and performance evaluations is critical. Additionally, the platform offers enhanced features like integrated monitoring and real-time evaluations, which can streamline the testing process.&lt;/p&gt;

&lt;p&gt;Now, let's dive into how to set up DeepEval, configure a test case, run the test and analyze the output.&lt;/p&gt;
&lt;h4&gt;
  
  
  Step 1: Install DeepEval
&lt;/h4&gt;

&lt;p&gt;First, make sure DeepEval is installed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install -U deepeval
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 2: Write a New Health Recommendation Test Case
&lt;/h4&gt;

&lt;p&gt;Let’s create a new test file that evaluates whether the LLM can provide relevant and accurate recommendations for &lt;strong&gt;maintaining heart health&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Create a new test file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touch test_health_recommendations.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, open test_health_recommendations.py and write the following test case:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import pytest
from deepeval import assert_test
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase

def test_case():
    # Define the relevancy metric
    answer_relevancy_metric = AnswerRelevancyMetric(threshold=0.5)

    # Define the test case with input, actual output, and retrieval context
    test_case = LLMTestCase(
        input="What are the best practices for maintaining heart health?",
        # Replace this with the actual output generated by your LLM
        actual_output="To maintain heart health, it's important to eat a balanced diet, exercise regularly, and avoid smoking.",
        # Relevant information from a knowledge source
        retrieval_context=[
            "Maintaining heart health includes regular physical activity, a healthy diet, quitting smoking, and managing stress.",
            "A diet rich in fruits, vegetables, whole grains, and lean proteins is recommended for heart health.",
            "Limiting alcohol consumption and regular medical checkups can help monitor heart health."
        ]
    )
    # Run the test and evaluate against the relevancy metric
    assert_test(test_case, [answer_relevancy_metric])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 3: Run the Test
&lt;/h4&gt;

&lt;p&gt;Run the test case using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;deepeval test run test_health_recommendations.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Breakdown of This Test Case:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Input&lt;/strong&gt;:&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;"What are the best practices for maintaining heart health?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is a common health-related question.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Actual Output&lt;/strong&gt;:&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;"To maintain heart health, it's important to eat a balanced diet, exercise regularly, and avoid smoking."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is the LLM’s response to the query.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Retrieval Context&lt;/strong&gt;:
This is the relevant information that the LLM can refer to when answering the question:&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;"Maintaining heart health includes regular physical activity, a healthy diet, quitting smoking, and managing stress."&lt;/li&gt;
&lt;li&gt;"A diet rich in fruits, vegetables, whole grains, and lean proteins is recommended for heart health."&lt;/li&gt;
&lt;li&gt;"Limiting alcohol consumption and regular medical checkups can help monitor heart health."&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Answer Relevancy Metric&lt;/strong&gt;:
The AnswerRelevancyMetric is used to evaluate how closely the LLM’s output matches the relevant context from a knowledge source. The threshold of 0.5 means the test passes if the relevancy score is 0.5 or higher.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Wrap-Up
&lt;/h4&gt;

&lt;p&gt;DeepEval is an essential tool for maintaining the quality and reliability of LLMs, particularly in high-stakes industries such as healthcare, where the recommendations and outputs provided by AI systems directly impact people's well-being. By leveraging DeepEval, you can rigorously test and evaluate the performance of your LLMs, ensuring that the model's outputs are accurate, relevant, and free from harmful errors or hallucinations.&lt;/p&gt;

&lt;p&gt;One of the key advantages of using DeepEval is its comprehensive set of metrics that allow you to assess various aspects of your model’s performance. Whether you’re monitoring the relevancy of answers to the provided context, the fluency of language used, or detecting potential hallucinations (incorrect or unsupported statements), DeepEval provides out-of-the-box solutions to streamline the testing process. In industries like healthcare, financial services, or legal advice, where strict compliance with factual accuracy and safe recommendations is vital, DeepEval’s suite of tools helps minimize risks. For instance, the HallucinationMetric identifies cases where the model introduces information not supported by the retrieval context, which is critical when a model is deployed in sensitive environments like hospitals or clinics. The AnswerRelevancyMetric ensures the response aligns well with the relevant information, eliminating misleading or irrelevant answers that could confuse or harm users.&lt;/p&gt;

&lt;p&gt;Using DeepEval is straightforward. After installing the package and configuring your environment, you can write test cases using familiar Python frameworks like pytest. With DeepEval, you define inputs, expected outputs, and relevant knowledge sources (retrieval contexts). These components allow you to track the model's ability to retrieve accurate information, detect hallucinations, or evaluate the overall fluency of its responses. The set of available metrics makes it uniquely suited to such critical use cases. By utilizing these metrics, organizations can confidently deploy LLM-based solutions in environments that require the highest level of reliability, ensuring that the outputs provided do not harm users or provide incorrect recommendations.&lt;/p&gt;
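&lt;p&gt;As a hedged illustration of the hallucination check mentioned above, note that &lt;code&gt;HallucinationMetric&lt;/code&gt; evaluates the output against a &lt;code&gt;context&lt;/code&gt; list (the facts the model was given) rather than the &lt;code&gt;retrieval_context&lt;/code&gt; used earlier:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from deepeval import assert_test
from deepeval.metrics import HallucinationMetric
from deepeval.test_case import LLMTestCase

def test_no_hallucination():
    metric = HallucinationMetric(threshold=0.5)
    test_case = LLMTestCase(
        input="What are the best practices for maintaining heart health?",
        actual_output="To maintain heart health, eat a balanced diet and take a daily aspirin.",
        # The facts the model was given; the aspirin claim is not supported by them
        context=[
            "Maintaining heart health includes regular physical activity, a healthy diet, quitting smoking, and managing stress."
        ],
    )
    assert_test(test_case, [metric])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
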

&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;When it comes to ensuring high-quality LLM prompts and outputs, accuracy and safety are crucial. &lt;strong&gt;Amazon Bedrock Guardrails&lt;/strong&gt; provides a robust set of native features for securing, managing, and monitoring LLM outputs within AWS, offering governance and real-time protection to prevent harmful or incorrect outputs. However, when further customization is needed, or if there is no dependency on AWS services, open-source solutions like &lt;strong&gt;LLM Guard&lt;/strong&gt; and &lt;strong&gt;DeepEval&lt;/strong&gt; offer a powerful alternative. These tools enable comprehensive testing, evaluation, and real-time monitoring, ensuring accuracy, relevance, and reliability.&lt;/p&gt;

&lt;p&gt;To put this into practice, focus on developing clear strategies for continuously monitoring performance, refining models to meet the demands of specific use cases, and implementing thorough testing processes to catch inaccuracies before deployment. Real-time oversight is key, especially when models are in production, to ensure only reliable outputs are delivered. And don't forget the importance of collaboration—bringing in domain experts alongside AI teams to validate outputs can help keep things on track, especially in areas where precision is critical.&lt;/p&gt;

</description>
      <category>genai</category>
      <category>aws</category>
      <category>security</category>
      <category>llm</category>
    </item>
    <item>
      <title>Choosing the right AWS solution: Comparing AWS European Sovereign Cloud against other offerings</title>
      <dc:creator>Artur Schneider</dc:creator>
      <pubDate>Fri, 22 Dec 2023 12:04:56 +0000</pubDate>
      <link>https://dev.to/aws-builders/choosing-the-right-aws-solution-comparing-aws-european-sovereign-cloud-against-other-offerings-37ei</link>
      <guid>https://dev.to/aws-builders/choosing-the-right-aws-solution-comparing-aws-european-sovereign-cloud-against-other-offerings-37ei</guid>
      <description>&lt;p&gt;The recent announcement of the &lt;a href="https://aws.amazon.com/de/blogs/aws/in-the-works-aws-european-sovereign-cloud/#:~:text=The%20AWS%20European%20Sovereign%20Cloud,of%20the%20European%20Union%20(EU)" rel="noopener noreferrer"&gt;AWS European Sovereign Cloud&lt;/a&gt; has been extremely well welcomed by the AWS community, indicating a growing demand for specialized cloud solutions that meet specific regional and regulatory requirements. &lt;/p&gt;

&lt;p&gt;While the full details of the newly announced AWS European Sovereign Cloud are not yet known, this blog post attempts to extract and analyze the best possible assumptions from the information currently available. My aim is to provide a preliminary but insightful comparison of these services, focusing on aspects such as purpose and target audience, compliance and regulatory standards, data sovereignty and location, among others.&lt;/p&gt;

&lt;p&gt;For organizations grappling with the complexities of cloud service adoption and integration, it is important to understand the nuances and potential of these AWS offerings. This blog aims to analyze and compare these services along several key dimensions to help businesses, IT professionals and decision makers choose the AWS service that best aligns with their specific needs. Whether it's compliance with strict data regulations, the search for high-performance computing solutions or the need for on-premises cloud infrastructure, this comparison is designed to point you in the direction of the optimal AWS solution for your individual circumstances.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhf8krnmikh6a1rbhsb3v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhf8krnmikh6a1rbhsb3v.png" alt="Purpose and Target" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this section, we will look at the distinct purposes and target audiences of the AWS European Sovereign Cloud, AWS GovCloud, AWS Dedicated Local Zones, and AWS Outposts. Understanding these elements is key to discerning which service aligns best with specific organizational needs and regulatory environments.&lt;/p&gt;

&lt;h4&gt;
  
  
  AWS European Sovereign Cloud
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Purpose:&lt;/strong&gt; Tailored for the unique demands of data sovereignty and privacy within the European Union, this service ensures that data handling and storage comply with EU regulations.&lt;br&gt;
&lt;strong&gt;Target Audience:&lt;/strong&gt; Primarily suited for EU-based companies, governmental organizations, and any entity that needs to adhere to the strict data protection laws of the EU, such as GDPR.&lt;/p&gt;

&lt;h4&gt;
  
  
  AWS GovCloud
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Purpose:&lt;/strong&gt; Specifically designed for U.S. government agencies and contractors, AWS GovCloud offers a secure and compliant cloud environment meeting U.S. government regulatory standards.&lt;br&gt;
&lt;strong&gt;Target Audience:&lt;/strong&gt; U.S. government entities, contractors, and businesses handling sensitive data, needing to comply with U.S. specific regulations like FedRAMP and ITAR.&lt;/p&gt;

&lt;h4&gt;
  
  
  AWS Dedicated Local Zones
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Purpose:&lt;/strong&gt; These zones extend AWS's infrastructure to particular geographic locations, delivering low-latency and high-performance connectivity.&lt;br&gt;
&lt;strong&gt;Target Audience:&lt;/strong&gt; Best for businesses requiring immediate, low-latency access to their applications, including media, entertainment, enterprises with critical real-time operations, and those needing localized data processing.&lt;/p&gt;

&lt;h4&gt;
  
  
  AWS Outposts
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Purpose:&lt;/strong&gt; Offering AWS services and infrastructure on-premises, AWS Outposts is designed for a consistent hybrid cloud experience.&lt;br&gt;
&lt;strong&gt;Target Audience:&lt;/strong&gt; Ideal for organizations that require a blend of on-premises and cloud environments, especially where low latency or localized data processing is a priority.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fesjjea757cu7b1ccaiid.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fesjjea757cu7b1ccaiid.png" alt="Compliance and Regulatory Standards" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Compliance with regulatory standards is a cornerstone in cloud computing, especially for organizations operating under strict data protection and privacy laws. This section provides an in-depth look into the specific compliance frameworks and certifications that are integral to the selected AWS solutions.&lt;/p&gt;

&lt;h4&gt;
  
  
  AWS European Sovereign Cloud
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Compliance Standards:&lt;/strong&gt; This service is designed to align with the European Union's data protection laws, including the GDPR. It ensures adherence to data residency and sovereignty regulations, which is central to the EU's regulatory framework. A notable aspect of this service is AWS's collaboration with the BSI in Germany. This partnership is a testament to AWS's commitment to meeting the highest standards of data security and regulatory compliance, specifically addressing concerns around data sovereignty and privacy within the EU.&lt;/p&gt;

&lt;h4&gt;
  
  
  AWS GovCloud
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Compliance Standards:&lt;/strong&gt; AWS GovCloud is built to comply with U.S. government standards such as FedRAMP for cloud security and ITAR for defense-related data. It also adheres to the Department of Defense (DoD) Cloud Computing Security Requirements Guide (SRG) for various impact levels, ensuring robust data protection and operational security.&lt;/p&gt;

&lt;h4&gt;
  
  
  AWS Dedicated Local Zones
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Compliance Standards:&lt;/strong&gt; These zones adhere to AWS's core security and compliance policies, including ISO/IEC certifications and SOC reports. They also comply with regional data protection laws, with specific capabilities varying based on the local zone's geographic location.&lt;/p&gt;

&lt;h4&gt;
  
  
  AWS Outposts
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Compliance Standards:&lt;/strong&gt; Outposts extend AWS's compliance and security controls into on-premises environments. This includes meeting standards such as HIPAA, PCI DSS, and ISO/IEC certifications, enabling businesses to fulfill local compliance and data residency requirements while using AWS services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqomg7el6e79i0j9xif8a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqomg7el6e79i0j9xif8a.png" alt="Performance and Latency" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Performance and latency are critical factors in cloud computing, impacting user experience and operational efficiency. This section examines how the AWS European Sovereign Cloud, AWS GovCloud, AWS Dedicated Local Zones, and AWS Outposts are engineered to address these aspects.&lt;/p&gt;

&lt;h4&gt;
  
  
  AWS European Sovereign Cloud
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Performance Characteristics:&lt;/strong&gt; While the primary focus is on data sovereignty, this service is also designed to deliver high-performance computing capabilities within the EU. It is optimized to minimize latency for EU-based users and applications.&lt;br&gt;
&lt;strong&gt;Latency Aspects:&lt;/strong&gt; Ideal for scenarios where data must remain within the EU without compromising on operational speed and efficiency, ensuring responsive access to cloud resources.&lt;/p&gt;

&lt;h4&gt;
  
  
  AWS GovCloud
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Performance Characteristics:&lt;/strong&gt; AWS GovCloud offers robust performance for U.S. government agencies and contractors. It is equipped to handle high-demand workloads, ensuring efficient processing of sensitive data.&lt;br&gt;
&lt;strong&gt;Latency Aspects:&lt;/strong&gt; Tailored to provide low-latency interactions for users within U.S. territories, it's suitable for time-sensitive government operations and applications.&lt;/p&gt;

&lt;h4&gt;
  
  
  AWS Dedicated Local Zones
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Performance Characteristics:&lt;/strong&gt; These zones stand out for their ability to offer ultra-low latency by placing AWS infrastructure closer to end-users. They are specifically designed for applications requiring real-time or near-real-time response rates.&lt;br&gt;
&lt;strong&gt;Latency Aspects:&lt;/strong&gt; Perfect for use cases such as interactive gaming, live video streaming, and other latency-sensitive applications, providing a seamless user experience by reducing delay in data processing.&lt;/p&gt;

&lt;h4&gt;
  
  
  AWS Outposts
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Performance Characteristics:&lt;/strong&gt; AWS Outposts bring AWS services to on-premises locations, ensuring low latency for applications that require close proximity to data sources or end-users. They are particularly beneficial in environments where internet connectivity is limited or unreliable.&lt;br&gt;
&lt;strong&gt;Latency Aspects:&lt;/strong&gt; Ideal for applications that demand on-site data processing, such as industrial automation and healthcare systems, where every millisecond counts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6r65yozhn3kz3w78cgy2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6r65yozhn3kz3w78cgy2.png" alt="Deployment and Integration" width="800" height="800"&gt;&lt;/a&gt;&lt;br&gt;
The ease of deployment and integration into existing IT infrastructures is a crucial consideration for many organizations. This section discusses the deployment models and integration capabilities of the AWS European Sovereign Cloud, AWS GovCloud, AWS Dedicated Local Zones, and AWS Outposts.&lt;/p&gt;

&lt;h4&gt;
  
  
  AWS European Sovereign Cloud
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Deployment Model:&lt;/strong&gt; Mirroring the approach of AWS GovCloud, the AWS European Sovereign Cloud will be established as a separate region. This design is intended to cater specifically to the data sovereignty and privacy requirements within the European Union.&lt;br&gt;
&lt;strong&gt;Integration Capabilities:&lt;/strong&gt; While functioning as an independent region, it will maintain compatibility with the broader AWS ecosystem, enabling users to leverage AWS services while adhering to EU-specific regulations. This setup allows for a hybrid environment where EU data residency is strictly maintained, yet the service benefits from the scalability and robustness of AWS's global infrastructure.&lt;/p&gt;

&lt;h4&gt;
  
  
  AWS GovCloud
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Deployment Model:&lt;/strong&gt; While AWS GovCloud leverages the core technology of AWS, it operates in isolated environments to ensure that data does not mix with non-government data on the public AWS cloud. This separation is crucial for meeting stringent U.S. government compliance and regulatory standards.&lt;br&gt;
&lt;strong&gt;Integration Capabilities:&lt;/strong&gt; Provides specialized services that comply with U.S. federal regulations, enabling secure and seamless integration with sensitive workloads and government-specific applications.&lt;/p&gt;

&lt;h4&gt;
  
  
  AWS Dedicated Local Zones
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Deployment Model:&lt;/strong&gt; These local zones are designed as extensions of AWS Regions, providing the capability to deploy AWS services locally. They bridge the gap between local data processing needs and cloud scalability.&lt;br&gt;
&lt;strong&gt;Integration Capabilities:&lt;/strong&gt; Enables seamless integration with existing AWS services, offering a hybrid solution that combines the benefits of local processing with the vast array of AWS cloud services.&lt;/p&gt;

&lt;h4&gt;
  
  
  AWS Outposts
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Deployment Model:&lt;/strong&gt; AWS Outposts uniquely brings AWS infrastructure to on-premises locations, offering a hybrid cloud solution that integrates with existing on-site systems.&lt;br&gt;
&lt;strong&gt;Integration Capabilities:&lt;/strong&gt; It supports a broad range of AWS services and tools, allowing for deep integration with on-premises systems and applications. This is particularly beneficial for environments where local data processing and cloud services need to work in tandem.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping up: My key insights
&lt;/h2&gt;

&lt;p&gt;After exploring the AWS European Sovereign Cloud, AWS GovCloud, AWS Dedicated Local Zones and AWS Outposts, I have gathered a raft of information that is key for anyone navigating the AWS ecosystem. Below are my insights and key takeaways from this research, to help you decide which AWS solution best fits your needs:&lt;/p&gt;

&lt;h4&gt;
  
  
  AWS European Sovereign Cloud:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Tailor-made for compliance with EU data laws and regulations.&lt;/li&gt;
&lt;li&gt;A perfect fit for EU-based entities needing adherence to GDPR and similar requirements.&lt;/li&gt;
&lt;li&gt;Prioritizes data sovereignty and protection within the EU legal framework.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  AWS GovCloud:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Specifically designed for U.S. government agencies and contractors.&lt;/li&gt;
&lt;li&gt;Aligns with U.S. federal standards, including FedRAMP and ITAR, for handling sensitive data.&lt;/li&gt;
&lt;li&gt;Ensures a secure and compliant environment for U.S. government-specific data.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  AWS Dedicated Local Zones:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Brings AWS closer to specific locations for ultra-low latency.&lt;/li&gt;
&lt;li&gt;Ideal for real-time or near-real-time applications, from gaming to live streaming.&lt;/li&gt;
&lt;li&gt;Merges the benefits of local data processing with cloud scalability.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  AWS Outposts:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Delivers AWS infrastructure and services directly to on-premises facilities.&lt;/li&gt;
&lt;li&gt;Best suited for scenarios demanding on-site data processing in a hybrid cloud setup.&lt;/li&gt;
&lt;li&gt;Consistently upholds AWS’s security and compliance standards on-premises.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>gdpr</category>
      <category>cloud</category>
      <category>security</category>
    </item>
    <item>
      <title>Amazon Bedrock vs Amazon SageMaker: Understanding the difference between AWS's AI/ML ecosystem</title>
      <dc:creator>Artur Schneider</dc:creator>
      <pubDate>Sun, 06 Aug 2023 13:59:42 +0000</pubDate>
      <link>https://dev.to/aws-builders/amazon-bedrock-vs-amazon-sagemaker-understanding-the-difference-between-awss-aiml-ecosystem-5364</link>
      <guid>https://dev.to/aws-builders/amazon-bedrock-vs-amazon-sagemaker-understanding-the-difference-between-awss-aiml-ecosystem-5364</guid>
      <description>&lt;p&gt;As businesses and organizations continue to leverage the power of artificial intelligence and machine learning, cloud service providers like Amazon Web Services (AWS) are relentlessly innovating and introducing new services to support these modern needs. In this post we'll be pulling back the curtain on two of AWS´s centerpieces AI/ML services: the new kid on the block Amazon Bedrock and the well-established Amazon SageMaker.&lt;/p&gt;

&lt;p&gt;We're going to pit these two contenders against each other, in a friendly way of course. We’ll weigh them up, focusing on the heavy hitters: general differences, data protection and security, setup efforts, customizability, and potential use cases. &lt;/p&gt;

&lt;p&gt;&lt;del&gt;It's only fair to tell you that Amazon Bedrock is not yet available to the masses. It's currently only presenting itself to a select audience of customers and partners. But don't worry, we've gathered enough information to send it into the ring with SageMaker.&lt;/del&gt; &lt;a href="https://aws.amazon.com/de/blogs/aws/amazon-bedrock-is-now-generally-available-build-and-scale-generative-ai-applications-with-foundation-models/" rel="noopener noreferrer"&gt; As of the 28th of September 2023, Amazon Bedrock has become generally available in the AWS regions of US East (N. Virginia) and US West (Oregon) &lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  General Differences
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Amazon Bedrock
&lt;/h3&gt;

&lt;p&gt;Amazon Bedrock is a fully managed service that provides access to pre-trained foundation models from Amazon/AWS and well-known AI startups. Not only does it come with an impressive set of features, it also eliminates the need to manage the underlying infrastructure and fits seamlessly into the AWS service landscape.  &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Amazon Bedrock features summarized:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Diverse Foundation Models:&lt;/strong&gt; Amazon Bedrock provides access to a range of pre-trained foundation models, a significant advantage over AI services that usually ship with text-only foundation models. Some of these models include Amazon Titan for text summarization and generation, Jurassic-2 for multilingual text generation, Claude 2 for thoughtful dialogue and content creation, Command and Embed for text generation for business applications, and Stable Diffusion for image generation.&lt;/p&gt;
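&lt;p&gt;If you want to check which foundation models are actually available to your account, a quick way is to query the Bedrock control plane with boto3. Here is a minimal sketch; it assumes a recent boto3 version, a region where Bedrock is offered, and that model access has already been granted in the console:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

# Control-plane client for Bedrock; inference goes through the
# separate "bedrock-runtime" client shown further below.
bedrock = boto3.client("bedrock", region_name="us-east-1")

# Print every foundation model visible to this account/region.
for model in bedrock.list_foundation_models()["modelSummaries"]:
    print(model["modelId"], "-", model.get("providerName", ""))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;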

&lt;p&gt;&lt;strong&gt;Agents for Amazon Bedrock:&lt;/strong&gt; Agents allow developers to build generative AI systems capable of incorporating data and integrating with internal APIs via Foundation Models (FMs), privately and securely, without the need to train the models. The FMs can complete complex tasks, in addition to generating text or chatting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Serverless Experience:&lt;/strong&gt; Bedrock is serverless, meaning there are no servers or infrastructure to manage. Users only need to interact with a simple API: provide an input and specify the model to use, and Bedrock will provide an output. It's as simple as that.&lt;/p&gt;
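&lt;p&gt;To illustrate just how small that API surface is, here is a minimal invocation sketch with boto3. Treat the model ID and the request payload as assumptions: the payload schema is model-specific (this one follows the Amazon Titan text format), so check the documentation of the model you actually use:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import json

# Runtime client for inference against Bedrock-hosted models.
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = runtime.invoke_model(
    modelId="amazon.titan-text-express-v1",  # assumed model ID
    contentType="application/json",
    accept="application/json",
    body=json.dumps({"inputText": "Summarize the benefits of serverless AI services."}),
)

# The response body is a stream; read and decode the JSON result.
print(json.loads(response["body"].read()))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;That single invoke_model call is essentially the whole integration: there are no endpoints to provision and no instances to size.&lt;/p&gt;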

&lt;p&gt;&lt;strong&gt;Easy Model Customization:&lt;/strong&gt; Bedrock allows easy model customization through fine-tuning. Customers only need to point Bedrock to a few labeled examples in an S3 bucket, and the service can fine-tune the model for a specific task. You just need to provide a couple of dozen prompt/response examples, and you're good to go.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Privacy:&lt;/strong&gt; None of the user data is used to train the underlying foundational models. All data is encrypted and does not leave the user's Virtual Private Cloud (VPC). This feature is a significant milestone for the real-world, production use of foundational models.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Service Integration:&lt;/strong&gt; Bedrock integrates seamlessly with other AWS services, including SageMaker Experiments (for testing different models) and Pipelines (for managing Foundation Models at scale).&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon SageMaker
&lt;/h3&gt;

&lt;p&gt;On the other hand, Amazon SageMaker is a comprehensive service that allows data scientists and developers to build, train, and deploy machine learning models for a wide range of use cases. SageMaker supports the complete machine learning lifecycle, providing tools to label and prepare your data, choose an algorithm, train the model, tune and optimize it for deployment, make predictions, and take action.&lt;/p&gt;

&lt;p&gt;Amazon SageMaker comes with a capability called "JumpStart" that essentially does exactly what it says. It's a jump start that speeds up machine learning projects by providing pre-built solutions and trained models, making it easier for users to get their projects off the ground. This feature is a blessing for developers who want to get started quickly without having to reinvent the wheel. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Amazon SageMaker features summarized:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Comprehensive Machine Learning Lifecycle Support:&lt;/strong&gt; Amazon SageMaker supports every step of the machine learning process, from data labeling and preparation to model deployment and monitoring.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Built-in Algorithms and Frameworks:&lt;/strong&gt; Amazon SageMaker provides various built-in algorithms and frameworks, allowing users to select the most suitable one for their needs without requiring them to build everything from scratch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automatic Model Tuning:&lt;/strong&gt; Amazon SageMaker automatically tunes models by testing thousands of different combinations of algorithm parameters to arrive at the best-performing model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Training and Inference with SageMaker Studio:&lt;/strong&gt; SageMaker Studio provides a single, web-based visual interface where you can perform all ML development steps, making it easier to build, train, and tune machine learning models.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Managed Spot Training:&lt;/strong&gt; Amazon SageMaker Managed Spot Training allows users to use Amazon EC2 Spot instances for training ML models, resulting in cost savings of up to 90% compared to on-demand instances.&lt;/p&gt;
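&lt;p&gt;As a rough illustration of how Managed Spot Training is switched on, here is a sketch using the SageMaker Python SDK. The image URI, role ARN, and S3 paths are placeholders, not real resources:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from sagemaker.estimator import Estimator

estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training-image:latest",
    role="arn:aws:iam::123456789012:role/MySageMakerRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    use_spot_instances=True,  # request Spot capacity for training
    max_run=3600,             # cap on actual training time (seconds)
    max_wait=7200,            # cap on training time plus waiting for Spot capacity
    output_path="s3://my-bucket/model-artifacts/",
)

estimator.fit({"training": "s3://my-bucket/training-data/"})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;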

&lt;h3&gt;
  
  
  Data Protection and Security Requirements
&lt;/h3&gt;

&lt;p&gt;Both Bedrock and SageMaker offer robust security features inherent in the AWS ecosystem. However, the services differ in terms of data management as their underlying architectures bring significant differences that could potentially impact your use case.&lt;/p&gt;

&lt;p&gt;With Amazon SageMaker, customers have complete control over their data and the underlying infrastructure. This means that users have full authority over where the data is processed. They have the ability to encrypt data both at rest and in transit, manage data access through Identity and Access Management (IAM) roles, and comply with regulations through AWS’s robust compliance offerings. Users can also use SageMaker in their Virtual Private Cloud (VPC) to have network level control. For customers with stringent data security requirements, this level of control is paramount.&lt;/p&gt;
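&lt;p&gt;To make that level of control concrete, here is a hedged sketch of a training job pinned to your own VPC, with KMS-encrypted volumes and artifacts. Every identifier below is hypothetical:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from sagemaker.estimator import Estimator

estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training-image:latest",
    role="arn:aws:iam::123456789012:role/MySageMakerRole",  # IAM-scoped data access
    instance_count=2,
    instance_type="ml.m5.xlarge",
    subnets=["subnet-0abc12340abc12340"],         # run training inside your VPC
    security_group_ids=["sg-0abc12340abc12340"],  # network-level control
    volume_kms_key="arn:aws:kms:us-east-1:123456789012:key/11111111-1111-1111-1111-111111111111",
    output_kms_key="arn:aws:kms:us-east-1:123456789012:key/22222222-2222-2222-2222-222222222222",
    encrypt_inter_container_traffic=True,         # encrypt traffic between training nodes
    output_path="s3://my-bucket/model-artifacts/",
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;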

&lt;p&gt;On the other hand, Amazon Bedrock, being a managed service, processes data within the confines of the AWS environment. While this means users won't have to worry about infrastructure management, it also means that they have less direct control over where the data is processed. Nonetheless, Bedrock ensures that none of the user data is used to train the underlying foundational models and all data is encrypted and does not leave the user's VPC. However, organizations with extremely high-security requirements might need to evaluate these considerations carefully.&lt;/p&gt;

&lt;p&gt;While both services provide robust security features, the decision will likely come down to how much direct control over data and infrastructure your organization requires.&lt;/p&gt;

&lt;h3&gt;
  
  
  Effort to Setup
&lt;/h3&gt;

&lt;p&gt;Bedrock's setup, being fully managed, is expected to require less effort compared to SageMaker. Users simply select an appropriate pre-trained foundation model, customize it with their data, and start using it.&lt;/p&gt;

&lt;p&gt;Conversely, SageMaker requires more setup effort due to its large feature set: users have to prepare data, select or create an algorithm, train the model, and then deploy it. In addition, using SageMaker effectively requires more technical experience and additional infrastructure management compared to Bedrock.&lt;/p&gt;

&lt;h3&gt;
  
  
  Customizability
&lt;/h3&gt;

&lt;p&gt;SageMaker excels in customizability: you can bring your own algorithms, use the built-in ones, or choose from the many available on the AWS Marketplace and from third parties. You can fine-tune these models to meet specific requirements. More detailed, specialized requirements are therefore better served by SageMaker, as you can take any available open-source Large Language Model (LLM) and train it to your preference.&lt;/p&gt;

&lt;p&gt;In contrast, Bedrock's customizability appears less flexible than SageMaker's. While it allows users to customize foundation models with their data, Bedrock only ships with its default set of foundation models, albeit with fine-tuning capabilities.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Cases
&lt;/h3&gt;

&lt;p&gt;Bedrock is ideal for scenarios where organizations need advanced AI capabilities quickly without having to deal with building models or managing infrastructure. Use cases include creative content generation, dialog system creation, text summarization, multilingual text creation, and advanced image generation tasks.&lt;/p&gt;

&lt;p&gt;SageMaker is suitable for a broader range of machine learning tasks that require detailed control over the process of model creation, training and deployment. It is ideal for predictive analytics, recommendation systems, anomaly detection, or any other task that requires a customized machine learning model.&lt;/p&gt;

&lt;h3&gt;
  
  
  Costs
&lt;/h3&gt;

&lt;p&gt;When it comes to cost, Amazon Bedrock could potentially have an advantage over Amazon SageMaker. Since Bedrock is a fully managed service, it abstracts away much of the infrastructure management, potentially reducing operational costs. Additionally, given that it's serverless, you only pay for the resources you actually use. This contrasts with SageMaker, where you may need to maintain dedicated resources even when not in use.&lt;/p&gt;

&lt;p&gt;Specifically, Amazon Bedrock's pricing model is based on usage, with charges for the number of tokens processed by the foundational models and the compute time used for fine-tuning. This means that the cost directly scales with your needs.&lt;/p&gt;
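&lt;p&gt;A quick back-of-the-envelope calculation shows how that scales. The per-token rates below are purely hypothetical placeholders; always check the current Bedrock price list for the real, per-model figures:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical prices for illustration only.
price_per_1k_input_tokens = 0.0008   # USD, assumed
price_per_1k_output_tokens = 0.0016  # USD, assumed

input_tokens = 500_000   # example monthly usage
output_tokens = 120_000

cost = (input_tokens / 1000) * price_per_1k_input_tokens \
    + (output_tokens / 1000) * price_per_1k_output_tokens
print(f"Estimated monthly inference cost: ${cost:.2f}")  # $0.59 with these rates
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;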

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Both Amazon Bedrock and Amazon SageMaker are powerful tools in the AWS AI/ML service landscape. Choosing between the two depends on your specific needs. If you need to quickly integrate advanced AI capabilities without much customization, Bedrock is the way to go. But if your use case demands deep customizability and you're willing to invest more effort into setting up and managing the model training process, SageMaker would be a better fit.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ai</category>
      <category>generativeai</category>
      <category>amazonbedrock</category>
    </item>
    <item>
      <title>Spicing Up AWS Architecture Diagrams: A Step-by-Step Guide To Creating Animated AWS Architecture GIFs</title>
      <dc:creator>Artur Schneider</dc:creator>
      <pubDate>Fri, 30 Jun 2023 14:53:16 +0000</pubDate>
      <link>https://dev.to/aws-builders/spicing-up-aws-architecture-diagrams-a-step-by-step-guide-to-creating-animated-aws-architecture-gifs-jjb</link>
      <guid>https://dev.to/aws-builders/spicing-up-aws-architecture-diagrams-a-step-by-step-guide-to-creating-animated-aws-architecture-gifs-jjb</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;When it comes to visualizing the design and structure of cloud infrastructures, AWS architecture diagrams are an indispensable tool. They provide clear, concise visual representations that can communicate complex system designs with ease. But what if we could take these static diagrams a step further? What if we could make them more engaging and interactive?&lt;/p&gt;

&lt;p&gt;When I stumbled upon a &lt;a href="https://www.linkedin.com/posts/ankit-jodhani_blogathon-showwcase-growincommunity-ugcPost-7075669595998552065-gUPe?utm_source=share&amp;amp;utm_medium=member_desktop" rel="noopener noreferrer"&gt;LinkedIn post by Ankit Jodhani&lt;/a&gt;, I was immediately captivated by the animated architecture diagrams he presented. Like something out of a sci-fi movie, these diagrams twinkled, flashed, and undulated, drawing the viewer into the world of the architecture in a way static diagrams simply couldn't. The enigma surrounding this animation magic sparked a question: what advanced, perhaps extraterrestrial technology was behind this?&lt;/p&gt;

&lt;p&gt;To my surprise, the solution was not found in the outer reaches of the cosmos, but rather, nestled within a software we've known and used for years - PowerPoint. Yes, the same PowerPoint we've used to make countless school presentations and office reports was the secret behind these lively diagrams.&lt;/p&gt;

&lt;p&gt;With clever manipulation of PowerPoint's animation features, Ankit breathed a dynamic life into the typically static AWS architecture diagrams, giving them a whole new level of interactivity and engagement. His work truly illustrates that creativity is not about the tools we use, but how we use them.&lt;/p&gt;

&lt;p&gt;The allure of this novel approach to presenting architecture diagrams was simply too much to resist. Therefore, I've decided to dissect and demystify this process, providing you with a comprehensive step-by-step guide to animate your own AWS architecture diagrams. By the end of this journey, not only will you have a deeper understanding of PowerPoint's capabilities, but you'll also have the skills to create eye-catching AWS architecture diagrams that are sure to impress your clients and colleagues.&lt;/p&gt;

&lt;p&gt;Join me as we venture into the uncharted territory of animated AWS architecture diagrams, and let's bring your AWS architecture diagrams to life!&lt;/p&gt;

&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;p&gt;Before we dive into the step-by-step process of creating lively and engaging AWS architecture diagrams, let's make sure we have all the necessary tools at our disposal. These prerequisites will not only equip you to follow along with this guide, but will also set the foundation for your future endeavors in creating dynamic presentations.&lt;/p&gt;

&lt;p&gt;Here are the key requirements:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Microsoft PowerPoint&lt;/strong&gt;: To make your AWS architecture diagrams come to life, the first thing you need is Microsoft PowerPoint. This guide is designed with the latest version of PowerPoint from Office 365 in mind, ensuring you're utilizing all the modern features and functionalities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Latest AWS PowerPoint Toolkit&lt;/strong&gt;: To build accurate and visually pleasing AWS diagrams, having a toolkit is invaluable. It offers you a comprehensive set of AWS-specific icons and resources. You can download it from the official &lt;a href="https://d1.awsstatic.com/webteam/architecture-icons/q2-2023/AWS-Architecture-Icons-Deck_For-Dark-BG_04282023.65449b8345e02ab5deca4fdeda360313aa999db2.zip" rel="noopener noreferrer"&gt;AWS website here&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;(Optional) AWS Icons Finder&lt;/strong&gt;: While not mandatory, this tool can vastly improve your efficiency. This is a simple web app that allows you to quickly search and copy AWS icons for your diagrams. Visit &lt;a href="https://jameskimthing.github.io/aws-icons" rel="noopener noreferrer"&gt;James Kim's AWS Icons Finder&lt;/a&gt; to try it out.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Creating Your Animated AWS Architecture Diagrams
&lt;/h2&gt;

&lt;p&gt;In this section, we will walk through the process of creating a dynamic, animated AWS architecture diagram. This process involves a sequence of detailed steps, from choosing your architectural layout to applying the finishing touches with animation and effects. Let's dive right in!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Selecting Your AWS Architecture&lt;/strong&gt;: To start, ensure you have a suitable AWS Architecture on hand. For the purpose of this guide, I will be using the "Git to S3 Webhooks" example from the AWS Architecture Deck.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8smfhrxi9atdsmscmtei.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8smfhrxi9atdsmscmtei.png" alt="Image Architecture" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Choosing Your Shape&lt;/strong&gt;: Now, select the shape that you want to animate around your architecture. For this example, I will use a simple circle.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6bj3q7uazf7lh59f322j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6bj3q7uazf7lh59f322j.png" alt="Image of Shape Selection" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Customizing Your Shape&lt;/strong&gt;: This is where you can get creative! You can customize your chosen shape with various colors and shape effects like a glow for that extra bling.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1j6fwgyyqlu2qd9cw7j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1j6fwgyyqlu2qd9cw7j.png" alt="Image of Shape effects &amp;amp; color" width="800" height="567"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Adding Animation&lt;/strong&gt;: Move your shape to its starting position, then navigate to the animation tab. Here, select how your animation should appear. In this case, I prefer using the "Fade" effect.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6d7p1ernzd0uhndu2x7e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6d7p1ernzd0uhndu2x7e.png" alt="Image of Animation option" width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Positioning Your Shapes&lt;/strong&gt;: Copy and paste your shape to each position where you want it to move.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9kenk2ff4pg6xne8scu7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9kenk2ff4pg6xne8scu7.png" alt="Image of all dots in place" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Fine-Tuning Your Animation&lt;/strong&gt;: I recommend opening the Animation pane and adding an exit animation to your shapes. This will ensure they disappear after the movement is over. For consistency, I also choose "Fade" for the exit animation. Ensure you select the red "Fade" option as this represents the exit, whereas the green "Fade" is the entrance animation. Then, organize the order of your animations in the Animation pane.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvay8dz165jjx584ly503.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvay8dz165jjx584ly503.png" alt="Image of animation pane and exit animation" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Adding Movement to Your Animation&lt;/strong&gt;: Click on each dot and select "Add animation". Then, under Motion Paths, choose the "Custom Path" option. This allows you to create a personalized animation path for your shapes. Alternatively, you can choose from a range of pre-determined paths such as straight lines or loops, based on your preference. Make sure these paths are in the correct order in the Animation pane.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd02x4jam8u55tcgyeyfd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd02x4jam8u55tcgyeyfd.png" alt="Image of choosing motion paths" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Setting Your Animation Sequence&lt;/strong&gt;: For animations that should start simultaneously, select them all and choose "Start With Previous". Also, organize the start of the animations in the following order: first the green (entrance animation), then blue (movement animation), and finally red (exit animation).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8a1945l9oggo073c5saw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8a1945l9oggo073c5saw.png" alt="Image of animation order" width="800" height="566"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reviewing and Exporting Your Animation&lt;/strong&gt;: With all the animations set, play through it to ensure everything looks as expected. If you need to, adjust the order of the animations. Once you're satisfied with your animation, export the slide as a GIF. The time it takes to create your GIF will depend on the quality you choose.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz1p4nzjyafwip6lpbiqd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz1p4nzjyafwip6lpbiqd.png" alt="Image GIF export" width="800" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With these steps, you're fully equipped to create your own captivating AWS architecture diagrams. If you encounter any challenges, don't worry - I can also provide an example PowerPoint as a reference to aid in your creative process, just send me a DM via &lt;a href="https://www.linkedin.com/in/artur-schneider-cloud/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;. Now it's time to turn your static diagrams into dynamic, engaging presentations that are sure to wow your audience! Big thanks again to &lt;a href="https://www.linkedin.com/in/ankit-jodhani/" rel="noopener noreferrer"&gt;Ankit Jodhani&lt;/a&gt; for the inspiration!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F112pa8ezgoycrcp17wk5.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F112pa8ezgoycrcp17wk5.gif" alt="GIF" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>architecture</category>
      <category>powerpoint</category>
      <category>design</category>
    </item>
    <item>
      <title>PrivateGPT and AWS EC2: A beginner's Guide to AI experimentation</title>
      <dc:creator>Artur Schneider</dc:creator>
      <pubDate>Thu, 22 Jun 2023 15:10:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/privategpt-and-aws-ec2-a-beginners-guide-to-ai-experimentation-2npm</link>
      <guid>https://dev.to/aws-builders/privategpt-and-aws-ec2-a-beginners-guide-to-ai-experimentation-2npm</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In this era of digital transformation, it's hard to miss the wave of artificial intelligence and machine learning that is sweeping across all sectors. As an enthusiast with a curious mind, but with a limited background in AI, I, like many others, was intrigued yet overwhelmed. This fascination led me down the path of exploring AI, specifically the world of large language models (LLMs).&lt;/p&gt;

&lt;p&gt;The world of AI may seem daunting, filled with a myriad of complex terms, algorithms, and architectures. However, my goal was to simplify this journey for myself and, in doing so, create an opportunity for others who wish to venture into this domain.&lt;/p&gt;

&lt;p&gt;In this quest for simplicity, I stumbled upon PrivateGPT, an easy-to-implement solution that allows individuals to host large language models on their local machines. Its powerful functionalities and ease of use make it an ideal starting point for anyone looking to experiment with AI. What's even more interesting is that it provides the option to use your own datasets, opening up avenues for unique, personalized AI applications - all of this without the need for a constant internet connection.&lt;/p&gt;

&lt;p&gt;PrivateGPT comes with a default language model named 'gpt4all-j-v1.3-groovy'. However, it does not limit the user to this single model. Users have the opportunity to experiment with various other open-source LLMs available on HuggingFace. One such model is Falcon 40B, one of the best-performing open-source LLMs at the time of writing.&lt;/p&gt;

&lt;p&gt;In this blog post, I'll guide you through the process of setting up PrivateGPT on an AWS EC2 instance and using your own documents as sources for conversations with the LLM. To make the interaction even more convenient, we will be using a solution that provides an intuitive user interface on top of PrivateGPT.&lt;/p&gt;

&lt;p&gt;So, fasten your seatbelts and get ready for a journey into the exciting realm of AI, as we explore and experiment with large language models, all in the comfort of your own private environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuration
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Launching the EC2 Instance
&lt;/h3&gt;

&lt;p&gt;In this section, we will walk through the process of setting up an AWS EC2 instance tailored for running a PrivateGPT instance. We'll take it step by step. This will lay the groundwork for us to experiment with our language models and to use our own data sources.&lt;/p&gt;

&lt;p&gt;Let's start by setting up the AWS EC2 instance:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Choosing an Operating System&lt;/strong&gt;: In our case, we will be using Amazon Linux as our operating system. This is an excellent choice for hosting PrivateGPT due to its seamless integration with AWS services and robust security features. However, PrivateGPT is flexible and can also be hosted on other operating systems such as Windows or Mac.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Selecting Instance Type&lt;/strong&gt;: For the needs of our task, we require an instance with a minimum of 16 GB memory. The specific instance type that you choose may depend on other factors like cost and performance. I recommend using one of the T3 instances, such as t3.xlarge (16 GB) or t3.2xlarge (32 GB).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6nn76ycka50w7atecnfe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6nn76ycka50w7atecnfe.png" alt="Selecting Instance Type" width="786" height="250"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Storage Configuration&lt;/strong&gt;: After choosing the instance type, we need to add additional storage for the language model and our data. The exact amount of storage you need will depend on the size of the model and your dataset. For instance, the Falcon 40B model requires a considerable amount of storage on its own.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1f9s7k0105n53wnibm8y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1f9s7k0105n53wnibm8y.png" alt="Storage Size Selection" width="789" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Instance Details&lt;/strong&gt;: Proceed with the default instance details for the initial setup. These can be modified later based on specific requirements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security Group Configuration&lt;/strong&gt;: To ensure we can access the instance from our client, it is essential to configure the security group appropriately. Add a new rule to the security group that allows inbound traffic for the ports 80 and 3000 from your client IP address. This will enable you to securely access the instance over the internet.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhfgpp3cckiquw13wm2kh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhfgpp3cckiquw13wm2kh.png" alt="Adjusting Security Group" width="800" height="114"&gt;&lt;/a&gt;&lt;/p&gt;
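&lt;p&gt;If you prefer to script this rule instead of clicking through the console, here is a minimal boto3 sketch. The security group ID and client IP are placeholders to replace with your own values:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# Allow inbound TCP 80 and 3000 from a single client IP.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": "203.0.113.10/32", "Description": "my client IP"}],
        }
        for port in (80, 3000)
    ],
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;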

&lt;p&gt;Remember that this setup is primarily for experimental purposes. Whitelisting IP addresses is one way to secure your instance from unwanted public access. However, for more complex or sensitive deployments, you may need a more robust security architecture.&lt;/p&gt;

&lt;p&gt;At this point, you've successfully set up your AWS EC2 instance, creating a solid foundation for running PrivateGPT. Let's continue with the setup of PrivateGPT...&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up PrivateGPT
&lt;/h2&gt;

&lt;p&gt;Now that we have our AWS EC2 instance up and running, it's time to move to the next step: installing and configuring PrivateGPT. The following sections will guide you through the process, from connecting to your instance to getting your PrivateGPT up and running.&lt;/p&gt;

&lt;h3&gt;
  
  
  Connecting to the EC2 Instance
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Connection Setup&lt;/strong&gt;: To start with, we need to connect to the EC2 instance. In this case, we will be using AWS Session Manager. Session Manager provides a secure and convenient way to interact with your instances. It allows you to connect to your instance without needing to open inbound ports, manage SSH keys, or use bastion hosts. If you're new to Session Manager, you can refer to the official AWS documentation to set up the required configuration. Alternatively, you could also connect to your instance via SSH if you prefer.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing Prerequisites
&lt;/h3&gt;

&lt;p&gt;Once connected, we need to install a few prerequisites:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Git&lt;/strong&gt;: Start by installing Git, which will allow us to clone the PrivateGPT repository. If you're using Amazon Linux, you can install Git by running the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Pip&lt;/strong&gt;: Pip is a package installer for Python, which we will need to install the Python packages required by PrivateGPT. You can install it by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install pip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NPM&lt;/strong&gt;: NPM (Node Package Manager) is used to install Node.js packages. If it's not already installed, you can install it, along with Node.js, by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install npm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Configuration of PrivateGPT
&lt;/h3&gt;

&lt;p&gt;With the prerequisites installed, we're now ready to set up PrivateGPT:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cloning the Repository&lt;/strong&gt;: Clone the PrivateGPT repository to your instance using Git. You can do this with the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/SamurAIGPT/privateGPT.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First, we set up and start the client, which provides the web UI. To do this, navigate to the client folder and execute the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install
npm run dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we install the necessary Python packages and start the Flask backend. To do this, change to the server folder and execute the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install -r requirements.txt
python privateGPT.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By following these steps, you should have a fully operational PrivateGPT instance running on your AWS EC2 instance. Now, you can start experimenting with large language models and using your own data sources for generating text!&lt;/p&gt;

&lt;h2&gt;
  
  
  Navigating the PrivateGPT User Interface
&lt;/h2&gt;

&lt;p&gt;Now that we've successfully set up the PrivateGPT on our AWS EC2 instance, it's time to familiarize ourselves with its user-friendly interface. The UI is an intuitive tool, making it incredibly easy for you to interact with your language model, upload documents, manage your models, and generate text.&lt;/p&gt;

&lt;p&gt;First and foremost, you need to access the PrivateGPT UI. You can accomplish this by typing the public IP address of your AWS EC2 instance into your web browser's address bar from your client, appending :3000 at the end. It should look something like this: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="http://your-public-ip:3000" rel="noopener noreferrer"&gt;http://your-public-ip:3000&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This will navigate you directly to the PrivateGPT interface hosted on your EC2 instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftsrvf280wxcxedqw02q7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftsrvf280wxcxedqw02q7.png" alt="Entering UI" width="682" height="189"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you've entered the UI, the next step is to download a Large Language Model (LLM). You'll find a button in the UI specifically for this purpose. Clicking this button will commence the download process for the default language model 'gpt4all-j-v1.3-groovy'. Remember, PrivateGPT comes with a default language model, but you also have the freedom to experiment with others, like Falcon 40B from HuggingFace.&lt;/p&gt;

&lt;p&gt;With the language model ready, you're now prepared to upload your documents. Select the documents you'd like to use as a source for your LLM. After your documents have been successfully uploaded, the data needs to be ingested by the system. Look for an 'Ingest Data' button within the UI and click it. This action enables the model to consume and process the data from your documents.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fch3e9e0xkbh32of4ybt3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fch3e9e0xkbh32of4ybt3.png" alt="Ingesting Data" width="531" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, with all the preparations complete, you're all set to start a conversation with your AI. Use the conversation input box to communicate with the model, and it will respond based on the knowledge it has gained from the ingested documents and its underlying model. (In my example I have generated PDF files from the official AWS documentations)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ggpx1zr30hxu3zynvor.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ggpx1zr30hxu3zynvor.png" alt="Response from Bot" width="682" height="220"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And voila! You've now set foot in the fascinating world of AI-powered text generation. You can continue to explore and experiment with different settings and models to refine your understanding and outcomes. Enjoy this exciting journey!&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;As we wind up this exploration, it's worth highlighting that while PrivateGPT may not offer the exact capabilities of something like ChatGPT, it provides a robust and secure environment to experiment with large language models, leveraging your own sources of data. From PDFs, HTML files, to Word documents and beyond, PrivateGPT offers flexibility in the types of documents you can use as data sources. (For the full list of supported document types, refer to the official PrivateGPT GitHub repository at &lt;a href="https://github.com/imartinez/privateGPT" rel="noopener noreferrer"&gt;https://github.com/imartinez/privateGPT&lt;/a&gt;.)&lt;/p&gt;

&lt;p&gt;What makes PrivateGPT all the more compelling is the constant evolution of open-source large language models. As these models continue to improve, the gap between services like ChatGPT is rapidly closing. The added advantage is that you're in control of your own data and infrastructure, providing a level of trust and flexibility that is invaluable in the rapidly evolving AI landscape.&lt;/p&gt;

&lt;p&gt;Undoubtedly, the journey into the realm of AI and large language models doesn't end here. With services like AWS SageMaker and open-source models from HuggingFace, the possibilities for experimentation and development are extensive. For those interested in more robust solutions, I highly recommend checking out the insightful blog post by Philipp Schmid on using AWS SageMaker with large language models in an AWS environment. You can find his blog post &lt;a href="https://www.philschmid.de/sagemaker-llm-vpc" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In conclusion, whether you're a seasoned AI practitioner or a curious enthusiast, tools like PrivateGPT offer an exciting playground to dive deeper into the world of AI. So go ahead, set up your PrivateGPT instance, play around with your data and models, and experience the incredible power of AI at your fingertips. Remember, "es lohnt sich" - it's worth it!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Shout out to the creators of PrivateGPT, &lt;a href="https://github.com/imartinez" rel="noopener noreferrer"&gt;Ivan Martinez&lt;/a&gt; and the group behind &lt;a href="https://github.com/SamurAIGPT" rel="noopener noreferrer"&gt;SamurAIGPT&lt;/a&gt;, who give us a great start into the AI world through this simplification.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>privategpt</category>
      <category>aws</category>
      <category>generativeai</category>
      <category>ai</category>
    </item>
    <item>
      <title>Unlocking Cost Savings and Sustainability with AWS S3 and Synology Hyper Backup</title>
      <dc:creator>Artur Schneider</dc:creator>
      <pubDate>Mon, 05 Jun 2023 14:31:10 +0000</pubDate>
      <link>https://dev.to/aws-builders/unlocking-cost-savings-and-sustainability-with-aws-s3-and-synology-hyper-backup-4ah5</link>
      <guid>https://dev.to/aws-builders/unlocking-cost-savings-and-sustainability-with-aws-s3-and-synology-hyper-backup-4ah5</guid>
      <description>&lt;p&gt;The importance of safeguarding digital data cannot be overstated, and as a committed user of Synology NAS, I understood this imperative well. While my NAS offered centralized storage and easy accessibility, it also underscored the need for a reliable, cost-efficient, and sustainable disaster recovery (DR) solution. For a long time, Dropbox was my go-to for this. However, as my data needs evolved, Dropbox's fixed pricing structure began to feel increasingly ill-fitted and unsustainable economically.&lt;/p&gt;

&lt;p&gt;That's when I decided to explore the potential of AWS S3. Renowned for its scalability, security, and performance, AWS S3 also offered a flexible "pay-as-you-use" pricing model that seemed like a more sustainable and financially sensible choice.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up AWS S3 with Synology Hyper Backup
&lt;/h2&gt;

&lt;p&gt;Setting up AWS S3 with my Synology NAS was made incredibly straightforward through the Hyper Backup application. The transition involved:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Creating an AWS account and setting up an S3 bucket with proper access permissions (a scripted sketch of this step follows after this list).&lt;/li&gt;
&lt;li&gt;Launching the Hyper Backup application on the Synology DSM.&lt;/li&gt;
&lt;li&gt;Selecting AWS S3 as my backup destination.&lt;/li&gt;
&lt;li&gt;Filling in the necessary AWS S3 bucket details, including Access Key ID, Secret Access Key, and the bucket name.&lt;/li&gt;
&lt;li&gt;Configuring the backup settings, including selecting local folders to back up, setting automatic backups, and managing versions of my backups.&lt;/li&gt;
&lt;li&gt;Launching the backup process to begin syncing my Synology NAS data with my AWS S3 bucket.&lt;/li&gt;
&lt;/ol&gt;
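&lt;p&gt;For those who prefer to script the AWS side of step 1, here is a minimal boto3 sketch that creates the bucket and a least-privilege IAM user for Hyper Backup. The bucket name, user name, and region are placeholders to adapt:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import json

s3 = boto3.client("s3", region_name="eu-central-1")
iam = boto3.client("iam")

bucket = "my-synology-hyperbackup-bucket"  # placeholder name
s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={"LocationConstraint": "eu-central-1"},
)

# Policy that limits the backup user to this one bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["s3:ListBucket"],
         "Resource": f"arn:aws:s3:::{bucket}"},
        {"Effect": "Allow",
         "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
         "Resource": f"arn:aws:s3:::{bucket}/*"},
    ],
}

iam.create_user(UserName="synology-backup")
iam.put_user_policy(
    UserName="synology-backup",
    PolicyName="HyperBackupS3Access",
    PolicyDocument=json.dumps(policy),
)

# Access key pair to enter in Hyper Backup's destination settings.
keys = iam.create_access_key(UserName="synology-backup")["AccessKey"]
print(keys["AccessKeyId"], keys["SecretAccessKey"])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Scoping the user to a single bucket keeps the blast radius small should the NAS credentials ever leak.&lt;/p&gt;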

&lt;h2&gt;
  
  
  Economic and Environmental Impact
&lt;/h2&gt;

&lt;p&gt;Dropbox's pricing model offered a static storage capacity, regardless of the actual usage. In my case, I didn't need the full size of the provided storage, so a sizable portion of the storage I was paying for went unused. This situation was not only financially inefficient but also unsustainable as it led to wasteful consumption of digital storage resources.&lt;/p&gt;

&lt;p&gt;Conversely, AWS S3's pay-as-you-go model allows users to pay only for the exact amount of storage used. This flexible model eliminated unnecessary waste, leading to a more sustainable and financially efficient solution. I found myself making savings of nearly 50% compared to my Dropbox subscription, as I was now only paying for the storage I was actively using.&lt;/p&gt;

&lt;p&gt;Moreover, AWS offers cost management tools to monitor spending and set up usage alerts. This ensured that I maintained control over my budget and further reduced waste, aligning with sustainable digital practices.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Switching to AWS S3 for my Synology Hyper Backup solution has proven to be a sustainable, economical, and custom-fit solution for my DR needs. The flexible pricing model of AWS S3 has allowed me to eliminate waste, both in terms of storage and cost, while retaining a secure and reliable backup of my data.&lt;/p&gt;

&lt;p&gt;By moving from Dropbox to AWS S3, I not only found a cost-effective DR solution but also one that better aligns with principles of sustainable consumption and efficiency. If you're looking for an affordable, secure, adaptable, and sustainable DR solution for your Synology NAS, AWS S3 is a fantastic option to consider.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>backup</category>
      <category>synology</category>
      <category>sustainability</category>
    </item>
    <item>
      <title>Migrating large amounts of data to Amazon FSx for Windows File Server with AWS DataSync</title>
      <dc:creator>Artur Schneider</dc:creator>
      <pubDate>Mon, 29 May 2023 10:51:53 +0000</pubDate>
      <link>https://dev.to/aws-builders/migrating-large-amounts-of-data-to-amazon-fsx-for-windows-file-server-with-aws-datasync-10m</link>
      <guid>https://dev.to/aws-builders/migrating-large-amounts-of-data-to-amazon-fsx-for-windows-file-server-with-aws-datasync-10m</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the evolving landscape of technology and business, companies often find themselves navigating through a myriad of choices. One such choice is the decision between maintaining on-premises infrastructure or shifting to cloud-based solutions. This dilemma was recently faced by one of our clients - a global container shipment company.&lt;/p&gt;

&lt;p&gt;Their challenge was an aging fleet of local Windows file servers. As the hardware aged, issues of inefficiency, decreased performance, and increased maintenance costs became prevalent. Faced with the significant investment of time and resources required to refresh their local servers, they began to explore alternative options.&lt;/p&gt;

&lt;p&gt;Their search led them to AWS, and specifically, Amazon FSx for Windows File Server. Amazon FSx for Windows File Server offers fully managed Microsoft Windows file servers, providing the compatibility and features that their organization needed, all with the flexibility, scalability, and cost benefits of a cloud-based solution. It was an obvious choice, not only from a technological perspective but also for a smoother transition in terms of operating environment.&lt;/p&gt;

&lt;p&gt;The only obstacle that remained was the formidable task of migrating vast amounts of data from their local servers to the cloud. The complexity and potential risks of such a migration should not be underestimated. Yet, AWS once again provided the solution: AWS DataSync.&lt;/p&gt;

&lt;p&gt;AWS DataSync is a data transfer service that simplifies, automates, and accelerates copying large amounts of data to and from AWS storage services over the internet or AWS Direct Connect. Leveraging DataSync, we aimed to streamline the file migration process from a single dashboard, ensuring a seamless and efficient transition.&lt;/p&gt;

&lt;p&gt;In this blog post, we'll delve into our experience of this migration. I'll guide you through the process we followed, the challenges we faced, the solutions we devised, and most importantly, the success we achieved. Strap in for a deep dive into our journey from local Windows servers to Amazon FSx, orchestrated by AWS DataSync.&lt;/p&gt;

&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;p&gt;Before we dive into the migration, it's essential to understand the necessary prerequisites for using AWS DataSync and Amazon FSx for Windows.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS DataSync requirements
&lt;/h3&gt;

&lt;p&gt;Let's take a closer look at the DataSync requirements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network Connectivity&lt;/strong&gt;: AWS DataSync requires network connectivity between your source servers and the AWS Region where your destination is located. This can be achieved either via the internet or through a dedicated network connection using AWS Direct Connect or VPN - in our case, via VPC endpoints. It is worth having a closer look at the official AWS documentation: &lt;a href="https://docs.aws.amazon.com/datasync/latest/userguide/datasync-network.html" rel="noopener noreferrer"&gt;AWS DataSync network requirements&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DataSync Agent&lt;/strong&gt;: The DataSync agent must be installed on a virtual machine or physical server with at least 4 vCPUs, 16 GB of RAM, and network connectivity to both your source storage and AWS. You can deploy your agent on a VMware ESXi, Linux Kernel-based Virtual Machine (KVM), or Microsoft Hyper-V hypervisor. For storage in a virtual private cloud (VPC) in AWS, you can deploy an agent even on an Amazon EC2 instance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Outbound Network Access&lt;/strong&gt;: The DataSync agent requires outbound access over port 443 (HTTPS) to communicate with the service; port 80 (HTTP) is needed during agent activation. These ports must be open on your firewall.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;File System Compatibility&lt;/strong&gt;: If you are transferring data from a Windows-based file system, it should support SMB (Server Message Block) protocol. For Unix or Linux source data, your file system must support NFS (Network File System).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Permissions&lt;/strong&gt;: In order to transfer data, the DataSync service needs the necessary permissions in AWS Identity and Access Management (IAM). This requires an IAM role that DataSync can assume to access resources.&lt;/p&gt;
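
&lt;p&gt;For illustration, the trust relationship of such a role looks like the boto3 sketch below; the role name is a placeholder, and the permissions policy you attach afterwards depends on your destination (FSx, S3, CloudWatch Logs):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets the DataSync service assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "datasync.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.create_role(
    RoleName="datasync-migration-role",  # placeholder name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)&lt;/code&gt;&lt;/pre&gt;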

&lt;h3&gt;
  
  
  Amazon FSx for Windows File Server requirements
&lt;/h3&gt;

&lt;p&gt;Now let's look at the requirements specific to using Amazon FSx for Windows File Server.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;VPC and Security Group Configuration&lt;/strong&gt;: Amazon FSx requires an Amazon VPC with the necessary security group rules. Your security groups must allow inbound traffic over the SMB port (usually port 445) from your clients, and the clients must be able to route traffic to the Amazon FSx file system.&lt;/p&gt;
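
&lt;p&gt;As a small sketch of that rule (the security group ID and client CIDR are placeholders), opening SMB into the FSx security group with boto3 looks like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import boto3

ec2 = boto3.client("ec2")

# Allow inbound SMB (TCP 445) from the client network into the FSx security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder FSx security group
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 445,
            "ToPort": 445,
            "IpRanges": [{"CidrIp": "10.0.0.0/16", "Description": "SMB clients"}],
        }
    ],
)&lt;/code&gt;&lt;/pre&gt;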

&lt;p&gt;&lt;strong&gt;Active Directory Integration&lt;/strong&gt;: Amazon FSx must be joined to an AWS Managed Microsoft AD or self-managed AD. Your Windows users and groups must be part of this Active Directory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Minimum Storage Capacity&lt;/strong&gt;: The minimum storage capacity for Amazon FSx is 32 GiB.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Backup and Maintenance Preferences&lt;/strong&gt;: You need to choose your preferred daily backup window, weekly maintenance window, and whether to enable automatic backup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network Connectivity&lt;/strong&gt;: The FSx file server must have network connectivity to your workloads either within AWS (for workloads running on EC2 instances) or on-premises (for workloads running on your local servers).&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up Amazon FSx for Windows File Server
&lt;/h2&gt;

&lt;p&gt;Before you can proceed with setting up AWS DataSync, your destination - in this case, Amazon FSx for Windows File Server - needs to be correctly set up and ready to receive data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Prepare for Amazon FSx&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before creating your Amazon FSx for Windows File Server, make sure you meet the following prerequisites:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Have an Active Directory ready for FSx to join, reachable from your AWS environment - either an AWS Managed Microsoft AD or a self-managed Active Directory.&lt;/li&gt;
&lt;li&gt;Ensure you have a VPC and Security Group ready to be associated with FSx.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Create Your Amazon FSx for Windows File System&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to the Amazon FSx console and select "Create file system".&lt;/li&gt;
&lt;li&gt;Choose the deployment type as "FSx for Windows File Server" and select "Next".&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Specify File System Details&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Name your file system for easy identification and proceed to configure it.&lt;/li&gt;
&lt;li&gt;Choose your desired storage capacity and performance level.&lt;/li&gt;
&lt;li&gt;Configure the network &amp;amp; security settings by selecting the desired VPC, the preferred subnet, and the security groups.&lt;/li&gt;
&lt;li&gt;In the Windows authentication section, select the Microsoft Active Directory that you've set up. The Amazon FSx for Windows File Server will be joined to this AD.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Configure Optional Settings&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can leave these settings at their defaults or customize them according to your needs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Maintenance preferences: You can specify a weekly time window during which automatic maintenance activities occur.&lt;/li&gt;
&lt;li&gt;Data deduplication: This can reduce your storage costs if you have redundant data.&lt;/li&gt;
&lt;li&gt;Encryption: FSx data is encrypted at rest using keys managed through AWS Key Management Service (KMS). You can use the default key or choose one you created.&lt;/li&gt;
&lt;li&gt;Throughput capacity: You can specify a throughput capacity manually or accept the value Amazon FSx recommends based on your configuration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Review and Create&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Review your settings to make sure everything is correct.&lt;/li&gt;
&lt;li&gt;Finally, click "Create file system".&lt;/li&gt;
&lt;/ul&gt;
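
&lt;p&gt;If you prefer scripting over the console, steps 2 through 5 roughly collapse into a single API call. The boto3 sketch below uses placeholder IDs (subnet, security group, directory) and minimal sizing, so treat it as an outline rather than a production configuration:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import boto3

fsx = boto3.client("fsx")

# Create a single-AZ FSx for Windows file system joined to a Managed Microsoft AD.
response = fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=32,  # minimum capacity is 32 GiB
    StorageType="SSD",
    SubnetIds=["subnet-0123456789abcdef0"],     # placeholder
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder
    WindowsConfiguration={
        "ActiveDirectoryId": "d-0123456789",    # placeholder AWS Managed AD
        "DeploymentType": "SINGLE_AZ_2",
        "ThroughputCapacity": 32,
        "AutomaticBackupRetentionDays": 7,
        "DailyAutomaticBackupStartTime": "01:00",
        "WeeklyMaintenanceStartTime": "7:02:00",  # day:hh:mm, 7 = Sunday
    },
)

print(response["FileSystem"]["FileSystemId"])
print(response["FileSystem"]["DNSName"])&lt;/code&gt;&lt;/pre&gt;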

&lt;p&gt;Your Amazon FSx for Windows File Server should now be set up and ready to serve as the destination for your data transfer task in AWS DataSync. Note the DNS name and file system ID of your newly created FSx for Windows file system, along with its security groups - you'll need these details when configuring the destination location in AWS DataSync.&lt;/p&gt;

&lt;p&gt;Now that your Amazon FSx is all set up, you can proceed with defining your target in AWS DataSync, and start transferring your data!&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up AWS DataSync
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Setting up VPC Endpoints&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before setting up AWS DataSync, ensure you've created a VPC endpoint to communicate with AWS DataSync securely within your Amazon VPC.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to the VPC Dashboard in your AWS Management Console.&lt;/li&gt;
&lt;li&gt;Under the "Virtual Private Cloud" section, click on "Endpoints".&lt;/li&gt;
&lt;li&gt;Click "Create Endpoint", and in the "Service category" choose "AWS services".&lt;/li&gt;
&lt;li&gt;For the service name, select "com.amazonaws.[your-region].datasync" where [your-region] should be replaced with your AWS region (like us-east-1, us-west-2, etc.).&lt;/li&gt;
&lt;li&gt;In the VPC section, choose the VPC where you want to create the endpoint.&lt;/li&gt;
&lt;li&gt;Select the appropriate Route Table, Security Group, and other options as per your requirements and click "Create endpoint".&lt;/li&gt;
&lt;/ul&gt;
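
&lt;p&gt;The same endpoint can be created programmatically. A minimal boto3 sketch, using placeholder IDs and eu-central-1 as the example region:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# Interface endpoint that lets the DataSync agent reach the service privately.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",  # placeholder
    ServiceName="com.amazonaws.eu-central-1.datasync",
    SubnetIds=["subnet-0123456789abcdef0"],     # placeholder
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder
)

print(response["VpcEndpoint"]["VpcEndpointId"])&lt;/code&gt;&lt;/pre&gt;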

&lt;p&gt;&lt;strong&gt;Step 2: Deploying the DataSync Agent&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Visit the AWS DataSync console in your AWS account and select "Get started".&lt;/li&gt;
&lt;li&gt;Choose "Deploy a new agent", and follow the instructions to download and deploy the agent in your local environment.&lt;/li&gt;
&lt;li&gt;You can connect to the DataSync agent's local console from your hypervisor to, for example, test the connection to the VPC endpoint or retrieve the activation key.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feaebgg4szxn2sybraihj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feaebgg4szxn2sybraihj.png" alt="AWS DataSync Agent local console view" width="686" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Configuring the DataSync Agent&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After deploying the agent, you'll need to configure it to connect to your on-premises file system and your AWS account.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the AWS DataSync console, select your newly deployed agent.&lt;/li&gt;
&lt;li&gt;Click "Configure agent", and then enter the IP address or hostname of your on-premises file system.&lt;/li&gt;
&lt;li&gt;Enter your AWS account ID, choose the AWS region where you want to transfer data, and input the VPC endpoint ID that you created in Step 1.&lt;/li&gt;
&lt;li&gt;Choose a method to authenticate the agent to your AWS account (either by creating an IAM role or by entering your AWS access keys).&lt;/li&gt;
&lt;/ul&gt;
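
&lt;p&gt;Once you have the activation key from the agent's local console, the activation itself can also be driven through the API. A sketch with a placeholder key and placeholder ARNs, binding the agent to the VPC endpoint from Step 1:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import boto3

datasync = boto3.client("datasync", region_name="eu-central-1")

# Register the on-premises agent with the account, bound to the VPC endpoint.
response = datasync.create_agent(
    ActivationKey="AAAAA-BBBBB-CCCCC-DDDDD-EEEEE",  # placeholder key
    AgentName="onprem-fileserver-agent",
    VpcEndpointId="vpce-0123456789abcdef0",         # from Step 1
    SubnetArns=["arn:aws:ec2:eu-central-1:111122223333:subnet/subnet-0123456789abcdef0"],
    SecurityGroupArns=["arn:aws:ec2:eu-central-1:111122223333:security-group/sg-0123456789abcdef0"],
)

print(response["AgentArn"])&lt;/code&gt;&lt;/pre&gt;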

&lt;p&gt;&lt;strong&gt;Step 4: Create a Location for your Data&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the DataSync console, choose "Create location".&lt;/li&gt;
&lt;li&gt;Select the location type based on where your data is stored. For on-premises locations, choose "NFS" or "SMB".&lt;/li&gt;
&lt;li&gt;Depending on the location type, provide the required information: for NFS, the agent, server hostname, and mount path; for SMB, the agent, domain, user name, password, server hostname, and share name.&lt;/li&gt;
&lt;li&gt;Similarly, create a location for your destination data in FSx (both calls are sketched below).&lt;/li&gt;
&lt;/ul&gt;
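
&lt;p&gt;For our SMB source and FSx destination, the two locations map to two API calls. Again a hedged sketch - hostnames, credentials, and ARNs are placeholders:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import boto3

datasync = boto3.client("datasync", region_name="eu-central-1")

# Source: an SMB share on the on-premises Windows file server.
source = datasync.create_location_smb(
    ServerHostname="fileserver01.corp.example.com",  # placeholder
    Subdirectory="/share",
    User="migration-user",
    Domain="CORP",
    Password="REPLACE_ME",
    AgentArns=["arn:aws:datasync:eu-central-1:111122223333:agent/agent-0123456789abcdef0"],
)

# Destination: the FSx for Windows file system created earlier.
destination = datasync.create_location_fsx_windows(
    FsxFilesystemArn="arn:aws:fsx:eu-central-1:111122223333:file-system/fs-0123456789abcdef0",
    SecurityGroupArns=["arn:aws:ec2:eu-central-1:111122223333:security-group/sg-0123456789abcdef0"],
    User="migration-user",
    Domain="CORP",
    Password="REPLACE_ME",
)

print(source["LocationArn"])
print(destination["LocationArn"])&lt;/code&gt;&lt;/pre&gt;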

&lt;p&gt;&lt;strong&gt;Step 5: Create a Task to Transfer Data&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After setting up locations, you can now create a data transfer task.&lt;/li&gt;
&lt;li&gt;In the DataSync console, choose "Create task".&lt;/li&gt;
&lt;li&gt;Select the source location and destination location that you created in Step 4.&lt;/li&gt;
&lt;li&gt;Configure your data transfer settings like options for handling metadata, data verification, etc., and set a schedule if you want the task to run on a schedule.&lt;/li&gt;
&lt;li&gt;Finally, start the data transfer task.&lt;/li&gt;
&lt;/ul&gt;
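
&lt;p&gt;Creating and starting the task is equally scriptable. A minimal sketch with placeholder location ARNs; the verification and overwrite options shown are illustrative choices, not the only valid ones:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import boto3

datasync = boto3.client("datasync", region_name="eu-central-1")

# Define the transfer from the SMB source location to the FSx destination.
task = datasync.create_task(
    SourceLocationArn="arn:aws:datasync:eu-central-1:111122223333:location/loc-source0123456789",
    DestinationLocationArn="arn:aws:datasync:eu-central-1:111122223333:location/loc-dest01234567890",
    Name="fileserver01-to-fsx",
    Options={
        "VerifyMode": "POINT_IN_TIME_CONSISTENT",  # verify the whole destination afterwards
        "OverwriteMode": "ALWAYS",
        "PreserveDeletedFiles": "PRESERVE",  # keep destination files absent from the source
        "TransferMode": "CHANGED",           # only copy data that differs
    },
)

# Kick off one execution of the task.
execution = datasync.start_task_execution(TaskArn=task["TaskArn"])
print(execution["TaskExecutionArn"])&lt;/code&gt;&lt;/pre&gt;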

&lt;p&gt;You should monitor your data transfer tasks in the AWS DataSync console. This is where you can view the progress of each task, check for errors, and review performance metrics. Please remember that this is a simplified overview of the process and does not include every possible configuration option. Always refer to the official AWS documentation for the most accurate and up-to-date information.&lt;/p&gt;
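
&lt;p&gt;If you want the same visibility from a script, a simple polling loop over the task execution works. A sketch with a placeholder execution ARN:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import time

import boto3

datasync = boto3.client("datasync", region_name="eu-central-1")

# Placeholder ARN of the execution started above.
execution_arn = (
    "arn:aws:datasync:eu-central-1:111122223333:"
    "task/task-0123456789abcdef0/execution/exec-0123456789abcdef0"
)

# Poll until the execution finishes, printing progress along the way.
while True:
    status = datasync.describe_task_execution(TaskExecutionArn=execution_arn)
    print(status["Status"], status.get("BytesTransferred", 0), "bytes transferred")
    if status["Status"] in ("SUCCESS", "ERROR"):
        break
    time.sleep(30)&lt;/code&gt;&lt;/pre&gt;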

&lt;h2&gt;
  
  
  Challenges
&lt;/h2&gt;

&lt;p&gt;During our migration we experienced some challenges, which I can summarize as follows:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network Configuration Challenges&lt;/strong&gt;: Establishing secure and efficient network connections between the local environment and AWS was one of the major challenges. Working with the existing VPN connections and VPC endpoints, while ensuring that all network requirements were met and firewalls were appropriately configured, can be quite daunting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Insufficient Local Compute Resources&lt;/strong&gt;: Another issue was the level of compute resources the local environment must provide to host the AWS DataSync agent. If the minimum requirements are not met, you can still proceed with the migration, but you may run into performance issues. In such cases, traditional methods like robocopy are an alternative.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IP Address Allocation&lt;/strong&gt;: AWS DataSync creates Elastic Network Interfaces (ENIs) in your chosen VPC for each task, each consuming an IP address from your subnet. It was crucial to ensure that sufficient free IP addresses were available in the subnets to accommodate these ENIs and prevent IP address exhaustion.&lt;/p&gt;
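
&lt;p&gt;A quick way to spot this risk up front is to check how many free addresses remain in the subnets the tasks will use; a small sketch with a placeholder subnet ID:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# Report the free IP addresses left in the subnets DataSync will place ENIs into.
response = ec2.describe_subnets(SubnetIds=["subnet-0123456789abcdef0"])  # placeholder
for subnet in response["Subnets"]:
    print(subnet["SubnetId"], subnet["AvailableIpAddressCount"], "free IPs")&lt;/code&gt;&lt;/pre&gt;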

&lt;p&gt;&lt;strong&gt;Permissions in Active Directory Group&lt;/strong&gt;: With Amazon FSx joining the existing domain, we needed to carefully select the AD group that would receive administrative permissions. This decision is critical because it cannot be changed after the FSx file system is created, so careful consideration was required when choosing the AD groups that would manage FSx in the future.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Reflecting on our migration journey, we've come to recognize several crucial takeaways and successes.&lt;/p&gt;

&lt;p&gt;AWS DataSync proved to be an exceptional tool for orchestrating large-scale data migrations from multiple on-premises locations to AWS. The ability to consolidate all migration tasks in a single dashboard, regardless of geographical location, offers an unmatched level of organization and visibility. Combined with comprehensive logging and monitoring capabilities, this allowed us to maintain tight control over our migration process.&lt;/p&gt;

&lt;p&gt;Choosing Amazon FSx for Windows File Server was indeed a strategic decision that aligned with our client's requirements and expectations. FSx provided a seamless transition from their familiar local Windows File Servers to the cloud. This ensured minimal disruption to their business operations, and allowed them to continue using file shares just as they were accustomed to, but with the added advantages of AWS' scalability, reliability, and robust security.&lt;/p&gt;

&lt;p&gt;Our most significant achievement was successfully migrating vast amounts of data across the globe to AWS with near-zero downtime. This feat, executed effectively with a keen eye on minimizing business impact, marks a pivotal moment in our client's digital transformation journey.&lt;/p&gt;

&lt;p&gt;Moving forward, we are confident in our ability to leverage AWS DataSync and Amazon FSx for Windows File Server to assist other businesses in their migration endeavors. As with any ambitious project, challenges are inevitable. Still, we have shown that through careful planning, robust technology, and a strong understanding of our infrastructure, even the most daunting hurdles can be overcome. Our experience stands testament to the power of cloud technology in transforming business landscapes, and we're excited to continue helping organizations navigate their unique paths to the cloud.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>migration</category>
      <category>awsdatasync</category>
      <category>amazonfsx</category>
    </item>
  </channel>
</rss>
