<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Luis Parraguez</title>
    <description>The latest articles on DEV Community by Luis Parraguez (@lparraguez).</description>
    <link>https://dev.to/lparraguez</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1107197%2F741530a8-20b6-41bf-8f7d-12bc0482bcf1.jpeg</url>
      <title>DEV Community: Luis Parraguez</title>
      <link>https://dev.to/lparraguez</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/lparraguez"/>
    <language>en</language>
    <item>
      <title>Do You need an AI ASSISTANT? Let’s build it using Amazon Q! (Part 2)</title>
      <dc:creator>Luis Parraguez</dc:creator>
      <pubDate>Sat, 03 Feb 2024 23:31:08 +0000</pubDate>
      <link>https://dev.to/aws-builders/do-you-need-an-ai-assistant-lets-build-it-using-amazon-q-part-2-54fe</link>
      <guid>https://dev.to/aws-builders/do-you-need-an-ai-assistant-lets-build-it-using-amazon-q-part-2-54fe</guid>
      <description>&lt;p&gt;In my &lt;a href="https://dev.to/aws-builders/do-you-need-an-ai-assistant-lets-build-it-using-amazon-q-part-1-3bh5"&gt;previous post&lt;/a&gt;, I shared how to implement an &lt;strong&gt;AI assistant&lt;/strong&gt; using the features available in the recently launched &lt;strong&gt;Amazon Q for Business service&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The overall scope was to build that &lt;strong&gt;AI assistant&lt;/strong&gt; to take advantage of a &lt;strong&gt;Knowledge Base&lt;/strong&gt; that includes information from &lt;strong&gt;the internet, document repositories and uploaded relevant documents&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Up to that point, our &lt;strong&gt;AI Assistant&lt;/strong&gt; was already operating for &lt;strong&gt;internal use&lt;/strong&gt;. We now need to take the next step and &lt;strong&gt;DEPLOY the AI Assistant to our organization&lt;/strong&gt; so that &lt;strong&gt;multiple users&lt;/strong&gt; can leverage its capabilities with the &lt;strong&gt;required security measures in place&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Let’s see now the &lt;strong&gt;process and lessons learned&lt;/strong&gt; during the &lt;strong&gt;deployment&lt;/strong&gt; of an &lt;strong&gt;AI Assistant&lt;/strong&gt; using &lt;strong&gt;Amazon Q for Business&lt;/strong&gt;!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integrating our AI Assistant with an Identity Provider&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The first step to deploy the &lt;strong&gt;Amazon Q web experience&lt;/strong&gt;, the web user interface we use to interact with our &lt;strong&gt;AI Assistant&lt;/strong&gt;, is to integrate it with a &lt;strong&gt;system/service&lt;/strong&gt; that will &lt;strong&gt;authenticate and authorize the access of users&lt;/strong&gt; in our organization.&lt;/p&gt;

&lt;p&gt;Similar to the &lt;strong&gt;various alternatives&lt;/strong&gt; available to integrate with &lt;strong&gt;data sources&lt;/strong&gt;, Amazon Q offers several alternatives for an &lt;strong&gt;identity provider (IdP)&lt;/strong&gt; that is compliant with &lt;strong&gt;SAML 2.0&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;identity providers&lt;/strong&gt; currently supported by Amazon Q include AWS IAM Identity Center, Okta, Microsoft Entra ID, and Ping Identity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon Q&lt;/strong&gt; acts as a &lt;strong&gt;service provider (SP)&lt;/strong&gt; that &lt;strong&gt;requests user authentication and authorization&lt;/strong&gt; from an &lt;strong&gt;identity provider (IdP)&lt;/strong&gt;. The &lt;strong&gt;IdP authenticates&lt;/strong&gt; the user’s identity and &lt;strong&gt;provides attributes&lt;/strong&gt; about the user to &lt;strong&gt;Amazon Q&lt;/strong&gt;. &lt;strong&gt;Amazon Q&lt;/strong&gt; then &lt;strong&gt;authorizes the user’s session&lt;/strong&gt; based on these attributes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security Assertion Markup Language (SAML)&lt;/strong&gt; is used to transfer &lt;strong&gt;user identity data&lt;/strong&gt; between the IdP and Amazon Q in a standardized way. Some key points about integrating an IdP with Amazon Q:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Authentication&lt;/strong&gt; confirms a user’s identity by verifying they are who they say they are;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Authorization&lt;/strong&gt; grants users specific permissions or access to resources;&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;IdP&lt;/strong&gt; stores, manages and verifies user identities for applications like &lt;strong&gt;Amazon Q&lt;/strong&gt;;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Amazon Q&lt;/strong&gt; uses service-initiated single sign-on (SSO) to authenticate users. &lt;strong&gt;IdP-initiated SSO is not supported&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For our use case, I decided to use &lt;strong&gt;AWS IAM Identity Center&lt;/strong&gt; as our &lt;strong&gt;Identity Provider&lt;/strong&gt;. We need to complete a few preparation steps before proceeding with the integration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enabling an IAM Identity Center instance&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We need to enable &lt;strong&gt;IAM Identity Center&lt;/strong&gt; using the service console:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv1xev48t3ojjoglrvjpy.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv1xev48t3ojjoglrvjpy.jpg" alt="Image description" width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point there is an important decision to make about the &lt;strong&gt;type of instance&lt;/strong&gt; for IAM Identity Center: &lt;strong&gt;organization instances&lt;/strong&gt; or &lt;strong&gt;account instances&lt;/strong&gt;. Let’s briefly review the characteristics of each one:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Organization instances of IAM Identity Center:&lt;/strong&gt; When we enable IAM Identity Center in conjunction with &lt;strong&gt;AWS Organizations&lt;/strong&gt;, we are creating an &lt;strong&gt;organization instance&lt;/strong&gt; of IAM Identity Center. An &lt;strong&gt;organization instance&lt;/strong&gt; is the &lt;strong&gt;primary method&lt;/strong&gt; of enabling IAM Identity Center as it provides support for &lt;strong&gt;all features&lt;/strong&gt; of IAM Identity Center including managing permissions for multiple AWS accounts in your organization and &lt;strong&gt;assigning access to customer managed applications&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Account instances of IAM Identity Center:&lt;/strong&gt; Account instances are bound to a single AWS account and are used only to manage user and group access for supported applications in the same account and AWS Region. Supported applications are AWS managed applications and OIDC-based customer managed applications. &lt;strong&gt;OpenID Connect (OIDC)&lt;/strong&gt; is a standard for identity federation.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
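&lt;p&gt;As an illustration only (not part of the original walkthrough), a small Python sketch using &lt;strong&gt;boto3&lt;/strong&gt; can check whether an IAM Identity Center instance is already enabled in the account. The helper takes the client as a parameter so it can be exercised without real AWS credentials; the &lt;strong&gt;sso-admin&lt;/strong&gt; client and its &lt;strong&gt;list_instances&lt;/strong&gt; call are the assumed API here:&lt;/p&gt;

```python
# Sketch: detect an existing IAM Identity Center instance. The client is
# injected so the helper can be tested without AWS credentials.

def find_identity_center_instance(sso_admin_client):
    """Return the ARN of the first IAM Identity Center instance, or None."""
    response = sso_admin_client.list_instances()
    instances = response.get("Instances", [])
    if not instances:
        return None  # no instance enabled yet: enable one in the console first
    return instances[0]["InstanceArn"]

# Real usage would be:
#   import boto3
#   arn = find_identity_center_instance(boto3.client("sso-admin"))
```

&lt;p&gt;An empty result means the instance still needs to be enabled, as shown in the console steps above.&lt;/p&gt;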

&lt;p&gt;Looking at a summary of the capabilities available for each instance type:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fau864qzty6tnolrm3598.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fau864qzty6tnolrm3598.jpg" alt="Image description" width="800" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Based on the previous definitions, we need to select an &lt;strong&gt;organization instance of IAM Identity Center&lt;/strong&gt; for at least two reasons: (1) we need to be able to integrate with &lt;strong&gt;customer managed applications&lt;/strong&gt; (as is the case of our &lt;strong&gt;Amazon Q for Business application&lt;/strong&gt;), and (2) we need the authentication and authorization process to use the &lt;strong&gt;SAML&lt;/strong&gt; standard. Neither requirement is supported by an &lt;strong&gt;account instance of IAM Identity Center&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If you hadn’t created an &lt;strong&gt;AWS Organization&lt;/strong&gt; before, enabling an &lt;strong&gt;organization instance&lt;/strong&gt; will automatically create one in the background, assigning the account that we are using as the &lt;strong&gt;management account&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;After the creation of the &lt;strong&gt;organization instance&lt;/strong&gt;, we need to confirm our &lt;strong&gt;identity source&lt;/strong&gt;. The &lt;strong&gt;identity source&lt;/strong&gt; is where we administer users and groups, and it is the service that authenticates our users. By default, &lt;strong&gt;IAM Identity Center&lt;/strong&gt; creates an &lt;strong&gt;Identity Center directory&lt;/strong&gt;. For our use case we will use this default option.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbo238ja035vjxswxuwv.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbo238ja035vjxswxuwv.jpg" alt="Image description" width="800" height="242"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note the &lt;strong&gt;AWS access portal URL&lt;/strong&gt; shown in the previous image. It’s important to highlight that our &lt;strong&gt;AI Assistant&lt;/strong&gt; will appear in that portal, but &lt;strong&gt;we won’t be able to access it using the portal&lt;/strong&gt;. This is because &lt;strong&gt;IdP-initiated SSO is not supported&lt;/strong&gt; by Amazon Q. We will see later how to access our AI Assistant.&lt;/p&gt;

&lt;p&gt;Moving forward, we need to create at least &lt;strong&gt;one valid user&lt;/strong&gt; with a &lt;strong&gt;valid e-mail address&lt;/strong&gt; and, optionally but highly recommended, create &lt;strong&gt;groups&lt;/strong&gt; and assign users to them. &lt;strong&gt;Users and/or groups&lt;/strong&gt; will be used later to enable &lt;strong&gt;access to our AI assistant&lt;/strong&gt;.&lt;/p&gt;
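&lt;p&gt;For illustration, creating the group, the user and the membership could also be scripted with &lt;strong&gt;boto3&lt;/strong&gt;’s &lt;strong&gt;identitystore&lt;/strong&gt; client (an assumption on my side, not part of the console flow described here; all names and the e-mail address below are hypothetical):&lt;/p&gt;

```python
# Sketch: provision one test group with one test user in the Identity Center
# directory. The identitystore client is injected for testability; every
# name, e-mail and ID here is a hypothetical placeholder.

def provision_test_user(ids_client, identity_store_id):
    """Create one group with one user; return (group_id, user_id)."""
    group = ids_client.create_group(
        IdentityStoreId=identity_store_id,
        DisplayName="ai-assistant-testers",  # hypothetical group name
    )
    user = ids_client.create_user(
        IdentityStoreId=identity_store_id,
        UserName="test.user",
        DisplayName="Test User",
        Name={"GivenName": "Test", "FamilyName": "User"},
        Emails=[{"Value": "test.user@example.com", "Primary": True}],
    )
    ids_client.create_group_membership(
        IdentityStoreId=identity_store_id,
        GroupId=group["GroupId"],
        MemberId={"UserId": user["UserId"]},
    )
    return group["GroupId"], user["UserId"]

# Real usage would pass boto3.client("identitystore") and the identity store
# ID shown in the IAM Identity Center settings page.
```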

&lt;p&gt;For our &lt;strong&gt;use case&lt;/strong&gt;, we created &lt;strong&gt;one group with one user&lt;/strong&gt; to test the access, including the &lt;strong&gt;activation of MFA&lt;/strong&gt; as a &lt;strong&gt;security measure&lt;/strong&gt; already available as part of the process:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flsl9hbfhpyi314raih5f.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flsl9hbfhpyi314raih5f.jpg" alt="Image description" width="800" height="315"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this moment, we are ready to begin the integration of our &lt;strong&gt;AI Assistant&lt;/strong&gt; with &lt;strong&gt;IAM Identity Center&lt;/strong&gt;. To start the process, we need to come back to the &lt;strong&gt;Amazon Q console&lt;/strong&gt; and “Edit” the &lt;strong&gt;Web Experience settings&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3t9g9tdc0uc3akuzaiqd.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3t9g9tdc0uc3akuzaiqd.jpg" alt="Image description" width="800" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Following the steps detailed in &lt;a href="https://docs.aws.amazon.com/amazonq/latest/business-use-dg/idp-sso.html"&gt;Setting up Amazon Q with IAM Identity Center as identity provider — Amazon Q&lt;/a&gt;, we will complete the configuration of both &lt;strong&gt;Amazon Q&lt;/strong&gt; and its integration with &lt;strong&gt;IAM Identity Center&lt;/strong&gt;, &lt;strong&gt;carefully&lt;/strong&gt; executing a series of steps in both &lt;strong&gt;service consoles&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;At the end of this process, in &lt;strong&gt;IAM Identity Center&lt;/strong&gt;, we will have our &lt;strong&gt;AI Assistant application&lt;/strong&gt; configured with the &lt;strong&gt;assignment of authorized group/user&lt;/strong&gt;, as you can see in the following image:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6pvz7gun8swyccougqs.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6pvz7gun8swyccougqs.jpg" alt="Image description" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the &lt;strong&gt;Amazon Q&lt;/strong&gt; side, the &lt;strong&gt;Web experience settings&lt;/strong&gt; section shows the &lt;strong&gt;service role&lt;/strong&gt; created to authorize Amazon Q to provision the resources required for the deployment and, most importantly, the &lt;strong&gt;Deployed URL, which is the URL that users will use to access our AI assistant application!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxaz1vrtvh1bsvv90be8c.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxaz1vrtvh1bsvv90be8c.jpg" alt="Image description" width="800" height="138"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we are ready to test our &lt;strong&gt;user’s&lt;/strong&gt; access to our &lt;strong&gt;AI assistant&lt;/strong&gt; application. To do that, we can click on the &lt;strong&gt;Deployed URL&lt;/strong&gt; and start the process. As you can see in the following image, the integration with &lt;strong&gt;IAM Identity Center&lt;/strong&gt; will be triggered and we will need to &lt;strong&gt;sign in&lt;/strong&gt; to the application:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz3w6qdrwcdw9dj9umdvy.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz3w6qdrwcdw9dj9umdvy.jpg" alt="Image description" width="504" height="655"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And &lt;strong&gt;take advantage of MFA&lt;/strong&gt; to &lt;strong&gt;enhance the security&lt;/strong&gt; of our application:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwi0ejozvfpi4tzhfsvpw.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwi0ejozvfpi4tzhfsvpw.jpg" alt="Image description" width="513" height="699"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After completing this process, we get &lt;strong&gt;authorized access&lt;/strong&gt; to our &lt;strong&gt;AI assistant&lt;/strong&gt; and the user is &lt;strong&gt;READY TO GO to explore the use cases we described in the previous post!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1tyk6ewtm5aem0mroy8m.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1tyk6ewtm5aem0mroy8m.jpg" alt="Image description" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point, we have accomplished the objective to &lt;strong&gt;deploy our AI assistant&lt;/strong&gt; using a &lt;strong&gt;WEB EXPERIENCE&lt;/strong&gt; user interface with access enabled for our &lt;strong&gt;authorized user&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;From here, once we define our &lt;strong&gt;actual team of users&lt;/strong&gt;, we can extend the configuration of &lt;strong&gt;IAM Identity Center&lt;/strong&gt; and, through the proper &lt;strong&gt;communication channels&lt;/strong&gt; and in &lt;strong&gt;compliance with corporate policies&lt;/strong&gt;, release access to our &lt;strong&gt;AI assistant to a broader group&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If your organization uses a different identity provider, such as &lt;strong&gt;Microsoft Entra ID (formerly Azure Active Directory)&lt;/strong&gt;, you can perform the required configuration as needed.&lt;/p&gt;

&lt;p&gt;As in the case of &lt;strong&gt;data&lt;/strong&gt;, I believe that &lt;strong&gt;these integration capabilities of Amazon Q with different identity providers&lt;/strong&gt; help to &lt;strong&gt;mitigate adoption barriers&lt;/strong&gt; and contribute to &lt;strong&gt;fulfilling organizations’ strong expectation&lt;/strong&gt; of &lt;strong&gt;secure access to an AI assistant that exposes very valuable corporate information and documentation&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Now that we have completed this deployment step, &lt;strong&gt;what could be the next step?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What about integrating our &lt;strong&gt;corporate applications and/or platforms&lt;/strong&gt; with the AI assistant not through a web interface but through &lt;strong&gt;APIs&lt;/strong&gt;?&lt;/p&gt;

&lt;p&gt;This could be a very interesting example of &lt;strong&gt;using our AI assistant’s insights&lt;/strong&gt; to &lt;strong&gt;enhance the execution of our business/IT processes&lt;/strong&gt; and our &lt;strong&gt;team’s collaboration, don’t you agree?&lt;/strong&gt;&lt;/p&gt;
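&lt;p&gt;As a small teaser for that next step, here is a minimal sketch of what such an API call could look like, assuming &lt;strong&gt;boto3&lt;/strong&gt;’s &lt;strong&gt;qbusiness&lt;/strong&gt; client and its &lt;strong&gt;chat_sync&lt;/strong&gt; operation (the application and user IDs below are hypothetical placeholders):&lt;/p&gt;

```python
# Sketch: send one question to the AI assistant through the API instead of
# the web experience. The qbusiness client is injected for testability.

def ask_assistant(qbusiness_client, application_id, user_id, question):
    """Send one question to the AI assistant and return its answer text."""
    response = qbusiness_client.chat_sync(
        applicationId=application_id,
        userId=user_id,
        userMessage=question,
    )
    return response.get("systemMessage", "")

# Real usage would be:
#   import boto3
#   answer = ask_assistant(boto3.client("qbusiness"), "app-id", "user-id",
#                          "What are the current FSI industry trends?")
```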

&lt;p&gt;&lt;strong&gt;Let’s meet again in our next post and please feel free to share your feedback and comments!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>genai</category>
      <category>aws</category>
      <category>data</category>
    </item>
    <item>
      <title>Do You need an AI ASSISTANT? Let's build it using Amazon Q! (Part 1)</title>
      <dc:creator>Luis Parraguez</dc:creator>
      <pubDate>Tue, 23 Jan 2024 00:30:28 +0000</pubDate>
      <link>https://dev.to/aws-builders/do-you-need-an-ai-assistant-lets-build-it-using-amazon-q-part-1-3bh5</link>
      <guid>https://dev.to/aws-builders/do-you-need-an-ai-assistant-lets-build-it-using-amazon-q-part-1-3bh5</guid>
      <description>&lt;p&gt;Generative AI is a branch of artificial intelligence that can create new content or data from scratch, such as text, images, audio, or video. It is powered by deep learning models that learn from large amounts of data and generate novel outputs based on a given input or context. &lt;strong&gt;One of the most exciting applications of generative AI is building AI assistants&lt;/strong&gt; that can interact with customers or employees in natural language, provide personalized information or recommendations, and automate tasks or processes.&lt;/p&gt;

&lt;p&gt;We are already seeing the launch in the market of &lt;strong&gt;AI assistants, using a chat bot experience&lt;/strong&gt;, built to answer questions in natural language using our own personal or company documents as the information source, with the option to also use the internet. In this way, we can get &lt;strong&gt;specialized/customized answers&lt;/strong&gt; to our questions within a &lt;strong&gt;personal/business context&lt;/strong&gt;, allowing us to get more &lt;strong&gt;precise and specific insights&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;To implement an &lt;strong&gt;AI Assistant&lt;/strong&gt; there are key functional and technical steps that we need to follow:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Define the use case and the target audience:&lt;/strong&gt; The first step is to identify the specific problem or opportunity that the AI Assistant will address, and the characteristics and needs of the customers or employees who will use it. As a &lt;strong&gt;sample use case&lt;/strong&gt;, let’s say we need an AI Assistant able to provide us with information about &lt;strong&gt;two domains&lt;/strong&gt;: FSI industry trends and AWS services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Collect and prepare the data:&lt;/strong&gt; The second step is to gather and process the data that will be used to &lt;strong&gt;enrich the answers&lt;/strong&gt; and evaluate the generative AI Assistant. The data should be relevant, diverse, and high-quality, and should reflect the domains and the context of our use case. For our sample use case, let’s assume that the requirement is to include information from the &lt;strong&gt;internet, document repositories and to be able to add ad-hoc relevant documents&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Choose and deploy the generative AI platform:&lt;/strong&gt; The third step is to select and implement the generative AI platform that will enable the creation and management of the AI Assistant. The platform should be scalable, secure, and easy to use, and should provide tools and features for data ingestion, testing, and deployment. &lt;strong&gt;For our use case, we selected Amazon Q for Business as the platform to build our AI Assistant&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Design, build and test the AI Assistant:&lt;/strong&gt; The fourth step is to design, build and test the AI assistant, ensuring that it meets the functional and non-functional requirements, such as performance, accuracy, reliability, usability, and ethics. The AI assistant should be evaluated and validated by real users, and feedback should be collected and incorporated.&lt;/p&gt;

&lt;p&gt;Let’s see now the &lt;strong&gt;process and lessons learned&lt;/strong&gt; during the creation of an &lt;strong&gt;AI Assistant&lt;/strong&gt; using &lt;strong&gt;Amazon Q for Business&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;First, what is Amazon Q for Business?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Q for Business is a fully managed, &lt;strong&gt;generative AI-powered enterprise chat assistant&lt;/strong&gt; created by Amazon. It allows organizations to deploy an AI agent within their company to enhance employee productivity. Amazon Q for Business is tailored specifically for organizational use by allowing administrators to connect internal systems and limit access based on user permissions and groups. This ensures employees only get information relevant to their roles from trusted sources within the company.&lt;/p&gt;

&lt;p&gt;Let’s start then with the creation of our &lt;strong&gt;AI Assistant&lt;/strong&gt;!&lt;/p&gt;

&lt;p&gt;I will show the AI Assistant creation process using the &lt;strong&gt;AWS Console&lt;/strong&gt;, which assumes that you have an &lt;strong&gt;AWS account&lt;/strong&gt; and an &lt;strong&gt;IAM user with the administration privileges&lt;/strong&gt; required to provision resources along the way. Alternatively, it is also possible to execute this process using the &lt;strong&gt;AWS CLI&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;To create our &lt;strong&gt;AI Assistant&lt;/strong&gt;, we need to create first an &lt;strong&gt;Amazon Q Application&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Let’s sign into the &lt;strong&gt;AWS Management Console&lt;/strong&gt; and open the &lt;a href="https://console.aws.amazon.com/amazonq/"&gt;Amazon Q console&lt;/a&gt; using the administrative user previously created.&lt;/p&gt;

&lt;p&gt;We need first to configure the &lt;strong&gt;Amazon Q Application&lt;/strong&gt; initial &lt;strong&gt;settings&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fukrh4huw4obrn6jy1znt.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fukrh4huw4obrn6jy1znt.jpg" alt="Image description" width="800" height="154"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note in the image above:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Service Access Role:&lt;/strong&gt; IAM role for Amazon Q to allow it to access the AWS resources it needs to create our application. We can choose to use an existing role or create a new role. The policy associated with this role will also allow &lt;strong&gt;Amazon Q&lt;/strong&gt; to publish information to &lt;strong&gt;CloudWatch Logs&lt;/strong&gt; that will allow us to monitor the data ingestion processes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;KMS Encryption Key:&lt;/strong&gt; Amazon Q encrypts our data by default using &lt;strong&gt;AWS managed KMS keys&lt;/strong&gt;, option selected for this use case. Alternatively, you can use a &lt;strong&gt;customer-managed key&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Not shown above, but requested during the creation process: it is &lt;strong&gt;highly recommended&lt;/strong&gt; to include &lt;strong&gt;Application Tags&lt;/strong&gt; to identify &lt;strong&gt;all the resources linked to our Amazon Q application&lt;/strong&gt;. As an example, in our case, we used the combination &lt;strong&gt;“Created_By | bbold-amazonq-app”&lt;/strong&gt; for all the resources created.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
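&lt;p&gt;For readers preferring the &lt;strong&gt;AWS CLI/SDK&lt;/strong&gt; route mentioned earlier, here is a minimal sketch of the same initial settings using &lt;strong&gt;boto3&lt;/strong&gt;’s &lt;strong&gt;qbusiness&lt;/strong&gt; client (an assumption on my side, not the console flow shown above; omitting the encryption configuration keeps the default AWS managed KMS key):&lt;/p&gt;

```python
# Sketch: create the Amazon Q application with a service access role and
# application tags, leaving encryption on the default AWS managed KMS key.
# The client is injected for testability; names and ARNs are hypothetical.

def create_assistant_app(qbusiness_client, role_arn):
    """Create the Amazon Q application and return its ID."""
    response = qbusiness_client.create_application(
        displayName="bbold-amazonq-app",  # hypothetical application name
        roleArn=role_arn,                 # service access role from the console
        tags=[{"key": "Created_By", "value": "bbold-amazonq-app"}],
    )
    return response["applicationId"]

# Real usage would pass boto3.client("qbusiness") and the ARN of the
# service access role created or selected in the settings above.
```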

&lt;p&gt;Next, we need to create and select a &lt;strong&gt;Retriever&lt;/strong&gt; for our Amazon Q application. An &lt;strong&gt;Amazon Q retriever&lt;/strong&gt; determines where an Amazon Q conversational agent will search for answers to users' questions. When creating an Amazon Q application, a retriever must be selected. The retriever connects the application to external data sources containing information that can be used to respond to questions. There are two different types of retrievers that can be chosen:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Native retriever:&lt;/strong&gt; Allows connecting directly to data sources like knowledge bases, documentation repositories, or databases using &lt;strong&gt;Amazon Q data connectors&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon Kendra retriever:&lt;/strong&gt; Connects to an existing &lt;strong&gt;Amazon Kendra index&lt;/strong&gt; to query its data. &lt;strong&gt;Amazon Kendra&lt;/strong&gt; is an intelligent search service powered by machine learning. Amazon Kendra allows organizations to index documents from multiple sources and provide a unified search experience for internal information.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s see the key &lt;strong&gt;Retriever settings&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl28giyzm64gywnzw7o2e.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl28giyzm64gywnzw7o2e.jpg" alt="Image description" width="800" height="145"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note in the image above:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Retriever:&lt;/strong&gt; We chose &lt;strong&gt;"Use native retriever"&lt;/strong&gt; to build an &lt;strong&gt;Amazon Q retriever&lt;/strong&gt; for our Amazon Q application. When we select the native retriever for an Amazon Q application, &lt;strong&gt;Amazon Q will create an index&lt;/strong&gt; to connect to and organize the data sources configured for the application. While the &lt;strong&gt;native retriever index is not an Amazon Kendra index&lt;/strong&gt;, both serve a similar purpose of housing and organizing content for retrieval. The main difference is the &lt;strong&gt;native retriever index&lt;/strong&gt; is managed internally by &lt;strong&gt;Amazon Q&lt;/strong&gt;, whereas a Kendra index would be a separate service integration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Index provisioning:&lt;/strong&gt; When creating an Amazon Q application, the &lt;strong&gt;index provisioning number of units&lt;/strong&gt; refers to the &lt;strong&gt;storage capacity allocated for the application's index&lt;/strong&gt;. Each unit in the index corresponds to &lt;strong&gt;20,000 documents&lt;/strong&gt; that can be stored. Each storage unit includes 100 hours of connector usage per month. The first storage unit is available at no charge for the lesser of 750 hours or 31 days. We selected 1 storage unit to &lt;strong&gt;try out Amazon Q without incurring charges during the initial evaluation period&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Similarly, we included &lt;strong&gt;Retriever Tags&lt;/strong&gt; and &lt;strong&gt;Index Tags&lt;/strong&gt;, using the same combination of the &lt;strong&gt;Application tags&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
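&lt;p&gt;The same retriever and index settings can be sketched with &lt;strong&gt;boto3&lt;/strong&gt;’s &lt;strong&gt;qbusiness&lt;/strong&gt; client, assuming its &lt;strong&gt;create_index&lt;/strong&gt; and &lt;strong&gt;create_retriever&lt;/strong&gt; operations (display names below are hypothetical):&lt;/p&gt;

```python
# Sketch: provision a 1-unit index and a native retriever on top of it,
# mirroring the console choices above. The client is injected for testability.

def create_native_retriever(qbusiness_client, application_id):
    """Create a 1-unit index and a native retriever; return their IDs."""
    index = qbusiness_client.create_index(
        applicationId=application_id,
        displayName="bbold-amazonq-index",     # hypothetical name
        capacityConfiguration={"units": 1},    # 1 unit = 20,000 documents
    )
    retriever = qbusiness_client.create_retriever(
        applicationId=application_id,
        type="NATIVE_INDEX",                   # the "Use native retriever" option
        displayName="bbold-amazonq-retriever", # hypothetical name
        configuration={
            "nativeIndexConfiguration": {"indexId": index["indexId"]}
        },
    )
    return index["indexId"], retriever["retrieverId"]
```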

&lt;p&gt;Now, we are ready to connect with our &lt;strong&gt;data sources&lt;/strong&gt;! &lt;strong&gt;Amazon Q application data sources&lt;/strong&gt; are repositories of information that can be connected to an Amazon Q application to power the conversational agent's responses. When a user asks a question through the &lt;strong&gt;Amazon Q chat interface&lt;/strong&gt;, the system will search across connected data sources to &lt;strong&gt;find relevant answers and responses&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Using the &lt;strong&gt;Amazon Q retriever&lt;/strong&gt;, we can select several &lt;strong&gt;cloud and on-premises data sources&lt;/strong&gt;, as we can see in the following images:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2y5qnuusvsv74x2iiavp.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2y5qnuusvsv74x2iiavp.jpg" alt="Image description" width="800" height="151"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxeypj10r95xztrenf7i.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxeypj10r95xztrenf7i.jpg" alt="Image description" width="800" height="625"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From my point of view, this is a &lt;strong&gt;VERY RELEVANT CAPABILITY&lt;/strong&gt; of &lt;strong&gt;Amazon Q for Business&lt;/strong&gt;: the power of our AI Assistant will depend on the &lt;strong&gt;variety and quality of the information sources&lt;/strong&gt; that we connect, so it is critical that organizations can &lt;strong&gt;securely leverage their information where it currently resides&lt;/strong&gt;. This also contributes a lot to &lt;strong&gt;eliminating adoption barriers&lt;/strong&gt; and &lt;strong&gt;reducing implementation time by avoiding non-priority information migration efforts&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;When we are talking, for example, about business, technical, and other types of &lt;strong&gt;corporate documents&lt;/strong&gt;, we will frequently find them stored in repositories like &lt;strong&gt;S3, SharePoint, OneDrive, or Google Drive&lt;/strong&gt;. &lt;strong&gt;Technical/code repositories&lt;/strong&gt;, in turn, will typically be stored in a &lt;strong&gt;GitHub repository&lt;/strong&gt;. &lt;strong&gt;The GOOD NEWS is that ALL these data sources are already covered by Amazon Q!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Recalling the requirements of our use case: &lt;strong&gt;&lt;em&gt;“…include information from the internet, document repositories and to be able to add ad-hoc relevant documents.”&lt;/em&gt;&lt;/strong&gt;, I was able to support this demand using the &lt;strong&gt;“Most Popular” data sources&lt;/strong&gt; shown above. Let’s look at each one of them:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;REQ #1: Including information from the Internet&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To include information from the web, we will add a data source of type &lt;strong&gt;WEB CRAWLER&lt;/strong&gt;. An Amazon Q Web Crawler connector crawls and indexes either public-facing websites or internal company websites that use HTTPS. When selecting websites to index, we must adhere to the &lt;a href="https://aws.amazon.com/aup/?icmpid=docs_console_unmapped"&gt;Amazon Acceptable Use Policy&lt;/a&gt; and crawl only our own web pages or web pages we are authorized to index.&lt;/p&gt;
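&lt;p&gt;Before pointing any crawler at a site, it is also good practice to check what the site's robots.txt allows. This is not part of the Amazon Q setup itself, just a quick sanity check using Python's standard library; the robots.txt content below is hypothetical:&lt;/p&gt;

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content for a site we are authorized to crawl
robots_lines = [
    "User-agent: *",
    "Disallow: /private/",
    "Allow: /",
]

parser = RobotFileParser()
parser.parse(robots_lines)

print(parser.can_fetch("*", "https://www.example.com/en/"))        # True
print(parser.can_fetch("*", "https://www.example.com/private/x"))  # False
```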

&lt;p&gt;Now let’s follow the &lt;strong&gt;typical process to add a data source&lt;/strong&gt;: We start by specifying the &lt;strong&gt;name&lt;/strong&gt; of our data source and the &lt;strong&gt;source of the URLs that we want Amazon Q to index&lt;/strong&gt;. As you can see in the image below, there are multiple options, from &lt;strong&gt;specifying a list of URLs&lt;/strong&gt; directly in the console to specifying &lt;strong&gt;Sitemaps&lt;/strong&gt; (Sitemaps list the URLs available for crawling on a site, helping crawlers comprehensively retrieve and index its content).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyomb652bo44yjc7q8d3n.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyomb652bo44yjc7q8d3n.jpg" alt="Image description" width="800" height="519"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For our use case, I decided to use the option to provide a &lt;strong&gt;Source URLs file&lt;/strong&gt; stored in an &lt;strong&gt;S3 Bucket&lt;/strong&gt;, as we will also use the same bucket as a &lt;strong&gt;complementary data source&lt;/strong&gt; later. The Source URLs file is just a text file that includes one URL per line, with up to 100 starting-point URLs. As an example, we included in the file a list of URLs from our own website:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.bbold.com.br/"&gt;https://www.bbold.com.br/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.bbold.com.br/en/"&gt;https://www.bbold.com.br/en/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.bbold.com.br/single-post/"&gt;https://www.bbold.com.br/single-post/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.bbold.com.br/en/single-post/"&gt;https://www.bbold.com.br/en/single-post/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
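&lt;p&gt;A Source URLs file like the one above can be generated and validated with a few lines of Python before uploading it to the bucket (the bucket name in the final comment is a placeholder):&lt;/p&gt;

```python
# Build and validate a Source URLs file: a plain-text file with one URL per
# line and at most 100 starting-point URLs.
MAX_SEED_URLS = 100

seed_urls = [
    "https://www.bbold.com.br/",
    "https://www.bbold.com.br/en/",
    "https://www.bbold.com.br/single-post/",
    "https://www.bbold.com.br/en/single-post/",
]

def build_source_urls_file(urls, path="source-urls.txt"):
    if len(urls) > MAX_SEED_URLS:
        raise ValueError(f"at most {MAX_SEED_URLS} seed URLs are allowed")
    for url in urls:
        if not url.startswith(("http://", "https://")):
            raise ValueError(f"not an absolute URL: {url}")
    with open(path, "w") as f:
        f.write("\n".join(urls) + "\n")
    return path

build_source_urls_file(seed_urls)
# Then upload it to the bucket used by the data source, for example:
#   aws s3 cp source-urls.txt s3://your-bucket/source-urls.txt
```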

&lt;p&gt;After we define the list of URLs, we need to configure the &lt;strong&gt;security aspects&lt;/strong&gt; of our connection to them. For that, Amazon Q offers &lt;strong&gt;several authentication alternatives&lt;/strong&gt;. For this example, we will use &lt;strong&gt;“No Authentication”&lt;/strong&gt;, as we are crawling a &lt;strong&gt;public website&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F306mfu80wsnxsyx46p4w.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F306mfu80wsnxsyx46p4w.jpg" alt="Image description" width="800" height="269"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In case you need to index internal websites, Amazon Q also offers the option to use a &lt;strong&gt;Web Proxy&lt;/strong&gt; and to configure the &lt;strong&gt;authentication credentials&lt;/strong&gt; using &lt;strong&gt;AWS Secrets Manager&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Also in terms of security, for selected data source types, Amazon Q optionally allows you to &lt;strong&gt;configure a VPC and security group&lt;/strong&gt; that the Amazon Q data source connector will use to access your information source (for example, accessing an S3 bucket or a database through a specific VPC). Since our data source is accessible from the &lt;strong&gt;public internet&lt;/strong&gt;, we didn't need to enable the Amazon VPC feature.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw991fzukkw2s3mmsp7kw.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw991fzukkw2s3mmsp7kw.jpg" alt="Image description" width="800" height="130"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we need to configure the &lt;strong&gt;IAM Role for Amazon Q&lt;/strong&gt; to access our data source repository credentials and application content. Since this is the first time we are creating this data source, the recommendation is to create a &lt;strong&gt;new service Role&lt;/strong&gt; and provide a name for it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2oxzvy2c53m3zdvijsti.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2oxzvy2c53m3zdvijsti.jpg" alt="Image description" width="800" height="180"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the background, this will create an &lt;strong&gt;IAM role and a Customer Managed Policy&lt;/strong&gt; that allow Amazon Q to access the S3 bucket where the source URLs file is stored, access authentication credentials stored in &lt;strong&gt;AWS Secrets Manager&lt;/strong&gt; (if applicable), manage the processing of the documents to be ingested, and store information related to group and user access to those documents.&lt;/p&gt;

&lt;p&gt;Moving on, we need to define the &lt;strong&gt;Synchronization Scope&lt;/strong&gt; for the documents to be ingested. We can define a &lt;strong&gt;sync domain range&lt;/strong&gt; as seen in the following image:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxinzn6jj9057sp9giy94.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxinzn6jj9057sp9giy94.jpg" alt="Image description" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For our use case, we selected the option &lt;strong&gt;“Sync domains with subdomains only”&lt;/strong&gt; to prevent the scenario of indexing other third-party websites potentially linked from our company website.&lt;/p&gt;
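&lt;p&gt;A simplified sketch of what &lt;strong&gt;“Sync domains with subdomains only”&lt;/strong&gt; means in practice (the actual matching rules Amazon Q applies may be more nuanced than this host comparison):&lt;/p&gt;

```python
from urllib.parse import urlparse

def in_sync_scope(url, seed_url):
    """Simplified check for 'Sync domains with subdomains only': the URL's
    host must equal the seed host or be a subdomain of it."""
    seed_host = urlparse(seed_url).hostname or ""
    host = urlparse(url).hostname or ""
    return host == seed_host or host.endswith("." + seed_host)

seed = "https://www.bbold.com.br/"
print(in_sync_scope("https://www.bbold.com.br/en/", seed))  # True
print(in_sync_scope("https://aws.amazon.com/", seed))       # False: third-party site
```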

&lt;p&gt;Amazon Q also offers very interesting additional options regarding the scope of documents to be indexed &lt;strong&gt;(“Additional Configuration”)&lt;/strong&gt;. I would like to highlight some of them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;“Scope settings | Crawl depth”&lt;/strong&gt;: The number of levels from the seed URL that Amazon Q should crawl. This parameter is important to ensure that all the webpages that need to be indexed are effectively included. The recommendation here is to review the structure of the websites that you are planning to index and determine how many levels you need. For example, a &lt;strong&gt;crawl depth of “3”&lt;/strong&gt; means the crawler goes up to 3 levels deep from the seed URL: the seed URL itself (level 1), pages directly linked from the seed URL (level 2), and pages directly linked from level 2 pages (level 3).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;“Scope settings | Maximum file size”&lt;/strong&gt;: Where you define the maximum file size of a webpage or attachment to crawl. You need to calibrate this parameter based on your knowledge of the document sizes in your knowledge base.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;“Include files that web pages link to”&lt;/strong&gt;: When this option is selected in Amazon Q, it means that the crawler will index not only the content of the web pages specified in the seed URLs, but also any files that are linked from those web pages. This allows the full content being referenced from the web pages to be searchable (Some examples of files that may be linked from web pages include documents like PDFs, images, videos, audio files, and other attachments).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;“Crawl URL Patterns” and “URL Pattern to Index”&lt;/strong&gt;: Both parameters will help you to &lt;strong&gt;filter the scope of information to crawl and then to index&lt;/strong&gt;. Amazon Q will index the URLs that it crawled based on the crawl configuration. The &lt;strong&gt;“crawl URL patterns”&lt;/strong&gt; specify which URLs should be crawled to the specified crawl depth starting from the seed URLs. The &lt;strong&gt;“URL patterns to index”&lt;/strong&gt; configuration can further target which of the crawled URLs should be indexed and searchable.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
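&lt;p&gt;To make the crawl depth and index pattern semantics concrete, here is a small, self-contained simulation. The link graph and URLs are made up, and the pattern matching is a plain regex standing in for the console's “URL patterns to index” setting, so treat it as an illustration rather than Amazon Q's exact behavior:&lt;/p&gt;

```python
import re
from collections import deque

def crawl_plan(seed, links, depth_limit, index_pattern=None):
    """Simulate which pages a depth-limited crawl would visit and index.

    `links` maps each URL to the URLs it links to; `index_pattern` is a
    regex playing the role of a "URL pattern to index".
    """
    visited, indexed = [], []
    queue = deque([(seed, 1)])          # the seed URL is level 1
    seen = {seed}
    while queue:
        url, level = queue.popleft()
        visited.append(url)
        if index_pattern is None or re.search(index_pattern, url):
            indexed.append(url)
        if depth_limit - level > 0:     # stop expanding links at the depth limit
            for nxt in links.get(url, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, level + 1))
    return visited, indexed

site = {
    "https://example.com/": ["https://example.com/en/", "https://example.com/blog/"],
    "https://example.com/en/": ["https://example.com/en/post-1/"],
}
visited, indexed = crawl_plan("https://example.com/", site, depth_limit=2,
                              index_pattern=r"/en/")
print(visited)  # levels 1 and 2 only: the seed plus its directly linked pages
print(indexed)  # only the crawled URLs matching the index pattern
```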

&lt;p&gt;Once we are done with defining the &lt;strong&gt;sync scope and filters&lt;/strong&gt;, we need to work with the configurations related to the &lt;strong&gt;synchronization jobs execution&lt;/strong&gt;. For that we need to configure &lt;strong&gt;how and when&lt;/strong&gt; we need those processes to run:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy9oct518rxqnog406frm.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy9oct518rxqnog406frm.jpg" alt="Image description" width="800" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see above, for how to sync &lt;strong&gt;(Sync Mode)&lt;/strong&gt; we can choose between a &lt;strong&gt;full synchronization&lt;/strong&gt; and &lt;strong&gt;syncing only the updates&lt;/strong&gt;; we selected the second option for our use case. In terms of the schedule &lt;strong&gt;(Sync run schedule)&lt;/strong&gt;, you select the frequency of the synchronization, from hourly to monthly, a custom time, or On Demand. For our testing, we selected this last option.&lt;/p&gt;

&lt;p&gt;Similarly to what we did for the &lt;strong&gt;Amazon Q Application, Retriever and Index&lt;/strong&gt;, we can also specify &lt;strong&gt;Tags&lt;/strong&gt; for the &lt;strong&gt;Data Source&lt;/strong&gt;. Again, it’s &lt;strong&gt;highly recommended that you include tags&lt;/strong&gt; to identify all resources that are related to your Amazon Q application for &lt;strong&gt;cost management purposes&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjdlodjw8uvxme3m47m01.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjdlodjw8uvxme3m47m01.jpg" alt="Image description" width="800" height="144"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, Amazon Q shows the &lt;strong&gt;Fields Mapping&lt;/strong&gt; section. Amazon Q crawls data source document &lt;strong&gt;attributes or metadata&lt;/strong&gt; and maps them to &lt;strong&gt;fields in your Amazon Q index&lt;/strong&gt;. Amazon Q has reserved fields that it uses when querying your application. It shows the list of &lt;strong&gt;default attributes&lt;/strong&gt; mapped for both &lt;strong&gt;web pages&lt;/strong&gt; and &lt;strong&gt;attachments&lt;/strong&gt; (they can be customized after data source creation):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F40823voivurzb8iir524.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F40823voivurzb8iir524.jpg" alt="Image description" width="800" height="318"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With this, you can finish the configuration of the new &lt;strong&gt;WEBCRAWLER&lt;/strong&gt; data source with the &lt;strong&gt;“Add Data Source”&lt;/strong&gt; option. Having done that, we need to execute the &lt;strong&gt;first synchronization job&lt;/strong&gt;, which will run in &lt;strong&gt;Full sync mode&lt;/strong&gt; regardless of your configuration. After you complete this process, you can see the &lt;strong&gt;Details, Sync History, Settings, and Tags&lt;/strong&gt; of your &lt;strong&gt;Amazon Q Data Source&lt;/strong&gt; in the console:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjvws36qz4amu788amda9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjvws36qz4amu788amda9.jpg" alt="Image description" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the image above, I would like to highlight the &lt;strong&gt;Sync run history&lt;/strong&gt; section, where you can see the &lt;strong&gt;synchronization job results&lt;/strong&gt; in terms of total items scanned, added/modified, deleted, and failed, giving you quantitative information to evaluate whether the crawling/indexing process has covered everything you expected based on your configuration. At this point, Amazon Q also offers the possibility to &lt;strong&gt;retrieve log information&lt;/strong&gt; from &lt;strong&gt;CloudWatch Logs&lt;/strong&gt; using a link in the &lt;strong&gt;“Details”&lt;/strong&gt; column:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffuoggigbjqpsbj2vulg8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffuoggigbjqpsbj2vulg8.jpg" alt="Image description" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see above, Amazon Q creates a &lt;strong&gt;Log Group&lt;/strong&gt; related to the Amazon Q application using the &lt;strong&gt;Application ID (“99fd2bd8-bc98-4d34-8153-54a2a3b189b3”)&lt;/strong&gt; as identifier and creates a &lt;strong&gt;Log Stream&lt;/strong&gt; for each execution using the &lt;strong&gt;Data source ID (“69374b88-5f35-4b3b-a0cc-0bfb2742c23a”)&lt;/strong&gt; as identifier.&lt;/p&gt;

&lt;p&gt;Amazon Q already creates the &lt;strong&gt;Query&lt;/strong&gt; that we can execute using &lt;strong&gt;Log Insights&lt;/strong&gt;; running it, you will get the logs related to the synchronization job, where you will be able to see &lt;strong&gt;details about the URLs processed and any errors that must be fixed&lt;/strong&gt;.&lt;/p&gt;
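&lt;p&gt;If you want to narrow the results, you can adapt the query. The sketch below builds a Logs Insights query string; the specific filter is an illustrative assumption, so start from the query Amazon Q pre-creates for your application and adjust from there:&lt;/p&gt;

```python
# Build a CloudWatch Logs Insights query string to surface sync-job errors.
# @timestamp and @message are standard Logs Insights fields; the keyword
# filter is an illustrative assumption, not Amazon Q's exact log format.
def build_sync_log_query(keyword="ERROR", limit=50):
    return (
        "fields @timestamp, @message "
        f"| filter @message like /{keyword}/ "
        "| sort @timestamp desc "
        f"| limit {limit}"
    )

print(build_sync_log_query())
```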

&lt;p&gt;&lt;strong&gt;REQ #2: Including information from document repositories&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Having completed the configuration of a WEBCRAWLER data source, we can move forward to our second requirement where we will leverage &lt;strong&gt;S3 buckets&lt;/strong&gt; with documentation to be used by our AI Assistant. We need to configure an &lt;strong&gt;S3 data source&lt;/strong&gt; for &lt;strong&gt;each bucket&lt;/strong&gt; we plan to index.&lt;/p&gt;

&lt;p&gt;The configuration of this data source includes steps very similar to the ones we already saw for the WEBCRAWLER: data source name, whether to use a VPC, IAM role creation, sync scope, mode and run schedule, tags, and finally field mapping.&lt;/p&gt;
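&lt;p&gt;Before running the first sync, it can help to check which objects in the bucket look like indexable documents. The extension list below is a partial, illustrative sample (check the Amazon Q documentation for the authoritative list of supported formats), and the object keys are made up:&lt;/p&gt;

```python
# Quick pre-check of which S3 object keys look like indexable documents.
# The extension list is a partial, illustrative sample.
INDEXABLE_EXTENSIONS = (".pdf", ".html", ".docx", ".pptx", ".txt", ".md", ".csv")

def indexable_keys(keys):
    return [k for k in keys if k.lower().endswith(INDEXABLE_EXTENSIONS)]

bucket_keys = [
    "docs/aws-banking-cases.pdf",
    "docs/devops-overview.docx",
    "media/logo.png",   # images are skipped by this check
]
print(indexable_keys(bucket_keys))  # the .pdf and .docx keys only
```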

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffpt84eojbefrzdyal7lj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffpt84eojbefrzdyal7lj.jpg" alt="Image description" width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As in the case of the &lt;strong&gt;WEBCRAWLER data source&lt;/strong&gt;, you can also use &lt;strong&gt;Log Insights&lt;/strong&gt; here to get the logs related to the synchronization job, where you will be able to see &lt;strong&gt;details about the documents processed and any errors that must be fixed&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;REQ #3: Including ad-hoc relevant documents&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While I recommend &lt;strong&gt;storing and organizing your documentation in an S3 bucket for durability, protection, and security purposes&lt;/strong&gt;, we may also need to upload specific ad-hoc files to expand the knowledge base of our AI Assistant. For those cases we can use the &lt;strong&gt;FILE UPLOADER&lt;/strong&gt; data source:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff8g84g71afdoqv81mz1z.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff8g84g71afdoqv81mz1z.jpg" alt="Image description" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3njt1f3pdjcf9uz917qf.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3njt1f3pdjcf9uz917qf.jpg" alt="Image description" width="800" height="163"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see in the image above, the console shows the &lt;strong&gt;list of uploaded files&lt;/strong&gt; that will be indexed and become part of the knowledge base of our AI Assistant.&lt;/p&gt;

&lt;p&gt;Once we have created all our required data sources, completed the initial synchronization jobs, and verified that they have finished successfully and indexed our documentation, &lt;strong&gt;we will be ready to TEST our AI assistant!&lt;/strong&gt; For that, AWS has already built a &lt;strong&gt;WEB EXPERIENCE&lt;/strong&gt;, a web interface for our &lt;strong&gt;AI Assistant / CHAT BOT application&lt;/strong&gt; that we can use to start interacting with it.&lt;/p&gt;

&lt;p&gt;We can access the &lt;strong&gt;WEB EXPERIENCE&lt;/strong&gt; through the console:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx4rl9z35as2xezfjpzyv.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx4rl9z35as2xezfjpzyv.jpg" alt="Image description" width="800" height="226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And open our &lt;strong&gt;AI Assistant / CHAT BOT application&lt;/strong&gt; interface:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ukp61tysfcv4915aw55.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ukp61tysfcv4915aw55.jpg" alt="Image description" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see above, we can configure some attributes like the &lt;strong&gt;Title&lt;/strong&gt; (name of the AI Assistant), &lt;strong&gt;Subtitle&lt;/strong&gt; (objective of the AI Assistant), and &lt;strong&gt;Welcome Message&lt;/strong&gt;. Our suggestion is to state in the &lt;strong&gt;Welcome Message&lt;/strong&gt; which &lt;strong&gt;domains or subjects&lt;/strong&gt; the &lt;strong&gt;AI Assistant&lt;/strong&gt; is &lt;strong&gt;prepared to answer questions about&lt;/strong&gt;, based on the information that you provided through &lt;strong&gt;Data Sources&lt;/strong&gt;. This is very important to &lt;strong&gt;manage the expectations of the potential users of the solution&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Now, let’s see some examples of our &lt;strong&gt;AI Assistant&lt;/strong&gt; in action!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SAMPLE PROMPT # 1 – Asking a question about industry trends (looking to use documents in the FILE UPLOADER):&lt;/strong&gt; &lt;em&gt;What are the key trends in 2024 for Corporate Banking?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7i9m73shbv5x4gvosnr8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7i9m73shbv5x4gvosnr8.jpg" alt="Image description" width="800" height="614"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note that the answer includes the &lt;strong&gt;reference&lt;/strong&gt; to the information source used by the Assistant to prepare the response and allows you to &lt;strong&gt;provide feedback regarding the quality of the response&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SAMPLE PROMPT # 2 – Asking a question about AWS services (looking to use documents in the S3 BUCKET):&lt;/strong&gt; &lt;em&gt;“Please prepare a summary of how AWS is supporting Banking Clients to improve their customers experience, including reference to customer examples and which AWS services are being used”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F14eqlovnv4mwhbihchev.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F14eqlovnv4mwhbihchev.jpg" alt="Image description" width="800" height="582"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SAMPLE PROMPT # 3 – Asking a question about DEVOPS (looking to use documents in a WEBPAGE):&lt;/strong&gt; &lt;em&gt;“Please prepare a summary about how we can help our customers to accelerate the implementation of DEVOPS culture in their organizations”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9bjysl8xdbf6i82e8ayf.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9bjysl8xdbf6i82e8ayf.jpg" alt="Image description" width="800" height="530"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, using &lt;strong&gt;Amazon Q for Business&lt;/strong&gt; we have been able to implement an &lt;strong&gt;AI Assistant&lt;/strong&gt;. From here, we can continue &lt;strong&gt;enriching the KNOWLEDGE BASE&lt;/strong&gt; by adding more documents to the data sources and/or adding more data sources, as well as preparing our &lt;strong&gt;PROMPTS LIBRARY&lt;/strong&gt; with &lt;strong&gt;templates&lt;/strong&gt; that we can reuse to &lt;strong&gt;improve our productivity and maximize the quality of the responses&lt;/strong&gt; obtained from the AI Assistant.&lt;/p&gt;
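&lt;p&gt;A prompts library can be as simple as a dictionary of reusable templates that the team fills in per request. A minimal sketch, where the template wording is just an example:&lt;/p&gt;

```python
from string import Template

# A tiny prompt-library sketch: reusable templates keep prompts consistent
# across the team. The template wording below is just an example.
PROMPTS = {
    "industry_summary": Template(
        "Please prepare a summary of the key trends in $year for $industry, "
        "including references to the source documents used."
    ),
}

prompt = PROMPTS["industry_summary"].substitute(year="2024", industry="Corporate Banking")
print(prompt)
```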

&lt;p&gt;It is important to highlight that these will be &lt;strong&gt;ON-GOING activities&lt;/strong&gt;, as we want our Assistant always &lt;strong&gt;UP TO DATE&lt;/strong&gt; with our &lt;strong&gt;latest and verified documentation&lt;/strong&gt;, and we also want our team &lt;strong&gt;improving their productivity&lt;/strong&gt; by knowing when and how to use the AI Assistant.&lt;/p&gt;

&lt;p&gt;As with any other &lt;strong&gt;Cloud technology adoption process&lt;/strong&gt;, it is critical to include a proper &lt;strong&gt;Organizational Change Management initiative&lt;/strong&gt; to make sure that the team is properly &lt;strong&gt;engaged, informed, and trained&lt;/strong&gt; on this technology. They should understand that it is a &lt;strong&gt;valuable tool&lt;/strong&gt; at their disposal to gain productivity and efficiency, and that it &lt;strong&gt;DOES NOT ELIMINATE&lt;/strong&gt; the need for a &lt;strong&gt;“human in the loop” evaluation of the quality and applicability of the responses&lt;/strong&gt; before they are used to fulfill internal or customer demands.&lt;/p&gt;

&lt;p&gt;This is a critical success factor to &lt;strong&gt;eliminate adoption barriers&lt;/strong&gt; and a very valuable source of &lt;strong&gt;feedback for the IT Team&lt;/strong&gt; to use for &lt;strong&gt;data source content refinement and enrichment&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Up to this moment, we have our &lt;strong&gt;AI Assistant&lt;/strong&gt; already operating for &lt;strong&gt;internal use&lt;/strong&gt;. We now need to take the next step and &lt;strong&gt;DEPLOY the AI Assistant to our organization&lt;/strong&gt;, so we can have &lt;strong&gt;multiple users&lt;/strong&gt; leveraging its capabilities with the &lt;strong&gt;required security measures in place&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let's meet again in our next post and please feel free to share your feedback and comments!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>genai</category>
      <category>aws</category>
      <category>data</category>
    </item>
    <item>
      <title>Accelerating the Implementation of DevOps Culture in Your Organization with Amazon CodeCatalyst</title>
      <dc:creator>Luis Parraguez</dc:creator>
      <pubDate>Fri, 21 Jul 2023 17:40:19 +0000</pubDate>
      <link>https://dev.to/aws-builders/accelerating-the-implementation-of-devops-culture-in-your-organization-with-amazon-codecatalyst-5gmd</link>
      <guid>https://dev.to/aws-builders/accelerating-the-implementation-of-devops-culture-in-your-organization-with-amazon-codecatalyst-5gmd</guid>
      <description>&lt;p&gt;&lt;strong&gt;Good morning everyone!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;DevOps culture has become increasingly popular among software development organizations as it fosters a collaborative approach between development and operations teams, enabling the continuous delivery of high quality software. However, effectively implementing the DevOps culture can be a complex challenge.&lt;/p&gt;

&lt;p&gt;This is where the Amazon CodeCatalyst service can play a relevant role. In this post, we’ll explore how Amazon CodeCatalyst can help organizations accelerate and improve the implementation of DevOps culture by providing a complete collaboration and automation platform.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon CodeCatalyst Overview&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon CodeCatalyst is an Amazon Web Services (AWS) service that provides an integrated platform to help teams collaborate, automate processes, and adopt DevOps culture best practices. It offers capabilities for code versioning, integration and continuous delivery (CI/CD) pipeline management, issue tracking, configuration management, and more.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Enhanced collaboration:&lt;/strong&gt; One of the keys to the success of the DevOps culture is effective collaboration between development and operations teams. Amazon CodeCatalyst offers advanced features to promote collaboration, such as centralized code repositories, integration with communication tools (such as Slack), and code review capabilities. These features allow teams to work together efficiently, share knowledge, and collaborate on projects with ease.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Process automation:&lt;/strong&gt; Automating processes is key to accelerating the implementation of the DevOps culture. Amazon CodeCatalyst provides comprehensive capabilities for automation, enabling you to create CI/CD pipelines to automate software build, test, and deploy. This reduces reliance on time-consuming manual processes, increasing the efficiency and speed of software delivery.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Configuration management:&lt;/strong&gt; Configuration management is an essential part of the DevOps culture. Amazon CodeCatalyst provides capabilities to efficiently manage the configuration of infrastructure and application environments. It supports the use of popular tools, such as AWS CloudFormation and Terraform, to provision and manage infrastructure resources as code. This ensures infrastructure consistency and traceability and simplifies the management of environments at different stages of the software lifecycle.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Issue tracking:&lt;/strong&gt; Issue tracking and effective project management are crucial to the successful implementation of the DevOps culture. Amazon CodeCatalyst provides built-in capabilities for issue tracking, allowing teams to record, prioritize, and track issues and development tasks. This improves visibility and collaboration around issues, making them easier to resolve quickly and efficiently.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security and compliance:&lt;/strong&gt; Security and compliance are important considerations when implementing the DevOps culture. Amazon CodeCatalyst provides capabilities for access control, security monitoring, and integration with other security-focused AWS services, such as AWS Identity and Access Management (IAM) and AWS CloudTrail. This ensures that organizations can implement appropriate security practices and meet regulatory requirements.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Sharing Lessons Learned Using Amazon CodeCatalyst&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I would like to share lessons learned from using Amazon CodeCatalyst in a project that combined Multi-Cloud and DevOps requirements. The final objective of this project was the implementation of a Static Serverless Website in a Multi-Cloud architecture, with the following main requirements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Implement the static website using Serverless storage resources in AWS (S3), Azure (Blob Storage) and OCI (Buckets).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create centralized repositories for the infrastructure as code (IaC) and the static website code, integrated with continuous integration and delivery pipelines.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Centralize automated provisioning, configuration, and management of infrastructure across multiple Clouds using Amazon CodeCatalyst and Terraform.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Centralize the management and automation of the static website’s continuous integration and delivery pipelines across multiple Clouds using Amazon CodeCatalyst.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Communicate with the AWS Cloud through native Amazon CodeCatalyst resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Communicate with Azure and OCI using their CLIs (Command Line Interfaces), leveraging compute resources from Amazon CodeCatalyst.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We can see the solution architecture applied in this project in the following diagram:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkkaubeutn7vwuhv3kbpp.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkkaubeutn7vwuhv3kbpp.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s now look at the key steps followed in this project:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1 — Preparing Infrastructure as Code repository&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As a first step to using the features of Amazon CodeCatalyst, we must create a “Space” (in our project called “TerraformCodeCatalystLPG”). During the creation of the “Space” we need to specify the AWS account that will be used for billing and for creating resources in the AWS Cloud through the authorization of IAM roles.&lt;/p&gt;

&lt;p&gt;With the “Space” created, we can create a “Project” (in our project called “TerraformCodeCatalyst”):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6nlodarxm27z1zpb0cpm.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6nlodarxm27z1zpb0cpm.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Within the “Project” we find the two groups of functionalities required to meet the requirements of the project: “Code” and “CI/CD”.&lt;/p&gt;

&lt;p&gt;In the figure below we can see these options on the left, including the detail of the features related to “Code”:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Source Repositories:&lt;/strong&gt; Functionality that allows the creation and versioning control of code repositories;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pull Requests:&lt;/strong&gt; Functionality that allows the management of code update requests in the repositories, including support for approval/disapproval and application of updates (Merge);&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Dev Environments:&lt;/strong&gt; Functionality that allows the creation of pre-configured development environments that we can use to work with the code of our infrastructure and / or applications.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ecgr4gc6b3en9s4v7ni.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ecgr4gc6b3en9s4v7ni.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the project, we tested the creation of development environments and verified the availability of environments with support for AWS Cloud9 (running in the web browser) and Visual Studio Code, as well as other options with support for JetBrains IDEs. For this project we chose a local Visual Studio Code environment so we could reuse previously prepared Terraform code.&lt;/p&gt;

&lt;p&gt;Using the Source Repositories feature, we created our first repository (“bootstrapping-terraform-automation-for-amazon-codecatalyst”) to store the Terraform code used to provision our infrastructure.&lt;/p&gt;

&lt;p&gt;Within this repository we first created a folder (“_bootstrap”) to store the code of the base infrastructure required for Terraform to operate with an S3 backend in AWS. This base infrastructure requires the creation of (1) an S3 bucket to store the Terraform state file that tracks provisioned resources (terraform.tfstate), (2) a DynamoDB table to control concurrent access to the terraform.tfstate file in the case of parallel executions, and (3) the IAM roles and policies required to connect Amazon CodeCatalyst to the AWS account where the resources will be created: one IAM role for the Main branch with permission to create resources and another IAM role for the Pull Request branch with read-only permission.&lt;/p&gt;
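
&lt;p&gt;As a rough sketch (bucket, table and region names are illustrative, not the ones used in the project), the backend configuration stored in the “_bootstrap” folder can look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# _bootstrap/backend.tf - illustrative sketch of the S3 backend described above
terraform {
  backend "s3" {
    bucket         = "example-terraform-state-bucket"  # S3 bucket for terraform.tfstate
    key            = "terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "example-terraform-state-lock"    # DynamoDB table for state locking
    encrypt        = true
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;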

&lt;p&gt;&lt;strong&gt;Step 2 — Creation of CI/CD workflows to update the Infrastructure as Code&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once the base infrastructure resources have been created, we are ready to create the CI/CD workflows that manage updates to our application’s infrastructure as code. To do this, we must first associate the two previously created IAM roles with Amazon CodeCatalyst so that we can use them in workflows.&lt;/p&gt;

&lt;p&gt;As you can see in the figure below, we now use the “CI/CD | Workflows” functionality, selecting the code repository, to create three workflows in the Main branch:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;“TerraformPRBranch”:&lt;/strong&gt; Workflow that manages the evaluation of updates requested through Pull Requests from a branch. This workflow installs Terraform on an EC2 virtual machine and executes the terraform init, validate and plan commands to validate the updates made to the infrastructure code;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;“TerraformMainBranch”:&lt;/strong&gt; Workflow that manages the automatic application of approved updates to the code in the Main branch of our repository. In a similar way, this workflow executes the terraform init, validate, plan and apply commands to apply the updates made to the infrastructure code;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;“TerraformMainBranch_Destroy”:&lt;/strong&gt; Workflow that manages the removal of the infrastructure created through the Main branch code. This workflow is configured to run manually and executes the terraform init and destroy commands to remove the provisioned resources.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frb8mgr8knsa1dox1pwnt.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frb8mgr8knsa1dox1pwnt.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As an example, here is the YAML code of the “TerraformMainBranch” workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Adaptation of the https://developer.hashicorp.com/terraform/tutorials/automation/github-actions workflow
Name: TerraformMainBranch
SchemaVersion: "1.0"

# Here we define the trigger for this workflow: a Push to the main branch. Without a trigger, the workflow can only be executed manually
Triggers:
  - Type: Push
    Branches:
      - main

# Here we are defining the actions that will be executed for this workflow
Actions:
  Terraform-Main-Branch-Apply:
    Identifier: aws/build@v1
    Inputs:
      Sources:
        - WorkflowSource
    Environment:
      Connections:
        - Role: Main-Branch-Infrastructure
          Name: "XXXXXXXXXXXX"
      Name: TerraformBootstrap
    Configuration: 
      Steps:
        - Run: export TF_VERSION=1.5.2 &amp;amp;&amp;amp; wget -O terraform.zip "https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip"
        - Run: unzip terraform.zip &amp;amp;&amp;amp; rm terraform.zip &amp;amp;&amp;amp; mv terraform /usr/bin/terraform &amp;amp;&amp;amp; chmod +x /usr/bin/terraform
        - Run: terraform init -no-color
        - Run: terraform validate -no-color
        - Run: terraform plan -no-color -input=false
        - Run: terraform apply -auto-approve -no-color -input=false
    Compute:
      Type: EC2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
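
&lt;p&gt;For comparison, a sketch of the “TerraformPRBranch” workflow is shown below. It mirrors the workflow above but uses the read-only IAM role and stops after terraform plan (role, environment and connection names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Name: TerraformPRBranch
SchemaVersion: "1.0"

# Trigger on Pull Requests that target the main branch
Triggers:
  - Type: PULLREQUEST
    Branches:
      - main
    Events:
      - OPEN
      - REVISION

Actions:
  Terraform-PR-Branch-Plan:
    Identifier: aws/build@v1
    Inputs:
      Sources:
        - WorkflowSource
    Environment:
      Connections:
        - Role: PR-Branch-Infrastructure   # read-only IAM role
          Name: "XXXXXXXXXXXX"
      Name: TerraformBootstrap
    Configuration:
      Steps:
        - Run: export TF_VERSION=1.5.2 &amp;amp;&amp;amp; wget -O terraform.zip "https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip"
        - Run: unzip terraform.zip &amp;amp;&amp;amp; rm terraform.zip &amp;amp;&amp;amp; mv terraform /usr/bin/terraform &amp;amp;&amp;amp; chmod +x /usr/bin/terraform
        - Run: terraform init -no-color
        - Run: terraform validate -no-color
        # Plan only - changes are applied by the TerraformMainBranch workflow after the merge
        - Run: terraform plan -no-color -input=false
    Compute:
      Type: EC2
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;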



&lt;p&gt;&lt;strong&gt;Step 3 — Execution of CI/CD workflows to update the Infrastructure as Code&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next, we created a Branch (“test-pr-workflow”) that was used to validate the updates to the Terraform code of our infrastructure.&lt;/p&gt;

&lt;p&gt;The application’s Terraform files were organized into groups: one focused on connecting to AWS, Azure and OCI (multicloud_provider.tf and multicloud_variables.tf) and three more for provisioning the storage resources in each Cloud (for example: aws_storage.tf and aws_variables.tf). To provision this infrastructure we also used the previously created S3 backend, but storing the terraform.tfstate file under a different key in the bucket.&lt;/p&gt;
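
&lt;p&gt;As an illustration (the provider sources are the Terraform Registry defaults; the variable names are illustrative), multicloud_provider.tf can declare the three providers roughly like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# multicloud_provider.tf - illustrative sketch of the three Cloud providers
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
    azurerm = {
      source = "hashicorp/azurerm"
    }
    oci = {
      source = "oracle/oci"
    }
  }
}

provider "aws" {
  region = var.aws_region
}

provider "azurerm" {
  features {}
}

provider "oci" {
  region = var.oci_region
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;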

&lt;p&gt;Using Visual Studio Code Insiders, we synchronized the infrastructure’s Terraform files with our repository in Amazon CodeCatalyst using the “test-pr-workflow” branch.&lt;/p&gt;

&lt;p&gt;With the files updated in the “test-pr-workflow” branch, we created a Pull Request to start the “TerraformPRBranch” workflow. In the figure below you can see the data for the creation of a Pull Request, including the source branch “test-pr-workflow” and the target branch “main”, as well as the specification of required and optional reviewers of the requested changes, which is how we apply collaboration within the DevOps team.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx942ps1q1yhcvndrx30m.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx942ps1q1yhcvndrx30m.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The creation of the Pull Request triggered the “TerraformPRBranch” workflow in the “test-pr-workflow” branch. After it completed, we were able to verify, through the logs of the terraform plan command, that the infrastructure updates could be applied successfully; that being the case, we authorized the merge of the updates into the “main” branch.&lt;/p&gt;

&lt;p&gt;Authorizing the merge started the “TerraformMainBranch” workflow, which carried out the infrastructure updates defined by the Terraform code:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frjjp5nkzxmqbay4bfeo2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frjjp5nkzxmqbay4bfeo2.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This demonstrated a full CI/CD cycle of automating infrastructure upgrades using Amazon CodeCatalyst!!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4 — Preparation of the application code repository (Serverless Static Website in Multi-Cloud architecture)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Similar to Step 1, using the Source Repositories functionality we created the repository for our application code (“static-website-repo”), containing the files required for the website:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv9qw3aj3yi4xmyahw2qg.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv9qw3aj3yi4xmyahw2qg.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5 — Creation of CI/CD workflows to update the application&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Following the same procedure as before, we created the workflows that store updates of the static website files in the buckets of each Cloud. Each workflow was configured to sequentially update the three specified environments, Testing, Homologation and Production, advancing to the next environment only if the deployment to the previous one succeeded.&lt;/p&gt;

&lt;p&gt;Let’s look at the highlights of the preparation of each workflow:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;“Upload_to_AWS_S3” Workflow — AWS S3 Bucket Storage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As you can see below, in the visual representation of this workflow, we configured the code update (Push) event in the “main” branch of the repository as the automatic trigger. This trigger starts a workflow consisting of three actions of type &lt;strong&gt;“aws/s3-publish@v1.0.5”&lt;/strong&gt;. This action is a native feature of Amazon CodeCatalyst that uploads files to an S3 bucket by running commands on an EC2 virtual machine:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0930piquci6gsfq9vzr8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0930piquci6gsfq9vzr8.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;“Upload_to_Azure_Blob” Workflow — Azure Blob Storage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As you can see below, in the visual representation of this workflow, we also configured the code update event in the “main” branch of the repository as the automatic trigger. This trigger starts a workflow that is likewise composed of three actions.&lt;/p&gt;

&lt;p&gt;According to the project requirements, communication with Azure was performed using the Azure CLI (Command Line Interface). To enable this, the technical alternative was to apply Amazon CodeCatalyst’s ability to create the processing environment from a custom container image. The custom image was the containerized version of the Azure CLI (Image: mcr.microsoft.com/azure-cli). Authentication for CLI use was handled by leveraging the Amazon CodeCatalyst Secrets functionality to protect the access information.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw7iug95towpknq5whark.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw7iug95towpknq5whark.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;“Upload_to_OCI_Bucket” Workflow — OCI Buckets Storage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As you can see below, in the visual representation of this workflow, we also configured the code update event in the “main” branch of the repository as the automatic trigger. This trigger starts a workflow that is likewise composed of three actions.&lt;/p&gt;

&lt;p&gt;According to the project requirements, communication with OCI was also carried out using the OCI CLI (Command Line Interface). To enable this, the technical alternative applied was to run the containerized version of the OCI CLI (Image: ghcr.io/oracle/oci-cli:latest) on the EC2 virtual machine, using Docker commands in the version already available in the EC2 processing environment. Authentication for CLI use was also handled by leveraging the Amazon CodeCatalyst Secrets functionality to protect the access information.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxz665ubwii9en19ovkc8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxz665ubwii9en19ovkc8.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6 — Execution of CI/CD workflows to update the application code&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The application code was updated applying the same methodology used for the infrastructure as code repository (branch creation for Pull Requests). After this process was completed and the merge authorized, a Push event was generated in the “main” branch, triggering the three workflows above and starting the update of the buckets in AWS, Azure and OCI. The images in the sequence below show the result of the execution of the workflows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fju8kivnkza463nztl75v.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fju8kivnkza463nztl75v.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9nsqsb12svziglovksuf.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9nsqsb12svziglovksuf.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxixi8v2233vj7yug9pz7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxixi8v2233vj7yug9pz7.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And as a result of the processing we had our &lt;strong&gt;Static Serverless Website in Multi-Cloud architecture&lt;/strong&gt; up and running on &lt;strong&gt;AWS, Azure and OCI&lt;/strong&gt; powered by an &lt;strong&gt;end-to-end DevOps Process supported by Amazon CodeCatalyst!!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe7omsd3qin720utmuq7c.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe7omsd3qin720utmuq7c.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Efficient implementation of DevOps culture is critical to the success of software development organizations. Amazon CodeCatalyst provides a comprehensive platform to accelerate and enhance this implementation.&lt;/p&gt;

&lt;p&gt;With advanced collaboration, process automation, configuration management, and issue tracking capabilities, Amazon CodeCatalyst enables teams to collaborate more efficiently, improve speed of delivery, and ensure software quality. By adopting Amazon CodeCatalyst, organizations can drive their DevOps journey quickly and efficiently, leveraging the benefits of an agile, collaborative approach to software development.&lt;/p&gt;

&lt;p&gt;And, as we saw in the project presented above, Amazon CodeCatalyst also has the resources needed to work with solutions in a Multi-Cloud architecture in partnership with Terraform and Docker. &lt;strong&gt;I suggest you experiment with the solution too!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We are available to support you in this process and in the continuity of your Cloud Journey!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let’s meet again in our next post!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>cloudcomputing</category>
      <category>aws</category>
      <category>devops</category>
      <category>codecatalyst</category>
    </item>
  </channel>
</rss>
