<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: socraDrk</title>
    <description>The latest articles on DEV Community by socraDrk (@socratesruiz).</description>
    <link>https://dev.to/socratesruiz</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1041263%2F9afe3e9c-1ccb-4c55-afa7-d49e08fb86ea.png</url>
      <title>DEV Community: socraDrk</title>
      <link>https://dev.to/socratesruiz</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/socratesruiz"/>
    <language>en</language>
    <item>
      <title>Troubleshooting EKS with MCP: The Good, the Bad, and the Ugly (plus the Setup)</title>
      <dc:creator>socraDrk</dc:creator>
      <pubDate>Tue, 26 Aug 2025 11:12:40 +0000</pubDate>
      <link>https://dev.to/aws-builders/troubleshooting-eks-with-mcp-the-good-the-bad-and-the-ugly-plus-the-setup-24bi</link>
      <guid>https://dev.to/aws-builders/troubleshooting-eks-with-mcp-the-good-the-bad-and-the-ugly-plus-the-setup-24bi</guid>
      <description>&lt;p&gt;As part of our sessions to develop the skills of a one-person army with AI tools, we began exploring how to integrate their use into our daily tasks.&lt;/p&gt;

&lt;p&gt;One task that may seem simple but takes time is troubleshooting issues during the deployment of our applications. In some of our projects, we use Kubernetes as the deployment platform, and in dev environments it's not uncommon for dev teams to struggle with deploying their applications: they are new to the topic, lack some information, or have a typo in the code. So we can have a flawless pipeline using GitOps, but the human factor still persists, and one way to help the teams is to also provide them with tools for troubleshooting.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Setup
&lt;/h2&gt;

&lt;p&gt;Since we cannot share an actual setup from one of our projects, we tested how efficiently issues can be solved in an example Kubernetes cluster. The complete setup is the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EKS cluster with a sample web application

&lt;ul&gt;
&lt;li&gt;Deployed via eksctl&lt;/li&gt;
&lt;li&gt;3-Tier Web application deployed with hidden errors&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Local environment with

&lt;ul&gt;
&lt;li&gt;Q Developer, using Claude Sonnet 4&lt;/li&gt;
&lt;li&gt;EKS MCP server&lt;/li&gt;
&lt;li&gt;Kubectl and eksctl&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;The references used for the setup will be at the end of the post, but as a summary, these steps were done:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Prerequisites: Q Developer, MCP servers, uv, kubectl and AWS CLI installed &lt;/span&gt;
&lt;span class="c"&gt;# References at the end of the post, as summary for Q:&lt;/span&gt;
&lt;span class="c"&gt;## [Download Amazon Q for command line for Linux AppImage](https://desktop-release.q.us-east-1.amazonaws.com/latest/amazon-q.appimage)&lt;/span&gt;
&lt;span class="nb"&gt;chmod&lt;/span&gt; +x amazon-q.appimage
./amazon-q.appimage
&lt;span class="c"&gt;## Authenticate with Builder ID, or with IAM Identity Center using the start URL given to you by your account administrator&lt;/span&gt;

&lt;span class="c"&gt;# MCP Servers configuration&lt;/span&gt;
pip &lt;span class="nb"&gt;install &lt;/span&gt;awslabs.aws-api-mcp-server

vim ~/.aws/amazonq/mcp.json
&lt;span class="c"&gt;### Copy and paste content below&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="s2"&gt;"mcpServers"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"awslabs.aws-api-mcp-server"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="s2"&gt;"command"&lt;/span&gt;: &lt;span class="s2"&gt;"python"&lt;/span&gt;,
      &lt;span class="s2"&gt;"args"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
        &lt;span class="s2"&gt;"-m"&lt;/span&gt;,
        &lt;span class="s2"&gt;"awslabs.aws_api_mcp_server.server"&lt;/span&gt;
      &lt;span class="o"&gt;]&lt;/span&gt;,
      &lt;span class="s2"&gt;"env"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"AWS_REGION"&lt;/span&gt;: &lt;span class="s2"&gt;"YOUR_REGION"&lt;/span&gt;
      &lt;span class="o"&gt;}&lt;/span&gt;,
      &lt;span class="s2"&gt;"disabled"&lt;/span&gt;: &lt;span class="nb"&gt;false&lt;/span&gt;,
      &lt;span class="s2"&gt;"autoApprove"&lt;/span&gt;: &lt;span class="o"&gt;[]&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;,
    &lt;span class="s2"&gt;"awslabs.eks-mcp-server"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="s2"&gt;"command"&lt;/span&gt;: &lt;span class="s2"&gt;"uvx"&lt;/span&gt;,
      &lt;span class="s2"&gt;"args"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
        &lt;span class="s2"&gt;"awslabs.eks-mcp-server@latest"&lt;/span&gt;,
        &lt;span class="s2"&gt;"--allow-write"&lt;/span&gt;,
        &lt;span class="s2"&gt;"--allow-sensitive-data-access"&lt;/span&gt;
      &lt;span class="o"&gt;]&lt;/span&gt;,
      &lt;span class="s2"&gt;"env"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"FASTMCP_LOG_LEVEL"&lt;/span&gt;: &lt;span class="s2"&gt;"ERROR"&lt;/span&gt;
      &lt;span class="o"&gt;}&lt;/span&gt;,
      &lt;span class="s2"&gt;"autoApprove"&lt;/span&gt;: &lt;span class="o"&gt;[]&lt;/span&gt;,
      &lt;span class="s2"&gt;"disabled"&lt;/span&gt;: &lt;span class="nb"&gt;false&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;


&lt;span class="c"&gt;# AWS CLI configuration&lt;/span&gt;
aws configure

&lt;span class="c"&gt;## To store credentials:&lt;/span&gt;
vim ~/.aws/credentials
&lt;span class="c"&gt;### Copy paste the credentials&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;default]
&lt;span class="nv"&gt;aws_access_key_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;SHALALA
&lt;span class="nv"&gt;aws_secret_access_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;shalalala
&lt;span class="nv"&gt;aws_session_token&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;token


&lt;span class="c"&gt;# Install eksctl&lt;/span&gt;
&lt;span class="nb"&gt;cd&lt;/span&gt; ~
&lt;span class="nv"&gt;ARCH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;amd64
&lt;span class="nv"&gt;PLATFORM&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;uname&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;_&lt;span class="nv"&gt;$ARCH&lt;/span&gt;
curl &lt;span class="nt"&gt;-sLO&lt;/span&gt; &lt;span class="s2"&gt;"https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_&lt;/span&gt;&lt;span class="nv"&gt;$PLATFORM&lt;/span&gt;&lt;span class="s2"&gt;.tar.gz"&lt;/span&gt;
curl &lt;span class="nt"&gt;-sL&lt;/span&gt; &lt;span class="s2"&gt;"https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_checksums.txt"&lt;/span&gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nv"&gt;$PLATFORM&lt;/span&gt; | &lt;span class="nb"&gt;sha256sum&lt;/span&gt; &lt;span class="nt"&gt;--check&lt;/span&gt;
&lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-xzf&lt;/span&gt; eksctl_&lt;span class="nv"&gt;$PLATFORM&lt;/span&gt;.tar.gz &lt;span class="nt"&gt;-C&lt;/span&gt; /tmp &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;rm &lt;/span&gt;eksctl_&lt;span class="nv"&gt;$PLATFORM&lt;/span&gt;.tar.gz
&lt;span class="nb"&gt;sudo install&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; 0755 /tmp/eksctl /usr/local/bin &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; /tmp/eksctl


&lt;span class="c"&gt;# Deploy K8s cluster&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_REGION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;eu-west-1
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;EKS_CLUSTER_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;eks-workshop
&lt;span class="nb"&gt;cat &lt;/span&gt;cluster.yaml | envsubst | eksctl create cluster &lt;span class="nt"&gt;-f&lt;/span&gt; -

&lt;span class="c"&gt;# Kubectl configuration&lt;/span&gt;
aws eks update-kubeconfig &lt;span class="nt"&gt;--name&lt;/span&gt; &lt;span class="nv"&gt;$EKS_CLUSTER_NAME&lt;/span&gt;

&lt;span class="c"&gt;# Once EKS is ready, deploy application. On this case, kubernetes_xD has some hidden issues in it.&lt;/span&gt;
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; kubernetes_xD.yaml
kubectl &lt;span class="nb"&gt;wait&lt;/span&gt; &lt;span class="nt"&gt;--for&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;condition&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;available deployments &lt;span class="nt"&gt;--all&lt;/span&gt;

&lt;span class="c"&gt;# Validating application is working&lt;/span&gt;
&lt;span class="c"&gt;## Get load balancer URL&lt;/span&gt;
kubectl get svc ui
&lt;span class="c"&gt;## Access it via browser (port 80), example:&lt;/span&gt;
http://a7a2821a812cb40daa48ab4cca3e4179-191623826.eu-west-1.elb.amazonaws.com:80

&lt;span class="c"&gt;## Initially the above page will throw a 500 Error&lt;/span&gt;

&lt;span class="c"&gt;# To redeployed the components via YAML&lt;/span&gt;
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; kubernetes_xD.yaml

&lt;span class="c"&gt;# To delete the components&lt;/span&gt;
kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; kubernetes_xD.yaml

&lt;span class="c"&gt;# REMEMEBR!!!!!&lt;/span&gt;
&lt;span class="c"&gt;# Once you are finished, destroy the cluster to avoid unnecessary costs &lt;/span&gt;
eksctl delete cluster &lt;span class="nv"&gt;$EKS_CLUSTER_NAME&lt;/span&gt; &lt;span class="nt"&gt;--wait&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After this, we had an amazing EKS cluster with several crashed pods. Great, now how do we fix this while we have tons of meetings running in parallel on the project? The answer is to delegate to the AI, then review and approve the proposed fixes.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Good
&lt;/h2&gt;

&lt;p&gt;The summary provided by Q Developer regarding our environment's status is actually good. It took around 5 minutes to check all namespaces and detect what components were failing.&lt;/p&gt;

&lt;p&gt;The prompt for this feels natural to write. Instead of asking it to fix everything from the beginning, we started with a request for a summary and focused on the fixes later.&lt;/p&gt;

&lt;p&gt;After that, given the list of issues provided by Q, we started fixing one by one, and that's where the "problems started".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fudk298mguc1lznzjo3mm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fudk298mguc1lznzjo3mm.png" alt=" " width="800" height="332"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bad
&lt;/h2&gt;

&lt;p&gt;Assumptions are a risk! Without much context, we observed that the tools accept the metadata written in the descriptions as truth, which leads them to make their own assumptions, some of which are beneficial, while others are not. The trick is in the prompt we give to clarify those assumptions.&lt;/p&gt;

&lt;p&gt;As an example:&lt;br&gt;
One of the pods was failing because we set (on purpose, for testing) an environment variable referencing ActiveMQ, while the platform uses RabbitMQ.&lt;/p&gt;

&lt;p&gt;Instead of pointing the variable to RabbitMQ, the tool assumed that ActiveMQ needed to be deployed for the application. As mentioned before, this was avoided and solved by providing the correct context in the prompt.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Ugly
&lt;/h2&gt;

&lt;p&gt;While trying to fix the issues, the tool made several requests that were probably not necessary. A human would be able to detect such cases faster; ideally the tool, once faced with a similar issue, would learn from it and not make the same mistake in subsequent requests.&lt;/p&gt;

&lt;p&gt;As an example:&lt;br&gt;
While solving the first issue, it detected that a pod was having a problem. It first checked the pod with a get command, only to realize later that a describe command was needed. The same happened with the second issue. By the third, I expected the describe command to be tried first, but it still kept the same process, even though the get command didn't provide anything useful.&lt;/p&gt;

&lt;p&gt;This might be because it still needs more iterations on these tasks for the learning to happen, but it's something to consider: it will take time for the tool to adjust to the current context of the project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Even if this sample scenario is quite basic, we hope it helps other teams get onboarded onto this new way of doing troubleshooting.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8u63fmorvrfg7g3p8hwn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8u63fmorvrfg7g3p8hwn.png" alt=" " width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As AI continues to evolve, now is the perfect time to become familiar with it and explore how it can support our projects.&lt;/p&gt;

&lt;p&gt;By being careful with the information we share with it, and by considering the kind of setup we use (some projects might prefer their own models rather than the ones provided by Bedrock, for example), it can give us a lot of advantages, like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Saving a lot of time that we can use for other tasks (like designing, supervising, and meetings)&lt;/li&gt;
&lt;li&gt;Automating tasks and focusing on what really brings value to our project&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One of our next steps is this second point: the use of AI agents will be the focus of our knowledge gathering, to be shared later in a post.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/awslabs/mcp/tree/main/src/aws-api-mcp-server" rel="noopener noreferrer"&gt;https://github.com/awslabs/mcp/tree/main/src/aws-api-mcp-server&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/awslabs/mcp/tree/main/src/eks-mcp-server" rel="noopener noreferrer"&gt;https://github.com/awslabs/mcp/tree/main/src/eks-mcp-server&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.eksworkshop.com/docs/introduction/getting-started/about" rel="noopener noreferrer"&gt;https://www.eksworkshop.com/docs/introduction/getting-started/about&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/cli/v1/userguide/install-linux.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/cli/v1/userguide/install-linux.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/command-line-installing.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/command-line-installing.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://eksctl.io/installation/" rel="noopener noreferrer"&gt;https://eksctl.io/installation/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://v1-32.docs.kubernetes.io/docs/tasks/tools/install-kubectl-linux/" rel="noopener noreferrer"&gt;https://v1-32.docs.kubernetes.io/docs/tasks/tools/install-kubectl-linux/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>aws</category>
      <category>kubernetes</category>
      <category>learning</category>
    </item>
    <item>
      <title>re:Invent 2023 - A Cloud Odyssey</title>
      <dc:creator>socraDrk</dc:creator>
      <pubDate>Wed, 06 Dec 2023 19:56:40 +0000</pubDate>
      <link>https://dev.to/aws-builders/reinvent-2023-a-cloud-odyssey-2peo</link>
      <guid>https://dev.to/aws-builders/reinvent-2023-a-cloud-odyssey-2peo</guid>
      <description>&lt;p&gt;What a week in Las Vegas! &lt;/p&gt;

&lt;p&gt;I feel so lucky to say this is my fourth time attending AWS re:Invent, and it continues to amaze me with all the topics, events, and experiences we can find during that week. So, let me share some thoughts about it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Before the trip
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Ticket for the event... checked

&lt;ul&gt;
&lt;li&gt;Every year, the Community Builders program gives out a certain number of vouchers with a discount for the event (this year it was 43% off), and I was lucky enough to get one!&lt;/li&gt;
&lt;li&gt;Besides that option, I also learned about the AWS All Builders Welcome Grant program, &lt;em&gt;which provides financial assistance to underrepresented technologists in the early stages of their tech careers&lt;/em&gt;. Here is a &lt;a href="https://medium.com/@raphaela.han/aws-all-builders-welcome-grant-a-chance-to-learn-network-and-grow-5f57215b1fd0" rel="noopener noreferrer"&gt;link&lt;/a&gt; to a blog post with more information about it. And the best part is that I learned about it from people I met this year who attended thanks to it.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Hotel... checked

&lt;ul&gt;
&lt;li&gt;The event is split across several conference centers in Las Vegas casinos. The re:Invent portal provides access to some of them, but I was too late to reserve one of those, so I booked a hotel through a regular portal and stayed at the Flamingo, which is quite close to Caesars Forum, one of the event venues, with a direct connection to the Venetian, the main venue.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Flight... checked

&lt;ul&gt;
&lt;li&gt;I couldn't find a direct flight to Las Vegas from Munich, so the most convenient option within my budget was a stopover in Atlanta. The downside was that the layover took about 5 hours, which at least meant more time to kill playing Pokémon Go.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Making the agenda... checked

&lt;ul&gt;
&lt;li&gt;While reserving the sessions I was interested in, I was reminded of my time at university: we had to log in to the platform, look for the sessions, create our agenda, and then be ready the day registration opened, as the most interesting ones tend to run out of places quite fast.&lt;/li&gt;
&lt;li&gt;A tip is to try to have your sessions in a single venue, because even though there's transportation between the venues, it can be exhausting, and sometimes even impossible, to hop from one venue to another and back. So keep that in mind.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Things to wear... checked

&lt;ul&gt;
&lt;li&gt;Remember my last point? &lt;em&gt;Walking is a huge part of the event&lt;/em&gt;, so comfortable shoes and clothes are a must. Besides that, it can be chilly in the afternoons during this time of year, so warm clothes are also necessary.&lt;/li&gt;
&lt;li&gt;Also, remember to leave some space in your luggage for all the swag you can find at the Expo from the sponsors.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Arrival
&lt;/h2&gt;

&lt;p&gt;I traveled on Nov 26th and arrived in Las Vegas almost at midnight. However, I could still reach the AWS booth at the airport to collect my badge; it was practically empty due to the late hour, so I avoided a longer queue the next day at the venues.&lt;/p&gt;

&lt;p&gt;On the contrary, the queue at the hotel was way longer for everyone arriving. Still, it was nothing that a little bit of patience could handle.&lt;/p&gt;

&lt;p&gt;One thing I recommend for people traveling from a different time zone is to avoid the mistake I made of arriving without even a full day to rest. Monday was particularly challenging, but coffee was easy to find at the event.&lt;/p&gt;

&lt;h2&gt;
  
  
  Food and drinks
&lt;/h2&gt;

&lt;p&gt;Breakfast, lunch, coffee, sodas, and snacks are included during the event. The major venues have specific places for that, and there's a schedule for when they are served. And as far as I know, this year was the first time that breakfast on Monday was included.&lt;/p&gt;

&lt;p&gt;For dinner, there are also several options. Depending on the events we attend after the daily sessions, it might also be possible to grab some food there, but to be clear, dinner is not directly included in the ticket like the other meals.&lt;/p&gt;

&lt;h2&gt;
  
  
  Networking
&lt;/h2&gt;

&lt;p&gt;As a reserved person, starting a conversation with a stranger or finding a proper topic is hard for me. On top of that, my introverted side requires more time to gather the energy to be around people, making this kind of event challenging.&lt;/p&gt;

&lt;p&gt;I always went with colleagues/friends in the past, so I already had a safety net of people who broke the ice for me. There were the typical occasions in the lunch rooms or in the halls where they started talking with someone, and then I would join the conversation. &lt;/p&gt;

&lt;p&gt;Another situation would be during the sessions, where at least I would already have a common topic to talk about. Then, the conversations started naturally with other attendees, as we would have similar issues or solutions for those issues in our companies that were discussed during the session.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnbrca71huu173unswmyx.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnbrca71huu173unswmyx.jpg" alt="re:Invent session"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Besides that, one of the advantages of having at least one AWS certification is access to the &lt;strong&gt;Certification Lounge&lt;/strong&gt;, where people can relax and connect with other certified attendees. One of the main perks is the continuous access to coffee and snacks. Don't get me wrong, these are provided to all attendees all day in the hallways, but the queues there can get longer than in the Certification Lounge.&lt;/p&gt;

&lt;p&gt;Well, I said all that because this time was exceptional: it was the first time I attended as part of the Community Builders program, which came with the possibility of getting to know even more people, exchanging ideas, and discussing the content of the sessions.&lt;/p&gt;

&lt;p&gt;Curiously enough, it was also the first time the Community Lounge took place, and it worked as my base for the event. There, it was easier to find people I already knew or break the ice with new people, since we were already part of the same community and had things in common. Besides that, I didn't feel as drained of energy, as I felt part of a group that shared the same enthusiasm for the event.&lt;/p&gt;

&lt;p&gt;Being an expat living in a country where they don't speak my native language can be challenging from time to time, so it was also great to talk with the AWS LATAM community and show them that the communities in EMEA and NAMER are also great.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv1uidrqcatnr4lod3s6r.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv1uidrqcatnr4lod3s6r.jpg" alt="AWS LATAM Community"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, one more thing to highlight on this part is that I met several AWS Heroes and User Group Leaders, which increased my motivation to share the knowledge and get more people involved in the cloud topics. Sharing is caring.&lt;/p&gt;

&lt;h2&gt;
  
  
  Content
&lt;/h2&gt;

&lt;p&gt;I will not dig too much into the details of the announcements made at the event, as other fellow builders have done a great job of that, so I will share a couple of useful links to keep on the radar.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can find the top announcements of the event in &lt;a href="https://aws.amazon.com/blogs/aws/top-announcements-of-aws-reinvent-2023/" rel="noopener noreferrer"&gt;this link&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Most sessions are in this &lt;a href="https://www.youtube.com/playlist?list=PL2yQDdvlhXf-5R7VtNr9P4nosA7DiDtM1" rel="noopener noreferrer"&gt;YouTube playlist&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Some of the workshops, among other valuable topics, can be found &lt;a href="https://github.com/orgs/aws-samples/repositories?q=workshop&amp;amp;type=all&amp;amp;language=&amp;amp;sort=" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Besides that, the topics shown at the event are state of the art in their areas, like DevOps, Serverless, Generative AI and Machine Learning in general, and best practices for building applications on the cloud or migrating to it. Each session is made for a specific target audience, from beginners to experts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Events
&lt;/h2&gt;

&lt;p&gt;Besides the several kinds of sessions we can find at re:Invent, other events promote networking among the participants while allowing them to have drinks and snacks. For example, the Expo, where all the sponsors are located and people can hunt for some cool swag; areas where people working at AWS showcase use cases and success stories by industry; and a space for the developer community to gather and get more information about best practices, training, and certification.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1gp3d44irh1qgxffaq7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1gp3d44irh1qgxffaq7.jpg" alt="Expo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Besides the official party re:Play, we have the AWS Welcome receptions for each region, Community Mixer/User Group/AWS Heroes meetups, and sponsors' events. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frmwng6gkufsgh2dh2i0i.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frmwng6gkufsgh2dh2i0i.jpg" alt="Welcome Reception"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It was great that during those events, I encountered colleagues from previous and current customers, which allowed me to catch up and have deeper discussions about the topics shown in the sessions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Departure
&lt;/h2&gt;

&lt;p&gt;I couldn't predict that snow would cause so many problems in Munich. My connecting flight got delayed more than 12 hours because of it (Munich airport closed operations), but what kind of odyssey would it be without some unexpected events? At least I could kill some time chatting with all the people I met who were also traveling back, some of whom unfortunately also got stuck.&lt;/p&gt;

&lt;p&gt;Last but not least, remember to enjoy the event. I'm happy with all my experiences and the people I met.&lt;/p&gt;

&lt;p&gt;I'm looking forward to future events, and I hope this information will be helpful. Feel free to share your questions in the comments.&lt;/p&gt;

&lt;p&gt;Until next time re:Invent...&lt;/p&gt;

</description>
      <category>aws</category>
      <category>community</category>
      <category>cloudcomputing</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Central Repository of CVs with Amazon Textract and Neptune</title>
      <dc:creator>socraDrk</dc:creator>
      <pubDate>Mon, 03 Jul 2023 15:01:07 +0000</pubDate>
      <link>https://dev.to/aws-builders/central-repository-of-cvs-with-amazon-textract-and-neptune-1904</link>
      <guid>https://dev.to/aws-builders/central-repository-of-cvs-with-amazon-textract-and-neptune-1904</guid>
      <description>&lt;p&gt;Working in a consultancy company has the advantage of participating in several projects where different technologies are used to provide a solution for several use cases.&lt;/p&gt;

&lt;p&gt;This lets us expand our perspective for solving problems, which is great, but with time it is hard to stay up to date with all the work our colleagues have done so far, due to being in different teams or projects, or simply because we are mostly focused on our customers and don't have much time to chat.&lt;/p&gt;

&lt;p&gt;But since we have a knowledge-sharing culture at Data Insights, we thought about possible solutions for this. One was already covered by my colleague Hsiao-Ching in &lt;a href="https://datainsights.de/get-some-data-insights-from-data-insights-based-on-graph-database-neo4j/" rel="noopener noreferrer"&gt;this blog post&lt;/a&gt;, and here we present another one that can be seen as complementary.&lt;/p&gt;

&lt;p&gt;Considering our CVs have all the necessary information, we need a process to read it and cluster it into relevant groups, like the technologies used, the customers we have worked with so far, etc. After that, we can store the result in our database of choice to query it later.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhzl6l4kdmvcpir6lcsbf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhzl6l4kdmvcpir6lcsbf.png" alt="Architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this process, &lt;a href="https://aws.amazon.com/textract/faqs/" rel="noopener noreferrer"&gt;Amazon Textract&lt;/a&gt; is the perfect choice to gather the data from within the CVs, which are already in PDF format, though Textract also &lt;a href="https://docs.aws.amazon.com/textract/latest/dg/how-it-works-documents.html" rel="noopener noreferrer"&gt;supports other formats&lt;/a&gt;. The first part of our process is to upload the files to S3. From there, a Lambda function is triggered in order to start the Textract job.&lt;/p&gt;
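&lt;p&gt;As a rough sketch (the event shape follows the standard S3 notification format, but bucket names and the overall setup here are illustrative, not our actual project configuration), the trigger function can parse the S3 event and start an asynchronous Textract analysis with boto3:&lt;/p&gt;

```python
import urllib.parse


def object_location_from_event(event):
    """Extract the bucket and URL-decoded object key from an S3 put event."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["object"]["key"])
    return bucket, key


def handler(event, context):
    # boto3 is imported lazily so the parsing helper above can be
    # exercised without AWS credentials.
    import boto3

    textract = boto3.client("textract")
    bucket, key = object_location_from_event(event)
    # Start an asynchronous analysis job with the FORMS feature,
    # so Textract returns the key-value pairs it detects in the CV.
    response = textract.start_document_analysis(
        DocumentLocation={"S3Object": {"Bucket": bucket, "Name": key}},
        FeatureTypes=["FORMS"],
    )
    return {"JobId": response["JobId"]}
```

&lt;p&gt;In practice, job completion is signaled via an SNS notification channel or by polling GetDocumentAnalysis; both are left out here for brevity.&lt;/p&gt;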

&lt;p&gt;By using the &lt;a href="https://docs.aws.amazon.com/textract/latest/dg/how-it-works-kvp.html" rel="noopener noreferrer"&gt;form extraction&lt;/a&gt; feature, Textract automatically detects that the CVs have some information in common about the person, like the profile, customers, and technologies used, among others, and returns this information as key-value pairs. After this, the function stores the result in another S3 bucket in CSV format.&lt;/p&gt;
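
&lt;p&gt;The raw Textract response is actually a list of BLOCK objects that first has to be flattened; assuming that flattening has already produced a simple key-value dictionary, the conversion to the Neptune bulk-load CSV could look like this sketch:&lt;/p&gt;

```python
import csv
import io

def keyvalues_to_vertex_csv(person_id, pairs):
    # Turn the flattened key-value pairs of one CV into vertex rows in
    # the Neptune Gremlin bulk-load CSV format (header uses the
    # reserved ~id and ~label columns).
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["~id", "~label", "name:String"])
    for key, value in pairs.items():
        label = key.strip().lower().rstrip(":")
        writer.writerow([f"{person_id}#{label}", label, value.strip()])
    return buf.getvalue()
```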

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7pyb7smeh45cjle0acab.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7pyb7smeh45cjle0acab.png" alt="Textract Output"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the second part of the process, we use &lt;a href="https://aws.amazon.com/neptune/faqs/" rel="noopener noreferrer"&gt;Amazon Neptune&lt;/a&gt;, as this allows us to query the data in terms of its relationships, like “give me all the colleagues who have worked with this technology or with this customer”.&lt;/p&gt;
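
&lt;p&gt;As an illustration of the kind of traversal this enables, here is a sketch that builds such a query as a Gremlin string; the &lt;em&gt;technology&lt;/em&gt; vertex label and the &lt;em&gt;USES&lt;/em&gt; edge label are assumptions about our graph schema, not anything fixed by Neptune:&lt;/p&gt;

```python
def colleagues_using(technology):
    # From the technology vertex, walk incoming USES edges back to the
    # people connected to it and return their unique names.
    return (
        "g.V().has('technology', 'name', '" + technology + "')"
        ".in('USES').values('name').dedup()"
    )
```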

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk11orymomffpuomjat6p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk11orymomffpuomjat6p.png" alt="Neptune Output"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, to ingest the data into Neptune, another Lambda function is triggered when the result from Textract arrives in S3. The ingestion is possible because the CSV data is in the Gremlin load data format, though &lt;a href="https://docs.aws.amazon.com/neptune/latest/userguide/bulk-load-tutorial-format.html" rel="noopener noreferrer"&gt;other formats are also supported&lt;/a&gt;. This lets us ingest the data as a bulk load instead of sending insert after insert, and by checking the status of the ingestion we can see how many files failed to load and need more preprocessing.&lt;/p&gt;
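
&lt;p&gt;The bulk load itself is a single HTTP request to the Neptune loader endpoint; a sketch of the payload and the call, with placeholder names for the bucket, IAM role, and endpoint:&lt;/p&gt;

```python
import json
import urllib.request

def loader_payload(s3_source, iam_role_arn, region):
    # Request body for the Neptune bulk loader; "csv" is the Gremlin
    # load data format our Textract post-processing produces.
    return {
        "source": s3_source,
        "format": "csv",
        "iamRoleArn": iam_role_arn,
        "region": region,
        "failOnError": "FALSE",
    }

def start_bulk_load(neptune_endpoint, payload):
    # The actual load request: a POST to the loader endpoint, which is
    # only reachable from inside the Neptune VPC (e.g. from the Lambda).
    req = urllib.request.Request(
        f"https://{neptune_endpoint}:8182/loader",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)
```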

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg1c7pyif49wbhnpj2qgc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg1c7pyif49wbhnpj2qgc.png" alt="Lambda upload"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the data is loaded, using a SageMaker notebook we can easily identify who can support us whenever we have a question about a specific topic, find common interests, or check which projects we can use as references for other ones.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ri6qazz91g0gycozqmr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ri6qazz91g0gycozqmr.png" alt="Sagemaker notebook query"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, it is worth noting that this solution is completely serverless: we don’t manage any infrastructure, and just uploading a file starts the process automatically. As a next step, we can further clean the information returned by Textract to get a standardized set of values.&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>aws</category>
      <category>architecture</category>
      <category>database</category>
    </item>
    <item>
      <title>Preparing for CKAD exam with EKS, Terraform and ChatGPT sample integration 📘</title>
      <dc:creator>socraDrk</dc:creator>
      <pubDate>Fri, 26 May 2023 14:09:08 +0000</pubDate>
      <link>https://dev.to/aws-builders/preparing-for-ckad-exam-with-eks-terraform-and-a-openai-api-integration-2od3</link>
      <guid>https://dev.to/aws-builders/preparing-for-ckad-exam-with-eks-terraform-and-a-openai-api-integration-2od3</guid>
      <description>&lt;p&gt;I received some feedback from &lt;a href="https://dev.to/aws-builders/aws-amplify-and-chatgpt-one-way-to-generate-html-mock-files-for-our-demos-2dhk"&gt;my previous post&lt;/a&gt;, saying that it would be interesting to see a little more in details the integration to the API from OpenAI.&lt;/p&gt;

&lt;p&gt;I was on my way to write a blog post for that, but then I received a reminder that my CKAD certificate was close to expiring 🤯&lt;/p&gt;

&lt;p&gt;So I decided to create a sandbox environment as a study playground for my exam, and deploy in it a really basic integration to OpenAI API 🤓&lt;/p&gt;

&lt;p&gt;For the deployment of the Kubernetes cluster, I used Amazon Elastic Kubernetes Service (EKS). This AWS service gives us a complete cluster with minimal operational effort. And to keep the cluster configuration as Infrastructure as Code, I used Terraform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff0x96we1stqwczskfjiu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff0x96we1stqwczskfjiu.png" alt="Terraform Plan" width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Having said that, my implementation is literally a sandbox that shouldn't be used for production or even development environments, but that doesn't mean you cannot have those scenarios with these tools. For production workloads, I would totally recommend having a look at the &lt;a href="https://github.com/aws-ia/terraform-aws-eks-blueprints"&gt;Terraform Blueprints for EKS&lt;/a&gt;. They cover implementations that follow the best practices for Kubernetes clusters in terms of networking and security, and they also include several add-ons like Argo, Calico, Kafka, and Airflow, among others.&lt;/p&gt;

&lt;p&gt;As for the resources deployed inside Kubernetes, they are based on a simple frontend-backend scenario. Each application is based on a K8s Deployment with an init-container and a "normal" container.&lt;/p&gt;

&lt;p&gt;The frontend is a really basic React app that sends a prompt to the backend. The backend is based on Python and FastAPI, with a single HTTP POST method that prepares and forwards the request to the OpenAI API. Both frontend and backend applications are then containerized with Docker and pushed to ECR.&lt;/p&gt;
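
&lt;p&gt;Stripped of the FastAPI wrapper, the backend's forwarding logic can be sketched like this; the model name is just an example, and the API key is assumed to arrive via an environment variable:&lt;/p&gt;

```python
import json
import os
import urllib.request

def chat_request(prompt, model="gpt-3.5-turbo"):
    # Body of an OpenAI chat-completions call; the prompt comes
    # straight from the frontend.
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def forward_to_openai(prompt):
    # What the POST handler does behind the scenes; in the real app
    # this sits inside a FastAPI route.
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(chat_request(prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + os.environ["OPENAI_API_KEY"],
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```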

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffnhvzj6dvef3yig7bx4m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffnhvzj6dvef3yig7bx4m.png" alt="Request to backend" width="800" height="86"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Other K8s resources shown are a Job and a CronJob, accessing the backend and frontend respectively.   &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fceypu3hemnwk0kvv58gf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fceypu3hemnwk0kvv58gf.png" alt="Job Output" width="800" height="60"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All of this is in my repository &lt;a href="https://github.com/socraDrk/ckad-sandbox/"&gt;ckad-sandbox&lt;/a&gt;, which only has two branches: the master branch and a "fix me" branch, which I hope can be useful for people preparing for the CKAD exam.&lt;/p&gt;

&lt;p&gt;What I really like about this exam is that it is 100% hands-on, which means we have a limited amount of time to fix the issues in the different clusters presented to us. &lt;a href="https://training.linuxfoundation.org/certification/certified-kubernetes-application-developer-ckad/"&gt;Here&lt;/a&gt; you can find more information about the exam, like the curriculum and more details on which topics to study.&lt;/p&gt;

&lt;p&gt;Of course, if you are planning to take the CKA exam, then a "local" K8s cluster is the one you should install and configure, but the scope of the CKAD allows us to focus just on how applications should be deployed and configured inside K8s. This means we can save study time by not spending too much energy deploying the cluster itself, as we can rely on EKS to do that for us.&lt;/p&gt;

&lt;p&gt;Finally, this implementation could be used as a base for real use case scenarios with some adjustments depending on your use case, like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Having more than 1 replica, depending on the amount of traffic we would expect for the frontend and backend 🐳🐳🐳&lt;/li&gt;
&lt;li&gt;Having a much better definition of limits and quotas for our namespaces.&lt;/li&gt;
&lt;li&gt;Enabling Pod security contexts for our deployments 🔒&lt;/li&gt;
&lt;li&gt;Choosing better persistent storage, as it currently uses the local disk of the nodes, which is not at all recommended for production environments. EBS volumes are one of many options worth looking at.&lt;/li&gt;
&lt;li&gt;Defining an Ingress resource, like the &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html"&gt;ALB Ingress controller&lt;/a&gt;, in order to have a proper way to connect to our application from the outside.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/calico.html"&gt;Enabling Calico&lt;/a&gt; in order to provide the possibility to use K8s Network Policies within the cluster, which by default are not enabled in EKS 🕸️&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I hope this information is useful on your path to learning more about Kubernetes, EKS, Terraform, and OpenAI. &lt;/p&gt;

&lt;p&gt;And any feedback is more than welcome👨‍🏫&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>terraform</category>
      <category>aws</category>
      <category>chatgpt</category>
    </item>
    <item>
      <title>AWS Amplify and ChatGPT: one way to generate html mock files for our demos 🤖</title>
      <dc:creator>socraDrk</dc:creator>
      <pubDate>Thu, 13 Apr 2023 21:09:29 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-amplify-and-chatgpt-one-way-to-generate-html-mock-files-for-our-demos-2dhk</link>
      <guid>https://dev.to/aws-builders/aws-amplify-and-chatgpt-one-way-to-generate-html-mock-files-for-our-demos-2dhk</guid>
      <description>&lt;p&gt;In several projects I have worked I have faced the challenge to deliver a sample of what will be done at the end of the project. Usually, this will be after some sessions where the requirements will be gathered and brainstorming of ideas will take place 🤔.&lt;/p&gt;

&lt;p&gt;As a result, we sometimes end up with a whiteboard full of comments and some meeting notes with a quick summary, from which the developer team needs to start the analysis and development. On top of that, deploying the infrastructure can also take some time before we can actually see some samples running ⏲️.&lt;/p&gt;

&lt;p&gt;All of this can take a while, depending on the expertise of our teams, but we would still lose time during this initial setup. Luckily, on one of my past projects, I got the opportunity to work with AWS Amplify.&lt;/p&gt;

&lt;p&gt;"&lt;a href="https://aws.amazon.com/amplify/" rel="noopener noreferrer"&gt;&lt;em&gt;AWS Amplify is a complete solution that lets frontend web and mobile developers easily build, ship, and host full-stack applications on AWS, with the flexibility to leverage the breadth of AWS services as use cases evolve. No cloud expertise needed.&lt;/em&gt;&lt;/a&gt;"&lt;/p&gt;

&lt;p&gt;This means that our teams can leverage the deployment of both the frontend and the backend components to Amplify 🧐.&lt;/p&gt;

&lt;p&gt;I will use React for my frontend, which will be requesting ChatGPT to generate some HTML mock files which then we can use as a starting point.⚠️&lt;strong&gt;Please be aware that the use of the exact same code delivered by ChatGPT is not recommended for production use cases, as it might cause issues with plagiarism, and this is just for demo purposes to show the integration with the API from OpenAI&lt;/strong&gt;⚠️.&lt;/p&gt;

&lt;p&gt;Having said that, I used as a base &lt;a href="https://aws.amazon.com/getting-started/hands-on/build-react-app-amplify-graphql/module-one/" rel="noopener noreferrer"&gt;this link&lt;/a&gt; from AWS to initialize the components for the demo.&lt;/p&gt;

&lt;p&gt;I'm using a GitHub repository, which can be connected to Amplify so that the latest commit is deployed as soon as it gets pushed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faoul7f1s4p8662fu9qnl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faoul7f1s4p8662fu9qnl.png" alt="Amplify CI_CD"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can also add the backend components with Amplify Studio. The UI is quite intuitive and in just a few clicks we can have the components for Authentication (Cognito), Data (AppSync and DynamoDB), Storage (S3), and Functions (Lambda).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmtptlc3w4wxeskb2ld5b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmtptlc3w4wxeskb2ld5b.png" alt="Amplify Studio"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And the great part is that it also provides a CLI to make these deployments from our console. For the demo, I added just the components above and modified the data schema to support the description, image, and HTML code of the mock file we want to deploy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcc3f62kjoad5fimcgr13.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcc3f62kjoad5fimcgr13.png" alt="Amplify CLI"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Following the &lt;em&gt;Getting Started&lt;/em&gt; link I shared before, all of this took me less than an hour. This means I can focus my time on what my webpage needs to do, instead of breaking my head deploying all services needed! 😎&lt;/p&gt;

&lt;p&gt;The sad part is that my frontend skills are not so great, so my modifications were quite simple, yet they were the ones that took me a little more time than expected 🤯. &lt;/p&gt;

&lt;p&gt;To begin with, I needed to modify the Lambda function to read the description of each item and the uploaded image, which contains a draft of what we want as a mock file. Based on that, I send a request to ChatGPT to generate the HTML file and store the response in the HTML attribute of the item. In the meantime, the HTML attribute holds the name of the item. I also added two more columns: one to show the value of the HTML attribute of each item, and another for an external link (a little more about that in a moment).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv5n8qh36qx46cs4quumv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv5n8qh36qx46cs4quumv.png" alt="Web App 1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The request to ChatGPT is quite simple for this demo. It could be completely parametrized, and of course we could train our own model based on our needs, but I keep it simple here and use one of the default ones.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2g1txro9kin1dd7ouk9e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2g1txro9kin1dd7ouk9e.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Lambda function is configured to be triggered by an S3 event whenever an image is uploaded from our frontend. The best part is that with the AWS CLI we can add this trigger configuration, since the trigger comes from the S3 bucket that acts as storage for our Amplify project. This step is asynchronous and takes some time to be reflected on our site, but once we click refresh we will see the mock HTML.&lt;/p&gt;
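
&lt;p&gt;A small sketch of how the function can read the bucket and key out of that S3 event; note that object keys arrive URL-encoded, so a key with spaces needs decoding first:&lt;/p&gt;

```python
import urllib.parse

def uploaded_object(event):
    # Pull bucket name and object key out of the S3 "ObjectCreated"
    # event that triggers the function.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["object"]["key"])
    return bucket, key
```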

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpazxou83lh1ru7cmepa5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpazxou83lh1ru7cmepa5.png" alt="Web App 1 full"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, coming back to the external link: I manually created a CloudFront distribution to expose the HTML files generated by ChatGPT. I also needed to modify the Lambda function to store the HTML file in S3 as well, which acts as the origin for CloudFront.&lt;/p&gt;

&lt;p&gt;It is worth mentioning that I'm using Amplify Hosting as the solution to expose my React app. It is also possible to use CloudFront + S3 instead and let Amplify handle that deployment too. If we wanted more control over setting up a CDN and hosting buckets, that would be the option to use, but for this demo, Amplify Console (Hosting) is more than enough... though I might change the deployment later to just use CloudFront + S3 😼.&lt;/p&gt;

&lt;p&gt;Anyways, now if we follow the link, we can actually see the HTML mock file, and the great part is that it is automatically generated after we submit our idea in our frontend 🤖.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4e6ipgax0om0a7eydm5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4e6ipgax0om0a7eydm5.png" alt="Test 1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another sample&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F83xlcum5v3zh5gskc30f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F83xlcum5v3zh5gskc30f.png" alt="Web App 3 full"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another HTML mock file was automatically generated&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxj8k9w1iunyz3hwpefm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxj8k9w1iunyz3hwpefm.png" alt="Test3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons learned
&lt;/h2&gt;

&lt;p&gt;Amplify can help us reduce the development time of our apps. It supports not only React but also other popular frameworks for both web and mobile development.&lt;/p&gt;

&lt;p&gt;If we lack deep knowledge of all the AWS services needed for this kind of application, Amplify can save us from a lot of problems. But if we are an enterprise with several kinds of projects in the same account (for whatever reason), things get tricky.&lt;/p&gt;

&lt;p&gt;Even though we can import existing AWS resources as Amplify components, it can get quite complex to manage the shared resources without interfering with other projects. On top of that, if our application has specific requirements, like being accessible only from the company intranet, this solution might not be the recommended way: we would need to deal with network configuration, maybe adding a WAF for IP filtering, or choose another solution that supports VPC + Direct Connect integration, which would make the deployment quite a mess 🙀.&lt;/p&gt;

&lt;p&gt;Aside from that, ChatGPT is a tool that can help us in some situations, keeping firmly in mind that we should still interpret the results it gives us. Its potential is great if we also consider custom models, but that was out of scope for this demo.&lt;/p&gt;

&lt;p&gt;I hope these lessons are helpful for those who want to try Amplify. Leave a comment if you would like to play with the demo.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>chatgpt</category>
      <category>react</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Yet another journey to the cloud: User Management</title>
      <dc:creator>socraDrk</dc:creator>
      <pubDate>Mon, 27 Mar 2023 18:35:47 +0000</pubDate>
      <link>https://dev.to/aws-builders/yet-another-journey-to-the-cloud-user-management-3cmk</link>
      <guid>https://dev.to/aws-builders/yet-another-journey-to-the-cloud-user-management-3cmk</guid>
      <description>&lt;p&gt;Hi and welcome to my first post in dev.to! &lt;/p&gt;

&lt;p&gt;There are already several posts, videos, and tutorials from different sources that cover the journey to the cloud, so I will not go over the really basic concepts of what the cloud is and similar. Instead, I will focus on some of the concepts that I have seen become a bit of a challenge during the first days of cloud projects.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;User Management&lt;/strong&gt;&lt;/em&gt; is one of the first concepts we need to have clear before working in the cloud and for that we need to differentiate between the types of users we could have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cloud users and roles: via IAM or IAM Identity Center Users (successor to AWS Single Sign-On)&lt;/li&gt;
&lt;li&gt;Operating System (OS) users: e.g. Linux or Windows&lt;/li&gt;
&lt;li&gt;Database users: e.g. AuroraDB, Oracle, or SQL Server&lt;/li&gt;
&lt;li&gt;External APIs or system users: e.g. SAP Hana, On Premises API, BI Tools (Tableau, Power BI, etc)&lt;/li&gt;
&lt;li&gt;And on top of that, our application users, which for this post can belong to either a web application or a REST API&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Cloud Users and Roles
&lt;/h2&gt;

&lt;p&gt;The difference between a user and a role, at a high level, is that a user is associated with a person or an application, while a role has no association with a specific person. I leave this &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id.html"&gt;link&lt;/a&gt; for more information and best practices on when to use a user or a role.&lt;/p&gt;

&lt;p&gt;These &lt;em&gt;users&lt;/em&gt; normally live in the IAM service or IAM Identity Center and are the ones responsible for communicating with the AWS APIs, which makes them ideal for deploying cloud infrastructure like EC2 (ideally with &lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/home.html"&gt;Infrastructure as Code&lt;/a&gt;) or for when our application needs to make direct calls to one of the AWS services like Lambda or S3.&lt;/p&gt;

&lt;p&gt;The permissions our users have are managed with IAM policies, which we can attach to a user, role, or group. One of the &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege"&gt;best practices&lt;/a&gt; is to apply least-privilege permissions, which means assigning just the permissions required for our tasks and nothing more. &lt;/p&gt;
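
&lt;p&gt;As a tiny example of least privilege, here is a policy document that allows reading objects from a single bucket, instead of a blanket &lt;em&gt;s3:*&lt;/em&gt; on everything (the bucket name is hypothetical):&lt;/p&gt;

```python
def read_only_bucket_policy(bucket):
    # A least-privilege IAM policy document: one action, one bucket.
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": [f"arn:aws:s3:::{bucket}/*"],
            }
        ],
    }
```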

&lt;h2&gt;
  
  
  OS Users
&lt;/h2&gt;

&lt;p&gt;Once we have our EC2 instance up and running, we figure out that most probably we need to connect with &lt;em&gt;ec2-user&lt;/em&gt; or &lt;em&gt;ubuntu&lt;/em&gt; on most Linux distros, or &lt;em&gt;Administrator&lt;/em&gt; on Windows.&lt;/p&gt;

&lt;p&gt;These are the local users configured by default in the AMI we choose for our EC2 instance. With them, we can create more OS users, install packages, update the OS, and in general do our sysadmin tasks inside the instance. &lt;/p&gt;

&lt;p&gt;But one important thing to consider is that those users don't exist in IAM, so by default, we cannot access the AWS APIs (though &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html"&gt;we can attach an IAM role to our EC2&lt;/a&gt;, but let's leave that option aside at the moment to not complicate things more).&lt;/p&gt;

&lt;h2&gt;
  
  
  Database users
&lt;/h2&gt;

&lt;p&gt;Next, we have our database. In this scenario, we can use AWS RDS to deploy it, and RDS will create an &lt;em&gt;administrator&lt;/em&gt; user. With this, we can create more users and then, for example, use &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-secrets-manager.html"&gt;AWS Secrets Manager to store those credentials&lt;/a&gt; and even rotate them.&lt;/p&gt;
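
&lt;p&gt;A sketch of how an application could fetch and parse such a secret with boto3; the JSON fields follow the structure Secrets Manager uses for RDS credentials, and the secret name is hypothetical:&lt;/p&gt;

```python
import json

def parse_db_secret(secret_string):
    # RDS secrets in Secrets Manager are JSON documents that carry at
    # least username and password (host, port, and dbname as well).
    secret = json.loads(secret_string)
    return secret["username"], secret["password"]

def get_db_credentials(secret_id):
    # As it would run inside AWS; needs credentials and the
    # secretsmanager:GetSecretValue permission on this secret.
    import boto3
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_id)
    return parse_db_secret(response["SecretString"])
```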

&lt;p&gt;Besides this mechanism, RDS also supports authentication via IAM, which means we would need a valid &lt;em&gt;Cloud User&lt;/em&gt;, and via &lt;em&gt;Kerberos&lt;/em&gt;. This last one might sound scary, so I will leave it for later.&lt;/p&gt;

&lt;h2&gt;
  
  
  External Users
&lt;/h2&gt;

&lt;p&gt;Depending on our organization, we might have more systems to access outside AWS, which means more users to track. It depends on the system: we could be talking about the regular user:password mechanism, certificate-based authentication, or SSO via SAML or OAuth2, among several others.&lt;/p&gt;

&lt;h2&gt;
  
  
  Our Application Users
&lt;/h2&gt;

&lt;p&gt;And last, we have the users of our application. Normally we would have a module in our application dedicated to this purpose. This module would keep its own database of users, check that they are who they claim to be (authentication), and check that they have permission to access the resources they request in our application (authorization). &lt;/p&gt;

&lt;p&gt;Having said that, one of the benefits of deploying our application in AWS is that we can delegate that module to AWS Cognito. I talk a little more about this scenario in a &lt;a href="https://datainsights.de/easy-authentication-on-amazon-ecs/"&gt;previous post&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;With Cognito, we can have a user pool of valid users that can access our application, and configure the application to retrieve the scopes of those users and see whether they can access certain resources.&lt;/p&gt;
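
&lt;p&gt;A minimal sketch of that authorization check, assuming the access token has already been validated and decoded into its claims (the scope names here are hypothetical):&lt;/p&gt;

```python
def has_scope(claims, required):
    # Cognito access tokens carry a space-separated "scope" claim; the
    # application checks it before serving a protected resource.
    return required in claims.get("scope", "").split()
```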

&lt;h2&gt;
  
  
  Users everywhere
&lt;/h2&gt;

&lt;p&gt;At this point, I hope it's clear that the more systems and services we use, the more kinds of users are needed, so we need to be careful from the very beginning of our journey when we decide which users to create, their permissions, and who will use them.&lt;/p&gt;

&lt;p&gt;The developer's dream is to have administrator access, with root or sudo permissions and grant-all on our schemas. That would be rainbows and flowers until the sad truth arrives and we realize why the best practices are in our best interest: keeping our application and environment &lt;strong&gt;&lt;em&gt;safe from attacks&lt;/em&gt;&lt;/strong&gt;, misuse, or errors during deployments, and having proper &lt;strong&gt;&lt;em&gt;observability&lt;/em&gt;&lt;/strong&gt; of what is happening in our account.&lt;/p&gt;

&lt;h3&gt;
  
  
  Centralized user management
&lt;/h3&gt;

&lt;p&gt;But not everything needs to be spread across several places. Remember that scary word: &lt;em&gt;Kerberos&lt;/em&gt;. This mechanism is used by RDS as a way to authenticate to our database, and it requires an Active Directory (AD). Luckily for us, AWS provides a service for that: AWS Directory Service.&lt;/p&gt;

&lt;p&gt;If we enable this integration in RDS, all the users we create in Active Directory will be valid on our database. As of this writing, AuroraDB, MySQL, Postgres, Oracle, and SQL Server are compatible with this.&lt;/p&gt;

&lt;p&gt;The advantage of using AD is that several external systems, like SAP Hana or BI tools, are compatible with it, and we could configure them to use it as well. Even QuickSight, the business analytics service from AWS, can be &lt;a href="https://docs.aws.amazon.com/quicksight/latest/user/aws-directory-service.html"&gt;integrated with Directory Service&lt;/a&gt; in its enterprise edition.&lt;/p&gt;

&lt;p&gt;Besides that, we can also configure our Linux or Windows EC2 instances to join our AD, and even do it automatically during bootstrap. And if we need to provide desktops to several of our users, we can have a look at Amazon WorkSpaces, a managed desktop computing service in the cloud that can also be &lt;a href="https://docs.aws.amazon.com/whitepapers/latest/best-practices-deploying-amazon-workspaces/ad-ds-deployment-scenarios.html"&gt;integrated with AD&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For Cognito, as you might guess, we can integrate it with AD using SAML and Active Directory Federation Services. And with this, we implicitly extend the integration to our applications behind API Gateway, ELB, or mobile apps.&lt;/p&gt;

&lt;p&gt;And last, we can enable IAM Identity Center to communicate with an AD in our central AWS account, so all the child accounts can access it using the SSO mechanism. &lt;/p&gt;

&lt;p&gt;Having configured this, we would need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cloud Users: For the initial setup of the main account and the AD.&lt;/li&gt;
&lt;li&gt;AD Users: Active Directory users and groups for all other cases.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qPeIOnXC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/865uplqlli4zyf65y63o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qPeIOnXC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/865uplqlli4zyf65y63o.png" alt="High Level Architecture" width="880" height="760"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that's it! I won't cover the details of the configuration of this setup here, but please leave a comment if that would be something of interest for another post. And I hope this will help you in your journey to the cloud...&lt;/p&gt;

&lt;p&gt;&lt;em&gt;One User to rule them all, One User to find them, One User to bring them all and in the system bind them.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>beginners</category>
      <category>database</category>
      <category>architecture</category>
    </item>
  </channel>
</rss>
