<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Luis Fernando De Pombo</title>
    <description>The latest articles on DEV Community by Luis Fernando De Pombo (@depombo).</description>
    <link>https://dev.to/depombo</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F501868%2F2b285f0c-591b-4835-8fef-1545285d64a7.webp</url>
      <title>DEV Community: Luis Fernando De Pombo</title>
      <link>https://dev.to/depombo</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/depombo"/>
    <language>en</language>
    <item>
      <title>Quickly compare cost and results of different LLMs on the same prompt</title>
      <dc:creator>Luis Fernando De Pombo</dc:creator>
      <pubDate>Mon, 03 Mar 2025 18:29:09 +0000</pubDate>
      <link>https://dev.to/backmesh/quickly-compare-cost-and-results-of-different-llms-on-the-same-prompt-3lgl</link>
      <guid>https://dev.to/backmesh/quickly-compare-cost-and-results-of-different-llms-on-the-same-prompt-3lgl</guid>
      <description>&lt;p&gt;I often want a quick comparison of different LLMs to see the result+price+performance across different tasks or prompts.&lt;/p&gt;

&lt;p&gt;So I put together &lt;a href="https://llmcomp.backmesh.com" rel="noopener noreferrer"&gt;LLMcomp&lt;/a&gt;, a straightforward site to compare (some) popular LLMs on cost, latency, and other details in one place. It’s still a work in progress, so any suggestions or ideas are welcome, and I can add more LLMs if there is interest. It currently has Claude Sonnet, DeepSeek, and GPT-4o, which are the ones I compare and contrast the most.&lt;/p&gt;

&lt;p&gt;I built it using a web port of AgentOps' &lt;a href="https://github.com/backmesh/tokencost" rel="noopener noreferrer"&gt;tokencost&lt;/a&gt; to estimate LLM usage costs in the browser. The code for the website is open source and roughly 400 LOC.&lt;/p&gt;
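&lt;p&gt;The cost-estimation part is simple arithmetic once you have a per-token price table. A minimal sketch (the model names and prices below are placeholders for illustration, not real rates):&lt;/p&gt;

```javascript
// Minimal sketch of LLM cost estimation: token counts times per-token prices.
// Model names and prices are illustrative placeholders, not current rates.
const PRICE_PER_1M_TOKENS = {
  "model-a": { input: 3.0, output: 15.0 },
  "model-b": { input: 0.5, output: 1.5 },
};

// Cost in USD for a single request, given token counts from a tokenizer.
function estimateCostUSD(model, inputTokens, outputTokens) {
  const p = PRICE_PER_1M_TOKENS[model];
  return (inputTokens * p.input + outputTokens * p.output) / 1_000_000;
}

console.log(estimateCostUSD("model-a", 1_000_000, 0)); // 3
```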

</description>
      <category>deepseek</category>
      <category>ai</category>
      <category>opensource</category>
      <category>discuss</category>
    </item>
    <item>
      <title>3 common mistakes when integrating the OpenAI API with your web or mobile app</title>
      <dc:creator>Luis Fernando De Pombo</dc:creator>
      <pubDate>Fri, 14 Feb 2025 19:02:14 +0000</pubDate>
      <link>https://dev.to/backmesh/openai-api-key-safety-and-mistakes-in-client-side-code-3k2j</link>
      <guid>https://dev.to/backmesh/openai-api-key-safety-and-mistakes-in-client-side-code-3k2j</guid>
      <description>&lt;p&gt;A key concern when building a web or mobile application that uses OpenAI’s language models is protecting your private API key. If you want to keep complexity low and release features quickly, setting up a fully fledged backend just to safeguard an API key can be time consuming. In this article, you’ll learn how to avoid the most common security mistakes when integrating the OpenAI API into a client-side app—without sacrificing simplicity or safety. Overlooking these pitfalls can leave the door open to OpenAI API misuse, which can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lead to unexpected or excessive charges on your OpenAI account&lt;/li&gt;
&lt;li&gt;Violate OpenAI’s Acceptable Use Policy and Terms of Service&lt;/li&gt;
&lt;li&gt;Potentially expose your project to spam or abusive behavior&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Mistake #1: Storing the OpenAI Key in Frontend Code
&lt;/h2&gt;

&lt;p&gt;Putting your API key directly in client-side code (using any type of web or mobile framework) as shown in the sample frontend code below is tempting, but also very risky.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;OpenAI&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;openai&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;OpenAI&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;dangerouslyAllowBrowser&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;apiKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;sk-proj-YOURSECRETKEY&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;completion&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;OpenAI&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;instance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;(...)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Anyone can inspect your code (even if it is minified or obfuscated somehow), extract your key, and abuse it. Don't let &lt;a href="https://community.openai.com/t/repeated-fraudulent-use-of-my-api-keys/510465/3" rel="noopener noreferrer"&gt;this&lt;/a&gt; happen to you. The OpenAI SDK requires the &lt;code&gt;dangerouslyAllowBrowser&lt;/code&gt; flag precisely to discourage this pattern; always add a layer of protection that keeps the secret key away from the end user’s browser or device.&lt;/p&gt;
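&lt;p&gt;Minification doesn't hide the key: it ships as a literal string in your bundle. As a rough sketch (the bundle text and helper below are made up for illustration), extracting it is one regex away:&lt;/p&gt;

```javascript
// Hypothetical example: an OpenAI-style secret key embedded in a minified
// bundle is still a plain string and can be pulled out with a regex.
const bundle =
  'var e=new t({dangerouslyAllowBrowser:!0,apiKey:"sk-proj-YOURSECRETKEY"});';

// OpenAI secret keys start with a recognizable "sk-" prefix.
function findLeakedKeys(source) {
  return source.match(/sk-[A-Za-z0-9-]+/g) || [];
}

console.log(findLeakedKeys(bundle)); // [ 'sk-proj-YOURSECRETKEY' ]
```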

&lt;h2&gt;
  
  
  Mistake #2: Using an Unauthenticated Proxy
&lt;/h2&gt;

&lt;p&gt;Putting your OpenAI API key as a secret in a hosted server or cloud function and calling that endpoint from your client is much better than exposing it directly, but it is still unsafe. Let's take the following client-side application code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;opair&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://functionroute.yourcloud.com/v1/chat/completions&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;POST&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Content-Type&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;application/json&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({...})&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alongside the cloud function OpenAI API proxy it is calling:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;route&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;yourcloud&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nx"&gt;route&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;opair&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`https://api.openai.com/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;POST&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Content-Type&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;application/json&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Authentication&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`Bearer &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;OPENAI_SECRET_KEY&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;

    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;clone&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;opair&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since this proxy is publicly accessible, an attacker can bypass your app entirely and use your OpenAI API access through &lt;code&gt;https://functionroute.yourcloud.com/&lt;/code&gt; without ever knowing your private key.&lt;/p&gt;
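&lt;p&gt;To make the attack concrete, here is a hypothetical sketch (URL taken from the example above) of the request an attacker would script against the open proxy. Note what is missing: no API key, no user token, nothing that ties the request to your app:&lt;/p&gt;

```javascript
// Hypothetical attacker script against the unauthenticated proxy.
// The proxy forwards the request to OpenAI and your account pays for it.
const PROXY_URL = "https://functionroute.yourcloud.com/v1/chat/completions";

// Build the exact request an attacker would send; no credentials needed.
function buildAbuseRequest(prompt) {
  return {
    url: PROXY_URL,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: "gpt-4o",
        messages: [{ role: "user", content: prompt }],
      }),
    },
  };
}

// An attacker would simply loop: await fetch(req.url, req.options)
const req = buildAbuseRequest("expensive prompt");
```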

&lt;h2&gt;
  
  
  Mistake #3: Using an Authenticated Proxy without Permissions
&lt;/h2&gt;

&lt;p&gt;Putting your OpenAI API key as a secret in a hosted server or cloud function and adding authentication so only your users can call the proxy endpoint is quite common and still much better than the setups laid out thus far. However, it is &lt;em&gt;still&lt;/em&gt; unsafe. Let's take the following frontend code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;supabase&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;supabase-js&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;jwt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;supabase&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;session&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nx"&gt;access_token&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;opair&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://functionroute.yourcloud.com/v1/chat/completion&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;POST&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Content-Type&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;application/json&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Authentication&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`Bearer &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;jwt&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({...}),&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alongside the cloud function OpenAI API authenticated proxy it is calling:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;route&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;yourcloud&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;createClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@supabase/supabase-js&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;supabase&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createSupabaseClient&lt;/span&gt;&lt;span class="p"&gt;(...);&lt;/span&gt;

&lt;span class="nx"&gt;route&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;authHeader&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;authorization&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// 1. Get the token from the Authorization header&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;authHeader&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;authorization&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;jwt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;authHeader&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt; &lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="c1"&gt;// Bearer &amp;lt;token&amp;gt;&lt;/span&gt;

    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;jwt&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;401&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;No token provided&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c1"&gt;// 2. Verify the token using the JWKS&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;supabase&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;jwt&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;createResponse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Unauthorized&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;401&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="c1"&gt;// 3. The token is valid&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;opair&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`https://api.openai.com/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;POST&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Content-Type&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;application/json&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Authentication&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`Bearer &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;OPENAI_SECRET_KEY&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;

      &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;clone&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;opair&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;401&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Invalid token&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This backend integration for the OpenAI API looks fine at first glance and provides some safety. However, there are two problems with it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;There are no limits on how many times a given user can call your OpenAI API proxy. Any user can call it up to the default account &lt;a href="https://platform.openai.com/docs/guides/rate-limits" rel="noopener noreferrer"&gt;rate limits&lt;/a&gt; for the OpenAI API associated with your private API key by simply grabbing their JWT and calling the proxy at &lt;code&gt;https://functionroute.yourcloud.com/v1/&lt;/code&gt;. JWTs, or authentication tokens, typically expire within an hour, so someone can generate a token from your client-side code, (mis)use it until it expires, then rinse and repeat.&lt;/li&gt;
&lt;li&gt;OpenAI, and other LLM APIs, have endpoints that allow uploading &lt;a href="https://platform.openai.com/docs/api-reference/files" rel="noopener noreferrer"&gt;files&lt;/a&gt; for augmented analysis and creating &lt;a href="https://platform.openai.com/docs/api-reference/threads" rel="noopener noreferrer"&gt;threads&lt;/a&gt; with chat history for autonomous assistants. This backend does not enforce any permissions or access control over &lt;em&gt;which&lt;/em&gt; users uploaded or created &lt;em&gt;which&lt;/em&gt; private resources in the OpenAI API. Any user with just their JWT can exploit this vulnerability and see every file and conversation ever created with your OpenAI API key, infringing on the data privacy of your end users. The following frontend code calls the OpenAI API &lt;a href="https://platform.openai.com/docs/api-reference/files/list" rel="noopener noreferrer"&gt;list files&lt;/a&gt; endpoint, which returns all the files ever uploaded regardless of which user uploaded them:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;supabase&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;supabase-js&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;jwt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;supabase&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;session&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nx"&gt;access_token&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;opair&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://functionroute.yourcloud.com/v1/files&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;GET&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Content-Type&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;application/json&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Authentication&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`Bearer &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;jwt&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({...}),&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Building a safe integration
&lt;/h2&gt;

&lt;p&gt;To prevent the vulnerabilities laid out thus far, you should implement the following safeguards in your backend:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Implement Authentication&lt;/strong&gt;: Require valid credentials (e.g., token-based auth) so you know exactly who’s accessing your services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Track Usage and Limits per User&lt;/strong&gt;: Prevent users from racking up high API costs by restricting the number of requests per minute per user.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enforce Secure Permissions:&lt;/strong&gt; Ensure only the user who created a resource can view or modify it to protect against unauthorized access or data leaks.&lt;/li&gt;
&lt;/ul&gt;
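&lt;p&gt;As an in-memory sketch of the last two safeguards (the window size, limit, and helper names below are made up; a real backend would use a shared store like Redis rather than process-local Maps):&lt;/p&gt;

```javascript
// In-memory sketch: per-user rate limiting and per-resource ownership checks.
const WINDOW_MS = 60_000; // 1-minute window (assumed)
const MAX_REQUESTS = 5;   // requests per user per window (assumed)

const requestLog = new Map();     // userId to array of request timestamps
const resourceOwners = new Map(); // resourceId to the userId that created it

// Track usage and limits per user: reject once the window is full.
function allowRequest(userId, now = Date.now()) {
  const recent = (requestLog.get(userId) || []).filter(
    (t) => t > now - WINDOW_MS
  );
  if (recent.length >= MAX_REQUESTS) return false;
  recent.push(now);
  requestLog.set(userId, recent);
  return true;
}

// Enforce secure permissions: only the creator may touch a resource.
function recordOwner(userId, resourceId) {
  resourceOwners.set(resourceId, userId);
}
function canAccess(userId, resourceId) {
  return resourceOwners.get(resourceId) === userId;
}
```

&lt;p&gt;A proxy handler like the one from the authenticated-proxy example would call &lt;code&gt;allowRequest(user.id)&lt;/code&gt; after verifying the JWT, and &lt;code&gt;canAccess(user.id, fileId)&lt;/code&gt; before forwarding any file or thread request.&lt;/p&gt;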

&lt;h2&gt;
  
  
  Building a simple integration
&lt;/h2&gt;

&lt;p&gt;Implementing a proper OpenAI API integration in a backend can be time-consuming. A Backend-as-a-Service (BaaS) like Backmesh abstracts this complexity, handling authentication, rate limits, and permission enforcement for you without requiring you to build or maintain a backend. The idea is straightforward:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Store your private OpenAI API key securely in a protected configuration that users cannot see.&lt;/li&gt;
&lt;li&gt;Your front end sends a request to the Backmesh endpoint on behalf of your user.&lt;/li&gt;
&lt;li&gt;The endpoint verifies that the request belongs to one of your users, that the user hasn't gone past their limits (e.g. 5 requests per minute), and that the user is not requesting resources they did not previously create. Only if all of this is true does Backmesh attach your secret API key from the secure store to the request.&lt;/li&gt;
&lt;li&gt;The request goes to OpenAI, then Backmesh returns the OpenAI API response to your front end with streaming support.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The resulting client-side code, taken from one of our &lt;a href="https://dev.to/docs/supabase"&gt;tutorials&lt;/a&gt;, looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;OpenAI&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;openai&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;supabase&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;supabase-js&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;BACKMESH_PROXY_URL&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://edge.backmesh.com/v1/proxy/gbBbHCDBxqb8zwMk6dCio63jhOP2/wjlwRswvSXp4FBXwYLZ1/v1&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;jwt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;supabase&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;session&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nx"&gt;access_token&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;OpenAI&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;httpAgent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;HttpsProxyAgent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;BACKMESH_PROXY_URL&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="na"&gt;dangerouslyAllowBrowser&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;apiKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;jwt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From your perspective, there’s “no backend” to manage: no servers, no DevOps, no complicated configurations. Under the hood, Backmesh acts as a secure middleman between your front end and the OpenAI API. Read more about Backmesh's security considerations &lt;a href="https://dev.to/docs/security"&gt;here&lt;/a&gt; or check out the open source code on &lt;a href="https://github.com/backmesh/backmesh" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>openai</category>
      <category>programming</category>
      <category>react</category>
    </item>
    <item>
      <title>Built a ChatGPT-like app on Firestore with no backend/functions at ~1k LOC</title>
      <dc:creator>Luis Fernando De Pombo</dc:creator>
      <pubDate>Thu, 30 Jan 2025 19:09:22 +0000</pubDate>
      <link>https://dev.to/depombo/built-a-chatgpt-like-app-on-firestore-with-no-backendfunctions-at-1k-loc-5228</link>
      <guid>https://dev.to/depombo/built-a-chatgpt-like-app-on-firestore-with-no-backendfunctions-at-1k-loc-5228</guid>
      <description>&lt;p&gt;Source Code: &lt;a href="https://github.com/backmesh/flutter-chatgpt" rel="noopener noreferrer"&gt;https://github.com/backmesh/flutter-chatgpt&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Live Demo: &lt;a href="https://flutter-chatgpt.pages.dev" rel="noopener noreferrer"&gt;https://flutter-chatgpt.pages.dev&lt;/a&gt;&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>programming</category>
      <category>opensource</category>
      <category>flutter</category>
    </item>
    <item>
      <title>npm i anthropic-react-native</title>
      <dc:creator>Luis Fernando De Pombo</dc:creator>
      <pubDate>Tue, 10 Sep 2024 15:42:29 +0000</pubDate>
      <link>https://dev.to/depombo/npm-i-anthropic-react-native-37na</link>
      <guid>https://dev.to/depombo/npm-i-anthropic-react-native-37na</guid>
      <description>&lt;p&gt;I recently published a new package on npm that brings the Anthropic APIs to React Native without polyfills.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/backmesh/anthropic-react-native" rel="noopener noreferrer"&gt;https://github.com/backmesh/anthropic-react-native&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Today the library supports chat streaming, normal chat completions and file upload with more endpoints coming soon. The goal of the library is to follow the &lt;a href="https://github.com/anthropics/anthropic-sdk-typescript" rel="noopener noreferrer"&gt;Node SDK&lt;/a&gt; wherever possible while taking advantage of &lt;a href="https://github.com/binaryminds/react-native-sse" rel="noopener noreferrer"&gt;React Native SSE&lt;/a&gt; for streaming where the Anthropic Node SDK does not work. Lmk what you think or if this will be useful to you!&lt;/p&gt;

</description>
      <category>anthropic</category>
      <category>reactnative</category>
      <category>mobile</category>
      <category>llm</category>
    </item>
    <item>
      <title>npm i openai-react-native</title>
      <dc:creator>Luis Fernando De Pombo</dc:creator>
      <pubDate>Mon, 26 Aug 2024 14:48:04 +0000</pubDate>
      <link>https://dev.to/depombo/npm-i-openai-react-native-3gop</link>
      <guid>https://dev.to/depombo/npm-i-openai-react-native-3gop</guid>
      <description>&lt;p&gt;We just published this package. It is a React Native OpenAI API client without polyfills since the official OpenAI Node SDK does not support React Native. See this &lt;a href="https://github.com/openai/openai-node/issues/355" rel="noopener noreferrer"&gt;issue&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Today the library supports chat streaming, normal chat completions and Expo file upload, with more endpoints coming soon. The goal of the library is to follow the &lt;a href="https://github.com/openai/openai-node" rel="noopener noreferrer"&gt;OpenAI Node SDK&lt;/a&gt; wherever possible without using any polyfills. It relies on &lt;a href="https://github.com/binaryminds/react-native-sse" rel="noopener noreferrer"&gt;React Native SSE&lt;/a&gt; for streaming and the &lt;a href="https://docs.expo.dev/versions/latest/sdk/filesystem/" rel="noopener noreferrer"&gt;Expo FileSystem&lt;/a&gt; for file upload, so the OpenAI API can be called through a proxy from React Native with both streaming and file upload support. Lmk what you think or what you would like to see!&lt;/p&gt;
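&lt;p&gt;Under the hood, streamed chat completions arrive as server-sent events: the API emits &lt;code&gt;data:&lt;/code&gt; lines containing JSON deltas and a final &lt;code&gt;data: [DONE]&lt;/code&gt; sentinel. Here is a minimal sketch of assembling a response from those lines; the parsing is illustrative, since the library handles this for you:&lt;/p&gt;

```javascript
// Minimal sketch of assembling a streamed chat completion from SSE lines.
// The OpenAI API streams "data: {json}" lines and ends with "data: [DONE]";
// a library like react-native-sse surfaces these as message events.
function assembleStream(sseLines) {
  let text = "";
  for (const line of sseLines) {
    if (!line.startsWith("data: ")) continue; // skip blank and comment lines
    const payload = line.slice("data: ".length);
    if (payload === "[DONE]") break; // end-of-stream sentinel
    const delta = JSON.parse(payload).choices[0].delta;
    if (delta.content) {
      text += delta.content; // each event carries a small content delta
    }
  }
  return text;
}
```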

&lt;p&gt;&lt;a href="https://github.com/backmesh/openai-react-native" rel="noopener noreferrer"&gt;https://github.com/backmesh/openai-react-native&lt;/a&gt;&lt;/p&gt;

</description>
      <category>reactnative</category>
      <category>expo</category>
    </item>
    <item>
      <title>Securely call private key APIs like openAI from flutter without a backend</title>
      <dc:creator>Luis Fernando De Pombo</dc:creator>
      <pubDate>Wed, 14 Aug 2024 19:22:23 +0000</pubDate>
      <link>https://dev.to/depombo/no-more-needless-backends-for-simple-flutter-apps-12nb</link>
      <guid>https://dev.to/depombo/no-more-needless-backends-for-simple-flutter-apps-12nb</guid>
      <description>&lt;p&gt;As of August 2024, none of the major Backend-as-a-service platforms for developing Flutter apps like Supabase, Cloudflare, Vercel, or Firebase support Dart cloud functions. Additionally, many Flutter packages can't be utilized in Dart backends due to the absence of UI dependencies. As a result, developers using Flutter often have to rewrite model and controller logic to handle database operations a second time in Dart, Javascript or Python. This leads to duplicated effort and a slower development process which is the entire purpose of these services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introducing Backmesh
&lt;/h2&gt;

&lt;p&gt;I got tired of writing a cloud function in JavaScript or Python for every Flutter project that called some private key API, so I built &lt;a href="https://backmesh.com" rel="noopener noreferrer"&gt;Backmesh&lt;/a&gt;. Backmesh is a minimal SaaS that lets Flutter developers safely call private key APIs, such as OpenAI in the example below, without writing any backend code in a server or cloud function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Auth Provider: Firebase&lt;/span&gt;
&lt;span class="c1"&gt;// App Type: Flutter Dart&lt;/span&gt;
&lt;span class="c1"&gt;// Private Key API: OpenAI&lt;/span&gt;

&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="s"&gt;'package:firebase_auth/firebase_auth.dart'&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="s"&gt;'package:dart_openai/dart_openai.dart'&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;


&lt;span class="n"&gt;OpenAI&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;baseUrl&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;
  &lt;span class="s"&gt;"https://edge.backmesh.com/proxyname"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// "https://api.openai.com" is the default one.&lt;/span&gt;
&lt;span class="c1"&gt;// set api secret key to jwt&lt;/span&gt;
&lt;span class="n"&gt;OpenAI&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;apiKey&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;FirebaseAuth&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;instance&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;currentUser&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getIdToken&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;OpenAI&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;instance&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;(...)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are many legitimate use cases for backends, but the primary use case for many Flutter apps that require a backend is just being able to securely call an external, private key API. Backmesh allows Flutter apps, or any client app written in any language, to do this without a backend by storing the private API key and providing a JWT protected proxy with user-scoped access using the app's authentication provider.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security Considerations
&lt;/h2&gt;

&lt;p&gt;Let's quickly go over authorization and the two different types of authentication:&lt;/p&gt;

&lt;p&gt;Authentication verifies someone is who they say they are.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;User authentication means we can identify which user is making a request and whether the user in question is valid.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Client app authentication means we can verify that the request is originating from a valid and untampered app client.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Authorization verifies that someone has access to something specific and grants access if applicable. Authorization requires authentication to happen first.&lt;/p&gt;

&lt;p&gt;For example, Firebase provides user authentication, but only properly configured Firestore security rules provide authorization to your database. Client app authentication, in turn, requires adding Firebase App Check. More about that &lt;a href="https://firebase.google.com/docs/firestore/security/overview" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In the case of a public API proxy, Backmesh performs user authentication to ensure that requests to the proxy securing your private key API come from one of your users. On its own, this does not provide authorization, i.e. which specific users are allowed to make which requests to the proxy, or how many requests a specific user can make. However, Backmesh lets you set rate limits across all your users to reasonably protect your API resources, e.g. no user should be calling a given API more than X times per hour. It is also possible to integrate with Stripe to set authorization access rules for users based on their payment plan. If you are interested in this, email hello at backmesh dot com.&lt;/p&gt;
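&lt;p&gt;As a toy illustration of what a user-scoped authorization rule could look like (the data model below is hypothetical, not Backmesh's):&lt;/p&gt;

```javascript
// Toy user-scoped authorization: a user may only access resources they created.
// The resource store and names are illustrative stand-ins.
const resources = new Map([
  ["thread-1", { owner: "alice" }],
  ["thread-2", { owner: "bob" }],
]);

function authorize(userId, resourceId) {
  const resource = resources.get(resourceId);
  if (!resource) return false; // deny unknown resources by default
  return resource.owner === userId; // only the creator is granted access
}
```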

</description>
      <category>flutter</category>
      <category>webdev</category>
      <category>mobile</category>
    </item>
    <item>
      <title>cloud infrastructure that proposes verifiable bugfixes</title>
      <dc:creator>Luis Fernando De Pombo</dc:creator>
      <pubDate>Tue, 05 Mar 2024 19:24:59 +0000</pubDate>
      <link>https://dev.to/depombo/cloud-infrastructure-that-proposes-verifiable-bugfixes-2gfo</link>
      <guid>https://dev.to/depombo/cloud-infrastructure-that-proposes-verifiable-bugfixes-2gfo</guid>
      <description>&lt;p&gt;LLMs are thus far the fastest adopted technology in history after OpenAI reported that ChatGPT achieved 100 million users within 2 months of its release. Chief among the uses cases for LLMs is code generation. Today, we mostly generate code with LLMs using chat-like interfaces within a terminal, website, or code editor. However as impressive as LLMs are at coding today, any developer that has used LLMs for code generation knows that even with the right context the vast majority of the time LLMs don't generate code that works as-is. Instead the generated code either needs to be throughly massaged, or it serves as inspiration for the code we ultimately end up writing. In some cases, the LLM-generated code seems to come close to the desired solution and we verify it by perusing it and/or manually running it within our codebases. However, this rarely works. One Princeton &lt;a href="https://arxiv.org/pdf/2310.06770.pdf" rel="noopener noreferrer"&gt;study&lt;/a&gt; that tasked two different LLMs to solve 2,294 real GitHub issues found that Claude 2 and GPT-4 solved only 4.8% and 1.7% of the issues respectively.&lt;/p&gt;

&lt;p&gt;Let's look at the most common types of LLM code generation, and, for debugging, explore a workflow that automatically verifies LLM-generated bugfixes outside of a chat or visual interface where we can't see the many mistakes LLMs make. These workflows to programmatically verify LLM-generated code might not be the norm today, but they will become more attractive as the price of invoking LLMs goes down and the generative coding abilities of LLMs improve.&lt;/p&gt;

&lt;h2&gt;
  
  
  The different types of LLM code generation
&lt;/h2&gt;

&lt;p&gt;Let's take a closer look at the different scenarios in which developers use LLMs for code generation:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1) "I know how to write this code but it's repetitive or cumbersome to write"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Code editor AI copilots shine here. This is one of the most common scenarios and we can easily verify the accuracy of the code as there are often many code samples that can be given to the LLM. Some examples include implementing a function with a well defined purpose, calling an API and then performing some sort of data transformation on the response, or generating self-contained components in React or some other reactive frontend framework.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2) "I don't know how to write this code. Help"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This use case is tricky because the LLM can often lead us astray unless we reason from first principles and try to understand the underlying technology, API and/or framework. However, we often don't have many alternatives, as searching Google and Stack Overflow probably didn't help much either. This often ends up being a conversation with a lot of back and forth where generated code is interleaved with explanations and ideas for how to approach the problem in ways that may or may not work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3) "Write tests for this code"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This one is straightforward for the LLM to understand, but there is still no easy way to know whether the generated tests do a sufficiently good job of testing the provided code, even if they pass. It is up to the developer to be exhaustive in describing the different ways the provided code should be tested and what input should be provided for unit tests. I worked on a &lt;a href="https://github.com/alantech/marsha" rel="noopener noreferrer"&gt;syntax&lt;/a&gt; that combined English and functional notation into a constrained format that was compiled by LLMs into tested Python programs. We tried to make sure the LLM had enough information to generate valid tests by failing to compile the syntax if examples were not provided by the user, but there was still no way of verifying the LLM-generated unit tests themselves.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4) "Make this error go away"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This one is straightforward for the LLM to understand. The developer often just provides the stacktrace and can sometimes add more context by including the code referenced in the stacktrace. LLMs tend to do better when the error generated belongs to a commonly used programming language and/or framework as training datasets from Reddit and Stackoverflow are likely richer for these cases. The LLM can often output instructions or sample code that the developer can implement to quickly test if the bugfix works. Like the 2nd use case on this list, this use case can also end up being a conversation with a lot of back and forth where code generated by the LLM is interleaved with suggestions on ways to resolve the error that may or may not work.&lt;/p&gt;

&lt;p&gt;I'm particularly interested in automating the verification of LLM-generated code for the last use case because the functionality of LLM-generated code when debugging is binary and always verifiable: did the error go away, and did any new errors arise?&lt;/p&gt;

&lt;h2&gt;
  
  
  Automatically verifying LLM-generated bugfixes
&lt;/h2&gt;

&lt;p&gt;Automatically debugging exceptions within a containerized production application is possible with an opinionated cloud infrastructure and error monitoring setup that collects the necessary data to verify bugfixes and behind the scenes tasks an LLM to generate a bugfix for each of the recorded production errors. A pull request to the application's codebase is submitted if and only if a bugfix verifiably fixes a given error. Pull requests ideally run with unit tests that ensure there are no breaking changes or regressions. Finally, one of the maintainers can take a closer look at the LLM's proposal for a verified bugfix to a particular error and decide to discard it, further edit it or merge it.&lt;/p&gt;

&lt;p&gt;In order for the cloud infrastructure hosting the production application in question to verify the LLM-generated bugfixes, the platform needs to be able to parse and collect incoming requests and outgoing network traffic using eBPF (more on that later). The infrastructure offering can then create branches out of the main git branch of the application's codebase with potential bugfixes and deploy them to sandboxed development environments behind the scenes. Finally, the cloud offering replays the incoming requests for the sandboxed environments while simultaneously mocking the outgoing network traffic using previously recorded values from the time the error took place. If no error is thrown, the cloud offering opens a pull request for the maintainers to review. This is a rough sketch of what I am describing:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2blv7czocy0dp4101owr.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2blv7czocy0dp4101owr.jpg" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A cloud infrastructure platform that is able to generate and verify LLM bugfixes while also providing the best UX requires two important features:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1) git-based deployment process for containerized apps&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A git-centered deployment experience allows us to deploy by pushing a git branch to a remote repository without having to configure a CI/CD system. This workflow has been around since 2009 thanks to Heroku, the grandfather of all PaaS. This DevOps abstraction tucks away a fair amount of complexity by making some assumptions about our app. For most cases, it really should be as simple as some form of pushing to a branch with a Dockerfile or docker-compose.yml in it (or nothing in particular if using &lt;a href="https://nixpacks.com/docs" rel="noopener noreferrer"&gt;Nixpacks&lt;/a&gt;) without having to deal with the fairly standard DevOps process of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;building the docker image from a Dockerfile&lt;/li&gt;
&lt;li&gt;pushing the image to a repository&lt;/li&gt;
&lt;li&gt;updating the process in a VM or K8s pod with the new code&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I do wish more container cloud offerings today were as easy to use as the git-based webapp offerings of Netlify or Vercel. &lt;a href="https://fly.io/docs/languages-and-frameworks/dockerfile/" rel="noopener noreferrer"&gt;Fly.io&lt;/a&gt; is one good example of a provider with a seamless, git-based deployment process for containerized apps. Cloud infrastructure that abstracts away the repetitive parts of the deployment process not only has superior UX, but it can easily provide features that save a lot of development time, like &lt;a href="https://vercel.com/docs/deployments/preview-deployments" rel="noopener noreferrer"&gt;preview deployments&lt;/a&gt;. Most importantly, a git-based deployment process also makes it possible to programmatically replicate deployments behind the scenes. This is imperative for proposing verifiable bugfixes: we can test whether an LLM can generate code that fixes a given bug, and do so repeatedly for all bugs, without an overly complicated CI/CD workflow or some sort of YAML spaghetti.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2) replay of incoming traffic and mocking of outgoing traffic via eBPF-based, SDK-less error monitoring&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Error monitoring tools are used primarily to triage application errors. However, some offerings can also collect network traffic to later replicate production conditions in sandboxed staging environments and improve the reliability of upcoming software releases. Most error monitoring tools offer SDKs to explicitly instrument errors and networking calls within an app, which can be tedious. However, some providers like &lt;a href="https://www.datadoghq.com/knowledge-center/ebpf/#how-to-enjoy-the-benefits-of-ebpf-today" rel="noopener noreferrer"&gt;DataDog&lt;/a&gt; have SDK-less error monitoring offerings built on &lt;a href="https://ebpf.io" rel="noopener noreferrer"&gt;eBPF&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;eBPF is a Linux technology that can run sandboxed programs in the kernel without changing kernel source. This opens the door for powerful error monitoring tools that hook directly into kernel syscalls without any additional instrumentation in a containerized codebase. However, fully SDK-less error monitoring via eBPF programs requires privileged, or sudo, access to hook into the kernel and then spawn the containerized app as a child process. This can be a complicated setup, as most containerized deployment systems provide a sandboxed, or unprivileged, environment to run apps on. As such, SDK-less error monitoring is less common and typically reserved for business-critical, high-performance needs. However, an SDK-less setup provides both a better UX and the ability to replay incoming traffic and mock outgoing traffic behind the scenes, which is necessary to verify the LLM's proposed bugfixes programmatically. Mocking all outgoing network traffic is particularly tricky, as any communication to downstream databases and services needs to be properly parsed, recorded and stubbed, which would not be possible without eBPF hooks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Devtools that can automatically verify LLM-generated code
&lt;/h2&gt;

&lt;p&gt;Certain devtools within domains where the functionality of LLM-generated code can be verified have been on the rise, as they can save developers countless hours. This trend will continue as the price of invoking LLMs goes down and the generative coding abilities of LLMs improve. &lt;a href="https://www.figma.com/community/search?resource_type=plugins&amp;amp;sort_by=relevancy&amp;amp;query=Figma+to+code&amp;amp;editor_type=all&amp;amp;price=all&amp;amp;creators=all" rel="noopener noreferrer"&gt;Figma to code plugins&lt;/a&gt; are now abundant and significantly better than only a couple of years ago thanks to newer generative AI models that can understand design assets and use them to verify the correctness of LLM-generated frontend code. Similarly, we can verify code generated by an LLM tasked with fixing application errors because the expected output is simply that the error goes away and no new errors arise.&lt;/p&gt;

&lt;p&gt;Drop me a line at lfdepombo at gmail dot com if you want to learn more about a cloud infrastructure offering I have been working on that proposes verifiable bugfixes. I would also love to hear your thoughts on any type of programmatic verification for LLM-generated code, or on building abstractions with LLMs without directly exposing a chat, or visual, interface to end-users.&lt;/p&gt;

</description>
      <category>coding</category>
      <category>programming</category>
      <category>chatgpt</category>
      <category>cloud</category>
    </item>
    <item>
      <title>LLMs as compilers</title>
      <dc:creator>Luis Fernando De Pombo</dc:creator>
      <pubDate>Mon, 31 Jul 2023 18:51:06 +0000</pubDate>
      <link>https://dev.to/depombo/llms-as-compilers-547f</link>
      <guid>https://dev.to/depombo/llms-as-compilers-547f</guid>
      <description>&lt;h2&gt;
  
  
  a brief history of our programming abstractions
&lt;/h2&gt;

&lt;p&gt;It is hard to fathom, but in a not-so-distant past we considered &lt;a href="https://en.wikipedia.org/wiki/Assembly_language" rel="noopener noreferrer"&gt;assembly languages&lt;/a&gt; a high-level programming language when compared to its predecessor, &lt;a href="https://en.wikipedia.org/wiki/Machine_code" rel="noopener noreferrer"&gt;machine code&lt;/a&gt;. In 1947, assembly languages introduced an abstraction layer to simplify programming computers by letting us use numbers, symbols, and abbreviations to talk to computers instead of just 0s and 1s as had been the case with machine code. Assembly languages gave birth to computer science as a field in the digital realm if we exclude analog technologies from the 1800s and early 1900s like &lt;a href="https://en.wikipedia.org/wiki/Mechanical_calculator" rel="noopener noreferrer"&gt;mechanical calculators&lt;/a&gt;, &lt;a href="https://en.wikipedia.org/wiki/Abacus" rel="noopener noreferrer"&gt;abaci&lt;/a&gt;, &lt;a href="https://en.wikipedia.org/wiki/Tabulating_machine" rel="noopener noreferrer"&gt;tabulators&lt;/a&gt;, etc. However, assembly languages were still very low-level and required a lot of effort to write and maintain.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight c"&gt;&lt;code&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;A&lt;/span&gt; &lt;span class="n"&gt;sample&lt;/span&gt; &lt;span class="err"&gt;“&lt;/span&gt;&lt;span class="n"&gt;hello&lt;/span&gt; &lt;span class="n"&gt;world&lt;/span&gt;&lt;span class="err"&gt;”&lt;/span&gt; &lt;span class="n"&gt;program&lt;/span&gt; &lt;span class="n"&gt;that&lt;/span&gt; &lt;span class="n"&gt;writes&lt;/span&gt; &lt;span class="n"&gt;to&lt;/span&gt; &lt;span class="n"&gt;stdout&lt;/span&gt; &lt;span class="n"&gt;using&lt;/span&gt; &lt;span class="err"&gt;`&lt;/span&gt;&lt;span class="n"&gt;write&lt;/span&gt;&lt;span class="err"&gt;`&lt;/span&gt; &lt;span class="n"&gt;then&lt;/span&gt; &lt;span class="n"&gt;exits&lt;/span&gt; &lt;span class="n"&gt;using&lt;/span&gt; &lt;span class="err"&gt;`&lt;/span&gt;&lt;span class="n"&gt;exit&lt;/span&gt;&lt;span class="err"&gt;`&lt;/span&gt; &lt;span class="n"&gt;on&lt;/span&gt; &lt;span class="n"&gt;Linux&lt;/span&gt; &lt;span class="n"&gt;x86&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;courtesy&lt;/span&gt; &lt;span class="n"&gt;of&lt;/span&gt; &lt;span class="n"&gt;Jim&lt;/span&gt; &lt;span class="n"&gt;Fisher&lt;/span&gt; &lt;span class="n"&gt;https&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//jameshfisher.com/2018/03/10/linux-assembly-hello-world/&lt;/span&gt;

&lt;span class="n"&gt;global&lt;/span&gt; &lt;span class="n"&gt;_start&lt;/span&gt;

&lt;span class="n"&gt;section&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;

&lt;span class="n"&gt;_start&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
  &lt;span class="n"&gt;mov&lt;/span&gt; &lt;span class="n"&gt;rax&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;        &lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;mov&lt;/span&gt; &lt;span class="n"&gt;rdi&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;        &lt;span class="p"&gt;;&lt;/span&gt;   &lt;span class="n"&gt;STDOUT_FILENO&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;mov&lt;/span&gt; &lt;span class="n"&gt;rsi&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt;      &lt;span class="p"&gt;;&lt;/span&gt;   &lt;span class="s"&gt;"Hello, world!&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;mov&lt;/span&gt; &lt;span class="n"&gt;rdx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;msglen&lt;/span&gt;   &lt;span class="p"&gt;;&lt;/span&gt;   &lt;span class="k"&gt;sizeof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Hello, world!&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;syscall&lt;/span&gt;           &lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="n"&gt;mov&lt;/span&gt; &lt;span class="n"&gt;rax&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt;       &lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;exit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;mov&lt;/span&gt; &lt;span class="n"&gt;rdi&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;        &lt;span class="p"&gt;;&lt;/span&gt;   &lt;span class="n"&gt;EXIT_SUCCESS&lt;/span&gt;
  &lt;span class="n"&gt;syscall&lt;/span&gt;           &lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="n"&gt;section&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rodata&lt;/span&gt;
  &lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt; &lt;span class="s"&gt;"Hello, world!"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;
  &lt;span class="n"&gt;msglen&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;equ&lt;/span&gt; &lt;span class="err"&gt;$&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt;
        &lt;span class="err"&gt;`&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then came Fortran I, whose development began in 1953, with a new &lt;a href="https://en.wikipedia.org/wiki/Input/output" rel="noopener noreferrer"&gt;I/O&lt;/a&gt; interface abstraction that let us more eloquently ask computers to perform and display numerical calculations. &lt;a href="https://en.wikipedia.org/wiki/Fortran" rel="noopener noreferrer"&gt;Fortran I&lt;/a&gt; was the first &lt;a href="https://en.wikipedia.org/wiki/General-purpose_programming_language" rel="noopener noreferrer"&gt;general-purpose programming language&lt;/a&gt; of its kind, and it would later become the common ancestor of many more of these syntaxes, including C in 1972. However, as we started to write more complex software with Fortran I, manual memory management became increasingly cumbersome. &lt;a href="https://en.wikipedia.org/wiki/John_McCarthy_(computer_scientist)" rel="noopener noreferrer"&gt;John McCarthy&lt;/a&gt; came up with Lisp in 1959. Lisp was the second general-purpose programming language to exist after Fortran I and provided a higher-level abstraction that automatically managed memory for us using a new concept at the time called &lt;a href="https://en.wikipedia.org/wiki/Garbage_collection_(computer_science)" rel="noopener noreferrer"&gt;garbage collection&lt;/a&gt;. Automatic memory management wasn't (and still isn't) as performant or efficient as manual allocation and deallocation of memory, but it provided a superior software development experience.&lt;/p&gt;

&lt;p&gt;The evolution of programming languages has been a constant march towards abstractions that make programming easier and more accessible. &lt;a href="https://en.wikibooks.org/wiki/Fortran/memory_management" rel="noopener noreferrer"&gt;Newer versions of Fortran&lt;/a&gt; and the majority of the &lt;a href="https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages" rel="noopener noreferrer"&gt;most used programming languages today&lt;/a&gt; use some descendant of the original garbage collector invented by John McCarthy to automatically manage computer memory. Programming abstractions build on top of each other over time, and the old ones continue to live underneath the newer ones that we use today. And so the high-level languages of yesterday become the build artifacts and implementation details of today.&lt;/p&gt;

&lt;h2&gt;
  
  
  how deterministic does a compiler need to be?
&lt;/h2&gt;

&lt;p&gt;LLMs are already changing the way we interact with code, as witnessed by the &lt;a href="https://survey.stackoverflow.co/2023/#section-most-popular-technologies-ai-developer-tools" rel="noopener noreferrer"&gt;advent&lt;/a&gt; of &lt;a href="https://github.com/features/copilot" rel="noopener noreferrer"&gt;GitHub's Copilot&lt;/a&gt;. However, thus far LLMs have only served as programming companions that we can't fully rely on, as they are not deterministic and can periodically hallucinate or produce errors. Historically, compilers have been deterministic: given the same input, the same compiler will always give you the same output. This has ruled out LLMs as compilers in the traditional sense. However, how much determinism do we really need to reliably produce software that does what we want it to do? If we think of an LLM as a software engineer, as opposed to a companion, the same software engineer can produce different software given the same spec or set of requirements. All the outputs across different implementation attempts can be valid unless the requirements provided are excessively stringent. If we can get to a point where LLMs can reliably produce correct code that meets the requirements given, then we can start to think of LLMs as compilers even if they are not deterministic. The question is not really if it will happen, but when. The field of generative AI is unfolding rapidly, working towards both the correctness and determinism of the code LLMs produce.
As we make progress on both fronts, LLMs will sooner or later add English as another root node of the &lt;a href="https://github.com/stereobooster/programming-languages-genealogical-tree" rel="noopener noreferrer"&gt;genealogical tree of programming languages&lt;/a&gt;, and from it a whole new set of syntaxes will spawn that use the high-level programming languages of today, like Javascript and Python, as foundational, low-level implementation details akin to what assembly represents today.&lt;/p&gt;
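&lt;p&gt;The determinism contrast can be made concrete with a toy shell analogy (a hash stands in for a compiler, and random bytes stand in for a sampled LLM; this is only an illustration, not a real compiler or model):&lt;/p&gt;

```shell
# A compiler behaves like a pure function: the same input always produces the
# same output, which we can demonstrate by hashing the same "source" twice.
src='print("hello")'
hash1=$(printf '%s' "$src" | sha256sum | cut -d' ' -f1)
hash2=$(printf '%s' "$src" | sha256sum | cut -d' ' -f1)
if [ "$hash1" = "$hash2" ]; then
  echo "deterministic: same input, same output"
fi

# An LLM sampling with nonzero temperature is closer to this: the same prompt
# can yield a different completion on every run.
sample1=$(od -An -N2 -tu2 /dev/urandom | tr -d ' ')
sample2=$(od -An -N2 -tu2 /dev/urandom | tr -d ' ')
echo "sampled: $sample1 vs $sample2"
```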

&lt;h2&gt;
  
  
  compillmers
&lt;/h2&gt;

&lt;p&gt;There is already a lot of hay to mow with the current state of affairs in generative AI. LLMs as proper compilers, compiLLMers if you will, can produce correct code reliably enough today given enough guidance. Getting an LLM to generate correct code requires providing various examples and descriptive instructions. The UX of a chat interface to an LLM inherently leads people to write prompts that do not meet these criteria. We need to make it easy for people to give LLMs precise descriptions and numerous examples as concisely as possible via syntaxes that are similar to English so they remain easy to learn and use. &lt;a href="https://en.wikipedia.org/wiki/Coq" rel="noopener noreferrer"&gt;Coq&lt;/a&gt; is a great example of a functional programming syntax that is verbose and distant from English, but example-driven via assertions. &lt;a href="https://techhub.social/@ISV_Damocles" rel="noopener noreferrer"&gt;David Ellis&lt;/a&gt;, &lt;a href="https://twitter.com/alrre93" rel="noopener noreferrer"&gt;Alejandro Guillen&lt;/a&gt; and I recently introduced &lt;a href="https://github.com/alantech/marsha" rel="noopener noreferrer"&gt;Marsha&lt;/a&gt; as a proposal for what a syntax that meets the requirements outlined can look like. It is still early, but LLMs will increasingly give us the power to create more accessible representations of computer programs that look close to English. These representations will be distilled by LLMs into the complexities of the current high-level languages. Knowing Java or Python will become a rare skill, akin to individuals specializing in low-level optimizations using C or assembly language these days. Instead, the focus of developer experience will shift to the higher-level abstractions that are built on top of LLMs and composing these abstractions for different tasks. Compillmers will make programming more accessible in the near future such that writing software becomes part of the resume of most knowledge workers.&lt;/p&gt;
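&lt;p&gt;The exact syntax lives in the Marsha repository; purely as an illustration of the idea (not the actual Marsha syntax), an English-like, example-driven function spec could look something like this:&lt;/p&gt;

```
# func fibonacci(integer n): the nth number in the Fibonacci sequence

* fibonacci(1) = 1
* fibonacci(2) = 1
* fibonacci(6) = 8
```

&lt;p&gt;The description tells the LLM what to build, and the assertions pin down what "correct" means, which is exactly the guidance that chat prompts tend to lack.&lt;/p&gt;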

</description>
      <category>gpt3</category>
      <category>ai</category>
      <category>programming</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Why SQL is right for Infrastructure Management</title>
      <dc:creator>Luis Fernando De Pombo</dc:creator>
      <pubDate>Thu, 06 Apr 2023 21:15:23 +0000</pubDate>
      <link>https://dev.to/iasql/why-sql-is-right-for-infrastructure-management-3gge</link>
      <guid>https://dev.to/iasql/why-sql-is-right-for-infrastructure-management-3gge</guid>
      <description>&lt;p&gt;Infrastructure is the heart of your company. Without it, nothing can actually be done and there'd be no reason for customers to pay you. For software companies that infrastructure is the software that its engineers directly write, and usually the cloud infrastructure and services it runs on top of and integrates with.&lt;/p&gt;

&lt;p&gt;In this post, we will geek out on various software abstractions and data structures, tools used by professionals of various sorts, and dig into the pros and cons of these with respect to cloud infrastructure management in particular. We'll see that while SQL has its own warts, it is the "least worst" of all of the options out there.&lt;/p&gt;

&lt;h2&gt;
  
  
  Following these instructions is imperative
&lt;/h2&gt;

&lt;p&gt;The simplest way to define how to set something up is to write a list of instructions to follow, in order, to build whatever infrastructure you're dealing with, whether it's the instructions for building a restaurant or an AWS Fargate cluster. This list of steps to process (with a LISt Processor, or &lt;a href="https://en.wikipedia.org/wiki/Lisp_%28programming_language%29" rel="noopener noreferrer"&gt;LISP&lt;/a&gt;, if you will 😉), which can include instructions to repeat or skip over steps based on conditions you have run into, is called &lt;a href="https://en.wikipedia.org/wiki/Imperative_programming" rel="noopener noreferrer"&gt;imperative programming&lt;/a&gt; in the software world.&lt;/p&gt;

&lt;h3&gt;
  
  
  Imperative Restaurant Instructions
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyau2bf2nfsdlioiy24rm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyau2bf2nfsdlioiy24rm.png" alt="Man in hard-hat builds your restaurant" width="512" height="512"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Design restaurant.&lt;/li&gt;
&lt;li&gt;Flatten the ground.&lt;/li&gt;
&lt;li&gt;Prop up pieces of wood next to each other.&lt;/li&gt;
&lt;li&gt;Bang nails into wood.&lt;/li&gt;
&lt;li&gt;Is restaurant finished? No, go to 3.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For your cloud infrastructure, this is similar to using the &lt;a href="https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/" rel="noopener noreferrer"&gt;AWS SDK&lt;/a&gt; and directly calling the various methods, checking their results, and eventually arriving at the desired infrastructure. It is &lt;em&gt;also&lt;/em&gt; like following a step-by-step guide clicking through the various AWS console UI elements to set up your infrastructure, like in &lt;a href="https://towardsdatascience.com/deploy-your-python-app-with-aws-fargate-tutorial-7a48535da586" rel="noopener noreferrer"&gt;this guide&lt;/a&gt;. The latter may not feel automated to you, but to your company executives it is, as they don't care how the infrastructure was built as long as it was done quickly, cheaply, and reliably, and you know they always want all three. 😉&lt;/p&gt;

&lt;p&gt;Imperative infrastructure management works &lt;em&gt;well&lt;/em&gt; for the initial setup of new infrastructure. There's nothing there to interfere with it, so you can just write the "happy path" and get it working. And if that fails, you can tear it all down and start over whenever that's cheap, which it is for cloud infrastructure (though not for building a restaurant), so the &lt;a href="https://en.wikipedia.org/wiki/Cyclomatic_complexity" rel="noopener noreferrer"&gt;cyclomatic complexity&lt;/a&gt; of the code you write is low and everyone can follow along with it. For example, one can use the &lt;a href="https://aws.amazon.com/cli/" rel="noopener noreferrer"&gt;&lt;code&gt;aws&lt;/code&gt; cli application&lt;/a&gt; inside of a &lt;a href="https://www.gnu.org/software/bash/" rel="noopener noreferrer"&gt;&lt;code&gt;bash&lt;/code&gt; script&lt;/a&gt; to create a new &lt;a href="https://en.wikipedia.org/wiki/Ssh-keygen" rel="noopener noreferrer"&gt;ssh keypair&lt;/a&gt; in AWS, a &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/security-groups.html" rel="noopener noreferrer"&gt;security group&lt;/a&gt; enabling ssh access, and a new EC2 instance associated with both, and then access that instance (to demonstrate that it works).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

aws ec2 create-key-pair &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--region&lt;/span&gt; us-west-2 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--key-name&lt;/span&gt; login &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--key-type&lt;/span&gt; ed25519 | &lt;span class="se"&gt;\&lt;/span&gt;
  jq &lt;span class="nt"&gt;-r&lt;/span&gt; .KeyMaterial &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; login.pem
&lt;span class="nb"&gt;chmod &lt;/span&gt;600 login.pem
&lt;span class="nv"&gt;SG_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;aws ec2 create-security-group &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--group-name&lt;/span&gt; login-sg &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--description&lt;/span&gt; &lt;span class="s2"&gt;"Login security group"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--region&lt;/span&gt; us-west-2 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--vpc-id&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;aws ec2 describe-vpcs &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--region&lt;/span&gt; us-west-2 | &lt;span class="se"&gt;\&lt;/span&gt;
    jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.Vpcs[] | select(.IsDefault == true) | .VpcId'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; | &lt;span class="se"&gt;\&lt;/span&gt;
  jq &lt;span class="nt"&gt;-r&lt;/span&gt; .GroupId&lt;span class="si"&gt;)&lt;/span&gt;
aws ec2 authorize-security-group-ingress &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--group-name&lt;/span&gt; login-sg &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--region&lt;/span&gt; us-west-2 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--ip-permissions&lt;/span&gt; &lt;span class="s1"&gt;'IpProtocol=tcp,FromPort=22,ToPort=22,IpRanges=[{CidrIp=0.0.0.0/0}]'&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null
&lt;span class="nv"&gt;INSTANCE_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;aws ec2 run-instances &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--region&lt;/span&gt; us-west-2 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--subnet-id&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;aws ec2 describe-subnets &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--region&lt;/span&gt; us-west-2 | &lt;span class="se"&gt;\&lt;/span&gt;
    jq &lt;span class="nt"&gt;-r&lt;/span&gt; .Subnets[0].SubnetId&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--instance-type&lt;/span&gt; t3.small &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--image-id&lt;/span&gt; resolve:ssm:/aws/service/canonical/ubuntu/server/20.04/stable/current/amd64/hvm/ebs-gp2/ami-id &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--security-group-ids&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;SG_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--key-name&lt;/span&gt; login &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--associate-public-ip-address&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--tag-specifications&lt;/span&gt; &lt;span class="s1"&gt;'ResourceType=instance,Tags=[{Key=name,Value=login-inst}]'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--output&lt;/span&gt; json | &lt;span class="se"&gt;\&lt;/span&gt;
  jq &lt;span class="nt"&gt;-r&lt;/span&gt; .Instances[0].InstanceId&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;echo &lt;/span&gt;EC2 instance &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;INSTANCE_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; starting 
aws ec2 &lt;span class="nb"&gt;wait &lt;/span&gt;instance-status-ok &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--region&lt;/span&gt; us-west-2 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--instance-ids&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;INSTANCE_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
&lt;span class="nv"&gt;PUBLIC_IP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;aws ec2 describe-instances &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--region&lt;/span&gt; us-west-2 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--filters&lt;/span&gt; &lt;span class="nv"&gt;Name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tag:name,Values&lt;span class="o"&gt;=[&lt;/span&gt;login-inst] &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--filters&lt;/span&gt; &lt;span class="nv"&gt;Name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;instance-state-name,Values&lt;span class="o"&gt;=[&lt;/span&gt;running] | &lt;span class="se"&gt;\&lt;/span&gt;
  jq &lt;span class="nt"&gt;-r&lt;/span&gt; .Reservations[0].Instances[0].PublicIpAddress&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;echo &lt;/span&gt;Server accessible at &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PUBLIC_IP&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
ssh &lt;span class="nt"&gt;-i&lt;/span&gt; login.pem &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="s2"&gt;"StrictHostKeyChecking no"&lt;/span&gt; ubuntu@&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PUBLIC_IP&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nb"&gt;uname&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Keep in mind that this is a &lt;em&gt;simple&lt;/em&gt; script that does not handle failure of any of the commands called, creating one EC2 instance with two supporting elements (a security group to allow access to the SSH port and an SSH keypair to log into the instance). If you intend to actually use it multiple times, several of the hardwired arguments should be turned into shell parameters, and all of the error paths should be handled.&lt;/p&gt;
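&lt;p&gt;As a sketch of that cleanup, the hardwired values can be promoted to positional parameters with defaults, and &lt;code&gt;set -eu&lt;/code&gt; can abort on the first failed command. To keep the sketch self-contained it echoes each &lt;code&gt;aws&lt;/code&gt; invocation instead of executing it:&lt;/p&gt;

```shell
#!/bin/bash
set -eu   # abort on any failed command or reference to an unset variable

# Positional parameters, with the original script's values as defaults.
REGION="${1:-us-west-2}"
KEY_NAME="${2:-login}"
INSTANCE_TYPE="${3:-t3.small}"

run() {
  # Dry-run wrapper: print the command that would be executed.
  # Drop the echo to actually call the AWS CLI.
  echo "aws $*"
}

run ec2 create-key-pair --region "$REGION" --key-name "$KEY_NAME" --key-type ed25519
run ec2 run-instances --region "$REGION" --instance-type "$INSTANCE_TYPE" --key-name "$KEY_NAME"
```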

&lt;p&gt;However, the vast majority of the time you are not creating clones of the same infrastructure over and over again; instead, you need to make changes to your existing infrastructure, and that original code is more than likely useless to you. To get from the &lt;em&gt;current&lt;/em&gt; state of your infrastructure to your &lt;em&gt;desired&lt;/em&gt; state, you need to call different APIs than you did before, and it's much riskier to make a mistake because this infrastructure is already in use.&lt;/p&gt;

&lt;h2&gt;
  
  
  Declare your intentions at once
&lt;/h2&gt;

&lt;p&gt;When the desired state is relatively simple to define and the mechanism to reach that state is not that important, writing up a declaration of what is needed and letting something/someone else deal with it is the most logical abstraction. This would be like drawing up the architectural plans for your new restaurant and paying a contracting company to actually build it, or &lt;a href="https://en.wikipedia.org/wiki/Declarative_programming#Domain-specific_languages" rel="noopener noreferrer"&gt;writing HTML and letting a web browser render it&lt;/a&gt;, or writing a &lt;a href="https://github.com/hashicorp/hcl" rel="noopener noreferrer"&gt;Terraform HCL&lt;/a&gt; file and letting the Terraform CLI tool &lt;code&gt;apply&lt;/code&gt; it. This is called &lt;a href="https://en.wikipedia.org/wiki/Declarative_programming" rel="noopener noreferrer"&gt;declarative programming&lt;/a&gt; in the software world, and it has many advantages (and a few disadvantages!) for cloud infrastructure management.&lt;/p&gt;

&lt;h3&gt;
  
  
  Declarative Restaurant Instructions
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fryegl9t2j9h2m7xtvq1x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fryegl9t2j9h2m7xtvq1x.png" alt="King in robes on throne" width="512" height="512"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;By decree of the king, a restaurant shall be built!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In declarative programming you have some initial state, and you define the desired state in comparatively dense, high-level declarative code. In many use-cases (like web browsers, and sometimes for restaurant-building contractors) the initial state is "blank", and the engine that transforms the declarative code into imperative operations can often be a relatively straightforward &lt;a href="https://en.wikipedia.org/wiki/Interpreter_%28computing%29" rel="noopener noreferrer"&gt;interpreter&lt;/a&gt; walking the &lt;a href="https://en.wikipedia.org/wiki/Abstract_syntax_tree" rel="noopener noreferrer"&gt;AST&lt;/a&gt; of the declarative code.&lt;/p&gt;
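&lt;p&gt;As a toy sketch of that kind of interpreter (all resource kinds and names here are made up for the example), a declarative description can be walked line by line and turned into imperative steps, assuming a blank initial state:&lt;/p&gt;

```shell
# Toy declarative interpreter: reads "kind name" lines describing what should
# exist and emits the imperative steps that would realize it from scratch.
interpret() {
  while read -r kind name; do
    case "$kind" in
      keypair)  echo "create key pair '$name'" ;;
      secgroup) echo "create security group '$name'" ;;
      instance) echo "launch instance '$name'" ;;
      *)        echo "unknown resource kind: $kind" ;;
    esac
  done
}

# The declarative "program": what should exist, not how to build it.
printf '%s\n' "keypair login" "secgroup login-sg" "instance login-inst" | interpret
```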

&lt;p&gt;Infrastructure management &lt;em&gt;generally&lt;/em&gt; is not that straightforward. Changes to infrastructure require in-place mutations of the current state, because a full &lt;a href="https://en.wikipedia.org/wiki/Blue-green_deployment" rel="noopener noreferrer"&gt;blue/green deployment&lt;/a&gt; of the entirety of your production infrastructure is very expensive: all resource costs are doubled, and deployments require some amount of downtime for users to swap over and for state to be resynchronized. For databases in particular this approach is essentially impossible, as there is too much state to swap, which is what makes database migrations that drop data tricky.&lt;/p&gt;

&lt;p&gt;So declarative programming for infrastructure, also known as &lt;a href="https://en.wikipedia.org/wiki/Infrastructure_as_code" rel="noopener noreferrer"&gt;Infrastructure as Code&lt;/a&gt;, must take the existing state and generate the imperative operations to perform based on the differences between the existing state and the declared desired state. This is like a contracting company being given an architecture draft for a new restaurant and being told to remodel an existing commercial space (maybe it was also a restaurant, maybe it was a clothing store, etc) and the contracting company figuring out what needs to be torn down first, what can be re-used, and what needs to be built new.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;required_providers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;aws&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hashicorp/aws"&lt;/span&gt;
      &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"4.60.0"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"external"&lt;/span&gt; &lt;span class="s2"&gt;"example"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;program&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="s2"&gt;"bash"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"-c"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&amp;lt;-&lt;/span&gt;&lt;span class="no"&gt;EOT&lt;/span&gt;&lt;span class="sh"&gt;
      if [ ! -f ./login.pem ]; then
        # Terraform doesn't support creating the PEM file via AWS. It has to be
        # created locally and then imported into AWS. Inside of a conditional so
        # we don't accidentally re-create the file between 'plan' and 'apply'
        yes '' | ssh-keygen -t ed25519 -m PEM -f login.pem
      fi
      chmod 600 login.pem
      # The output of this script needs to be JSON formatted. This is a bit wonky
      # but building it over time via 'jq' operators is actually harder to follow
      echo {
      echo   '"access_key_id": "'$(aws configure get aws_access_key_id)'",'
      echo   '"secret_access_key": "'$(aws configure get aws_secret_access_key)'",'
      echo   '"public_key": "'$(cat login.pem.pub)'"'
      echo }
&lt;/span&gt;&lt;span class="no"&gt;    EOT
&lt;/span&gt;  &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-west-2"&lt;/span&gt;
  &lt;span class="nx"&gt;access_key&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;external&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;example&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"access_key_id"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;secret_key&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;external&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;example&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"secret_access_key"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_key_pair"&lt;/span&gt; &lt;span class="s2"&gt;"login"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;key_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"login"&lt;/span&gt;
  &lt;span class="nx"&gt;public_key&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;external&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;example&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"public_key"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_security_group"&lt;/span&gt; &lt;span class="s2"&gt;"login_sg"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"login-sg"&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Login security group"&lt;/span&gt;
  &lt;span class="nx"&gt;ingress&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;from_port&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;22&lt;/span&gt;
    &lt;span class="nx"&gt;to_port&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;22&lt;/span&gt;
    &lt;span class="nx"&gt;protocol&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"tcp"&lt;/span&gt;
    &lt;span class="nx"&gt;cidr_blocks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_instance"&lt;/span&gt; &lt;span class="s2"&gt;"login"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;ami&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"resolve:ssm:/aws/service/canonical/ubuntu/server/20.04/stable/current/amd64/hvm/ebs-gp2/ami-id"&lt;/span&gt;
  &lt;span class="nx"&gt;instance_type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"t3.small"&lt;/span&gt;
  &lt;span class="nx"&gt;associate_public_ip_address&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;key_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_key_pair&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;login&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;key_name&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_security_group_ids&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;aws_security_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;login_sg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"login-inst"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This example still requires the following lines from the imperative example to get the public IP address of the new EC2 instance and log into it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;PUBLIC_IP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;aws ec2 describe-instances &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--region&lt;/span&gt; us-west-2 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--filters&lt;/span&gt; &lt;span class="nv"&gt;Name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tag:name,Values&lt;span class="o"&gt;=[&lt;/span&gt;login-inst] &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--filters&lt;/span&gt; &lt;span class="nv"&gt;Name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;instance-state-name,Values&lt;span class="o"&gt;=[&lt;/span&gt;running] | &lt;span class="se"&gt;\&lt;/span&gt;
  jq &lt;span class="nt"&gt;-r&lt;/span&gt; .Reservations[0].Instances[0].PublicIpAddress&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;echo &lt;/span&gt;Server accessible at &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PUBLIC_IP&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
ssh &lt;span class="nt"&gt;-i&lt;/span&gt; login.pem &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="s2"&gt;"StrictHostKeyChecking no"&lt;/span&gt; ubuntu@&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PUBLIC_IP&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nb"&gt;uname&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This diffing of original and desired state can be resolved several ways. The contracting company could spend a lot of time inspecting the existing commercial space to find absolutely every piece that can be re-used, removing them and setting them aside with the materials that need to be purchased, then rework the interior walls and build everything from there. Or it could decide to tear everything out back into an empty space and rebuild from scratch (like what your web browser does between web pages). Or perhaps you need to keep the space mostly usable for customers, minimizing disruption to the current operations while things are changed. This last case is how infrastructure management tools need to tackle your infrastructure: with the least amount of downtime, or no downtime at all if you're &lt;em&gt;careful&lt;/em&gt;.&lt;/p&gt;
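&lt;p&gt;The core of that remodeling decision is a diff of current state against desired state. As a minimal sketch (resources reduced to flat ids; a real tool also diffs each resource's attributes and computes update/replace operations), the create and delete sets fall out of two set-difference passes:&lt;/p&gt;

```shell
# Flat ids standing in for the deployed state and the declared state.
current="keypair:login secgroup:old-sg instance:login-inst"
desired="keypair:login secgroup:login-sg instance:login-inst"

to_create=""
to_delete=""
for r in $desired; do
  # Anything declared but not deployed must be created.
  case " $current " in *" $r "*) ;; *) to_create="$to_create $r" ;; esac
done
for r in $current; do
  # Anything deployed but no longer declared must be dropped.
  case " $desired " in *" $r "*) ;; *) to_delete="$to_delete $r" ;; esac
done

echo "create:$to_create"
echo "delete:$to_delete"
```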

&lt;p&gt;How can you be "careful" with declarative programming tools if you don't have direct control over what they do? Terraform does this with a &lt;a href="https://developer.hashicorp.com/terraform/cli/commands/plan" rel="noopener noreferrer"&gt;plan&lt;/a&gt; command, which performs the diffing of current and desired state and reports back to you the operations it expects to execute. Terraform will tell you when a change it intends to make is going to provision new resources, alter existing resources, drop resources, and &lt;em&gt;replace&lt;/em&gt; resources. That last one is particularly annoying when it shows up, because it means the change you want can only be done by dropping the resource and recreating it with the new configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;terraform plan

data.external.example: Reading...
data.external.example: Read &lt;span class="nb"&gt;complete &lt;/span&gt;after 1s &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;-]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  &lt;span class="c"&gt;# aws_instance.login will be created&lt;/span&gt;
  + resource &lt;span class="s2"&gt;"aws_instance"&lt;/span&gt; &lt;span class="s2"&gt;"login"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
      + ami                                  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"resolve:ssm:/aws/service/canonical/ubuntu/server/20.04/stable/current/amd64/hvm/ebs-gp2/ami-id"&lt;/span&gt;
      + arn                                  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + associate_public_ip_address          &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;true&lt;/span&gt;
      + availability_zone                    &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + cpu_core_count                       &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + cpu_threads_per_core                 &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + disable_api_stop                     &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + disable_api_termination              &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + ebs_optimized                        &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + get_password_data                    &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;false&lt;/span&gt;
      + host_id                              &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + host_resource_group_arn              &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + iam_instance_profile                 &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + &lt;span class="nb"&gt;id&lt;/span&gt;                                   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + instance_initiated_shutdown_behavior &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + instance_state                       &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + instance_type                        &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"t3.small"&lt;/span&gt;
      + ipv6_address_count                   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + ipv6_addresses                       &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + key_name                             &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"login"&lt;/span&gt;
      + monitoring                           &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + outpost_arn                          &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + password_data                        &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + placement_group                      &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + placement_partition_number           &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + primary_network_interface_id         &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + private_dns                          &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + private_ip                           &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + public_dns                           &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + public_ip                            &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + secondary_private_ips                &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + security_groups                      &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + source_dest_check                    &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;true&lt;/span&gt;
      + subnet_id                            &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + tags                                 &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
          + &lt;span class="s2"&gt;"name"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"login-inst"&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
      + tags_all                             &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
          + &lt;span class="s2"&gt;"name"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"login-inst"&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
      + tenancy                              &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + user_data                            &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + user_data_base64                     &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + user_data_replace_on_change          &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;false&lt;/span&gt;
      + vpc_security_group_ids               &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

  &lt;span class="c"&gt;# aws_key_pair.login will be created&lt;/span&gt;
  + resource &lt;span class="s2"&gt;"aws_key_pair"&lt;/span&gt; &lt;span class="s2"&gt;"login"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
      + arn             &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + fingerprint     &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + &lt;span class="nb"&gt;id&lt;/span&gt;              &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + key_name        &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"login"&lt;/span&gt;
      + key_name_prefix &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + key_pair_id     &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + key_type        &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + public_key      &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF4yiy4lef2WFQ7zdtuOZUHz9onyADTJ16l0OX8a211y damocles@zelbinion"&lt;/span&gt;
      + tags_all        &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

  &lt;span class="c"&gt;# aws_security_group.login_sg will be created&lt;/span&gt;
  + resource &lt;span class="s2"&gt;"aws_security_group"&lt;/span&gt; &lt;span class="s2"&gt;"login_sg"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
      + arn                    &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + description            &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Login security group"&lt;/span&gt;
      + egress                 &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + &lt;span class="nb"&gt;id&lt;/span&gt;                     &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + ingress                &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;
          + &lt;span class="o"&gt;{&lt;/span&gt;
              + cidr_blocks      &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;
                  + &lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;,
                &lt;span class="o"&gt;]&lt;/span&gt;
              + description      &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;
              + from_port        &lt;span class="o"&gt;=&lt;/span&gt; 22
              + ipv6_cidr_blocks &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;[]&lt;/span&gt;
              + prefix_list_ids  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;[]&lt;/span&gt;
              + protocol         &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"tcp"&lt;/span&gt;
              + security_groups  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;[]&lt;/span&gt;
              + self             &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;false&lt;/span&gt;
              + to_port          &lt;span class="o"&gt;=&lt;/span&gt; 22
            &lt;span class="o"&gt;}&lt;/span&gt;,
        &lt;span class="o"&gt;]&lt;/span&gt;
      + name                   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"login-sg"&lt;/span&gt;
      + name_prefix            &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + owner_id               &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + revoke_rules_on_delete &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;false&lt;/span&gt;
      + tags_all               &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + vpc_id                 &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

Plan: 3 to add, 0 to change, 0 to destroy.

─────────────────────────────────────────────────────────────────────────────

Note: You didn&lt;span class="s1"&gt;'t use the -out option to save this plan, so Terraform can'&lt;/span&gt;t
guarantee to take exactly these actions &lt;span class="k"&gt;if &lt;/span&gt;you run &lt;span class="s2"&gt;"terraform apply"&lt;/span&gt; now.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you see steps where Terraform plans to drop or replace resources, you can decide whether that's actually alright or whether it will negatively impact your business during the deployment. If it will, you can abort and define an intermediate state to transition to first: create a new resource without dropping the old one, perform whatever operation is necessary to move internal state over to it, then finish by deleting the old resource. This is like checking up on your contracting company before they start their work to make sure they don't do anything strange you don't want, and sometimes needing to handhold them through details of your business to come up with the new plan of action.&lt;/p&gt;

&lt;p&gt;Until cloud resource management is as quick to execute as a webpage render where state changes can be done faster than a human can respond, this sort of hand-holding is necessary for live infrastructure with uptime guarantees. Declarative programming, at least for cloud infrastructure management, is a &lt;a href="https://en.wikipedia.org/wiki/Leaky_abstraction" rel="noopener noreferrer"&gt;leaky abstraction&lt;/a&gt; over the imperative operations that need to be executed.&lt;/p&gt;

&lt;p&gt;The Terraform HCL files don't have the prior state encoded into them, nor do they depend on being stored in a git repository to provide that prior state, so how does Terraform get the initial state to operate on? Terraform maintains an &lt;a href="https://developer.hashicorp.com/terraform/language/state" rel="noopener noreferrer"&gt;internal statefile&lt;/a&gt; to perform the diffing against. It has a mechanism to &lt;a href="https://developer.hashicorp.com/terraform/cli/commands/refresh" rel="noopener noreferrer"&gt;refresh that state&lt;/a&gt;, but, being a JSON blob with poor typing, the statefile can &lt;a href="https://faun.pub/cleaning-up-a-terraform-state-file-the-right-way-ab509f6e47f3" rel="noopener noreferrer"&gt;become corrupted, requiring manual intervention&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Why does Terraform need this JSON-based statefile when it has the much cleaner HCL format that you use as a developer? Because HCL, as a language that provides affordances for the developer's own intuition on how to structure their representation of their infrastructure, does not have a &lt;a href="https://en.wikipedia.org/wiki/Canonical_form" rel="noopener noreferrer"&gt;canonical form&lt;/a&gt; to make these comparisons with. Terraform must first pre-process the HCL into an internal data structure from which it can generate a &lt;a href="https://en.wikipedia.org/wiki/Diff" rel="noopener noreferrer"&gt;diff&lt;/a&gt; that determines the operations to perform. The JSON-based statefile is likely a very close representation of this internal data structure, and manually editing it is highly discouraged because its details likely vary from release to release, with an internal migration mechanism run on upgrade of the Terraform CLI.&lt;/p&gt;

&lt;p&gt;Needing to learn Terraform's HCL language to make infrastructure changes has been cited as a &lt;a href="https://www.pulumi.com/why-pulumi/" rel="noopener noreferrer"&gt;weakness&lt;/a&gt; that can be mitigated by wrapping it in syntaxes more familiar to the user, like &lt;a href="https://developer.hashicorp.com/terraform/cdktf" rel="noopener noreferrer"&gt;Terraform CDK&lt;/a&gt;, &lt;a href="https://www.pulumi.com" rel="noopener noreferrer"&gt;Pulumi&lt;/a&gt;, or &lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/home.html" rel="noopener noreferrer"&gt;AWS CDK&lt;/a&gt;. This &lt;em&gt;is&lt;/em&gt; a weakness, but we believe that the HCL-to-statefile transformation being &lt;a href="https://en.wikipedia.org/wiki/Surjective_function" rel="noopener noreferrer"&gt;surjective&lt;/a&gt; instead of &lt;a href="https://en.wikipedia.org/wiki/Bijection" rel="noopener noreferrer"&gt;bijective&lt;/a&gt; is actually the bigger issue.&lt;/p&gt;

&lt;h3&gt;
  
  
  Surjective Function &lt;em&gt;f&lt;/em&gt; Diagram
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fupload.wikimedia.org%2Fwikipedia%2Fcommons%2F6%2F6c%2FSurjection.svg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fupload.wikimedia.org%2Fwikipedia%2Fcommons%2F6%2F6c%2FSurjection.svg" alt="Surjective Diagram" width="200" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Surjective_function#/media/File:Surjection.svg" rel="noopener noreferrer"&gt;From Wikipedia:&lt;/a&gt; A function &lt;em&gt;f&lt;/em&gt; is surjective if all of its inputs &lt;em&gt;X&lt;/em&gt; map to all outputs &lt;em&gt;Y&lt;/em&gt;, but overlaps can occur.&lt;/p&gt;

&lt;p&gt;This is like multiple Terraform HCL representations all mapping to the exact same cloud infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Objective is Bi- hmmmm....
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Bijective Function &lt;em&gt;f&lt;/em&gt; Diagram
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fupload.wikimedia.org%2Fwikipedia%2Fcommons%2Fa%2Fa5%2FBijection.svg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fupload.wikimedia.org%2Fwikipedia%2Fcommons%2Fa%2Fa5%2FBijection.svg" alt="Bijective Diagram" width="200" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Bijection#/media/File:Bijection.svg" rel="noopener noreferrer"&gt;From Wikipedia:&lt;/a&gt; A function &lt;em&gt;f&lt;/em&gt; is bijective if all of its inputs &lt;em&gt;X&lt;/em&gt; map to all outputs &lt;em&gt;Y&lt;/em&gt; exactly once.&lt;/p&gt;

&lt;p&gt;A bijective function has a one-to-one mapping of all state in one set with all state in another set. Both representations are equivalent (given the appropriate transform functions), both sets (of possible cloud states) are the same size (conceptually), and for any one particular state in one set there is exactly one state in the other set. Programming languages don't fit the bill here, as was demonstrated earlier, but what does? What, exactly, defines your cloud resources in the most minimal way?&lt;/p&gt;

&lt;p&gt;The best answer in most cases is &lt;del&gt;&lt;a href="https://en.wikipedia.org/wiki/Cloud" rel="noopener noreferrer"&gt;water vapor&lt;/a&gt;&lt;/del&gt; &lt;a href="https://en.wikipedia.org/wiki/Entity#In_computer_science" rel="noopener noreferrer"&gt;Entities&lt;/a&gt;. Each cloud resource has some unique identifier and a set of properties that configure it. Some of those properties can be relations or dependencies on other cloud resources. The ordering of the entities does not matter, so long as the dependencies are explicitly defined to make determination of the order of operations possible. You may have noticed that this sounds a lot like &lt;a href="https://en.wikipedia.org/wiki/Object_%28computer_science%29" rel="noopener noreferrer"&gt;Objects&lt;/a&gt; in &lt;a href="https://en.wikipedia.org/wiki/Object-oriented_programming" rel="noopener noreferrer"&gt;Object-Oriented Programming&lt;/a&gt;. It is very much the same, with the only difference being that the unique identifier &lt;em&gt;must&lt;/em&gt; be unique, so if we considered an array of pointers to these objects as the entity IDs, multiple pointers to the same object wouldn't be allowed.&lt;/p&gt;

&lt;p&gt;This means the representation of your cloud resources should be &lt;em&gt;data&lt;/em&gt;, not code. In particular, it should be &lt;a href="https://en.wikipedia.org/wiki/Relational_model" rel="noopener noreferrer"&gt;relational data&lt;/a&gt; to allow the references/relations with other entities that actually exist within your cloud. Modeling each entity as a wholly-siloed object without those relations explicitly tracked &lt;em&gt;might&lt;/em&gt; work for read-only cases, but it puts the burden of properly joining the entities on the user and could lead to unexpected errors during mutation if the disjointed state is not carefully kept in sync. The mutation of that data would be the code, and it fits very well into a classic &lt;a href="https://www.redhat.com/en/blog/rest-architecture" rel="noopener noreferrer"&gt;REST architecture&lt;/a&gt; (in the original sense, though it could also work in the &lt;a href="https://en.wikipedia.org/wiki/Overview_of_RESTful_API_Description_Languages#Hypertext-driven_API" rel="noopener noreferrer"&gt;Web-oriented RESTful meaning&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Since your infrastructure is actually data, just about any data format &lt;em&gt;could&lt;/em&gt; be used to store it, as demonstrated by Terraform's usage of JSON under the hood. Some formats will require more application-level guarantees on top to achieve all of the necessary components, however. For instance, JSON does not have any native way to represent &lt;em&gt;references&lt;/em&gt; to other JSON objects that are not wholly contained within the object in question, so you would need to define a way to represent them and enforce it within your application. &lt;a href="https://yaml.org/spec/1.2.2/#24-tags" rel="noopener noreferrer"&gt;YAML does support them via tags&lt;/a&gt;, but, being a markup language, it has multiple equivalent representations and is therefore not suitable for this purpose (without a strict linter to reduce it back down to a canonical form). There are also &lt;a href="https://capnproto.org/language.html#dynamically-typed-fields" rel="noopener noreferrer"&gt;binary formats that can encode references&lt;/a&gt;, with &lt;a href="https://www.sqlite.org/cli.html" rel="noopener noreferrer"&gt;SQLite's file format&lt;/a&gt; arguably being the most prolific.&lt;/p&gt;

&lt;p&gt;SQL immediately stands out here because it was designed to make &lt;a href="https://en.wikipedia.org/wiki/Relational_algebra" rel="noopener noreferrer"&gt;relational algebra&lt;/a&gt;, the other side of the &lt;a href="https://en.wikipedia.org/wiki/Entity%E2%80%93relationship_model" rel="noopener noreferrer"&gt;Entity-Relationship model&lt;/a&gt;, accessible. There are likely more people who know SQL than any programming language or data format you could choose to represent your cloud infrastructure, and many non-programmers know it as well: data scientists, business analysts, accountants, and so on. There is also an entire ecosystem of useful functionality you gain "for free" by using a SQL representation that just about no other format offers. Furthermore, a powerful SQL engine like the open source &lt;a href="https://postgresql.org" rel="noopener noreferrer"&gt;PostgreSQL&lt;/a&gt; provides tools capable of elevating the infrastructure management &lt;em&gt;experience&lt;/em&gt; beyond what IaC can do today.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Advantages of SQL Databases for Infrastructure Management
&lt;/h3&gt;

&lt;p&gt;First, by defining &lt;a href="https://www.postgresql.org/docs/current/tutorial-fk.html" rel="noopener noreferrer"&gt;foreign key relations&lt;/a&gt; between tables, the database can immediately inform you if you have dependencies that must be created before you can create the resource you desire. It will simply return an error message to you if you have inserted an incomplete record missing any required fields.&lt;/p&gt;
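&lt;p&gt;As a sketch of what that looks like, here is a hypothetical two-table schema (the actual IaSQL table and column names may differ) where a foreign key makes the dependency between an instance and its security group explicit, and the database enforces it:&lt;/p&gt;

```sql
-- Hypothetical schema sketch; IaSQL's real table names and columns may differ.
CREATE TABLE security_group (
  id          serial PRIMARY KEY,
  group_name  text NOT NULL,
  description text NOT NULL
);

CREATE TABLE instance (
  id                serial PRIMARY KEY,
  name              text NOT NULL,
  instance_type     text NOT NULL,
  -- The foreign key encodes the dependency: no instance without its group.
  security_group_id integer NOT NULL REFERENCES security_group (id)
);

-- Fails immediately with a foreign key violation: group 42 does not exist yet.
INSERT INTO instance (name, instance_type, security_group_id)
VALUES ('login', 't3.small', 42);
```

The failed `INSERT` surfaces as an error at insert time, rather than as a cloud API failure minutes into a deployment.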

&lt;p&gt;"But," you say, "programming languages with solid type systems, like &lt;a href="https://www.rust-lang.org" rel="noopener noreferrer"&gt;Rust&lt;/a&gt;, can also do that for you." That is true, and this was not an attempt to disparage them, merely to show that you aren't losing anything by using SQL as the representation. There are, however, ways SQL goes above and beyond. By defining &lt;a href="https://www.postgresql.org/docs/current/ddl-constraints.html" rel="noopener noreferrer"&gt;unique, not-null, and check constraints&lt;/a&gt; on top of a &lt;a href="https://www.postgresql.org/docs/current/datatype.html" rel="noopener noreferrer"&gt;rich type system&lt;/a&gt; the database can prevent you from inserting faulty data to a greater degree than even Rust, as the constraints can be against the live state of your infrastructure itself. Instead of code needing to deal with unexpected values further down the line, the faulty data can be prevented from ever being inserted. Rust's compiler making sure that you handle every possible branch is an awesome thing, but SQL has the power to make more of these branches impossible in the first place.&lt;/p&gt;

&lt;p&gt;By being in a database, you can answer questions about your infrastructure via queries, and in the case of security vulnerabilities or expensive misconfigurations you can update that state across the board at the same time. The shared, live nature of the SQL database more closely matches the reality of the cloud infrastructure, while the more malleable nature of the querying in the Structured Query Language lets you cross-check in many dimensions the AWS console does not.&lt;/p&gt;

&lt;p&gt;There's a further advantage to having a bijective relationship with your cloud: you can re-synchronize the database with your cloud at any time, which avoids statefile issues by never getting too out-of-date and makes the state almost impossible to corrupt. Software bugs are unavoidable, but since populating the database is trivial (compared to re-creating the cloud account itself), it can simply be dropped and recreated at any time without an impact on your own usage.&lt;/p&gt;

&lt;p&gt;Most SQL databases have the concept of &lt;a href="https://www.postgresql.org/docs/current/sql-createtrigger.html#SQL-CREATETRIGGER-EXAMPLES" rel="noopener noreferrer"&gt;trigger functions&lt;/a&gt; that execute based on changes to the database itself. With a collection of automatically-created trigger functions, an &lt;a href="https://en.wikipedia.org/wiki/Audit_trail" rel="noopener noreferrer"&gt;audit log&lt;/a&gt; can be maintained that keeps track of who made which cloud changes and when. This goes a step further than even a git repository of IaC changes, since changes made manually through the AWS console will also show up (though those would not have a database user to tie them to).&lt;/p&gt;
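&lt;p&gt;A minimal version of such an audit trigger, using only standard Postgres features, might look like this (the audited &lt;code&gt;instance&lt;/code&gt; table is illustrative):&lt;/p&gt;

```sql
-- Generic audit log: record who changed what, and when.
CREATE TABLE audit_log (
  id         serial PRIMARY KEY,
  table_name text NOT NULL,
  operation  text NOT NULL,
  changed_by text NOT NULL DEFAULT current_user,
  changed_at timestamptz NOT NULL DEFAULT now(),
  old_row    jsonb,
  new_row    jsonb
);

CREATE FUNCTION record_audit() RETURNS trigger AS $$
BEGIN
  INSERT INTO audit_log (table_name, operation, old_row, new_row)
  VALUES (
    TG_TABLE_NAME,
    TG_OP,
    CASE WHEN TG_OP <> 'INSERT' THEN to_jsonb(OLD) END,
    CASE WHEN TG_OP <> 'DELETE' THEN to_jsonb(NEW) END
  );
  RETURN COALESCE(NEW, OLD);
END;
$$ LANGUAGE plpgsql;

-- Attach it to any table you want tracked, e.g. a hypothetical instance table.
CREATE TRIGGER instance_audit
AFTER INSERT OR UPDATE OR DELETE ON instance
FOR EACH ROW EXECUTE FUNCTION record_audit();
```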

&lt;p&gt;Trigger functions have other uses that don't even have an analogous concept in IaC tooling. For instance, trigger functions can protect sensitive cloud resources for automated &lt;a href="https://en.wikipedia.org/wiki/Chaos_engineering" rel="noopener noreferrer"&gt;Chaos Engineering&lt;/a&gt; recovery. They can block a deletion a developer shouldn't be attempting (without first going through the much larger change of removing the trigger function), so the resource is never deleted in the first place, or they can automatically recreate the desired resource in the cloud if it is deleted via the AWS console or dropped during an outage.&lt;/p&gt;
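&lt;p&gt;A sketch of the blocking variant, again in plain Postgres; the protected &lt;code&gt;rds_instance&lt;/code&gt; table is hypothetical:&lt;/p&gt;

```sql
-- Refuse deletion of protected resources at the database layer.
CREATE FUNCTION protect_resource() RETURNS trigger AS $$
BEGIN
  RAISE EXCEPTION 'Deletion of % rows is blocked; drop the protection trigger first',
    TG_TABLE_NAME;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER no_delete_production_db
BEFORE DELETE ON rds_instance   -- hypothetical table of RDS databases
FOR EACH ROW EXECUTE FUNCTION protect_resource();

-- This now fails with the exception above instead of propagating to the cloud.
DELETE FROM rds_instance WHERE name = 'prod-main';
```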

&lt;p&gt;SQL database &lt;a href="https://en.wikipedia.org/wiki/Snapshot_%28computer_storage%29" rel="noopener noreferrer"&gt;snapshotting&lt;/a&gt; could be used to quickly revert all infrastructure changes for infrastructure without hidden state. For example, a snapshot of your load balancers' prior state would successfully restore their configuration, but restoring a dropped database from such a snapshot would only restore the fact that the database existed, not the data within it; you would still need to manually re-load a backup of that data as well.&lt;/p&gt;

&lt;p&gt;Database engines also represent a standardized protocol to interface with your data. This standard opens up a world of possibilities that can encompass the imperative and declarative infrastructure management styles and go beyond them. You can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Perform operations on your database via scripts and the &lt;a href="https://www.postgresql.org/docs/current/app-psql.html" rel="noopener noreferrer"&gt;&lt;code&gt;psql&lt;/code&gt; CLI&lt;/a&gt; tool, using whatever mix of declarative and imperative styles you prefer, mutating various tables declaratively and calling &lt;a href="https://iasql.com/docs/modules/aws/aws_sdk/?view=code-examples"&gt;AWS SDK functions directly&lt;/a&gt; where and when you see fit.&lt;/li&gt;
&lt;li&gt;Integrate the database into your application itself with a &lt;a href="https://node-postgres.com/" rel="noopener noreferrer"&gt;postgres client library&lt;/a&gt; allowing your applications to make infrastructure changes (like provisioning sharded resources for a client that wants isolation, or using a more accurate forecasting model to pre-allocate more resources before the storm hits).&lt;/li&gt;
&lt;li&gt;Connect with a &lt;a href="https://www.jetbrains.com/datagrip/features/postgresql/" rel="noopener noreferrer"&gt;SQL IDE&lt;/a&gt; for a variety of reasons, such as outage mitigation or inspection of your infrastructure in a tabular (Excel-like) view.&lt;/li&gt;
&lt;li&gt;Connect the database to a &lt;a href="https://grafana.com/docs/grafana/latest/datasources/postgres/" rel="noopener noreferrer"&gt;graphical dashboard&lt;/a&gt; for live visualization of your infrastructure's state and any other metadata transferred through.&lt;/li&gt;
&lt;li&gt;Connect it to a &lt;a href="https://docs.retool.com/docs/postgresql-integration" rel="noopener noreferrer"&gt;low code tool&lt;/a&gt; for rapid-response dashboards for your infrastructure, with buttons that let you tackle known outage vectors near-instantly, or just to make self-service provisioning of new microservices follow the infrastructure team's preferred way.&lt;/li&gt;
&lt;/ul&gt;
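&lt;p&gt;The first of these might look like the following psql session. The table is illustrative, and the SDK wrapper function is a hypothetical stand-in for what an SDK module could expose, not IaSQL's exact interface:&lt;/p&gt;

```sql
-- Declaratively request a new instance by inserting a row...
INSERT INTO instance (name, instance_type, security_group_id)
VALUES ('worker-7', 't3.small', 1);

-- ...inspect the resulting state, spreadsheet-style...
SELECT name, instance_type FROM instance ORDER BY name;

-- ...and drop down to the imperative layer when you need to.
-- (Hypothetical wrapper; see the aws_sdk module docs for the real interface.)
SELECT invoke_ec2('describeInstances', '{}');
```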

&lt;p&gt;There are negatives to representing your infrastructure as SQL, of course, as there is no perfect solution for all situations, but we believe &lt;a href="https://iasql.com" rel="noopener noreferrer"&gt;IaSQL&lt;/a&gt; offers a superset of IaC's capabilities in all but one category.&lt;/p&gt;

&lt;p&gt;SQL is an old, irregular language to work with, but it is better known than HCL, and SQL already has its own Pulumi/CDK equivalents in the form of every ORM with introspection (like &lt;a href="https://www.prisma.io/docs/concepts/components/introspection" rel="noopener noreferrer"&gt;Javascript's Prisma&lt;/a&gt;, &lt;a href="https://docs.djangoproject.com/en/4.1/howto/legacy-databases/" rel="noopener noreferrer"&gt;Python's Django&lt;/a&gt;, &lt;a href="https://github.com/xo/xo" rel="noopener noreferrer"&gt;Go's XO&lt;/a&gt;, etc.) and every query builder (&lt;a href="https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/concepts/linq/" rel="noopener noreferrer"&gt;LINQ&lt;/a&gt;, &lt;a href="https://knexjs.org/" rel="noopener noreferrer"&gt;Knex&lt;/a&gt;, etc.) in whatever programming language you prefer. You probably already know it.&lt;/p&gt;

&lt;p&gt;Peer review of changes is &lt;em&gt;different&lt;/em&gt; versus IaC. Instead of writing HCL or other IaC code, having it reviewed, and then &lt;code&gt;apply&lt;/code&gt;ing it, you can enter an &lt;a href="https://iasql.com/docs/transaction"&gt;IaSQL transaction&lt;/a&gt;, make the desired changes with Postgres acting as your IDE, automatically blocking many impossible changes via types and relations, and use &lt;a href="https://iasql.com/docs/review/"&gt;&lt;code&gt;iasql_create_review&lt;/code&gt;&lt;/a&gt; to create a review artifact for your team &lt;em&gt;after&lt;/em&gt; IaSQL has already vetted it for technical correctness. This vetting makes the result superior to the traditional peer review process in IaC.&lt;/p&gt;

&lt;p&gt;Database schemas can be overwhelming, and a schema accurately representing &lt;em&gt;all&lt;/em&gt; of AWS would be dizzying. HCL and other IaC tools let you elide parts of AWS via a module system, so we have implemented our own &lt;a href="https://dev.to/docs/module"&gt;module system&lt;/a&gt; on top of Postgres, inspired equally by programming language module systems and Linux distro package managers. This module system takes the place of the usual database &lt;a href="https://en.wikipedia.org/wiki/Schema_migration" rel="noopener noreferrer"&gt;schema migration system&lt;/a&gt; for managing the tables in the database, which may feel unusual at first, but we believe it is superior (for this use-case).&lt;/p&gt;

&lt;p&gt;Even with IaC tools letting you ignore details you don't specifically need, AWS and other clouds expose many details in multiple layers for you to control, when often you just need the same subset of functionality over and over, so these IaC tools have higher-level modules for these cookie-cutter use-cases. For IaSQL we &lt;a href="https://dev.to/docs/low-level-vs-high-level"&gt;did the same&lt;/a&gt;, though in a way that coexists with the lower-level modules so you can still query those details later, if you like. &lt;a href="https://dev.to/blog/ecs-simplified"&gt;Here&lt;/a&gt; is an example of a high-level module that greatly simplifies ECS Fargate.&lt;/p&gt;

&lt;p&gt;IaSQL is declarative like IaC, so the leaky abstraction problem of declarative programming can still impact things. Like IaC's &lt;code&gt;apply&lt;/code&gt; concept IaSQL has &lt;a href="https://dev.to/docs/transaction"&gt;its own transaction system&lt;/a&gt; (separate from the actual &lt;a href="https://www.postgresql.org/docs/current/tutorial-transactions.html" rel="noopener noreferrer"&gt;database transaction system&lt;/a&gt;) that will let you enqueue changes and preview what actions the engine will take before committing to it (or rolling it back). Going beyond that and acknowledging that sometimes you need to directly work in the lower abstraction layer, IaSQL exposes optional &lt;a href="https://iasql.com/docs/modules/aws/aws_sdk/" rel="noopener noreferrer"&gt;raw SDK access&lt;/a&gt; within the database to handle those tougher situations, and can then re-synchronize the declarative state afterwards.&lt;/p&gt;

&lt;p&gt;IaSQL can both push changes from the database to the cloud and pull changes from the cloud into the database, versus the one-way push from HCL that Terraform provides. This means you can make changes directly in the AWS console, then inspect the audit log directly or &lt;a href="https://iasql.com/docs/modules/builtin/tables/iasql_functions_rpcs_iasql_get_sql_since.IasqlGetSqlSince/" rel="noopener noreferrer"&gt;call a special formatting function&lt;/a&gt; to see what SQL statements are the equivalent of those console changes. This reduces the learning curve of IaSQL and also lowers the barrier to entry in comparison to IaC. You can continue using other cloud management tooling in conjunction with IaSQL (including, to a degree, IaC) and IaSQL will show you its way of doing the same when it syncs.&lt;/p&gt;
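&lt;p&gt;For instance (a sketch: the function is the one linked above, but the interval argument shown is an assumption, and the output depends on your audit log), after clicking around in the console you could ask for the SQL equivalent of everything that changed in the last hour:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Show the SQL statements equivalent to all changes since a timestamp
SELECT * FROM iasql_get_sql_since((now() - interval '1 hour')::text);
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;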

&lt;p&gt;Finally, IaSQL makes your infrastructure &lt;em&gt;accessible&lt;/em&gt; to a much larger portion of your company than IaC tools can. Data Scientists who know SQL could help with the provisioning of their own batch processing code, EngSec can query &lt;em&gt;all&lt;/em&gt; company infrastructure for security vulnerabilities and propose changes to improve things, Business Analysts in Growth teams can query actual traffic information from the live infrastructure (with read-only AWS credentials, of course), Finance auditors can query your infrastructure joined on pricing data and figure out &lt;em&gt;accurate&lt;/em&gt; expense reporting per division, and DevOps can run queries joined on CloudWatch utilization stats to automatically identify overprovisioned services and create recommended auto-scaling configuration changes. These "not-developer" and "developer-adjacent" roles can take over a lot of the ancillary burden of maintaining a production system.&lt;/p&gt;
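&lt;p&gt;As a small illustration of that accessibility, here is a sketch of a query a security engineer might run against the &lt;code&gt;security_group&lt;/code&gt; and &lt;code&gt;security_group_rule&lt;/code&gt; tables used elsewhere in this post to list every ingress rule that leaves SSH open to the whole internet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Ingress rules exposing SSH (port 22) to 0.0.0.0/0
SELECT sg.group_name, sgr.cidr_ipv4
FROM security_group_rule sgr
INNER JOIN security_group sg ON sg.id = sgr.security_group_id
WHERE NOT sgr.is_egress
  AND sgr.from_port &amp;lt;= 22
  AND sgr.to_port &amp;gt;= 22
  AND sgr.cidr_ipv4 = '0.0.0.0/0';
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;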

&lt;p&gt;But IaSQL is much newer than the IaC tools. We have coverage at the time of writing for 25 different AWS services, but that is admittedly far from complete. IaC tools also have cloud resources they cannot represent, but it is a much smaller percentage. This disadvantage will diminish over time, but creating a bijective entity representation of every cloud service requires more developer firepower than the approach IaC tools have taken, so we expect to always lag temporally on this front. At least, as long as AWS and friends don't follow a resource-oriented standard like &lt;a href="https://swagger.io/specification/" rel="noopener noreferrer"&gt;Swagger&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cloud APIs on Swagger?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fibn1tthjypydhcgs9fps.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fibn1tthjypydhcgs9fps.jpg" alt="swagger-i-would-be-so-happy" width="500" height="572"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Me too, Craig, me too...&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;With that said, what does our EC2 instance example look like in IaSQL, uh, SQL?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Assuming that the AWS credentials have already been inserted as done during the setup wizard&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;default_aws_region&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'us-west-2'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- Install the necessary modules&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;iasql_install&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'aws_ec2'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'aws_ec2_metadata'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'ssh_accounts'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- Create the key-pair and put it in our ssh_credentials table&lt;/span&gt;
&lt;span class="c1"&gt;-- We will update this later with the public IP address&lt;/span&gt;
&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;ssh_credentials&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hostname&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;private_key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'login'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'tbd'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'ubuntu'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;privateKey&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;key_pair_request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'login'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'us-west-2'&lt;/span&gt;&lt;span class="p"&gt;)));&lt;/span&gt;

&lt;span class="c1"&gt;-- Start an IaSQL transaction so these changes don't mutate our infrastructure immediately&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;iasql_begin&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="c1"&gt;-- Define the security group&lt;/span&gt;
&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;security_group&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;group_name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'Login security group'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'login-sg'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- And it's ingress rule&lt;/span&gt;
&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;security_group_rule&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;is_egress&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ip_protocol&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;from_port&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;to_port&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cidr_ipv4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;security_group_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="k"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'tcp'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;22&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;22&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'0.0.0.0/0'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;security_group&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;group_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'login-sg'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;-- Define the instance and the key-pair to use&lt;/span&gt;
&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;instance&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ami&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;instance_type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;key_pair_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tags&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;subnet_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;SELECT&lt;/span&gt;
    &lt;span class="s1"&gt;'resolve:ssm:/aws/service/canonical/ubuntu/server/20.04/stable/current/amd64/hvm/ebs-gp2/ami-id'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s1"&gt;'t3.small'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s1"&gt;'login'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s1"&gt;'{"name":"login"}'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;id&lt;/span&gt;
  &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;subnet&lt;/span&gt;
  &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;availability_zone&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'us-west-2a'&lt;/span&gt;
  &lt;span class="k"&gt;LIMIT&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;-- Associate the instance with the security group&lt;/span&gt;
&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;instance_security_groups&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;instance_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;security_group_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;instance&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;tags&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'name'&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'login'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;security_group&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;group_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'login-sg'&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'us-west-2'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can then &lt;code&gt;SELECT * FROM iasql_preview();&lt;/code&gt; to see what this does, like a &lt;code&gt;terraform plan&lt;/code&gt;:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;iasql_preview&lt;/code&gt; output
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;action&lt;/th&gt;
&lt;th&gt;table_name&lt;/th&gt;
&lt;th&gt;id&lt;/th&gt;
&lt;th&gt;description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;create&lt;/td&gt;
&lt;td&gt;instance&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;create&lt;/td&gt;
&lt;td&gt;security_group&lt;/td&gt;
&lt;td&gt;18&lt;/td&gt;
&lt;td&gt;18&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;create&lt;/td&gt;
&lt;td&gt;security_group_rule&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This is a "high level" overview with minimal details. We can get back a listing of all changes, represented in auto-generated SQL with &lt;code&gt;SELECT * FROM iasql_get_sql_for_transaction();&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;security_group&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;group_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'Login security group'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'login-sg'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;aws_regions&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'us-west-2'&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;


&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;security_group_rule&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;is_egress&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ip_protocol&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;from_port&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;to_port&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cidr_ipv4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;security_group_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'f'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'tcp'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'22'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'22'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'0.0.0.0/0'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;aws_regions&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'us-west-2'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;security_group&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;group_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;aws_regions&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'us-west-2'&lt;/span&gt;&lt;span class="p"&gt;)));&lt;/span&gt;


&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;instance&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ami&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;instance_type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;key_pair_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;state&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tags&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hibernation_enabled&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;subnet_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'resolve:ssm:/aws/service/canonical/ubuntu/server/20.04/stable/current/amd64/hvm/ebs-gp2/ami-id'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'t3.small'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'login'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'running'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'{"name":"login"}'&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;jsonb&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'f'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;aws_regions&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'us-west-2'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;subnet&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;subnet_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'subnet-06140fd708a495450'&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;aws_regions&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span 
class="s1"&gt;'us-west-2'&lt;/span&gt;&lt;span class="p"&gt;)));&lt;/span&gt;


&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;instance_security_groups&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;instance_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;security_group_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;instance&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;instance_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;aws_regions&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'us-west-2'&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;security_group&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;group_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;aws_regions&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'us-west-2'&lt;/span&gt;&lt;span class="p"&gt;)));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output in this case is very similar to (though a bit more verbose than) our original SQL statements, but if you were going back and forth on exactly what changes to make, it can be a good summary.&lt;/p&gt;

&lt;p&gt;If we're good with this, we can &lt;code&gt;SELECT iasql_commit()&lt;/code&gt; to push this to our cloud account and then update our &lt;code&gt;ssh_credentials&lt;/code&gt; record with the public IP address and check the connection:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;WITH&lt;/span&gt; &lt;span class="n"&gt;h&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="k"&gt;host&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;im&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;public_ip_address&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;hostname&lt;/span&gt;
  &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;instance_metadata&lt;/span&gt; &lt;span class="n"&gt;im&lt;/span&gt;
  &lt;span class="k"&gt;INNER&lt;/span&gt; &lt;span class="k"&gt;JOIN&lt;/span&gt; &lt;span class="n"&gt;instance&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;im&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;instance_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;instance_id&lt;/span&gt;
  &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tags&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'name'&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'login'&lt;/span&gt;
  &lt;span class="k"&gt;LIMIT&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="n"&gt;ssh_credentials&lt;/span&gt; &lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;hostname&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;h&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;hostname&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;ssh_credentials&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;h&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;hostname&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'tbd'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;ssh_exec&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'login'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'uname -a'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  SQL is the least bad option for Infrastructure Management
&lt;/h2&gt;

&lt;p&gt;Infrastructure Management is a complex beast. The cloud APIs are massive, with a rich collection of knobs to tweak, and mistakes can be dangerous because changes often take a long time to apply and recovery can take even longer. We have covered the two main types of infrastructure management: imperative and declarative, where ad-hoc usage of the AWS console fits under the imperative umbrella along with scripts calling an SDK, while IaC has until recently been the only way to declaratively manage your infrastructure.&lt;/p&gt;

&lt;p&gt;Both imperative and declarative styles have their downsides. The imperative style, being more explicit, is more tedious and more prone to error, and the usual execution frequency of &lt;em&gt;only once&lt;/em&gt; for most production changes makes scripting usually no better than clicking around in the AWS console. The declarative style is much less tedious, but because it is a leaky abstraction, inspecting its execution plan is critical to make sure it doesn't automatically put you into an outage, and failures during application can still require falling back to imperative mode from time to time to get things back into shape.&lt;/p&gt;

&lt;p&gt;Also until recently, all declarative approaches have suffered from being a one-way transition, from your IaC codebase into the cloud, with no good or easy way to synchronize that code with the current state of the cloud if they drift from each other. Because these declarative programming approaches do not have a singular canonical form, they can't be bijective, making the transformation back from the other side impossible to square up.&lt;/p&gt;

&lt;p&gt;Data, on the other hand, is much easier to normalize in such a way, and relational data models, as found in SQL databases, can naturally model the relationships between cloud services, making it possible to produce a bijective representation. IaSQL takes advantage of this to make a declarative model based on tables and foreign keys that can automatically handle changes done outside of this declarative model, either by dropping down into its own imperative mode or by using any other tooling to manage your infrastructure, giving it a flexibility and resiliency that IaC tools don't have.&lt;/p&gt;

&lt;p&gt;On top of this foundation, we have added tooling that replicates or substitutes for existing IaC tooling, along with all of the past ~50 years of database tooling and expertise that you can take advantage of, providing a superset of functionality that makes existing use-cases easier and new use-cases possible.&lt;/p&gt;

&lt;p&gt;IaSQL is &lt;a href="https://dev.to/blog/beta"&gt;currently in beta&lt;/a&gt; and can be used locally with &lt;a href="https://hub.docker.com/r/iasql/iasql" rel="noopener noreferrer"&gt;&lt;code&gt;docker&lt;/code&gt;&lt;/a&gt; or via our &lt;a href="https://dev.to/hosted"&gt;SaaS&lt;/a&gt;. We're proud of what we have built, and can't wait to hear from you on feature requests and bug reports.&lt;/p&gt;

</description>
      <category>sql</category>
      <category>aws</category>
      <category>devops</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Securely connect to an Amazon RDS via PrivateLink using SQL</title>
      <dc:creator>Luis Fernando De Pombo</dc:creator>
      <pubDate>Wed, 29 Mar 2023 20:55:59 +0000</pubDate>
      <link>https://dev.to/iasql/securely-connect-to-an-amazon-rds-via-privatelink-using-sql-40ck</link>
      <guid>https://dev.to/iasql/securely-connect-to-an-amazon-rds-via-privatelink-using-sql-40ck</guid>
      <description>&lt;p&gt;Do you have some database instances on RDS and wonder what's the most secure way to reach them? In this post, we will walk you through how to securely connect to an AWS RDS instance using PrivateLink and IaSQL. AWS PrivateLink provides private connectivity between virtual private clouds (VPCs), supported AWS services, and your on-premises networks without exposing your traffic to the public internet. IaSQL is an &lt;a href="https://github.com/iasql/iasql" rel="noopener noreferrer"&gt;open-source&lt;/a&gt; software tool that creates a two-way connection between an unmodified PostgreSQL database and an AWS account so you can manage your infrastructure from a database.&lt;/p&gt;

&lt;p&gt;When creating a database, AWS provides some specific information about the hostnames and ports used. You can use those to reach your databases with the specific clients for MySQL, Postgres, etc.:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flyif41v4pes3b1egsbgo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flyif41v4pes3b1egsbgo.png" alt="Image description" width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On a public VPC, those endpoints will be public by default. However, consider creating them under a private VPC and accessing them internally to add a layer of security to critical services. AWS PrivateLink offers the possibility to securely access those services without exposing them to the internet, using only Amazon's private network.&lt;/p&gt;

&lt;h3&gt;
  
  
  Are my RDS instances properly configured?
&lt;/h3&gt;

&lt;p&gt;Run the following query to find out:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt; &lt;span class="c1"&gt;---- Installing the needed modules&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;iasql_install&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'aws_rds'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'aws_vpc'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- Perform the query for endpoints&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;rds&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;is_default&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cidr_block&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="k"&gt;SELECT&lt;/span&gt;
      &lt;span class="k"&gt;COUNT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="k"&gt;FROM&lt;/span&gt;
      &lt;span class="n"&gt;endpoint_interface&lt;/span&gt;
    &lt;span class="k"&gt;WHERE&lt;/span&gt;
      &lt;span class="n"&gt;endpoint_interface&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;rds&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt;
      &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;service&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'rds'&lt;/span&gt;
      &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;endpoint_interface&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;vpc_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;has_endpoint_interface&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;
  &lt;span class="n"&gt;rds&lt;/span&gt;
  &lt;span class="k"&gt;LEFT&lt;/span&gt; &lt;span class="k"&gt;OUTER&lt;/span&gt; &lt;span class="k"&gt;JOIN&lt;/span&gt; &lt;span class="n"&gt;vpc&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;rds&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Have you found missing endpoints? No problem: IaSQL can generate the missing endpoint interfaces for you:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="o"&gt;*&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;
  &lt;span class="n"&gt;iasql_begin&lt;/span&gt; &lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="c1"&gt;-- Inserts the missing endpoints&lt;/span&gt;
&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt;
  &lt;span class="n"&gt;endpoint_interface&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;vpc_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;private_dns_enabled&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;RDS&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="s1"&gt;'rds'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="k"&gt;TRUE&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;
  &lt;span class="n"&gt;rds&lt;/span&gt;
  &lt;span class="k"&gt;INNER&lt;/span&gt; &lt;span class="k"&gt;JOIN&lt;/span&gt; &lt;span class="n"&gt;vpc&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;rds&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt;
  &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;EXISTS&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="k"&gt;SELECT&lt;/span&gt;
      &lt;span class="n"&gt;id&lt;/span&gt;
    &lt;span class="k"&gt;FROM&lt;/span&gt;
      &lt;span class="n"&gt;endpoint_interface&lt;/span&gt;
    &lt;span class="k"&gt;WHERE&lt;/span&gt;
      &lt;span class="n"&gt;endpoint_interface&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;rds&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt;
      &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;endpoint_interface&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;vpc_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- Preview the changes&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="o"&gt;*&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;
  &lt;span class="n"&gt;iasql_preview&lt;/span&gt; &lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="c1"&gt;-- Apply the changes&lt;/span&gt;
&lt;span class="c1"&gt;--select * from iasql_commit();&lt;/span&gt;
&lt;span class="c1"&gt;-- Rollback the changes&lt;/span&gt;
&lt;span class="k"&gt;select&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="n"&gt;iasql_rollback&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running this query on an IaSQL database will auto-generate the missing endpoint interfaces and will allow you to preview the changes to be applied on your cloud. Once you are OK with the results, you can uncomment the &lt;code&gt;iasql_commit&lt;/code&gt; query, comment the &lt;code&gt;iasql_rollback&lt;/code&gt; query, and it will create the endpoints for you. If the final results in the cloud are not as expected, you can always roll back your changes by calling the &lt;code&gt;iasql_rollback&lt;/code&gt; command.&lt;/p&gt;
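&lt;p&gt;For example, once the preview looks right, applying the changes is a single call:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Apply the previewed changes to the cloud account
SELECT
  *
FROM
  iasql_commit ();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;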

&lt;h3&gt;
  
  
  Testing the result
&lt;/h3&gt;

&lt;p&gt;After running the query, you should have endpoint interfaces created for your RDS resources, in the same region and VPC as your databases:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3t739za4kz7vgzk3376q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3t739za4kz7vgzk3376q.png" alt="Image description" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To test the result, launch a new EC2 instance in a private VPC in the same region where your RDS instance and endpoint interface are configured. You can double-check that the instance has no internet connectivity, yet the RDS endpoint can still be reached through the interface that was created:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frqhqpzqyueo97g4vm373.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frqhqpzqyueo97g4vm373.png" alt="Image description" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note that these endpoints can only be used from the same region. To reach services across multiple regions, you can use &lt;a href="https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html" rel="noopener noreferrer"&gt;VPC peering&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>postgres</category>
      <category>sql</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Save $ on public S3 buckets using VPC endpoints via SQL</title>
      <dc:creator>Luis Fernando De Pombo</dc:creator>
      <pubDate>Thu, 23 Mar 2023 20:19:30 +0000</pubDate>
      <link>https://dev.to/iasql/save-on-public-s3-buckets-using-vpc-endpoints-via-sql-5dko</link>
      <guid>https://dev.to/iasql/save-on-public-s3-buckets-using-vpc-endpoints-via-sql-5dko</guid>
      <description>&lt;p&gt;Are you using S3 buckets as part of your cloud deployments? How are you accessing them?&lt;/p&gt;

&lt;p&gt;When running applications in VPCs without public access, you may still need to reach S3 buckets from a private subnet.&lt;br&gt;
One simple but costly way to do so is to route that traffic over the public internet via &lt;a href="https://docs.aws.amazon.com/en_en/vpc/latest/userguide/vpc-nat-gateway.html" rel="noopener noreferrer"&gt;NAT gateways&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp6l13rn5xoev5l0vjin2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp6l13rn5xoev5l0vjin2.png" alt="Image description" width="800" height="702"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A better solution is to create &lt;a href="https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html" rel="noopener noreferrer"&gt;gateway or interface VPC endpoints&lt;/a&gt; in each region where your buckets live.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4eo3581aswhxqkae75uf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4eo3581aswhxqkae75uf.png" alt="Image description" width="800" height="263"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once a VPC endpoint is enabled, you can access your S3 buckets through it. In this post, we will walk you through how to control your buckets from an internal network with the desired security, and without the extra costs of a NAT gateway, using a VPC endpoint and IaSQL. IaSQL is an &lt;a href="https://github.com/iasql/iasql" rel="noopener noreferrer"&gt;open-source&lt;/a&gt; software tool that creates a two-way connection between an unmodified PostgreSQL database and an AWS account so you can manage your infrastructure from a database.&lt;/p&gt;


&lt;h3&gt;
  
  
  Why use VPC Interface Endpoints
&lt;/h3&gt;

&lt;p&gt;AWS VPC Interface Endpoints are a component of AWS's &lt;a href="https://aws.amazon.com/privatelink" rel="noopener noreferrer"&gt;PrivateLink&lt;/a&gt; infrastructure. AWS PrivateLink provides private connectivity between virtual private clouds (VPCs), supported AWS services, and your on-premises networks without exposing your traffic to the public internet.&lt;/p&gt;

&lt;p&gt;Relying on PrivateLink offers a set of advantages in terms of &lt;strong&gt;cost&lt;/strong&gt;, &lt;strong&gt;security&lt;/strong&gt; and &lt;strong&gt;performance&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cost&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When using NAT gateways, you pay both for the NAT gateway itself and for the bandwidth used to reach the internet. A NAT gateway gives private instances generic internet access, which is useful for tasks like pulling dependencies or updating packages. But for reaching AWS services directly, VPC endpoints are a cheaper, more targeted alternative: they grant access only to the specified AWS endpoints, not to the internet at large. With VPC endpoints, you pay for the bandwidth through the endpoint but little or nothing for the endpoint itself.&lt;/p&gt;

&lt;p&gt;NAT gateway costs vary by region, but are roughly &lt;em&gt;0.045 USD&lt;/em&gt; per GB processed plus &lt;em&gt;0.045 USD/hour&lt;/em&gt; for the gateway itself. VPC endpoints are charged at roughly &lt;em&gt;0.01 USD&lt;/em&gt; per GB processed, getting cheaper after the first PB. For example, pushing 1 TB a month through a NAT gateway costs about 730 × 0.045 + 1,000 × 0.045 ≈ 78 USD, versus about 10 USD of data processing through an endpoint.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Component&lt;/th&gt;
&lt;th&gt;Hourly pricing&lt;/th&gt;
&lt;th&gt;Per GB pricing&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;NAT gateway&lt;/td&gt;
&lt;td&gt;0.045 USD/hour&lt;/td&gt;
&lt;td&gt;0.045 USD per GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VPC endpoint&lt;/td&gt;
&lt;td&gt;--&lt;/td&gt;
&lt;td&gt;0.01 USD per GB&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Security: no internet traversal&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When deploying services in production, defense-in-depth is essential: an information assurance strategy that provides multiple, redundant defensive measures in case one security control fails or a vulnerability is exploited.&lt;br&gt;
With interface endpoints, services are accessed over Amazon's internal network. Keeping them in a private network prevents attackers from reaching them directly and reduces the attack surface.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security: policies&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS services can benefit from &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/access_controlling.html" rel="noopener noreferrer"&gt;IAM policies&lt;/a&gt; for fine-grained access control. When using interface endpoints, those policies can be applied to determine the traffic that can pass through the interface.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Performance: latency&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Because the traffic stays within Amazon’s networks, the physical distance traveled, the number of hops and the risk of traversing congested networks are all significantly lower.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Performance: bandwidth&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;PrivateLink supports a sustained bandwidth of 10 Gbps per availability zone with bursts up to 40 Gbps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Performance: stability&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;PrivateLink consistently lowers the number of errors and timeouts under high load, for the same reasons latency improves: traffic stays within Amazon’s networks, reducing the distance traveled, the number of hops, and the risk of congested links.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Do you want to know if you have it properly configured?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The following query finds your active S3 buckets across all regions, along with the existing endpoint gateways or interfaces associated with them, if any:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt; &lt;span class="c1"&gt;-- Installing the needed modules&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;iasql_install&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'aws_s3'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'aws_vpc'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- Perform the query for endpoints&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;is_default&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cidr_block&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="k"&gt;SELECT&lt;/span&gt;
      &lt;span class="k"&gt;COUNT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="k"&gt;FROM&lt;/span&gt;
      &lt;span class="n"&gt;endpoint_gateway&lt;/span&gt;
    &lt;span class="k"&gt;WHERE&lt;/span&gt;
      &lt;span class="n"&gt;endpoint_gateway&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt;
      &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;service&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'s3'&lt;/span&gt;
      &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;endpoint_gateway&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;vpc_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;has_endpoint_gateway&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="k"&gt;SELECT&lt;/span&gt;
      &lt;span class="k"&gt;COUNT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="k"&gt;FROM&lt;/span&gt;
      &lt;span class="n"&gt;endpoint_interface&lt;/span&gt;
    &lt;span class="k"&gt;WHERE&lt;/span&gt;
      &lt;span class="n"&gt;endpoint_interface&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt;
      &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;service&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'s3'&lt;/span&gt;
      &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;endpoint_interface&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;vpc_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;has_endpoint_interface&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;
  &lt;span class="n"&gt;bucket&lt;/span&gt;
  &lt;span class="k"&gt;LEFT&lt;/span&gt; &lt;span class="k"&gt;OUTER&lt;/span&gt; &lt;span class="k"&gt;JOIN&lt;/span&gt; &lt;span class="n"&gt;vpc&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Have you found missing endpoints? No problem, IaSQL can generate them for you. There are two types of endpoint you can create: gateway and interface. We'll describe the main features of each so you can create the type you need in an automated way.&lt;/p&gt;

&lt;h2&gt;
  
  
  Interface endpoints
&lt;/h2&gt;

&lt;p&gt;An interface endpoint is an elastic network interface with a private IP address that serves as an entry point for traffic destined for a supported AWS service. As they use ENIs, they can benefit from security groups to control traffic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/access_controlling.html" rel="noopener noreferrer"&gt;IAM policies&lt;/a&gt; can also be&lt;br&gt;
added to control access to those endpoints.&lt;/p&gt;

&lt;p&gt;They are regional, meaning they can only be accessed from the region in which they are created. Multi-region access is still possible with VPC peering, so resources in one region can be reached from others, though this supports IPv4 TCP traffic only.&lt;/p&gt;

&lt;p&gt;An interface endpoint (except an S3 interface endpoint) has a corresponding private DNS hostname that can be used to access the resource.&lt;/p&gt;

&lt;p&gt;Interface endpoints cover a wide range of services such as S3, Lambda, and API Gateway. &lt;a href="https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html" rel="noopener noreferrer"&gt;Get more detail about interface endpoints&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This query will auto-generate the missing interface endpoints and preview the changes to be applied to your cloud. Simply uncomment the &lt;code&gt;iasql_commit&lt;/code&gt; statement and run it again to apply the changes to your cloud account.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt; &lt;span class="c1"&gt;-- Inserts the missing endpoints&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="o"&gt;*&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;
  &lt;span class="n"&gt;iasql_begin&lt;/span&gt; &lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt;
  &lt;span class="n"&gt;endpoint_interface&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;vpc_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="s1"&gt;'s3'&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;
  &lt;span class="n"&gt;bucket&lt;/span&gt;
  &lt;span class="k"&gt;INNER&lt;/span&gt; &lt;span class="k"&gt;JOIN&lt;/span&gt; &lt;span class="n"&gt;vpc&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt;
  &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;EXISTS&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="k"&gt;SELECT&lt;/span&gt;
      &lt;span class="n"&gt;id&lt;/span&gt;
    &lt;span class="k"&gt;FROM&lt;/span&gt;
      &lt;span class="n"&gt;endpoint_interface&lt;/span&gt;
    &lt;span class="k"&gt;WHERE&lt;/span&gt;
      &lt;span class="n"&gt;endpoint_interface&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt;
      &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;endpoint_interface&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;vpc_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- Preview the changes&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="o"&gt;*&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;
  &lt;span class="n"&gt;iasql_preview&lt;/span&gt; &lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="c1"&gt;-- Commit the changes&lt;/span&gt;
&lt;span class="c1"&gt;-- SELECT * FROM iasql_commit();&lt;/span&gt;
&lt;span class="c1"&gt;-- Rollback changes&lt;/span&gt;
&lt;span class="c1"&gt;-- SELECT * FROM iasql_rollback();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
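&lt;p&gt;After committing, a similar sketch of a query (again using only the columns from the audit query above) counts the S3 interface endpoints per region, so you can confirm each bucket region is covered:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Count S3 interface endpoints per region
SELECT
  region,
  COUNT(*) AS s3_interfaces
FROM
  endpoint_interface
WHERE
  service = 's3'
GROUP BY
  region;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;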



&lt;h2&gt;
  
  
  Endpoint gateways
&lt;/h2&gt;

&lt;p&gt;An endpoint gateway is a gateway you specify as a target for a route in your route table, used to route traffic to DynamoDB or S3.&lt;/p&gt;

&lt;p&gt;Multiple gateway endpoints can be created in a single VPC, and those can be used on different route tables to enforce different access policies from different subnets to the same service.&lt;/p&gt;

&lt;p&gt;Gateway endpoints are supported within the same region only: resources in other regions cannot be accessed, even with VPC peering. They also support IPv4 traffic only.&lt;/p&gt;

&lt;p&gt;To use them, DNS resolution must be enabled in the VPC.&lt;/p&gt;

&lt;p&gt;When a route is added, all instances in the subnets associated with the route table will automatically use the endpoint to access the service.&lt;/p&gt;

&lt;p&gt;Gateway endpoints only support S3 and DynamoDB services. &lt;a href="https://docs.aws.amazon.com/vpc/latest/privatelink/gateway-endpoints.html" rel="noopener noreferrer"&gt;Get more detail about gateway endpoints&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This query will auto-generate the missing endpoint gateways and will allow you to preview the changes to be applied on your cloud.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt; &lt;span class="c1"&gt;-- Inserts the missing endpoints&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="o"&gt;*&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;
  &lt;span class="n"&gt;iasql_begin&lt;/span&gt; &lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt;
  &lt;span class="n"&gt;endpoint_gateway&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;vpc_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="s1"&gt;'s3'&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;
  &lt;span class="n"&gt;bucket&lt;/span&gt;
  &lt;span class="k"&gt;INNER&lt;/span&gt; &lt;span class="k"&gt;JOIN&lt;/span&gt; &lt;span class="n"&gt;vpc&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt;
  &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;EXISTS&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="k"&gt;SELECT&lt;/span&gt;
      &lt;span class="n"&gt;id&lt;/span&gt;
    &lt;span class="k"&gt;FROM&lt;/span&gt;
      &lt;span class="n"&gt;endpoint_gateway&lt;/span&gt;
    &lt;span class="k"&gt;WHERE&lt;/span&gt;
      &lt;span class="n"&gt;endpoint_gateway&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt;
      &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;endpoint_gateway&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;vpc_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="c1"&gt;-- Preview the changes&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="o"&gt;*&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;
  &lt;span class="n"&gt;iasql_preview&lt;/span&gt; &lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="c1"&gt;-- Commit the changes&lt;/span&gt;
&lt;span class="c1"&gt;-- SELECT * FROM iasql_commit();&lt;/span&gt;
&lt;span class="c1"&gt;-- Rollback changes&lt;/span&gt;
&lt;span class="c1"&gt;-- SELECT * FROM iasql_rollback();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
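&lt;p&gt;As a quick sanity check after committing, this sketch of a query lists any VPCs that still lack an S3 gateway endpoint (it should return no rows once everything is created):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- VPCs with no S3 gateway endpoint yet
SELECT
  vpc.region,
  vpc.id
FROM
  vpc
WHERE
  NOT EXISTS (
    SELECT
      1
    FROM
      endpoint_gateway
    WHERE
      endpoint_gateway.vpc_id = vpc.id
      AND endpoint_gateway.service = 's3'
  );
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;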



&lt;h2&gt;
  
  
  Testing the result
&lt;/h2&gt;

&lt;p&gt;Once all the relevant endpoints have been created, you can confirm access to the gateway endpoints from an internal EC2 instance in the same region as the endpoint you want to test:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs8ggg3xxrhx6dkbjt4gl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs8ggg3xxrhx6dkbjt4gl.png" alt="Image description" width="800" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For interface endpoints, access can also be tested using the named endpoint.&lt;/p&gt;

</description>
      <category>s3</category>
      <category>aws</category>
      <category>sql</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Deploy a Static Website on AWS using SQL</title>
      <dc:creator>Luis Fernando De Pombo</dc:creator>
      <pubDate>Mon, 06 Mar 2023 20:37:34 +0000</pubDate>
      <link>https://dev.to/iasql/deploy-a-static-website-on-aws-using-sql-4nj</link>
      <guid>https://dev.to/iasql/deploy-a-static-website-on-aws-using-sql-4nj</guid>
      <description>&lt;p&gt;Did you know you can deploy a static website using a SQL REPL? In this post, we'll show you how to use IaSQL to deploy a static website from your GitHub repository to the AWS S3 + CloudFront services using only SQL queries. IaSQL is an &lt;a href="https://github.com/iasql/iasql" rel="noopener noreferrer"&gt;open-source&lt;/a&gt; tool that creates a two-way connection between an unmodified PostgreSQL database and an AWS account so you can manage your infrastructure from a database.&lt;/p&gt;

&lt;p&gt;We will create and configure an S3 bucket to serve our static website. To enable HTTPS, we'll also add a CloudFront distribution in front of it. We'll also leverage CodeBuild to automatically build the files for our project and copy them to the newly created S3 bucket.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create an S3 Bucket
&lt;/h2&gt;

&lt;p&gt;To work with S3, we first need to install the corresponding IaSQL module.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;iasql_install&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'aws_s3'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Installing a module gives us access to the tables and RPCs it provides. The &lt;code&gt;aws_s3&lt;/code&gt; module lets us manage S3 buckets, S3 static website hosting, and other related resources. So let's create an S3 bucket first.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;iasql_begin&lt;/span&gt; &lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt;
  &lt;span class="n"&gt;bucket&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;NAME&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;policy_document&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;VALUES&lt;/span&gt;
  &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s1"&gt;'&amp;lt;bucket-name&amp;gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s1"&gt;'{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::&amp;lt;bucket-name&amp;gt;/*"
      ]
    }
  ]
}'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s1"&gt;'us-east-1'&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;iasql_commit&lt;/span&gt; &lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above query creates a new bucket named &lt;code&gt;&amp;lt;bucket-name&amp;gt;&lt;/code&gt; in the &lt;code&gt;us-east-1&lt;/code&gt; region with the given policy. The &lt;code&gt;iasql_begin&lt;/code&gt; and &lt;code&gt;iasql_commit&lt;/code&gt; RPCs start and finish an IaSQL transaction. Learn more about IaSQL transactions in this part of our &lt;a href="https://iasql.com/docs/transaction/"&gt;documentation&lt;/a&gt;.&lt;/p&gt;
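&lt;p&gt;Since the bucket is now just a row in the &lt;code&gt;bucket&lt;/code&gt; table, a plain query is enough to double-check that it was created (a quick sanity check, using the same &lt;code&gt;&amp;lt;bucket-name&amp;gt;&lt;/code&gt; placeholder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;
  &lt;span class="n"&gt;bucket&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt;
  &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'&amp;lt;bucket-name&amp;gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;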

&lt;p&gt;Now that we have a bucket, we can upload a file to it and see if we're able to view it using our web browser. Let's use IaSQL to upload a file to our newly created bucket:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="o"&gt;*&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;
  &lt;span class="n"&gt;s3_upload_object&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'&amp;lt;bucket-name&amp;gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'hello.txt'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'Hello IaSQL!'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'text/plain'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This uploads a file named &lt;code&gt;hello.txt&lt;/code&gt; to our bucket with the content &lt;code&gt;Hello IaSQL!&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;You can read the code for &lt;a href="https://github.com/iasql/iasql/blob/c70f068c7520baf00cea9ddd3a76b8c6dbd2b23b/src/modules/aws_s3/rpcs/s3_upload_object.ts#L27-L33" rel="noopener noreferrer"&gt;&lt;code&gt;s3_upload_object&lt;/code&gt; RPC&lt;/a&gt; as well as all other IaSQL modules and RPCs in our GitHub &lt;a href="https://github.com/iasql/iasql/" rel="noopener noreferrer"&gt;repository&lt;/a&gt; to see how they work.&lt;/p&gt;

&lt;p&gt;Let's see if we can access our file using the S3 bucket URL. It should be as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://&amp;lt;bucket-name&amp;gt;.s3.amazonaws.com/hello.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But we're unable to access the file directly because S3 blocks public access by default.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6gs1uxohdyj4uoinmfdl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6gs1uxohdyj4uoinmfdl.png" alt="Image description" width="800" height="165"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Make The Bucket Public
&lt;/h2&gt;

&lt;p&gt;To access the files directly, we need to lift S3's default public access restrictions on the bucket. We can use the &lt;code&gt;public_access_block&lt;/code&gt; table provided by the &lt;code&gt;aws_s3&lt;/code&gt; module to allow public requests to reach our objects.&lt;/p&gt;

&lt;p&gt;If you want to know which resources (via tables) an IaSQL module handles, you can visit our documentation page. It also lists and explains all the RPCs a module provides. In our case, we can visit &lt;a href="https://iasql.com/docs/reference/sql/#aws_s3" rel="noopener noreferrer"&gt;this link&lt;/a&gt; for the tables and RPCs available in the &lt;code&gt;aws_s3&lt;/code&gt; module.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;iasql_begin&lt;/span&gt; &lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt;
  &lt;span class="n"&gt;public_access_block&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bucket_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;block_public_acls&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ignore_public_acls&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;block_public_policy&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;restrict_public_buckets&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;VALUES&lt;/span&gt;
  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'&amp;lt;bucket-name&amp;gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;FALSE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;FALSE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;FALSE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;FALSE&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;CONFLICT&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bucket_name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;DO&lt;/span&gt;
&lt;span class="k"&gt;UPDATE&lt;/span&gt;
&lt;span class="k"&gt;SET&lt;/span&gt;
  &lt;span class="n"&gt;block_public_acls&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;excluded&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;block_public_acls&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;ignore_public_acls&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;excluded&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ignore_public_acls&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;block_public_policy&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;excluded&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;block_public_policy&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;restrict_public_buckets&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;excluded&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;restrict_public_buckets&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;iasql_commit&lt;/span&gt; &lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There's a one-to-one relationship between a bucket and its public access block, so we use Postgres &lt;a href="https://www.postgresql.org/docs/current/sql-insert.html#SQL-ON-CONFLICT" rel="noopener noreferrer"&gt;&lt;code&gt;ON CONFLICT&lt;/code&gt;&lt;/a&gt; syntax: if a record already exists, it is updated in place without hassle.&lt;/p&gt;
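&lt;p&gt;To confirm the change took effect, we can query the &lt;code&gt;public_access_block&lt;/code&gt; table for our bucket; all four flags should now be &lt;code&gt;false&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="o"&gt;*&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;
  &lt;span class="n"&gt;public_access_block&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt;
  &lt;span class="n"&gt;bucket_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'&amp;lt;bucket-name&amp;gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;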

&lt;p&gt;Now we should be able to directly access our file through the web browser.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://&amp;lt;bucket-name&amp;gt;.s3.amazonaws.com/hello.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwbuga1wuf2gwak0ep8b6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwbuga1wuf2gwak0ep8b6.png" alt="Image description" width="340" height="104"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Use S3 Static Website Hosting
&lt;/h2&gt;

&lt;p&gt;But simply serving individual files isn't the same as hosting a static website. We need to enable &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html" rel="noopener noreferrer"&gt;static website hosting&lt;/a&gt; on our bucket to be able to deploy a React codebase. So let's enable it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;iasql_begin&lt;/span&gt; &lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt;
  &lt;span class="n"&gt;bucket_website&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bucket_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;index_document&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;VALUES&lt;/span&gt;
  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'&amp;lt;bucket-name&amp;gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'index.html'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;iasql_commit&lt;/span&gt; &lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We'll use this functionality to route all requests to the &lt;code&gt;index.html&lt;/code&gt; file, which lets us deploy a sample React application and serve it through S3. To get the link for our S3 bucket's static website, we can use the &lt;code&gt;get_bucket_website_endpoint&lt;/code&gt; function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="o"&gt;*&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;
  &lt;span class="n"&gt;get_bucket_website_endpoint&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'&amp;lt;bucket-name&amp;gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Build The Project And Sync To S3
&lt;/h2&gt;

&lt;p&gt;Now that everything is set, we just need to build our React app and deploy it to S3. We have already pushed a sample app to this repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://github.com/iasql/sample-react-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But you can use whatever codebase you'd like by changing the URLs so that they point to the GitHub repository hosting your React app.&lt;/p&gt;

&lt;p&gt;Now it's time to create a CodeBuild project. CodeBuild is AWS's managed CI/CD build service, which includes a free monthly tier. The CodeBuild project will do the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pull the codebase from the GitHub repository&lt;/li&gt;
&lt;li&gt;Build the app&lt;/li&gt;
&lt;li&gt;Copy the resulting files (&lt;code&gt;build/*&lt;/code&gt;) to the S3 bucket&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We can do this with the following SQL query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;iasql_begin&lt;/span&gt; &lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="c1"&gt;-- create the needed role for codebuild&lt;/span&gt;
&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt;
  &lt;span class="n"&gt;iam_role&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;role_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;assume_role_policy_document&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;attached_policies_arns&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;VALUES&lt;/span&gt;
  &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s1"&gt;'deploy-static-website-role'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s1"&gt;'{
    "Statement": [
      {
        "Effect": "Allow",
        "Principal": {
          "Service": "codebuild.amazonaws.com"
        },
        "Action": "sts:AssumeRole"
      }
    ],
    "Version": "2012-10-17"
  }'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;ARRAY&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="s1"&gt;'arn:aws:iam::aws:policy/AWSCodeBuildAdminAccess'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="s1"&gt;'arn:aws:iam::aws:policy/CloudWatchLogsFullAccess'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="s1"&gt;'arn:aws:iam::aws:policy/AmazonS3FullAccess'&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- create the codebuild project&lt;/span&gt;
&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt;
  &lt;span class="n"&gt;codebuild_project&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;project_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;build_spec&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;source_type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;privileged_mode&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;service_role_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;region&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;VALUES&lt;/span&gt;
  &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s1"&gt;'deploy-static-website'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s1"&gt;'version: 0.2

phases:
  pre_build:
    commands:
      - git clone https://github.com/iasql/sample-react-app &amp;amp;&amp;amp; cd sample-react-app
  build:
    commands:
      - echo Installing dependencies
      - npm install
      - echo Building the app
      - npm run build
  post_build:
    commands:
      - echo Copying the files to the S3 bucket
      - aws s3 sync build/ s3://&amp;lt;bucket-name&amp;gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s1"&gt;'NO_SOURCE'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;FALSE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s1"&gt;'deploy-static-website-role'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s1"&gt;'us-east-1'&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;iasql_commit&lt;/span&gt; &lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above SQL command first creates the role CodeBuild needs to operate. Then it creates the actual CodeBuild project that clones the repo, builds it, and finally syncs the resulting files to our S3 bucket. We now need to trigger the CodeBuild project to run so the files get uploaded to our bucket.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="o"&gt;*&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;
  &lt;span class="n"&gt;start_build&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'deploy-static-website'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'us-east-1'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will trigger the CodeBuild project to start. After a while, we should see the files appear in our S3 bucket, and we can access our React app through the S3 static website hosting endpoint. As mentioned, we can get the endpoint with the &lt;code&gt;get_bucket_website_endpoint&lt;/code&gt; helper function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;get_bucket_website_endpoint&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'&amp;lt;bucket-name&amp;gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By visiting the link returned by the above function, you can see our sample app has been deployed. The problem is that S3 static website hosting does not support HTTPS, so we need to put a CloudFront distribution in front of it to serve the site over HTTPS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a CloudFront Distribution
&lt;/h2&gt;

&lt;p&gt;Serving a bucket's files to the public straight from S3 isn't a good idea. In our case, that's because S3 static website hosting does not provide an HTTPS endpoint, but more generally it's because the &lt;a href="https://aws.amazon.com/s3/pricing/" rel="noopener noreferrer"&gt;data transfer rates for S3&lt;/a&gt; aren't cheap and can grow out of control. A CDN also speeds up access to the bucket objects, since they are delivered to users from the nearest edge server.&lt;/p&gt;

&lt;p&gt;With the above in mind, let's create a CloudFront distribution for our S3 bucket. But first, we need to install the &lt;code&gt;aws_cloudfront&lt;/code&gt; module.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;iasql_install&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'aws_cloudfront'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then create the distribution:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;iasql_begin&lt;/span&gt; &lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt;
  &lt;span class="n"&gt;distribution&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;caller_reference&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;origins&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;default_cache_behavior&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;VALUES&lt;/span&gt;
  &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s1"&gt;'my-website'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="s1"&gt;'[
  {
    "DomainName": "'&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="k"&gt;SELECT&lt;/span&gt;
          &lt;span class="n"&gt;get_bucket_website_endpoint&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'&amp;lt;bucket-name&amp;gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="s1"&gt;'",
    "Id": "my-website-origin",
    "CustomOriginConfig": {
      "HTTPPort": 80,
      "HTTPSPort": 443,
      "OriginProtocolPolicy": "http-only"
    }
  }
]'&lt;/span&gt;
    &lt;span class="p"&gt;)::&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s1"&gt;'{
  "TargetOriginId": "my-website-origin",
  "ViewerProtocolPolicy": "allow-all",
  "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6"
}'&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;iasql_commit&lt;/span&gt; &lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can access our website through the CloudFront distribution domain name. To get it, we can simply run the following query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;domain_name&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;
  &lt;span class="n"&gt;distribution&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt;
  &lt;span class="n"&gt;caller_reference&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'my-website'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It supports HTTPS, it's fast, it's cheaper than serving directly from S3, and it can easily be connected to a Route 53 domain.&lt;/p&gt;
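&lt;p&gt;When you're done experimenting, the same two-way mapping works in reverse: deleting rows inside an IaSQL transaction tears the corresponding resources down. As a sketch (assuming &lt;code&gt;DELETE&lt;/code&gt; statements propagate to AWS the same way the &lt;code&gt;INSERT&lt;/code&gt; statements above did), removing the distribution would look like this, with &lt;code&gt;iasql_preview&lt;/code&gt; showing the pending changes before they are applied:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;iasql_begin&lt;/span&gt; &lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="k"&gt;DELETE&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt;
  &lt;span class="n"&gt;distribution&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt;
  &lt;span class="n"&gt;caller_reference&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'my-website'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;-- Preview the pending changes, then commit (or rollback)&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="o"&gt;*&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;
  &lt;span class="n"&gt;iasql_preview&lt;/span&gt; &lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;iasql_commit&lt;/span&gt; &lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;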

</description>
      <category>webdev</category>
      <category>beginners</category>
      <category>tutorial</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
