<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Levi Nunnink</title>
    <description>The latest articles on DEV Community by Levi Nunnink (@levinunnink).</description>
    <link>https://dev.to/levinunnink</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F172738%2F5eec4087-1ef4-4627-a14f-9ee57dcafc49.png</url>
      <title>DEV Community: Levi Nunnink</title>
      <link>https://dev.to/levinunnink</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/levinunnink"/>
    <language>en</language>
    <item>
      <title>7 Open Graph Tag Mistakes That Make Your Links Look Broken</title>
      <dc:creator>Levi Nunnink</dc:creator>
      <pubDate>Thu, 26 Feb 2026 16:49:24 +0000</pubDate>
      <link>https://dev.to/levinunnink/7-open-graph-tag-mistakes-that-make-your-links-look-broken-5h2g</link>
      <guid>https://dev.to/levinunnink/7-open-graph-tag-mistakes-that-make-your-links-look-broken-5h2g</guid>
      <description>&lt;p&gt;You write a great blog post. Someone shares it on LinkedIn. The preview shows a broken image, a truncated title, and a description pulled from your cookie banner.&lt;/p&gt;

&lt;p&gt;This happens way more often than it should. And the fix is almost always the same: your Open Graph tags are wrong.&lt;/p&gt;

&lt;p&gt;OG tags are the &lt;code&gt;&amp;lt;meta&amp;gt;&lt;/code&gt; tags in your &lt;code&gt;&amp;lt;head&amp;gt;&lt;/code&gt; that tell platforms like Facebook, LinkedIn, Slack, Discord, and Twitter/X what to show when someone shares a link. Get them wrong and your content looks unprofessional — or invisible.&lt;/p&gt;

&lt;p&gt;Here are the 7 most common mistakes I see, and how to fix each one.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Missing og:image entirely
&lt;/h2&gt;

&lt;p&gt;This is the big one. No &lt;code&gt;og:image&lt;/code&gt; means no preview image. Most platforms will show a blank grey box or a tiny favicon. Your link looks like spam.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;property=&lt;/span&gt;&lt;span class="s"&gt;"og:image"&lt;/span&gt; &lt;span class="na"&gt;content=&lt;/span&gt;&lt;span class="s"&gt;"https://yoursite.com/images/post-preview.jpg"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every page that could be shared should have an &lt;code&gt;og:image&lt;/code&gt;. If you don't want to create unique images for every page, set a default fallback in your template.&lt;/p&gt;
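&lt;p&gt;For example, with a Liquid-style template (the variable names and filter syntax here are illustrative — adapt them to whatever templating engine you use):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&amp;lt;meta property="og:image"
      content="{{ page.og_image | default: 'https://yoursite.com/images/default-og.jpg' }}" /&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;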

&lt;h2&gt;
  
  
  2. Using a relative URL for og:image
&lt;/h2&gt;

&lt;p&gt;This works in your browser:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="c"&gt;&amp;lt;!-- Broken for OG --&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;property=&lt;/span&gt;&lt;span class="s"&gt;"og:image"&lt;/span&gt; &lt;span class="na"&gt;content=&lt;/span&gt;&lt;span class="s"&gt;"/images/preview.jpg"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But Facebook's crawler doesn't know your domain. It needs the full URL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="c"&gt;&amp;lt;!-- Works --&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;property=&lt;/span&gt;&lt;span class="s"&gt;"og:image"&lt;/span&gt; &lt;span class="na"&gt;content=&lt;/span&gt;&lt;span class="s"&gt;"https://yoursite.com/images/preview.jpg"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This applies to &lt;code&gt;og:url&lt;/code&gt; and &lt;code&gt;og:image&lt;/code&gt; — always use absolute URLs starting with &lt;code&gt;https://&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. og:image is the wrong size
&lt;/h2&gt;

&lt;p&gt;You uploaded your 100x100 logo as the OG image. On Facebook it gets stretched into a blurry mess. On LinkedIn it shows as a tiny thumbnail instead of a large card.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The sweet spot: 1200 x 630 pixels.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This works well across every major platform — Facebook, LinkedIn, Twitter/X, Slack, and Discord. Keep the file under 8MB and stick with JPG or PNG.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Missing og:title and og:description
&lt;/h2&gt;

&lt;p&gt;Without these, platforms fall back to your &lt;code&gt;&amp;lt;title&amp;gt;&lt;/code&gt; tag and &lt;code&gt;&amp;lt;meta name="description"&amp;gt;&lt;/code&gt;. Sometimes that's fine. But often your &lt;code&gt;&amp;lt;title&amp;gt;&lt;/code&gt; includes your site name and separators that look weird in a social card:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"How to Deploy Next.js | My Blog — A Blog About Things"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Set explicit OG tags with clean, shareable copy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;property=&lt;/span&gt;&lt;span class="s"&gt;"og:title"&lt;/span&gt; &lt;span class="na"&gt;content=&lt;/span&gt;&lt;span class="s"&gt;"How to Deploy Next.js"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;property=&lt;/span&gt;&lt;span class="s"&gt;"og:description"&lt;/span&gt; &lt;span class="na"&gt;content=&lt;/span&gt;&lt;span class="s"&gt;"A step-by-step guide to deploying your Next.js app to Vercel, AWS, and Cloudflare."&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Keep &lt;code&gt;og:title&lt;/code&gt; under 60 characters and &lt;code&gt;og:description&lt;/code&gt; under 155 characters to avoid truncation.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Not setting twitter:card
&lt;/h2&gt;

&lt;p&gt;Twitter/X has its own meta tag system. If you don't set &lt;code&gt;twitter:card&lt;/code&gt;, you might get a tiny "summary" card instead of the large image card you wanted.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="c"&gt;&amp;lt;!-- Small card with square thumbnail --&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"twitter:card"&lt;/span&gt; &lt;span class="na"&gt;content=&lt;/span&gt;&lt;span class="s"&gt;"summary"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;

&lt;span class="c"&gt;&amp;lt;!-- Large card with full-width image — usually what you want --&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"twitter:card"&lt;/span&gt; &lt;span class="na"&gt;content=&lt;/span&gt;&lt;span class="s"&gt;"summary_large_image"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Twitter/X will fall back to OG tags for the title, description, and image, so you don't need to duplicate everything. But you do need &lt;code&gt;twitter:card&lt;/code&gt; to control the layout.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Serving different content to crawlers
&lt;/h2&gt;

&lt;p&gt;Some setups accidentally serve different HTML to bots than to browsers. This happens with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Client-side rendering&lt;/strong&gt;: If your OG tags are injected by JavaScript, crawlers won't see them. Facebook, Slack, and most platforms don't execute JS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A/B testing tools&lt;/strong&gt;: If your testing script swaps the &lt;code&gt;&amp;lt;title&amp;gt;&lt;/code&gt; or meta tags, crawlers might see the wrong variant.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Caching layers&lt;/strong&gt;: A CDN or cache might serve a stale version of your page to crawlers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Fix&lt;/strong&gt;: Your OG tags must be in the initial HTML response. If you're using React/Next.js, use server-side rendering or static generation for your &lt;code&gt;&amp;lt;head&amp;gt;&lt;/code&gt; tags. In Next.js App Router, export &lt;code&gt;metadata&lt;/code&gt; from your &lt;code&gt;page.tsx&lt;/code&gt; — it's server-rendered by default.&lt;/p&gt;
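&lt;p&gt;A minimal sketch of that App Router approach (the file path and values are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;// app/blog/how-to-deploy-nextjs/page.tsx (illustrative path)
export const metadata = {
  openGraph: {
    title: 'How to Deploy Next.js',
    description: 'A step-by-step guide to deploying your Next.js app.',
    url: 'https://yoursite.com/blog/how-to-deploy-nextjs',
    images: ['https://yoursite.com/images/post-preview.jpg'], // rendered as og:image
  },
  twitter: { card: 'summary_large_image' },
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Because this is evaluated on the server, the resulting &lt;code&gt;&amp;lt;meta&amp;gt;&lt;/code&gt; tags are present in the initial HTML response that crawlers see.&lt;/p&gt;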

&lt;h2&gt;
  
  
  7. Never testing your tags
&lt;/h2&gt;

&lt;p&gt;You set up OG tags once, deployed, and never checked. Then your image CDN changed domains, and every link shared for the last 3 months showed a broken preview.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Always test after deploying.&lt;/strong&gt; You can use the &lt;a href="https://smmall.cloud/tools/open-graph-checker" rel="noopener noreferrer"&gt;Smmall Cloud Open Graph Checker&lt;/a&gt; to see exactly what every platform will display — it shows previews for Google, Facebook, Twitter/X, and Slack side by side, plus warns you about missing or misconfigured tags.&lt;/p&gt;

&lt;p&gt;It's also useful to test periodically. CDN changes, CMS updates, and theme switches can silently break your OG tags.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick checklist
&lt;/h2&gt;

&lt;p&gt;Before you ship any page that might get shared:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] &lt;code&gt;og:title&lt;/code&gt; is set and under 60 characters&lt;/li&gt;
&lt;li&gt;[ ] &lt;code&gt;og:description&lt;/code&gt; is set and under 155 characters&lt;/li&gt;
&lt;li&gt;[ ] &lt;code&gt;og:image&lt;/code&gt; is set with an absolute URL, 1200x630px, under 8MB&lt;/li&gt;
&lt;li&gt;[ ] &lt;code&gt;og:url&lt;/code&gt; is set to the canonical URL&lt;/li&gt;
&lt;li&gt;[ ] &lt;code&gt;twitter:card&lt;/code&gt; is set to &lt;code&gt;summary_large_image&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;[ ] Tags are in the server-rendered HTML, not injected by JS&lt;/li&gt;
&lt;li&gt;[ ] You've tested the URL in a &lt;a href="https://smmall.cloud/tools/open-graph-checker" rel="noopener noreferrer"&gt;preview tool&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Get these right and your links will look polished every time someone shares them. It takes 5 minutes and makes a real difference in click-through rates.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built a free &lt;a href="https://smmall.cloud/tools/open-graph-checker" rel="noopener noreferrer"&gt;Open Graph Checker&lt;/a&gt; that shows previews for Google, Facebook, Twitter/X, and Slack. Check your site's OG tags in seconds.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>html</category>
      <category>seo</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Email files to your own cloud storage</title>
      <dc:creator>Levi Nunnink</dc:creator>
      <pubDate>Sun, 01 Jun 2025 00:40:48 +0000</pubDate>
      <link>https://dev.to/levinunnink/email-to-s3-3n1p</link>
      <guid>https://dev.to/levinunnink/email-to-s3-3n1p</guid>
      <description>&lt;p&gt;This is a submission for the &lt;a href="https://dev.to/challenges/postmark"&gt;Postmark Challenge: Inbox Innovators&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I built a simple service that lets you securely upload email attachments directly to AWS S3 simply by forwarding an email. When your inbox gets crowded and you want to keep your important attachments in cloud storage, you can forward the email to &lt;code&gt;&amp;lt;folder&amp;gt;@email-to-s3.email&lt;/code&gt;. The service processes the attachments using Postmark's Inbound API and stores the files in your own cloud folder on S3.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;Forward any attachment:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl40ov9bbbjc5z3bt8io9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl40ov9bbbjc5z3bt8io9.png" alt="Forward any attachment"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The service uploads it and responds with a link to the file in the cloud:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwoly50vjzn34cz50312q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwoly50vjzn34cz50312q.png" alt="Response message"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It also stores the file under an S3 folder linked to your email address, so you can log in and retrieve your uploads at any time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2l10ue910cplekd7kyf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2l10ue910cplekd7kyf.png" alt="Stored files"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://email-to-s3.email" rel="noopener noreferrer"&gt;Try it yourself&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is just a proof of concept. There are a number of cases where it would be ideal to upload attachments to a cloud folder simply by forwarding an email: customer reports, notifications from internal systems, team collaboration, project management streamlining, etc. This code should be easy to adapt to these and many other use cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Code Repository
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/levinunnink" rel="noopener noreferrer"&gt;
        levinunnink
      &lt;/a&gt; / &lt;a href="https://github.com/levinunnink/email-to-s3" rel="noopener noreferrer"&gt;
        email-to-s3
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      An example of how to upload email attachments to S3 using the Postmark Inbound hooks
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Email to S3&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;Lightweight Express.js service that receives inbound emails via Postmark and uploads attachments to any S3-compatible block storage (AWS S3, Cloudflare R2, DigitalOcean Spaces, etc.) under the pattern &lt;code&gt;&amp;lt;email-address&amp;gt;/&amp;lt;attachment-name&amp;gt;&lt;/code&gt;. No database dependencies.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Features&lt;/h2&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Receives inbound email webhooks from Postmark&lt;/li&gt;
&lt;li&gt;Uploads attachments to S3&lt;/li&gt;
&lt;li&gt;No database dependencies&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Requirements&lt;/h2&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Node.js &amp;gt;= 14&lt;/li&gt;
&lt;li&gt;Credentials for your storage (Access Key and Secret)&lt;/li&gt;
&lt;li&gt;S3 bucket or bucket-equivalent name created in your provider&lt;/li&gt;
&lt;li&gt;Postmark server API token&lt;/li&gt;
&lt;li&gt;Serverless Framework installed (&lt;code&gt;npm install -g serverless&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Installation&lt;/h2&gt;

&lt;/div&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;git clone https://github.com/levinunnink/email-to-s3.git
&lt;span class="pl-c1"&gt;cd&lt;/span&gt; email-to-s3
npm install&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Configuration&lt;/h2&gt;

&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Environment Variables&lt;/h2&gt;

&lt;/div&gt;
&lt;p&gt;Copy &lt;code&gt;.env.example&lt;/code&gt; to &lt;code&gt;.env&lt;/code&gt; and fill in:&lt;/p&gt;
&lt;div class="highlight highlight-source-dotenv notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; .env&lt;/span&gt;
&lt;span class="pl-v"&gt;STORAGE_ENDPOINT&lt;/span&gt;&lt;span class="pl-k"&gt;=&lt;/span&gt;&lt;span class="pl-s"&gt;https://&amp;lt;account&amp;gt;.&amp;lt;region&amp;gt;.r2.cloudflarestorage.com&lt;/span&gt;  &lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; e.g. Cloudflare R2 endpoint&lt;/span&gt;
&lt;span class="pl-v"&gt;STORAGE_REGION&lt;/span&gt;&lt;span class="pl-k"&gt;=&lt;/span&gt;&lt;span class="pl-s"&gt;auto&lt;/span&gt;                             &lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; Some providers may ignore region&lt;/span&gt;
&lt;span class="pl-v"&gt;STORAGE_ACCESS_KEY_ID&lt;/span&gt;&lt;span class="pl-k"&gt;=&lt;/span&gt;&lt;span class="pl-s"&gt;your-access-key-id&lt;/span&gt;
&lt;span class="pl-v"&gt;STORAGE_SECRET_ACCESS_KEY&lt;/span&gt;&lt;span class="pl-k"&gt;=&lt;/span&gt;&lt;span class="pl-s"&gt;your-secret-access-key&lt;/span&gt;
&lt;span class="pl-v"&gt;STORAGE_BUCKET&lt;/span&gt;&lt;span class="pl-k"&gt;=&lt;/span&gt;&lt;span class="pl-s"&gt;your-bucket-name&lt;/span&gt;
&lt;span class="pl-v"&gt;POSTMARK_SERVER_TOKEN&lt;/span&gt;&lt;span class="pl-k"&gt;=&lt;/span&gt;&lt;span class="pl-s"&gt;your-postmark-token&lt;/span&gt;
&lt;span class="pl-v"&gt;PORT&lt;/span&gt;&lt;span class="pl-k"&gt;=&lt;/span&gt;&lt;span class="pl-s"&gt;3000&lt;/span&gt;
&lt;span class="pl-v"&gt;SEND_EMAIL_RESPONSE&lt;/span&gt;&lt;span class="pl-k"&gt;=&lt;/span&gt;&lt;span class="pl-s"&gt;1&lt;/span&gt; &lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; Indicates if the service should send a response email with the&lt;/span&gt;&lt;/pre&gt;…
&lt;/div&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/levinunnink/email-to-s3" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  How I Built It
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Tech Stack&lt;/strong&gt;: Node.js &amp;amp; TypeScript, Express.js, AWS SDK v3, Postmark SDK, Serverless Framework&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Storage Abstraction&lt;/strong&gt;: A generic &lt;code&gt;StorageClient&lt;/code&gt; wraps any S3-compatible endpoint, using &lt;code&gt;forcePathStyle&lt;/code&gt; and custom &lt;code&gt;endpoint&lt;/code&gt; for providers like R2 or Spaces.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Serverless&lt;/strong&gt;: Deployed as an AWS Lambda behind API Gateway, defined in &lt;code&gt;serverless.yml&lt;/code&gt; with least-privilege IAM. This should be trivial to stand up on AWS, Cloudflare Pages, or DigitalOcean Apps.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Postmark Integration&lt;/strong&gt;: Parses the &lt;code&gt;InboundMessage&lt;/code&gt;, decodes Base64 attachments, and streams each file into your bucket. On failure, it returns HTTP 500 so Postmark will retry. Additionally, we use the Postmark Transactional API to send email responses for the uploaded attachments. (This can be disabled with an env variable.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Developer Experience&lt;/strong&gt;: Zero database. You just configure environment variables, deploy, and you’re live. Local dev uses &lt;code&gt;ts-node-dev&lt;/code&gt; for quick iteration.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
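
&lt;p&gt;The attachment-handling step can be sketched like this — a simplified illustration, not the repo's exact code, assuming the standard Postmark inbound payload shape and the AWS SDK v3:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;const { PutObjectCommand } = require('@aws-sdk/client-s3');

// Store each attachment under &amp;lt;email-address&amp;gt;/&amp;lt;attachment-name&amp;gt;.
// `s3` is an S3Client configured with the STORAGE_* env vars.
async function storeAttachments(message, s3, bucket) {
  for (const att of message.Attachments || []) {
    await s3.send(new PutObjectCommand({
      Bucket: bucket,
      Key: `${message.FromFull.Email}/${att.Name}`,
      Body: Buffer.from(att.Content, 'base64'), // Postmark delivers attachment content as Base64
      ContentType: att.ContentType,
    }));
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;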

</description>
      <category>devchallenge</category>
      <category>postmarkchallenge</category>
      <category>webdev</category>
      <category>api</category>
    </item>
    <item>
      <title>Souring on Serverless</title>
      <dc:creator>Levi Nunnink</dc:creator>
      <pubDate>Thu, 07 Dec 2023 16:57:33 +0000</pubDate>
      <link>https://dev.to/levinunnink/souring-on-serverless-5gha</link>
      <guid>https://dev.to/levinunnink/souring-on-serverless-5gha</guid>
      <description>&lt;p&gt;For those who don’t know, Serverless computing is a cloud computing execution model where the cloud provider dynamically manages the allocation of machine resources. In a serverless architecture, developers can build and run applications and services “without having to manage the infrastructure”. &lt;/p&gt;

&lt;p&gt;The big draw for me as someone who has no desire to ever SSH into a server again was the idea of a “resourceless” function that just runs when I need it. Even more magical was the idea of auto-scaling: The ability to handle any load of traffic without ever having to think about it is tantalizing. And I still think it is… in theory.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cold starts are killer when you need a quick response
&lt;/h3&gt;

&lt;p&gt;One of the key things to know when building an application is what needs to be synchronous and what can be asynchronous. HTTP is a synchronous protocol: the server needs to answer immediately when someone makes a request. This should have been my first clue that serving HTTP requests from infrastructure that needs to start up before it can respond wasn’t exactly a recipe for success. &lt;/p&gt;

&lt;p&gt;The end result is a web application that feels sluggish, where the loading states linger longer than feels right. You can add static rendering and CDN caching, but if your API has to start up before responding to a request, it’s gonna feel slow.&lt;/p&gt;

&lt;p&gt;I really noticed this on &lt;a href="https://smmall.cloud"&gt;Smmall Cloud&lt;/a&gt;. That bouncing cloud was just getting too familiar.&lt;/p&gt;

&lt;h3&gt;
  
  
  The “solutions” don’t really work
&lt;/h3&gt;

&lt;p&gt;When a function is triggered in a serverless environment like AWS Lambda, the platform needs to initialize resources (such as setting up the execution environment and loading dependencies) before the function can start executing: a “cold start.” A “warm start” happens when the resources are already initialized and ready to execute, because you’re hitting the function again before it shuts down and returns its resources to the cloud. Warm start vs. cold start might be a sensible technical design for AWS, but it’s a nightmare for debugging performance on a web application. Think about a situation where &lt;em&gt;sometimes&lt;/em&gt; your page loads fast and &lt;em&gt;sometimes&lt;/em&gt; it takes forever.&lt;/p&gt;

&lt;p&gt;Serverless apps can be configured with “pre-provisioned concurrency”, effectively keeping one or two resources warm and reducing cold-start times. But, in my experience, this still has latency associated with it. &lt;a href="https://stackoverflow.com/questions/66174999/provisioned-concurrency-has-minor-impact-on-response-time-of-lambda-function"&gt;And I don’t think I’m the only one&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you step back, the idea of “pre-provisioned concurrency” undermines one of the key selling points of Serverless in the first place: &lt;em&gt;not using resources when you don’t need them&lt;/em&gt;. If you have a service that’s designed to shut down when it’s idle, but you insert a mechanism to make sure it never shuts down, ask yourself: are you actually using the right service?&lt;/p&gt;

&lt;h3&gt;
  
  
  Maybe an always-on API is a good idea after all
&lt;/h3&gt;

&lt;p&gt;Finally, after being tormented by the bouncing cloud one too many times, I decided to give DigitalOcean a spin and see what they had to offer. Within about 30 minutes I had my API deployed and running on their servers. And guess what? It’s faster. And it’s cheap too.&lt;/p&gt;

&lt;p&gt;Thus dies the Serverless dream.&lt;/p&gt;

&lt;p&gt;I’ll still use it for asynchronous processing. (The &lt;a href="https://sheetmonkey.io"&gt;Sheet Monkey&lt;/a&gt; ETL works great on Serverless with SQS.)&lt;/p&gt;

&lt;p&gt;But moving forward, when I’m building HTTP, it’s back to servers for me.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>architecture</category>
      <category>http</category>
      <category>api</category>
    </item>
    <item>
      <title>Sending data from a HTML form to a Google Sheet</title>
      <dc:creator>Levi Nunnink</dc:creator>
      <pubDate>Thu, 03 Sep 2020 15:13:27 +0000</pubDate>
      <link>https://dev.to/levinunnink/sending-data-from-a-html-form-to-a-google-sheet-3m42</link>
      <guid>https://dev.to/levinunnink/sending-data-from-a-html-form-to-a-google-sheet-3m42</guid>
      <description>&lt;p&gt;I've been running into this situation more and more often where I need to gather user data on a website for things like a mailing list, an opt-in form, or a survey, but I don't have a marketing platform to store the data in. They all have different pricing and features and I don't have time to figure out which to use. I just wanted to append submissions from my front-end to a Google Sheet (mapping fields to column headers) and worry about marketing platforms later. But I couldn't find a good service to do that.&lt;/p&gt;

&lt;p&gt;So I decided to build it myself. Couldn't be that hard, right?&lt;/p&gt;

&lt;p&gt;Here's how I did it:&lt;/p&gt;

&lt;h1&gt;
  
  
  Tech Stack
&lt;/h1&gt;

&lt;p&gt;As I've written before, I think the perfect tech stack for your startup is whatever you can use to get the job done fastest. For me that's a variation on the MERN stack with &lt;a href="https://www.serverless.com/" rel="noopener noreferrer"&gt;Serverless&lt;/a&gt; as the hosting framework.&lt;/p&gt;

&lt;p&gt;If you've never built a Serverless app before and are looking for something to get you started, &lt;a href="https://github.com/levinunnink/serverless-boilerplate" rel="noopener noreferrer"&gt;take a look at this boilerplate project I threw together.&lt;/a&gt; It's pretty basic but I use it for a lot of projects to get things started.&lt;/p&gt;

&lt;p&gt;The key considerations as I looked at the project were: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We needed to validate the form input during the HTTP request and throw a user-visible error if it failed. &lt;/li&gt;
&lt;li&gt;If everything looked good, that's when we needed to start talking to Google about updating a sheet. And because this was a third-party API, we needed to interact &lt;em&gt;responsibly&lt;/em&gt; and limit our rates. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I've written about this in another article, but &lt;a href="https://dev.to/levinunnink/building-a-scalable-fault-tolerant-shopify-import-export-pipeline-with-aws-3f4"&gt;SQS FIFO queues are a great way to rate limit interactions with a third-party API.&lt;/a&gt; So any interaction with Google needed to happen in a queue with a worker function. This is a perfect application for Serverless and FIFO.&lt;/p&gt;
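
&lt;p&gt;The enqueue side of that looks roughly like this — a sketch using the AWS SDK v3 SQS client, where the queue URL and message fields are illustrative:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;const { SQSClient, SendMessageCommand } = require('@aws-sdk/client-sqs');

const sqs = new SQSClient({});

// Inside the HTTP handler, after validation passes:
await sqs.send(new SendMessageCommand({
  QueueUrl: process.env.SUBMISSION_QUEUE_URL, // a .fifo queue
  MessageBody: JSON.stringify({ spreadsheetId, data }),
  // FIFO: grouping by sheet serializes writes per spreadsheet,
  // which rate-limits our calls to the Google API.
  MessageGroupId: spreadsheetId,
  // (Or enable content-based deduplication on the queue instead
  // of setting MessageDeduplicationId explicitly.)
}));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;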

&lt;p&gt;Ultimately the basic architecture I had sketched out looked like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7g88qbp5c0m3h4vgzk99.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7g88qbp5c0m3h4vgzk99.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With this framework in place, I needed to get down to the specifics of each bit of logic.&lt;/p&gt;

&lt;h2&gt;
  
  
  Working with the Google Sheets API
&lt;/h2&gt;

&lt;p&gt;My HTTP endpoint would be getting a POST payload like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;

&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;DOB&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;6/20/1997&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Name&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Jane Doe&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Email&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;jane@applemail.com&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I needed to convert that to a sheet like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fcujrp7kb763vey60dumw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fcujrp7kb763vey60dumw.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The only caveat is that I needed to order the data correctly so the values matched the columns in the sheet and then add it to the end of the sheet. Pretty simple.&lt;/p&gt;

&lt;p&gt;Note: All these examples use the &lt;a href="https://developers.google.com/sheets/api/reference/rest/v4" rel="noopener noreferrer"&gt;Google Sheets API, v4.&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets.values/append" rel="noopener noreferrer"&gt;https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets.values/append&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;google&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;googleapis&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ExecuteSheetUpdateCommand&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="cm"&gt;/**
   * @param {string} spreadsheetId ID of the Google Spreadsheet.
   * @param {Object} data Object that contains the data to add to the sheet as key-value pairs.
   * @param {google.auth.OAuth2} auth An Google OAuth client with a valid access token: https://github.com/googleapis/google-api-nodejs-client.
  */&lt;/span&gt;
  &lt;span class="kd"&gt;static&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;exec&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;spreadsheetId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sheets&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;google&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sheets&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;v4&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;auth&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;rows&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Email&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DOB&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
    &lt;span class="c1"&gt;// Add our new data to the bottom of the sheet.&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;sheets&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;spreadsheets&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;values&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="nx"&gt;spreadsheetId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;range&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;A1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;valueInputOption&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;RAW&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;insertDataOption&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;INSERT_ROWS&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;resource&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Voilà! With one simple function we were automatically mapping form data to Google Sheets.&lt;/p&gt;

&lt;p&gt;Now obviously &lt;em&gt;this function isn't great&lt;/em&gt;. It couples the form fields to the sheet structure with this line: &lt;code&gt;const rows = [data.Name, data.Email, data.DOB];&lt;/code&gt;. You really shouldn't do that. (For example, if I moved a column in my spreadsheet, this function would keep inserting data into the old location and my sheet would have incorrect data.) But automatically mapping form fields to the sheet headers is a bit more complicated, so I'm leaving that part out for the sake of this example.&lt;/p&gt;
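&lt;p&gt;For the curious, that header-aware mapping could look something like this. It's only a sketch: it assumes you've fetched the sheet's header row separately (e.g. with &lt;code&gt;spreadsheets.values.get&lt;/code&gt; on range &lt;code&gt;1:1&lt;/code&gt;), and the &lt;code&gt;headers&lt;/code&gt; array below stands in for that response.&lt;/p&gt;

```javascript
// Map a form-data object onto a row whose order matches the sheet's
// header row, so moving a column in the sheet can't silently corrupt
// the data. `headers` is assumed to come from a `spreadsheets.values.get`
// call on the range '1:1' (the first row of the sheet).
function mapDataToRow(headers, data) {
  // Emit an empty cell for any header with no matching form field,
  // so the later columns stay aligned.
  return headers.map((header) => (header in data ? data[header] : ''));
}

const headers = ['Name', 'Email', 'DOB'];
const row = mapDataToRow(headers, { Email: 'ada@example.com', Name: 'Ada' });
// row is ['Ada', 'ada@example.com', ''] regardless of key order in the form data
```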

&lt;h2&gt;
  
  
  Adding a REST endpoint with an SQS worker
&lt;/h2&gt;

&lt;p&gt;Ok, so we have a function that can send a JSON object to a Google Sheet, but how do we do that with an HTML form? The answer is HTTP + SQS.&lt;/p&gt;

&lt;p&gt;The HTTP part is pretty simple if you're familiar with Node and &lt;a href="https://www.npmjs.com/package/express" rel="noopener noreferrer"&gt;Express&lt;/a&gt;. (You could just as easily deploy this on another Node-friendly environment, but I'm going to show you how to do it with Serverless and AWS.) I use the &lt;a href="https://github.com/awslabs/aws-serverless-express" rel="noopener noreferrer"&gt;aws-serverless-express&lt;/a&gt; package to ship my Express apps as Serverless Lambda functions. Combined with the &lt;a href="https://github.com/droplr/serverless-api-cloudfront" rel="noopener noreferrer"&gt;serverless-api-cloudfront&lt;/a&gt; package, it's incredibly easy to spin up a scalable API.&lt;/p&gt;
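&lt;p&gt;For reference, wiring an Express app into a Lambda handler with &lt;code&gt;aws-serverless-express&lt;/code&gt; looks roughly like this. It's a sketch based on that package's documented usage; &lt;code&gt;./app&lt;/code&gt; is a hypothetical module that exports the Express app shown below.&lt;/p&gt;

```javascript
const awsServerlessExpress = require('aws-serverless-express');
// './app' is a hypothetical module that exports the Express app
// defined in this post.
const app = require('./app');

// Create the proxy server once, outside the handler, so warm Lambda
// invocations reuse it.
const server = awsServerlessExpress.createServer(app);

// The Lambda entry point: forward API Gateway events to Express.
exports.handler = (event, context) =>
  awsServerlessExpress.proxy(server, event, context);
```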

&lt;p&gt;Here's an express HTTP endpoint that begins the update to the Google Sheet:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;express&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;bodyParser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;body-parser&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// An AWS SQS client&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sqsClient&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./clients/SQSClient&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;express&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;bodyParser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;urlencoded&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;extended&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;}));&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/form/:spreadsheetId&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;next&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;spreadsheetId&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;params&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// The Google Sheet ID&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// The post body&lt;/span&gt;

  &lt;span class="cm"&gt;/* Note: You should run your own custom validation on the 
     * form before forwarding it on. In this example we just continue.
   *
     * At a minimum, make sure you have permission to update the 
   * sheet, otherwise this will break downstream.
     */&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;passedValidation&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;passedValidation&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Send the data to our SQS queue for further processing&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;sqsClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;createEntry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sendMessage&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="nx"&gt;spreadsheetId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Invalid form data&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Submitted your form&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And then here's the Lambda function that pulls the data off of the throttled SQS FIFO queue and processes it for Google:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;google&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;googleapis&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ExecuteSheetUpdateCommand&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../commands/ExecuteSheetUpdateCommand&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nx"&gt;exports&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;handle&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;callback&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;messages&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="c1"&gt;// This little logic helps us throttle our API interactions&lt;/span&gt;
  &lt;span class="nx"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reduce&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;previousPromise&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;nextMessage&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;previousPromise&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;spreadsheetId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;nextMessage&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;accessToken&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="cm"&gt;/* Load a valid access token for your Google user */&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="c1"&gt;// Construct an oAuth client with your client information that you've securely stored in the environment&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;oAuth2Client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;google&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;OAuth2&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;GOOGLE_CLIENT_ID&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;GOOGLE_CLIENT_SECRET&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;oAuth2Client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setCredentials&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;access_token&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;accessToken&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;ExecuteSheetUpdateCommand&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exec&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;spreadsheetId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;oAuth2Client&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nf"&gt;setTimeout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// Throttle for the Google API&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;

  &lt;span class="nf"&gt;callback&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The reason we use &lt;a href="https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html" rel="noopener noreferrer"&gt;SQS with FIFO&lt;/a&gt;, rather than executing all of this in the HTTP endpoint, is that it lets us respond quickly to the user submitting the form and update the sheet as soon as we can while respecting API limits.&lt;/p&gt;

&lt;p&gt;If we don't think about API limits, we can end up in situations where the user is shown an error screen as soon as they submit a form. Not good. The Google Sheets API has a limit of "100 requests per 100 seconds per user", so roughly 1 request per second is as fast as we can safely interact with it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html" rel="noopener noreferrer"&gt;SQS FIFO queues&lt;/a&gt; allow us to put our sheet updates into a single line, grouped by user id, and we can then throttle those executions using that &lt;code&gt;messages.reduce&lt;/code&gt; snippet above to make sure that we never go over our limit of 1 request / second / user. And we also get the added benefit of allowing AWS to do the hard work of throttling. The key is when your populating the FIFO queue, &lt;em&gt;make sure the &lt;code&gt;MessageGroupId&lt;/code&gt; is set to the Google user id that is making the OAuth request.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Wrapping it up&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;By using a combination of these techniques and functions, you should be in a place where you can write an HTML form like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;

&lt;span class="nt"&gt;&amp;lt;form&lt;/span&gt; &lt;span class="na"&gt;action=&lt;/span&gt;&lt;span class="s"&gt;"https://&amp;lt;my-express-endpoint&amp;gt;/form/&amp;lt;my-sheet-id&amp;gt;"&lt;/span&gt; &lt;span class="na"&gt;method=&lt;/span&gt;&lt;span class="s"&gt;"post"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;input&lt;/span&gt; &lt;span class="na"&gt;type=&lt;/span&gt;&lt;span class="s"&gt;"email"&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"Email"&lt;/span&gt; &lt;span class="na"&gt;placeholder=&lt;/span&gt;&lt;span class="s"&gt;"Enter your email"&lt;/span&gt; &lt;span class="na"&gt;required&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;input&lt;/span&gt; &lt;span class="na"&gt;type=&lt;/span&gt;&lt;span class="s"&gt;"name"&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"Name"&lt;/span&gt; &lt;span class="na"&gt;placeholder=&lt;/span&gt;&lt;span class="s"&gt;"Enter your name"&lt;/span&gt; &lt;span class="na"&gt;required&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;input&lt;/span&gt; &lt;span class="na"&gt;type=&lt;/span&gt;&lt;span class="s"&gt;"submit"&lt;/span&gt; &lt;span class="na"&gt;value=&lt;/span&gt;&lt;span class="s"&gt;"Submit"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/form&amp;gt;&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;and data will magically show up in your Google Sheet every time it's submitted:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fkmtvp74pzza5lxildse7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fkmtvp74pzza5lxildse7.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Sheet Monkey
&lt;/h1&gt;

&lt;p&gt;Ok, that was a lot more work than I thought. That's why I ended up turning this into a &lt;a href="https://sheetmonkey.io" rel="noopener noreferrer"&gt;little indie product&lt;/a&gt;. If you need to send your HTML forms into a Google Sheet and don't want to go through the hassle of building your own solution, check out what I built at Sheet Monkey.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://sheetmonkey.io" rel="noopener noreferrer"&gt;&lt;br&gt;
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7hextqzhhj0gzcrtq6jn.png"&gt;&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;

</description>
      <category>node</category>
      <category>serverless</category>
      <category>frontend</category>
      <category>html</category>
    </item>
    <item>
      <title>Building a scalable, fault-tolerant Shopify import / export pipeline with AWS</title>
      <dc:creator>Levi Nunnink</dc:creator>
      <pubDate>Thu, 23 Jan 2020 04:18:00 +0000</pubDate>
      <link>https://dev.to/levinunnink/building-a-scalable-fault-tolerant-shopify-import-export-pipeline-with-aws-3f4</link>
      <guid>https://dev.to/levinunnink/building-a-scalable-fault-tolerant-shopify-import-export-pipeline-with-aws-3f4</guid>
      <description>&lt;p&gt;Maybe you've had this problem: Some asks you to build a service that can export a bunch of from a system. Recently I was working on a side-project, a &lt;a href="https://developers.shopify.com/"&gt;Shopify integration&lt;/a&gt; that needed to export a bunch of Shopify API metadata from one store and import it into another store. &lt;/p&gt;

&lt;p&gt;It needed to work something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QalrcUdS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/ezbtvvvctknzo7d4k76t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QalrcUdS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/ezbtvvvctknzo7d4k76t.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That seems pretty simple, right? I thought so too. "This should take me a few days," I said. "I can knock it out over the weekend." 🤦‍♂️&lt;/p&gt;

&lt;p&gt;Here's what I quickly discovered:&lt;/p&gt;

&lt;h3&gt;
  
  
  Challenge: Exporting big datasets won't work using only HTTP
&lt;/h3&gt;

&lt;p&gt;HTTP is a synchronous protocol: when a client sends a request, it expects a response, and if it doesn't get one within a certain amount of time it gives up and throws an error. But my export needed to accommodate pages and pages of &lt;a href="https://help.shopify.com/en/manual/products/metafields"&gt;Shopify metadata&lt;/a&gt;, each bit of metadata needing to be built with multiple API calls. There was no way I'd be able to build the export file in time to respond to a single HTTP request.&lt;/p&gt;

&lt;p&gt;Even if I could have put together the raw computing power to do a big export in a few seconds, there was no reason to. Important as they are, exports happen infrequently enough that users are fine waiting for them. HTTP was just the wrong tool for the job.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--E-IVd6UX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/7c0sihw0ppoln0oc10r8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--E-IVd6UX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/7c0sihw0ppoln0oc10r8.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first thing I did was switch to &lt;a href="https://aws.amazon.com/sqs/"&gt;AWS SQS&lt;/a&gt; and wire up a &lt;a href="https://aws.amazon.com/dynamodb/"&gt;DynamoDB&lt;/a&gt; table. So instead of my HTTP request trying to build an entire export, it just started an export and would regularly ask if it was done via polling. So the new system looked like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8pMAg0y4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/v3owyf1w8hhqfsy2ebky.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8pMAg0y4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/v3owyf1w8hhqfsy2ebky.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Much better. This model didn't care how big or small the Shopify data set was. It just issued SQS messages and Lambda workers would assemble the data. DynamoDB kept track of state and would just keep issuing batches of SQS messages until nothing was left to be exported. It would &lt;em&gt;scale&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Or so I thought, but I forgot one thing...&lt;/p&gt;

&lt;h3&gt;
  
  
  Challenge: Shopify's API is rate limited
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BXYk9yw3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/0b7wa97ey8qos5ywaa80.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BXYk9yw3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/0b7wa97ey8qos5ywaa80.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The problem with this lovely scalable, asynchronous system was that it was &lt;em&gt;too&lt;/em&gt; scalable. It performed like gangbusters, so much so that I immediately started hitting the &lt;a href="https://help.shopify.com/en/api/reference/rest-admin-api-rate-limits"&gt;Shopify API rate limit&lt;/a&gt; as soon as I tried this with a decent-sized store.&lt;/p&gt;

&lt;p&gt;The underlying problem was that my process was &lt;em&gt;asynchronous.&lt;/em&gt; It would just issue loads of SQS messages, generating loads of parallel lambda workers, themselves generating loads of API requests. It was the equivalent of a hungry mob swarming a single food truck. The answer in both cases is "form an orderly line! We will serve you in order." In the AWS world this is a &lt;a href="https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html"&gt;FIFO queue&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5szex4O3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/w8t20cawqinf8hyil47s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5szex4O3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/w8t20cawqinf8hyil47s.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;FIFO (first in, first out) queues are &lt;em&gt;special&lt;/em&gt; SQS queues that make sure your messages form an orderly line and nobody cuts. For me this meant that I could post as many SQS messages as quickly as I wanted, but my workers would only trigger &lt;em&gt;in sequence&lt;/em&gt;, never all at once. The best part is that I didn't have to significantly refactor any of my code. I only needed to change the queue type to FIFO, add a few extra parameters to my message, and AWS did the rest of the work. No more rate limit failures.&lt;/p&gt;
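&lt;p&gt;For the record, the "few extra parameters" are &lt;code&gt;MessageGroupId&lt;/code&gt;, which serializes delivery within a group, and &lt;code&gt;MessageDeduplicationId&lt;/code&gt;, which is required unless content-based deduplication is enabled on the queue. Here's a sketch of the change, with hypothetical names:&lt;/p&gt;

```javascript
// A standard-queue message only needs QueueUrl and MessageBody.
// Converting to FIFO means adding two fields to the same params:
function toFifoParams(params, exportId, step) {
  return {
    ...params,
    // Messages for one export share a group, so its workers run in
    // sequence instead of all at once.
    MessageGroupId: exportId,
    // Uniquely identifies this message within SQS's 5-minute
    // deduplication window, so retries don't enqueue duplicates.
    MessageDeduplicationId: `${exportId}-${step}`,
  };
}

const fifo = toFifoParams(
  { QueueUrl: 'https://sqs.us-east-1.amazonaws.com/123456789/exports.fifo', MessageBody: '{"page":3}' }, // hypothetical
  'export-abc',
  3
);
// fifo now carries MessageGroupId 'export-abc' and
// MessageDeduplicationId 'export-abc-3'
```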

&lt;h3&gt;
  
  
  Challenge: Exports should expire
&lt;/h3&gt;

&lt;p&gt;The last part of this is that I wanted exports to expire 24 hours after they were ready. But the more I thought about this little feature, the more complicated it seemed. Should I build a scheduled cron function to query the DB for old records and delete them? That seemed like a lot of work and potentially brittle. That's when &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html"&gt;DynamoDB TTL&lt;/a&gt; came to the rescue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fa8G7VYV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/feshqsretsqcqj7sl0jv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fa8G7VYV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/feshqsretsqcqj7sl0jv.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;TTL (time to live) is a lovely little feature of DynamoDB that lets you set an expiration date for a record. Just add an attribute containing the epoch timestamp after which the record should expire. In this case, instead of writing a bunch of logic to do this myself, I let DynamoDB handle the complexity. What could have been hours of development turned into minutes.&lt;/p&gt;
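&lt;p&gt;In practice that means writing an epoch timestamp (in seconds) into whatever attribute the table's TTL setting points at. A sketch, assuming a TTL attribute named &lt;code&gt;expiresAt&lt;/code&gt; and a partition key named &lt;code&gt;exportId&lt;/code&gt;:&lt;/p&gt;

```javascript
// Build an export record whose `expiresAt` attribute tells DynamoDB
// when it may delete the item: the epoch time, in seconds, 24 hours
// from now. (TTL deletion happens some time after this moment passes.)
function exportRecord(exportId, s3Url, now = Date.now()) {
  const DAY_IN_SECONDS = 24 * 60 * 60;
  return {
    exportId, // partition key (assumed)
    s3Url,    // where the finished export file lives
    expiresAt: Math.floor(now / 1000) + DAY_IN_SECONDS,
  };
}

const record = exportRecord('export-abc', 's3://my-bucket/export-abc.json');
// record.expiresAt is roughly 86400 seconds in the future
```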

&lt;h3&gt;
  
  
  Conclusion: Read the AWS docs
&lt;/h3&gt;

&lt;p&gt;AWS is an incredibly broad ecosystem with a lot of features that are easy to miss. Frankly, while the docs are thorough, they don't exactly hold your hand when you're trying to figure out how something works. But if you spend the time to do the research up front, you'll save yourself hours and potentially days of dev work.&lt;/p&gt;

&lt;p&gt;In this situation, two simple AWS features, FIFO and TTL, shaved days off of a schedule that had already grown more than I wanted.&lt;/p&gt;

&lt;p&gt;===&lt;/p&gt;

&lt;p&gt;🤓 Hey, if you want to follow along with me as I build a static web host for designers, &lt;a href="https://wunderbucket.io"&gt;sign up for the Wunderbucket beta&lt;/a&gt;. Let me know that you're from the dev.to community and I'll be sure to put you to the front of the line.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://wunderbucket.io"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ALRYum1H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://wunderbucket-home.wunderbucket.io/preview.2eb90704.png" alt="Wunderbucket: Static hosting for creative people"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>dynamodb</category>
      <category>architecture</category>
    </item>
    <item>
      <title>A dev's guide to interviewing another developer</title>
      <dc:creator>Levi Nunnink</dc:creator>
      <pubDate>Tue, 17 Sep 2019 23:50:24 +0000</pubDate>
      <link>https://dev.to/levinunnink/a-dev-s-guide-to-interviewing-another-developer-465c</link>
      <guid>https://dev.to/levinunnink/a-dev-s-guide-to-interviewing-another-developer-465c</guid>
      <description>&lt;p&gt;If you're a good developer, at some point someone (likely your boss or co-founder) is going to want to create more of you. And it'll probably be your job to do that. Here's the problem, just because you're an amazing programmer, doesn't mean you have the faintest idea of how to find more amazing programmers. Hiring (not scaling, not architecture, not fundraising) was the hardest skill that I had to learn as a technical leader. And some of my past employees would doubtless agree that I was &lt;em&gt;Not Great At It&lt;/em&gt;. My guess is that it’s going to be your biggest challenge too.&lt;/p&gt;

&lt;p&gt;If you make a good hire it can change your life; a bad one can sink it.  Here’s what I’ve learned over ten years of interviewing:&lt;/p&gt;

&lt;h2&gt;
  
  
  The interview is your best chance to avoid management
&lt;/h2&gt;

&lt;p&gt;The interview is your first line of defense. If you’re someone like me, the interview can be a very mysterious process. You’ve heard legends about Round Manhole Covers and Whiteboard Binary Trees but you’re not sure how that translates into a successful employee. Here’s how you should think about the interview: &lt;em&gt;It’s your best chance to avoid management.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Good employees will make you feel like you have superpowers. Bad hires will make you feel like you’re dragging a tire. They’ll make you read books about management, employment law, performance reviews, and conflict resolution. They’ll tear you away from what you actually need to work on and make you focus on them.&lt;/p&gt;

&lt;p&gt;So when you conduct interviews, ask yourself: will this person work for me, or will I end up working for them? Will they make you great or will they make you a manager? Let me be clear: &lt;em&gt;you don’t have time to be a manager&lt;/em&gt;. Make sure you aren’t hiring a one-way ticket to just that.&lt;/p&gt;

&lt;h2&gt;
  
  
  Good first impressions are probably wrong
&lt;/h2&gt;

&lt;p&gt;Let’s mentally picture your first interview: &lt;/p&gt;

&lt;p&gt;&lt;em&gt;You’ve put out a job posting, you’ve gathered resumes and picked the ones that look promising. Now you’re in a coffee shop, clutching a notepad, waiting for your subject. You’ve been a little nervous about this interview. You’re an introvert yourself, which is probably why you chose a career that would let you hide behind a computer. Meeting strangers is definitely not something that makes you comfortable--much less meeting strangers to judge their merits. You’ve seen your interviewee on LinkedIn and exchanged some emails but what if they’re a weirdo? What if the conversation has a lot of awkward pauses? “Let’s have a conversation.” you told them. “Conversation”. That’s what you called it. You were too milquetoast to even call it an “Interview”. It felt too formal but what else is this? Every person who walks through the door you mentally compare to the LinkedIn avatar. Then with a little nervous twitch in your stomach, there they are. “Hi.” You extend your hand for a handshake, trying for a firm-but-not-aggressive pressure.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Some time later you’re shaking hands goodbye and this is much more comfortable. You sit back down in your chair and watch them leave the coffee shop and you find that you’re grinning. It ended up being a very pleasant forty minutes. Your interviewee was definitely not a weirdo. They were charming, impeccably dressed, oozing positivity and confidence. You shared some laughs and realized that you both had similar tastes in TV shows and politics. They seemed genuinely interested in everything you had to say. It really did feel like a natural conversation between two peers. They dropped a lot of phrases like “test-driven development” and “functional programming”. Words appear in your mind: “Great Teammate”. You picture them on the “About” page of your website. It seems like they belong there.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Pardon the vivid detail but this has been me during a number of interviews, which led to less-than-stellar hiring decisions. Frankly, it’s the easiest thing in the world to project positive vibes in an interview. Enjoy them while they last because those good vibes vanish quickly if the person is a bad hire. You’ll very quickly be wondering what happened to that charming person sitting across from you in the coffee shop and who is this vampire draining your lifeblood?&lt;/p&gt;

&lt;p&gt;If you’re making your first hires, put all your good feelings to the side. Feelings are cheap and they aren’t a good indicator of whether the person is actually competent.&lt;/p&gt;

&lt;p&gt;But I do have one caveat, &lt;strong&gt;listen to your bad feelings&lt;/strong&gt;: If something feels off during your interview, if you had a hard time communicating, if you’re happy when it’s over--pay attention to that. It’s not going to get better. If you have a hard time communicating in an interview, it’s not going to get easier when you’re trying to explain the strategy. If they speak in a condescending tone or become combative, imagine how fun that’s going to be when you’re up late trying to ship a new feature.&lt;/p&gt;

&lt;p&gt;So remember: good first impressions are worthless. Bad first impressions are much more significant.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conduct a live coding test
&lt;/h2&gt;

&lt;p&gt;Maybe the idea of a live coding test feels a bit heavy-handed, a little awkward? Shove those feelings away and embrace your inner jerk. A live coding test is essential to making a good hiring decision.&lt;/p&gt;

&lt;p&gt;My recommendation is to start your interview with the ice-breaking, conversational stuff to put you both at ease, but quickly move into the coding test. What I really want to see when I conduct a coding test is what sort of programmer I’m sitting next to. Here are my suggestions for conducting a coding test that yields real insights:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Inform them ahead of time that you’re going to do a live coding exercise and give them an idea of what it will involve (front-end, back-end, algorithmic). Let them use their own laptop and editor.&lt;/li&gt;
&lt;li&gt;Test with real problems. If you aren’t doing a lot of algorithmic programming, don’t have them whiteboard an algorithm. The goal isn’t to have the programmer who can quickly recall their computer science coursework. It’s to find someone who can help you be Great At One Thing.&lt;/li&gt;
&lt;li&gt;Allow them to use the internet for reference. Honestly, I can’t imagine why you wouldn’t want to do this. Do you intend to force your team to code from memory? Then why act like this is important in the interview?&lt;/li&gt;
&lt;li&gt;Pick three problems for them to solve. Here’s a real set of tests from a session of interviews we conducted while hiring JavaScript developers: &lt;a href="https://codepen.io/collection/nkwJkb/"&gt;https://codepen.io/collection/nkwJkb/&lt;/a&gt; They range from simple to difficult. I wanted the first test to be something that even a decent junior JavaScript developer would be able to solve. (You’d be shocked at how many got stuck on problem one.) After each test was complete, we’d review the results and talk about their decisions together. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;An example task from an interview session:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="cm"&gt;/* 

1. Load all of the URLs and detect their HTTP response code.
2. When they are all done report back to the interface how many of the functions succeeded and how many responded with status code that wasn't 200.

*/&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;urlsToLoad&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;http://codsepen.io/jobs&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/about&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/spark&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;http://codsepen.io//nothing-here-to-load&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
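
&lt;p&gt;For reference, here's one shape a solution might take. This is a sketch, not the answer key: the &lt;code&gt;load&lt;/code&gt; parameter is injected so the counting logic can be exercised without a browser, and &lt;code&gt;reportResults&lt;/code&gt; is a hypothetical stand-in for "report back to the interface".&lt;/p&gt;

```javascript
// One possible solution sketch. The URL loader is passed in so the
// logic is testable; in a browser you'd pass the real fetch.
const urlsToLoad = ["http://codepen.io/jobs", "/about", "/spark", "http://codepen.io/nothing-here-to-load"];

async function checkUrls(urls, load) {
  // Load every URL in parallel; a network error counts as a failure too.
  const results = await Promise.all(
    urls.map(url =>
      load(url)
        .then(res => res.status === 200)
        .catch(() => false)
    )
  );
  const succeeded = results.filter(Boolean).length;
  return { succeeded, failed: results.length - succeeded };
}

// In a browser, something like:
// checkUrls(urlsToLoad, url => fetch(url)).then(({ succeeded, failed }) =>
//   reportResults(succeeded, failed)); // reportResults is hypothetical
```

&lt;p&gt;Candidates who reached for parallel loading plus a single aggregation step generally did better than those who tried to chain the requests one after another.&lt;/p&gt;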



&lt;p&gt;These are just suggestions. Maybe you don’t like them, maybe you do. But no matter what, just make sure that a live coding test is part of your interview. Watching how a developer solves a single problem is invaluable. You will save yourself from terrible hiring nightmares the second you start this practice.&lt;/p&gt;

&lt;h2&gt;
  
  
  Date before you marry
&lt;/h2&gt;

&lt;p&gt;In highly competitive environments where engineering talent is scarce, this might be hard to pull off, but I recommend starting your employment relationship with a contract project. Pick a short project (six weeks, maximum) and see how they perform. Pay them a good rate. The goal isn’t to get cheap labor; it’s to make sure that they’re a good long-term decision.&lt;/p&gt;

&lt;p&gt;After the contract is over, assess their work very carefully. Programmers are not static creatures; they will improve over time and add new skills, but don’t expect miracles if the project didn’t go well. If you had to jump in and get them unstuck at any point, that should be a deal-breaker. Difficulty communicating requirements should also be a serious red flag. But in the best-case scenario you should be able to proceed to full-time employment with high confidence that you’re making a great decision.&lt;/p&gt;




&lt;p&gt;Interviewing and hiring are an art, not a science, which may irk us technical founders, but there are principles and practices that will help you make good decisions and keep you from disaster. Invest time and energy in your hiring process. Don’t wing it. It’s worth the extra upfront work to find those ideal engineers who will help you be better than you could be on your own.&lt;/p&gt;




&lt;p&gt;My name's Levi Nunnink. I've previously co-founded two tech startups and I'm working on a third. If you want to follow along with me as I build a static web host for creative people, &lt;a href="https://wunderbucket.io"&gt;sign up for the Wünderbucket beta&lt;/a&gt;. Let me know that you're from the dev.to community and I'll be sure to put you to the front of the line.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://wunderbucket.io"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ALRYum1H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://wunderbucket-home.wunderbucket.io/preview.2eb90704.png" alt="Wunderbucket: Static hosting for creative people"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Photo by: &lt;a href="https://unsplash.com/@charlesdeluvio?utm_medium=referral&amp;amp;utm_campaign=photographer-credit&amp;amp;utm_content=creditBadge" rel="noopener noreferrer" title="Download free do whatever you want high-resolution photos from Charles 🇵🇭"&gt;Charles 🇵🇭&lt;/a&gt;&lt;/p&gt;

</description>
      <category>career</category>
      <category>startup</category>
      <category>management</category>
    </item>
    <item>
      <title>I ran a study to see who has fastest page load times: static, hosted, or dynamic sites. Here's what I learned.</title>
      <dc:creator>Levi Nunnink</dc:creator>
      <pubDate>Mon, 26 Aug 2019 23:22:15 +0000</pubDate>
      <link>https://dev.to/levinunnink/between-static-dynamic-and-hosted-websites-who-has-fastest-page-load-times-the-results-are-unexpected-54ap</link>
      <guid>https://dev.to/levinunnink/between-static-dynamic-and-hosted-websites-who-has-fastest-page-load-times-the-results-are-unexpected-54ap</guid>
      <description>&lt;p&gt;There's basically three different ways to deliver a website these days.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Hosted:&lt;/strong&gt; Services like Squarespace, Weebly, Wix and countless more fall under the "Hosted" banner. The appeal is that you can build a website without having to know how to code. The disadvantage is that these sites come with limited customizability, and if a feature doesn't exist, there's usually no way to add it in. (Just try figuring out how to auto-upload to Squarespace, for example.) Because these services all rely on proprietary technology, it's hard to know how they operate under the hood. They're easy to use but they're the website equivalent of living in manufactured housing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dynamic:&lt;/strong&gt; These are websites powered by a Content Management System, using an interpreted language like PHP and storing content in a database like MySQL. WordPress is the unquestionable monster in this group, with &lt;a href="https://www.whoishostingthis.com/compare/wordpress/stats/" rel="noopener noreferrer"&gt;an estimated 75 million websites or &lt;em&gt;27% of the internet&lt;/em&gt; being powered by WordPress&lt;/a&gt;! In 2019, this is definitely the most popular way to deploy a website.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Static:&lt;/strong&gt; While this sort of website is as old as the internet, deploying 100% static sites is seeing a resurgence in popularity. Static sites come with some big architectural advantages in security over dynamic sites and they are infinitely customizable as opposed to hosted sites. And theoretically, they should be a lot faster than any other option. But is that the case...?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'm building a &lt;a href="https://wunderbucket.io" rel="noopener noreferrer"&gt;static hosting service for designers&lt;/a&gt; and I wanted it to load pages fast; faster than WordPress and faster than Squarespace. But I wanted to see if that was a realistic expectation, so I decided to run some tests on these different methods.&lt;/p&gt;

&lt;p&gt;With some Googling and searching around on &lt;a href="https://censys.io/" rel="noopener noreferrer"&gt;Censys&lt;/a&gt; I was able to put together a sample group of sites, segmented by service. Then it was just a matter of running them through &lt;a href="https://tools.pingdom.com" rel="noopener noreferrer"&gt;Pingdom's Speed Test&lt;/a&gt; service one by one and compiling the results. Here's what I found.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hosted Sites
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://infogram.com/bar-chart-1h7v4pq1m05d6k0?live" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.nunn.ink%2FiH3d5y.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I'll be honest, I expected these to perform the worst. Hosted, templated sites always feel slow to me. But the results weren't bad. The average page load for the sites I surveyed was 1,467 milliseconds. But where the hosted services really shone was in &lt;em&gt;consistency&lt;/em&gt;. When you run a standard deviation calculation on the page load times, the hosted sites only showed an average deviation of 485 milliseconds. This held across all sorts of different content: image heavy, text heavy, etc.&lt;/p&gt;
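
&lt;p&gt;If you want to reproduce the consistency numbers, the calculation is just a mean and a population standard deviation over each service's page load times. A minimal sketch, with made-up sample values:&lt;/p&gt;

```javascript
// Mean and (population) standard deviation of a list of page-load
// times in milliseconds -- the calculation behind the consistency
// numbers. The sample values below are invented for illustration.
function mean(values) {
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

function standardDeviation(values) {
  const avg = mean(values);
  const squaredDiffs = values.map(v => (v - avg) ** 2);
  return Math.sqrt(mean(squaredDiffs));
}

const loadTimes = [1100, 1450, 1900, 1420]; // ms, hypothetical samples
console.log(Math.round(mean(loadTimes)));              // average load time
console.log(Math.round(standardDeviation(loadTimes))); // consistency
```

&lt;p&gt;A low standard deviation means the service delivers roughly the same experience regardless of what's on the page, which is exactly what the hosted platforms showed.&lt;/p&gt;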

&lt;p&gt;&lt;a href="https://infogram.com/bar-chart-1h7v4pq1m05d6k0?live" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.nunn.ink%2FbbPQiQ.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The service with the lowest average page load time was &lt;a href="https://squarespace.com" rel="noopener noreferrer"&gt;Squarespace&lt;/a&gt; and the service with the most consistent load times (lowest standard deviation) was &lt;a href="https://wix.com" rel="noopener noreferrer"&gt;Wix&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;👉 Conclusion:&lt;/strong&gt; Hosted services are decent on page load times and they are the most consistent. I.e., due to their constraints, they're the hardest to &lt;em&gt;screw up royally&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://infogram.com/bar-chart-1h7v4pq1m05d6k0?live" rel="noopener noreferrer"&gt;View Hosted Sites stats on Infogram&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Dynamic Sites
&lt;/h2&gt;

&lt;p&gt;I've built a ton of WordPress sites in the past, a smaller number of TextPattern sites, and I think I did one ExpressionEngine site about ten years ago. I include TextPattern and ExpressionEngine here because architecturally they're similar to WordPress and I wanted more data as a baseline. I was expecting this group of sites to load the slowest, and this time I was right.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://infogram.com/dynamic-sites-1h1749jyd5ed6zj?live" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.nunn.ink%2F0hGiao.png" alt="https://files.nunn.ink/0hGiao"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Dynamic sites took an average of 2,343 milliseconds to load. WordPress was the slowest, taking an average of 3,081 milliseconds. And standard deviation was much higher for dynamic sites too:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://infogram.com/dynamic-sites-1h1749jyd5ed6zj?live" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.nunn.ink%2Fwthmir.png" alt="https://files.nunn.ink/wthmir"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;WordPress had a standard deviation of 1,780 milliseconds, the highest out of any service that I surveyed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;👉 Conclusion:&lt;/strong&gt; Despite their popularity, dynamic sites are the slowest and the most dependent on individual configuration (i.e. hosting provider, plugins, caching, etc.). This is partly due to their complex architecture: language interpretation and database lookups happen on every single page load. Once you mix in a large number of unmoderated plugins of varying quality, it becomes hard to keep sites from slowing to a crawl. If speed is a major consideration for your site, look elsewhere or &lt;em&gt;proceed with caution&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://infogram.com/dynamic-sites-1h1749jyd5ed6zj?live" rel="noopener noreferrer"&gt;View Dynamic Sites stats on Infogram&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Static Sites
&lt;/h2&gt;

&lt;p&gt;Here's the part that I was excited about. Compared to dynamic sites, static sites are things of beauty. All of the work happens locally or on a build machine, so by the time it gets to a server, the only thing the server has to do is serve up an HTML page--no database lookups, no language interpretation, no template compilation. Served over a CDN, this should be &lt;em&gt;blazing fast.&lt;/em&gt; And when I tested it, it was. Sort of...&lt;/p&gt;

&lt;p&gt;(Because static sites don't come with much of a fingerprint to show who they're hosted with, it was tricky to even find sample sites to survey. I used Netlify's site of the week and then some searching on Censys to find samples for these three services.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://infogram.com/static-sites-1h7v4pq1edyq6k0?live" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.nunn.ink%2FFphEMG.png" alt="https://files.nunn.ink/FphEMG/0hGiao"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://surge.sh/" rel="noopener noreferrer"&gt;Surge.sh&lt;/a&gt; and &lt;a href="https://zeit.co" rel="noopener noreferrer"&gt;Zeit Now&lt;/a&gt; ended up in a virtual tie for the fastest out of all the static services with Surge narrowly beating Zeit. &lt;a href="https://netlify.com" rel="noopener noreferrer"&gt;Netlify&lt;/a&gt; came in a little slower on page load times then I expected (even getting beat by Squarespace). The standard deviation results were interesting:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://infogram.com/static-sites-1h7v4pq1edyq6k0?live" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.nunn.ink%2FGPIvP5.png" alt="https://files.nunn.ink/GPIvP5"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The static sites beat dynamic again but &lt;em&gt;lost out to the hosted options by a significant margin&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;👉 Conclusion:&lt;/strong&gt; Static sites are the fastest way to serve a site &lt;em&gt;but it's not a magic bullet&lt;/em&gt;. You can lose that advantage if you don't build your HTML to take advantage of static hosting. If speed is your main consideration, make sure that you follow other best practices for optimizing page load in addition to serving a static site.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://infogram.com/static-sites-1h7v4pq1edyq6k0?live" rel="noopener noreferrer"&gt;View Static Sites stats on Infogram&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Results
&lt;/h2&gt;

&lt;p&gt;And the winners are...&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;🏆 Fastest Website Load Time:&lt;/strong&gt; &lt;a href="https://surge.sh" rel="noopener noreferrer"&gt;Surge.sh&lt;/a&gt;. Runner up: &lt;a href="https://zeit.co" rel="noopener noreferrer"&gt;Zeit&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;🏆 Most Consistent Website Load Time:&lt;/strong&gt; &lt;a href="https://wix.com" rel="noopener noreferrer"&gt;Wix&lt;/a&gt;. Runner up: &lt;a href="https://squarespace.com" rel="noopener noreferrer"&gt;Squarespace&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And the losers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;😖 Slowest Website Load Time:&lt;/strong&gt; &lt;a href="https://wordpress.com" rel="noopener noreferrer"&gt;WordPress&lt;/a&gt;. Runner up: &lt;a href="https://textpattern.com" rel="noopener noreferrer"&gt;TextPattern&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;😖 Most Inconsistent Website Load Time:&lt;/strong&gt; &lt;a href="https://wordpress.com" rel="noopener noreferrer"&gt;WordPress&lt;/a&gt;. Runner up: &lt;a href="https://zeit.co" rel="noopener noreferrer"&gt;Zeit&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.nunn.ink%2FBU5QWc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.nunn.ink%2FBU5QWc.png" alt="https://files.nunn.ink/BU5QWc"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://infogram.com/all-sites-1hnq41jdl5zd43z?live" rel="noopener noreferrer"&gt;View all stats on Infogram&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final thoughts:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you love websites you should love and hate WordPress. I love WordPress because it's made building websites achievable for millions of users. I love WordPress because it treats its users with respect and doesn't trick them into using a product only to steal their data. In many ways it's a wonderful product. But I hate WordPress because it's the wrong architecture for so many of the sites it runs and, because of this, these sites run unbearably slow. I would venture that millions of WordPress sites (maybe even the majority of WordPress sites), should be rebuilt as static sites.&lt;/p&gt;

&lt;p&gt;That's why I'm so excited about services like Netlify, Zeit, and Surge. That's why I'm building my own static hosting service. I want more people to have websites but I want them to be &lt;em&gt;fast websites&lt;/em&gt;. The future doesn't belong to the template factories like Squarespace &amp;amp; co, and it shouldn't. Things are looking very bright for static sites but &lt;em&gt;we need to make them more accessible to the developers who are currently using WordPress to build their sites.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;===&lt;/p&gt;

&lt;p&gt;🤓 If you want to follow along with me as I build a static web host for creative people, &lt;a href="https://wunderbucket.io" rel="noopener noreferrer"&gt;sign up for the Wünderbucket beta&lt;/a&gt;. Let me know that you're from the dev.to community and I'll be sure to put you to the front of the line.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://wunderbucket.io" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwunderbucket-home.wunderbucket.io%2Fpreview.2eb90704.png" alt="Wunderbucket: Static hosting for creative people"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disclaimer:&lt;/strong&gt; Do not substitute one man's results for a legitimate scientific study. There are lots of factors that can affect page load time other than what method / framework you're using. Any framework can be made to load fast and the best framework can load slowly.&lt;/p&gt;

</description>
      <category>hosting</category>
      <category>html</category>
      <category>wordpress</category>
    </item>
    <item>
      <title>Building an infinitely scalable cloud host for less than $5/mo</title>
      <dc:creator>Levi Nunnink</dc:creator>
      <pubDate>Wed, 07 Aug 2019 19:57:04 +0000</pubDate>
      <link>https://dev.to/levinunnink/building-an-infinitely-scalable-cloud-host-for-less-than-5-mo-1nkl</link>
      <guid>https://dev.to/levinunnink/building-an-infinitely-scalable-cloud-host-for-less-than-5-mo-1nkl</guid>
      <description>&lt;p&gt;So I’ve got this idea called Wünderbucket. It’s &lt;a href="https://wunderbucket.io" rel="noopener noreferrer"&gt;simple static web hosting for designer / developers who can only write HTML &amp;amp; CSS&lt;/a&gt;. I really want to build an app that’s sustainable. And for me sustainable means &lt;em&gt;cheap&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;I need a system that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Costs nothing if nobody is using it.&lt;/strong&gt; If my idea sucks or it takes me longer to find users than I think, I don’t want to be paying for resources that I’m not using.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scales up automatically to meet demand.&lt;/strong&gt; If my idea connects and people start registering, I don’t want to hit bottlenecks. My system should just &lt;em&gt;scale&lt;/em&gt; to meet demand.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scales up affordably to meet demand.&lt;/strong&gt; This is critical. As the system scales, it needs to pay for itself. I can’t fund an expensive hosting bill for a hobby.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Seeing as Wünderbucket is cloud hosting, this could get expensive quick.&lt;/p&gt;

&lt;h2&gt;
  
  
  Paper napkin business model
&lt;/h2&gt;

&lt;p&gt;My model is freemium with Pro plans at $5/mo &amp;amp; $60/year. I’m assuming that we’re going to have 99% free users and 1% pro users (pretty standard SaaS conversion metrics). That means &lt;strong&gt;$5/mo has to pay for 100 users worth of cloud hosting costs and be profitable&lt;/strong&gt;. That’s not a lot of margin. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If cloud hosting costs are $2.50 per 100 users, I’ll need to get to 500k users before I could even support a full-time employee. That seems like a lot of users to support without any full-time employees. &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.nunn.ink%2FGoXqps.png"&gt;
&lt;/li&gt;
&lt;li&gt;But what if hosting costs are only $0.05 per 100 users? Seems aggressive but it means I could get to sustainability around 200k users.
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.nunn.ink%2FWlCDl5.png"&gt; Still a lot of users but it might be more doable.&lt;/li&gt;
&lt;/ul&gt;
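
&lt;p&gt;The bullet math above is easy to sanity-check in a few lines. The $5/mo price and the per-100-user hosting costs are the assumptions from the bullets, not real billing data:&lt;/p&gt;

```javascript
// Break-even sketch for the freemium model: every 100 users is 99 free
// + 1 pro, so each $5/mo pro subscription has to cover the hosting
// cost of its whole block of 100 users.
function monthlyProfit(totalUsers, hostingCostPer100) {
  const blocks = totalUsers / 100;         // blocks of 99 free + 1 pro
  const revenue = blocks * 5;              // $5/mo per pro user
  const hostingBill = blocks * hostingCostPer100;
  return revenue - hostingBill;
}

console.log(monthlyProfit(500000, 2.50)); // 12500 -- roughly one salary
console.log(monthlyProfit(200000, 0.05)); // 9900
```

&lt;p&gt;Same conclusion as the charts: cutting hosting from $2.50 to $0.05 per 100 users moves the sustainability line from roughly 500k users down to roughly 200k.&lt;/p&gt;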

&lt;p&gt;The dynamics of this business are pretty simple: The lower I can keep our cloud hosting bill per 100 users, the quicker this becomes a self-sustaining enterprise. Here’s how I’m going to design this system so it works for me:&lt;/p&gt;

&lt;h2&gt;
  
  
  What Cloud to use?
&lt;/h2&gt;

&lt;p&gt;There are really only three options worth considering: &lt;a href="https://aws.amazon.com/" rel="noopener noreferrer"&gt;AWS&lt;/a&gt;, &lt;a href="http://azure.com" rel="noopener noreferrer"&gt;Microsoft Azure&lt;/a&gt;, and &lt;a href="https://cloud.google.com" rel="noopener noreferrer"&gt;Google Cloud&lt;/a&gt;. They’re all fighting for each other’s business and they’ll give you secret discounts, so it’s pretty tough to choose. &lt;/p&gt;

&lt;p&gt;I’m going with AWS because I’m really familiar with their stack. I’ve used Azure and Google in the past and they would do fine too with the features I’m looking for. This is one of those situations where I need to lean on my past experience and move fast so even if those hosts offer some discounts up front, the learning curve isn’t worth it to me. &lt;/p&gt;

&lt;p&gt;Bottom line, there’s not any significant across-the-board pricing advantage between the major clouds. They’re all racing to the bottom with their pricing. Just pick the one that you’re the most comfortable with and has the features that best fit your needs.&lt;/p&gt;

&lt;p&gt;Anyway, my pick is AWS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Web application costs
&lt;/h2&gt;

&lt;p&gt;Here’s where we get into the first strategic decision. My service is going to have a &lt;a href="https://restfulapi.net/" rel="noopener noreferrer"&gt;REST API&lt;/a&gt; component that clients will use to manage their sites. Clients will need to interact with this API every time they update their site but it’s impossible to predict how often that will be. No matter what, it needs to be quick and respond back to the client fast.&lt;/p&gt;

&lt;p&gt;So let’s say I want to host my API web app on a couple of &lt;a href="https://aws.amazon.com/ec2/" rel="noopener noreferrer"&gt;EC2 instances&lt;/a&gt; with load balancing. Pretty standard setup. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F8c49rmb92qrq57dqvm9u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F8c49rmb92qrq57dqvm9u.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ok, so with this model we already have a problem: the cost of my web application hosting is $61/mo. That means I’d need to have 13 pro users and 1,300 free users before this broke even—way above the $0.05 per 100 users Holy Grail. This setup would have to easily support 120k users before it hit that number, and I’m not sure two medium instances will be able to do that. And there’s another problem: I’m going to have to manually scale this infrastructure as my service scales, monitoring response time, CPU usage, and latency. That takes time and attention and more money. There’s a better way to do this. &lt;/p&gt;

&lt;p&gt;It’s called &lt;a href="https://serverless.com" rel="noopener noreferrer"&gt;Serverless&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So what happens if I build my application logic using &lt;a href="https://aws.amazon.com/lambda/" rel="noopener noreferrer"&gt;Serverless Lambda&lt;/a&gt; functions behind &lt;a href="https://aws.amazon.com/api-gateway/" rel="noopener noreferrer"&gt;API Gateway&lt;/a&gt;?&lt;/p&gt;

&lt;p&gt;With Lambda I don’t have to worry about a server, I only need to know how much memory an API request is going to take and how long it should run. In my setup, each HTTP request translates to one Lambda invocation. All of my requests are only doing things like DB lookups and responding back to the client (nothing memory/cpu intensive like image resizing, etc) so I feel pretty comfortable using 128MB of memory for each invocation. But how many requests will my users make? Ok here’s some more paper napkin math:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Let’s say each user will update their website 200 times every month. (Seems like a lot but let’s be aggressive here.)&lt;/li&gt;
&lt;li&gt;That means 100 users will invoke 20k Lambdas every month. Lambda charges $0.20 per 1 million invocations. That means &lt;em&gt;100 users will cost me $0.004/mo&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But I also want to use API Gateway so let’s add that in too:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API Gateway charges $3.50 per million requests, so 100 users will cost me $0.07/mo.&lt;/li&gt;
&lt;/ul&gt;
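&lt;p&gt;That napkin math is easy to sanity-check in a few lines. (The per-million rates are the list prices quoted above; verify them against current AWS pricing. This also ignores Lambda’s GB-second duration charge, which the free tier covers at this scale.)&lt;/p&gt;

```javascript
// Lambda + API Gateway request costs per 100 users, per month.
const USERS = 100;
const UPDATES_PER_USER = 200;     // aggressive guess from above
const LAMBDA_PER_MILLION = 0.20;  // $ per 1M invocations
const APIGW_PER_MILLION = 3.50;   // $ per 1M requests

const requests = USERS * UPDATES_PER_USER;                // 20,000
const lambdaCost = (requests / 1e6) * LAMBDA_PER_MILLION; // $0.004
const apigwCost = (requests / 1e6) * APIGW_PER_MILLION;   // $0.07
console.log((lambdaCost + apigwCost).toFixed(3));         // "0.074"
```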

&lt;p&gt;The other benefit of Lambda + API Gateway is that it automatically scales to meet demand. &lt;em&gt;If nobody uses it, I pay nothing.&lt;/em&gt; If more people use it, I pay accordingly. I don’t need to monitor servers and try to match my EC2 instances to demand. It all just happens. This is much more sustainable.&lt;/p&gt;

&lt;p&gt;One more thing: Lambda and API Gateway both have a free tier, so it’s going to be a while before I even need to start paying for them. With these numbers I can scale to 25k users before I pay anything.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;⚡️ Web application total cost per 100 users (after 25k users): &lt;code&gt;$0.074&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Database costs
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Sigh&lt;/em&gt;, databases. It’s impossible to build a web app without them, but they’re almost always the bottleneck and a great source of pain and suffering. If you don’t do backups right, optimize your queries and indexes, and set up replication correctly, you can find yourself in a world of hurt. If your app is slow, it’s almost always because of your database. Fortunately my service has a pretty simple data model: we’re just tracking users, websites, and what’s in those sites. A fundamental requirement is automated backups and the ability to restore. A simple data model makes NoSQL a good candidate, but let’s look at a few different scenarios:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/rds/aurora/" rel="noopener noreferrer"&gt;Relational DB&lt;/a&gt; (MySQL/PostgresDB): If I go with a db.t3.medium instance (seems small but maybe it will work for my needs) that’s $60/mo. And there’s no “free tier” here. I have to get to 120k users before that becomes profitable. And based on my experience with MySQL, I have serious doubts that this tiny instance is going to scale to 120k users.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/documentdb/" rel="noopener noreferrer"&gt;No SQL DB (Mongo)&lt;/a&gt;: Mongo seems better suited to my needs but for the bare minimum—to host it on AWS using their fully managed option DocumentDB on a db.r4.large instance—it’s going to be $245/mo. No free tier. I’d have to hit 50k users before I even started breaking even on the database bill. I’d have to get to 1 million users before this became profitable enough to run at the margins I want. And there’s no way it’s going to scale up without changing instances from db.r4.large.25&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So just to start development with any of those databases, I’d immediately be losing money. &lt;/p&gt;

&lt;p&gt;But let’s look at &lt;a href="https://aws.amazon.com/dynamodb/" rel="noopener noreferrer"&gt;DynamoDB&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Dynamo charges on requests, storage, and backups. Let’s crunch some numbers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Requests.&lt;/strong&gt; There’s a pricing difference between read and write requests. My hosting service is going to be read-heavy, so I’ll guess a ratio of 20% writes and 80% reads. With my estimate of 20,000 requests per 100 users, the total requests would cost $0.054 ($0.05 writes and $0.004 reads).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storage.&lt;/strong&gt; The data I’ll be storing is really small, even at scale, so I don’t anticipate going over the 25GB free tier.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backups.&lt;/strong&gt; Let’s say that I get halfway to the 25GB free tier. My backups would cost $1.25/mo. This is the riskiest part here because, unless I clean out my data, this cost will just increase over time. Still it’s pretty cheap. I’m estimating that each user will consume about 10KB of storage. That means that backups will cost $0.00125/mo per 100 users.&lt;/li&gt;
&lt;/ul&gt;
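&lt;p&gt;As a sketch, the request side of that math looks like this. The per-million rates below are assumptions (roughly the on-demand list prices when I wrote this), so plug in current DynamoDB pricing before trusting the output:&lt;/p&gt;

```javascript
// DynamoDB on-demand request cost for a read-heavy workload.
// The rates are parameters, not facts -- check current pricing.
function dynamoRequestCost({ requests, writeRatio, writePerMillion, readPerMillion }) {
  const writes = requests * writeRatio;
  const reads = requests - writes;
  return (writes / 1e6) * writePerMillion + (reads / 1e6) * readPerMillion;
}

// 20,000 requests per 100 users, 20% writes / 80% reads:
const perHundredUsers = dynamoRequestCost({
  requests: 20000,
  writeRatio: 0.2,
  writePerMillion: 1.25, // assumed on-demand write rate, $ per 1M
  readPerMillion: 0.25,  // assumed on-demand read rate, $ per 1M
});
```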

&lt;p&gt;&lt;strong&gt;⚡️ Database total costs per 100 users: &lt;code&gt;$0.05&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  File hosting
&lt;/h2&gt;

&lt;p&gt;The last part of my service is hosting customer static web files and serving them up over a CDN. This is where things can get really dangerous. What happens if someone hosts a webpage with an embedded video and that webpage goes to the front of HackerNews and gets 200k visits in a single day? I need to be very careful how I structure the free and pro tiers so I don’t get killed with bandwidth and storage. Here’s what I’m thinking:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Free users: 100MB total storage and 100MB transfer/mo. &lt;/li&gt;
&lt;li&gt;Pro users: 1GB total storage and 10GB transfer/mo.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I’m building a static web host, not a media service. These limits should be more than enough for the average site with HTML, CSS, JS and images.&lt;/p&gt;

&lt;p&gt;For the block storage I’ll use &lt;a href="https://aws.amazon.com/s3/" rel="noopener noreferrer"&gt;S3&lt;/a&gt; and for the CDN I’ll use &lt;a href="https://aws.amazon.com/cloudfront/" rel="noopener noreferrer"&gt;CloudFront&lt;/a&gt;. (I’ll limit the CloudFront CDN to US and EU pricing just to make things simpler.)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Free S3 storage costs: $0.23&lt;/li&gt;
&lt;li&gt;Pro S3 storage costs: $0.23&lt;/li&gt;
&lt;li&gt;Free data transfer costs: $0.084&lt;/li&gt;
&lt;li&gt;Pro data transfer costs: $0.85&lt;/li&gt;
&lt;/ul&gt;
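&lt;p&gt;The same kind of helper works for the storage and transfer side. The rates here are assumptions, roughly the US-region list prices at the time of writing; check current S3 and CloudFront pricing before reusing them:&lt;/p&gt;

```javascript
// S3 storage + CloudFront transfer cost for a block of users.
const S3_PER_GB = 0.023; // assumed S3 standard storage rate, $/GB-month
const CF_PER_GB = 0.085; // assumed CloudFront US/EU transfer rate, $/GB

const monthlyCost = ({ storageGb, transferGb }) =>
  storageGb * S3_PER_GB + transferGb * CF_PER_GB;

// 100 free users at 100 MB of storage and 100 MB of transfer each:
const freeBlock = monthlyCost({ storageGb: 10, transferGb: 10 });
```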

&lt;p&gt;Again, with this model I won’t even go over the free tier until I break 5,000 users.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;⚡️ Data storage and transfer costs per 100 users: &lt;code&gt;$1.394&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping things up
&lt;/h2&gt;

&lt;p&gt;One of the reasons I’ve written this article is that I wanted to show how big of an impact your technology and architecture choices can have on the sustainability of your business. You need to keep costs low when you’re starting out but you also need to keep them low as you scale. If you pick the wrong tech or price things the wrong way, you can find yourself with a hosting bill that you won’t be able to pay. But if you use technologies that are designed for your needs and budget, you can go from small to big while maintaining a healthy profit.&lt;/p&gt;

&lt;p&gt;To recap, here are the pricing scenarios I covered in this article:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;$306/mo just to start. I’d need to hit 6,000 users before I started to break even, and about 300k users before I started making a healthy profit. But I’d need to grow my infrastructure to accommodate them. It’s likely that at 300k users I’d be running a $5k/mo infrastructure bill for my EC2 instances and database hosting, giving me a $10,000 profit margin. That’s not bad, but it’s barely enough for a full-time employee, and this system would need constant infrastructure maintenance to continue to grow.&lt;/li&gt;
&lt;li&gt;$1.424/mo to start. Automated scaling. Zero server maintenance. Instant profitability, even with small numbers of users. If I get to 300k users, I’ll be able to host them and make a $13,800 profit. The best part is that this would automatically scale up to meet demand, allowing for investment in the product instead of in scaling the infrastructure.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;⚡️ Final hosting costs per 100 users:  &lt;code&gt;$1.424/mo.&lt;/code&gt; &lt;code&gt;$3.58/mo&lt;/code&gt; profit.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That looks pretty sustainable to me.&lt;/p&gt;

&lt;p&gt;===&lt;/p&gt;

&lt;p&gt;Obviously there’s other ways to make your business profitable than just picking serverless, cheap technologies. You can raise your pricing, increase conversion rates from free, reduce the amount of expensive features you give your free users, etc. But my recommendation is to go cheap when starting out. Build a business that automatically scales with your success and doesn’t punish your wallet for early experiments. Hit me up if you have any questions. Have fun!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>scalability</category>
      <category>serverless</category>
      <category>lambda</category>
    </item>
    <item>
      <title>How to pick the right tech stack for your startup</title>
      <dc:creator>Levi Nunnink</dc:creator>
      <pubDate>Tue, 02 Jul 2019 20:54:38 +0000</pubDate>
      <link>https://dev.to/levinunnink/how-to-pick-the-right-tech-stack-for-your-startup-4cgo</link>
      <guid>https://dev.to/levinunnink/how-to-pick-the-right-tech-stack-for-your-startup-4cgo</guid>
      <description>&lt;h2&gt;
  
  
  Ugly but effective
&lt;/h2&gt;

&lt;p&gt;You know what sucks? The LAMP stack. On almost every front, it’s a lousy web stack. There’s a real sense of shame attached to the label &lt;a href="https://twitter.com/php_ceo"&gt;“PHP developer”&lt;/a&gt;. But you know what powered some of the most &lt;a href="https://www.quora.com/What-is-the-tech-stack-behind-Slack"&gt;insanely successful&lt;/a&gt; &lt;a href="https://linux.slashdot.org/story/09/04/11/1142246/how-facebook-runs-its-lamp-stack"&gt;startups&lt;/a&gt; in the last ten years? The LAMP stack.&lt;/p&gt;

&lt;p&gt;As a technical founder, your job isn’t to pick the “best” web technologies, it’s to pick adequate technologies that will make you best. What is the stack that will be fastest for you to ship a product? If you can whip up a LAMP app in no time, go with that. If you live and breathe Rails go with that. If you love MERN (my personal favorite) go with that.&lt;/p&gt;

&lt;p&gt;Originally, &lt;a href="https://droplr.com"&gt;Droplr&lt;/a&gt; started off as a CodeIgniter app, much to my shame in those days. I remember a palpable sense of embarrassment when I had to admit to some developers that Droplr’s API was nothing more than a bunch of PHP scripts connected to a MySQL database. But who the hell cares? The job wasn’t to build the most elegant API on the sexiest tech stack, it was to upload a file and give a user a link. It was great at that and that was the criteria our customers judged us on. Under the hood it was ugly but it was effective.&lt;/p&gt;

&lt;p&gt;So, bottom line: you should already know what the ideal tech stack is for your startup, and you can stop reading this article. Pick what you’re best in.&lt;/p&gt;

&lt;p&gt;But here’s the tech stack I would use if I were starting a new project today. This is what is right for me. If it’s not right for you, that’s ok.&lt;/p&gt;

&lt;h2&gt;
  
  
  Different types of application logic
&lt;/h2&gt;

&lt;p&gt;At the core, every web app is going to contain pretty much the same thing:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Synchronous logic&lt;/strong&gt; (do this now)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Asynchronous logic&lt;/strong&gt; (do this and report back when you’re done)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scheduled logic&lt;/strong&gt; (do this every hour/day/week).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If your project makes use of HTTP (of course it does), it’s going to contain a lot of synchronous logic. For example, REST APIs are fundamentally synchronous, i.e. I want to query an endpoint and get back a list of objects. There’s no option for an HTTP request to report back later; it needs to give a result as quickly as possible. If it takes too long, it fails (or throws a timeout error). Most basic web frameworks (Express, Rails, CodeIgniter) are built around synchronous logic.&lt;/p&gt;

&lt;p&gt;But don’t be fooled, asynchronous logic can be just as important. What if you need to write a service that takes a screenshot of a URL at five different screen resolutions, in different geolocations, and then shows the result to the user? Most likely there’s no way you can do that synchronously in a single HTTP request. And you don’t really need to: the user can wait until the logic is complete to get the result. It’s an ideal case for asynchronous logic. Chances are, your app is going to need to run asynchronous logic. Your tech should be ready to handle this. Don’t make it an afterthought.&lt;/p&gt;

&lt;p&gt;Finally, scheduled logic is something that often gets forgotten, but it can actually be very important. How are you going to back up your database? Track daily statistics? Send weekly email digests to your customers? These are all examples of scheduled logic. Maybe you’ll get by for a few iterations without a good solution for scheduled logic, but chances are you’ll need it sooner than you think.&lt;/p&gt;

&lt;p&gt;On top of all this, whatever solution you choose needs to be fast: fast to develop, fast to deploy, and fast to debug. When you’re starting out, always choose speed over scalability.&lt;/p&gt;

&lt;p&gt;If you already have a solution that meets these requirements that you can code crazy fast in, great! Use that one. But if you don’t, I’d recommend Node.JS + Serverless + MySQL/Postgres/Aurora or Mongo/DocumentDB as an ideal choice for your tech stack.&lt;/p&gt;

&lt;h2&gt;
  
  
  The case for Node.JS
&lt;/h2&gt;

&lt;p&gt;I’m sure there’s reasons to hate Node for linguistic purists but for me this is the language that will most help you be &lt;a href="https://bestonethingbook.com"&gt;best at one thing&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;First, the robust npm ecosystem might have some crap in it, but it means you’ll almost never encounter a problem without a pre-built solution. For example, you won’t have to spend hours or days writing your own date parser/formatter. Just &lt;code&gt;$ npm install moment&lt;/code&gt; and you have a better date library than you could have ever written yourself. For every challenge there is a decent-to-excellent solution. And if, once in a blue moon, there’s no acceptable solution, you can write your own module and publish it to npm. The beauty of npm is that it radically speeds up your development cycles by letting you focus on what you should actually be working on, and results in a cleaner codebase, focused on its one job.&lt;/p&gt;

&lt;p&gt;Second, Node’s architecture is almost magical in how well it handles both synchronous and asynchronous logic. Asynchronous Javascript used to result in some ugly code (callback hell is a real thing) but with good support for &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function"&gt;Async functions&lt;/a&gt; and &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise"&gt;Promises&lt;/a&gt;, the end result code is much cleaner. If you’ve banged your head over your keyboard trying to make PHP or Ruby do asynchronous logic, Node will seem like a revelation to you. And &lt;a href="https://nodejs.org/api/repl.html"&gt;Node’s REPL module&lt;/a&gt; makes testing and debugging code effortlessly simple.&lt;/p&gt;
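&lt;p&gt;For a feel of the difference, here’s a hypothetical sequential lookup written with &lt;code&gt;async&lt;/code&gt;/&lt;code&gt;await&lt;/code&gt;; the same flow written as nested callbacks would be several levels deep:&lt;/p&gt;

```javascript
// Hypothetical stand-ins for real data access -- any promise-returning
// functions (a DB driver, an HTTP client) work the same way.
const getUser = async (id) => ({ id, name: 'levi' });
const getSites = async (userId) => [{ userId, domain: 'example.com' }];

async function loadDashboard(userId) {
  const user = await getUser(userId);    // reads top to bottom
  const sites = await getSites(user.id); // runs after the user resolves
  return { user, sites };
}
```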

&lt;p&gt;Finally, Node has the advantage of Javascript. Yes, Javascript is an insane language but it runs just about everywhere on the stack. Imagine what it means to have everyone on your engineering team, back-end, front-end, full-stack, speaking the same language. Think of what that does for code reviews, for knowledge sharing, for best practices, for recruiting. Ultimately the result of a standard language means that you can stay fast as your team scales. Speed doesn’t stop being important as your team grows, it just gets harder. But Node.js + Javascript helps.&lt;/p&gt;

&lt;h2&gt;
  
  
  The case for going serverless
&lt;/h2&gt;

&lt;p&gt;When I started at Riskalyze and Droplr, one of my first tasks was provisioning a server in our cloud, installing and compiling dependencies, configuring firewalls and security. We had a document as long as your arm in our private Wiki with all the meticulous steps to set up a new production API instance. Did I enjoy server admin? No. Was it time consuming? Yes. Is there a better way? Hell, yes.&lt;/p&gt;

&lt;p&gt;If you haven’t heard of &lt;a href="https://serverless.com"&gt;Serverless&lt;/a&gt;, I bet you can guess what it is by the name: Serverless is a Node framework that lets you write functions that simply run in the cloud without a server. It cuts out the server admin part of building and shipping a web app, it removes scaling from the equation (at least for a while), and deployment is a single step. It makes shipping a fully functional app fast. And remember, speed is everything.&lt;/p&gt;

&lt;p&gt;One of the beauties of Serverless is that it fully supports synchronous, asynchronous, and scheduled logic. It’s trivial to ship code that is triggered by an HTTP request, an SNS notification, or a cron schedule. In one package it contains all the features you’ll need.&lt;/p&gt;
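&lt;p&gt;In a &lt;code&gt;serverless.yml&lt;/code&gt; those three trigger types look roughly like this (the function and handler names are hypothetical):&lt;/p&gt;

```yaml
functions:
  api:
    handler: handler.api      # synchronous: respond to an HTTP request
    events:
      - http:
          path: sites/{id}
          method: get
  publish:
    handler: handler.publish  # asynchronous: triggered by an SNS message
    events:
      - sns: site-published
  backup:
    handler: handler.backup   # scheduled: run on a cron-style schedule
    events:
      - schedule: rate(1 day)
```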

&lt;p&gt;You might be thinking, “that’s great, but I don’t have an AWS cloud on my desktop, how do I develop an app without pushing every change to the cloud?” That’s where the flexibility of Node leaps to the rescue. For HTTP functions, you can easily run those on a local port like any other HTTP app. For functions that rely more on AWS services (say they need to be triggered by an SNS message), I recommend running LocalStack.&lt;/p&gt;

&lt;p&gt;There’s a lot more that could be said about Serverless. At Droplr we’ve done a lot of work internally figuring out some best practices around this. For further reference, check out &lt;a href="https://github.com/levinunnink/serverless-boilerplate"&gt;my sample project&lt;/a&gt; that gives examples of synchronous, asynchronous, and scheduled logic: &lt;a href="https://github.com/levinunnink/serverless-boilerplate"&gt;https://github.com/levinunnink/serverless-boilerplate&lt;/a&gt;. You can use this as a starting point for your own serverless app.&lt;/p&gt;

&lt;h2&gt;
  
  
  What about the database?
&lt;/h2&gt;

&lt;p&gt;As I previously mentioned, when I started at Riskalyze we used &lt;a href="https://www.mysql.com/"&gt;MySQL&lt;/a&gt;, which ended up being a great choice. Riskalyze is a fintech company and financial data is highly structured and relational. It was crucial for us to be able to join and roll up data based off of different schema keys. Postgres would have also been a fine choice. At Droplr we have a much simpler data set and we ended up going with &lt;a href="https://www.mongodb.com/"&gt;MongoDB&lt;/a&gt;. This also turned out to be a good choice, as it allowed us to store huge sets of data without enforcing a rigid structure and constantly migrating our data (very nice if your model is going to evolve with your company).&lt;/p&gt;

&lt;p&gt;Ultimately, the best answer is to pick what is going to be fastest for you. What’s going to be easiest for you to build schemas and write queries with? Go with that one.&lt;/p&gt;

&lt;p&gt;The one thing that I’d strongly recommend is that whatever database solution you choose, make sure it’s managed, i.e. you’re not the one managing it. For every major database there’s a fine managed option available. You shouldn’t be the one worrying about backups and replication. Pay the money and use a service that will handle these things for you. You need to focus on being best at one thing, and that one thing isn’t database administration.&lt;/p&gt;

&lt;p&gt;===&lt;/p&gt;

&lt;p&gt;All of these are just my suggestions. The beauty of being a technical founder is that you get to pick the technology, as long as it allows you to be best at one thing and adequate everywhere else. Be proud of your tech stack. Let other people hate on LAMP. You go forth and ship your product!&lt;/p&gt;

&lt;p&gt;If you want to stay in touch or would like to chat about some of the stuff I brought up in this article, hit me up on Twitter &lt;a href="https://twitter.com/LeviNunnink"&gt;@LeviNunnink&lt;/a&gt;. I'm here to help.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>node</category>
      <category>javascript</category>
      <category>startup</category>
    </item>
  </channel>
</rss>
