<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: George Mandis</title>
    <description>The latest articles on DEV Community by George Mandis (@georgemandis).</description>
    <link>https://dev.to/georgemandis</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F31140%2F21a4cdbd-093d-424e-8b23-a68398d69a06.jpeg</url>
      <title>DEV Community: George Mandis</title>
      <link>https://dev.to/georgemandis</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/georgemandis"/>
    <language>en</language>
    <item>
      <title>OpenAI Charges by the Minute, So Make the Minutes Shorter</title>
      <dc:creator>George Mandis</dc:creator>
      <pubDate>Tue, 24 Jun 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/georgemandis/openai-charges-by-the-minute-so-make-the-minutes-shorter-3e2l</link>
      <guid>https://dev.to/georgemandis/openai-charges-by-the-minute-so-make-the-minutes-shorter-3e2l</guid>
      <description>&lt;p&gt;Want to make OpenAI transcriptions faster and cheaper? Just speed up your audio.&lt;/p&gt;

&lt;p&gt;I mean that very literally. Run your audio through &lt;a href="https://gist.github.com/georgemandis/4fd62bf5027b7a058f913d5dc32c2040" rel="noopener noreferrer"&gt;ffmpeg&lt;/a&gt; at 2x or 3x before transcribing it. You’ll spend fewer tokens and less time waiting with almost no drop in transcription quality.&lt;/p&gt;

&lt;p&gt;That’s it!&lt;/p&gt;

&lt;p&gt;Here’s a script combining all of my favorite little toys and tricks to get the job done. You’ll need &lt;a href="https://github.com/yt-dlp/yt-dlp" rel="noopener noreferrer"&gt;yt-dlp&lt;/a&gt;, &lt;a href="https://ffmpeg.org" rel="noopener noreferrer"&gt;ffmpeg&lt;/a&gt; and &lt;a href="https://github.com/simonw/llm" rel="noopener noreferrer"&gt;llm&lt;/a&gt; installed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Extract the audio from the video
yt-dlp -f 'bestaudio[ext=m4a]' --extract-audio --audio-format m4a -o 'video-audio.m4a' "https://www.youtube.com/watch?v=LCEmiRjPEtQ" -k;

# Create a low-bitrate MP3 version at 3x speed
ffmpeg -i "video-audio.m4a" -filter:a "atempo=3.0" -ac 1 -b:a 64k video-audio-3x.mp3;

# Send it along to OpenAI for a transcription
curl --request POST \
  --url https://api.openai.com/v1/audio/transcriptions \
  --header "Authorization: Bearer $OPENAI_API_KEY" \
  --header 'Content-Type: multipart/form-data' \
  --form file=@video-audio-3x.mp3 \
  --form model=gpt-4o-transcribe &amp;gt; video-transcript.txt;

# Get a nice little summary

cat video-transcript.txt | llm --system "Summarize the main points of this talk."

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I just saved you time by jumping straight to the point, but read on if you want more of a story about how I accidentally discovered this while trying to summarize a 40-minute talk from Andrej Karpathy.&lt;/p&gt;

&lt;p&gt;Also read on if you’re wondering why I didn’t just use the built-in auto-transcription that YouTube provides, though the short answer there is easy: I’m sort of a doofus and thought—incorrectly—it wasn’t available. So I did things the hard way.&lt;/p&gt;

&lt;h3&gt;
  
  
  I Just Wanted the TL;DW(atch)
&lt;/h3&gt;

&lt;p&gt;A former colleague of mine sent me &lt;a href="https://www.youtube.com/watch?v=LCEmiRjPEtQ" rel="noopener noreferrer"&gt;this talk&lt;/a&gt; from Andrej Karpathy about how AI is changing software. I wasn’t familiar with Andrej, but saw he’d worked at Tesla. That, coupled with the talk being part of a Y Combinator series and running 40 minutes, made me think “Ugh. Do I… really want to watch this? Another 'AI is changing everything' talk from the usual suspects, to the usual crowds?”&lt;/p&gt;

&lt;p&gt;If ever there were a use-case for dumping something into an LLM to get the gist of it and walk away, this felt like it. I respected the person who sent it to me though and wanted to do the noble thing: use AI to summarize the thing for me, blindly trust it and engage with the person pretending I had watched it.&lt;/p&gt;

&lt;p&gt;My first instinct was to pipe the transcript into an LLM and get the gist of it. &lt;a href="https://gist.github.com/simonw/9932c6f10e241cfa6b19a4e08b283ca9" rel="noopener noreferrer"&gt;This script&lt;/a&gt; is the one I would previously reach for to pull the auto-generated transcripts from YouTube:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yt-dlp --all-subs --skip-download \
  --sub-format ttml/vtt/best \
  [url]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For some reason though, no subtitles were downloaded. I kept running into an error!&lt;/p&gt;

&lt;p&gt;Later, after some head-scratching and rereading &lt;a href="https://github.com/yt-dlp/yt-dlp?tab=readme-ov-file#subtitle-options" rel="noopener noreferrer"&gt;the documentation&lt;/a&gt;, I realized my version (2025.04.03) was outdated.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Long story short:&lt;/strong&gt; Updating to the latest version (2025.06.09) fixed it, but for some reason I did not try this &lt;em&gt;before&lt;/em&gt; going down a totally different rabbit hole. I guess I got this little write-up and exploration out of it though.&lt;/p&gt;

&lt;p&gt;If you care more about summarizing transcripts and less about the vagaries of audio-transcriptions and tokens, this is the correct answer and your off-ramp.&lt;/p&gt;

&lt;h3&gt;
  
  
  My Transcription Workflow
&lt;/h3&gt;

&lt;p&gt;I already had an old, home-brewed script that would extract the audio from any video URL, pipe it through &lt;a href="https://github.com/openai/whisper" rel="noopener noreferrer"&gt;whisper&lt;/a&gt; locally and dump the transcription in a text file.&lt;/p&gt;

&lt;p&gt;That worked, but I was on dwindling battery power in a coffee shop. Not ideal for longer, local inference, mighty as my M3 MacBook Air still feels to me. I figured I would try offloading it to &lt;a href="https://platform.openai.com/docs/guides/speech-to-text" rel="noopener noreferrer"&gt;OpenAI’s API&lt;/a&gt; instead. Surely that would be faster?&lt;/p&gt;

&lt;h3&gt;
  
  
  Testing OpenAI’s Transcription Tools
&lt;/h3&gt;

&lt;p&gt;Okay, using the &lt;code&gt;whisper-1&lt;/code&gt; model it’s &lt;em&gt;still&lt;/em&gt; pretty slow, but it gets the job done. Had I opted for the model I knew and moved on, the story might end here.&lt;/p&gt;

&lt;p&gt;However, out of curiosity, I went straight for the newer &lt;code&gt;gpt-4o-transcribe&lt;/code&gt; model first. It’s built to handle multimodal inputs and promises faster responses.&lt;/p&gt;

&lt;p&gt;I quickly hit another roadblock: there’s a 25-minute audio limit and my audio was nearly 40 minutes long.&lt;/p&gt;

&lt;h3&gt;
  
  
  Let's Try Something Obvious
&lt;/h3&gt;

&lt;p&gt;At first I thought about trimming the audio to fit somehow, but there wasn’t an obvious 14 minutes to cut. Trimming the beginning and end would give me a minute or so at most.&lt;/p&gt;

&lt;p&gt;An interesting, weird idea I thought about for a second but never tried was cutting a chunk or two out of the middle. Maybe I would somehow still have enough info for a relevant summary?&lt;/p&gt;

&lt;p&gt;Then it crossed my mind— &lt;strong&gt;what if I just sped up the audio before sending it over?&lt;/strong&gt; People listen to podcasts at accelerated 1-2x speeds all the time.&lt;/p&gt;

&lt;p&gt;So I wrote a &lt;a href="https://gist.github.com/georgemandis/4fd62bf5027b7a058f913d5dc32c2040" rel="noopener noreferrer"&gt;quick script&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ffmpeg -i video-audio.m4a -filter:a "atempo=2.0" -ac 1 -b:a 64k video-audio-2x.mp3

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ta-da! Now I had something closer to a 20-minute file to send to OpenAI.&lt;/p&gt;

&lt;p&gt;I uploaded it and… it worked like a charm! &lt;a href="https://gist.github.com/georgemandis/b2a68b345262b94782fa6b08e41fbcf2" rel="noopener noreferrer"&gt;Behold the summary&lt;/a&gt; bestowed upon me that gave me enough confidence to reply to my colleague as though I had watched it.&lt;/p&gt;

&lt;p&gt;But there was something... interesting here. Did I just stumble across a sort of obvious, straightforward hack? Is everyone in the audio-transcription business already doing this and am I just haphazardly bumbling into their secrets?&lt;/p&gt;

&lt;p&gt;I had to dig deeper.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why This Works: Our Brains Forgive, and So Does AI
&lt;/h3&gt;

&lt;p&gt;There’s an interesting parallel here in my mind with optimizing images. Traditionally you have lossy and lossless file formats. A lossy file format kind of gives away the game in its description—the further you crunch and compact the bytes the more fidelity you’re going to lose. It works because the human brain just isn’t likely to pick up on the artifacts and imperfections.&lt;/p&gt;

&lt;p&gt;But even with a “lossless” file format there are tricks you can lean into that rely on the limits of human perception. One of the primary ways you can do that with a PNG or GIF is reducing the number of unique colors in the palette. You’d be surprised by how often a palette of 64 colors or fewer might actually be enough and perceived as significantly more.&lt;/p&gt;

&lt;p&gt;There’s also a parallel in my head between this and the brain’s ability to still comprehend text with spelling mistakes, dropped words and other errors, e.g. &lt;a href="https://en.wikipedia.org/wiki/Transposed_letter_effect" rel="noopener noreferrer"&gt;transposed letter effects&lt;/a&gt;. Our brains have a knack for filling in the gaps, and when you go looking through the world with a magnifying glass you'll start to notice lots of them.&lt;/p&gt;

&lt;p&gt;Speeding up the audio starts to drop the more subtle sounds and occasionally shorter words from the audio, but it doesn’t seem to hurt my ability to &lt;em&gt;comprehend&lt;/em&gt; what I’m hearing—even if I do have to focus. These audio transcription models seem to be pretty good at this as well.&lt;/p&gt;

&lt;h3&gt;
  
  
  Wait—how far can I push this? Does It Actually Save Money?
&lt;/h3&gt;

&lt;p&gt;Turns out yes. OpenAI &lt;a href="https://platform.openai.com/docs/pricing" rel="noopener noreferrer"&gt;charges for transcription&lt;/a&gt; based on audio tokens, which scale with the duration of the input. Faster audio = fewer seconds = fewer tokens.&lt;/p&gt;

&lt;p&gt;Here are some rounded numbers for the 40-minute audio file, breaking down the audio input and text output token costs:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Speed&lt;/th&gt;
&lt;th&gt;Duration (seconds)&lt;/th&gt;
&lt;th&gt;Audio Input Tokens&lt;/th&gt;
&lt;th&gt;Input Token Cost&lt;/th&gt;
&lt;th&gt;Output Token Cost&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1x (original)&lt;/td&gt;
&lt;td&gt;2,372&lt;/td&gt;
&lt;td&gt;NA (too long)&lt;/td&gt;
&lt;td&gt;NA&lt;/td&gt;
&lt;td&gt;NA&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2x&lt;/td&gt;
&lt;td&gt;1,186&lt;/td&gt;
&lt;td&gt;11,856&lt;/td&gt;
&lt;td&gt;$0.07&lt;/td&gt;
&lt;td&gt;$0.02&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3x&lt;/td&gt;
&lt;td&gt;791&lt;/td&gt;
&lt;td&gt;7,904&lt;/td&gt;
&lt;td&gt;$0.04&lt;/td&gt;
&lt;td&gt;$0.02&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
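
&lt;p&gt;One sanity check on the table above: both sped-up files work out to almost exactly 10 audio input tokens per second of submitted audio. That rate is my own inference from these two data points, not a documented figure, but it supports the idea that audio tokens scale directly with duration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Audio input tokens divided by duration, using the numbers above
const rate2x = 11_856 / 1_186; // tokens per second at 2x
const rate3x = 7_904 / 791;    // tokens per second at 3x
console.log(rate2x.toFixed(1), rate3x.toFixed(1)); // 10.0 10.0

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;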

&lt;p&gt;That’s a solid 33% reduction in audio input tokens going from 2x to 3x! Keep in mind, though, that output tokens are the pricier ones per token: they run $10 per 1M tokens, whereas audio input tokens are priced at $6 per 1M tokens as of the time of this writing.&lt;/p&gt;

&lt;p&gt;Also interesting to note—my output tokens for the 2x and 3x versions were exactly the same: 2,048. This kind of makes sense, I think? To the extent the output tokens are a reflection of that model’s ability to understand and summarize the input, my takeaway is a “summarized” (i.e. reduced-token) version of the same audio yields the same amount of comprehensibility.&lt;/p&gt;

&lt;p&gt;This is also probably a reflection of the 4,096 token ceiling on transcriptions generally when using the &lt;code&gt;gpt-4o-transcribe&lt;/code&gt; model. I suspect half the context window is reserved for the output tokens and this is basically reflecting our request using it up in its entirety. We might see diminishing results with longer transcriptions.&lt;/p&gt;

&lt;p&gt;But back to money.&lt;/p&gt;

&lt;p&gt;So the back-of-the-envelope calculator for a single transcription looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;6 * (audio_input_tokens / 1_000_000) + 10 * (text_output_tokens / 1_000_000);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That does &lt;em&gt;not&lt;/em&gt; quite seem to jibe with the estimated cost of $0.006 per minute stated on the pricing page, at least for the 2x speed. That version (19-20 minutes) seemed to cost about $0.09 whereas the 3x version (13 minutes) cost about $0.07 (pretty accurate actually), if I’m adding up the tokens correctly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Pricing for 2x speed
6 * (11_856 / 1_000_000) + 10 * (2_048 / 1_000_000) = 0.09

# Pricing for 3x speed
6 * (7_904 / 1_000_000) + 10 * (2_048 / 1_000_000) = 0.07

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It would seem that estimate isn’t just based on the length of the audio but also some assumptions around how many tokens per minute are going to be generated from a normal speaking cadence.&lt;/p&gt;

&lt;p&gt;That’s… kind of fascinating! I wonder how &lt;a href="https://en.wikipedia.org/wiki/John_Moschitta_Jr." rel="noopener noreferrer"&gt;John Moschitta&lt;/a&gt; feels about this.&lt;/p&gt;

&lt;p&gt;Comparing these costs to &lt;code&gt;whisper-1&lt;/code&gt; is easy because the pricing table more confidently advertises the cost—not “estimated” cost—as a flat $0.006 per minute. I’m assuming that’s minute of audio processed, not minute of inference.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;gpt-4o-transcribe&lt;/code&gt; model actually compares pretty favorably.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Speed&lt;/th&gt;
&lt;th&gt;Duration&lt;/th&gt;
&lt;th&gt;Cost&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1x&lt;/td&gt;
&lt;td&gt;2,372 seconds&lt;/td&gt;
&lt;td&gt;$0.24&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2x&lt;/td&gt;
&lt;td&gt;1,186 seconds&lt;/td&gt;
&lt;td&gt;$0.12&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3x&lt;/td&gt;
&lt;td&gt;791 seconds&lt;/td&gt;
&lt;td&gt;$0.08&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Does This Save Money?
&lt;/h3&gt;

&lt;p&gt;In short, yes! It’s not particularly rigorous, but it seems like we reduced the cost of transcribing our 40-minute audio file by roughly 25%, from about $0.09 to $0.07, simply by speeding the audio up from 2x to 3x.&lt;/p&gt;

&lt;p&gt;If we could compare to a 1x version of the audio file trimmed to the 25-minute limit, I bet we could paint an even more impressive picture of cost reduction. We kind of can with the &lt;code&gt;whisper-1&lt;/code&gt; chart. You could make the case this technique reduced costs by 67%!&lt;/p&gt;
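
&lt;p&gt;Spelling that back-of-the-envelope math out, with token counts and prices from the tables above (the &lt;code&gt;whisper-1&lt;/code&gt; rows assume its flat per-minute rate):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// gpt-4o-transcribe: $6 per 1M audio input tokens, $10 per 1M output tokens
const cost2x = 6 * (11_856 / 1_000_000) + 10 * (2_048 / 1_000_000);
const cost3x = 6 * (7_904 / 1_000_000) + 10 * (2_048 / 1_000_000);
console.log(cost2x.toFixed(2), cost3x.toFixed(2)); // 0.09 0.07

// whisper-1 is a flat $0.006 per minute of audio
const whisper1x = 0.006 * (2_372 / 60); // full-length audio
const whisper3x = 0.006 * (791 / 60);   // 3x sped-up audio
console.log(Math.round((1 - whisper3x / whisper1x) * 100)); // 67 (% saved)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;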

&lt;h3&gt;
  
  
  Is It Accurate?
&lt;/h3&gt;

&lt;p&gt;I don’t know—I didn’t watch it, lol. That was the whole point. And if that answer makes you uncomfortable, buckle-up for this future we're hurtling toward. Boy, howdy.&lt;/p&gt;

&lt;p&gt;More helpfully, I didn’t compare word-for-word, but spot checks on the 2x and 3x versions looked solid. 4x speed was too fast—the transcription started getting hilariously weird. So, 2x and 3x seem to be the sweet spot between efficiency and fidelity, though it will obviously depend on how fast the people are speaking in the first place.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Not 4x?
&lt;/h3&gt;

&lt;p&gt;When I pushed it to 4x the results became &lt;a href="https://gist.github.com/georgemandis/1ec4ef084789f92ee06ac6283338a194" rel="noopener noreferrer"&gt;comically unusable&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs5me2erlz9up3n572lfr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs5me2erlz9up3n572lfr.png" alt="Output of a 4x transcription mostly repeating " width="800" height="702"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That sure didn't stop my call to summarize from &lt;a href="https://gist.github.com/georgemandis/1ec4ef084789f92ee06ac6283338a194#file-summarization-md" rel="noopener noreferrer"&gt;trying&lt;/a&gt; though.&lt;/p&gt;

&lt;p&gt;Hey, not the worst talk I've been to!&lt;/p&gt;

&lt;h3&gt;
  
  
  In Summary
&lt;/h3&gt;

&lt;p&gt;In short: to save time and money, consider doubling or tripling the speed of the audio you want to transcribe. The trade-off, as always, is fidelity, but the savings aren’t insignificant.&lt;/p&gt;

&lt;p&gt;Simple, fast, and surprisingly effective.&lt;/p&gt;

&lt;h3&gt;
  
  
  TL;DR
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;OpenAI charges for transcriptions based on audio duration (&lt;code&gt;whisper-1&lt;/code&gt;) or tokens (&lt;code&gt;gpt-4o-transcribe&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;You can &lt;strong&gt;speed up audio&lt;/strong&gt; with &lt;code&gt;ffmpeg&lt;/code&gt; before uploading to save time and money.&lt;/li&gt;
&lt;li&gt;This reduces audio tokens (or duration), lowering your bill.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;2x or 3x speed&lt;/strong&gt; works well.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;4x speed&lt;/strong&gt;? Probably too much—but fun to try.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you find problems with my math, have questions, or have found a more rigorous study qualitatively comparing different output speeds, please &lt;a href="https://george.mand.is/contact" rel="noopener noreferrer"&gt;get in touch&lt;/a&gt;! Or if you thought this was so cool you want to &lt;a href="https://george.mand.is/hire" rel="noopener noreferrer"&gt;hire me&lt;/a&gt; for something fun...&lt;/p&gt;

</description>
      <category>ai</category>
      <category>openai</category>
      <category>productivity</category>
      <category>ffmpeg</category>
    </item>
    <item>
      <title>Querying Random Blog Posts with Netlify Functions</title>
      <dc:creator>George Mandis</dc:creator>
      <pubDate>Wed, 04 Dec 2019 00:00:00 +0000</pubDate>
      <link>https://dev.to/georgemandis/querying-random-blog-posts-with-netlify-functions-487c</link>
      <guid>https://dev.to/georgemandis/querying-random-blog-posts-with-netlify-functions-487c</guid>
      <description>&lt;p&gt;Inspired by something &lt;a href="https://sivers.org"&gt;Derek Sivers&lt;/a&gt; implemented on his site, I decided to add a URL to this site that automatically redirects to a random blog post. You can find the link via the &lt;strong&gt;/dev/random&lt;/strong&gt; menu item on my &lt;a href="https://george.mand.is"&gt;website&lt;/a&gt; or simply go to &lt;a href="https://george.mand.is/random"&gt;george.mand.is/random&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I like it because it adds a quality that's tricky to capture on the web: "skimmability." It reminds me of being able to thumb through the pages of a book before committing.&lt;/p&gt;

&lt;h3&gt;
  
  
  With a Traditional Server Setup
&lt;/h3&gt;

&lt;p&gt;Setting this up on a traditional server would've been fairly straightforward. If it was running on Apache or NGINX for example it probably would've been just a matter of adding a line to the configuration file to redirect requests to another script on the server that could pick a blog post at random and tell the browser to redirect. There would be other implementation details, but that's the gist of it.&lt;/p&gt;

&lt;h3&gt;
  
  
  With Netlify
&lt;/h3&gt;

&lt;p&gt;This site, however, is hosted on &lt;a href="https://netlify.com"&gt;Netlify&lt;/a&gt;. In all the ways Netlify eases the development and deployment experience for some types of sites, doing relatively simple backend things often requires finding interesting workarounds.&lt;/p&gt;

&lt;p&gt;For this random URL redirection idea I was able to get this up and running without too much trouble using &lt;a href="https://www.netlify.com/products/functions/"&gt;Netlify Functions&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Here are the steps to take:&lt;/p&gt;

&lt;h4&gt;
  
  
  Install the Netlify Command Line Tool.
&lt;/h4&gt;

&lt;p&gt;This will allow you to set up and test your functions locally. You can find more information on the &lt;a href="https://cli.netlify.com/"&gt;documentation site&lt;/a&gt; on how to configure your project locally and connect it to one of your Netlify sites.&lt;/p&gt;

&lt;p&gt;Once you've successfully installed the command line tools and connected your local working folder to your site you can run &lt;code&gt;npm run dev&lt;/code&gt; in the console and access your site at &lt;code&gt;localhost:8888&lt;/code&gt; in the browser. Functions, redirects and other Netlify-specific features will behave just as though they're in production on Netlify's servers and allow us to test this feature as we're building it.&lt;/p&gt;
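
&lt;p&gt;If you haven't used it before, the setup looks roughly like this (these are standard Netlify CLI commands; your project may wrap &lt;code&gt;netlify dev&lt;/code&gt; in an npm script):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Install the Netlify CLI globally, then connect this folder to your site
npm install -g netlify-cli
netlify login
netlify link

# Serve the site, functions and redirects at localhost:8888
netlify dev

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;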

&lt;h4&gt;
  
  
  Setup Netlify functions.
&lt;/h4&gt;

&lt;p&gt;I suggest calling the folder &lt;code&gt;functions&lt;/code&gt; and configuring it via a &lt;code&gt;netlify.toml&lt;/code&gt; file instead of using the web interface. There is more information about how to set that up on Netlify's &lt;a href="https://docs.netlify.com/functions/configure-and-deploy/#configure-the-functions-folder"&gt;documentation page about configuring functions&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Setup Your Redirect
&lt;/h4&gt;

&lt;p&gt;Create a &lt;strong&gt;_redirects&lt;/strong&gt; file in your Netlify site and add this line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  /random /.netlify/functions/random 302

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You can also set this up in your &lt;code&gt;netlify.toml&lt;/code&gt; file, which is explained in &lt;a href="https://www.netlify.com/blog/2019/01/16/redirect-rules-for-all-how-to-configure-redirects-for-your-static-site/"&gt;this blog post&lt;/a&gt;. My site has a lot of simple redirects though and I find the separation to be more manageable.&lt;/p&gt;
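
&lt;p&gt;For reference, the equivalent rule in &lt;code&gt;netlify.toml&lt;/code&gt; looks something like this (field names per Netlify's redirect documentation):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[[redirects]]
  from = "/random"
  to = "/.netlify/functions/random"
  status = 302

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;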

&lt;h4&gt;
  
  
  Selecting a Random URL From Your Blog
&lt;/h4&gt;

&lt;p&gt;We'll need a way to have all the URLs available in our function. This is the trickier part and will vary depending on how you built your site. There are many ways to do it, but here's mine:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a special URL that returns a JSON feed that's nothing but URLs for all my blog posts&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;node-fetch&lt;/code&gt; in my function to pull in that data and pick one at random&lt;/li&gt;
&lt;li&gt;Send information in the header response to tell the browser to perform a 302 redirect to the random selection&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I debated adding some measure of security to this special URL, but decided it didn't much matter. It's really no different than a sitemap and I've ensured that only blog post URLs are presented in this feed. You can see it here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://george.mand.is/_all.json"&gt;george.mand.is/_all.json&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You'll notice it's returning relative URLs. This is helpful for testing it locally.&lt;/p&gt;

&lt;p&gt;I found creating this feed fairly straightforward with &lt;a href="https://11ty.io"&gt;Eleventy&lt;/a&gt; but you could probably do this with whatever static generator you're using. If you're using &lt;a href="https://jekyllrb.com/"&gt;Jekyll&lt;/a&gt; I'd suggest taking a look at my &lt;a href="https://github.com/snaptortoise/jekyll-json-feeds"&gt;Jekyll JSON feed templates&lt;/a&gt; on GitHub.&lt;/p&gt;
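
&lt;p&gt;As a rough sketch of what such a template can look like in Eleventy with Nunjucks (the collection name here is illustrative, not my actual template):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
permalink: /_all.json
---
{
  "posts": [
    {% for post in collections.posts %}"{{ post.url }}"{% if not loop.last %},{% endif %}{% endfor %}
  ]
}

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;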

&lt;h4&gt;
  
  
  Creating the Function
&lt;/h4&gt;

&lt;p&gt;Last but not least we need to create the actual function! I've written mine in Node.js, but you could write yours in Go as well.&lt;/p&gt;

&lt;p&gt;It's worth noting that the directory structure influences the URL structure for your Netlify function. I've saved the file that contains my function at &lt;code&gt;functions/random.js&lt;/code&gt; in my project folder. This function's endpoint is automatically created at &lt;code&gt;/.netlify/functions/random&lt;/code&gt; both in production and locally.&lt;/p&gt;

&lt;p&gt;Here is the code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/**
 * Random Blog Post (/random)
 * ===
 * George Mandis (george.mand.is)
 */

require("dotenv").config();
const fetch = require("node-fetch");

exports.handler = function(event, context, callback) {
  fetch(`${process.env.URL}/_all.json`, { headers: { "Accept": "application/json" } })
    .then(res =&amp;gt; res.json())
    .then(response =&amp;gt; {
      // Math.floor keeps the random index evenly distributed in [0, posts.length - 1]
      const randomPost =
        response.posts[Math.floor(Math.random() * response.posts.length)];

      callback(null, {
        statusCode: 302,
        headers: {
          "Location": `${process.env.URL}${randomPost}`
        }
      });

    });
};

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If you've completed all of these steps you should be able to test your redirection URL locally at &lt;code&gt;localhost:8888/random&lt;/code&gt; and see a random blog post returned!&lt;/p&gt;

&lt;p&gt;So far I really enjoy this feature. Anecdotally I'm noticing a few more hits on older posts than normal, but it's nice even for my own sake. It's fun to be able to flip back through the posts I've written over the years.&lt;/p&gt;

</description>
      <category>netlify</category>
      <category>random</category>
      <category>serverless</category>
      <category>node</category>
    </item>
    <item>
      <title>Introducing Bubo RSS: An Absurdly Minimalist RSS Feed Reader</title>
      <dc:creator>George Mandis</dc:creator>
      <pubDate>Thu, 28 Nov 2019 00:00:00 +0000</pubDate>
      <link>https://dev.to/georgemandis/introducing-bubo-rss-an-absurdly-minimalist-rss-feed-reader-2180</link>
      <guid>https://dev.to/georgemandis/introducing-bubo-rss-an-absurdly-minimalist-rss-feed-reader-2180</guid>
      <description>&lt;p&gt;&lt;a href="https://github.com/georgemandis/bubo-rss"&gt;Bubo Reader&lt;/a&gt; is a somewhat irrationally minimalist RSS and JSON feed reader you can deploy on &lt;a href="https://netlify.com"&gt;Netlify&lt;/a&gt; in a few simple steps. I created this one weekend last summer after nostalgically lamenting the &lt;a href="https://killedbygoogle.com/"&gt;demise of Google Reader&lt;/a&gt; many years ago. It's named after the &lt;a href="https://www.youtube.com/watch?v=MYSeCfo9-NI"&gt;silly robot owl&lt;/a&gt; from the film &lt;em&gt;Clash of the Titans (1981)&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;View the demo&lt;/strong&gt; → &lt;a href="http://bubo-rss-demo.netlify.com/"&gt;bubo-rss-demo.netlify.com&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;View the source code&lt;/strong&gt; → &lt;a href="https://github.com/georgemandis/bubo-rss"&gt;github.com/georgemandis/bubo-rss&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will find instructions on the &lt;a href="https://github.com/georgemandis/bubo-rss"&gt;project's GitHub page&lt;/a&gt; for how to make your own instance of Bubo Reader, update the list of feeds and deploy to Netlify. If you already have interconnected accounts on those two services it's really quite straightforward!&lt;/p&gt;

&lt;p&gt;Read on for more information about the original impetus, design goals and thoughts surrounding this project.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Does "Irrationally Minimalist" Mean?
&lt;/h3&gt;

&lt;p&gt;Many RSS readers—including the former Google Reader—would pull the contents of a post into your feed so you could read everything in one place. Although I completely understand why someone would want to do that, I decided even that introduced too much complexity for my liking.&lt;/p&gt;

&lt;p&gt;My goal with Bubo Reader was to be able to see a list of the most recent posts from websites I like in one place with links to read them if I want. That's it. If I want to read something, I'll click through and read it on the publisher's site. If I want to keep track of what I've clicked on and read I can reflect that using the &lt;code&gt;a:visited&lt;/code&gt; pseudo selector in my CSS.&lt;/p&gt;

&lt;p&gt;I think it's often overlooked how many problems the browser has already solved for us. Sometimes modern web development feels like fighting against the problems we've already solved as we try to build things that pretend they don't live in the browser.&lt;/p&gt;

&lt;p&gt;Bubo Reader does not store posts in a database or keep track of what I've read. If an item is no longer available in the site's feed then it no longer appears in Bubo. If I miss something, that's just life. I can live with that.&lt;/p&gt;

&lt;h3&gt;
  
  
  Design Goals for This Project
&lt;/h3&gt;

&lt;p&gt;Many RSS feed reader services have sprouted up since Google Reader left, but they all do more than I need. All I wanted was something that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Had an absurdly simple interface, relying almost entirely on default HTML/browser behaviors and functionality&lt;/strong&gt;. I'm using lesser-known HTML like the &lt;a href="https://developer.mozilla.org/en-US/docs/Web/HTML/Element/details"&gt;&lt;code&gt;details&lt;/code&gt; and &lt;code&gt;summary&lt;/code&gt; elements&lt;/a&gt; to hide and reveal content instead of recreating that wheel in JavaScript, for example.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Could be themed with CSS or mildly extended using JavaScript&lt;/strong&gt;. For my personal copy of Bubo Reader I changed the font to use the &lt;code&gt;system-ui&lt;/code&gt; font and bumped the font-size a little, but that's mostly it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Didn't worry about pulling the link's &lt;em&gt;content&lt;/em&gt; into the reader interface&lt;/strong&gt;. I'm happy to read most content on the site it originated from, and many feeds only provide a short description of the content as it is. Mostly I wanted a single dashboard to know when new stuff is published and available.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Didn't rely on a database to see what I've read or keep an archive of content over time&lt;/strong&gt;. Again, I found the browser has a way of taking care of this for me. The &lt;code&gt;a:visited&lt;/code&gt; pseudo selector is enough indication for me to know whether or not I've already read something.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
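
&lt;p&gt;For example, a feed's items can hide behind a native disclosure widget with no JavaScript at all. Something along these lines (illustrative markup, not Bubo's exact template):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;details&amp;gt;
  &amp;lt;summary&amp;gt;Some Site You Follow&amp;lt;/summary&amp;gt;
  &amp;lt;ul&amp;gt;
    &amp;lt;li&amp;gt;&amp;lt;a href="https://example.com/a-recent-post"&amp;gt;A recent post title&amp;lt;/a&amp;gt;&amp;lt;/li&amp;gt;
  &amp;lt;/ul&amp;gt;
&amp;lt;/details&amp;gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;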

&lt;h3&gt;
  
  
  A Healthier Way to Read Online
&lt;/h3&gt;

&lt;p&gt;I also think this is a healthier way to consume information and news. So much of social media is designed to reward things that hit the dopamine buttons in our brains, manipulate our common, &lt;a href="https://en.wikipedia.org/wiki/Fear_of_missing_out"&gt;human propensity for anxiety&lt;/a&gt; or distribute outright propaganda.&lt;/p&gt;

&lt;h3&gt;Refreshing Content&lt;/h3&gt;

&lt;p&gt;The beauty of running Bubo on Netlify is that you can &lt;a href="https://www.netlify.com/docs/webhooks/#incoming-webhooks"&gt;set up a Build Hook&lt;/a&gt; to rebuild the site whenever you want to "refresh" the list of feeds. I'm using &lt;a href="https://ifttt.com"&gt;IFTTT&lt;/a&gt; to trigger rebuilds once an hour, which is a perfectly sane rate to consume information at. You could do the same, use another service like Zapier or EasyCron, or set up a cron job on your server or even your local machine to ping the hook as often as you wish.&lt;/p&gt;
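&lt;p&gt;If you'd rather ping the hook yourself, a tiny Node script is enough. This is just a sketch: the hook ID is a placeholder you'd create in your site's Build hooks settings, and I'm assuming Netlify's usual &lt;code&gt;api.netlify.com/build_hooks/&lt;/code&gt; URL format:&lt;/p&gt;

```javascript
// Sketch: trigger a Netlify Build Hook to rebuild the site.
// Build hooks are fired with an empty POST request to the hook URL.
const buildHookUrl = (hookId) =>
  `https://api.netlify.com/build_hooks/${hookId}`;

const triggerRebuild = (hookId) =>
  fetch(buildHookUrl(hookId), { method: "POST" });

// e.g. from a local cron job (the ID below is a placeholder):
// triggerRebuild("your-hook-id-here");
```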

&lt;h3&gt;What About Authentication?&lt;/h3&gt;

&lt;p&gt;There is no authentication required for Bubo Reader. Netlify does offer Basic Authentication under their &lt;a href="https://www.netlify.com/pricing/"&gt;Pro plan&lt;/a&gt;, which would probably be the easiest solution to implement. You could also use their &lt;a href="https://www.netlify.com/docs/identity/?_ga=2.147267447.1334380953.1567004741-1681444902.1549770801"&gt;Identity&lt;/a&gt; feature to add some authentication. I don't subscribe to any private or sensitive feeds, so at the moment that isn't much of a priority for this project.&lt;/p&gt;

&lt;h3&gt;Adding New Feeds&lt;/h3&gt;

&lt;p&gt;Find them in the site's source code and add them to the &lt;code&gt;feeds.json&lt;/code&gt; file. This is arguably the trickiest part of the whole setup, but I'm willing to bet the people who clone projects from GitHub and deploy them to Netlify are okay with editing a little JSON from time to time.&lt;/p&gt;
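&lt;p&gt;A hypothetical &lt;code&gt;feeds.json&lt;/code&gt; might look something like this; check the repository for the exact shape the project expects:&lt;/p&gt;

```json
{
  "blogs": [
    "https://example.com/feed.xml",
    "https://another-blog.example/rss"
  ],
  "news": [
    "https://some-news-site.example/feed"
  ]
}
```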

&lt;p&gt;The first version of this project used &lt;a href="https://github.com/puppeteer/puppeteer"&gt;Puppeteer&lt;/a&gt; to extract the feeds from a site. This was actually quite cool, but it would hang or fail periodically. Builds were slow, and there was a lot of work making sure things didn't time out or use too much memory on the tiny server I'd set up. Parsing a list of known RSS feeds turned out to be much simpler and faster.&lt;/p&gt;

&lt;p&gt;It's on my list to look into converting this into a serverless version that could run using Netlify's Functions, but after using my own project for a month I realized it didn't make the thing feel much more usable to me.&lt;/p&gt;

&lt;h3&gt;In Summary&lt;/h3&gt;

&lt;p&gt;If you find Bubo Reader useful, interesting or it inspires a project of your own I'd love to hear about it! Again, you'll find &lt;a href="https://github.com/georgemandis/bubo-rss"&gt;the code on GitHub&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>netlify</category>
      <category>rss</category>
    </item>
    <item>
      <title>What's the Difference between link preload, preconnect and prefetch</title>
      <dc:creator>George Mandis</dc:creator>
      <pubDate>Thu, 14 Nov 2019 00:00:00 +0000</pubDate>
      <link>https://dev.to/georgemandis/what-s-the-difference-between-link-preload-preconnect-and-prefetch-49na</link>
      <guid>https://dev.to/georgemandis/what-s-the-difference-between-link-preload-preconnect-and-prefetch-49na</guid>
      <description>&lt;p&gt;Lately I've been running &lt;a href="https://developers.google.com/web/fundamentals/performance/resource-prioritization"&gt;Lighthouse&lt;/a&gt; audits some of my client projects as well as my own. With my own projects I typically regard it as an opportunity to dive into some nitty-gritty optimizations that I might typically ignore for a client, only because the billable hours might be better spent elsewhere.&lt;/p&gt;

&lt;p&gt;One of those areas of optimization is resource prioritization with link tags such as these:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;link rel="preload"&amp;gt;
&amp;lt;link rel="preconnect"&amp;gt;
&amp;lt;link rel="prefetch"&amp;gt; 

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;I was generally aware of these and had even used them, but I couldn't actually explain the difference between most of them offhand. When a Lighthouse audit recently led me to the &lt;a href="https://developers.google.com/web/fundamentals/performance/resource-prioritization"&gt;Google Developer documentation about resource prioritization&lt;/a&gt; I saw a chance to get better acquainted with the differences between them.&lt;/p&gt;

&lt;p&gt;It'll probably &lt;em&gt;serve&lt;/em&gt; you best (Pun!) to remember them in this order:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;preload&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;preconnect&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;prefetch&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Any time you use one of these &lt;code&gt;&amp;lt;link rel="pre*"&amp;gt;&lt;/code&gt; tags you're telling the browser about a resource that you think is more important than others on the page and should be prioritized over them in some fashion. The order they're presented in above runs roughly from most important to least.&lt;/p&gt;

&lt;p&gt;How exactly are these things prioritized? Well, rather than completely plagiarize some excellent documentation (which I started to realize was going to happen as I kept writing this blog post) you can read more about the details of each of these here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developers.google.com/web/fundamentals/performance/resource-prioritization"&gt;https://developers.google.com/web/fundamentals/performance/resource-prioritization&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's my short summary of when you might use these:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;preload&lt;/code&gt;&lt;/strong&gt; : Use for fonts and super-critical assets that you absolutely know you'll be using.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;preconnect&lt;/code&gt;&lt;/strong&gt; : Use for CDNs or any resource you know you'll be loading from another domain.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;prefetch&lt;/code&gt;&lt;/strong&gt; : Any content we &lt;em&gt;think&lt;/em&gt; we'll need soon after some kind of user interaction. An interesting example is to prefetch the HTML content for another page that we think the user will be navigating to.&lt;/li&gt;
&lt;/ul&gt;
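&lt;p&gt;In practice the three look something like this. The file names and CDN host are made up; note that &lt;code&gt;preload&lt;/code&gt; needs an &lt;code&gt;as&lt;/code&gt; attribute, and preloaded fonts need &lt;code&gt;crossorigin&lt;/code&gt;:&lt;/p&gt;

```html
<!-- A font we absolutely know we'll use -->
<link rel="preload" href="/fonts/heading.woff2" as="font" type="font/woff2" crossorigin>

<!-- A third-party origin we'll be requesting assets from -->
<link rel="preconnect" href="https://cdn.example.com">

<!-- A page we think the user will navigate to next -->
<link rel="prefetch" href="/next-article.html">
```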

</description>
      <category>webdev</category>
      <category>link</category>
      <category>optimization</category>
      <category>lighthouse</category>
    </item>
    <item>
      <title>Detecting invalid dates in JavaScript</title>
      <dc:creator>George Mandis</dc:creator>
      <pubDate>Thu, 27 Jun 2019 00:00:00 +0000</pubDate>
      <link>https://dev.to/georgemandis/detecting-invalid-dates-in-javascript-1plc</link>
      <guid>https://dev.to/georgemandis/detecting-invalid-dates-in-javascript-1plc</guid>
      <description>&lt;p&gt;When you want to instantiate a new instance of a JavaScript date object you can pass along date information as a string like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;new Date('June 27, 2019')

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In fact, it's somewhat forgiving in accepting a variety of date formats. As of this writing, all of these are valid ways to create a date object in Chrome 75:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;new Date('June 27, 2019')
new Date('2019, June 27')
new Date('2019/06/27')
new Date('6/27/19')
new Date('6-27-19')
new Date('6/27/2019')
new Date('6-27-19')
new Date('2019-06-27')
new Date('6/27/19 11:00')
new Date('2019-06-27')

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;For production I wouldn't suggest doing this. I'd suggest leaning on an established library like &lt;a href="https://momentjs.com/"&gt;Moment.js&lt;/a&gt; or one of the many smaller alternatives suggested in &lt;a href="https://github.com/you-dont-need/You-Dont-Need-Momentjs"&gt;this GitHub repository&lt;/a&gt;. But if you're just experimenting with something and need a quick-and-dirty solution to creating a date object, knowing you can do this is handy.&lt;/p&gt;

&lt;p&gt;It's also handy to know if you've accidentally created an invalid date. Looking at the output in the console, this seemed like this should be easy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;new Date('My Birthday')
// Invalid Date

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You might expect to be able to do something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const birthday = new Date('My Birthday')

if (birthday === "Invalid Date") { ... }

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;But you would be mistaken! While it's generally considered a best practice to test for strict equality and use that triple equals sign, it's not helping you here. The value spit out to the console isn't actually a string, strange as that might seem.&lt;/p&gt;

&lt;p&gt;Technically, this will work:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if (birthday == "Invalid Date") { ... }

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;But I don't fully trust it and am not sure about support in older browsers.&lt;/p&gt;

&lt;p&gt;What's helpful to remember is that, underneath, a date object is just a number, provided the date was valid:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const day1 = new Date('June 27, 2019')
const day2 = new Date('My Birthday')

console.log(day1.valueOf())
// 1561618800000 

console.log(day2.valueOf())
// NaN

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;That means we can use the delightfully bizarre &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/isNaN"&gt;isNaN()&lt;/a&gt; function to figure out if our date is formatted correctly or not:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if (isNaN(birthday)) { ... }

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Voilà.&lt;/p&gt;
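&lt;p&gt;If you want something a little sturdier, you can wrap the same trick in a helper. This is my own sketch, not code from the libraries mentioned above: &lt;code&gt;Number.isNaN&lt;/code&gt; avoids the type coercion of the global &lt;code&gt;isNaN&lt;/code&gt;, and the &lt;code&gt;instanceof&lt;/code&gt; check rejects things that were never dates to begin with:&lt;/p&gt;

```javascript
// Returns true only for Date objects holding a valid date.
function isValidDate(date) {
  return date instanceof Date && !Number.isNaN(date.getTime());
}

isValidDate(new Date("June 27, 2019")); // true
isValidDate(new Date("My Birthday"));   // false
isValidDate("2019-06-27");              // false: a string, not a Date
```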

&lt;p&gt;If you're new to all of this JavaScript/Date/NaN chicanery, I'll leave you with this to copy-and-paste into your console:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;typeof NaN

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



</description>
      <category>javascript</category>
      <category>dates</category>
      <category>isnan</category>
    </item>
    <item>
      <title>Edge Canary Supports Shape Detection API</title>
      <dc:creator>George Mandis</dc:creator>
      <pubDate>Tue, 21 May 2019 00:00:00 +0000</pubDate>
      <link>https://dev.to/georgemandis/edge-canary-supports-shape-detection-api-4ogl</link>
      <guid>https://dev.to/georgemandis/edge-canary-supports-shape-detection-api-4ogl</guid>
      <description>&lt;p&gt;The macOS version of Microsoft Edge Insider was released today at &lt;a href="https://www.microsoftedgeinsider.com/en-us/download"&gt;microsoftedgeinsider.com&lt;/a&gt;. Currently only the Canary Channel is available, which is updated daily. The more stable Beta and Dev channels will likely become available in the coming weeks as we run the gauntlet between Microsoft Build, Google I/O, WWDC and probably some other developer-centric events I'm forgetting about that happen this time of year.&lt;/p&gt;

&lt;p&gt;The new version of Edge is built on &lt;a href="https://www.chromium.org/"&gt;Chromium&lt;/a&gt;, which is kind of a bittersweet victory for the web depending on how you look at it. On the bitter side, choice and differentiation are part of what makes the open web great. No one company owns the web, nor should one. For all the things Chromium does well, adopting Google's project to power their browser feels like Microsoft ceding a little too much control to Google's grip on the web, even if I think they've generally been decent stewards.&lt;/p&gt;

&lt;p&gt;On the sweeter side, troubleshooting and debugging a Chromium-based browser is going to be &lt;em&gt;much&lt;/em&gt; nicer than dealing with the old Edge! One thing I wondered was whether the new Edge would support the Shape Detection API, which is currently available in Chromium hidden behind the experimental web platform features flag found at &lt;code&gt;chrome://flags&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;I asked about this while I was speaking at Microsoft Build the other week:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Does anyone know if the new MS Edge based on Chromium has an equivalent of the "Experiments" page (chrome://flags) to enable experimental web platform features?  &lt;/p&gt;

&lt;p&gt;You can enable the Shape Detection API on Chromium, which sort of surprised me. Wondering if Edge inherited this...&lt;/p&gt;

&lt;p&gt;— George Mandis (@georgeMandis) &lt;a href="https://twitter.com/georgeMandis/status/1124443813110374400?ref_src=twsrc%5Etfw"&gt;May 3, 2019&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Long story short: it looks like you can! Just go to &lt;code&gt;edge://flags&lt;/code&gt; and you'll find an identical screen.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;To answer my own question: YES! In Edge Canary on macOS you can find the flags at: edge://flags/  &lt;/p&gt;

&lt;p&gt;I also tested the Shape Detection API, and it works!!!  &lt;/p&gt;

&lt;p&gt;Long-term, I'm excited to think this means wide-support for a very cool feature:&lt;a href="https://t.co/wj4CKixLGZ"&gt;https://t.co/wj4CKixLGZ&lt;/a&gt; &lt;a href="https://t.co/Dx4Pt1fZch"&gt;https://t.co/Dx4Pt1fZch&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;— George Mandis (@georgeMandis) &lt;a href="https://twitter.com/georgeMandis/status/1130920860330602496?ref_src=twsrc%5Etfw"&gt;May 21, 2019&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Once you've enabled this you should be able to view this demo at my slightly neglected project &lt;strong&gt;Debugging Art&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://debuggingart.com/sketch/shape-detection-dva-brata/"&gt;debuggingart.com/sketch/shape-detection-dva-brata/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Though my feelings are still mixed on the state of browser homogeneity, I &lt;em&gt;do&lt;/em&gt; think it's cool that features like this might become widespread more quickly.&lt;/p&gt;
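&lt;p&gt;Since the API is still experimental, it's worth feature-detecting before you rely on it. A hedged sketch (the detector classes only exist behind the flag in Chromium-based browsers):&lt;/p&gt;

```javascript
// Sketch: feature-detect the Shape Detection API before using it.
function shapeDetectionSupported() {
  return "FaceDetector" in globalThis || "BarcodeDetector" in globalThis;
}

// Resolves to an array of detected faces, or null when unsupported.
async function detectFaces(image) {
  if (!("FaceDetector" in globalThis)) return null;
  const detector = new FaceDetector();
  return detector.detect(image);
}
```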

</description>
      <category>microsoftedge</category>
      <category>shapedetectionapi</category>
    </item>
    <item>
      <title>Huff Post, Konami Code and Pet Photos Makes the News More Palatable</title>
      <dc:creator>George Mandis</dc:creator>
      <pubDate>Wed, 10 Apr 2019 00:00:00 +0000</pubDate>
      <link>https://dev.to/georgemandis/huff-post-konami-code-and-pet-photos-makes-the-news-more-palatable-4p9l</link>
      <guid>https://dev.to/georgemandis/huff-post-konami-code-and-pet-photos-makes-the-news-more-palatable-4p9l</guid>
      <description>

&lt;p&gt;I've always kept tabs on my public profile out on the web. Sometimes it's surprising what you can find. You might have &lt;a href="/2018/03/does-a-google-scholar-page-help-seo/"&gt;your own Google Scholar page&lt;/a&gt; because of a college paper you wrote when you were 18 or find out you were scheduled to speak at a conference in another country well before the organizers reached out to you.&lt;/p&gt;

&lt;p&gt;Today I found out an old project of mine is being used on HuffPost—or should I say, FluffPost?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--agxsvUWk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://moonbase.georgemandis.com/huffpost-konami/fluffpost-5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--agxsvUWk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://moonbase.georgemandis.com/huffpost-konami/fluffpost-5.jpg" alt="FluffPost"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go check it out! If you go to the &lt;a href="https://huffpost.com"&gt;HuffPost's homepage&lt;/a&gt; and enter the &lt;a href="https://en.wikipedia.org/wiki/Konami_Code"&gt;Konami Code&lt;/a&gt; all of the images turn into adorable photos of people's pets.&lt;/p&gt;

&lt;p&gt;It has not been the best couple years for happy, uplifting news, but seeing some of these headlines accompanied by cute dogs and cats made it a lot more palatable. Cases in point:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FAbkMXi_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://moonbase.georgemandis.com/huffpost-konami/fluffpost-3.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FAbkMXi_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://moonbase.georgemandis.com/huffpost-konami/fluffpost-3.jpg" alt="Feisty Mnuchin"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Mnuchin definitely looks feisty. And that team Barr is assembling looks like serious business. I'm sure they'll get to the bottom of things.&lt;/p&gt;

&lt;h2&gt;How did I find this?&lt;/h2&gt;

&lt;p&gt;I accidentally stumbled across this not via a Google search but by doing a search for my name on GitHub in other people's code repositories. This might seem a little odd, but if you've created or contributed to a lot of open-source projects it's an interesting view into how people might actually be using what you've made.&lt;/p&gt;

&lt;p&gt;It's also a good way to see if a private repository you handed off to another development agency accidentally found its way into the public... but that's a different story!&lt;/p&gt;

&lt;p&gt;In looking for my name I came across references to a project of mine in what looked like code scraped from the homepage of Huff Post. The code contains comments with information about licensing, the version and a byline with my name, which is what helped me find this.&lt;/p&gt;

&lt;p&gt;I decided to go to HuffPost and try the code out. When I entered the Konami Code and saw the pets I laughed, and when I went to view the source code for the page I was tickled to see my name and code sitting there.&lt;/p&gt;

&lt;h2&gt;About the project&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/snaptortoise/konami-js"&gt;Konami-JS&lt;/a&gt; is an open-source easter-egg project I made back in 2009. It lets you add an Easter Egg to any website when a visitor enters the Konami Code. The differentiator, at the time, was my project worked on mobile devices.&lt;/p&gt;

&lt;p&gt;Most of what I find are people putting Konami Code easter eggs on their GitHub pages, blogs and other personal projects. Occasionally it pops up in bigger places like Marvel.com or Newsweek. I recently discovered it was even being used on &lt;a href="https://teslamotorsclub.com/tmc/threads/easter-egg-in-the-design-studio-konami-code.7944/"&gt;Tesla's online design tool&lt;/a&gt; back in 2012—I found remnants of it on archive.org &lt;a href="https://web.archive.org/web/20130408184419js_/http://www.teslamotors.com/sites/all/themes/tesla/configurator/js/libs/konami.1.3.3.pack.js?c"&gt;here&lt;/a&gt;. I guess now I can add Huff Post to that list!&lt;/p&gt;

&lt;p&gt;By modern JavaScript standards the script is kind of archaic, but it lives in a sort of gray area where I'm not sure it stands to gain much by modernizing it. The code footprint is compact but understandable, and it's easy to plop into any existing project with a quick copy-and-paste. Beginners can and do use my code, all the time.&lt;/p&gt;
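&lt;p&gt;For the curious, the core idea is small enough to sketch in a few lines of vanilla JavaScript. This is &lt;em&gt;not&lt;/em&gt; the actual Konami-JS source, just an illustration of the technique: track progress through the expected key sequence and fire a callback when it completes:&lt;/p&gt;

```javascript
const KONAMI = [
  "ArrowUp", "ArrowUp", "ArrowDown", "ArrowDown",
  "ArrowLeft", "ArrowRight", "ArrowLeft", "ArrowRight", "b", "a"
];

// Returns a key handler that fires `callback` once the full
// sequence has been entered, then resets.
function createKonamiListener(callback, sequence = KONAMI) {
  let progress = 0;
  return function onKey(key) {
    if (key === sequence[progress]) {
      progress += 1;
    } else {
      // A stray key restarts the match (or counts as a fresh start).
      progress = key === sequence[0] ? 1 : 0;
    }
    if (progress === sequence.length) {
      progress = 0;
      callback();
    }
  };
}

// In a browser you'd wire it up like:
// const onKey = createKonamiListener(() => alert("Easter egg!"));
// document.addEventListener("keydown", (event) => onKey(event.key));
```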

&lt;p&gt;As the barrier to entry into web development seems to grow a little steeper each year, I'm happy to have contributed a project that's being used by novices and high-traffic websites alike. I'm also happy that it's a joyful, frivolous contribution that's mostly been used to make the web a sillier, more fun place.&lt;/p&gt;

&lt;p&gt;Watch me give a talk about my experience maintaining this "frivolous but popular" project at &lt;a href="https://odessajs.org"&gt;OdessaJS&lt;/a&gt; in 2017:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=F3xI3ps7syI"&gt;https://www.youtube.com/watch?v=F3xI3ps7syI&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Lastly, I &lt;em&gt;do&lt;/em&gt; have open issues and discussions &lt;a href="https://github.com/snaptortoise/konami-js"&gt;on GitHub&lt;/a&gt; surrounding what Konami-JS 2.0 could/should look like, a decade later. Your contributions are welcome.&lt;/p&gt;


</description>
      <category>javascript</category>
      <category>konami</category>
      <category>opensource</category>
      <category>frivolous</category>
    </item>
    <item>
      <title>Creating Trusted SSL Certificates for localhost on MacOS</title>
      <dc:creator>George Mandis</dc:creator>
      <pubDate>Thu, 27 Sep 2018 07:00:00 +0000</pubDate>
      <link>https://dev.to/georgemandis/creating-trusted-ssl-certificates-for-localhost-on-macos-1bn6</link>
      <guid>https://dev.to/georgemandis/creating-trusted-ssl-certificates-for-localhost-on-macos-1bn6</guid>
      <description>&lt;p&gt;Recently I was working with the &lt;a href="https://stripe.com/docs/stripe-js/elements"&gt;Stripe Elements&lt;/a&gt; toolkit for a piece of custom billing software &lt;a href="https://snaptortoise"&gt;my web development company&lt;/a&gt; uses. The library is a collection of well-designed UI components for interfacing with Stripe’s services. Specifically I was working with their &lt;a href="https://stripe.com/docs/stripe-js/elements/payment-request-button"&gt;Payment Request Button&lt;/a&gt; which is designed to work with &lt;a href="https://www.w3.org/TR/payment-request/"&gt;the W3C Payment Request API standard&lt;/a&gt; currently being discussed.&lt;/p&gt;

&lt;p&gt;To implement the Payment Request Button you need to be using a secure (https) connection. This isn’t difficult to set up using &lt;a href="https://www.openssl.org/"&gt;OpenSSL&lt;/a&gt;, but the Payment Request API requires a little more than a basic SSL certificate in place — it has to be trusted as well. Stripe’s suggestion to use a service like &lt;a href="https://ngrok.com/"&gt;ngrok&lt;/a&gt; is a good one, but not necessarily always possible or desirable. Such was the case for my project.&lt;/p&gt;

&lt;p&gt;Creating a trusted SSL certificate for your localhost server on macOS turned out to require a lot of steps and a fair amount of research. When I finally generated the certificates and had everything running smoothly I decided to package it up in a single command you could run in the terminal and made a GitHub project out of it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/snaptortoise/ssloca"&gt;github.com/snaptortoise/ssloca&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, instead of copying-and-pasting cryptic OpenSSL commands into the terminal like I’m sure so many developers before me have done, you can generate and trust these certificates in two lines:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone git@github.com:snaptortoise/ssloca.git &amp;amp;&amp;amp; cd ssloca
./create-local-certificates.sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You should end up with a folder called &lt;code&gt;certs&lt;/code&gt; containing all the locally generated certificates, including the root certificate and the certificate for your server. More information on what is generated and how can be &lt;a href="https://github.com/snaptortoise/ssloca/blob/master/create-local-certificates.sh"&gt;found in the code&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I called the project &lt;a href="https://github.com/snaptortoise/ssloca"&gt;SSLoca&lt;/a&gt; — a portmanteau of SSL, local and &lt;em&gt;loca&lt;/em&gt;, Spanish for “crazy.” Because getting &lt;strong&gt;SSL&lt;/strong&gt; to work &lt;strong&gt;locally&lt;/strong&gt; made me feel a little &lt;strong&gt;crazy&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;There’s plenty of room for improvement in my script, and I know there are lots of developers out there who have jumped through the same hoops to solve this problem. Please give it a look, let me know if you found it useful and by all means submit a pull request if you have thoughts on how to improve it.&lt;/p&gt;

</description>
      <category>localhost</category>
      <category>ssl</category>
      <category>openssl</category>
    </item>
    <item>
      <title>A framework-agnostic explanation of web components</title>
      <dc:creator>George Mandis</dc:creator>
      <pubDate>Tue, 31 Oct 2017 07:00:00 +0000</pubDate>
      <link>https://dev.to/georgemandis/a-framework-agnostic-explanation-of-web-components-20m</link>
      <guid>https://dev.to/georgemandis/a-framework-agnostic-explanation-of-web-components-20m</guid>
      <description>&lt;p&gt;Last month I spoke at JSFoo and really enjoyed my time there. Besides the overall experience and amazing people there were some very good talks as well! One of them was by Rahat Khanna from Apple on web components.&lt;/p&gt;

&lt;p&gt;Web components are one of those modern development tools that are hard to talk about in a way that’s framework agnostic. Rahat’s talk manages to do just that — discuss web components in a way that’s completely separate from frameworks like React, Angular and Vue. I often prefer these kinds of talks that home in on core ideas, concepts and APIs rather than the abstractions we build up around them. Frameworks come and go, but the underlying ideas end up sticking around… hopefully!&lt;/p&gt;

&lt;p&gt;His talk inspired me to try a framework-agnostic approach to a client project using web components. It seemed like a good idea and turned out to be a &lt;em&gt;great&lt;/em&gt; one! If you’re just getting started with web components or need a little inspiration you should watch his talk:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/71JdaRofCgA"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>webcomponents</category>
      <category>talks</category>
    </item>
    <item>
      <title>Resources and strategies for remote workers</title>
      <dc:creator>George Mandis</dc:creator>
      <pubDate>Sun, 01 Oct 2017 07:00:00 +0000</pubDate>
      <link>https://dev.to/georgemandis/resources-and-strategies-for-remote-workers-ej9</link>
      <guid>https://dev.to/georgemandis/resources-and-strategies-for-remote-workers-ej9</guid>
      <description>&lt;p&gt;A little over 10 years ago I threw myself into freelance web development without thinking it through too thoroughly. Since then I’ve thought on many occasions that my ignorance was probably to my benefit. If I’d known all the stuff I didn’t know I’m not so sure I would’ve done it!&lt;/p&gt;

&lt;p&gt;There are plenty of things I would have done differently, better or perhaps not at all if given the chance to do it again. There are also some surprisingly good strategies and habits I managed to luck into over the past decade! One of those things was figuring out how to find work when I was just starting out.&lt;/p&gt;

&lt;p&gt;Recently I was in India speaking at &lt;a href="https://youtu.be/R0-XLrr8icY"&gt;JSFoo&lt;/a&gt; about &lt;a href="https://midi.mand.is"&gt;JavaScript, MIDI and Tiny Computers&lt;/a&gt;. In one of the introductory slides I mentioned &lt;a href="https://george.mand.is/2016/11/my-ignite-portland-talk/"&gt;my year spent traveling and working remotely&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After my talk, an earnest developer came up to me and asked about getting started as a freelancer and remote worker. That conversation inspired me to put together this resource on GitHub:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Remote Work List –&lt;/strong&gt; &lt;a href="https://github.com/georgemandis/remote-working-list"&gt;github.com/georgemandis/remote-working-list&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s a compilation of websites that post jobs that are remote-worker and freelance friendly. At the time of this writing there are over 60 sites listed. Additionally, I’ve indicated whether or not there’s an RSS feed available for each job board, and for sites that provide one I’ve included a link.&lt;/p&gt;

&lt;p&gt;With the &lt;a href="https://www.google.com/reader/about/"&gt;death of Google Reader in 2013&lt;/a&gt; a lot of sites abandoned RSS, which is too bad. It was integral to my job search strategy as a budding freelancer. I’m happy to say that a little over half the sites on my remote work list still have one.&lt;/p&gt;

&lt;p&gt;This means my original strategy is still viable – though we’ll need a modern replacement for Google Reader – so I’ll go on to explain.&lt;/p&gt;

&lt;h3&gt;My Remote Worker / Freelance Strategy&lt;/h3&gt;

&lt;p&gt;Here are my steps and tips for anyone starting off as a remote worker or freelancer today who needs to build a client base:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Subscribe to all of the RSS feeds in &lt;a href="https://github.com/georgemandis/remote-working-list/blob/master/remote-working-resources.csv"&gt;this list&lt;/a&gt; using an RSS subscription service or dedicated reader. I’m partial to &lt;a href="https://feedbin.com"&gt;FeedBin&lt;/a&gt; and &lt;a href="http://reederapp.com/"&gt;Reeder&lt;/a&gt; for Mac. You can find &lt;a href="https://alternativeto.net/software/google-reader/"&gt;lots of candidates&lt;/a&gt;, but I’d suggest finding one with the ability to star, bookmark or otherwise “save” specific posts as you cycle through them. Keyboard shortcuts are a bonus too, but maybe that’s just because I was used to the ones Google Reader provided.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Every morning, while you’re drinking your coffee or tea or otherwise starting your day, cycle through all of the job postings that have accumulated. If you see something you feel could be a good match or if you find something you might be interested in &lt;em&gt;learning&lt;/em&gt; how to do, save it in your reader of choice – give it a ⭐️ or a bookmark or whatever the nomenclature is for your particular app/service. If it’s not something you’re interested in, mark it as read or otherwise archive it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The most important part of that last step, which I’ve now turned into its own step: &lt;strong&gt;don’t overthink it!&lt;/strong&gt; You can assess if the job is a good match or if you’re actually qualified in a later step. For now we’re just filtering out the obvious mismatches.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When you’re done going through all of the job listings, put it away and do something else. Make breakfast, go get exercise, or take a shower and start a proper work day. If you really don’t have anything else to do and feel eager to continue searching for new jobs and gigs you can go straight to the next step, but I’m a big proponent of breaks and resets.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the afternoon, maybe after you’ve eaten lunch, or when you feel ready to take a midday break from whatever you’ve been working on all day, pull up the starred/bookmarked posts you separated from earlier. Read them a little more in-depth and decide if any of them now look like poor matches. If they are, go ahead and remove them.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Hopefully you should have a list of jobs you’re interested in and feel somewhat qualified for. Now it’s time to start applying! Coming up with a proper introduction letter that you can tailor to individual clients and companies is a topic for another day. Go through the starred/bookmarked posts one at a time, starting with the oldest listing you marked, and start applying. As you finish applying be sure to remove them from your saved list.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An important tip: &lt;strong&gt;It’s okay if you don’t get through all of them!&lt;/strong&gt; Although the strategy I’m promoting really encourages searching and filtering these listings in bulk, take your time when applying. You want to make a good impression. As you get better at this you’ll be able to apply to jobs more quickly but still do so in a way that feels personable. If you have starred/bookmarked posts left over after an hour or two (or however much time you put aside for this activity) you can pick them up the next day. If you find you have a backlog after a few days and are getting swamped, I suggest either being more judicious with your bookmarking and/or being willing to remove listings from your starred/bookmarked page if they’re more than, say, a week old.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That’s really it! I did this for months when I started out and acquired a lot of clients and worked on a variety of freelance gigs. Some of them were good and turned into client relationships I continue to have to this day. Others amounted to what I’ll call unanticipated learning experiences :)&lt;/p&gt;

&lt;p&gt;It’s not rocket science, but I found this approach to be highly effective. This strategy also works well if you want to hit the “reset” button at some point in your career and inject new clients and projects into your freelance life.&lt;/p&gt;

&lt;p&gt;Do you have your own freelance/remote working tips? I’d love to hear about them! &lt;a href="mailto:george@mand.is"&gt;Send me an email&lt;/a&gt;, &lt;a href="https://twitter.com/georgeMandis"&gt;ping me on Twitter&lt;/a&gt; or post in the comments below.&lt;/p&gt;

&lt;p&gt;And if you need a &lt;a href="https://snaptortise.com"&gt;web developer&lt;/a&gt; for your project – even something a bit more &lt;a href="https://apptortoise.com"&gt;outside the box&lt;/a&gt; – get in touch 😊&lt;/p&gt;

</description>
      <category>remote</category>
      <category>work</category>
      <category>devtips</category>
      <category>career</category>
    </item>
    <item>
      <title>Tiny computers that run JavaScript natively</title>
      <dc:creator>George Mandis</dc:creator>
      <pubDate>Fri, 22 Sep 2017 07:00:00 +0000</pubDate>
      <link>https://dev.to/georgemandis/tiny-computers-that-run-javascript-natively</link>
      <guid>https://dev.to/georgemandis/tiny-computers-that-run-javascript-natively</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fgeorge.mand.is%2Fimages%2Fbanana-computers.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fgeorge.mand.is%2Fimages%2Fbanana-computers.jpg" alt="These computers are bananas"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’ve had &lt;a href="http://johnny-five.io" rel="noopener noreferrer"&gt;johnny-five.io&lt;/a&gt; bookmarked for a long time as something to explore, and earlier today I took a break to do it. If you’re not familiar with the project, it lets you program single-board computers and microcontrollers like the Arduino, Raspberry Pi and many, many others in JavaScript.&lt;/p&gt;

&lt;p&gt;For some of the platforms it runs directly on the device, but for others it requires a host machine to run your JavaScript and relay the hardware interactions to your tiny computer over a serial connection. Depending on what you’re trying to build that’s probably fine, but these days I’m more interested in hardware that lets you run JavaScript directly.&lt;/p&gt;
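&lt;p&gt;To make the host-machine model concrete, here’s roughly what a minimal johnny-five program looks like: the classic blinking-LED “Hello World.” Treat it as an untested sketch; it assumes an Arduino flashed with StandardFirmata and johnny-five installed via npm. The script itself runs on your computer and talks to the board over the serial connection:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Runs on the host machine, not on the board itself.
// Assumes: npm install johnny-five, plus an Arduino flashed
// with StandardFirmata and connected over USB/serial.
const five = require("johnny-five");
const board = new five.Board(); // auto-detects the serial port

board.on("ready", function () {
  const led = new five.Led(13); // built-in LED pin on most Arduinos
  led.blink(500);               // toggles every 500ms over serial
});
&lt;/code&gt;&lt;/pre&gt;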

&lt;p&gt;Fortunately their hardware list is quite easy to filter! Essentially this list comprises all the SBCs that run some kind of Linux environment and allow you to install Node. Still, for posterity, here’s a list of all the tiny computers that you can run JavaScript on directly that I’m aware of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="http://beagleboard.org/bone" rel="noopener noreferrer"&gt;BeagleBone Black&lt;/a&gt; (&lt;a href="http://amzn.to/2fGnXuL" rel="noopener noreferrer"&gt;Amazon&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://getchip.com/" rel="noopener noreferrer"&gt;C.H.I.P. Computers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://www.espruino.com/" rel="noopener noreferrer"&gt;Espruino&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ark.intel.com/products/78919/Intel-Galileo-Board" rel="noopener noreferrer"&gt;Intel Galileo Gen 1 &amp;amp; 2&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.arduino.cc/en/ArduinoCertified/IntelEdison" rel="noopener noreferrer"&gt;Intel Edison Arduino&lt;/a&gt; (&lt;a href="http://amzn.to/2wIuEXP" rel="noopener noreferrer"&gt;Amazon&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.intel.com/content/www/us/en/support/boards-and-kits/000005574.html" rel="noopener noreferrer"&gt;Intel Edison Mini&lt;/a&gt; (&lt;a href="http://amzn.to/2xZ8qkO" rel="noopener noreferrer"&gt;Amazon&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sparkfun.com/products/13038" rel="noopener noreferrer"&gt;SparkFun Edison GPIO Block&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sparkfun.com/products/retired/13036" rel="noopener noreferrer"&gt;SparkFun Arduino Block&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://ark.intel.com/products/96414/Intel-Joule-570x-Developer-Kit" rel="noopener noreferrer"&gt;Intel Joule 570x (Carrier Board)&lt;/a&gt; (&lt;a href="http://amzn.to/2xuGqnz" rel="noopener noreferrer"&gt;Amazon&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;a href="http://www.linino.org/portfolio/linino-one/" rel="noopener noreferrer"&gt;Linino One&lt;/a&gt; (&lt;a href="http://amzn.to/2xuqiCw" rel="noopener noreferrer"&gt;Amazon&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://onion.io/omega2/" rel="noopener noreferrer"&gt;Onion Omega2&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sparkfun.com/products/retired/12856" rel="noopener noreferrer"&gt;pcDuino3 Dev Board&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.raspberrypi.org/" rel="noopener noreferrer"&gt;Raspberry Pi 3 Model B&lt;/a&gt; (&lt;a href="http://amzn.to/2yx8p4A" rel="noopener noreferrer"&gt;Amazon&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.raspberrypi.org/" rel="noopener noreferrer"&gt;Raspberry Pi 2 Model B&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.raspberrypi.org/" rel="noopener noreferrer"&gt;Raspberry Pi Zero&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.raspberrypi.org/" rel="noopener noreferrer"&gt;Raspberry Pi Model A Plus&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.raspberrypi.org/" rel="noopener noreferrer"&gt;Raspberry Pi Model B Plus&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.raspberrypi.org/" rel="noopener noreferrer"&gt;Raspberry Pi Model B Rev 1&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.raspberrypi.org/" rel="noopener noreferrer"&gt;Raspberry Pi Model B Rev 2&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://tessel.io/" rel="noopener noreferrer"&gt;Tessel 2&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Is this list missing something? In particular I’m curious about other projects like the Espruino that don’t run some flavor of Linux but instead allow you to run JavaScript natively on the board. All of these would make nice computers for some of my &lt;a href="https://midi.mand.is" rel="noopener noreferrer"&gt;MIDI projects&lt;/a&gt;.&lt;/p&gt;
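&lt;p&gt;For contrast, here’s what the same blinking-LED idea looks like on an Espruino, where the JavaScript runs on the board itself with no host computer involved. Again, consider this an untested sketch: &lt;code&gt;LED1&lt;/code&gt; and &lt;code&gt;digitalWrite&lt;/code&gt; are globals provided by the Espruino firmware, not Node APIs:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Runs directly on the Espruino board's interpreter.
let on = false;
setInterval(function () {
  on = !on;
  digitalWrite(LED1, on); // toggle the on-board LED
}, 500);                  // twice per second
&lt;/code&gt;&lt;/pre&gt;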

&lt;p&gt;If so, please let me know — send me an email or &lt;a href="https://twitter.com/georgeMandis" rel="noopener noreferrer"&gt;ping me on Twitter&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>node</category>
      <category>javascript</category>
      <category>hardware</category>
    </item>
  </channel>
</rss>
