<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nicolas Beauvais</title>
    <description>The latest articles on DEV Community by Nicolas Beauvais (@nicolasbeauvais).</description>
    <link>https://dev.to/nicolasbeauvais</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F360414%2F1527a5ab-2067-4e44-b4b6-941a83541bf4.png</url>
      <title>DEV Community: Nicolas Beauvais</title>
      <link>https://dev.to/nicolasbeauvais</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nicolasbeauvais"/>
    <language>en</language>
    <item>
      <title>Recreating Laravel Cloud’s range input with native HTML</title>
      <dc:creator>Nicolas Beauvais</dc:creator>
      <pubDate>Wed, 02 Jul 2025 15:27:54 +0000</pubDate>
      <link>https://dev.to/phare/recreating-laravel-clouds-range-input-with-native-html-1b69</link>
      <guid>https://dev.to/phare/recreating-laravel-clouds-range-input-with-native-html-1b69</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs8m3mf7zrbfgwnlr94fo.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs8m3mf7zrbfgwnlr94fo.webp" alt="Recreating Laravel Cloud’s range input with native HTML"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the past few days, I’ve been working on improving the billing experience in Phare, with the addition of &lt;a href="https://docs.phare.io/changelog/platform/2025#credits-payment-for-scale-plan" rel="noopener noreferrer"&gt;prepaid credits&lt;/a&gt;. While tweaking the billing UI, I realized the current input for configuring additional quota wasn’t great: it didn’t clearly show what was already included in the paid plan versus what the user could add.&lt;/p&gt;

&lt;p&gt;The UX of entering large numbers in a text input wasn't horrible, but I felt it could certainly be improved.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frff1ne8ri458z5wr14o1.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frff1ne8ri458z5wr14o1.webp" alt="Recreating Laravel Cloud’s range input with native HTML"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So I went hunting for inspiration on Dribbble and other websites that post screenshots of SaaS interfaces. After some research, I stumbled upon the &lt;a href="https://cloud.laravel.com/pricing" rel="noopener noreferrer"&gt;Laravel Cloud pricing calculator&lt;/a&gt;. Their range input design was spot-on: a clear separation between included and additional values, visually appealing, and user-friendly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj8q9aglkwpgep7xj045o.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj8q9aglkwpgep7xj045o.webp" alt="Recreating Laravel Cloud’s range input with native HTML"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Naturally, I did what any self-respecting developer would do: opened the browser inspector to &lt;del&gt;steal&lt;/del&gt; look at the code. It turns out they recreated a full range input from a handful of HTML elements, glued together with JavaScript using Alpine.js. Here's the structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;div class="group/range relative h-8 self-stretch"&amp;gt;
  &amp;lt;!-- Full track --&amp;gt;
  &amp;lt;div /&amp;gt;

  &amp;lt;!-- Static track --&amp;gt;
  &amp;lt;div /&amp;gt;

  &amp;lt;!-- Progress bar --&amp;gt;
  &amp;lt;div /&amp;gt;

  &amp;lt;!-- Handle --&amp;gt;
  &amp;lt;div&amp;gt;
    &amp;lt;div&amp;gt;
      &amp;lt;span&amp;gt;&amp;lt;/span&amp;gt;
    &amp;lt;/div&amp;gt;
  &amp;lt;/div&amp;gt;

  &amp;lt;!-- Tick --&amp;gt;
  &amp;lt;div /&amp;gt;

  &amp;lt;!-- HTML range input --&amp;gt;
  &amp;lt;input /&amp;gt;
&amp;lt;/div&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Because I'm the laziest developer, rebuilding all this for the six people (love you guys 🫶) who pay for Phare felt like overkill. Could I recreate a similar input with less work? There's probably a way to get a similar result with some CSS on top of the native range input, and maybe a few lines of JavaScript.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building the range input
&lt;/h2&gt;

&lt;p&gt;To match the Laravel Cloud design, we need the following components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Range track&lt;/strong&gt;: the rail the handle moves along.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Handle&lt;/strong&gt;: the draggable thumb.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Progress bar&lt;/strong&gt;: the filled area to the left of the handle.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Static part&lt;/strong&gt;: a fixed section showing the value already included in the plan.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tick&lt;/strong&gt;: a visual marker where the included value ends and the extra begins.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd7aijwkkzsf6rtqo4ave.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd7aijwkkzsf6rtqo4ave.webp" alt="Recreating Laravel Cloud’s range input with native HTML"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The static part and tick are mostly cosmetic and can easily be faked visually outside the range input itself. Everything else is already included in the native HTML range input.&lt;/p&gt;
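&lt;p&gt;As a rough sketch, the markup can stay minimal: the static part and tick are plain sibling elements, and the native input does the rest (the class names here are hypothetical, not from any actual implementation):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;div class="range-wrapper"&amp;gt;
  &amp;lt;!-- Static part, purely decorative --&amp;gt;
  &amp;lt;div class="static-part"&amp;gt;&amp;lt;/div&amp;gt;

  &amp;lt;!-- Tick, purely decorative --&amp;gt;
  &amp;lt;div class="tick"&amp;gt;&amp;lt;/div&amp;gt;

  &amp;lt;!-- The native input provides the track, progress bar, and handle --&amp;gt;
  &amp;lt;input type="range" id="range" min="0" max="100" value="0" /&amp;gt;
&amp;lt;/div&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;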

&lt;p&gt;So why did the Laravel team go full custom?&lt;/p&gt;
&lt;h2&gt;
  
  
  Limitations of the native HTML range input
&lt;/h2&gt;

&lt;p&gt;To look great, the handle needs to land &lt;strong&gt;exactly at the tick’s position&lt;/strong&gt; when at the minimum value. It should also cover the tick to be visually appealing:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhcngft55u5lj5aw5q4p3.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhcngft55u5lj5aw5q4p3.webp" alt="Recreating Laravel Cloud’s range input with native HTML"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Unfortunately, this isn't possible, as the native range input’s handle is confined to the boundaries of the track. So, what if we make the range input track overlap under the static part to allow the handle to sit on the tick? (such a weird sentence).&lt;/p&gt;

&lt;p&gt;Well, native range inputs don’t let us set a different &lt;code&gt;z-index&lt;/code&gt; for the handle and for the track. If we push the track behind the static part, the handle goes with it; if we bring it forward, the whole thing looks messy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm36fujvcgisxj5fml7ui.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm36fujvcgisxj5fml7ui.webp" alt="Recreating Laravel Cloud’s range input with native HTML"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  The solution
&lt;/h2&gt;

&lt;p&gt;Enter the &lt;strong&gt;CSS inner shadow&lt;/strong&gt;: an inner shadow lets us fake a few extra pixels of the static part &lt;strong&gt;inside&lt;/strong&gt; the track. This lets the handle glide over it without getting hidden.&lt;/p&gt;

&lt;p&gt;By carefully layering the tick and the static track visually outside the actual input, and using the inner shadow to fake the remaining few pixels of the static part inside the track, we can get something that works well.&lt;/p&gt;
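&lt;p&gt;A minimal sketch of the idea, with made-up sizes and colors:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;input[type="range"] {
  appearance: none;
  border-radius: 9999px;
  /* The inset shadow paints a few pixels of the "static part" color
     inside the left edge of the track; the handle still slides over it */
  box-shadow: inset 24px 0 0 0 #e5e7eb;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;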

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvghry1msjk3hjr48cgz.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvghry1msjk3hjr48cgz.webp" alt="Recreating Laravel Cloud’s range input with native HTML"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Styling the handle
&lt;/h2&gt;

&lt;p&gt;Using the border property on the handle with &lt;code&gt;-moz-range-thumb&lt;/code&gt; works great in Firefox, but Chrome does not seem to support it. Again, inner shadows are here to save the day and bring us cross-browser consistency.&lt;/p&gt;
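&lt;p&gt;In practice, an inset shadow on the thumb pseudo-elements can stand in for the border; the widths and colors below are illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/* Chrome, Safari, Edge */
input[type="range"]::-webkit-slider-thumb {
  -webkit-appearance: none;
  /* Fake a 2px border with an inset shadow */
  box-shadow: inset 0 0 0 2px #fff;
}

/* Firefox: border would work here, but using the same
   inset shadow keeps both browsers pixel-identical */
input[type="range"]::-moz-range-thumb {
  box-shadow: inset 0 0 0 2px #fff;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;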
&lt;h2&gt;
  
  
  Styling the progress bar
&lt;/h2&gt;

&lt;p&gt;To make the progress bar pattern, Laravel's team used a clever trick based on &lt;code&gt;repeating-linear-gradient&lt;/code&gt; to create infinitely repeating stripes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;background-image: repeating-linear-gradient(135deg, black 0px, black 1px, #99a1af 1px, #99a1af 4px);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But applying that to our native range input would cover the entire track, and I only wanted it on the left side of the handle to represent progress.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9djd8hakd0vm6w64os27.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9djd8hakd0vm6w64os27.webp" alt="Recreating Laravel Cloud’s range input with native HTML"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To fix this, there's no other solution: we need a few lines of JavaScript:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;document.getElementById("range").addEventListener('input', function (event) {
    let input = event.target
    let value = parseInt(input.value)
    let min = parseInt(input.getAttribute('min'))
    let max = parseInt(input.getAttribute('max'))

    let percentage = (value - min) / (max - min) * 100

    input.style.backgroundSize = `${percentage}% 100%, 100% 100%`
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Final result
&lt;/h2&gt;

&lt;p&gt;The end result is not quite as flexible as Laravel Cloud’s fully custom implementation. Since the track has to fake the design of the static part and tick, it doesn't allow for more complex designs, but it fits my use case perfectly:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F59e6je76xh0mx6hn3cap.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F59e6je76xh0mx6hn3cap.gif" alt="Final input version"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The final native HTML approach is quite simple, relies on minimal trickery, and still looks good. I think it shows that it's possible to go quite far with native elements without resorting to recreating everything in JavaScript.&lt;/p&gt;

&lt;p&gt;You can see a fully working example and the code to recreate the input on CodePen:&lt;/p&gt;

&lt;p&gt;&lt;iframe height="600" src="https://codepen.io/nicbvs/embed/raVgORg?height=600&amp;amp;default-tab=result&amp;amp;embed-version=2"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;And if you like my attention to detail, you should try Phare, it's a great tool for &lt;a href="https://phare.io/products/uptime/website-monitoring" rel="noopener noreferrer"&gt;uptime monitoring&lt;/a&gt;, &lt;a href="https://phare.io/products/uptime/incident-management" rel="noopener noreferrer"&gt;incident management&lt;/a&gt;, and &lt;a href="https://phare.io/products/uptime/status-pages" rel="noopener noreferrer"&gt;status pages&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>css</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>What to look for in an uptime monitoring tool</title>
      <dc:creator>Nicolas Beauvais</dc:creator>
      <pubDate>Mon, 16 Jun 2025 20:43:51 +0000</pubDate>
      <link>https://dev.to/phare/what-to-look-for-in-an-uptime-monitoring-tool-4chk</link>
      <guid>https://dev.to/phare/what-to-look-for-in-an-uptime-monitoring-tool-4chk</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiuy7gzemehnuo834tif5.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiuy7gzemehnuo834tif5.webp" alt="What to look for in an uptime monitoring tool" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If your website is how you pay the bills, whether it’s a SaaS, an API, or that side project financing your daily ramen, you need to know when it’s down. Preferably before your customers start angrily spamming that F5 key.&lt;/p&gt;

&lt;p&gt;There are thousands of uptime monitoring tools out there, but after running one myself for a few years, here’s what I think you should actually be paying for.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pick a tool that won’t sleep on the job
&lt;/h2&gt;

&lt;p&gt;There’s no point in using an uptime monitoring service that’s less reliable than the thing you’re monitoring.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy9sxpm1xwleh5r175b1t.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy9sxpm1xwleh5r175b1t.jpg" alt="What to look for in an uptime monitoring tool" width="774" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I love indie products (obviously), but this is one of those times where time in the market beats timing the market*.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check how long the tool’s been around&lt;/li&gt;
&lt;li&gt;Carefully review stats on their status page, some are... enlightening&lt;/li&gt;
&lt;li&gt;Check out the documentation, it’s usually a good indicator of quality&lt;/li&gt;
&lt;li&gt;Most reviews online are fake, ask your friends instead&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;*Not financial advice.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud or self-hosted? Choose wisely
&lt;/h2&gt;

&lt;p&gt;Your monitoring system should live outside your infrastructure, &lt;a href="https://dev.to/phare/the-3-year-journey-to-an-actually-good-monitoring-stack-2dd2"&gt;spread across multiple data centers&lt;/a&gt;, poking your endpoints from different parts of the world. That’s typically not something you get with a self-hosted setup.&lt;/p&gt;

&lt;p&gt;That said, self-hosted might make sense if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You’re monitoring a closed/private network&lt;/li&gt;
&lt;li&gt;You’ve got confidential credentials involved&lt;/li&gt;
&lt;li&gt;You like the smell of YAML in the morning&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you go the DIY route, open-source projects like &lt;a href="https://github.com/rajnandan1/kener" rel="noopener noreferrer"&gt;Kener&lt;/a&gt; and &lt;a href="https://github.com/openstatusHQ/openstatus" rel="noopener noreferrer"&gt;OpenStatus&lt;/a&gt; give you slick status pages and great features while being easy to host.&lt;/p&gt;

&lt;p&gt;Otherwise, since uptime monitoring is a brutally competitive market, good cloud options are often &lt;a href="https://phare.io/pricing" rel="noopener noreferrer"&gt;cheaper than spinning up a new VPS&lt;/a&gt;, with the benefit of not having to spend time on maintenance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pricing that won’t eat your runway
&lt;/h2&gt;

&lt;p&gt;Some tools charge by tiers, others charge by usage. Both can be good, but you do need to know how far your plan will take you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvv280dp9m763l9uihva.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvv280dp9m763l9uihva.jpg" alt="What to look for in an uptime monitoring tool" width="390" height="255"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Keep an eye out for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Surprise features locked behind expensive plans, like &lt;a href="https://sso.tax/" rel="noopener noreferrer"&gt;SSO&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Pricing jumps that increase your monthly plan by 600% to monitor that one additional endpoint&lt;/li&gt;
&lt;li&gt;Per-seat costs (gets expensive fast if you grow)&lt;/li&gt;
&lt;li&gt;Extra costs for things like API monitoring, fancy assertions, or exotic check types&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you’re planning to grow, plan for growth. Otherwise, a cheap starter plan could turn into a budget black hole real quick.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fast and silent checks
&lt;/h2&gt;

&lt;p&gt;Short intervals, from 1 minute down to 30 seconds, are great, but the increased false-positive alerts are not. Make sure your monitoring service confirms failures before it blows up your phone in the middle of the night. Good providers give you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Failure confirmation&lt;/li&gt;
&lt;li&gt;Recovery confirmation&lt;/li&gt;
&lt;li&gt;Options to tune how aggressive or chill your alerts are&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I wrote a whole guide about this if you want the nerdy details: &lt;a href="https://phare.io/blog/best-practices-to-configure-an-uptime-monitoring-service" rel="noopener noreferrer"&gt;Best practices to configure an uptime monitoring service&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Check from around the world
&lt;/h2&gt;

&lt;p&gt;Just because your site works in Paris does not mean it works in Singapore. Routing is weird. DNS is weird. Internet infrastructure is insanely complex, and sometimes fragile.&lt;/p&gt;

&lt;p&gt;If your users are global, your monitoring should be too, especially if you’re doing edge deployments or running multi-region setups.&lt;/p&gt;

&lt;h2&gt;
  
  
  More than just up and down
&lt;/h2&gt;

&lt;p&gt;“Up or down” is just the start. Depending on what you’re building, you’ll want your uptime tool to handle:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API checks with custom payloads&lt;/li&gt;
&lt;li&gt;SSL certificate monitoring (inventory, validity, expiration, AIA, OCSP)&lt;/li&gt;
&lt;li&gt;DNS validation&lt;/li&gt;
&lt;li&gt;Performance &amp;amp; response times&lt;/li&gt;
&lt;li&gt;Tracing &amp;amp; diagnostic info&lt;/li&gt;
&lt;li&gt;Custom assertions (e.g., make sure that PHP version header is not present in production)&lt;/li&gt;
&lt;/ul&gt;
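&lt;p&gt;That last point boils down to a predicate over the response. A hypothetical sketch in JavaScript (the function and data shapes are mine, not any provider's API):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Example assertion: fail the check when a header is present,
// e.g. an "x-powered-by" header leaking the PHP version in production
function assertHeaderAbsent(headers, name) {
  for (let key of Object.keys(headers)) {
    if (key.toLowerCase() === name.toLowerCase()) {
      return false
    }
  }
  return true
}

assertHeaderAbsent({ 'X-Powered-By': 'PHP/8.3' }, 'x-powered-by')   // fails
assertHeaderAbsent({ 'Content-Type': 'text/html' }, 'x-powered-by') // passes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;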

&lt;p&gt;&lt;a href="https://ohdear.app/" rel="noopener noreferrer"&gt;OhDear&lt;/a&gt; is a great example that offers an extensive list of extra checks, like SEO monitoring or broken link and mixed content detection.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solo today, team tomorrow
&lt;/h2&gt;

&lt;p&gt;Right now you might be a team of one (hey friend 👋), but good monitoring tools support teams, shared dashboards, incident timelines, etc.&lt;/p&gt;

&lt;p&gt;Even solo founders need to sleep occasionally. Having a friend or colleague see the same alerts is a life upgrade worth investing in early on.&lt;/p&gt;

&lt;h2&gt;
  
  
  Plays nice with your stack
&lt;/h2&gt;

&lt;p&gt;Your monitoring tool should work with whatever communication channels you already use, whether you’re a Slack, email, or webhook person. Alerts should come to you, not the other way around.&lt;/p&gt;

&lt;p&gt;Also keep an eye out for generic integrations like incoming and outgoing webhooks, as well as APIs. They will give you ways to plug in third-party or custom-made solutions as you grow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Good logs save bad days
&lt;/h2&gt;

&lt;p&gt;When things break, you need as many details as possible:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The actual HTTP status code&lt;/li&gt;
&lt;li&gt;The full response body&lt;/li&gt;
&lt;li&gt;Headers&lt;/li&gt;
&lt;li&gt;DNS resolution steps&lt;/li&gt;
&lt;li&gt;Request trace&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbr26jne9upa1ww114z99.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbr26jne9upa1ww114z99.jpg" alt="What to look for in an uptime monitoring tool" width="500" height="729"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You could even go further with traceroute logging or screenshot capture. Most cloud solutions provide this. If you’re going self-hosted, you can rig something with webhooks and a great screenshot API like &lt;a href="https://www.capturekit.dev/" rel="noopener noreferrer"&gt;CaptureKit&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It’ll save you hours writing postmortems, debugging edge cases, or explaining to your users why everything went sideways last Thursday.&lt;/p&gt;

&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Pick a tool you can trust&lt;/li&gt;
&lt;li&gt;Make sure it’s got the features you need today and tomorrow&lt;/li&gt;
&lt;li&gt;Choose something that helps you fix problems, not just point at them&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;I’m building Phare.io with this mindset; check it out if you’re looking for a great uptime monitoring tool with incident management and status pages. It’s free to start and scales with your needs.&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>uptime</category>
      <category>monitoring</category>
      <category>guide</category>
    </item>
    <item>
      <title>The 3-Year Journey to an Actually Good Monitoring Stack</title>
      <dc:creator>Nicolas Beauvais</dc:creator>
      <pubDate>Tue, 15 Apr 2025 19:34:35 +0000</pubDate>
      <link>https://dev.to/phare/the-3-year-journey-to-an-actually-good-monitoring-stack-2dd2</link>
      <guid>https://dev.to/phare/the-3-year-journey-to-an-actually-good-monitoring-stack-2dd2</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5pyailovqdubw4f8y944.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5pyailovqdubw4f8y944.webp" alt="The 3-Year Journey to an Actually Good Monitoring Stack" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When I started building &lt;a href="https://phare.io" rel="noopener noreferrer"&gt;Phare&lt;/a&gt; in early 2022, I planned the architecture assuming that fetching websites to perform uptime checks would be the main scaling bottleneck, and oh boy, was I wrong. While scaling that part is challenging, this assumption led to suboptimal architectural choices that I had to carry for the past three years.&lt;/p&gt;

&lt;p&gt;Of course, when you build an uptime monitoring service, the last thing you want is for your monitoring infrastructure to be inefficient, or worse, inaccurate. Maintenance and planning take priority over everything else, and your product stops evolving. You're no longer building a fast-paced side project; you're just babysitting a web crawler.&lt;/p&gt;

&lt;p&gt;It took a lot of work to fix things while maintaining the best possible service for the hundreds of users relying on it. But it was worth it, and the future is now brighter than ever for Phare.io.&lt;/p&gt;

&lt;p&gt;Let’s go back to an afternoon in the summer of 2022, when I said to myself:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Fuck it, I’m going to make an uptime monitoring tool and compete with the 2,000 that already exist. It should only take a weekend to build anyway.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;(It was probably in French in my head, but you get the idea).&lt;/p&gt;

&lt;h2&gt;
  
  
  The very first version: Python on AWS Lambda
&lt;/h2&gt;

&lt;p&gt;AWS Lambda immediately felt like a perfect fit. I had written a few Lambda functions before, and it seemed like a good choice to easily run code in multiple regions, with the major benefit of no upfront costs and no maintenance. Compared to setting up multiple VPSs, with provisioning and maintenance on top, the choice was clear.&lt;/p&gt;

&lt;p&gt;I wrote the Python code for the Lambda, and all that was left was to invoke it in all required regions from my PHP backend whenever I needed to run an uptime check.&lt;/p&gt;

&lt;p&gt;The AWS SDK supports parallel invocation, which solved the problem of data reconciliation. I had the results of all regions in a single array and could easily decide if a monitor was up or down, sweet.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$results = Utils::unwrap([
  $lambdaClient-&amp;gt;invokeAsync('eu-central-1', $payload),
  $lambdaClient-&amp;gt;invokeAsync('us-east-1', $payload),
  $lambdaClient-&amp;gt;invokeAsync('ap-south-2', $payload),
]);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Most of the business logic was built on top of that result set. How many regions are returning errors? How many consecutive errors does this particular monitor have? Is an incident already in progress? Should the user be notified? etc. (As you guessed, this becomes important later.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw8yaugv4yaj6x2zlce42.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw8yaugv4yaj6x2zlce42.png" alt="Phare on AWS Lambda" width="800" height="243"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This setup worked well, delivering accurate and reliable uptime monitoring to the early adopters of Phare, while I focused on building incident management and status pages.&lt;/p&gt;

&lt;p&gt;Until May 2024, when I received a ~25 euro invoice from AWS. Okay, that’s not much, but it was for only 4M checks. That’s the cost of five entry-level VPSs, all to monitor about 100 websites. Not cost-efficient at all.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxxqn5lrltckasfyh22uz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxxqn5lrltckasfyh22uz.png" alt="The 3-Year Journey to an Actually Good Monitoring Stack" width="800" height="611"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;I might have created the most expensive uptime monitoring service&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The biggest part of the spending was from Lambda duration (GB-Seconds). As Phare’s user base grew, websites got more complex, no more just monitoring my friends’ single-page portfolios with 100 out of 100 Lighthouse scores. Websites can be slow, and even with a 5-second timeout, the Lambda execution ended up being far too expensive.&lt;/p&gt;

&lt;p&gt;Another issue was request timing accuracy. AWS Lambda lets you select the memory limit from 128MB to 10GB, and with more memory comes more CPU power. To fetch a URL with realistic browser-like timing, the Lambda needed at least 512MB of memory, a significant cost factor for longer checks, and a huge financial attack vector.&lt;/p&gt;

&lt;p&gt;It was time to find an alternative.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enter Cloudflare Workers
&lt;/h2&gt;

&lt;p&gt;Cloudflare Workers seemed unreal: much cheaper than AWS Lambda, and you only pay for actual CPU time. That meant all the idle time waiting for timeouts was now completely free. I could build the &lt;a href="https://phare.io/pricing" rel="noopener noreferrer"&gt;cheapest uptime monitoring service&lt;/a&gt; while keeping a good margin, and offer an unbeatable 180 regions.&lt;/p&gt;

&lt;p&gt;Setting it up wasn’t straightforward. On top of having to rewrite the code in JavaScript, it was not possible to invoke a Worker in a specific region. And that was a major blocker.&lt;/p&gt;

&lt;p&gt;After many failed attempts, I came across a post from another Cloudflare user who had figured out how to do exactly that: using a first Worker to invoke another one in a chosen region. It wasn’t documented, but Cloudflare had seemingly been aware of this loophole for a while, with no public plan to restrict it. The performance and pricing were too good to ignore, so I went with it. YOLO.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Febvs3yyles1diwuzs9sf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Febvs3yyles1diwuzs9sf.png" alt="Phare on workers" width="800" height="243"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The two-Workers technique changed everything. I could send large payloads of monitors, have the first Worker create smaller regional batches, and get back reconciled results. My backend became more and more dependent on the way Cloudflare Workers behaved.&lt;/p&gt;

&lt;p&gt;Of course, there were limitations: non-secure HTTP checks were a no-no, it was impossible to get details on SSL certificate errors, and TCP port access was restricted. But I managed to find a few workarounds, and everything was running smoothly.&lt;/p&gt;

&lt;p&gt;The ecosystem was growing fast: edge databases and integrated queues were being released by Cloudflare, and my Workers averaged sub-3ms execution times. The future looked bright.&lt;/p&gt;

&lt;p&gt;Of course, after just a few months, on November 14th, 2024, regional invocation was patched, and &lt;strong&gt;the entire uptime infrastructure went down&lt;/strong&gt;. That day was a looong day.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqal1eewl1uy3icywl23l.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqal1eewl1uy3icywl23l.jpg" alt="That's your face while reading this" width="450" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I quickly patched the script, rerouting all requests to the invoking region so uptime checks still ran, even if not in the right region.&lt;/p&gt;

&lt;p&gt;It was time to find an alternative. Fast.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bunny.net Edge Scripts to the rescue
&lt;/h2&gt;

&lt;p&gt;At that time, Bunny.net had just released their Edge Scripts service in closed beta, a direct competitor to Cloudflare Workers, built on Deno. Pricing was similar, and the migration looked plug-and-play, which was all that mattered, because I couldn’t afford the time to rewrite the backend logic.&lt;/p&gt;

&lt;p&gt;I got into the beta, rewrote the script in Deno using the same two-invocation strategy, and began rerouting traffic from Cloudflare to Bunny.&lt;/p&gt;

&lt;p&gt;The first part of the migration went smoothly, regional monitoring was back up, and I could finally relax a bit.&lt;/p&gt;

&lt;p&gt;Of course, it wasn't long until shit hit the fan, and the uptime monitoring performance data started to get funky. Cloudflare was a more mature solution that handled many things in the background, like keeping TCP pools in a healthy state, which matters when you perform thousands of requests to different domains.&lt;/p&gt;

&lt;p&gt;Thankfully, Bunny’s technical team was amazing. They helped me a lot, and I gave them plenty to work on in return.&lt;/p&gt;

&lt;p&gt;Eventually, things got better. Edge Scripts left beta and became generally available, and that’s when a new bottleneck appeared.&lt;/p&gt;

&lt;p&gt;The backend code was still invoking Edge Scripts and waiting for a batched response. As Phare gained new users daily, the number of invocations grew. My backend started hitting 502/503 errors on Bunny’s side. Queue wait times forced me to increase concurrency. And I was still facing the same limitations I previously had with Cloudflare Workers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi3nqdu7odamgvqag8cit.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi3nqdu7odamgvqag8cit.png" alt="This costed me a Sentry subscription" width="800" height="297"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Maybe Edge Scripts weren’t the best long-term solution after all.&lt;/p&gt;

&lt;p&gt;I knew what I had to do from the beginning: decouple the backend from the edge scripts and process results asynchronously. But doing so meant reworking the deepest, most fundamental part of my backend logic, now massive after years of accumulated features.&lt;/p&gt;

&lt;p&gt;Again, I had no choice if I wanted to keep improving Phare.&lt;/p&gt;

&lt;p&gt;It was time to find an alternative.&lt;/p&gt;

&lt;h2&gt;
  
  
  The obvious answer: Bunny Magic Containers
&lt;/h2&gt;

&lt;p&gt;In early 2025, Bunny announced &lt;a href="https://bunny.net/magic-containers/" rel="noopener noreferrer"&gt;Magic Containers&lt;/a&gt;, a new service letting you deploy full Docker containers across Bunny’s global network. I had been desperately trying to find a European hosting provider with such a diverse range of locations. I was already integrated with the Bunny ecosystem, and had full confidence in their amazing support team.&lt;/p&gt;

&lt;p&gt;This time, I did things slowly. I built a few preview regions to test at scale with real users, in parallel with the still-working Edge Script setup. Of course, this meant running two versions of the backend logic at the same time, two different ways of triggering monitoring checks, and thousands of new lines of code to make it work. Not fun, but necessary to finally fix past mistakes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdjohz89sopgumba5qmxm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdjohz89sopgumba5qmxm.png" alt="I don't often do pull request on a solo project" width="800" height="195"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The new uptime monitoring agent would run continuously in a Docker container, billed by CPU and memory usage. Cost was a major concern, so I rebuilt it in Go with the following goals:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The Phare backend and the monitoring agent must be fully decoupled.&lt;/li&gt;
&lt;li&gt;The agent should fetch its monitor list from an API, no backend push.&lt;/li&gt;
&lt;li&gt;Results are sent asynchronously to the backend.&lt;/li&gt;
&lt;li&gt;Data exchange should be minimal.&lt;/li&gt;
&lt;li&gt;The agent must be fault-tolerant and self-healing.&lt;/li&gt;
&lt;li&gt;It should match the feature set of the Edge Script version.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frv2zokm2blagxjsbmct1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frv2zokm2blagxjsbmct1.png" alt="Phare on magic containers" width="800" height="223"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And just like that, six new preview regions were added to Phare at the end of February, and they ran like clockwork. I actually went on vacation a few days after the release, for a full month, and didn’t have a single issue. I did have a lot of time to reflect on my past mistakes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpmm9v8y6n6lrxu1vvk90.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpmm9v8y6n6lrxu1vvk90.webp" alt="The 3-Year Journey to an Actually Good Monitoring Stack" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I won’t go into too much detail about the new infrastructure; this post is painfully long enough already. Today, all checks run on Bunny Magic Containers. And for the first time in years, I can focus on building new features for both the agent and the platform.&lt;/p&gt;

&lt;p&gt;And if I ever need to change providers again, I can just spin up a few VPSs with my Docker image and it’ll work. I should’ve done that from the beginning, but I wanted to go fast, and that cost me a few years of real progress.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s next
&lt;/h2&gt;

&lt;p&gt;The current infrastructure works well, but it’s not perfect. When a container is restarted, there's a brief overlap where two instances might run the same check. If a region goes offline, there’s no re-routing; users need to monitor from at least two regions to stay safe.&lt;/p&gt;

&lt;p&gt;Fetching the monitor list every minute via API works surprisingly well, thanks to ETags and a two-tier cache system. But I’m still exploring how to reduce HTTP calls. Having read replicas closer to the containers might be the best bet.&lt;/p&gt;

&lt;p&gt;From the outside, it didn’t look so bad: Phare grew to nearly a thousand users during all this infra chaos. Users loved the quality of the service far more than I did.&lt;/p&gt;

&lt;p&gt;This post is mostly a rant at my past self. I took too many shortcuts while building what started as a weekend project, which held the company back once it grew beyond that. &lt;strong&gt;But maybe that’s what startups are all about.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That said… see you in three years for the blog post about Phare.io Monitoring Stack v8, probably rewritten in Rust, because history repeats itself.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>programming</category>
      <category>learning</category>
    </item>
    <item>
      <title>Best practices to configure an uptime monitoring service</title>
      <dc:creator>Nicolas Beauvais</dc:creator>
      <pubDate>Mon, 26 Aug 2024 16:00:22 +0000</pubDate>
      <link>https://dev.to/phare/best-practices-to-configure-an-uptime-monitoring-service-1oep</link>
      <guid>https://dev.to/phare/best-practices-to-configure-an-uptime-monitoring-service-1oep</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszkqy91byewfdrv9peo0.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszkqy91byewfdrv9peo0.jpg" alt="Best practices to configure an uptime monitoring service" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Getting alerted of downtime is an essential part of running a healthy website. It's a problem that was solved a long time ago by uptime monitoring services. But as simple as setting up a monitoring service for your website might seem, there are a few best practices I learned over the years while maintaining dozens of websites, from side projects to Fortune 500 companies, and building Phare.io, my own take on uptime monitoring.&lt;/p&gt;

&lt;p&gt;We will dive into some best practices to get the best possible monitoring without false positives. The configurations explored in this article should work with most monitoring services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Choosing the Right URLs to Monitor
&lt;/h2&gt;

&lt;p&gt;Defining which resources to monitor is the first step to a successful uptime monitoring strategy, and as simple as it might seem, there is some thinking to do here.&lt;/p&gt;

&lt;p&gt;The first thing to consider is how your website is hosted. Many modern startups will have landing pages on a static hosting provider like Vercel or Netlify, and a backend API hosted on a cloud provider like AWS or GCP. Then you might have external services hosted on a subdomain like a blog, a status page, a changelog, etc. Each of these resources can go down independently, and you should monitor them separately.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🎓 Find all resources that can independently go down and monitor them separately.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For each of these resources, you need to define the right URL to monitor, and there are again a few things to consider:&lt;/p&gt;

&lt;h3&gt;
  
  
  Static hosting
&lt;/h3&gt;

&lt;p&gt;Most statically hosted websites will use some form of caching through a CDN. If you monitor a URL cached at the CDN level, you might not get alerted when the origin server is down. You then need to check with your monitoring service or your CDN for a way to bypass the cache layer.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🎓 Make sure you monitor the origin server and not a cached version of your website.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Dynamic websites
&lt;/h3&gt;

&lt;p&gt;For dynamic websites or API endpoints, it's tempting to monitor a simple health check route that returns a static JSON response, but you might miss issues that are only visible when hitting API endpoints that do some actual work.&lt;/p&gt;

&lt;p&gt;Ideally, the URL that you monitor should at least perform a database query, or exercise other critical parts of your application, to make sure everything is working as expected. Creating a dedicated URL for monitoring is usually a good idea.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🎓 Monitor an endpoint that performs actual work and not just a static health check.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  External services
&lt;/h3&gt;

&lt;p&gt;Monitoring external services is usually less critical, as you are not responsible for their uptime. However, it's always good to be proactive and get alerted before your users do. This will allow you to communicate about issues and show that you are on top of things.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🎓 Monitor external services to be proactive and communicate about issues before your users do.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Redirections
&lt;/h3&gt;

&lt;p&gt;Now that you have a good idea of the URLs you need to monitor, check them for redirections. Be careful with the URL format you use to monitor your resources: some services will end all URLs with a &lt;code&gt;/&lt;/code&gt; and some won't. If you don't use the right format, you will put unnecessary load on your server and will likely get inaccurate performance metrics from your uptime monitoring service.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🎓 Be mindful of unnecessary URL redirection to avoid load on your server and inaccurate performance metrics.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Monitoring that a few critical redirections work as expected is also a good idea. Things like www to non-www, or http to https redirections, are critical for your website's SEO and user experience, and can be monitored too.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🎓 Monitor critical redirections to make sure they work as expected.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Response monitoring
&lt;/h2&gt;

&lt;p&gt;Now that you have defined the right URLs to monitor, you need to define the expected result of your monitors. In the case of HTTP checks, that will usually be a &lt;a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Status" rel="noopener noreferrer"&gt;status code&lt;/a&gt; or a keyword on the page.&lt;/p&gt;

&lt;p&gt;It is common knowledge among web developers that status codes are not always to be trusted, and that a &lt;code&gt;200 OK&lt;/code&gt; status code doesn't mean that the page is working as expected. This is why it's a good idea to also monitor for the presence of a keyword on the page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgoso35hpix91wdzeouiz.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgoso35hpix91wdzeouiz.jpg" alt="Best practices to configure an uptime monitoring service" width="800" height="791"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A good keyword is something unique to the page that would not be present on any error page. For example, if you choose the name of your website, there's a high chance that it will also be present on a 4xx error page, and you will get false positives monitoring for it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🎓 Always check the response status and the presence of a unique keyword on the page.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Request timeout
&lt;/h2&gt;

&lt;p&gt;Finding the right timeout for your monitors is a true balancing act. You want the timeout to be long enough to avoid false positives, but short enough that you still get alerted when your server is too slow to respond.&lt;/p&gt;

&lt;p&gt;My advice is to start with a large timeout for a few days and then gradually decrease it until you find the right balance. Of course, this should be done on a per-URL basis, as some resources might be naturally slower than others.&lt;/p&gt;

&lt;p&gt;Some monitoring services have special configurations for performance monitoring that you could use for this purpose. Keep in mind that services calculate response time differently, so you might get different results from different services; it's always a good idea to start with a large timeout.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🎓 Start with a large timeout and gradually decrease it until you find the right balance.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Monitoring frequency
&lt;/h2&gt;

&lt;p&gt;The monitoring frequency is another balancing act. You want to get alerted as soon as possible when your website goes down, but without wasting resources on unnecessary checks, both for a website that is up 99.99% of the time and for our beautiful planet.&lt;/p&gt;

&lt;p&gt;Choose shorter intervals for critical resources and longer intervals for less important things like third-party services or redirections. You could also consider the time of day and monitor more aggressively during your business peak hours.&lt;/p&gt;

&lt;p&gt;Keep in mind the following when choosing the monitoring frequency:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Every 30 seconds = ~90k requests per month
Every 1 minute = ~45k requests per month
Every 5 minutes = ~9k requests per month
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
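&lt;p&gt;These numbers come from simple arithmetic: the number of checks per month is roughly the seconds in a 30-day month divided by the interval, which this small Go snippet reproduces:&lt;/p&gt;

```go
// Reproduces the requests-per-month figures above: seconds in a
// 30-day month divided by the check interval.
package main

import "fmt"

// requestsPerMonth returns the approximate number of checks performed
// at the given interval (in seconds) over a 30-day month.
func requestsPerMonth(intervalSeconds int) int {
	const secondsPerMonth = 60 * 60 * 24 * 30 // 2,592,000
	return secondsPerMonth / intervalSeconds
}

func main() {
	for _, interval := range []int{30, 60, 300} {
		fmt.Printf("every %ds: ~%d requests per month\n", interval, requestsPerMonth(interval))
	}
}
```

&lt;p&gt;This yields roughly 86k, 43k, and 8.6k checks per month, matching the rounded figures above, and that is per monitor and per region.&lt;/p&gt;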



&lt;blockquote&gt;
&lt;p&gt;🎓 Choose shorter intervals for critical resources and longer intervals for less important things.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Incident confirmations
&lt;/h2&gt;

&lt;p&gt;I would strongly advise against using any monitoring service that does not offer a way to configure a number of confirmations before sending an alert. This is, along with multi-region monitoring, the most impactful way to avoid false positives.&lt;/p&gt;

&lt;p&gt;The internet is a complex system, and a single network glitch could prevent your monitoring service from reaching your server. It might not seem like a big deal, but the more alerts you get, the more you will ignore them, and you will certainly miss a real incident after a few weeks of daily false-positive alerts.&lt;/p&gt;

&lt;p&gt;This setting should be configured based on your monitoring frequency and the criticality of the resource you are monitoring. The more frequent the monitoring, the more confirmations you should require before sending an alert. Here is a good rule of thumb:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;30 seconds monitoring interval -&amp;gt; 2 to 3 confirmations
1 minute monitoring interval -&amp;gt; 2 to 3 confirmations
2 to 10 minutes monitoring interval -&amp;gt; 2 confirmations
Any greater monitoring interval -&amp;gt; 1 to 2 confirmations
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;🎓 Always require a confirmation before sending an alert.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Multi-region monitoring
&lt;/h2&gt;

&lt;p&gt;Just like incident confirmations, multi-region monitoring is a must-have feature for any monitoring service. It often happens that a request fails temporarily from a specific monitoring endpoint, but that doesn't mean your website is down.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa9qnt76kj2utrpjvrhhs.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa9qnt76kj2utrpjvrhhs.jpg" alt="Best practices to configure an uptime monitoring service" width="633" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When checking from multiple regions, uptime monitoring services will usually require a certain number of regions to fail before sending an alert. This is a great way to avoid false positives and make sure that your website is really down for your users.&lt;/p&gt;

&lt;p&gt;You should always monitor all resources from at least 2 regions, and more for critical resources. When possible, choose the regions closest to your users; this will give you the best results and accurate performance metrics.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🎓 Monitor all resources from at least 2 regions.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Alerting
&lt;/h2&gt;

&lt;p&gt;The last thing to consider is how you want to be alerted. Most monitoring services will offer a wide range of alerting options, from email to SMS, to Slack or Discord notifications.&lt;/p&gt;

&lt;p&gt;As we previously established, not all resources are equally important, and you might want to be alerted differently for each of them. Think about the way your company communicates, and how you could integrate the alerts into your existing workflow. You might want to create a dedicated channel for alerts, or use a dedicated email address. For the most critical resources, you might want to use SMS or phone notifications, but discuss this topic with your team and make sure that everyone is on the same page. If you configure SMS alerts and the on-call person keeps their phone on silent, that might not be the best idea.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🎓 Choose the alerting method adapted to each resource and discuss this topic with your team.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In most cases uptime monitoring is a set-and-forget kind of thing, but I've seen many teams struggle with false positives and alert fatigue. By following these best practices, you should be able to get the best possible monitoring without false positives, and make sure that you are alerted when your website is really down.&lt;/p&gt;




&lt;p&gt;If you are looking for an uptime monitoring service that helps you implement these best practices, you should check out &lt;a href="https://phare.io" rel="noopener noreferrer"&gt;Phare.io&lt;/a&gt;. It's free to start and scales with your needs.&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>uptime</category>
      <category>monitoring</category>
      <category>guide</category>
    </item>
    <item>
      <title>How we run Ghost on Docker with subdirectory routing</title>
      <dc:creator>Nicolas Beauvais</dc:creator>
      <pubDate>Thu, 22 Aug 2024 17:00:17 +0000</pubDate>
      <link>https://dev.to/phare/how-we-run-ghost-on-docker-with-subdirectory-routing-5b20</link>
      <guid>https://dev.to/phare/how-we-run-ghost-on-docker-with-subdirectory-routing-5b20</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihy48qgcs0qdhlo7kuqe.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihy48qgcs0qdhlo7kuqe.jpg" alt="How we run Ghost on Docker with subdirectory routing" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Deciding on the right blog platform is always a bit of a hassle, whether it's for my personal blog or my company. I often have to resist the urge to build something from scratch, which inevitably means sinking the next two weeks into coding yet another blog from the ground up.&lt;/p&gt;

&lt;p&gt;When it came to setting up a blog for &lt;a href="https://phare.io" rel="noopener noreferrer"&gt;Phare.io&lt;/a&gt;, I made a conscious effort to minimize the time spent on setup. After some research, I decided on &lt;a href="https://ghost.org" rel="noopener noreferrer"&gt;Ghost&lt;/a&gt;, a well-regarded content platform that seemed to meet all our needs. Self-hosting looked straightforward, and the documentation mentioned support for subdirectory routing, which was a key requirement for our SEO strategy.&lt;/p&gt;

&lt;p&gt;But as is often the case, things weren't quite as simple as they first appeared. Hence this blog post, to guide anyone looking to do something similar.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up Ghost on Docker
&lt;/h2&gt;

&lt;p&gt;To keep things organized, the plan was to isolate Ghost on its own server. For this, I spun up a new VPS instance on &lt;a href="https://www.hetzner.com/cloud/" rel="noopener noreferrer"&gt;Hetzner&lt;/a&gt; running a Docker-CE image.&lt;/p&gt;

&lt;p&gt;This instance runs on a private network without a public IP, and the firewall is configured to accept traffic only from Phare's NGINX server on port 8080.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26av6vvcue3uw3uilapc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26av6vvcue3uw3uilapc.png" alt="How we run Ghost on Docker with subdirectory routing" width="800" height="315"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This setup might be a bit over the top for hosting a blog, but it was quick to implement and significantly reduces the attack surface, so there’s no reason not to do it.&lt;/p&gt;

&lt;p&gt;With the server ready, the next step was to write a Docker Compose file to configure Ghost's &lt;a href="https://hub.docker.com/_/ghost" rel="noopener noreferrer"&gt;Docker image&lt;/a&gt; on port &lt;code&gt;8080&lt;/code&gt; along with a MySQL database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ghost&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ghost:5-alpine&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;always&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;8080:2368&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;database__client&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mysql&lt;/span&gt;
      &lt;span class="na"&gt;database __connection__ host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;db&lt;/span&gt;
      &lt;span class="na"&gt;database __connection__ user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;root&lt;/span&gt;
      &lt;span class="na"&gt;database __connection__ password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;ghost_db_password&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
      &lt;span class="na"&gt;database __connection__ database&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ghost&lt;/span&gt;
      &lt;span class="na"&gt;mail__transport&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;smtp&lt;/span&gt;
      &lt;span class="na"&gt;mail __options__ host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;ghost_mail_host&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
      &lt;span class="na"&gt;mail __options__ port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;ghost_mail_port&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
      &lt;span class="na"&gt;mail __options__ auth__user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;ghost_mail_user&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
      &lt;span class="na"&gt;mail __options__ auth__pass&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;ghost_mail_password&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
      &lt;span class="na"&gt;mail __options__ secure&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://phare.io/blog&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ghost:/var/lib/ghost/content&lt;/span&gt;

  &lt;span class="na"&gt;db&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mysql:8.0&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;always&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;MYSQL_ROOT_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;ghost_db_password&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;db:/var/lib/mysql&lt;/span&gt;

&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ghost&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;db&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here are some key points to note in that file:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;ghost&lt;/code&gt; service binds to port 8080, which is the one we opened on the firewall.&lt;/li&gt;
&lt;li&gt;Both services use persistent storage, making backups straightforward.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;url&lt;/code&gt; environment variable should be set to the public URL where your blog will be hosted.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once the configuration is complete, you can start the services with Docker Compose:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In our case this step is automated with an Ansible playbook task:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;community.docker.docker_compose_v2&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;project_src&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/docker/ghost&lt;/span&gt;
    &lt;span class="na"&gt;files&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker-compose-ghost.yml&lt;/span&gt;
    &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;present&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And just like that, we have a running Ghost instance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring Subdirectory Routing with NGINX
&lt;/h2&gt;

&lt;p&gt;Phare.io uses an NGINX server to manage load balancing, headers, and a few other tasks. Our setup involves complex routing to allow users to &lt;a href="https://phare.io/products/uptime" rel="noopener noreferrer"&gt;create status pages&lt;/a&gt; on &lt;code&gt;*.status.phare.io&lt;/code&gt; or their own domains.&lt;/p&gt;

&lt;p&gt;For the blog, we wanted it to be accessible only on our main &lt;code&gt;phare.io&lt;/code&gt; domain, so the first step was to adjust our configuration to ensure only &lt;code&gt;phare.io&lt;/code&gt; was served, excluding any subdomains.&lt;/p&gt;

&lt;p&gt;With that in place, I created a location block to route all &lt;code&gt;/blog&lt;/code&gt; traffic to the Ghost instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;443&lt;/span&gt; &lt;span class="s"&gt;ssl&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="s"&gt;[::]:443&lt;/span&gt; &lt;span class="s"&gt;ssl&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;http2&lt;/span&gt; &lt;span class="no"&gt;on&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kn"&gt;server_name&lt;/span&gt; &lt;span class="s"&gt;phare.io&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="c1"&gt;# location / {&lt;/span&gt;
        &lt;span class="c1"&gt;# Configuration for our Laravel app&lt;/span&gt;
    &lt;span class="c1"&gt;# }&lt;/span&gt;

    &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="s"&gt;^~&lt;/span&gt; &lt;span class="n"&gt;/blog&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;client_max_body_size&lt;/span&gt; &lt;span class="mi"&gt;10G&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;X-Forwarded-For&lt;/span&gt; &lt;span class="nv"&gt;$proxy_add_x_forwarded_for&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Host&lt;/span&gt; &lt;span class="nv"&gt;$http_host&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;X-Forwarded-Proto&lt;/span&gt; &lt;span class="nv"&gt;$scheme&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_pass&lt;/span&gt; &lt;span class="s"&gt;http://10.0.1.2:8080&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="p"&gt;~&lt;/span&gt;&lt;span class="sr"&gt;*&lt;/span&gt; &lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="s"&gt;.(jpg|jpeg|webp|png|svg|gif|ico|css|js|eot|ttf|woff)&lt;/span&gt;$ &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;gzip&lt;/span&gt; &lt;span class="no"&gt;on&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;expires&lt;/span&gt; &lt;span class="mi"&gt;1M&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;access_log&lt;/span&gt; &lt;span class="no"&gt;off&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;add_header&lt;/span&gt; &lt;span class="s"&gt;Cache-Control&lt;/span&gt; &lt;span class="s"&gt;public&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I removed a few irrelevant lines; here are the important details:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;As recommended by the Ghost documentation, we set a high &lt;code&gt;client_max_body_size&lt;/code&gt; to allow large file uploads via the Ghost admin panel.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;^~&lt;/code&gt; modifier on the location block ensures no other location block takes precedence, which is crucial to prevent interference from the caching rules further down that could break asset loading.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;proxy_pass&lt;/code&gt; directive points to our Docker server's private IP &lt;code&gt;10.0.1.2&lt;/code&gt; and the previously opened port &lt;code&gt;8080&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Accessing the blog
&lt;/h2&gt;

&lt;p&gt;With everything set up, the blog is now accessible at &lt;code&gt;phare.io/blog&lt;/code&gt;, and the admin panel at &lt;code&gt;phare.io/blog/ghost&lt;/code&gt;. Our Ghost Docker instance runs securely on a private network.&lt;/p&gt;

&lt;p&gt;To speed up asset loading and caching, we use &lt;a href="https://bunny.net/" rel="noopener noreferrer"&gt;bunny.net&lt;/a&gt; on the &lt;code&gt;phare.io&lt;/code&gt; domain. Most of our existing rules worked seamlessly on the blog, but I hit a snag when Ghost couldn’t create a session cookie, preventing me from signing in.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fchclm07286sjpzm66dcd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fchclm07286sjpzm66dcd.png" alt="How we run Ghost on Docker with subdirectory routing" width="800" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The problem was that cookies were disabled on the domain. Changing this setting solved the issue without affecting the rest of the site, as Phare only uses session cookies on the &lt;code&gt;app.phare.io&lt;/code&gt; domain. However, a potential improvement could be moving Ghost's admin panel to its own subdomain, which would allow this setting to be re-enabled.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Hosting a Ghost blog on a &lt;code&gt;/blog&lt;/code&gt; subdirectory path using NGINX is a practical solution when you want to seamlessly integrate your blog with your main website. While it requires some configuration, the benefits for SEO and branding make the effort worthwhile.&lt;/p&gt;

&lt;p&gt;I hope this post helps you in setting up your own Ghost blog. The Phare team is delighted with the platform so far, and I’m glad I didn’t spend weeks building a half-baked in-house solution.&lt;/p&gt;




&lt;p&gt;Would you like to make sure your blog or any other part of your website stays online? &lt;a href="https://app.phare.io/register" rel="noopener noreferrer"&gt;Create a Phare account for free&lt;/a&gt; and start monitoring your website today.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>ghost</category>
      <category>nginx</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Downsampling time series data</title>
      <dc:creator>Nicolas Beauvais</dc:creator>
      <pubDate>Mon, 29 Jul 2024 09:50:00 +0000</pubDate>
      <link>https://dev.to/phare/downsampling-time-series-data-4e0p</link>
      <guid>https://dev.to/phare/downsampling-time-series-data-4e0p</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs49jwu4iineussi8ycza.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs49jwu4iineussi8ycza.jpg" alt="Downsampling time series data" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At &lt;a href="https://phare.io/products/uptime" rel="noopener noreferrer"&gt;Phare uptime&lt;/a&gt; we allow you to view your monitor's performance data with up to 90 days of history. While this might not seem like a lot, a monitor running every minute will produce about 130 thousand data points over that time frame, for a single region.&lt;/p&gt;

&lt;p&gt;Showing that amount of data in a graph would be slow and impossible to understand due to the sheer number of data points. Finding the best solution requires the right balance between user experience, data quality, and performance. We need to tell you the story of your monitor's performance in a way that is easy to understand, fast, and accurate.&lt;/p&gt;

&lt;p&gt;In this article, we dive into a few techniques that we used to downsample the data by 99.4% while keeping the most important information.&lt;/p&gt;

&lt;h2&gt;
  
  
  Raw data
&lt;/h2&gt;

&lt;p&gt;We start with the raw data collected from monitoring the Phare.io dashboard in the last 90 days. The performance varied a lot during this time frame thanks to a noisy CPU neighbor, which made it the perfect candidate for this article.&lt;/p&gt;

&lt;p&gt;If we plot the raw data with 130 thousand points, we get a chart that looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbuswd7742mdzavlfhhk5.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbuswd7742mdzavlfhhk5.webp" alt="Downsampling time series data" width="800" height="219"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The performance is not terrible: loading the data takes about 230ms, and rendering the chart is done in under 100ms, which is already better than a lot of other charts out there. But the real problem is that the chart is unreadable: we can't see any patterns, and it's hard to understand what's going on.&lt;/p&gt;

&lt;p&gt;The confusion comes not only from the number of data points but also from the scale differences in the data. The vast majority of requests complete in under 200ms, but about 0.1% of them are slower. No matter how fast your website and our monitoring infrastructure are, a few requests will always be slower due to network latency, congestion, or other factors.&lt;/p&gt;

&lt;h2&gt;
  
  
  Removing anomalies
&lt;/h2&gt;

&lt;p&gt;Our first step is to remove the anomalies from our data set. A single 4-second request among a few thousand will skew the data and make it hard to read. We need to remove these anomalies in a way that will keep any sustained decline in performance visible.&lt;/p&gt;

&lt;p&gt;To solve this problem, we can use one of two techniques:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Standard deviation: We can calculate the standard deviation of the dataset and remove any data points that are outside a certain multiple of the standard deviation.&lt;/li&gt;
&lt;li&gt;Percentile: We can calculate the 99th percentile of the data set and remove any data points that are above that value.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both solutions offer similar results, but they need to be applied over a rolling window to make sure that we don't remove any sustained decline in performance, while still catching anomalies during periods of high performance.&lt;/p&gt;

&lt;p&gt;We use the formula &lt;code&gt;rolling mean + (3 x rolling standard deviation)&lt;/code&gt; to remove any data points that are more than three standard deviations above the rolling mean, and choose a rolling window of 30 (15 points before and after the current point):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7azqcbgmthygj3fnfk24.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7azqcbgmthygj3fnfk24.webp" alt="Downsampling time series data" width="800" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By zooming in on the few remaining spikes, we can see that they are not isolated anomalies but sustained periods of lower performance which we want to keep in our data set.&lt;/p&gt;

&lt;p&gt;The following is a zoomed-in view of the tallest spike on the right. We can see that the spike is followed by a series of slower requests over 1h30; this is exactly the kind of information we want to keep:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1s1ahy7fu8snyf07uogc.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1s1ahy7fu8snyf07uogc.webp" alt="Downsampling time series data" width="800" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we did not calculate the deviation or percentile in a rolling window, we would get the following chart, which removes everything above 600ms while keeping many anomalies in the lower performance range:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpecxgqucqp1tjlw0thyx.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpecxgqucqp1tjlw0thyx.webp" alt="Downsampling time series data" width="800" height="205"&gt;&lt;/a&gt;&lt;/p&gt;
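&lt;p&gt;As a rough illustration, the rolling filter described above can be sketched in plain Python (a minimal sketch assuming a simple list of response times, not our actual implementation, which runs in ClickHouse):&lt;/p&gt;

```python
import statistics

def remove_anomalies(values, half_window=15, k=3):
    """Drop points more than k rolling standard deviations above the rolling mean.

    half_window is the number of neighbors considered on each side of the
    current point, matching the window of 30 used above.
    """
    n = len(values)
    kept = []
    for i, v in enumerate(values):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        neighborhood = values[lo:hi]
        mean = statistics.fmean(neighborhood)
        std = statistics.stdev(neighborhood)
        # Keep the point unless it sits above the local mean + k * std threshold.
        if v <= mean + k * std:
            kept.append(v)
    return kept
```

&lt;p&gt;A lone 4-second spike surrounded by fast requests gets dropped, while a sustained slowdown raises the local mean and survives the filter.&lt;/p&gt;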

&lt;h2&gt;
  
  
  Rolling window average
&lt;/h2&gt;

&lt;p&gt;The next step is to smooth the data with a rolling window average. This technique will slightly reduce the gap between two adjacent data points and make the chart more readable while allowing us to detect trends in the data.&lt;/p&gt;

&lt;p&gt;For this example, we use a rolling window of 10 data points (5 points before and after the current point) to smooth the curve without losing too much information:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk07a7uq1wayuqtwr34rw.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk07a7uq1wayuqtwr34rw.webp" alt="Downsampling time series data" width="800" height="205"&gt;&lt;/a&gt;&lt;/p&gt;
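&lt;p&gt;The same idea in a minimal Python sketch: a centered rolling mean, where the window size is the only parameter (again an illustration, not our production code):&lt;/p&gt;

```python
def rolling_average(values, half_window=5):
    """Smooth a series with a centered rolling mean (half_window points per side)."""
    n = len(values)
    smoothed = []
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        smoothed.append(sum(values[lo:hi]) / (hi - lo))
    return smoothed
```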

&lt;h2&gt;
  
  
  Downsampling
&lt;/h2&gt;

&lt;p&gt;Our data is now clean and smooth, but we still have 130 thousand data points to display, which wastes bandwidth and hurts rendering performance. To reduce the number of data points without losing too much information, we can use the largest triangle three buckets (LTTB) algorithm.&lt;/p&gt;

&lt;p&gt;The LTTB algorithm is a downsampling technique that finds the most important points in the data set by dividing the data into buckets and selecting the point with the largest triangle area in each bucket. In simpler words, the algorithm will only keep the points that are the most representative of the data set so that the overall shape of the curve is preserved.&lt;/p&gt;

&lt;p&gt;By applying the LTTB algorithm to our data set, we can reduce the number of data points from 130 thousand to 750, which is a reduction of almost 99.5%. In the form of a JSON payload, we go from 1.53 MB to 13 KB, which is a significant reduction in bandwidth.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh9dzmwejiku3b9cq4upp.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh9dzmwejiku3b9cq4upp.webp" alt="Downsampling time series data" width="800" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, the chart looks almost identical to the full data set, but uses only a fraction of the data points.&lt;/p&gt;

&lt;p&gt;It is important to carefully prepare the data before applying the LTTB algorithm: because the algorithm is specifically designed to preserve the overall shape of the curve, it will keep any remaining anomalies in the final data set.&lt;/p&gt;
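&lt;p&gt;For reference, here is a minimal Python sketch of the LTTB algorithm itself; in production we rely on ClickHouse's built-in implementation instead:&lt;/p&gt;

```python
def lttb(x, y, n_out):
    """Largest-Triangle-Three-Buckets downsampling.

    Keeps the first and last points, splits the rest into n_out - 2 buckets,
    and keeps from each bucket the point forming the largest triangle with
    the previously kept point and the average of the next bucket.
    """
    n = len(x)
    if n_out >= n or n_out < 3:
        return list(zip(x, y))

    every = (n - 2) / (n_out - 2)  # bucket size for the middle points
    sampled = [(x[0], y[0])]
    a = 0  # index of the previously selected point

    for i in range(n_out - 2):
        start = int(i * every) + 1          # current bucket range
        end = int((i + 1) * every) + 1
        nxt_end = min(int((i + 2) * every) + 1, n)

        # Average of the next bucket, used as the third triangle vertex.
        avg_x = sum(x[end:nxt_end]) / (nxt_end - end)
        avg_y = sum(y[end:nxt_end]) / (nxt_end - end)

        # Pick the point with the largest triangle area in the current bucket.
        max_area, max_idx = -1.0, start
        for j in range(start, end):
            area = abs((x[a] - avg_x) * (y[j] - y[a])
                       - (x[a] - x[j]) * (avg_y - y[a])) / 2
            if area > max_area:
                max_area, max_idx = area, j

        sampled.append((x[max_idx], y[max_idx]))
        a = max_idx

    sampled.append((x[-1], y[-1]))
    return sampled
```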

&lt;h2&gt;
  
  
  Implementation with ClickHouse
&lt;/h2&gt;

&lt;p&gt;ClickHouse is a powerful column-oriented database optimized for analytical queries that offers unparalleled performance for time series data. We extensively use it at Phare to store and analyze the performance data of your monitors.&lt;/p&gt;

&lt;p&gt;All the techniques described in this article can be implemented in a single ClickHouse query using the &lt;a href="https://clickhouse.com/docs/en/sql-reference/aggregate-functions/reference/largestTriangleThreeBuckets" rel="noopener noreferrer"&gt;largestTriangleThreeBuckets&lt;/a&gt; function, as well as &lt;a href="https://clickhouse.com/docs/en/sql-reference/window-functions" rel="noopener noreferrer"&gt;window functions&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Apply the LTTB algorithm to the data set&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;largestTriangleThreeBuckets&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;750&lt;/span&gt;&lt;span class="p"&gt;)(&lt;/span&gt;
    &lt;span class="nv"&gt;`cleaned_results`&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nv"&gt;`timestamp`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nv"&gt;`cleaned_results`&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nv"&gt;`time`&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;
  &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="c1"&gt;-- Smooth the remaining data with a rolling window average&lt;/span&gt;
    &lt;span class="k"&gt;SELECT&lt;/span&gt;
      &lt;span class="nv"&gt;`raw_results`&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nv"&gt;`timestamp`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="k"&gt;avg&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;`raw_results`&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nv"&gt;`time`&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;OVER&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
          &lt;span class="k"&gt;ROWS&lt;/span&gt; &lt;span class="k"&gt;BETWEEN&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt; &lt;span class="k"&gt;PRECEDING&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt; &lt;span class="k"&gt;FOLLOWING&lt;/span&gt;
      &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="nv"&gt;`time`&lt;/span&gt;
    &lt;span class="k"&gt;FROM&lt;/span&gt;
      &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="c1"&gt;-- Select the raw data&lt;/span&gt;
        &lt;span class="k"&gt;SELECT&lt;/span&gt;
          &lt;span class="nv"&gt;`timestamp`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="nv"&gt;`time`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="c1"&gt;-- Calculate the rolling window average and standard deviation&lt;/span&gt;
          &lt;span class="k"&gt;avg&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;`time`&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;OVER&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
              &lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="nv"&gt;`timestamp`&lt;/span&gt; &lt;span class="k"&gt;ROWS&lt;/span&gt; &lt;span class="k"&gt;BETWEEN&lt;/span&gt; &lt;span class="mi"&gt;15&lt;/span&gt; &lt;span class="k"&gt;PRECEDING&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="mi"&gt;15&lt;/span&gt; &lt;span class="k"&gt;FOLLOWING&lt;/span&gt;
          &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;stddevSamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;`time`&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;OVER&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="nv"&gt;`timestamp`&lt;/span&gt; &lt;span class="k"&gt;ROWS&lt;/span&gt; &lt;span class="k"&gt;BETWEEN&lt;/span&gt; &lt;span class="mi"&gt;15&lt;/span&gt; &lt;span class="k"&gt;PRECEDING&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="mi"&gt;15&lt;/span&gt; &lt;span class="k"&gt;FOLLOWING&lt;/span&gt;
          &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;anomalies&lt;/span&gt;
        &lt;span class="k"&gt;FROM&lt;/span&gt;
          &lt;span class="nv"&gt;`performance_table`&lt;/span&gt;
      &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="nv"&gt;`raw_results`&lt;/span&gt;
    &lt;span class="c1"&gt;-- Filter out anomalies&lt;/span&gt;
    &lt;span class="k"&gt;WHERE&lt;/span&gt;
      &lt;span class="nv"&gt;`raw_results`&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nv"&gt;`time`&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nv"&gt;`raw_results`&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nv"&gt;`anomalies`&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="nv"&gt;`cleaned_results`&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Drawing the data
&lt;/h2&gt;

&lt;p&gt;Phare uses &lt;a href="https://github.com/leeoniya/uPlot" rel="noopener noreferrer"&gt;uPlot&lt;/a&gt; to draw charts in the frontend. uPlot is a small, fast, and flexible charting library, which perfectly fits our needs. It allows us to draw charts with a large number of data points with the best possible performance, where other libraries would struggle.&lt;/p&gt;

&lt;p&gt;Keep in mind that uPlot is a low-level library, which means that you will need to spend a good amount of time configuring it to get the desired result. But the performance and flexibility it offers are worth the effort.&lt;/p&gt;

&lt;p&gt;Because we already processed the data with ClickHouse, we only need to separate the data into two arrays, one for the x-axis and one for the y-axis, and pass them to uPlot.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;uPlot&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="nx"&gt;timestamps&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// x-axis values&lt;/span&gt;
  &lt;span class="nx"&gt;times&lt;/span&gt; &lt;span class="c1"&gt;// y-axis values&lt;/span&gt;
&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Of course, Phare's implementation is more complex than that: we need to handle responsive layouts, live streaming, and displaying uptime incidents, but that goes beyond the scope of this article.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By removing anomalies, smoothing the data with a rolling window average, and downsampling the data with the LTTB algorithm, we were able to reduce the amount of data by 99.4% while keeping the most important information.&lt;/p&gt;

&lt;p&gt;The chart is now readable, fast to load, and easy to understand.&lt;/p&gt;




&lt;p&gt;Would you like to see this performance chart for your own website? &lt;a href="https://app.phare.io/register" rel="noopener noreferrer"&gt;Sign up for free&lt;/a&gt; and start monitoring your website today.&lt;/p&gt;

</description>
      <category>clickhouse</category>
      <category>sql</category>
      <category>database</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Deploy Laravel on Railway</title>
      <dc:creator>Nicolas Beauvais</dc:creator>
      <pubDate>Thu, 04 Aug 2022 13:43:09 +0000</pubDate>
      <link>https://dev.to/nicolasbeauvais/deploy-laravel-on-railway-eho</link>
      <guid>https://dev.to/nicolasbeauvais/deploy-laravel-on-railway-eho</guid>
      <description>&lt;p&gt;In this article, I will show you how to properly deploy a Laravel project on Railway, an infrastructure platform that handle all the system administration and DevOps tasks for you.&lt;/p&gt;

&lt;p&gt;Making Laravel work on Railway is easy enough, but there are a few catches that deserve a thorough explanation, unless you want to spend an entire day browsing the internet searching for nonexistent explanations like I did. Perks of being an early adopter!&lt;/p&gt;

&lt;p&gt;One of Railway's characteristics is that every time a deployment occurs, your project's server is rebuilt with your new code, which means that everything stored on the file system is erased.&lt;/p&gt;

&lt;p&gt;If you're not familiar with PHP deployment on horizontally scaled architectures or with blue-green deployment, we will cover all the configuration tweaks you need to make your Laravel project work properly.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. First deployment
&lt;/h2&gt;

&lt;p&gt;Railway deployments are based on Nixpacks, an open source tool (developed by the Railway team) that reads your project's source code and creates a compatible image in which it can run.&lt;/p&gt;

&lt;p&gt;On a standard Laravel project, Nixpacks will set up nginx to handle requests, PHP at your project's required version, and Node.js with your favorite package manager (Yarn, pnpm, or npm).&lt;/p&gt;

&lt;p&gt;Creating a project is straightforward, just head to &lt;a href="https://railway.app/new" rel="noopener noreferrer"&gt;Railway.app/new&lt;/a&gt;, set up your account, and choose the repository to deploy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwlg8qf92de2ctqkwd4x1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwlg8qf92de2ctqkwd4x1.png" alt="New Railway project" width="800" height="450"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;If you gave Railway access to a GitHub repository, you automatically get a push-to-deploy setup. Pretty nice.&lt;/p&gt;

&lt;p&gt;Your project is now deploying. The build step should work fine, but you won't be able to visit your landing page yet, because Laravel needs a few environment variables to run.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3nqffbvvzkg0izmegd0r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3nqffbvvzkg0izmegd0r.png" alt="First project build" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can paste the content of your &lt;code&gt;.env&lt;/code&gt; file in the raw editor with the values you would like for production.&lt;/p&gt;

&lt;p&gt;We can also tell Nixpacks to add a few dependencies when building our project with the &lt;code&gt;NIXPACKS_PKGS&lt;/code&gt; environment variable, like PHP Redis to communicate with a Redis server, or PHP GD to work with image files.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feb4h751lbs5q5720gj44.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feb4h751lbs5q5720gj44.png" alt="Project environment variables" width="800" height="450"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;You can search for &lt;a href="https://search.nixos.org/packages" rel="noopener noreferrer"&gt;available packages here&lt;/a&gt; to make sure you get the syntax right.&lt;/p&gt;
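&lt;p&gt;As a hypothetical example, the variable could look like the following; the exact attribute names depend on your PHP version, so verify them with the package search before using them:&lt;/p&gt;

```shell
# Hypothetical example: the attribute names vary with the PHP version you target,
# so check them against the Nix package search before relying on them.
NIXPACKS_PKGS="php81Extensions.redis php81Extensions.gd"
```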

&lt;p&gt;A new deployment will automatically start to rebuild your project with the provided environment variables.&lt;/p&gt;

&lt;p&gt;By default, your project will not be publicly accessible. You can head to the settings panel and generate a Railway subdomain with SSL, or follow the instructions to use your own domain.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo0zueyzjxm9dcqdhbykr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo0zueyzjxm9dcqdhbykr.png" alt="Railway subdomain" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Database creation
&lt;/h2&gt;

&lt;p&gt;Because the file system is reset on every deployment, using a local database like SQLite is not possible. No worries, Railway offers MySQL, PostgreSQL, MongoDB and Redis.&lt;/p&gt;

&lt;p&gt;In your Railway project, click on the &lt;strong&gt;new&lt;/strong&gt; button, and choose your favorite database management system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmy572joxllfkuneb40de.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmy572joxllfkuneb40de.png" alt="New service" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once created, you will be able to retrieve the credentials in the &lt;strong&gt;Variables&lt;/strong&gt; panel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0synf4kvompmlmotfqgq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0synf4kvompmlmotfqgq.png" alt="MySQL credentials" width="800" height="450"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;You can copy these values into your Laravel project's environment variables to give your code access to the database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F016c07u1b146xuk121z9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F016c07u1b146xuk121z9.png" alt="Laravel database environment variable" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A new deployment will automatically rebuild your project with the provided environment variables and give your code access to the database. Keep in mind that the database does not yet contain any tables, as we have not yet run Laravel's migrations.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Redis
&lt;/h2&gt;

&lt;p&gt;No persistent file system means that the default Laravel drivers for cache, queue, and session will not work properly, disconnecting our users at every deployment.&lt;/p&gt;

&lt;p&gt;To fix this we need Redis, which can be created the same way as our MySQL/PostgreSQL database.&lt;/p&gt;

&lt;p&gt;Again, you will need to copy the credentials found in the &lt;strong&gt;Variables&lt;/strong&gt; panel to your &lt;code&gt;.env&lt;/code&gt; file, and configure Laravel session, cache, and queue to use the Redis driver.&lt;/p&gt;
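&lt;p&gt;As a sketch, the relevant &lt;code&gt;.env&lt;/code&gt; entries would look like the following, where the host and password placeholders stand for the values from the &lt;strong&gt;Variables&lt;/strong&gt; panel:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SESSION_DRIVER=redis
CACHE_DRIVER=redis
QUEUE_CONNECTION=redis

REDIS_HOST=your-redis-host
REDIS_PORT=6379
REDIS_PASSWORD=your-redis-password
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;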

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx5apl4d58xgoa6n1vb7j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx5apl4d58xgoa6n1vb7j.png" alt="Redis credentials" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjb16wz7evi1zt03l49r6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjb16wz7evi1zt03l49r6.png" alt="Laravel Redis environment variable" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Customizing the build step
&lt;/h2&gt;

&lt;p&gt;During deployment, Nixpacks will install your project dependencies and run the &lt;strong&gt;build&lt;/strong&gt; command of your &lt;code&gt;package.json&lt;/code&gt; to build assets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;composer install&lt;/li&gt;
&lt;li&gt;[yarn|npm|pnpm] install&lt;/li&gt;
&lt;li&gt;[yarn|npm|pnpm] run build&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is fine if you are serving a simple static site, but it is missing some Laravel-specific steps.&lt;/p&gt;

&lt;p&gt;The best way to do this is to set the &lt;code&gt;NIXPACKS_BUILD_CMD&lt;/code&gt; environment variable to run the following commands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[yarn|npm|pnpm] run build&lt;/li&gt;
&lt;li&gt;php artisan optimize&lt;/li&gt;
&lt;li&gt;php artisan migrate --force&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Notice that we still need to include the asset build step, as the default build command is otherwise replaced by our custom commands.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsy46pdxybdox2v4cl7o6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsy46pdxybdox2v4cl7o6.png" alt="Custom build environment variable" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I illustrate a simple approach here by chaining each command with &lt;code&gt;&amp;amp;&amp;amp;&lt;/code&gt;, but you could also put all of this in a Composer script, a Bash script, or a Makefile.&lt;/p&gt;
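&lt;p&gt;As a sketch, assuming npm as your package manager, the chained variant of the three steps above looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NIXPACKS_BUILD_CMD="npm run build &amp;amp;&amp;amp; php artisan optimize &amp;amp;&amp;amp; php artisan migrate --force"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;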

&lt;h2&gt;
  
  
  5. Creating a worker server
&lt;/h2&gt;

&lt;p&gt;Railway only allows a single process per server; you currently have a web process running nginx and serving your PHP code. &lt;/p&gt;

&lt;p&gt;What if you need another process to run a queue job in your app?&lt;/p&gt;

&lt;p&gt;Then we just create a new server to serve that exact purpose! &lt;/p&gt;

&lt;p&gt;In your Railway project, click on the &lt;strong&gt;new&lt;/strong&gt; button, and choose your code repository again to create a second server.&lt;/p&gt;

&lt;p&gt;Copy all your environment variables to make sure both code instances run with the same configuration, and override the &lt;code&gt;NIXPACKS_START_CMD&lt;/code&gt; variable to run the Laravel queue worker instead of a web server.&lt;/p&gt;
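&lt;p&gt;As a sketch, the override could look like this (the &lt;code&gt;--tries&lt;/code&gt; option is only an illustration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NIXPACKS_START_CMD="php artisan queue:work --tries=3"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;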

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7dxtrh4cmukmm5pieslb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7dxtrh4cmukmm5pieslb.png" alt="Custom start environment variable" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Your worker should now process your queue jobs. You can create as many workers as you want with different configurations to suit your project's needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Making signed routes work
&lt;/h2&gt;

&lt;p&gt;You might already have experienced issues with signed routes if you have worked with a load balancer before.&lt;/p&gt;

&lt;p&gt;When setting up a project, Railway receives the request and proxies it to your server over plain HTTP.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Request -https-&amp;gt; Railway -http-&amp;gt; Server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When generating a signed route, Laravel will create a hash of the full URL (which will use &lt;code&gt;https&lt;/code&gt;), but when a user clicks the link, your code will verify the signature against a URL starting with &lt;code&gt;http&lt;/code&gt; instead of &lt;code&gt;https&lt;/code&gt;, making the signature invalid.&lt;/p&gt;

&lt;p&gt;There are two things you can add to your code to prevent this. First, force all URLs to be generated with the &lt;code&gt;https&lt;/code&gt; scheme. This can be done in your &lt;code&gt;AppServiceProvider&lt;/code&gt; boot method:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="n"&gt;boot&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$this&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'production'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="no"&gt;URL&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;forceScheme&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'https'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we need to trust Railway's proxy to serve &lt;code&gt;https&lt;/code&gt;. This can be configured in the &lt;code&gt;TrustProxies&lt;/code&gt; middleware by setting the &lt;code&gt;proxies&lt;/code&gt; property to a wildcard &lt;code&gt;*&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="cp"&gt;&amp;lt;?php&lt;/span&gt;

&lt;span class="kn"&gt;namespace&lt;/span&gt; &lt;span class="nn"&gt;App\Http\Middleware&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kn"&gt;use&lt;/span&gt; &lt;span class="nc"&gt;Illuminate\Http\Middleware\TrustProxies&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nc"&gt;Middleware&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;use&lt;/span&gt; &lt;span class="nc"&gt;Illuminate\Http\Request&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;TrustProxies&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;Middleware&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;protected&lt;/span&gt; &lt;span class="nv"&gt;$proxies&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'*'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="mf"&gt;...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using a wildcard might not be best practice, but I could not find a list of Railway IPs; let me know if you find one!&lt;/p&gt;

&lt;h2&gt;
  
  
  7. Blue-green deployment
&lt;/h2&gt;

&lt;p&gt;Finally, we need to make deployments zero-downtime. For this, we need to make sure that when a new deployment creates a new server, Railway can verify that our code runs properly before switching user traffic to the new server and shutting down the previous one.&lt;/p&gt;

&lt;p&gt;Railway offers health checks that do exactly this: you can choose the route that will be requested, and when it returns a success code (2xx), Railway knows that your server is ready to receive requests.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7h1qy21o2eh70k9vboiv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7h1qy21o2eh70k9vboiv.png" alt="Health checks" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I've been looking for the "Vercel for PHP" for a long time. Railway is still a young service that lacks some flexibility, but the developer experience makes up for it. I can't wait to see it evolve in the coming years!&lt;/p&gt;

&lt;p&gt;That said, I still use services like Laravel Vapor or Laravel Forge, which are pricier but offer better performance, flexibility, and configuration, and remain my go-to platforms for serious professional projects.&lt;/p&gt;

</description>
      <category>laravel</category>
      <category>beginners</category>
      <category>tutorial</category>
      <category>devops</category>
    </item>
    <item>
      <title>Why I learned the Linux command line as a developer, and you should too</title>
      <dc:creator>Nicolas Beauvais</dc:creator>
      <pubDate>Tue, 26 Jul 2022 09:14:00 +0000</pubDate>
      <link>https://dev.to/nicolasbeauvais/why-i-learned-the-linux-command-line-as-a-developer-and-you-should-too-m8o</link>
      <guid>https://dev.to/nicolasbeauvais/why-i-learned-the-linux-command-line-as-a-developer-and-you-should-too-m8o</guid>
      <description>&lt;p&gt;It’s always intimidating to find yourself in front of a command line when beginning your journey as a developer. We have the chance to live in a world where you can choose to do pretty much anything from a comfortable GUI, so why bother working with an old school terminal?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Read the original article on &lt;a href="https://divinglinux.com/blog/how-to-choose-the-best-linux-distribution-for-your-use-case" rel="noopener noreferrer"&gt;DivingLinux.com&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Well, let me tell you that learning to use a command line is far from outdated, and it remains one of the most powerful tools to improve your productivity. I personally chose to focus on Linux, but developers working on macOS or Windows (with WSL2) can benefit just as much from it, and thanks to &lt;a href="https://en.wikipedia.org/wiki/POSIX" rel="noopener noreferrer"&gt;POSIX compliance&lt;/a&gt;, apply this knowledge on most operating systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  I tell you this from experience
&lt;/h2&gt;

&lt;p&gt;I'm a full-stack developer who has been working with Linux for the past 7 years, slowly embracing the command line and using it every day to execute simple and complex tasks.&lt;/p&gt;

&lt;p&gt;The first time I was presented with a terminal, all I saw was an inefficient and complicated way to do the things I already knew how to do with the native applications available on my operating system, so why bother?&lt;/p&gt;

&lt;p&gt;Even if I did not like it, it was the only way I knew to tinker with servers, mostly to configure Apache to host my newbie PHP code online. From these simple operations I got used to doing more, and started to develop confidence in my skills to the point where I can’t see myself working without a command line, not only to work with servers, but also on my desktop.&lt;/p&gt;

&lt;p&gt;It’s hard to illustrate in the abstract why learning to use a command line might be a good idea, so we will explore together a few advantages of the command line with real-world use cases, and I encourage you to perform each of them on your computer for comparison.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quickly try something, fail, and iterate
&lt;/h2&gt;

&lt;p&gt;Small trial-and-error hacking is one of the things I love doing most in a command line.&lt;/p&gt;

&lt;p&gt;A recent example: I found a government website that lists all general practitioners in my city, along with their status on accepting new patients. After digging a little, I discovered that the list is available in JSON format, on a public URL.&lt;/p&gt;

&lt;p&gt;Could I create an automated task that notifies me when a general practitioner becomes available, all from a Linux command line? Of course I can!&lt;/p&gt;

&lt;p&gt;First I need to get the JSON file, which can be done with &lt;code&gt;curl&lt;/code&gt;, then make the JSON data readable for exploration and filter it; both of these tasks can be done with &lt;code&gt;jq&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;student@waddle:~/&lt;span class="nv"&gt;$ &lt;/span&gt;curl https://health.gov/available-gp.json &lt;span class="se"&gt;\&lt;/span&gt;
| jq &lt;span class="s1"&gt;'.[] | select(.available == true)'&lt;/span&gt;

&lt;span class="o"&gt;{&lt;/span&gt;
&lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"Dr. John Doe"&lt;/span&gt;,
&lt;span class="s2"&gt;"available"&lt;/span&gt;: &lt;span class="nb"&gt;true&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Surely, finding the right combination of commands takes experience and time, and you could do the same thing with a few lines of Python or JavaScript. But for simple tasks like this, I find the terminal more efficient: no need to set up a development environment, create a project, install packages, or start a code editor; all I need is my terminal.&lt;/p&gt;

&lt;p&gt;Once you know how to properly navigate your terminal, you can tinker with &lt;code&gt;grep&lt;/code&gt;, &lt;code&gt;sed&lt;/code&gt;, and &lt;code&gt;awk&lt;/code&gt;, combined with piping and redirection. This will open up a whole new world of possibilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automating common tasks
&lt;/h2&gt;

&lt;p&gt;Now I would like to automate finding available doctors and be notified when a new availability is discovered. For this, I need to check if my list has any results, and if that's the case, trigger a desktop notification.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;student@waddle:~/&lt;span class="nv"&gt;$ &lt;/span&gt;curl https://health.gov/available-gp.json &lt;span class="se"&gt;\&lt;/span&gt;
| jq &lt;span class="s1"&gt;'.[] | select(.available == true).name'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
| xargs &lt;span class="nt"&gt;-I&lt;/span&gt;&lt;span class="o"&gt;{}&lt;/span&gt; notify-send &lt;span class="s2"&gt;"Found available general practitioner"&lt;/span&gt; &lt;span class="s2"&gt;"{}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There's some dark magic involved here for the beginner's eye. The &lt;code&gt;jq&lt;/code&gt; command will return every available general practitioner's name, and thanks to the pipe to &lt;code&gt;xargs&lt;/code&gt;, a &lt;code&gt;notify-send&lt;/code&gt; command will be executed for each returned line, which means no notification is sent if there are no results.&lt;/p&gt;

&lt;p&gt;I can now execute this command every hour with a cron task to fully automate the process; all it took was about 10 minutes.&lt;/p&gt;
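&lt;p&gt;As a sketch, assuming the pipeline above is saved in a hypothetical script at &lt;code&gt;~/bin/check-gp.sh&lt;/code&gt;, the crontab entry to run it at the start of every hour would look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;student@waddle:~/$ crontab -e

# minute hour day-of-month month day-of-week command
0 * * * * /home/student/bin/check-gp.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Note that &lt;code&gt;notify-send&lt;/code&gt; run from cron may need the &lt;code&gt;DISPLAY&lt;/code&gt; and &lt;code&gt;DBUS_SESSION_BUS_ADDRESS&lt;/code&gt; environment variables set to reach your desktop session.&lt;/p&gt;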

&lt;p&gt;Learning about cron and systemd is a great way to start playing with automation. You can, for instance, build your own backup system with a single &lt;code&gt;rsync&lt;/code&gt; command in a cron task.&lt;/p&gt;
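&lt;p&gt;A minimal sketch of such a backup, assuming a hypothetical backup drive mounted at &lt;code&gt;/mnt/backup&lt;/code&gt;: a daily cron entry that mirrors your documents with &lt;code&gt;rsync&lt;/code&gt; (&lt;code&gt;-a&lt;/code&gt; preserves file attributes, &lt;code&gt;--delete&lt;/code&gt; keeps the copy an exact mirror):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Every day at 02:00, mirror ~/Documents to the backup drive
0 2 * * * rsync -a --delete /home/student/Documents/ /mnt/backup/documents/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;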

&lt;h2&gt;
  
  
  Master your developer tools
&lt;/h2&gt;

&lt;p&gt;It doesn't matter if you're currently a frontend or backend developer, knowing how to use a command line will give you the most flexibility in terms of developer tools.&lt;/p&gt;

&lt;p&gt;Both need continuous integration and continuous delivery, which most probably run on a Linux server. Both need version control, package management, and testing tools that run in a command line.&lt;/p&gt;

&lt;p&gt;You can use a GUI for 90% of your use cases, and it's completely fine. But when you run into an uncommon error or want to perform a very particular task, you will have to go back to your terminal, simply because most GUIs only replicate a subset of the raw capabilities of command-line tools.&lt;/p&gt;

&lt;p&gt;A simple way to make yourself more comfortable using developer tools in the terminal is to replicate with a command line something you usually do in a GUI.&lt;/p&gt;

&lt;p&gt;For instance, I like to see my branches and commit history in GitKraken, as it makes it more visual and understandable to manage a Git repository.&lt;/p&gt;

&lt;p&gt;A quick Google search will show me that it's in fact possible to get a similar visual directly in a terminal with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;student@waddle:~/&lt;span class="nv"&gt;$ &lt;/span&gt;git log &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--pretty&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;format:&lt;span class="s1"&gt;'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)&amp;lt;%an&amp;gt;%Creset'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--graph&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--abbrev-commit&lt;/span&gt;

&lt;span class="k"&gt;*&lt;/span&gt; 1e274a9 - &lt;span class="o"&gt;(&lt;/span&gt;origin/main, origin/HEAD&lt;span class="o"&gt;)&lt;/span&gt; WIP server provisioning &lt;span class="o"&gt;(&lt;/span&gt;1 day ago&lt;span class="o"&gt;)&lt;/span&gt; &amp;lt;Nicolas&amp;gt;
|&lt;span class="se"&gt;\&lt;/span&gt;
| &lt;span class="k"&gt;*&lt;/span&gt; b62d50d - Base server driver abstraction &lt;span class="o"&gt;(&lt;/span&gt;2 days ago&lt;span class="o"&gt;)&lt;/span&gt; &amp;lt;Nicolas&amp;gt;
|/
&lt;span class="k"&gt;*&lt;/span&gt; e1fdf32 - Initial commit &lt;span class="o"&gt;(&lt;/span&gt;2 weeks ago&lt;span class="o"&gt;)&lt;/span&gt; &amp;lt;Nicolas&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's not convenient to type, or even remember, this command, which makes it the perfect candidate for creating a Git alias.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;student@waddle:~/&lt;span class="nv"&gt;$ &lt;/span&gt;git config &lt;span class="nt"&gt;--global&lt;/span&gt; alias.pretty &lt;span class="s2"&gt;"log --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)&amp;lt;%an&amp;gt;%Creset' --abbrev-commit"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now all I have to do is type &lt;code&gt;git pretty&lt;/code&gt; to get the expected result in a few milliseconds, and I can combine it with other tools like &lt;code&gt;grep&lt;/code&gt; to quickly search for a commit.&lt;/p&gt;
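&lt;p&gt;For instance, filtering the pretty log for a keyword (the search term here is only an illustration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;student@waddle:~/$ git pretty | grep -i provisioning

* 1e274a9 - (origin/main, origin/HEAD) WIP server provisioning (1 day ago) &amp;lt;Nicolas&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;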

&lt;h2&gt;
  
  
  Learn to do things once, apply everywhere
&lt;/h2&gt;

&lt;p&gt;As stated in the introduction, &lt;a href="https://pubs.opengroup.org/onlinepubs/9699919799/idx/utilities.html" rel="noopener noreferrer"&gt;most of the base commands&lt;/a&gt; you will use in a terminal are POSIX compliant, which means that you can use them on macOS, Windows with WSL2, and Linux desktops and servers.&lt;/p&gt;

&lt;p&gt;By developing the habit of working in the command line, you will also get used to working in a remote server environment, making the switch between desktop and server frictionless. Learning to perform system administration and DevOps tasks is much easier when you are comfortable with a terminal and a few base commands.&lt;/p&gt;

&lt;p&gt;Sure, the tasks performed on a server and on your desktop aren't the same, but the tools are common to both; in most cases, running &lt;code&gt;sed&lt;/code&gt; on macOS, Ubuntu, or a Debian server will produce a fairly similar result.&lt;/p&gt;
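&lt;p&gt;For example, this simple substitution behaves the same on any POSIX-compliant &lt;code&gt;sed&lt;/code&gt;; only non-standard extensions, like in-place editing with &lt;code&gt;-i&lt;/code&gt;, differ between the GNU and BSD implementations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;student@waddle:~/$ echo "http://example.com" | sed 's/http:/https:/'
https://example.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;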

&lt;h2&gt;
  
  
  Expand your stack
&lt;/h2&gt;

&lt;p&gt;Being at ease working in a command line, especially on a Linux operating system, is the perfect gateway to expand your knowledge.&lt;/p&gt;

&lt;p&gt;I personally started my career as a front end developer, learned backend along the way, and later system administration and DevOps to securely host and deploy my code in the cloud. This does not make me an expert on every topic, but knowing how each pillar works definitely makes me a better developer.&lt;/p&gt;

&lt;p&gt;Having at least a basic knowledge of how a Linux server works will make you a better backend developer, and understanding a bit of networking / caching / DevOps will make you a better front end developer.&lt;/p&gt;

&lt;p&gt;And as most actions on a Linux server are performed from a command line, it's the perfect starting point.&lt;/p&gt;

&lt;h2&gt;
  
  
  It's not all fun and games, at first
&lt;/h2&gt;

&lt;p&gt;As a beginner, doing tasks from a command line will be painful and slow. Give yourself enough time and practice to create the neural pathways that will let you “think” in command-line terms. After a few weeks of doing small tasks it will become as natural as breathing; you just need to force yourself a little to get there, and it will be worth it.&lt;/p&gt;




&lt;p&gt;If you want to learn more on this topic, consider trying &lt;a href="https://divinglinux.com" rel="noopener noreferrer"&gt;DivingLinux&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>beginners</category>
      <category>programming</category>
    </item>
    <item>
      <title>How to choose the best Linux distribution for your use case</title>
      <dc:creator>Nicolas Beauvais</dc:creator>
      <pubDate>Mon, 27 Jun 2022 08:00:02 +0000</pubDate>
      <link>https://dev.to/nicolasbeauvais/how-to-choose-the-best-linux-distribution-for-your-use-case-2ncc</link>
      <guid>https://dev.to/nicolasbeauvais/how-to-choose-the-best-linux-distribution-for-your-use-case-2ncc</guid>
      <description>&lt;p&gt;The Linux ecosystem is large, and a proof of this is the large number of Linux distributions you can choose from. However, this itself can be overwhelming for a user with no Linux knowledge. So today, I will help you answer the question, &lt;strong&gt;Which is the best Linux distribution for me?&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Read the original article on &lt;a href="https://divinglinux.com/blog/how-to-choose-the-best-linux-distribution-for-your-use-case" rel="noopener noreferrer"&gt;DivingLinux.com&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Many distributions but Linux being Linux
&lt;/h2&gt;

&lt;p&gt;We already learned in our &lt;a href="https://divinglinux.com/blog/a-practical-beginner-introduction-to-discover-the-world-of-linux" rel="noopener noreferrer"&gt;introduction to the world of Linux&lt;/a&gt; that Linux is in fact just the kernel piece of an operating system. Because that kernel is open source, anybody can use it to build a full operating system; these are called Linux distributions.&lt;/p&gt;

&lt;p&gt;Each one of them is different from the others, but they all share the Linux kernel. This means that even if they differ in policy, behavior, or interface, we are talking about very similar systems from a technical perspective.&lt;/p&gt;

&lt;p&gt;So, you can select between many distributions to suit your needs. Do you want a robust and stable system, even at the cost of new features? Don't worry, there is always one that meets those requirements. Or, on the contrary, do you need a very active distribution with recent versions of applications? There is also one with those features.&lt;/p&gt;

&lt;p&gt;In this way, it is very difficult not to find the ideal Linux distribution.&lt;/p&gt;

&lt;p&gt;But not everything is so easy and simple; there are some considerations that are always good to know before making the choice. These considerations can influence the way in which the distribution is used, or the purpose of its use.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ease of installation and use
&lt;/h2&gt;

&lt;p&gt;One of the first criteria for evaluating a distribution is its ease of installation and use.&lt;/p&gt;

&lt;p&gt;Long gone are the days when installing and using Linux was for geniuses. Now, most Linux distributions are easy to install and use.&lt;/p&gt;

&lt;p&gt;However, there are Linux distributions that place special emphasis on ease of use. These are &lt;strong&gt;oriented towards the novice Linux user&lt;/strong&gt; or those who don't want to complicate things.&lt;/p&gt;

&lt;p&gt;In this range we find &lt;a href="https://ubuntu.com/" rel="noopener noreferrer"&gt;Ubuntu&lt;/a&gt;, &lt;a href="https://linuxmint.com/" rel="noopener noreferrer"&gt;Linux Mint&lt;/a&gt; and &lt;a href="https://elementary.io/" rel="noopener noreferrer"&gt;Elementary OS&lt;/a&gt;. &lt;strong&gt;They are stable distributions with an easy-to-use installer&lt;/strong&gt; and, once installed, they have everything you need to get started without any problems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Source code and license of the distribution
&lt;/h2&gt;

&lt;p&gt;One of the most exciting things about Linux distributions is that the Linux kernel uses the &lt;a href="https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html" rel="noopener noreferrer"&gt;GPL v2 license&lt;/a&gt;, which forces all Linux distributions to make their source code available.&lt;/p&gt;

&lt;p&gt;Open source doesn't mean free: for instance, &lt;a href="https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux" rel="noopener noreferrer"&gt;Red Hat Enterprise Linux&lt;/a&gt;, a distribution focused on professional users, has a paid license. You can still access the source code, compile it yourself, and use it on your home PC. Of course restrictions apply: they own the trademark and copyrights, so you can't redistribute it as-is. This model works because they provide enterprise-level support and other services with a lot of value for big companies.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;For personal usage, there's little chance that you require a commercial distribution.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Hardware is also important for choice
&lt;/h2&gt;

&lt;p&gt;Computers and technology keep advancing, but not everyone has the opportunity to change hardware frequently. That is why there are &lt;strong&gt;Linux distributions that focus on staying lightweight so that they can be used on older hardware&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;As a general rule, these distributions &lt;strong&gt;don't have a polished and refined graphical interface&lt;/strong&gt; as you might expect these days. However, they are very &lt;strong&gt;efficient&lt;/strong&gt; and can revive old forgotten computers, or devices like Raspberry Pi.&lt;/p&gt;

&lt;p&gt;Some of these distributions are: &lt;a href="https://mxlinux.org/" rel="noopener noreferrer"&gt;MX Linux&lt;/a&gt;, &lt;a href="https://sparkylinux.org/" rel="noopener noreferrer"&gt;Sparky Linux&lt;/a&gt;, &lt;a href="https://lubuntu.me/" rel="noopener noreferrer"&gt;Lubuntu,&lt;/a&gt; and &lt;a href="https://antixlinux.com/" rel="noopener noreferrer"&gt;AntiX&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Active development guarantees support
&lt;/h2&gt;

&lt;p&gt;Most distributions are free to use and provide no guarantee of continuity in their development, sometimes leaving users who have put their trust in them abandoned. So, it is important to choose a distribution that has not only active development, but a consolidated and mature one.&lt;/p&gt;

&lt;p&gt;In this sense, distributions supported by a serious and responsible company can be a viable solution. Among these, &lt;a href="https://ubuntu.com/" rel="noopener noreferrer"&gt;Ubuntu&lt;/a&gt;, &lt;a href="https://getfedora.org/" rel="noopener noreferrer"&gt;Fedora&lt;/a&gt;, and &lt;a href="https://www.opensuse.org/" rel="noopener noreferrer"&gt;openSUSE&lt;/a&gt; stand out.&lt;/p&gt;

&lt;p&gt;There are also distributions backed by a strong, consolidated, and mature community. In this category we find the very important &lt;a href="https://debian.org/" rel="noopener noreferrer"&gt;Debian&lt;/a&gt;, &lt;a href="https://archlinux.org/" rel="noopener noreferrer"&gt;Arch Linux&lt;/a&gt;, and &lt;a href="https://linuxmint.com/" rel="noopener noreferrer"&gt;Linux Mint&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Active development guarantees that the distribution will not be suddenly abandoned, which gives you confidence when using it. It also means new features will be added, bugs will be fixed, and packages will remain widely available.&lt;/p&gt;

&lt;h2&gt;
  
  
  Community support or commercial support?
&lt;/h2&gt;

&lt;p&gt;In this segment, it depends on what you are looking for in a Linux distribution. Some offer the option to purchase a license that includes technical support; others do not.&lt;/p&gt;

&lt;p&gt;Often the distributions with commercial support are aimed at &lt;strong&gt;companies that buy licenses for their workstations&lt;/strong&gt;. This is understandable, since stability is fundamental for enterprises. The same distributions are often chosen for production servers, which makes support even more important.&lt;/p&gt;

&lt;p&gt;On the other hand, for home use, it is best to pick a serious distribution with community support.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stability or novelty?
&lt;/h2&gt;

&lt;p&gt;In general, Linux distributions are stable and reliable. Some make stability their core premise and only ship new package versions once every few years.&lt;/p&gt;

&lt;p&gt;On the other hand, there are &lt;strong&gt;rolling-release distributions that have no fixed release cycle&lt;/strong&gt; and are updated continuously. Although generally stable, they are not as reliable as fixed-release distributions; what matters here is having the latest software.&lt;/p&gt;

&lt;p&gt;So in this aspect, the decision comes down to your needs: if your computer or server breaks during a rolling-release update, you will need the skills and time to fix it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Privacy and security taken to another level
&lt;/h2&gt;

&lt;p&gt;There are also Linux distributions that put a special emphasis on security and data privacy.&lt;/p&gt;

&lt;p&gt;For example, &lt;strong&gt;there are distributions specially designed to erase all your tracks and provide absolute anonymity&lt;/strong&gt;. This matters in countries where there is no freedom of expression or where users face real danger. They are often used by criminals, whistleblowers, or activists to stay protected from government surveillance. One well-known privacy distribution is &lt;a href="https://tails.boum.org/" rel="noopener noreferrer"&gt;Tails&lt;/a&gt;, which is configured to only use the &lt;a href="https://www.torproject.org/" rel="noopener noreferrer"&gt;Tor&lt;/a&gt; network.&lt;/p&gt;

&lt;p&gt;Most Linux distributions ship with good security defaults, but there are also &lt;strong&gt;distributions specialized in defensive security&lt;/strong&gt;. These are great for enterprise users in sectors that need a high level of security on desktops or production servers. &lt;a href="https://www.qubes-os.org/" rel="noopener noreferrer"&gt;Qubes OS&lt;/a&gt; and &lt;a href="https://silverblue.fedoraproject.org/" rel="noopener noreferrer"&gt;Fedora Silverblue&lt;/a&gt; are known for putting an emphasis on security.&lt;/p&gt;

&lt;p&gt;Finally, some distributions focus on offensive security and come packaged with tools for penetration testing, ethical hacking, and auditing. The most famous one is &lt;a href="https://www.kali.org/" rel="noopener noreferrer"&gt;Kali Linux&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Software components
&lt;/h2&gt;

&lt;p&gt;The Linux kernel alone is not enough to make a good operating system, so a distribution ships many other pieces of software by default. For instance, some experienced users hate working with systemd, a popular init system and service manager used by many distributions; others will choose a distribution based on the display server protocol it provides, X or Wayland.&lt;/p&gt;

&lt;p&gt;For a beginner, these requirements should not be much of an issue, unless you need to work with niche software that requires specific components to be available in your distribution.&lt;/p&gt;
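&lt;p&gt;If you are curious which of these components your current system uses, two standard commands give a quick answer. This is just a small sketch using common tools, nothing distribution-specific:&lt;/p&gt;

```shell
# Show the name of the init system running as PID 1 (e.g. "systemd")
ps -p 1 -o comm=

# Show whether the graphical session uses X11 or Wayland
# (the variable is unset in a pure console session)
echo "${XDG_SESSION_TYPE:-unknown}"
```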

&lt;h2&gt;
  
  
  Which Linux distribution should I choose?
&lt;/h2&gt;

&lt;p&gt;As you may have noticed, there are many Linux distributions to choose from, and the answer depends on your needs and wants. Here are some Linux distributions that are recommended for beginners:&lt;/p&gt;

&lt;h3&gt;
  
  
  Ubuntu
&lt;/h3&gt;

&lt;p&gt;Ubuntu was born with the motto "Linux for human beings", and its main selling point is still that it is simple yet modern, as well as stable and supported for several years.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frvrdsxp4umnm9cjw56b9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frvrdsxp4umnm9cjw56b9.jpg" alt="Ubuntu Desktop" width="615" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Aesthetically, it is impeccable thanks to the GNOME desktop environment. It has excellent documentation, and it is also the distribution with the biggest online community.&lt;/p&gt;

&lt;p&gt;One point against it is that Canonical, the company behind Ubuntu, pushes the use of &lt;code&gt;snap&lt;/code&gt; packages, which are heavier and a bit slower than native packages. This push has led to Firefox being shipped in this format.&lt;/p&gt;

&lt;p&gt;Pros:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Easy to use.&lt;/li&gt;
&lt;li&gt;  Stable, thanks to being based on Debian.&lt;/li&gt;
&lt;li&gt;  Good community support.&lt;/li&gt;
&lt;li&gt;  Possibility to purchase support.&lt;/li&gt;
&lt;li&gt;  Large number of packages.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Not the most resource efficient.&lt;/li&gt;
&lt;li&gt;  Requires modern hardware.&lt;/li&gt;
&lt;li&gt;  Canonical's snap packages are heavy.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Linux Mint: From Freedom came elegance
&lt;/h3&gt;

&lt;p&gt;Linux Mint is based on Ubuntu, so it shares essential packages and components such as the kernel and the graphics stack. So what is the difference? Its desktop environment and the Mint apps.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1y28xhws5m2ywwte1zi.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1y28xhws5m2ywwte1zi.jpg" alt="Linux Mint Cinnamon Edition" width="615" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The desktop is not GNOME but a fork of it called Cinnamon, which has a more traditional look but is built with modern technologies.&lt;/p&gt;

&lt;p&gt;Mint apps are applications created by the distribution's development team that give the distribution a simpler and more consistent feel. They include a text editor, a media player, an update manager, and so on.&lt;/p&gt;

&lt;p&gt;Pros:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Very stable.&lt;/li&gt;
&lt;li&gt;  Secure updates through the graphical interface.&lt;/li&gt;
&lt;li&gt;  Better memory consumption.&lt;/li&gt;
&lt;li&gt;  Traditional but customizable interface.&lt;/li&gt;
&lt;li&gt;  Great community support due to being based on Ubuntu.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Conservative desktop, slow to innovate.&lt;/li&gt;
&lt;li&gt;  Slow development cycle.&lt;/li&gt;
&lt;li&gt;  Includes many packages by default.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Debian
&lt;/h3&gt;

&lt;p&gt;Debian is the base distribution used to build Ubuntu, Linux Mint, and many others. Thanks to a very conservative development cycle, Debian has acquired the reputation of being one of the most stable distributions on the market. But this stability comes at the cost of packages that can sometimes be one or two years out of date.&lt;/p&gt;

&lt;p&gt;Another aspect of Debian is that it is widely used both on servers and on the desktop. Moreover, although GNOME is the default desktop environment, you will be able to choose between a few others like Plasma or XFCE.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjgq3blgw4tl5t2f6pffw.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjgq3blgw4tl5t2f6pffw.jpg" alt="Debian with Gnome" width="615" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Debian is my personal distribution of choice when I need to set up a Linux server, and it's also the one used for the student servers of the &lt;a href="https://dev.to/"&gt;Diving Linux command-line course&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Pros:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  It is very stable&lt;/li&gt;
&lt;li&gt;  Excellent community support&lt;/li&gt;
&lt;li&gt;  Its package repositories are huge.&lt;/li&gt;
&lt;li&gt;  Great base to make it your own&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Somewhat outdated packages&lt;/li&gt;
&lt;li&gt;  Development branches not so clear to the user&lt;/li&gt;
&lt;li&gt;  A bit less friendly for newcomers&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Arch Linux
&lt;/h3&gt;

&lt;p&gt;Not all users can manage a complete and functional installation of Arch Linux. The reason: the installation is modular, and you have to work with many configuration files. This makes it a distribution aimed at those who already know Linux, or who want to learn it in depth.&lt;/p&gt;

&lt;p&gt;Arch Linux is a rolling release distribution that is constantly updated, bringing the most up-to-date packages. This is a great distribution to learn the many components of a Linux distribution, as you will have to work with many of them to get it to work as you want. By default, an Arch Linux installation is kind of raw, which makes it a good starting point to create something truly customized to your taste and needs.&lt;/p&gt;

&lt;p&gt;Pros:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Latest updated packages&lt;/li&gt;
&lt;li&gt;  Large package repository&lt;/li&gt;
&lt;li&gt;  Good community and the best Wiki of all distributions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Being a Rolling Release, Arch is not as stable as others&lt;/li&gt;
&lt;li&gt;  Difficult to install and use for the novice user&lt;/li&gt;
&lt;li&gt;  Strictly community support&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Kali Linux
&lt;/h3&gt;

&lt;p&gt;Kali Linux is a security distribution derived from Debian, so it inherits Debian's large number of available tools and some of its support. It is a particular distribution because it is designed specifically for computer security, computer forensics, and advanced penetration testing. I would not recommend it to beginners, unless you need it to learn and practice penetration testing, security research, computer forensics, or reverse engineering.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqmmeijvvj0i0124njsc7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqmmeijvvj0i0124njsc7.jpg" alt="Kali Linux" width="615" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pros:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Comes with specialized professional tools.&lt;/li&gt;
&lt;li&gt;  Unique in its style.&lt;/li&gt;
&lt;li&gt;  Good support&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Installation different from the rest.&lt;/li&gt;
&lt;li&gt;  Don't use it if you're not aiming to be a security professional.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Fedora
&lt;/h3&gt;

&lt;p&gt;Fedora is maintained by Red Hat and has a reputation for being innovative, because it is where new packages and technologies are tested first. By default, Fedora comes with the latest GNOME desktop and SELinux enabled, making it a stable, secure, and beautiful distribution, well suited for professional use.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0f9ynd33alfwt63x7qrh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0f9ynd33alfwt63x7qrh.jpg" alt="Fedora Workstation" width="615" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fedora is emerging as a solid alternative to Debian-based distributions. It has also given birth to other projects like Fedora Server, Fedora IoT, and Fedora Silverblue. I've personally used Fedora on my laptop for the past year and been happy with it.&lt;/p&gt;

&lt;p&gt;Pros:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Great combination of stability and package versions.&lt;/li&gt;
&lt;li&gt;  Abundant documentation and support.&lt;/li&gt;
&lt;li&gt;  Backed by Red Hat.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Fewer packages than Debian-based distributions&lt;/li&gt;
&lt;li&gt;  SELinux being activated by default can complicate things for beginners&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  MX Linux
&lt;/h3&gt;

&lt;p&gt;MX Linux is a lightweight Debian-based distribution that aims to provide users with an operating system that is well cared for in every way. Thanks to MX Linux, you will be able to dust off old computers and give them an updated system without sacrificing features.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fatalsuoof13gopbwn9ap.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fatalsuoof13gopbwn9ap.jpg" alt="Linux MX XFCE" width="615" height="309"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of its main features is speed: it is very agile, with a polished desktop. Of course, being focused on older computers, you may miss some newer features.&lt;/p&gt;

&lt;p&gt;Pros:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Based on Debian Stable.&lt;/li&gt;
&lt;li&gt;  Active development.&lt;/li&gt;
&lt;li&gt;  Friendly community.&lt;/li&gt;
&lt;li&gt;  Low resource consumption.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  So thin that it may lack features.&lt;/li&gt;
&lt;li&gt;  Uses lightweight desktop environments and lacks certain commodities.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Lubuntu
&lt;/h3&gt;

&lt;p&gt;Lubuntu is another lightweight distribution. It ships a desktop environment called LXQt, which combines simplicity with the power of the Qt toolkit. The result is a complete and aesthetically pleasing distribution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Friy3te1omkj8tz2jm25h.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Friy3te1omkj8tz2jm25h.jpg" alt="Lubuntu" width="615" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pros:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Based on Ubuntu&lt;/li&gt;
&lt;li&gt;  Large number of programs available&lt;/li&gt;
&lt;li&gt;  Good number of applications installed by default&lt;/li&gt;
&lt;li&gt;  Easy to install&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Doubts about its development&lt;/li&gt;
&lt;li&gt;  As in MX Linux, it is lightweight and may be insufficient.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;There are many Linux distributions and there are many criteria to evaluate before making a final decision. Things like ease of use, support, and philosophy are all factors in the decision you will have to make.&lt;/p&gt;

&lt;p&gt;So, which Linux distribution should you choose? The answer depends on your needs, but in general, if you have modern hardware, Ubuntu, Linux Mint, and Fedora are all great choices. If stability is a primary factor, then Debian might be the optimal option.&lt;/p&gt;

&lt;p&gt;In any case, you will have a robust system based on the great Linux kernel.&lt;/p&gt;




&lt;p&gt;If you want to learn more on this topic, consider trying &lt;a href="https://divinglinux.com" rel="noopener noreferrer"&gt;DivingLinux&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You will learn to use Linux from the command line with confidence by doing interactive hands-on exercises, and build a strong foundation in monitoring, networking, and system administration.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>beginners</category>
      <category>opensource</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Slack workspaces in the browser</title>
      <dc:creator>Nicolas Beauvais</dc:creator>
      <pubDate>Tue, 04 Jan 2022 07:25:52 +0000</pubDate>
      <link>https://dev.to/nicolasbeauvais/slack-workspaces-in-the-browser-5470</link>
      <guid>https://dev.to/nicolasbeauvais/slack-workspaces-in-the-browser-5470</guid>
      <description>&lt;p&gt;I always try to avoid using Electron apps when I can, more so when I need them running all day like Slack. It doesn't make much sense to me, it forces my computer to run another WebKit engine just for Slack when it already works perfectly in a browser.&lt;/p&gt;

&lt;p&gt;But Slack does not offer a way to get notifications for all workspaces in a single tab, and keeping 3-5 tabs open just for it is annoying, which makes the Electron app practically mandatory. &lt;/p&gt;

&lt;p&gt;If you were stuck like me in this situation, I have exactly what you need: &lt;strong&gt;a way to activate Slack's workspace switcher in your web browser&lt;/strong&gt;!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F10pojwrbtzcsx4eq7sv8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F10pojwrbtzcsx4eq7sv8.jpg" alt="Slack's Workspaces switcher in the browser" width="574" height="712"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All you need to make this work is to change your favourite browser's user agent to make Slack think that you are using a Chromebook, as detailed in &lt;a href="https://webapps.stackexchange.com/questions/144258/slacks-web-version-shows-workspace-switching-sidebar-but-only-on-chromebooks" rel="noopener noreferrer"&gt;this Stack Exchange thread&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can use the great Tampermonkey extension, or a similar one, to easily change your user agent on the Slack website. If you do not know this extension, it allows you to run a user-defined script on a particular website, among other things outside this article's scope.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://chrome.google.com/webstore/detail/tampermonkey/dhdgffkkebhmkfjojejmpbldmpobfkfo?hl=en" rel="noopener noreferrer"&gt;Tampermonkey for Chrome&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://addons.mozilla.org/en-US/firefox/addon/tampermonkey/" rel="noopener noreferrer"&gt;Tampermonkey for Firefox&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once installed, you need to create the following script that will change your user agent, and make the Slack website think that you are running the latest Google Chrome version on a Chrome OS laptop:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ==UserScript==&lt;/span&gt;
&lt;span class="c1"&gt;// @name        Enable Slack workspaces in the browser&lt;/span&gt;
&lt;span class="c1"&gt;// @namespace   slack.com&lt;/span&gt;
&lt;span class="c1"&gt;// @version     https://dev.to/nicolasbeauvais&lt;/span&gt;
&lt;span class="c1"&gt;// @description Enable Slack workspaces in the browser&lt;/span&gt;
&lt;span class="c1"&gt;// @match       https://app.slack.com/*&lt;/span&gt;
&lt;span class="c1"&gt;// @match       https://app.slack.com/&lt;/span&gt;
&lt;span class="c1"&gt;// @grant       none&lt;/span&gt;
&lt;span class="c1"&gt;// @run-at      document-start&lt;/span&gt;
&lt;span class="c1"&gt;// ==/UserScript==&lt;/span&gt;

&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;function &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;use strict&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nb"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;defineProperty&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;navigator&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;userAgent&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Mozilla/5.0 (X11; CrOS x86_64 10066.0.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;})();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you never used Tampermonkey before, you can find many online tutorials that will show you how to add a script. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Be careful with the scripts that you add to Tampermonkey, you should never add code that you do not fully understand.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And that's it, you can now open all your Slack workspaces in a single tab, and receive all notifications, just like with the Electron app.&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>webdev</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>A practical beginner introduction to discover the world of Linux</title>
      <dc:creator>Nicolas Beauvais</dc:creator>
      <pubDate>Mon, 13 Dec 2021 16:55:12 +0000</pubDate>
      <link>https://dev.to/nicolasbeauvais/an-ultimate-introduction-to-the-linux-world-1odn</link>
      <guid>https://dev.to/nicolasbeauvais/an-ultimate-introduction-to-the-linux-world-1odn</guid>
      <description>&lt;p&gt;&lt;strong&gt;Linux&lt;/strong&gt;, you already heard of it, maybe even used it once or twice for work, and certainly a thousand of times without knowing. It has a different aura, something special that you do not feel when thinking about Apple's macOS or Microsoft's Windows, it's loved by geeks, power user, misfits, scientists, engineers...&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Read the original article on &lt;a href="https://divinglinux.com/blog/a-practical-beginner-introduction-to-discover-the-world-of-linux" rel="noopener noreferrer"&gt;DivingLinux.com&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It's not advertised on TV or online, and you most likely won't see any computer sold with Linux in a shop, yet Linux and its derivatives run:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;100% of the top 500 supercomputers&lt;/li&gt;
&lt;li&gt;96% of the internet&lt;/li&gt;
&lt;li&gt;85% of all smartphones&lt;/li&gt;
&lt;li&gt;The International Space Station&lt;/li&gt;
&lt;li&gt;NASA's Perseverance rover on Mars&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When it comes to desktop use, Linux sits at 2% [6] of market share, or 5% if we count ChromeOS, which is based on Linux, way behind macOS and Windows. Going through all the reasons behind this would make this article too long and boring [7], so we will stick to one key point: Linux is hard for newcomers.&lt;/p&gt;

&lt;p&gt;You need to learn a new vocabulary, find new online resources, and learn new ways of doing things. Even if the reward is worth it in the end, this can discourage even the most tech-savvy users. To make it easier for you, I've compiled the questions I get the most from my students when they begin to learn Linux, and answered them with brief overviews to make them easy to digest.&lt;/p&gt;

&lt;p&gt;Questions covered in this article:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What exactly is Linux?&lt;/li&gt;
&lt;li&gt;What is the difference between Linux, macOS and Windows?&lt;/li&gt;
&lt;li&gt;Are UNIX, and Linux the same thing?&lt;/li&gt;
&lt;li&gt;What is GNU?&lt;/li&gt;
&lt;li&gt;Is macOS based on Linux?&lt;/li&gt;
&lt;li&gt;What is a Linux distribution?&lt;/li&gt;
&lt;li&gt;How can I try Linux as a beginner?&lt;/li&gt;
&lt;li&gt;Tl;dr&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What exactly is Linux?
&lt;/h2&gt;

&lt;p&gt;You probably know that Linux is an operating system, but almost nobody uses it directly as is. When we refer to Linux, we most often refer to any operating system that is based on the Linux Kernel.&lt;/p&gt;

&lt;p&gt;The Linux kernel is a secure and stable base program that handles all the things you typically don't think about when using a computer: communicating with the processor, managing memory, connecting to the internet, receiving the keys you press on your keyboard, and displaying them on your screen.&lt;/p&gt;

&lt;p&gt;This base kernel is then completed by other programs to make it operable (see GNU), to perform specific tasks, or to make it easy for non-technical users, like Android on mobile or Ubuntu on desktop.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the difference between Linux, macOS and Windows?
&lt;/h2&gt;

&lt;p&gt;Contrary to macOS and Windows, the Linux kernel is free and open source, meaning that anybody can see the code [8], improve it for everyone, or modify it for their particular use case. It also means that you can install and use it for free on any device, but if something goes wrong, there is no hotline to complain to; you're on your own.&lt;/p&gt;

&lt;p&gt;As it is not locked down by any vendor, you can do anything you want with a Linux operating system: customize its appearance or behavior, or even make it completely unusable, with an ease that you won't find on Microsoft and Apple operating systems. This is why Linux is loved by power users, and Windows / macOS by companies that want to keep maximum control over their fleet of devices.&lt;/p&gt;

&lt;p&gt;Linux and macOS are both based on a UNIX-like architecture, so there are a lot of similarities between the two operating systems, and most of the command-line knowledge you learn on one can be applied to the other. Windows, on the other hand, uses its own architecture, although since Windows 10 the Windows Subsystem for Linux (WSL) makes it possible to use a virtualized Linux kernel and run Linux binaries directly in Windows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Are UNIX, and Linux the same thing?
&lt;/h2&gt;

&lt;p&gt;They're not: UNIX was created about 20 years before Linux. UNIX became popular with businesses and academics in the 70s, but its creators used a licensing scheme forbidding modification of the operating system. The GNU project then started with the goal of creating a UNIX-compatible operating system that was open source and free to use.&lt;/p&gt;

&lt;p&gt;Unfortunately, the GNU kernel was not complete and progress was slow. Then Linux was released and became the go-to kernel for the GNU suite, creating the "GNU/Linux" operating system that we know today.&lt;/p&gt;

&lt;p&gt;The UNIX system gave birth to many other operating systems, called UNIX-like [9]: they do not share the same code, but they are mostly compatible and follow a similar architecture. Linux, macOS, FreeBSD, and OpenBSD are just a few of them.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is GNU?
&lt;/h2&gt;

&lt;p&gt;If you look up anything related to Linux online, you will quickly come across these three letters, "GNU", sometimes in the form "GNU/Linux". But what is it?&lt;/p&gt;

&lt;p&gt;GNU is a collection of software written by the GNU project, the most famous pieces being Bash, GIMP, and GRUB, to name only a few. Started about 10 years before Linux, the project played a crucial role in the history of open source and free software, and remains highly important to this day; it is also at the origin of the General Public License (GPL) [10].&lt;/p&gt;

&lt;p&gt;The GNU software collection offers a base operating system when coupled with a kernel (Linux). This is why you may come across the name "GNU/Linux": most operating systems that use Linux as a kernel also use part of the GNU software collection. There's been some conflict over this particular naming, so let's not expand on that, but I advise you to do your own research on the subject [11].&lt;/p&gt;

&lt;h3&gt;
  
  
  GNU coreutils
&lt;/h3&gt;

&lt;p&gt;A big part of the GNU software collection, less known by its name, is coreutils. It is a set of tools that help you use your operating system: most base commands on a Linux system, like ls, cp, cat, echo, and a few dozen more, are part of GNU coreutils.&lt;/p&gt;
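&lt;p&gt;To get a feel for coreutils, here is a minimal terminal session using a few of them together; the file and directory names are just examples:&lt;/p&gt;

```shell
# A few GNU coreutils in action: create, copy, and read a file
mkdir -p /tmp/coreutils-demo          # mkdir creates a directory
echo "hello from coreutils" > /tmp/coreutils-demo/greeting.txt
cp /tmp/coreutils-demo/greeting.txt /tmp/coreutils-demo/copy.txt
cat /tmp/coreutils-demo/copy.txt      # prints: hello from coreutils
ls /tmp/coreutils-demo                # lists copy.txt and greeting.txt
```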

&lt;h2&gt;
  
  
  Is macOS based on Linux?
&lt;/h2&gt;

&lt;p&gt;As we've seen previously, macOS uses a UNIX-like architecture with its own kernel, called XNU [12]. So macOS is definitely not based on the Linux kernel, but the two do share a lot of architectural concepts and standards. Both operating systems are POSIX [11] compliant, making it easy for users to move from one to the other, with shared base commands implemented in a similar way.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Linux distribution?
&lt;/h2&gt;

&lt;p&gt;Installing the raw Linux kernel on your computer is like driving a car with nothing but the engine. To do tasks on your computer you need many tools: something to handle Wi-Fi and Bluetooth, a file explorer, a web browser, and so on.&lt;/p&gt;

&lt;p&gt;Just like Windows and macOS come prepackaged with everything you need for general use, Linux distributions, commonly called "distros", are fully operational operating systems. Each distro has its own particularities and target audience.&lt;/p&gt;

&lt;p&gt;Here is a brief overview of some well-known Linux distributions:&lt;/p&gt;

&lt;h3&gt;
  
  
  General purpose
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Ubuntu&lt;/li&gt;
&lt;li&gt;Debian&lt;/li&gt;
&lt;li&gt;Fedora&lt;/li&gt;
&lt;li&gt;Manjaro&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Advanced use
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Arch Linux&lt;/li&gt;
&lt;li&gt;Gentoo&lt;/li&gt;
&lt;li&gt;Void Linux&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cyber security and penetration testing
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Kali Linux&lt;/li&gt;
&lt;li&gt;BackBox&lt;/li&gt;
&lt;li&gt;Parrot OS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can find information on the wide range of Linux distributions on the distrowatch.com website.&lt;/p&gt;

&lt;h2&gt;
  
  
  How can I try Linux as a beginner?
&lt;/h2&gt;

&lt;p&gt;There are already a fair number of online tutorials with step-by-step instructions for each of these techniques, so I will just give you an overview of what you can do to test a Linux distribution at home.&lt;/p&gt;

&lt;h3&gt;
  
  
  Virtual machine
&lt;/h3&gt;

&lt;p&gt;The easiest way to try a Linux distribution is in a virtual machine; for this you can install VirtualBox on Windows or macOS. Choose the Linux distribution you would like to try and download it as an ISO file; you should find this option on most well-known Linux distributions' websites. Now boot that ISO file in VirtualBox, and tada!&lt;/p&gt;

&lt;p&gt;A virtual machine recreates a "fake" computer on top of your running one, which can be slow if you do not have a modern computer, but it is the easiest and safest way to learn and try new Linux distributions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Live USB
&lt;/h3&gt;

&lt;p&gt;A live USB is a way to install an operating system, in our case a Linux distribution, on a USB stick. This allows you to boot your computer from the operating system installed on the live USB stick instead of the storage device where your computer's usual operating system is installed.&lt;/p&gt;

&lt;p&gt;All you need for this method is a USB stick, the ISO file of the Linux distribution that you would like to try, and a tool to move the ISO file to the USB stick and make it bootable. You can try BalenaEtcher, which works on Windows and macOS.&lt;/p&gt;
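If you prefer the command line, the &lt;code&gt;dd&lt;/code&gt; command (itself part of coreutils) is a common alternative to BalenaEtcher. The sketch below copies between ordinary files (with made-up names) so it is safe to run anywhere; in real use you would point &lt;code&gt;of=&lt;/code&gt; at your USB device instead, which erases everything on the stick, so double-check the device name first:

```shell
# Create a small dummy "ISO" for demonstration (a real one would be downloaded)
dd if=/dev/zero of=demo.iso bs=1M count=4

# Copy it block by block, exactly as you would write a real ISO to a
# USB stick (where of= would be the USB device, e.g. /dev/sdX)
dd if=demo.iso of=demo-copy.img bs=4M conv=fsync status=progress

# Verify that the copy is identical to the source
cmp demo.iso demo-copy.img && echo "copy OK"
# prints: copy OK
```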

&lt;h3&gt;
  
  
  Dual boot
&lt;/h3&gt;

&lt;p&gt;The dual-boot technique is probably the most complicated one, and you can damage your computer if you're not careful. Dual booting consists of installing a full Linux distribution on your computer alongside your main operating system (Windows or macOS). This method is probably the best if you are serious about learning to use a Linux operating system: you will be able to leverage the full hardware power of your computer, and still be able to go back to Windows or macOS when necessary.&lt;/p&gt;

&lt;p&gt;Creating a dual-boot computer can be challenging for beginners: make sure to back up all important files, and do not hesitate to get help if you do not fully understand a step of the process.&lt;/p&gt;

&lt;h3&gt;
  
  
  Other computer
&lt;/h3&gt;

&lt;p&gt;One last option is to install Linux on a computer you are not using; you could also buy a cheap second-hand laptop to experiment with. This way you do not risk damaging your main computer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tl;dr
&lt;/h2&gt;

&lt;p&gt;Linux is a kernel, a piece of software that allows other programs to communicate with your computer's hardware; it is inspired by UNIX and POSIX compliant.&lt;br&gt;
Combine the Linux kernel with the GNU utilities, a set of basic programs, and you have a basic but fully working operating system, which is referred to as "GNU/Linux".&lt;/p&gt;

&lt;p&gt;macOS is based on UNIX too, which is why there are similarities between macOS and Linux-based operating systems.&lt;br&gt;
Distros, or distributions, are variants of the "GNU/Linux" operating system, with carefully chosen default settings and preinstalled programs that make it easier to use.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I hope this article cleared things up for you. I did not include many names or dates, to avoid complicating things and to keep the explanations as concise as possible. Don't hesitate to look for more information on Wikipedia or the other sources I linked; the history of Linux and of the people who created its ecosystem is fascinating.&lt;/p&gt;

&lt;p&gt;Sources:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://www.top500.org/lists/top500/2021/06/" rel="noopener noreferrer"&gt;https://www.top500.org/lists/top500/2021/06/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://web.archive.org/web/20150806093859/http://www.w3cook.com/os/summary/" rel="noopener noreferrer"&gt;https://web.archive.org/web/20150806093859/http://www.w3cook.com/os/summary/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://haydenjames.io/85-of-all-smartphones-are-powered-by-linux/" rel="noopener noreferrer"&gt;https://haydenjames.io/85-of-all-smartphones-are-powered-by-linux/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.fsf.org/blogs/community/gnu-linux-chosen-as-operating-system-of-the-international-space-station" rel="noopener noreferrer"&gt;https://www.fsf.org/blogs/community/gnu-linux-chosen-as-operating-system-of-the-international-space-station&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://fossbytes.com/perseverance-rover-linux-os/" rel="noopener noreferrer"&gt;https://fossbytes.com/perseverance-rover-linux-os/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Usage_share_of_operating_systems" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/Usage_share_of_operating_systems&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Criticism_of_desktop_Linux" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/Criticism_of_desktop_Linux&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/torvalds/linux" rel="noopener noreferrer"&gt;https://github.com/torvalds/linux&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Unix-like" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/Unix-like&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/GNU_General_Public_License" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/GNU_General_Public_License&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/GNU/Linux_naming_controversy" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/GNU/Linux_naming_controversy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/XNU" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/XNU&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/POSIX" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/POSIX&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;If you want to learn more on this topic, consider trying &lt;a href="https://divinglinux.com" rel="noopener noreferrer"&gt;DivingLinux&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You will learn to use Linux from the command line with confidence by doing interactive hands-on exercises, and build a strong foundation in monitoring, networking, and system administration.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>productivity</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Silently validating a Laravel request</title>
      <dc:creator>Nicolas Beauvais</dc:creator>
      <pubDate>Mon, 29 Nov 2021 18:26:33 +0000</pubDate>
      <link>https://dev.to/nicolasbeauvais/silently-validating-a-laravel-request-j1i</link>
      <guid>https://dev.to/nicolasbeauvais/silently-validating-a-laravel-request-j1i</guid>
      <description>&lt;p&gt;While working on the Content Security Policy implementation of &lt;a href="https://phare.app" rel="noopener noreferrer"&gt;Phare&lt;/a&gt;, I had to implement a public endpoint to receive violation report from web browsers. The issue being that this endpoint URL can receive data from anyone that throw a request to it, and in slightly different  format depending on the browser.&lt;/p&gt;

&lt;p&gt;As the input cannot be trusted, some form of validation is mandatory, and the Laravel validator is perfect for this. Since the validation can be quite complicated, using a &lt;a href="https://laravel.com/docs/master/validation#form-request-validation" rel="noopener noreferrer"&gt;Form Request&lt;/a&gt; seemed the most appropriate.&lt;/p&gt;

&lt;p&gt;This is where things can get annoying: if a browser with an old Content Security Policy implementation sends a payload that I do not wish to support in my API, the Form Request will send a response with a 422 status code, which will create a console error in the browser. And if a malicious script kiddie wants to troll the endpoint with a bogus payload, I do not want the API response to detail exactly how to correct it.&lt;/p&gt;

&lt;p&gt;After some digging, I found out that the &lt;code&gt;FormRequest&lt;/code&gt; class has a &lt;code&gt;failedValidation&lt;/code&gt; method that throws a &lt;code&gt;ValidationException&lt;/code&gt;, which is caught by the Laravel exception handler to create the default 422 response with the error bag.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="c1"&gt;// source: vendor/laravel/framework/src/Illuminate/Foundation/Http/FormRequest.php&lt;/span&gt;

&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;FormRequest&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;Request&lt;/span&gt; &lt;span class="kd"&gt;implements&lt;/span&gt; &lt;span class="nc"&gt;ValidatesWhenResolved&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="mf"&gt;...&lt;/span&gt;

    &lt;span class="cd"&gt;/**
     * Handle a failed validation attempt.
     *
     * @param  \Illuminate\Contracts\Validation\Validator  $validator
     * @return void
     *
     * @throws \Illuminate\Validation\ValidationException
     */&lt;/span&gt;
    &lt;span class="k"&gt;protected&lt;/span&gt; &lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="n"&gt;failedValidation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;Validator&lt;/span&gt; &lt;span class="nv"&gt;$validator&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ValidationException&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$validator&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
                    &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;errorBag&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$this&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;errorBag&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                    &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;redirectTo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$this&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;getRedirectUrl&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="mf"&gt;...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By overriding this method in our own &lt;code&gt;FormRequest&lt;/code&gt;, we can throw a custom exception that fails silently, returning a 2XX status code and not exposing any error message.&lt;/p&gt;

&lt;p&gt;Let's start by creating our custom exception. I named it &lt;code&gt;SilentValidationException&lt;/code&gt;; it takes two parameters: a custom exception message, and an instance of the Laravel validator, which contains the errors of the &lt;code&gt;FormRequest&lt;/code&gt; validation. I chose to store the error payload as an array to reuse it later.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="cp"&gt;&amp;lt;?php&lt;/span&gt;

&lt;span class="kn"&gt;namespace&lt;/span&gt; &lt;span class="nn"&gt;App\Exceptions&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kn"&gt;use&lt;/span&gt; &lt;span class="nc"&gt;Exception&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;use&lt;/span&gt; &lt;span class="nc"&gt;Illuminate\Contracts\Validation\Validator&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;SilentValidationException&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;Exception&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="kt"&gt;array&lt;/span&gt; &lt;span class="nv"&gt;$errors&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="n"&gt;__construct&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="nv"&gt;$message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;Validator&lt;/span&gt; &lt;span class="nv"&gt;$validator&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;parent&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;__construct&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$message&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

        &lt;span class="nv"&gt;$this&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;errors&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;$validator&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;errors&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;toArray&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="n"&gt;getErrors&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="kt"&gt;array&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nv"&gt;$this&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;errors&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can then create our &lt;code&gt;FormRequest&lt;/code&gt;, which will throw the &lt;code&gt;SilentValidationException&lt;/code&gt; if the payload validation fails.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="cp"&gt;&amp;lt;?php&lt;/span&gt;

&lt;span class="kn"&gt;namespace&lt;/span&gt; &lt;span class="nn"&gt;App\Http\Requests&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kn"&gt;use&lt;/span&gt; &lt;span class="nc"&gt;App\Exceptions\SilentValidationException&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;use&lt;/span&gt; &lt;span class="nc"&gt;Illuminate\Contracts\Validation\Validator&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;use&lt;/span&gt; &lt;span class="nc"&gt;Illuminate\Foundation\Http\FormRequest&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ContentSecurityPolicyViolationRequest&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;FormRequest&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="n"&gt;authorize&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="kt"&gt;bool&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;      
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="n"&gt;rules&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="kt"&gt;array&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="s1"&gt;'csp_report'&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
                &lt;span class="s1"&gt;'required'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="s1"&gt;'array'&lt;/span&gt;
            &lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="c1"&gt;// Many validation rules&lt;/span&gt;
        &lt;span class="p"&gt;];&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;protected&lt;/span&gt; &lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="n"&gt;failedValidation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;Validator&lt;/span&gt; &lt;span class="nv"&gt;$validator&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;SilentValidationException&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="nv"&gt;$validator&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
            &lt;span class="s1"&gt;'Content security policy violation ignored'&lt;/span&gt;
        &lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, if you try that code, Laravel will handle our &lt;code&gt;SilentValidationException&lt;/code&gt; like any other exception and show an error page. To avoid this, we need to change the exception handling behaviour for this particular exception. This can be done in the &lt;code&gt;app/Exceptions/Handler.php&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;There are two things to do in that file. First, we want to register our custom exception in the &lt;code&gt;$dontReport&lt;/code&gt; array to avoid logging the error in your log file, Sentry, Flare, or whatever error-reporting service you use.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="cp"&gt;&amp;lt;?php&lt;/span&gt;

&lt;span class="kn"&gt;namespace&lt;/span&gt; &lt;span class="nn"&gt;App\Exceptions&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kn"&gt;use&lt;/span&gt; &lt;span class="nc"&gt;Illuminate\Foundation\Exceptions\Handler&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nc"&gt;ExceptionHandler&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;use&lt;/span&gt; &lt;span class="nc"&gt;Illuminate\Support\Facades\Log&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Handler&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;ExceptionHandler&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="cd"&gt;/**
     * A list of the exception types that are not reported.
     *
     * @var string[]
     */&lt;/span&gt;
    &lt;span class="k"&gt;protected&lt;/span&gt; &lt;span class="nv"&gt;$dontReport&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="nc"&gt;SilentValidationException&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;class&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;];&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This alone does not change the response; for that, we need to register a callback in the &lt;code&gt;register&lt;/code&gt; method of the &lt;code&gt;Handler&lt;/code&gt; class, as explained &lt;a href="https://laravel.com/docs/master/errors#rendering-exceptions" rel="noopener noreferrer"&gt;in the documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Here we can get creative and do whatever we want with the error payload before sending the response. I could, for instance, store the errors in a database table to see which errors occur most often, and use that insight to make my API compatible with more browsers.&lt;/p&gt;

&lt;p&gt;To keep this example simple, let's just log the validation errors and return a &lt;code&gt;no content&lt;/code&gt; response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="cp"&gt;&amp;lt;?php&lt;/span&gt;

&lt;span class="kn"&gt;namespace&lt;/span&gt; &lt;span class="nn"&gt;App\Exceptions&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kn"&gt;use&lt;/span&gt; &lt;span class="nc"&gt;Illuminate\Foundation\Exceptions\Handler&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nc"&gt;ExceptionHandler&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;use&lt;/span&gt; &lt;span class="nc"&gt;Illuminate\Support\Facades\Log&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Handler&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;ExceptionHandler&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="mf"&gt;...&lt;/span&gt;

    &lt;span class="cd"&gt;/**
     * Register the exception handling callbacks for the application.
     *
     * @return void
     */&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="n"&gt;register&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nv"&gt;$this&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;renderable&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;SilentValidationException&lt;/span&gt; &lt;span class="nv"&gt;$exception&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nc"&gt;Log&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$exception&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;getMessage&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="nv"&gt;$exception&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;getErrors&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;

            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;response&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;noContent&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it! You now know how to silently validate a payload using a Laravel Form Request.&lt;/p&gt;

</description>
      <category>laravel</category>
      <category>validation</category>
      <category>exception</category>
      <category>api</category>
    </item>
  </channel>
</rss>
