<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Luke Liukonen</title>
    <description>The latest articles on DEV Community by Luke Liukonen (@liukonen).</description>
    <link>https://dev.to/liukonen</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F419138%2Fc24f9e23-0a2e-4bf0-b62a-8cfb83f63f09.jpeg</url>
      <title>DEV Community: Luke Liukonen</title>
      <link>https://dev.to/liukonen</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/liukonen"/>
    <language>en</language>
    <item>
      <title>Logging Frameworks... Moving from SEQ to f/Lo/G... and why I didn't pick ELK</title>
      <dc:creator>Luke Liukonen</dc:creator>
      <pubDate>Sun, 21 Apr 2024 01:24:15 +0000</pubDate>
      <link>https://dev.to/liukonen/logging-framoworks-moving-from-seq-to-flog-and-why-i-didnt-pick-elk-4p00</link>
      <guid>https://dev.to/liukonen/logging-framoworks-moving-from-seq-to-flog-and-why-i-didnt-pick-elk-4p00</guid>
      <description>&lt;p&gt;First, anyone with a "FLOG" app... not trying to take the name away or claim it, but I do think Flog is a neat little way of explaining the logging stack... similar to ELK... (FluentBit / Loki / Grafana)&lt;/p&gt;

&lt;h2&gt;The why&lt;/h2&gt;

&lt;p&gt;With the latest flare-ups of companies doing a "rug pull" on their licensing schemes, locking down functionality, and outright abandoning the people who put them where they are (looking at you, Redis)... I felt it was time to start migrating any proprietary solutions over to free and open-source ones. I'm not in any way saying Datalust is doing this with their SEQ product. I have used it for years as a log aggregator, and it has worked very well. I did hit a hiccup a while back trying to view logs on my computer and phone at the same time due to limitations of the software, but overall it has done what I wanted it to do: a very lightweight logging system that just works out of the box. Evaluating the options, though, was rough. The industry standard is ELK (or an OpenSearch variant). Having Elasticsearch as your main engine, Logstash as the aggregator, and Kibana as your main UI felt very heavy for running something that does one thing... or in my mind does one thing. Also... is it just me, or does setting up these solutions feel like you need a PhD in logging to get them to work? So I did what any developer would do... googled for 5 minutes, gave up, and asked ChatGPT what it would do. A lot of the recommendations it gave are, I'm sure, great, but for the full suite to run on something like a Raspberry Pi, most of my research pointed to Grafana Loki as the main product to use.&lt;/p&gt;

&lt;p&gt;I've used Loki before, and it was OK, but Grafana felt like overkill. I originally started with Grafana / Prometheus / agents to monitor my systems. That setup is great, but very, very overkill for what I have, which is a basic home network with a handful of services I host. I prefer and still use Uptime Kuma as my main monitoring solution, but when using Loki, I was overwhelmed... not so much by the tool as by the amount of data. I was using Promtail as my collector, and the system felt very, very slow... especially with the amount of data I collected. See, I was collecting all of my Docker container logs, and some of my home lab services are very chatty on the console. That said, I figured I'd give it a fresh start and use something other than Promtail. To me, something like Promtail that either has to tail the data live or query it every so many minutes felt like a waste. When it comes to my applications, I want live data flowing into one and only one location, and only the data I actually need.&lt;/p&gt;

&lt;p&gt;I could have used Loki's REST API for my solution. I hate adding complexity to a project, but I do love performance. For me, FluentBit felt like a good fit. I can configure my endpoints and how I want to connect to them, which I did. I picked a TCP connection since it sits a level below HTTP in the network stack and skips the per-request overhead of HTTP once the connection is established, meaning each log event is as light as possible. It also gives me a trio of services similar to the ELK platform, each serving the same role. The third reason I went with FluentBit was that if I ever need to switch over to ELK or OpenSearch, I can. FluentBit is a great, quick middleware platform and is often recommended for people who don't want to use Logstash in their ELK stack.&lt;/p&gt;

&lt;p&gt;Final thought on this... In terms of resources, looking them up in my Portainer instance, my "FLOG" stack appears to use only a few more resources than my SEQ instance did. This is good, since I was worried that cutting over to a multi-app stack would somehow push the hardware limits of my Raspberry Pi.&lt;/p&gt;

&lt;h2&gt;The code&lt;/h2&gt;

&lt;p&gt;The cutover. At this point I'm doing a proof of concept more than anything, but I found that Serilog for .NET was probably the best bet for cutting over. My ASP.NET services already use the Microsoft logging platform, and my original service registration in service.cs was a simple&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="n"&gt;services&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddLogging&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;loggingBuilder&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt;
 &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;loggingBuilder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddSeq&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Configuration&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;GetSection&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;SEQ&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With Serilog, it took a bit, but I was able to get fairly close to the same code. I needed to go to NuGet and install the Serilog.AspNetCore and Serilog.Sinks.Network packages. I really, really tried using the Serilog configuration package to make the setup easy, but settled without the automatic configuration bindings. This left me with&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt; &lt;span class="n"&gt;services&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddLogging&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;loggingBuilder&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt;
 &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;LoggerConfiguration&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Enrich&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;WithProperty&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"App"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;MinimumLevel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Debug&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WriteTo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Console&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WriteTo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;TCPSink&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;loggerConnection&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;CompactJsonFormatter&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;Serilog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Events&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;LogEventLevel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Warning&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;CreateLogger&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="n"&gt;loggingBuilder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddSerilog&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Not as clean as the Seq setup, but to be fair, not too bad either. I can trim it down by not writing to the console if I don't want to, and I am adding an extra App property to the JSON structure so I can tell which app produced an error message.&lt;/p&gt;

&lt;p&gt;Now to the server-side stuff.&lt;br&gt;
I am a huge fan of Docker Compose files. This stack is no exception. For it, the layout is as follows (note: anything prefixed with an _ is a folder)&lt;/p&gt;

&lt;p&gt;.&lt;br&gt;
├── docker-compose.yml&lt;br&gt;
├── fluent-bit.conf&lt;br&gt;
├── loki-config.yaml&lt;br&gt;
├── _fluentbit&lt;br&gt;
│   └── _logs&lt;br&gt;
├── _loki&lt;br&gt;
│   ├── _loki&lt;br&gt;
│   └── _wal&lt;br&gt;
└── _grafana&lt;/p&gt;

&lt;p&gt;I prefer keeping my configs in the project root, but to each their own. The folders are needed for storing the live data. I'm not a fan of volumes in the Docker sense, and just bind-mount to folders that exist in the project.&lt;/p&gt;

&lt;p&gt;The first step in the pipeline is FluentBit. The config I have set is&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;[&lt;span class="n"&gt;INPUT&lt;/span&gt;]
    &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;
    &lt;span class="n"&gt;listen&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;.&lt;span class="m"&gt;0&lt;/span&gt;.&lt;span class="m"&gt;0&lt;/span&gt;.&lt;span class="m"&gt;0&lt;/span&gt;
    &lt;span class="n"&gt;port&lt;/span&gt; &lt;span class="m"&gt;24224&lt;/span&gt;   

[&lt;span class="n"&gt;INPUT&lt;/span&gt;]
    &lt;span class="n"&gt;Name&lt;/span&gt;            &lt;span class="n"&gt;tcp&lt;/span&gt;
    &lt;span class="n"&gt;Listen&lt;/span&gt;          &lt;span class="m"&gt;0&lt;/span&gt;.&lt;span class="m"&gt;0&lt;/span&gt;.&lt;span class="m"&gt;0&lt;/span&gt;.&lt;span class="m"&gt;0&lt;/span&gt;
    &lt;span class="n"&gt;Port&lt;/span&gt;            &lt;span class="m"&gt;12201&lt;/span&gt;

[&lt;span class="n"&gt;OUTPUT&lt;/span&gt;]
    &lt;span class="n"&gt;Name&lt;/span&gt;        &lt;span class="n"&gt;grafana&lt;/span&gt;-&lt;span class="n"&gt;loki&lt;/span&gt;
    &lt;span class="n"&gt;Match&lt;/span&gt;       *
    &lt;span class="n"&gt;Url&lt;/span&gt;         &lt;span class="n"&gt;http&lt;/span&gt;://&lt;span class="n"&gt;loki&lt;/span&gt;:&lt;span class="m"&gt;3100&lt;/span&gt;/&lt;span class="n"&gt;loki&lt;/span&gt;/&lt;span class="n"&gt;api&lt;/span&gt;/&lt;span class="n"&gt;v1&lt;/span&gt;/&lt;span class="n"&gt;push&lt;/span&gt;
    &lt;span class="n"&gt;RemoveKeys&lt;/span&gt;  &lt;span class="n"&gt;source&lt;/span&gt;,&lt;span class="n"&gt;container_id&lt;/span&gt;
    &lt;span class="n"&gt;Labels&lt;/span&gt;      {&lt;span class="n"&gt;job&lt;/span&gt;=&lt;span class="s2"&gt;"fluent-bit"&lt;/span&gt;}
    &lt;span class="n"&gt;LabelKeys&lt;/span&gt;   &lt;span class="n"&gt;container_name&lt;/span&gt;
    &lt;span class="n"&gt;BatchWait&lt;/span&gt;   &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt;
    &lt;span class="n"&gt;BatchSize&lt;/span&gt;   &lt;span class="m"&gt;1001024&lt;/span&gt;
    &lt;span class="n"&gt;LineFormat&lt;/span&gt;  &lt;span class="n"&gt;json&lt;/span&gt;
    &lt;span class="n"&gt;LogLevel&lt;/span&gt;    &lt;span class="n"&gt;info&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I expose 2 endpoints on 2 different ports (24224 and 12201).&lt;br&gt;
Input 1 is an HTTP endpoint that I can make POST calls to. This will come in handy for the basic bash shell scripts I have, so they can log events too. The second input is the TCP input, which I will use for any application-based logging I do. The output is configured specifically for Loki, and the log level is set to info. While I could have set it stricter, I think that limitation fits better in the application than in the logger (at least for my scenario, which is hosted 100% internally and has a lower likelihood of being breached).&lt;/p&gt;
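&lt;p&gt;As an example of how one of those bash scripts might use the HTTP input, a single log event can be POSTed with curl. This is only a sketch; the port (24224, matching the first input above), the tag path, and the JSON field names are my own choices, not anything FluentBit requires:&lt;/p&gt;

```shell
# Build a one-line JSON log event (the field names here are arbitrary).
payload='{"level":"info","app":"backup-script","message":"nightly backup finished"}'

# POST it to the FluentBit HTTP input; the request path becomes the tag.
# "|| true" keeps a cron script alive if the collector happens to be down.
curl -s -X POST -H 'Content-Type: application/json' \
  -d "$payload" http://localhost:24224/app.log || true

echo "$payload"
```

&lt;p&gt;From there the event flows through the Loki output like any other record.&lt;/p&gt;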

&lt;p&gt;From FluentBit, we move on to the Loki configuration&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;auth_enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;http_listen_port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3100&lt;/span&gt;
  &lt;span class="na"&gt;grpc_listen_port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9095&lt;/span&gt;

&lt;span class="na"&gt;ingester&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;lifecycler&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;address&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;127.0.0.1&lt;/span&gt;
    &lt;span class="na"&gt;ring&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;kvstore&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;store&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;inmemory&lt;/span&gt;
      &lt;span class="na"&gt;replication_factor&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
    &lt;span class="na"&gt;final_sleep&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;0s&lt;/span&gt;
  &lt;span class="na"&gt;chunk_idle_period&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;15m&lt;/span&gt;
  &lt;span class="na"&gt;max_chunk_age&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1h&lt;/span&gt;
  &lt;span class="na"&gt;chunk_target_size&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1048576&lt;/span&gt;
  &lt;span class="na"&gt;chunk_retain_period&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;30s&lt;/span&gt;


&lt;span class="na"&gt;schema_config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;from&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2020-02-25&lt;/span&gt;
      &lt;span class="na"&gt;store&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;boltdb&lt;/span&gt;
      &lt;span class="na"&gt;object_store&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;filesystem&lt;/span&gt;
      &lt;span class="na"&gt;schema&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v11&lt;/span&gt;
      &lt;span class="na"&gt;index&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;prefix&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;index_&lt;/span&gt;
        &lt;span class="na"&gt;period&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;24h&lt;/span&gt;

&lt;span class="na"&gt;storage_config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;boltdb&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/lib/loki/index&lt;/span&gt;

  &lt;span class="na"&gt;filesystem&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/lib/loki/chunks&lt;/span&gt;

&lt;span class="na"&gt;limits_config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enforce_metric_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;reject_old_samples&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;reject_old_samples_max_age&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;168h&lt;/span&gt;

&lt;span class="na"&gt;chunk_store_config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;max_look_back_period&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;0s&lt;/span&gt;

&lt;span class="na"&gt;table_manager&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;retention_deletes_enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;retention_period&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;30d&lt;/span&gt;

&lt;span class="na"&gt;compactor&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;working_directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/lib/loki/boltdb-shipper&lt;/span&gt;
  &lt;span class="na"&gt;shared_store&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;filesystem&lt;/span&gt;

&lt;span class="na"&gt;ruler&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local&lt;/span&gt;
    &lt;span class="na"&gt;local&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/lib/loki/rules&lt;/span&gt;
  &lt;span class="na"&gt;rule_path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/lib/loki/rules-temp&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I'm sure there is some cleanup I can do here. For one, I'm not using gRPC for any of my logging, so I don't think I need that section (I could be wrong, though). My goal is to keep data for about a month before throwing it away, which means persistence on the Loki instance. I've yet to run this for longer than a few hours, so there may end up being some corrections to these files.&lt;/p&gt;

&lt;p&gt;Third and last is the compose file. I could have provisioned Loki as a data source right off the bat in the compose setup, but it only takes a few seconds to add Loki (&lt;a href="http://loki:3100"&gt;http://loki:3100&lt;/a&gt;) to the sources in Grafana.&lt;br&gt;
&lt;/p&gt;
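&lt;p&gt;If you ever do want to skip that manual click-through, Grafana also supports file-based provisioning of data sources; a minimal sketch (the file path under /etc/grafana/provisioning/datasources/ and the data source name are assumptions on my part) would look something like:&lt;/p&gt;

```yaml
# loki.yaml - mounted into /etc/grafana/provisioning/datasources/ (path assumed)
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
    isDefault: true
```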

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3'&lt;/span&gt;

&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;grafana&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;grafana/grafana:latest&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3003:3000"&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./grafana:/var/lib/grafana&lt;/span&gt;
    &lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;logging_network&lt;/span&gt;

  &lt;span class="na"&gt;loki&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;grafana/loki:latest&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;-config.file=/etc/loki/local-config.yaml&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3100:3100"&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./loki/loki:/var/lib/loki/&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./loki-config.yaml:/etc/loki/local-config.yaml&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./loki/wal:/wal&lt;/span&gt;
    &lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;logging_network&lt;/span&gt;

  &lt;span class="na"&gt;fluent-bit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;grafana/fluent-bit-plugin-loki:latest&lt;/span&gt;
    &lt;span class="c1"&gt;#image: fluent/fluent-bit:latest&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;LOKI_URL=http://loki:3100/loki/api/v1/push&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;LOG_PATH=/var/log/*.log=value&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;24224:24224"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;12201:12201"&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./fluentbit/log:/var/log&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf&lt;/span&gt;
    &lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;logging_network&lt;/span&gt;

&lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;logging_network&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I'm currently using the grafana/fluent-bit-plugin-loki container, which only runs on x64 at the moment. I do plan on trying to cut this over to the official FluentBit container, which supports ARM-based CPUs. &lt;/p&gt;

&lt;p&gt;As with any of my posts... if you see any improvements I can make or ways to clean things up, let me know. I'm still in the middle of my project, but at a point right now where publishing a quick little how-to / why article felt OK to do. &lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building Microsoft's contender for Redis (Garnet) on the Raspberry Pi</title>
      <dc:creator>Luke Liukonen</dc:creator>
      <pubDate>Mon, 25 Mar 2024 03:18:05 +0000</pubDate>
      <link>https://dev.to/liukonen/building-microsofts-contender-for-a-redis-garnet-on-the-raspberry-pi-5bek</link>
      <guid>https://dev.to/liukonen/building-microsofts-contender-for-a-redis-garnet-on-the-raspberry-pi-5bek</guid>
<description>&lt;p&gt;When I found out about Redis going from a BSD license to a dual license, I felt a bit betrayed. I was still a holdout using Redis locally and had become a huge fan of its speed and support. While there were competitors on the market, such as KeyDB and DragonFly DB, KeyDB had no native Pi support that I could tell, and DragonFly also carries a business license rather than an open-source one. I used Redis not only because it was the top name in quick in-memory databases, but because it was open source, meaning there was a community of engineers and businesses willing to support the product.  &lt;/p&gt;

&lt;p&gt;I found out, like 2 hours ago, about Microsoft's recent open-source offering, and had to give it a go. With a larger name backing the product, and being more of a "dot net" shop in my house, I wanted to see what it could do and whether it was a decent drop-in replacement for my Redis server, which I had just ported to my Raspberry Pi 5.&lt;/p&gt;

&lt;p&gt;I don't see myself playing with this too much tonight, but I did want to post at least the Dockerfile I used to get it running.&lt;/p&gt;

&lt;p&gt;I am curious, though, and could use some help... I don't know exactly where the files for the Tsavorite DB are stored, as I'd like to keep them on a volume for rebuilds and upgrades... I'm sure it's buried somewhere in the documentation.&lt;/p&gt;

&lt;p&gt;So... without further ado... here is the Dockerfile I used. It is slightly tweaked from what Microsoft has in the repo, but doesn't require a full download of the git repo to build. I specifically targeted the arm64-based image instead of x64. I am also using a chiseled version of the runtime, to help reduce the attack surface and keep the overall image smaller. The final image comes out to 101 MB, which isn't too bad, but is still more than double the official 7.2.4-alpine image (41.6 MB) I'm using for Redis&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /source

# Download and extract the Garnet source tarball
RUN curl -L https://github.com/microsoft/garnet/archive/refs/tags/v1.0.1.tar.gz -o garnet.tar.gz \
    &amp;amp;&amp;amp; tar -xzf garnet.tar.gz --strip-components=1 \
    &amp;amp;&amp;amp; rm garnet.tar.gz

RUN dotnet restore
RUN dotnet build -c Release

# Copy and publish app and libraries
WORKDIR /source/main/GarnetServer
RUN dotnet publish -c Release -o /app -r linux-arm64 --self-contained false -f net8.0

# Final stage/image
# FROM mcr.microsoft.com/dotnet/runtime:8.0
FROM mcr.microsoft.com/dotnet/runtime:8.0-jammy-chiseled
WORKDIR /app
COPY --from=build /app .

# Run GarnetServer with an index size of 128MB
ENTRYPOINT ["/app/GarnetServer", "-i", "128m"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For the Docker Compose file, I went with something simple&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.3'
services:
  garnet:
    restart: always
    build:
      context: .
    image: garnet
    container_name: garnet
    ports:
      - "6379:3278"
    volumes:
      - /etc/localtime:/etc/localtime:ro
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I map Garnet's port of entry to the same port Redis uses. Let me know if this helps you, or if you have any recommendations! &lt;/p&gt;

&lt;p&gt;Side note... as I play with this more, I'll be sure to update this post with my findings. My current use cases are the Pub/Sub functionality and a simple key-value memory cache for my services. &lt;/p&gt;

&lt;h2&gt;Updates 2024-04-05&lt;/h2&gt;

&lt;p&gt;So... after getting some info back on GitHub, I was able to get my Docker instance to persist, so I have some updates to my files. For starters, I migrated my configs out of the entry point and into a file. This way, I can keep persistence on the data I have. There is some cleanup work and optimization I'd like to do, but I might reserve that for when I try out ValKey... the Linux Foundation's version of Redis they are developing. &lt;/p&gt;

&lt;p&gt;Here is what my Dockerfile looks like now... if you're running this on an Intel or AMD-based machine, you'll want to drop the -r linux-arm64 flag from the publish command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /source

RUN curl -L https://github.com/microsoft/garnet/archive/refs/tags/v1.0.2.tar.gz -o garnet.tar.gz \
    &amp;amp;&amp;amp; tar -xzf garnet.tar.gz --strip-components=1 \
    &amp;amp;&amp;amp; rm garnet.tar.gz

RUN dotnet restore
RUN dotnet build -c Release

# Copy and publish app and libraries
WORKDIR /source/main/GarnetServer
RUN dotnet publish -c Release -o /app -r linux-arm64 --self-contained false -f net8.0

# Final stage/image
FROM mcr.microsoft.com/dotnet/runtime:8.0
WORKDIR /app
COPY --from=build /app .
COPY garnet.config /app/garnet.conf
VOLUME /data
RUN mkdir -p /data/checkpoint /data/logs
ENTRYPOINT ["/app/GarnetServer", "-i", "128m", "--config-import-path", "/app/garnet.conf"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;My docker-compose.yml has also changed slightly&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.3'
services:
  garnet:
    restart: always
    build:
      context: .
    image: garnet
    container_name: garnetpersist
    ports:
      - "6399:3278"
    volumes:
      - ./data:/data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As noted, I dropped the time syncing between the container and the system in this version; however, I do now have a data directory next to the compose file for the volume... I bind-mount my folders in Docker, but if you want to use proper "volumes", it shouldn't be a hard change to make.&lt;/p&gt;
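&lt;p&gt;For anyone who does prefer proper volumes, the change really is small; a sketch of the compose fragment (the volume name garnet_data is my own):&lt;/p&gt;

```yaml
# docker-compose.yml fragment: a named volume instead of the ./data bind mount
services:
  garnet:
    volumes:
      - garnet_data:/data

volumes:
  garnet_data:
```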

&lt;p&gt;For my config, to get persistence, I am using the following config file, which is pulled into my image at build time. It also wouldn't be that hard to bind-mount it instead, but in my simple case I just copy it in from the Dockerfile build script.&lt;/p&gt;

&lt;p&gt;garnet.config&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "MemorySize": "128m",
  "Port" : 3278,
  "Address": "0.0.0.0",
  "CheckpointDir": "/data/checkpoint",
  "EnableAOF": true,
  "Recover": true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It was also recommended to me that if I call SAVE or BGSAVE, the data will be checkpointed... that way I'm not running every write operation through the AOF. I have to thank Badrish Chandramouli and kingcomxu on GitHub for the recommendations! Here is a &lt;a href="https://github.com/microsoft/garnet/discussions/166"&gt;link&lt;/a&gt; to our discussion on GitHub. &lt;/p&gt;
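&lt;p&gt;Since Garnet speaks the Redis protocol, triggering that checkpoint from a shell is just the usual command against the mapped port (6399, per my compose file); a sketch, assuming redis-cli is installed:&lt;/p&gt;

```shell
# Ask Garnet for a background checkpoint; "|| true" tolerates the server being down.
cmd="BGSAVE"
redis-cli -p 6399 "$cmd" || true
echo "sent $cmd"
```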

</description>
    </item>
    <item>
      <title>Unleashing the Power of Developer AI: A Journey into Hosting a Private LLM/Code Assistant locally</title>
      <dc:creator>Luke Liukonen</dc:creator>
      <pubDate>Fri, 22 Dec 2023 01:53:05 +0000</pubDate>
      <link>https://dev.to/liukonen/unleashing-the-power-of-developer-ai-a-journey-into-hosting-a-private-llmcode-assistant-locally-4kma</link>
      <guid>https://dev.to/liukonen/unleashing-the-power-of-developer-ai-a-journey-into-hosting-a-private-llmcode-assistant-locally-4kma</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the ever-evolving landscape of software development, the quest for efficient coding tools led me to Twinny, a VSCode extension promising to bring GitHub Copilot-like capabilities to the local environment. Eager to try those capabilities without the subscription cost, I set out to host this private GitHub Copilot alternative on my machines. I'll take you through the highs and lows, the challenges faced, and the eventual win on this fascinating journey.&lt;/p&gt;

&lt;p&gt;As a note, I am not affiliated with the Twinny project in any way.&lt;/p&gt;

&lt;h3&gt;
  
  
  Discovery
&lt;/h3&gt;

&lt;p&gt;When I first encountered Twinny, the prospect of having GitHub Copilot-style insights right within the confines of my local development environment was, well, a bit mind-blowing. With the promise of streamlined coding tailored to my needs, I had to see what the extension had to offer.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hardware Landscape: Two Machines, Two Stories
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Machine 1 - Framework Laptop
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Processor:&lt;/strong&gt; 11th Gen Intel Core i7-1165G7 @ 2.80GHz&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory:&lt;/strong&gt; 32GB RAM&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storage:&lt;/strong&gt; 1TB NVMe SSD&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Graphics:&lt;/strong&gt; Onboard Graphics&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OS:&lt;/strong&gt; Windows 11&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Containerization:&lt;/strong&gt; Rancher Desktop over Docker&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Machine 2 - Custom-built Desktop
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Processor:&lt;/strong&gt; AMD Ryzen 5 5600X 6-Core&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory:&lt;/strong&gt; 32GB RAM&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storage:&lt;/strong&gt; 1TB SSD&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Graphics:&lt;/strong&gt; NVIDIA GeForce RTX 3060 TI&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OS:&lt;/strong&gt; Windows 11&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Containerization:&lt;/strong&gt; Docker Desktop&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Pros and Cons: Navigating the Local Deployment Terrain
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Advantages of Local Deployment:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Cost Efficiency:&lt;/strong&gt; Running GitHub Copilot-like capabilities without incurring cloud costs was a game-changer.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Privacy Control:&lt;/strong&gt; Keeping code and data on-premises provided an added layer of security.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Customization:&lt;/strong&gt; While I'm stuck using Ollama as my backend/host for LLMs, there is far more than one LLM available to run on it.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Disadvantages of Local Deployment:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Hardware Requirements:&lt;/strong&gt; The demand for powerful GPUs for optimal performance proved to be a consideration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Setup Complexity:&lt;/strong&gt; While not too bad, keeping an LLM running all the time and managing a client/server model is a bit more involved than just installing a plugin.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintenance Responsibility:&lt;/strong&gt; Regular updates and maintenance became my responsibility, adding a layer of ownership.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Machine 1: No luck
&lt;/h2&gt;

&lt;p&gt;While Ollama and the LLM were installed and running, calling the API from Postman would just spin. Calling directly from within the terminal appeared to work, but was really, really slow. In my IDE, I would just see a spinning wheel where my Twinny icon should appear. My CPU would spike, and that was it.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Glimpse Into Success on Machine 2
&lt;/h2&gt;

&lt;p&gt;Not surprisingly, it was on my gaming PC, a custom-built powerhouse with an AMD Ryzen processor and NVIDIA GeForce RTX 3060 TI, where Twinny truly came to life. The performance boost, notably attributed to GPU acceleration, turned what initially seemed like a challenge into a success story.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Observations:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;GPU acceleration played a pivotal role in Twinny's optimal performance.&lt;/li&gt;
&lt;li&gt;Comparisons of Docker Desktop configurations between machines shed light on factors influencing responsiveness.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Installation Instructions: (Windows)
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Get the backend running

&lt;ul&gt;
&lt;li&gt;Have a container-based system or WSL. I prefer containers since they are easy to spin up or down.&lt;/li&gt;
&lt;li&gt;As you saw above, I have Rancher Desktop and Docker Desktop.&lt;/li&gt;
&lt;li&gt;Run the following command to download and start up your container... &lt;em&gt;the --gpus=all flag is needed to access my NVIDIA GPU; my Framework laptop, however, did not have the same luck. As an aside, on the Framework laptop I tried both with and without this flag.&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="n"&gt;docker&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;run&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--gpus&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;all&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-v&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;ollama:/root/.ollama&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-p&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;11434:11434&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;ollamagpu&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;ollama/ollama&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Run the following command to install the LLM needed for Twinny
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;docker&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;exec&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-it&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;ollamagpu&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;ollama&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;run&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;codellama:7b-code&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;I validated things "should work" by running the following curl command
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  curl &lt;span class="nt"&gt;--location&lt;/span&gt; &lt;span class="s1"&gt;'http://localhost:11434/api/chat'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--header&lt;/span&gt; &lt;span class="s1"&gt;'Content-Type: application/json'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--data&lt;/span&gt; &lt;span class="s1"&gt;'{
  "model": "codellama:7b-code",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ]
}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Install the extension &lt;a href="https://marketplace.visualstudio.com/items?itemName=rjmacarthy.twinny"&gt;link&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;From there, in the bottom right-hand corner you should see the Twinny extension icon. As you start typing, you should hopefully see IntelliSense kick in with automatic code-generation recommendations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: Unveiling the Potential
&lt;/h2&gt;

&lt;p&gt;In conclusion, self-hosting a private "GitHub Copilot"-like tool locally with Twinny has been quite a journey with some definite learnings. The challenges faced on the Framework Laptop were balanced by a clear win on the gaming PC, showcasing both the potential benefits and real alternatives to the bigger players in the game. I'm excited about this proof of concept, and I like the idea that organizations and individuals will soon be able to run code assistants from within their own walls, reducing the fear of third parties getting hold of their code.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>privacy</category>
      <category>development</category>
      <category>programming</category>
    </item>
    <item>
      <title>Automating a Backup Server and Gaming PC with Wake on Lan, PowerShell, Bash Shell Scripts, and a Raspberry Pi</title>
      <dc:creator>Luke Liukonen</dc:creator>
      <pubDate>Wed, 06 Sep 2023 02:47:44 +0000</pubDate>
      <link>https://dev.to/liukonen/automating-a-backup-server-and-gaming-pc-with-wake-on-lan-powershell-bash-shell-scripts-and-a-raspberry-pi-5d65</link>
      <guid>https://dev.to/liukonen/automating-a-backup-server-and-gaming-pc-with-wake-on-lan-powershell-bash-shell-scripts-and-a-raspberry-pi-5d65</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;br&gt;
Efficiency has always fascinated me as a software engineer. I'm constantly seeking ways to optimize existing technologies. In 2020, I built a gaming PC equipped with a mid-tier Zen 3 CPU, 16 GB of RAM, and an Nvidia 3060. However, it often took a back seat to my Framework Laptop due to its compatibility with my kitchen workstation setup, and the ease of hot-swapping using USB-C with my work laptop, an M1 Apple MacBook. &lt;/p&gt;

&lt;p&gt;With a decent setup in my kitchen area, thanks to the work-from-home privilege many of us have seen over the last few years, my gaming PC seldom saw use. When it did, it was usually for video transcoding, light gaming, or overdue Windows updates. Meanwhile, my home lab, running on Ubuntu Server, saw substantial power consumption reduction over the years, thanks to moving from an AMD 5350 to an Intel J1900 CPU/motherboard combo and Raspberry Pi devices.&lt;/p&gt;

&lt;p&gt;I knew about Wake on LAN but never explored it until I had two idle PCs: the old home lab and Windows Gaming PC.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Problems to Solve:&lt;/strong&gt;&lt;br&gt;
Going through the above, I had three problems I wanted to solve:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;How to keep my backup Linux PC in sync with my home lab so if my machine ever went down, cutover would be nearly seamless.&lt;/li&gt;
&lt;li&gt;How to keep my Windows PC up to date with software updates and apps.&lt;/li&gt;
&lt;li&gt;How to automate video transcoding seamlessly.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt;&lt;br&gt;
This led me to utilize the "efficiency cores" of a Raspberry Pi 3 for automation. My goals, driven by cron jobs, were clear:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Every day at 6:30 am, check for videos to encode on my network share. If found, wake up the gaming PC, run the encoding script, and shut down when finished.&lt;/li&gt;
&lt;li&gt;Once a week, wake up my backup Linux server, perform a backup, and shut down.&lt;/li&gt;
&lt;li&gt;Once a week wake up the gaming PC, execute system updates, and shut down.&lt;/li&gt;
&lt;/ol&gt;
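&lt;p&gt;On the Pi, those three goals map directly onto crontab entries. A sketch with hypothetical script names and times (only the 6:30 am encode check comes from the list above; the weekly slots are placeholders):&lt;/p&gt;

```plaintext
# m   h   dom mon dow  command
30    6   *   *   *    /home/pi/encode-check.sh     # daily: wake gaming PC and encode if videos are queued
0     3   *   *   0    /home/pi/backup-sync.sh      # weekly: wake backup server, sync, shut down
0     4   *   *   6    /home/pi/windows-update.sh   # weekly: wake gaming PC, run updates, shut down
```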

&lt;p&gt;Wake on LAN played a crucial role in achieving these goals, offering substantial energy savings (for the AMD server alone, I'm saving just over $42 a year by not running it 24/7).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Script Description:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This script serves as the core component of our automation process. Its primary objective is to initiate the wake-up and execution sequence for a target machine, such as a Windows PC, using a Raspberry Pi. Here's how it works:&lt;/p&gt;

&lt;p&gt;Setting the Target: We begin by specifying the target machine's IP address, which allows us to precisely identify the system we want to awaken and interact with.&lt;/p&gt;

&lt;p&gt;Checking the Samba Share: Our script first verifies the contents of a designated Samba share folder. This step is critical for determining whether there are any pending tasks or files to process. If the folder contains subfolders or files, it signals the need for the target machine's activation.&lt;/p&gt;

&lt;p&gt;Waking Up the Target: To wake up the target machine, we employ the Wake-on-LAN (WoL) mechanism. This involves sending a "magic packet" to the machine, causing it to power up from a sleep or powered-off state. The unique MAC address of the target machine is used to identify it within the local network.&lt;/p&gt;
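&lt;p&gt;For the curious, the "magic packet" is nothing exotic: six 0xFF bytes followed by the target's MAC address repeated 16 times, sent as a UDP broadcast. Here is a minimal Python sketch of what the wakeonlan tool does under the hood (the MAC is the same placeholder used later in the script; replace it with your machine's):&lt;/p&gt;

```python
import socket

def magic_packet(mac: str) -> bytes:
    # A magic packet is 6 bytes of 0xFF followed by the MAC repeated 16 times.
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    # Send the packet as a UDP broadcast; port 9 (discard) is conventional.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# Example: wake("AA:AA:AA:AA:AA:AA")
```

&lt;p&gt;The wakeonlan CLI used in the script below does exactly this, so the Python version is only for illustration.&lt;/p&gt;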

&lt;p&gt;Ensuring Connectivity: Once the target machine is awakened, we establish a connection check loop. We repeatedly ping the machine to confirm its availability. This process continues for a predefined duration, ensuring that we don't proceed until a successful connection is established.&lt;/p&gt;

&lt;p&gt;Executing Tasks Remotely: Upon successful connection, we leverage SSH to establish a secure connection to the target machine. From there, we execute a PowerShell script (typically used for tasks like video encoding). You can customize this step by replacing 'mainuser' with the correct username and '~\encodevideo.ps1' with the actual script path you want to run.&lt;/p&gt;

&lt;p&gt;Exit and Reporting: If the connection and task execution are successful, we exit the script gracefully. In the event of a failed connection within the designated time frame, the script provides a notification that the connection attempt has failed.&lt;/p&gt;

&lt;p&gt;This script streamlines the process of waking up a target machine and automating tasks on it remotely. It is particularly useful when you want to conserve power by keeping the target machine in a low-power state until specific tasks require execution. To use this script effectively, ensure you've installed the 'wakeonlan' and 'smbclient' packages on your Raspberry Pi.&lt;/p&gt;

&lt;p&gt;Feel free to customize the script to suit your specific requirements and automate a wide range of tasks on your target machine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

&lt;span class="c"&gt;# Set the target IP address of the machine you want to wake up.&lt;/span&gt;
&lt;span class="nv"&gt;target_ip&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"127.0.0.1"&lt;/span&gt;

&lt;span class="c"&gt;# Define the Samba share you want to check for files.&lt;/span&gt;
&lt;span class="nv"&gt;samba_share&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"//NAS/hdd/tmp"&lt;/span&gt;

&lt;span class="c"&gt;# List the contents of the Samba share folder for video backups.&lt;/span&gt;
&lt;span class="c"&gt;# Replace 'videos/backup' with the correct path if needed.&lt;/span&gt;
&lt;span class="nv"&gt;samba_contents&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;smbclient &lt;span class="nt"&gt;-A&lt;/span&gt; smb.conf &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"cd videos/backup; ls"&lt;/span&gt; //NAS/HDD | &lt;span class="nb"&gt;grep &lt;/span&gt;D | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-vF&lt;/span&gt; &lt;span class="se"&gt;\.&lt;/span&gt;| &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s1"&gt;'^$'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Contents of the Samba share folder &lt;/span&gt;&lt;span class="nv"&gt;$samba_share&lt;/span&gt;&lt;span class="s2"&gt;:"&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$samba_contents&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# Check if there are subfolders in the Samba share.&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$samba_contents&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Subfolders exist in the Samba share folder &lt;/span&gt;&lt;span class="nv"&gt;$samba_share&lt;/span&gt;&lt;span class="s2"&gt;."&lt;/span&gt;
&lt;span class="k"&gt;else
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"No subfolders found in the Samba share folder &lt;/span&gt;&lt;span class="nv"&gt;$samba_share&lt;/span&gt;&lt;span class="s2"&gt;. Exiting."&lt;/span&gt;
  &lt;span class="c"&gt;# Exit the script, as there's no need to wake up the PC.&lt;/span&gt;
   &lt;span class="nb"&gt;exit &lt;/span&gt;1
&lt;span class="k"&gt;fi&lt;/span&gt;

&lt;span class="c"&gt;# Wake up the target PC using its MAC address.&lt;/span&gt;
&lt;span class="c"&gt;# Replace 'AA:AA:AA:AA:AA:AA' with the actual MAC address.&lt;/span&gt;
/usr/bin/wakeonlan &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="nv"&gt;$target_ip&lt;/span&gt; AA:AA:AA:AA:AA:AA

&lt;span class="c"&gt;# Set a timeout for checking if the target PC is awake (5 minutes).&lt;/span&gt;
&lt;span class="nb"&gt;timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;$((&lt;/span&gt;SECONDS+300&lt;span class="k"&gt;))&lt;/span&gt;

&lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;$SECONDS&lt;/span&gt; &lt;span class="nt"&gt;-lt&lt;/span&gt; &lt;span class="nv"&gt;$timeout&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
    if &lt;/span&gt;ping &lt;span class="nt"&gt;-c&lt;/span&gt; 1 &lt;span class="nt"&gt;-W&lt;/span&gt; 1 &lt;span class="nv"&gt;$target_ip&lt;/span&gt; &amp;amp;&amp;gt; /dev/null&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
        &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Connection to &lt;/span&gt;&lt;span class="nv"&gt;$target_ip&lt;/span&gt;&lt;span class="s2"&gt; successful."&lt;/span&gt;
        &lt;span class="c"&gt;# Connect to the Windows PC over SSH and run a PowerShell script.&lt;/span&gt;
        &lt;span class="c"&gt;# Replace 'mainuser' with the correct username and '~\encodevideo.ps1' with the actual script path.&lt;/span&gt;
        ssh mainuser@&lt;span class="nv"&gt;$target_ip&lt;/span&gt; &lt;span class="s2"&gt;"pwsh ~&lt;/span&gt;&lt;span class="se"&gt;\e&lt;/span&gt;&lt;span class="s2"&gt;ncodevideo.ps1"&lt;/span&gt;
        &lt;span class="nb"&gt;exit &lt;/span&gt;0
    &lt;span class="k"&gt;else
        &lt;/span&gt;&lt;span class="nb"&gt;sleep &lt;/span&gt;20
    &lt;span class="k"&gt;fi
done

&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Connection to &lt;/span&gt;&lt;span class="nv"&gt;$target_ip&lt;/span&gt;&lt;span class="s2"&gt; failed after 5 minutes."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;br&gt;
My experience with Wake on LAN has been fascinating. The ability to put PCs to sleep or shut them down, only to wake them when needed, aligns with my vision of efficiency. I'd love to hear others' ideas or projects in this realm. In the back of my mind, I can envision small-scale data centers with empty Kubernetes clusters benefiting from this approach. Has anyone used K8s or something similar to auto-scale on-demand locally? Share your thoughts and experiences!&lt;/p&gt;

</description>
      <category>cicd</category>
      <category>automation</category>
    </item>
    <item>
      <title>Transitioning from Vue 3 to Svelte: Unleashing Performance with Vanilla JavaScript (my take)</title>
      <dc:creator>Luke Liukonen</dc:creator>
      <pubDate>Wed, 12 Jul 2023 03:58:49 +0000</pubDate>
      <link>https://dev.to/liukonen/transitioning-from-vue-3-to-svelte-unleashing-performance-with-vanilla-javascript-my-take-1aje</link>
      <guid>https://dev.to/liukonen/transitioning-from-vue-3-to-svelte-unleashing-performance-with-vanilla-javascript-my-take-1aje</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;br&gt;
In the ever-evolving landscape of web development, staying up to date with the latest frameworks and technologies is crucial. Vue.js has gained significant popularity due to its ease of use and powerful features, which is why a while back I moved from vanilla JavaScript with jQuery and a template-based system over to Vue 2 and eventually Vue 3. However, for those seeking even greater performance gains and a more native development experience, transitioning from Vue 3 to Svelte can be a game-changer. In my case, it took my Lighthouse performance score from 85 to 100. I want to be 100% honest, though: this may not be an exact apples-to-apples comparison, as I made some fairly large refactors along the way, such as not loading my video file in the background until the play button is pressed, and using session caching on API calls to reduce network IO. In this article, I'll explore the struggles I faced while converting my Vue 3 homepage, written using a CDN and a sprinkling of vanilla JavaScript, to a Svelte-based homepage. I'll try to uncover the challenges, benefits, and the joy of embracing vanilla JavaScript in the form of Svelte, while making comparisons along the way.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Appeal of Performance:&lt;/strong&gt;&lt;br&gt;
When it comes to web development, performance is a top priority. The Virtual DOM (VDOM) approach, employed by Vue.js and React, offers great flexibility and ease of development. However, it can come at the cost of performance. Svelte, on the other hand, takes a different approach by compiling components at build time, resulting in a smaller and more efficient bundle size. This shift from a VDOM-based framework to a more native one was a major appeal for me to undertake the transition. In comparison, Angular, while highly performant, brings a heavier framework with a steeper learning curve.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Conceptual Struggle:&lt;/strong&gt;&lt;br&gt;
Vue.js, React, and Angular each have their own unique paradigms, which can make transitioning between them challenging. Moving to Svelte required a conceptual shift, since it follows a different approach to building components. While Vue.js and React interpret your declarative components through a virtual DOM at runtime, Svelte compiles them into imperative code that manipulates the DOM directly. This shift challenged me to rethink my code structure and understand Svelte's component-based architecture. However, the effort was well worth it, as I discovered the true power of a lean, framework-agnostic approach.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Embracing Vanilla JavaScript:&lt;/strong&gt;&lt;br&gt;
One of the key aspects of Svelte that appealed to me was its use of vanilla JavaScript. Unlike Vue.js, React, and Angular, which introduce their own abstractions and syntax, Svelte leverages the full power of JavaScript without any additional layers. This made the transition easier for someone like me, who prefers working with vanilla JavaScript. I could utilize my existing knowledge and avoid the learning curve associated with a new framework-specific syntax. While React and Angular allow using JavaScript, they still come with additional concepts and patterns to learn.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Assembling the Webpage:&lt;/strong&gt;&lt;br&gt;
In Vue.js, React, and Angular, the process of assembling a webpage often involves defining components, writing templates, and wiring them together using directives or JSX. Svelte simplifies this process by allowing developers to work directly with HTML, CSS, and JavaScript, resulting in a more intuitive development experience. The shift from Vue's template-centric approach or JSX in React and Angular to a markup-driven approach in Svelte felt right to me. I found myself enjoying the freedom to write clean, concise code that was easily maintainable and highly performant.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Balancing Performance and Maintainability:&lt;/strong&gt;&lt;br&gt;
While I loved programming in a more vanilla JavaScript way, I initially felt pressure to use an assembled webpage due to the native performance gains and more maintainable code it offered. I was already using somewhat of a framework-driven approach using Vue3's application framework, but felt I was losing performance in having unminimized code, and a framework I'd have to load every time my site was visited. Svelte's compilation process eliminates the need for a runtime framework, resulting in smaller bundle sizes and faster load times. Additionally, the markup-driven approach and absence of a virtual DOM in Svelte simplified the codebase and made it more maintainable. It allowed me to focus on the core functionality of my application without the overhead of managing a complex framework or having to load multiple javascript files in order to have my site work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;br&gt;
Transitioning from Vue 3 to Svelte was not without its challenges, but it was a decision I made after thorough research and consideration. As someone who prefers programming in a more vanilla JavaScript way, I explored various frameworks like Next.js, Solid, and Qwik (all of which look like great options). However, after careful evaluation, I found that the transition to Svelte felt the most intuitive for converting my JavaScript/Vue 3 homepage.&lt;/p&gt;

&lt;p&gt;Svelte's blend of performance, simplicity, and familiarity resonated with me. The conceptual shift and the freedom to work directly with HTML, SASS (after importing the Sass library), and JavaScript made the development process more enjoyable. The native performance gains and the more maintainable codebase were significant factors that influenced my decision.&lt;/p&gt;

&lt;p&gt;While React and Angular have their strengths and are popular choices for many developers, Svelte provided the optimal balance for my needs. It empowered me to deliver a performant website without sacrificing the simplicity and familiarity of vanilla JavaScript. Ultimately, the choice of a framework is subjective, and it's essential to thoroughly research and evaluate various options before making a decision. For me, the transition from JavaScript/Vue 3 to Svelte felt like the natural and intuitive choice, combining the performance gains, maintainability, and the joy of programming in a more familiar way.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Footnote:&lt;/strong&gt;&lt;br&gt;
Yes... this article was "co-published" in the sense that I used an AI to help write it up. Sometimes asking for help expanding sentences and topics can lead to interesting phrasings and word choices you may never have thought of. Feel free to share your thoughts on Svelte, React, Angular, Vue, Solid, NextJS, or Qwik in the comments... please keep it kind, though. Coming from a background of mostly vanilla JS, plus the bit of Angular and Vue I've used in other projects, I feel like we are in the next round of the frontend framework wars; please don't make this thread a "battleground".&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Why ChatGPT is close but not all there yet</title>
      <dc:creator>Luke Liukonen</dc:creator>
      <pubDate>Fri, 30 Jun 2023 00:39:45 +0000</pubDate>
      <link>https://dev.to/liukonen/why-chatgpt-is-close-but-not-all-there-yet-11j4</link>
      <guid>https://dev.to/liukonen/why-chatgpt-is-close-but-not-all-there-yet-11j4</guid>
      <description>&lt;p&gt;This 4-hour journey started with a Youtube video. "Encrypt your DNS traffic using Pi-Hole and a pf-sense router" may not have been its exact title, but in essence, it's what the topic was. This YouTuber indicates DNS server lookups go over an unencrypted line between you and the DNS provider, and how some of this traffic, can be watched by your ISP, even if you are using a private DNS provider like Google, Cloudflare, or Quad9. My current setup wasn't far off. I have instances of Pi-Hole running on my network, and my Pi-Hole instances pointed straight to Quad9 and a secondary provider. So I had half of the hardware but was missing pf-sense. Watching the video through, I found it really wasn't pf-sense, but a product called unbound. &lt;/p&gt;

&lt;p&gt;Some backstories on what some of the technologies are for those interested.&lt;br&gt;
Pi-Hole acts as a passthrough DNS server for your network. A DNS server can be thought of as the yellow pages of the internet: everyone connects to one, and it translates "google.com" into the IP addresses that string represents. Pi-Hole does the lookups from the provider and filters out anything it knows is an advertisement.&lt;/p&gt;

&lt;p&gt;Unbound is an open-source, recursive DNS resolver that provides secure and efficient resolution of domain names. It is designed to enhance privacy and security by implementing features such as DNSSEC validation and DNS over TLS. Unbound is known for its high performance and flexibility, making it a popular choice for individuals, organizations, and even internet service providers looking for a reliable and customizable DNS resolver. It can be used as a standalone resolver or integrated into various systems and applications to improve DNS resolution capabilities. In this case, it acts as a DNS over TLS passthrough between myself and Quad9.&lt;/p&gt;

&lt;p&gt;Like the Pen-Pineapple-Apple-Pen guy... I knew it was only a matter of setting up a docker-compose file that would pair my Pi-Hole system with Unbound and create a decent encrypted-DNS-out-the-door setup.&lt;br&gt;
Essentially traffic would go&lt;br&gt;
PC -&amp;gt; Router -&amp;gt; PiHole -&amp;gt; unbound -&amp;gt; Quad9&lt;/p&gt;
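&lt;p&gt;That chain boils down to a two-service docker-compose file. A sketch of the shape, not my exact setup (the Unbound image and the PIHOLE_DNS_ environment variable name are assumptions to verify against the images you choose):&lt;/p&gt;

```yaml
services:
  unbound:
    image: mvance/unbound:latest      # assumption: any maintained Unbound image works here
    restart: unless-stopped

  pihole:
    image: pihole/pihole:latest
    restart: unless-stopped
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "80:80/tcp"                   # Pi-Hole admin UI
    environment:
      PIHOLE_DNS_: "unbound"          # upstream is the Unbound container, not a public resolver
    depends_on:
      - unbound
```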

&lt;p&gt;Awesome... I had a game plan... now to implement. I went straight to ChatGPT and asked it to set up the above for me... and within seconds I had a docker-compose file. The issue was, it directed me to a third-party project called Stubby.&lt;/p&gt;

&lt;p&gt;Now, I'm not going to bash Stubby or the developers who wrote it. ChatGPT recommended it because it would do what I needed without extra overhead; like using MS Word when all you need is Notepad, sometimes it's better to use the right tool for the job. So down a rabbit hole I went. First I found that the image ChatGPT sent me didn't exist... like, ever. I did some searches for any reference and came up blank. OK, I thought... I'll find a different image. So for about two hours I either tried to find a base image that looked reputable or attempted to build my own. This was for security, I reasoned; I wasn't going to build on a base image that was extremely outdated.&lt;/p&gt;

&lt;p&gt;After two hours of failed compiles and images that were either too old or too sketchy, I caved, rewatched the video to catch the name of the product they used, Unbound, and started searching Google myself. Well... to be honest, I asked ChatGPT, which hesitantly sent me to ANOTHER container that did not exist. &lt;/p&gt;

&lt;p&gt;Over the next two hours, I found a Docker container that fit the bill, with the only exception being that it used Cloudflare instead of Quad9. I still leveraged ChatGPT, but for more remedial tasks, and after four hours I got one of my two Pi-Holes running with full DNS over TLS encryption. In the meantime, I learned a lot about DNS over TLS, DNS over HTTPS, and even the new DNS over QUIC. I have to reference &lt;a href="https://github.com/andrey0001/unbound-tls"&gt;this &lt;/a&gt; as the repo that got me up and running. I took their docker-compose files and tweaked them slightly (pointing my custom unbound2.conf to /etc/unbound/unbound.conf) in order to point strictly to Quad9, not Google or Cloudflare, and to implement IPv6 DNS lookups as well. &lt;/p&gt;
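
&lt;p&gt;For anyone curious what those tweaks look like, here is a minimal sketch of an unbound.conf fragment that forwards everything to Quad9 over TLS, including the IPv6 resolvers. This is my illustration of the idea, not the exact file from the repo above, so treat the paths as a starting point.&lt;/p&gt;

```
# Sketch: forward all queries to Quad9 over TLS (port 853).
# The tls-cert-bundle path is typical for Debian-based images; adjust as needed.
server:
  tls-cert-bundle: /etc/ssl/certs/ca-certificates.crt
forward-zone:
  name: "."
  forward-tls-upstream: yes
  # IPv4 resolvers, authenticated against the dns.quad9.net certificate
  forward-addr: 9.9.9.9@853#dns.quad9.net
  forward-addr: 149.112.112.112@853#dns.quad9.net
  # IPv6 resolvers
  forward-addr: 2620:fe::fe@853#dns.quad9.net
  forward-addr: 2620:fe::9@853#dns.quad9.net
```

&lt;p&gt;The name: "." line is what makes this a catch-all forward zone, so nothing leaks to another upstream.&lt;/p&gt;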

&lt;p&gt;So, on to the moral of the story, and the title of the article. ChatGPT helps in a lot of ways and, for some tasks, can slay. However, it is still a long way off from replacing developers' jobs, and sometimes knowing how to Google your problems is still the better option.&lt;/p&gt;

</description>
      <category>chatgpt</category>
      <category>docker</category>
      <category>security</category>
    </item>
    <item>
      <title>Why I am taking a break</title>
      <dc:creator>Luke Liukonen</dc:creator>
      <pubDate>Sun, 02 May 2021 03:01:03 +0000</pubDate>
      <link>https://dev.to/liukonen/why-i-am-taking-a-break-20j6</link>
      <guid>https://dev.to/liukonen/why-i-am-taking-a-break-20j6</guid>
      <description>&lt;p&gt;It sucks. I've been enjoying my time writing my ideas on a community such as Dev.to, however, I feel like I am being forced to take a break. See, I just had to sign paperwork (as did everyone else) expressing ownership and IP. I do plan on staying at my current location (for now), however, this is definitely a strike against them, as the documentation indicated that they had the right to my IP, and any open source work I worked on would also need their blessing before being distributed. Now, I am wise enough to know not to work on any personal items on anything they provide. And when I use the equipment, it stays on its own subdomain and connects directly to its network. I won't mix personal with professional. What I do find discouraging, is that there was no verbiage in the contract indicating my own development outside of working hours. The wording around it seemed to indicate that they essentially own all development I do no matter what time of day. I signed off on it anyway assuming what most do... development time outside of work is not the same as at home, and if I have an idea that has nothing to do with the industry I work in, I'm safe. That said, Google, as with all things, put the fear of God into me as I read story after story of companies (not the one I am currently working at!) suing and going after IP of developers and engineers who worked on their side hustles. I'm sure if I talk with my direct manager, he would be more than ok with what I work on... that's not the issue... it's always when things change, and all the work you put into your own projects somehow gets swallowed up by some new manager who thinks that somehow your Red Square will magically fit the hole of the triangle and tries taking credit for your work (again, not saying where I am working... horror stories on Reddit and other forums are enough for me). 
So while I will be passively reading new and interesting things on this forum, my contributions are now going to have to be limited to reading and the comments section.  &lt;/p&gt;

&lt;p&gt;Where does that leave my Github?&lt;br&gt;
 That is still up for debate, I guess. All my work can be dated back to well before I started working at this new place, and all of it has licenses attached. For now, I feel I am safe leaving my work open and available; however, I refuse to let it become "abandonware", meaning all my work will receive any needed patches and bug fixes. That said, again, this stays off the company computers. It is hard to say where it goes, though... there might come a day when I am asked to remove it, or to somehow hand over the IP rights to code I wrote almost 20 years ago. &lt;/p&gt;

&lt;p&gt;How did we get to this point?&lt;br&gt;
I, unfortunately, have no idea how we let our IP, our thoughts, become the property of a company... especially during the time we are not working. Again... I can't stress this enough!, NotMyCompanyYet... however, there have been, and will be, cases where a company decides it wants ownership of something one of its employees does in their free time. Imagine if MITS hadn't allowed Bill Gates and Paul Allen to keep their partnership, Micro-Soft... or if HP hadn't let Steve Wozniak work on his one-off computers... where would we be today? At some point, we let the companies take control, and we look to them as the innovators, while in reality a company is only as strong as the employees who work there. And while I'm sure MITS and HP would have loved to have "thought of it back then"... the truth is, many of us, most of us, have fantastic ideas... and some of these ideas could be the next big thing! However, when put in a circumstance where our ideas are not our own, but a company's, many of us will just not pursue them! We know that they will distort and twist the original vision to fit the mold of what they want, removing its original purpose and leaving it an empty shell of something that could have been better. And with a trend of underappreciation and lack of recognition for one's work (again, not my company) (I worked enough crap jobs during my college days to know that many companies do NOT care about you, only what you can do for them), how many of us will have a brilliant idea, only to not run with it, holding everyone back in the process? &lt;/p&gt;

</description>
      <category>programming</category>
      <category>productivity</category>
      <category>privacy</category>
    </item>
    <item>
      <title>Developer Tools - Terminal</title>
      <dc:creator>Luke Liukonen</dc:creator>
      <pubDate>Sun, 25 Apr 2021 03:01:58 +0000</pubDate>
      <link>https://dev.to/liukonen/developer-tools-terminal-2l4f</link>
      <guid>https://dev.to/liukonen/developer-tools-terminal-2l4f</guid>
      <description>&lt;p&gt;I love my terminal setups. I don’t know what it is about it. It reminds me of my first PC, where I was running Windows 3.1 / MS-Dos 6, except, well, its better. It also reminds me of when I first tried out linux. And I don’t mean a bloated full blown distro… this was back when I was in highschool and I found a distro that could boot off a floppy disk. The amount of power you have in the terminal is outstanding, and yet, a lot of people shy away from it. I think seeing a blinking cursor and a c:\ can be a bit intimidating. That said, as a developer, I have a terminal window open much of the time. I find switching git repos and getting latest a lot smoother using the git command then most GUI based tools. That said I’ve been using some form of terminal / prompt for many moons, and even I get nervous using it. To make my life easier however, I have “prettyfied” my terminal, and will show you how you can too.&lt;br&gt;
So, I use the phrase terminal gets thrown around a LOT. I am not going to get into the nuances of terminal vs shell, since Scott Hanselman covers it very well here… &lt;a href="https://www.hanselman.com/blog/whats-the-difference-between-a-console-a-terminal-and-a-shell"&gt;https://www.hanselman.com/blog/whats-the-difference-between-a-console-a-terminal-and-a-shell&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  As a Windows Developer -
&lt;/h2&gt;

&lt;p&gt;First and foremost, I download Windows Terminal. Second is PowerShell Core. I run both the latest PowerShell Core and Windows PowerShell shells. Windows Terminal gives you the option of quickly switching to Command Prompt, PowerShell, PowerShell Core, Azure, or any installed WSL Linux distros on your machine. From there I install Nerd Fonts, Powerline, and Oh My Posh. Directions to install can be found here: &lt;a href="https://docs.microsoft.com/en-us/windows/terminal/tutorials/powerline-setup"&gt;https://docs.microsoft.com/en-us/windows/terminal/tutorials/powerline-setup&lt;/a&gt;&lt;br&gt;
This not only makes my terminal look super nice, it gives me quick access to git information such as the number of changes made, whether it’s in sync, and which branch I am on.&lt;/p&gt;

&lt;h2&gt;
  
  
  As a Mac or Linux Developer -
&lt;/h2&gt;

&lt;p&gt;Ok, so I haven’t tried this in a shell-only Linux distro, only in GNOME using xterm. For Mac, I have used Terminal and iTerm2, and between the two, I haven’t seen a reason not to use the Mac’s default Terminal. For Linux, I install the Zsh shell; the most recent versions of macOS already have Zsh installed. Once I have it installed, as above, I install a Nerd Font-based font for the GUI. This gives the nice fancy icons such as the Git icon and the Linux / Apple logo in the corner. Oh My Zsh (&lt;a href="https://ohmyz.sh"&gt;https://ohmyz.sh&lt;/a&gt;) is then installed to get the fancy tools for git. For a theme, I installed Powerlevel10k. This gives me a prompt very similar to Powerline on Windows. Install directions can be found at its website, &lt;a href="https://github.com/romkatv/powerlevel10k"&gt;https://github.com/romkatv/powerlevel10k&lt;/a&gt;. This gives me the same bells and whistles I use on my Windows PC.&lt;/p&gt;
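
&lt;p&gt;As a rough illustration of the steps above, here is what the one-time install commands and the relevant ~/.zshrc line look like. The URLs are the projects' documented install locations, but verify them against the sites linked above before running anything.&lt;/p&gt;

```
# One-time installs (sketch; check the official docs first)
sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
git clone --depth=1 https://github.com/romkatv/powerlevel10k.git \
  "${ZSH_CUSTOM:-$HOME/.oh-my-zsh/custom}/themes/powerlevel10k"

# Then in ~/.zshrc, select the theme and restart the shell:
# ZSH_THEME="powerlevel10k/powerlevel10k"
```

&lt;p&gt;On first launch, Powerlevel10k walks you through a configuration wizard, so most of the prettifying is interactive.&lt;/p&gt;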

</description>
    </item>
    <item>
      <title>How to use your Cellphone as a development machine*</title>
      <dc:creator>Luke Liukonen</dc:creator>
      <pubDate>Sun, 18 Apr 2021 00:37:12 +0000</pubDate>
      <link>https://dev.to/liukonen/how-to-use-your-cellphone-as-a-development-machine-3g9m</link>
      <guid>https://dev.to/liukonen/how-to-use-your-cellphone-as-a-development-machine-3g9m</guid>
      <description>&lt;p&gt;Yes... You see that right. After playing around with my Samsung Note 9, I have been able to create a full fledged machine and develop code on it... But it's not entirely running on my local machine, so there is the giant astric next to it... Here are the tools I am using&lt;br&gt;
 Hardware&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;My Samsung Galaxy Note&lt;/li&gt;
&lt;li&gt;A Raspberry Pi or other machine to connect to. You could even set up a Docker instance using AWS or Azure&lt;/li&gt;
&lt;li&gt;A multiport USB Type-C adapter, to hook up a keyboard, mouse, and monitor&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For software&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;DeX. Since I am in the Samsung ecosystem, it gives me a fantastic desktop environment&lt;/li&gt;
&lt;li&gt;A browser&lt;/li&gt;
&lt;li&gt;(optional) Termius. I was dumbfounded when I fired up an instance against my local machine and saw that it worked locally (and could run Linux commands directly against my phone), but in this setup it is used to SSH into your remote machine for installation and maintenance&lt;/li&gt;
&lt;li&gt;Code Server. I am running this in a container on my remote machine; essentially it is an instance of VS Code in a web browser. This is my remote IDE for development&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is a proof of concept that I confirmed works. I did run into some trouble with not having Node installed, but there is an accessible terminal in Code Server to install those dependencies, as it's built on Debian. Overall I am fairly happy with it. Let me know if anyone is interested in the setup / config directions.&lt;/p&gt;
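
&lt;p&gt;If anyone wants a head start, here is a hypothetical sketch of running Code Server in a container on the remote box. The image name, port, password, and volume path are illustrative examples, not my exact setup, so swap them for your own.&lt;/p&gt;

```
# Run code-server on the Raspberry Pi (or any Docker host) - sketch only
docker run -d --name code-server \
  -p 8443:8443 \
  -e PASSWORD=change-me \
  -v "$HOME/projects:/config/workspace" \
  lscr.io/linuxserver/code-server:latest
# Then browse to http://<pi-address>:8443 from the phone's browser in DeX.
```

&lt;p&gt;From there, the built-in terminal inside Code Server is where you would apt-get or install Node and any other dependencies.&lt;/p&gt;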

</description>
    </item>
    <item>
      <title>But do you Terminal?</title>
      <dc:creator>Luke Liukonen</dc:creator>
      <pubDate>Sat, 10 Apr 2021 23:52:18 +0000</pubDate>
      <link>https://dev.to/liukonen/but-do-you-terminal-2d1e</link>
      <guid>https://dev.to/liukonen/but-do-you-terminal-2d1e</guid>
      <description>&lt;p&gt;With my new job comes a new PC... well, a new machine. I have run Windows for so long that going to a mac makes me feel like a new user all over again, and not in a good way. Goodbye is the print screen key, Ctrl C, Ctrl V, (I found how to bring that back by remapping the command key to ctrl) as well as Windows Terminal. I have been a HUGE fan of Windows Terminal and OH-MY-POSH and used them on a daily basis. If I need a Powershell, Command prompt or Ubuntu/Debian shell, it was a click away! That said, I don't know how many devs even touch the command line anymore. I'd like to think it's either still widely used or even making a coming back, especially with Git, Node, DotNet, and other cli tools, but I was surprised to see how many people use 3rd party tools or plugins! Ok, so VS code is fantastic with its built-in Git process... but at some point, I would imagine devs have to use shell at some point. Mac has its terminal, but, wow is it bare bones! Luckily, I know that Macs use ZSH, and Oh my zsh did the trick. Well, "Oh my ZSH" and Powerlevel 10k (along with a Nerdest font). There are some glitches when resizing the window, but does it feel good seeing git information in the active directory I am working in. &lt;/p&gt;

</description>
    </item>
    <item>
      <title>Handling outages gracefully</title>
      <dc:creator>Luke Liukonen</dc:creator>
      <pubDate>Sun, 04 Apr 2021 03:42:47 +0000</pubDate>
      <link>https://dev.to/liukonen/handling-outages-gracefully-3604</link>
      <guid>https://dev.to/liukonen/handling-outages-gracefully-3604</guid>
      <description>&lt;p&gt;Lately, my Spectrum internet has been on the fritz. I really can't tell why either. It will just cut out and cut back in again. I've seen this occurring almost like clockwork when I was working from home, and expect to see more of it in the upcoming future, as I'll be working from home again. It does have me thinking though about how we as developers have to handle the rare case that the user can't connect. &lt;br&gt;
Here are some ideas on what we can do as developers to help in the event that something goes down....&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Service workers. Writing progressive web apps, the service worker has been both a blessing and a nightmare. If configured improperly, expect none of your JS files to ever be upgraded, or for it to do nothing for you. That said, the ideal, in my world at least, is to use the service worker to cache an instance of my JavaScript, but always try for the latest. More info about service workers can be found &lt;a href="https://developers.google.com/web/fundamentals/primers/service-workers"&gt;here&lt;/a&gt;. &lt;/li&gt;
&lt;li&gt;Data caching. This can be client or server-side. Most data needs to come from somewhere. Many times it's either a SQL server or Web API. I am a huge fan of implementing caches on frequently called data. Especially on calls that will always return the same data. Implementing caches not only reduces the potential of a bad user experience in the event of a network outage but in many cases, can speed up the response time of your site.&lt;/li&gt;
&lt;li&gt;Use a CDN. CDNs are like giant global internet caches. They help with response time by serving up data closer to the user. As well, many of them cost little or nothing to use. There can be downsides, like when a CDN acts up, but all in all, the pros of using one outweigh the cons.&lt;/li&gt;
&lt;li&gt;Have a backup box. In the event of an outage, either have a fallback / secondary box to hit, or use something like a load balancer if one machine acts up. In the case of my chatbot, if my box goes down, the code SHOULD fall back to Heroku (that said, there may be a bug or two in my logic on that... something I need to investigate)&lt;/li&gt;
&lt;li&gt;Investigate based on your platform. Going to last year's virtual Mongo DB conference, I found they were putting a lot of time and effort into "local Mode", where in the event you can't connect to the DB, it keeps it local until you can. I'm sure other platforms either have their own or similar concepts you can use to help with this.
&lt;/li&gt;
&lt;/ol&gt;
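
&lt;p&gt;To make the data-caching idea (point 2) concrete, here is a small Node.js sketch of a wrapper that serves repeat calls from memory within a TTL window and falls back to a stale copy when the source is down. The function names are mine, just for illustration.&lt;/p&gt;

```javascript
// Wrap any async data source (SQL call, Web API fetch, etc.) in a TTL cache.
function cached(fetcher, ttlMs) {
  const store = new Map(); // key -> { value, expires }
  return async function (key) {
    const hit = store.get(key);
    const now = Date.now();
    if (hit && now < hit.expires) return hit.value; // fresh cache hit
    try {
      const value = await fetcher(key);
      store.set(key, { value, expires: now + ttlMs });
      return value;
    } catch (err) {
      if (hit) return hit.value; // outage: serve the stale copy instead of failing
      throw err; // nothing cached to fall back on
    }
  };
}
```

&lt;p&gt;The nice side effect, as mentioned above, is that the cache hit path also makes the happy case faster, not just the outage case.&lt;/p&gt;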

&lt;p&gt;I'm keeping this week's post a bit light, with the holiday going on (Happy Easter). One last note... keep in mind that while we live in a connected world, there are still dead zones. Let me know what you think, or if you have any additional recommendations on the topic.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How I made my API more usable</title>
      <dc:creator>Luke Liukonen</dc:creator>
      <pubDate>Sun, 28 Mar 2021 01:36:40 +0000</pubDate>
      <link>https://dev.to/liukonen/how-i-made-my-api-more-usable-4pkc</link>
      <guid>https://dev.to/liukonen/how-i-made-my-api-more-usable-4pkc</guid>
      <description>&lt;p&gt;I enjoy writing API's. I think it is fascinating how we have a somewhat universal platform of letting computers talk with each other. That said, over the years, APIs have become better and better at giving end-users the ability to input and output records. When I first left college, web services was going to be the greatest thing ever. You had with many of these services, a universal data contract, known as SOAP, that could be used to easily access and pull in the API's contents. After web services, WCF services became more widespread. I come primarily from a Microsoft environment, so the upgrade from web services to WCF was fairly easy since Visual Studio had a lot of the items baked in. But as that was going on, Rest and JSON-based services became more and more widespread. And while I played with these services, I never really dove in unless I was consuming one. &lt;/p&gt;

&lt;p&gt;That said, being in the Microsoft world, I always thought of API calls as simple client-based calls you'd have to write some special one-off code for. Typically, I'd have some simple .NET web client code in a base or helper class call the JSON service, and have either a custom data contract or a wrapper consume and parse the object into the data I needed. Given my experience with WCF and web services, I really felt that calling REST services was almost like going backward in time. VS made it easy to just plop in a WSDL, and it would do the heavy lifting of generating the client, objects, and code I needed.&lt;/p&gt;

&lt;p&gt;So, during a brief time off due to Covid in 2020, I decided to migrate some code and logic I wrote years ago into REST-based services. It started with a chatbot. I wanted to migrate the UI to a web-based solution and found RiveScript to be an easy platform to migrate my old AIML solution over to. That said, it did come with a large headache. Every time you called the page, the client would have to download a 2 MB file from the internet onto their device. I worked with caching on the client side, but all in all, I did not find this to be a great solution. So, after some playing around, I settled on building a Node.js server that the UI could call. This now meant I had a full-stack application, as I had a proper backend to manage. As I am fairly new(ish) to the ecosystem of hosting and writing REST servers, I decided that if a client called my box and didn't pass params, I should give notice on how to properly call my API. I've seen other APIs do this in the past, so I modeled it around their "UI".&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ahb1W_iM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yvhoghcsf9rve0cnrje1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ahb1W_iM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yvhoghcsf9rve0cnrje1.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I thought to myself, this is perfect. I am giving users a proper way to call my API if they want to play with it on their own. And as I learned from the other APIs I modeled after, it does work. That said, I really wasn't happy with it. You see, I look at other sites that have APIs, and I see these really nice interfaces that give you the ability to test out your own calls. It's something I wanted but didn't want to hand-code. Then, after a presentation on Swagger I saw recently, I found out how far behind I really was. &lt;/p&gt;

&lt;p&gt;Swagger was introduced to the world back in 2011. It was picked up by SmartBear, and the Linux Foundation adopted it as a standard in 2015. By 2017, the biggest competitor to Swagger had jumped ship to the Swagger standard by offering a tool that could convert RAML to Swagger docs. All of this was occurring, and I was completely oblivious to it. So, recently I decided to upgrade my API. As the API is ultra lightweight and running on a Raspberry Pi, I decided to serve a hard-coded Swagger JSON document instead of auto-generating one on the fly. After a bit of trial and error in the &lt;a href="https://editor.swagger.io/"&gt;Swagger Editor&lt;/a&gt;, I hand-wrote a YAML file that seemed to work perfectly with my API. Since my project is in Node.js, I decided to use &lt;a href="https://www.npmjs.com/package/swagger-ui-express"&gt;Swagger UI Express&lt;/a&gt; for my default look and feel.&lt;/p&gt;
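
&lt;p&gt;For a sense of what hand-writing one of these looks like, here is a minimal OpenAPI fragment in the same spirit. The path and parameter names are made up for illustration; they are not my bot's real contract.&lt;/p&gt;

```yaml
openapi: "3.0.0"
info:
  title: Chatbot API (example)
  version: "1.0.0"
  license:
    name: MIT
paths:
  /api:
    get:
      summary: Send a message to the bot
      parameters:
        - name: text          # hypothetical query parameter
          in: query
          required: true
          schema:
            type: string
      responses:
        "200":
          description: The bot's reply as JSON
```

&lt;p&gt;Pasting something like this into the Swagger Editor gives you live validation on the left and the rendered docs on the right, which is how I caught most of my mistakes.&lt;/p&gt;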

&lt;p&gt;I did experiment with &lt;a href="https://www.npmjs.com/package/redoc-express"&gt;Redoc-Express&lt;/a&gt;; however, I was not a fan of it. See, Swagger offered me something I was seeing on other sites that I thought was absolutely brilliant: an easy way to execute the API right from the website itself. After some playing around, I found a few interesting things I needed to do in order to get everything working. For starters, I decided it would be easiest to leave the URL structure alone, so if you hit my base URL, you are redirected to a subdirectory URL that hosts the Swagger Express platform. I was initially trying to run it off the base; however, I was running into small issues here and there and settled on a sub-hosted URL instead. I also wasn't a fan of the giant Swagger UI logo at the top of the page. That said, I did find some forums that showed you could inject CSS attributes into the page, and after 5 minutes on the Google, I removed much of the extra bloat that was on the site.&lt;/p&gt;

&lt;p&gt;The Swagger doc also provided me a few more things that the traditional "URL" I had spun up did not. I was now able to show that my code is under an MIT license. As well, since others could use the API, I decided to spin up a terms and conditions doc. I am a developer, not a lawyer! I do not want others to use my chatbot to catfish poor souls on dating sites, or to use it in a way other than what it was intended for: a small rough draft of a simple chatbot hosted on some developer's personal portfolio, on a free Heroku account, as well as on some Raspberry Pi out in the cloud somewhere. So, after 20 minutes of googling, I have a basic terms page as well. &lt;/p&gt;

&lt;p&gt;So, after all this extra effort, I think the outcome of it is a lot better than its original variation. &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tZ_GeIZ7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/59mhc88hvraalzr4ibjx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tZ_GeIZ7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/59mhc88hvraalzr4ibjx.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My new page has links on the top, as well as the descriptive text of what the API is. From that point on down, there is a description of what the params do, as well as the ability to actually make calls to the API, and see the results of the API. The interface to me at least is 100x better than a simple API response object telling the user that you called it wrong, and I learned a lot throughout the process. &lt;/p&gt;

&lt;p&gt;If you want to try it yourself, the website is &lt;a href="https://bot.liukonen.dev"&gt;https://bot.liukonen.dev&lt;/a&gt; or the original UI to the bot can be found at &lt;a href="https://chat.liukonen.dev"&gt;https://chat.liukonen.dev&lt;/a&gt;. Please let me know what you think of it, as well as any suggestions, recommendations, or thoughts.  &lt;/p&gt;

</description>
      <category>node</category>
      <category>webdev</category>
      <category>todayilearned</category>
    </item>
  </channel>
</rss>
