<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Odysseas Lamtzidis</title>
    <description>The latest articles on DEV Community by Odysseas Lamtzidis (@odyslam).</description>
    <link>https://dev.to/odyslam</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F374112%2Fb83f1c42-0e52-4856-af56-40191adf5887.jpg</url>
      <title>DEV Community: Odysseas Lamtzidis</title>
      <link>https://dev.to/odyslam</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/odyslam"/>
    <language>en</language>
    <item>
      <title>How to monitor your Ethereum Node in under 5m</title>
      <dc:creator>Odysseas Lamtzidis</dc:creator>
      <pubDate>Mon, 02 Aug 2021 15:29:33 +0000</pubDate>
      <link>https://dev.to/odyslam/how-to-monitor-ethereum-node-in-under-5m-3n</link>
      <guid>https://dev.to/odyslam/how-to-monitor-ethereum-node-in-under-5m-3n</guid>
      <description>&lt;p&gt;This piece is a blog post version of a workshop I gave at &lt;a href="https://ethcc.io/" rel="noopener noreferrer"&gt;EthCC&lt;/a&gt; about monitoring an Ethereum Node using Netdata.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disclaimer:&lt;/strong&gt; Although we use Netdata, this guide is generic. We talk about metrics that can be surfaced by many other tools, such as Prometheus/Grafana or Datadog.&lt;/p&gt;

&lt;p&gt;The contents are as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Introduction to Ethereum Nodes&lt;/li&gt;
&lt;li&gt;What is Netdata&lt;/li&gt;
&lt;li&gt;How to monitor a system that runs go-ethereum (Geth)&lt;/li&gt;
&lt;li&gt;How to monitor go-ethereum (Geth)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Ethereum Nodes
&lt;/h2&gt;

&lt;p&gt;Running a node is no small feat, as it requires ever-increasing resources to store the state of the blockchain and to process new transactions quickly.&lt;/p&gt;

&lt;p&gt;Nodes are useful for both those who develop on Ethereum (dapp developers) and users.&lt;/p&gt;

&lt;p&gt;For users, running a node is crucial because it lets them verify the state of the chain independently. Moreover, with their own node they can both send transactions and read the current state of the blockchain more efficiently. This matters because a range of activities requires the lowest possible latency (e.g. MEV). &lt;/p&gt;

&lt;p&gt;For developers, it's important to run a node so that they can easily inspect the state of the blockchain. &lt;/p&gt;

&lt;p&gt;Given this reality, services like &lt;a href="https://infura.io/" rel="noopener noreferrer"&gt;Infura&lt;/a&gt; or &lt;a href="https://www.alchemy.com/" rel="noopener noreferrer"&gt;Alchemy&lt;/a&gt; have been created to offer "Ethereum Node-as-a-Service", so that a developer or user can use their Ethereum Node to read the chain or send transactions. &lt;/p&gt;

&lt;p&gt;This is not ideal, as users and developers need both the speed of their own node and independence from an external actor who can go offline at any time. &lt;/p&gt;

&lt;h2&gt;
  
  
  Running the Ethereum Node
&lt;/h2&gt;

&lt;p&gt;Thus, running an Ethereum Node is not as fringe an activity as an outsider might expect, but rather a common practice for experienced users and developers. On top of that, running an Ethereum node is one of the core principles of decentralisation. If it becomes very hard or complex, the system becomes increasingly centralised, as fewer and fewer parties will have the capital and expertise required to run a node. &lt;/p&gt;

&lt;p&gt;Geth is the most widely used Ethereum node implementation, written in Go. &lt;/p&gt;

&lt;h2&gt;
  
  
  The Netdata Agent
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://github.com/netdata/netdata" rel="noopener noreferrer"&gt;Netdata Agent&lt;/a&gt; was released back in 2016 as an open-source project and since then it has gathered over 55K GitHub ✨.&lt;/p&gt;

&lt;p&gt;TL;DR of Netdata monitoring:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You run a single command to install the agent.&lt;/li&gt;
&lt;li&gt;Netdata will auto-configure itself and detect &lt;strong&gt;all&lt;/strong&gt; available data sources. It will also create sane default alarms for them. &lt;/li&gt;
&lt;li&gt;It will gather every metric, every second. &lt;/li&gt;
&lt;li&gt;It will produce, instantly, stunning charts about those metrics.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In other words, you don't have to set up:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A dashboard agent&lt;/li&gt;
&lt;li&gt;A time series database (TSDB)&lt;/li&gt;
&lt;li&gt;An alert system&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Netdata is all three.&lt;/strong&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  How to monitor your Ethereum Node
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FDtuQ5Y7.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FDtuQ5Y7.gif"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;EthCC was a blast, not only for the energy of the ecosystem, but also for how our workshop was received by node operators from dozens of projects. &lt;/p&gt;

&lt;p&gt;I was stunned to see how many professionals are struggling with monitoring their infrastructure, often using some outdated Grafana Dashboard or the default monitoring system of a cloud provider.&lt;/p&gt;

&lt;p&gt;Let's get right into it. &lt;/p&gt;

&lt;h3&gt;
  
  
  Preparation
&lt;/h3&gt;

&lt;p&gt;The first order of business is to install Netdata on a machine that is already running Geth. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Make sure you run Geth with the &lt;code&gt;--metrics&lt;/code&gt; flag. Netdata expects the metrics server to listen on port &lt;code&gt;6060&lt;/code&gt; and to be accessible via &lt;code&gt;localhost&lt;/code&gt;. If you have modified that, we will need to make a configuration change in the collector to point it to your custom port. &lt;/p&gt;
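&lt;p&gt;As a quick sketch, these are the flags involved (flag names current as of recent Geth versions; double-check with &lt;code&gt;geth --help&lt;/code&gt;, and treat the example launch line as illustrative):&lt;/p&gt;

```shell
# Metrics-related flags for Geth; --metrics.addr/--metrics.port are only
# needed if you deviate from the localhost:6060 default that Netdata expects.
GETH_METRICS_FLAGS="--metrics --metrics.addr 127.0.0.1 --metrics.port 6060"
echo "$GETH_METRICS_FLAGS"

# Launch Geth with your usual flags plus the above, e.g.:
#   geth --syncmode snap $GETH_METRICS_FLAGS
# Then verify the metrics endpoint responds:
#   curl -s http://localhost:6060/debug/metrics | head
```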

&lt;p&gt;To install Netdata, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;bash &amp;lt;&lt;span class="o"&gt;(&lt;/span&gt;curl &lt;span class="nt"&gt;-Ss&lt;/span&gt; https://my-netdata.io/kickstart.sh&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Visit the Netdata dashboard at &lt;code&gt;&amp;lt;node_ip&amp;gt;:19999&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;For illustration purposes, we run a public test Geth server at &lt;a href="http://163.172.166.66:19999" rel="noopener noreferrer"&gt;http://163.172.166.66:19999&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Action plan
&lt;/h3&gt;

&lt;p&gt;We will not cover every single metric that is surfaced by Netdata. Instead, we will focus on a few important ones. &lt;/p&gt;

&lt;p&gt;For these metrics, we will:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Talk about what the particular system metric means in general.&lt;/li&gt;
&lt;li&gt;Discuss how to read these system metrics, no matter the workload.&lt;/li&gt;
&lt;li&gt;Analyze how Geth affects these system metrics.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  How to read the dashboard
&lt;/h3&gt;

&lt;p&gt;The dashboard is organized into 4 main areas:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The top utility bar. Particularly important to access the time picker and running alerts. &lt;/li&gt;
&lt;li&gt;The main section where the charts are displayed. &lt;/li&gt;
&lt;li&gt;The right menu, which organizes our charts into sections and submenus. For example, the system overview section has many different submenus (e.g. cpu), and each submenu has different charts.&lt;/li&gt;
&lt;li&gt;The left menu which concerns &lt;a href="https://app.netdata.cloud" rel="noopener noreferrer"&gt;Netdata Cloud&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  System Overview section
&lt;/h2&gt;

&lt;p&gt;First, we take a look at the System Overview section. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FLESIu1d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FLESIu1d.png" alt="System overview screenshot"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://i.imgur.com/LESIu1d.png" rel="noopener noreferrer"&gt;full-resolution image&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Top-level Gauges
&lt;/h3&gt;

&lt;p&gt;The gauges give a nice overview of the whole system. During sync, we expect to see elevated &lt;code&gt;Disk Read/Write&lt;/code&gt; and &lt;code&gt;Net inbound/outbound&lt;/code&gt;. &lt;code&gt;CPU&lt;/code&gt; usage will be elevated only if there is heavy use of Geth's RPC server. &lt;/p&gt;
&lt;h3&gt;
  
  
  CPU utilization chart
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FUJaOOUX.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FUJaOOUX.png"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://i.imgur.com/UJaOOUX.png" rel="noopener noreferrer"&gt;full-resolution image&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IOwait dimension&lt;/strong&gt;&lt;br&gt;
It's the time the CPU spends waiting for an IO operation to complete. The CPU could be doing other work during that time, but it isn't.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do I read this?&lt;/strong&gt;&lt;br&gt;
High &lt;code&gt;iowait&lt;/code&gt; means that the system is IO constrained. Usually this points to the hard disk, but other hardware can be responsible as well. A consistently low value means that the CPU is being used efficiently. &lt;/p&gt;
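&lt;p&gt;If you want to sanity-check the chart by hand, the raw counters behind it live in &lt;code&gt;/proc/stat&lt;/code&gt; on Linux. A minimal sketch (the sample line below is synthetic):&lt;/p&gt;

```shell
# Compute the iowait share of a /proc/stat "cpu" line.
# Columns after "cpu": user nice system idle iowait irq softirq steal ...
iowait_pct() {
  echo "$1" | awk '{
    total = 0
    for (i = NF; i > 1; i--) total += $i
    printf "%.1f\n", ($6 / total) * 100   # $6 is the iowait column
  }'
}

# Synthetic sample; on a live system use: grep "^cpu " /proc/stat
iowait_pct "cpu 4000 100 1000 3000 1700 100 100 0 0 0"   # -> 17.0
```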

&lt;p&gt;&lt;strong&gt;softirq dimension&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It's the time spent in &lt;a href="https://en.wikipedia.org/wiki/Interrupt_handler#:~:text=Interrupt%20handlers%20are%20initiated%20by,is%20the%20hardware%20interrupt%20handler." rel="noopener noreferrer"&gt;interrupt handlers&lt;/a&gt; (software interrupts, or softirqs). For example, network code is very &lt;code&gt;softirq&lt;/code&gt; heavy, as the CPU spends time in &lt;code&gt;kernel&lt;/code&gt; mode to handle network packets.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do I read this?&lt;/strong&gt;&lt;br&gt;
It should be very low. Consistently high values mean that the system is not able to keep up with the load, most likely network traffic.&lt;/p&gt;
&lt;h3&gt;
  
  
  CPU Pressure Stall Information (PSI) chart
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FfTy21nB.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FfTy21nB.png"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://i.imgur.com/fTy21nB.png" rel="noopener noreferrer"&gt;full-resolution image&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the abstract, it's a measure of how much time is spent waiting for a resource to become available. Tasks are ready to run, but can't find an available CPU core. &lt;/p&gt;

&lt;p&gt;This is only available on Linux. FreeBSD and macOS don't support PSI, so you won't find this chart on those systems. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do I read this?&lt;/strong&gt;&lt;br&gt;
If you are not utilizing 100% of your CPU, this should be zero. Keep track of it for a couple of days to see the whole range of "expected" spikes, then set an alert above the highest spike observed under normal load. That way you will know when abnormal load appears.&lt;/p&gt;
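&lt;p&gt;On Linux, the same numbers are exposed in &lt;code&gt;/proc/pressure/cpu&lt;/code&gt;, which is handy for quick scripted checks. A sketch using a synthetic sample line:&lt;/p&gt;

```shell
# Extract the avg10 value (10-second average) from a PSI line such as:
#   some avg10=1.87 avg60=0.92 avg300=0.45 total=123456
psi_avg10() {
  echo "$1" | sed -n 's/.*avg10=\([0-9.]*\).*/\1/p'
}

psi_avg10 "some avg10=1.87 avg60=0.92 avg300=0.45 total=123456"   # -> 1.87

# On a live Linux system:
#   psi_avg10 "$(head -1 /proc/pressure/cpu)"
```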
&lt;h3&gt;
  
  
  CPU Load chart
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FAcEKIlP.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FAcEKIlP.png"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://i.imgur.com/AcEKIlP.png" rel="noopener noreferrer"&gt;full-resolution image&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's the running average of the number of processes waiting for resource availability. Historically, it has been the main measure of CPU performance issues.&lt;/p&gt;

&lt;p&gt;The difference with &lt;strong&gt;CPU PSI&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Load&lt;/strong&gt; measures how &lt;em&gt;many&lt;/em&gt; processes are waiting for resource availability, while PSI measures &lt;em&gt;how much time&lt;/em&gt; applications are waiting for resource availability.&lt;/p&gt;

&lt;p&gt;Generally speaking, we care more about &lt;code&gt;PSI&lt;/code&gt; than &lt;code&gt;Load&lt;/code&gt;. If we are going to use &lt;code&gt;Load&lt;/code&gt;, we should keep track of &lt;code&gt;load1&lt;/code&gt;, because by the time the longer running averages are high, it's already too late: the system is already throttled. &lt;/p&gt;

&lt;p&gt;A rule of thumb is to set an alarm at &lt;code&gt;8 (or 16) * number_of_cpu_cores&lt;/code&gt;. Note that this can vary greatly (even 4 times could be too high), and it's possible that by the time the alert is raised, you can't interact with the system due to the load. &lt;/p&gt;
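&lt;p&gt;As a tiny helper, the rule of thumb above can be computed like this (the multiplier is a judgment call, not a hard rule):&lt;/p&gt;

```shell
# Rule-of-thumb load1 alarm threshold: multiplier * number_of_cpu_cores.
load_alarm_threshold() {
  cores=$1
  multiplier=${2:-8}   # default to the conservative 8x
  echo $(( cores * multiplier ))
}

load_alarm_threshold 4      # -> 32
load_alarm_threshold 4 16   # -> 64
# On a live system: load_alarm_threshold "$(nproc)"
```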
&lt;h3&gt;
  
  
  How Geth affects the CPU charts
&lt;/h3&gt;

&lt;p&gt;Regarding the &lt;code&gt;CPU utilization chart&lt;/code&gt;, I see &lt;code&gt;iowait&lt;/code&gt; at about 17%. It's the first evidence that something is not right on my Geth server: either the network or the disks are throttling the system. Since &lt;code&gt;softirq&lt;/code&gt; is almost non-existent, the disk becomes even more suspicious.&lt;/p&gt;

&lt;p&gt;I see about 1-2% of &lt;code&gt;PSI&lt;/code&gt;. It should be zero, since the CPU sits at about 30% utilization and is not a bottleneck. Most probably, it means that Geth could be better optimized. &lt;/p&gt;

&lt;p&gt;As soon as I start spamming my Geth node with RPC requests, I see a considerable bump in both the &lt;code&gt;CPU utilization&lt;/code&gt; gauge and the &lt;code&gt;PSI&lt;/code&gt; chart. By stress-testing my node, I can set sensible alerts.&lt;/p&gt;

&lt;p&gt;In the following image, we can easily identify the time at which I started the &lt;code&gt;RPC request&lt;/code&gt; spam to my Geth node. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2Fyv4RkTB.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2Fyv4RkTB.png"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://i.imgur.com/yv4RkTB.png" rel="noopener noreferrer"&gt;full-resolution image&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Disk Charts
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2F6JbBcHi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2F6JbBcHi.png"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://i.imgur.com/6JbBcHi.png" rel="noopener noreferrer"&gt;full-resolution image&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Disk IO chart
&lt;/h3&gt;

&lt;p&gt;The first chart measures Disk IO. It's necessary to run disk benchmarks to find the true peak of your system and set the alerts accordingly. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do I read this?&lt;/strong&gt;&lt;br&gt;
First I run my benchmarks to understand the peak performance of the disks. If I observe that during normal load the disk consistently reaches near peak performance, then my workload is probably disk-IO bound and I need to upgrade my disk. &lt;/p&gt;
&lt;h3&gt;
  
  
  PageIO chart
&lt;/h3&gt;

&lt;p&gt;It measures the data that is paged in from and out to the disk. Usually, it's close to Disk IO. &lt;/p&gt;
&lt;h3&gt;
  
  
  Disk PSI chart
&lt;/h3&gt;

&lt;p&gt;Conceptually, it's the same as CPU PSI. The amount of time that processes are waiting in order to be able to perform DiskIO.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do I read this?&lt;/strong&gt;&lt;br&gt;
The chart should be zero most of the time. If it is consistently non-zero, then the disk is a limiting factor on the system and we need to upgrade it. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important Note&lt;/strong&gt;&lt;br&gt;
Viewing your Netdata dashboard is itself fairly heavy in Disk IO, as data is streamed directly from the system to your browser. That means you should read this chart for a period during which you weren't viewing the dashboard. &lt;/p&gt;
&lt;h3&gt;
  
  
  How Geth affects the Disk charts
&lt;/h3&gt;

&lt;p&gt;This is the clearest indication that something is off with my disks. &lt;/p&gt;

&lt;p&gt;The &lt;code&gt;Disk PSI&lt;/code&gt; is about 30%, which means that for about 1/3 of the time, some tasks are waiting for Disk resources to be available. That means that my Disks are simply not fast enough. &lt;/p&gt;

&lt;p&gt;To verify the correlation with Geth, I can simply stop the process and see the PSI decreasing considerably. &lt;/p&gt;
&lt;h3&gt;
  
  
  RAM charts
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FbGGAHQF.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FbGGAHQF.png"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://i.imgur.com/bGGAHQF.png" rel="noopener noreferrer"&gt;full-resolution image&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  RAM utilization chart
&lt;/h3&gt;

&lt;p&gt;It's the absolute physical memory in use.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do I read this?&lt;/strong&gt;&lt;br&gt;
Ideally, I don't want to see anything listed as &lt;code&gt;free&lt;/code&gt;. If I have a lot of free memory, that means I have more memory than I need. &lt;code&gt;used&lt;/code&gt; should sit a bit above &lt;code&gt;50%&lt;/code&gt; and shouldn't be much larger than &lt;code&gt;cached&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;cached&lt;/code&gt; is memory used by the kernel to cache disk files for faster access. It is not counted as &lt;code&gt;used&lt;/code&gt;, as the kernel will release that memory if a process requires it.&lt;/p&gt;

&lt;p&gt;If &lt;code&gt;buffers&lt;/code&gt; are very high, that means that the system is under heavy network load; &lt;code&gt;buffers&lt;/code&gt; are used to store network packets until the CPU processes them. Even on a large server, &lt;code&gt;buffers&lt;/code&gt; should be no more than a couple of hundred MBs. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;&lt;br&gt;
A system where the main application is taking care of memory caching (instead of the system) could have a lot of &lt;code&gt;used&lt;/code&gt; and almost no &lt;code&gt;cached&lt;/code&gt;. This is very rare and probably does not concern most of us. &lt;/p&gt;
&lt;h3&gt;
  
  
  RAM PSI chart
&lt;/h3&gt;

&lt;p&gt;Conceptually, this is the same metric as CPU PSI. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do I read this?&lt;/strong&gt;&lt;br&gt;
If RAM PSI is consistently above zero, then the speed of my memory modules is a limiting factor.  I need to get faster (not bigger) RAM. &lt;/p&gt;
&lt;h3&gt;
  
  
  RAM swap usage chart
&lt;/h3&gt;

&lt;p&gt;When the system can't find the memory it needs, it creates files on the hard disk and uses them as a sort of &lt;em&gt;very&lt;/em&gt; slow memory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note 1:&lt;/strong&gt;&lt;br&gt;
It's worth noting that macOS, Linux, and FreeBSD have an unintuitive use of swap. They remove swap files when no running process references them, &lt;strong&gt;not&lt;/strong&gt; when memory becomes free. That means a long-running process will continue to use swap files even if there is available memory.&lt;/p&gt;

&lt;p&gt;To solve this, we should either reboot the system, restart the processes, or disable and re-enable swap. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note 2:&lt;/strong&gt;&lt;br&gt;
If you don't see the swap chart, that means that the machine has no swap enabled. Netdata will not show charts that have zero values.&lt;/p&gt;
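&lt;p&gt;If you're unsure whether the chart is missing because swap is off, &lt;code&gt;/proc/swaps&lt;/code&gt; settles it. A sketch that counts active swap devices, with sample content inline:&lt;/p&gt;

```shell
# Count active swap devices from /proc/swaps content (first line is a header).
swap_device_count() {
  printf '%s\n' "$1" | awk 'NR > 1 { if (NF > 0) n++ } END { print n + 0 }'
}

no_swap="Filename Type Size Used Priority"
with_swap="Filename Type Size Used Priority
/swapfile file 2097148 0 -2"

swap_device_count "$no_swap"     # -> 0
swap_device_count "$with_swap"   # -> 1
# On a live system: swap_device_count "$(cat /proc/swaps)"
```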
&lt;h3&gt;
  
  
  How Geth affects the RAM charts
&lt;/h3&gt;

&lt;p&gt;Geth is really gentle on RAM, consuming roughly the amount we define via its command-line arguments. Since there is no swap usage, we can safely assume that we don't need more RAM with the current configuration. &lt;/p&gt;

&lt;p&gt;Moreover, since the &lt;code&gt;RAM PSI&lt;/code&gt; is about 3%, I can safely assume that my RAM is fast enough for this workload. &lt;/p&gt;
&lt;h2&gt;
  
  
  Network charts
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2Faq3WrSs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2Faq3WrSs.png"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://i.imgur.com/aq3WrSs.png" rel="noopener noreferrer"&gt;full-resolution image&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Total Bandwidth chart
&lt;/h3&gt;

&lt;p&gt;It's the total actual data that is being sent and received by the system. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do I read this?&lt;/strong&gt;&lt;br&gt;
You need a baseline to read this. If you have consistently more traffic than expected, then something is off. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important Note&lt;/strong&gt;&lt;br&gt;
Viewing your Netdata dashboard is itself fairly heavy in network usage, as data is streamed directly from the system to your browser. That means you should read this chart for a period during which you weren't viewing the dashboard. &lt;/p&gt;
&lt;h3&gt;
  
  
  How Geth affects the Network charts
&lt;/h3&gt;

&lt;p&gt;We should care about these charts only if they go out of the ordinary (e.g. a DDoS attack). Observe the baseline of the system (mine is about 1-2 megabit/s) and set the alerts above the highest spike. &lt;/p&gt;
&lt;h3&gt;
  
  
  Softnet chart
&lt;/h3&gt;

&lt;p&gt;It counts network receive interrupts processed by the kernel. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do I read this?&lt;/strong&gt;&lt;br&gt;
We mainly care about 2 dimensions that should be zero most of the time. If you can't see them, that's a good thing, as Netdata will not display dimensions that are 0. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;dropped&lt;/code&gt; should always be zero; if it is non-zero, your system is having serious issues keeping up with network traffic.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;squeezed&lt;/code&gt; should be zero, or no more than a single digit. If it's in the double digits or higher, the system is having trouble keeping up, but not to the point of losing packets.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Personal computers that have been converted to homelab servers usually have non-zero dimensions, as they are not designed to handle a lot of network bandwidth. &lt;/p&gt;
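&lt;p&gt;The underlying counters live in &lt;code&gt;/proc/net/softnet_stat&lt;/code&gt; on Linux, one hexadecimal row per CPU, with the dropped and squeezed counters in the second and third columns. A sketch that sums them across CPUs, using synthetic sample rows:&lt;/p&gt;

```shell
# Sum the dropped (column 2) and squeezed (column 3) counters
# across all CPU rows of /proc/net/softnet_stat. Values are hex.
softnet_totals() {
  printf '%s\n' "$1" | {
    dropped=0; squeezed=0
    while read -r _ d s _; do
      if [ -n "$d" ]; then dropped=$(( dropped + 0x$d )); fi
      if [ -n "$s" ]; then squeezed=$(( squeezed + 0x$s )); fi
    done
    echo "$dropped $squeezed"
  }
}

sample="0000272d 00000000 00000001 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
000034d1 00000000 00000002 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000"
softnet_totals "$sample"   # -> 0 3
# On a live system: softnet_totals "$(cat /proc/net/softnet_stat)"
```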
&lt;h3&gt;
  
  
  How Geth affects the Softnet chart
&lt;/h3&gt;

&lt;p&gt;In reality, it is the other way around. If we see a high number of &lt;code&gt;dropped&lt;/code&gt; or &lt;code&gt;squeezed&lt;/code&gt; packets, that could explain strange Geth behavior. It simply is not receiving packets that it should!&lt;/p&gt;
&lt;h2&gt;
  
  
  Disks section
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FukYMri7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FukYMri7.png"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://i.imgur.com/ukYMri7.png" rel="noopener noreferrer"&gt;full-resolution image&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The disk section is organized into submenus, one for each Disk. &lt;/p&gt;

&lt;p&gt;In my case, I use a block-volume SSD, called &lt;code&gt;sda&lt;/code&gt; and mounted on &lt;code&gt;/mnt/block-volume&lt;/code&gt;. &lt;/p&gt;
&lt;h3&gt;
  
  
  Disk Operations chart
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FPoSQPQE.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FPoSQPQE.png"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://i.imgur.com/PoSQPQE.png" rel="noopener noreferrer"&gt;full-resolution image&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The number of completed operations on the disks. &lt;/p&gt;

&lt;p&gt;This is important because it's more taxing on the system to read/write the same amount of data in a high number of small operations rather than in a few larger ones. &lt;/p&gt;

&lt;p&gt;The disk may be able to keep up with the read/write IO bandwidth, but not with the number of operations requested to deliver that bandwidth. &lt;/p&gt;
&lt;h3&gt;
  
  
  IO backlog chart
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://i.imgur.com/Sl08a7e.png" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FSl08a7e.png"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://i.imgur.com/Sl08a7e.png" rel="noopener noreferrer"&gt;full-resolution image&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The backlog is an indication of the duration of pending disk operations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to read this?&lt;/strong&gt;&lt;br&gt;
On an ideal system, this should be zero. In practice, it will still be non-zero every now and then, simply because of the IO that the system performs. &lt;/p&gt;

&lt;p&gt;It's relative to the baseline of the system. You want to observe the graph for a specific period and set your alerts &lt;strong&gt;above&lt;/strong&gt; the peaks that you see. &lt;/p&gt;

&lt;p&gt;Note that if you run backups, these are particularly taxing on IO, so you will need to take those peaks into consideration.&lt;/p&gt;
&lt;h3&gt;
  
  
  How Geth affects the Disks charts
&lt;/h3&gt;

&lt;p&gt;The first order of business is to locate the disk that Geth uses to store its data. &lt;/p&gt;

&lt;p&gt;We first see an increased utilization. If that utilization is approaching 100%, that is a clear indication that the Disk can't handle the traffic that is being sent by Geth. &lt;/p&gt;

&lt;p&gt;This will most likely result in &lt;strong&gt;Geth not syncing&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;Continuing, we go to &lt;code&gt;IO backlog&lt;/code&gt; and we see what we expected: a constant IO backlog of about &lt;code&gt;100ms&lt;/code&gt;. Our disk simply can't perform fast enough. The good news is that the backlog is constant, which means the disk can keep up overall (just not fast enough for Geth to sync). If the backlog were ever-increasing, the disk couldn't keep up at all. &lt;/p&gt;

&lt;p&gt;Now that we are sure of the bottleneck, we can observe the other charts to better understand &lt;em&gt;why&lt;/em&gt; Geth is hammering our disks. From what I see, the &lt;code&gt;Read/Write IO bandwidth&lt;/code&gt; is not terribly high. &lt;/p&gt;

&lt;p&gt;A closer examination brings us to the &lt;code&gt;disk_ops&lt;/code&gt; chart, which shows the number of operations per second that the disk performs. It does about 500 operations per second on the test machine, which would explain why the disk can't keep up. &lt;/p&gt;

&lt;p&gt;It's not a matter of how much data is read/written on disk, but rather in how many operations that data is read/written. &lt;strong&gt;Geth does a lot of small operations that can be taxing on the disk.&lt;/strong&gt;&lt;/p&gt;
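&lt;p&gt;You can reproduce an ops/s figure like this yourself from two snapshots of &lt;code&gt;/proc/diskstats&lt;/code&gt;: after the major/minor/name columns, reads completed and writes completed are the 4th and 8th fields of a device row. A sketch with synthetic snapshots:&lt;/p&gt;

```shell
# Operations per second between two /proc/diskstats snapshots of one device.
ops_in_row() {
  echo "$1" | awk '{ print $4 + $8 }'   # reads completed + writes completed
}

disk_ops_per_sec() {
  before=$(ops_in_row "$1")
  after=$(ops_in_row "$2")
  seconds=$3
  echo $(( (after - before) / seconds ))
}

snap1="8 0 sda 1000 0 0 0 2000 0 0 0 0 0 0"
snap2="8 0 sda 1600 0 0 0 2900 0 0 0 0 0 0"
disk_ops_per_sec "$snap1" "$snap2" 3   # -> 500
# On a live system: grep " sda " /proc/diskstats, twice, a few seconds apart.
```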
&lt;h2&gt;
  
  
  Networking Stack Section
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FwRHCVxp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FwRHCVxp.png"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://i.imgur.com/wRHCVxp.png" rel="noopener noreferrer"&gt;full resolution image&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  tcp chart
&lt;/h3&gt;

&lt;p&gt;It shows TCP connection aborts. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do I read this?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;All the dimensions of this chart should be zero. If there are non-zero dimensions, that means that &lt;em&gt;something&lt;/em&gt; in the network is not behaving well (e.g. a router, the network card on the system, etc.). Consistently high numbers point to a bad network card, and you will need to replace it. &lt;/p&gt;

&lt;p&gt;High numbers of &lt;strong&gt;connection aborts&lt;/strong&gt; mean that your system can't handle the number of connections, probably due to low available memory.&lt;/p&gt;

&lt;p&gt;High numbers of &lt;strong&gt;time-outs&lt;/strong&gt; mean that there is some error in the network path between your system and the systems it connects to.&lt;/p&gt;
&lt;h3&gt;
  
  
  How Geth affects the tcp charts
&lt;/h3&gt;

&lt;p&gt;Geth is a highly networked application, with peers connecting and disconnecting all the time. Some &lt;code&gt;baddata&lt;/code&gt; is expected and shouldn't be worrying. If you observe elevated values that originate from Geth (e.g. they drop when Geth isn't running), it's a good idea to open an issue on the &lt;a href="https://github.com/ethereum/go-ethereum" rel="noopener noreferrer"&gt;Geth GitHub repository&lt;/a&gt;. &lt;/p&gt;
&lt;h2&gt;
  
  
  Applications Section
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FPpswdZy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FPpswdZy.png"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://i.imgur.com/PpswdZy.png" rel="noopener noreferrer"&gt;full-resolution&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Interestingly, this section has the same group of metrics that are available in the &lt;strong&gt;System Overview&lt;/strong&gt; section. The difference is that they are grouped on a per-application-group basis. &lt;/p&gt;

&lt;p&gt;The application groups are defined in the &lt;a href="https://github.com/netdata/netdata/blob/master/collectors/apps.plugin/apps_groups.conf" rel="noopener noreferrer"&gt;apps_groups.conf&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;The user can customize it by running the following command. We assume that the netdata configuration lives in &lt;code&gt;/etc/netdata&lt;/code&gt;. Depending on the installation method, this can vary.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/etc/netdata/edit-config apps_groups.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The reason we group different processes into &lt;code&gt;application groups&lt;/code&gt; is that the user cares about the "functionality" of a certain application more than they care about the implementation details.&lt;/p&gt;

&lt;p&gt;We care about the "web server", not whether it's nginx or Apache.&lt;/p&gt;

&lt;p&gt;Moreover, the user may care about the aggregate behaviour of all the "databases" that live in the system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do I read this?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Again, we use a baseline. We let the system run under normal load to define our baseline metrics, and all readings afterward are compared against that baseline. Generally, we start from a general observation about the system (e.g. high RAM usage) and then move to the &lt;strong&gt;applications section&lt;/strong&gt; to identify which application is misbehaving.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Geth affects the Application Section
&lt;/h3&gt;

&lt;p&gt;The application section is a great place to see the resource utilization of your Geth node. Currently it is shown as &lt;code&gt;go-ethereum&lt;/code&gt;, but we will group all Ethereum clients as &lt;code&gt;ethereum node&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The most interesting chart is in the RAM submenu, where we can verify that Geth is not consuming more memory than we want.&lt;/p&gt;

&lt;p&gt;Moreover, in the case of an incident (e.g. a &lt;code&gt;CPU utilization&lt;/code&gt; spike), we can check the application charts for the same period to verify whether Geth is behind the anomalous behaviour. If it is, we can then go to Geth's logs or use the JavaScript console to see what happened around that time.&lt;/p&gt;

&lt;p&gt;This way, we can trace back an incident to the root cause.&lt;/p&gt;

&lt;h2&gt;
  
  
  eBPF charts
&lt;/h2&gt;

&lt;p&gt;Netdata offers a handful of eBPF charts out-of-the-box. With eBPF we can see, on a per-application basis, how the application directly interacts with the operating system (e.g. how many &lt;code&gt;do_fork&lt;/code&gt; &lt;code&gt;syscalls&lt;/code&gt; it makes).&lt;/p&gt;

&lt;p&gt;Although they are not particularly useful for node operators, they are &lt;strong&gt;very&lt;/strong&gt; useful to developers. Using Netdata, they can verify, for example, that their application doesn't have a memory leak or that it isn't forgetting to close files it opened.&lt;/p&gt;

&lt;p&gt;If you are a developer of an Ethereum client, please do check out the eBPF charts. We would be grateful if you tried them in your workflow and shared any feedback you may have over &lt;a href="https://discord.gg/mPZ6WZKKG2" rel="noopener noreferrer"&gt;Discord&lt;/a&gt; or our &lt;a href="https://community.netdata.cloud" rel="noopener noreferrer"&gt;Community Forums&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Although I won't go into the metrics themselves, here are some resources about eBPF as a technology:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://ebpf.io/what-is-ebpf/" rel="noopener noreferrer"&gt;Documentation -- What is eBPF&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://brendangregg.com/blog/2019-01-01/learn-ebpf-tracing.html" rel="noopener noreferrer"&gt;Blog-post -- Learn eBPF Tracing: Tutorial and Examples&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://youtu.be/CiztMr3cFfA?t=8954" rel="noopener noreferrer"&gt;Youtube -- eBPF + Netdata&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Geth section
&lt;/h2&gt;

&lt;p&gt;As already mentioned, I have created a proof-of-concept integration between Geth and Netdata. It's a collector that automatically detects a running Geth instance, gathers metrics, and creates charts for them.&lt;/p&gt;

&lt;p&gt;The Geth collector uses the Prometheus endpoint of the Geth node, available at &lt;code&gt;node:6060/debug/metrics/prometheus&lt;/code&gt;. To activate the endpoint, we must start Geth with the &lt;code&gt;--metrics&lt;/code&gt; CLI argument.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./geth --metrics.addr 0.0.0.0 --metrics
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Read more about it on the &lt;a href="https://geth.ethereum.org/docs/interface/metrics" rel="noopener noreferrer"&gt;Geth docs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you access the above path with your browser, you will see all the metrics that are exposed by Geth. &lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# TYPE chain_account_commits_count counter
chain_account_commits_count 0

# TYPE chain_account_commits summary
chain_account_commits {quantile="0.5"} 0
chain_account_commits {quantile="0.75"} 0
chain_account_commits {quantile="0.95"} 0
chain_account_commits {quantile="0.99"} 0
chain_account_commits {quantile="0.999"} 0
chain_account_commits {quantile="0.9999"} 0

# TYPE chain_account_hashes_count counter
chain_account_hashes_count 0

# TYPE chain_account_hashes summary
chain_account_hashes {quantile="0.5"} 0
chain_account_hashes {quantile="0.75"} 0
chain_account_hashes {quantile="0.95"} 0
chain_account_hashes {quantile="0.99"} 0
chain_account_hashes {quantile="0.999"} 0
chain_account_hashes {quantile="0.9999"} 0

# TYPE chain_account_reads_count counter
chain_account_reads_count 0

# TYPE chain_account_reads summary
chain_account_reads {quantile="0.5"} 0
chain_account_reads {quantile="0.75"} 0
chain_account_reads {quantile="0.95"} 0
chain_account_reads {quantile="0.99"} 0
chain_account_reads {quantile="0.999"} 0
chain_account_reads {quantile="0.9999"} 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a sample. You can find a full list of the available metrics on the &lt;a href="http://163.172.166.66:6060/debug/metrics/prometheus" rel="noopener noreferrer"&gt;prometheus endpoint&lt;/a&gt; of my test server.&lt;/p&gt;
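&lt;p&gt;If you want to experiment with these metrics programmatically before building a full collector, a few lines of Go are enough to pull values out of the text format. The following is a minimal sketch (the &lt;code&gt;parseCounters&lt;/code&gt; helper is mine, not part of Netdata or Geth) that ignores comments and labelled series:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseCounters extracts plain metric lines (no labels) from a Prometheus
// text exposition, like the sample above. Label handling is omitted for brevity.
func parseCounters(exposition string) map[string]float64 {
	metrics := make(map[string]float64)
	for _, line := range strings.Split(exposition, "\n") {
		line = strings.TrimSpace(line)
		// Skip comments, blank lines, and labelled series.
		if line == "" || strings.HasPrefix(line, "#") || strings.Contains(line, "{") {
			continue
		}
		fields := strings.Fields(line)
		if len(fields) != 2 {
			continue
		}
		value, err := strconv.ParseFloat(fields[1], 64)
		if err != nil {
			continue
		}
		metrics[fields[0]] = value
	}
	return metrics
}

func main() {
	sample := "# TYPE chain_account_commits_count counter\nchain_account_commits_count 42\n"
	fmt.Println(parseCounters(sample)["chain_account_commits_count"]) // prints 42
}
```

&lt;p&gt;In practice, the Golang plugin's internal Prometheus libraries do this parsing (and much more) for you.&lt;/p&gt;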

&lt;p&gt;To find the source code where the metrics are defined, you can do a &lt;a href="https://github.com/search?q=org%3Aethereum+metrics.NewRegisteredMeter&amp;amp;type=code" rel="noopener noreferrer"&gt;GitHub search&lt;/a&gt; in the codebase. It will help you understand what each metric means. &lt;/p&gt;

&lt;p&gt;Before continuing with the metrics that I chose for the PoC, it's important to note two things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I am not a Node Operator, thus my expertise on the Geth-specific metrics is very limited. As you can see, I only make a small comment about each chart, without offering any advice on &lt;strong&gt;how&lt;/strong&gt; to read the chart.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Geth actually exposes a lot of metrics&lt;/strong&gt;. The selection below is only a small subset that I was able to identify as helpful. I assume that there are more metrics that would make sense to surface, but I may have missed them. &lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Chaindata session total read/write chart
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2F6xBcuab.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2F6xBcuab.png"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://i.imgur.com/6xBcuab.png" rel="noopener noreferrer"&gt;full-resolution image&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Total data that has been written/read during the session (since Geth's last restart). &lt;/p&gt;

&lt;p&gt;There are charts for both LevelDB and AncientDB.&lt;/p&gt;

&lt;h3&gt;
  
  
  Chaindata rate chart
&lt;/h3&gt;

&lt;p&gt;The rate at which data is being written/read. There are charts for both LevelDB and AncientDB.&lt;/p&gt;

&lt;h3&gt;
  
  
  Chaindata size chart
&lt;/h3&gt;

&lt;p&gt;The size of the Ancient and LevelDB databases. Useful to gauge how much storage you need. &lt;/p&gt;

&lt;h3&gt;
  
  
  Chainhead chart
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FiV7Hbdq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FiV7Hbdq.png"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://i.imgur.com/iV7Hbdq.png" rel="noopener noreferrer"&gt;full-resolution&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It shows the block number of the &lt;code&gt;header&lt;/code&gt; and the &lt;code&gt;block&lt;/code&gt;. &lt;code&gt;header&lt;/code&gt; is the latest block that your node is aware of, while &lt;code&gt;block&lt;/code&gt; is the latest block that has been processed and added to the local blockchain.&lt;/p&gt;

&lt;p&gt;If these two dimensions are not the same, then the node is not synced. &lt;/p&gt;
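&lt;p&gt;The sync check boils down to comparing those two dimensions. As a sketch (the &lt;code&gt;syncLag&lt;/code&gt; helper name is mine, not from the collector):&lt;/p&gt;

```go
package main

import "fmt"

// syncLag returns how many blocks the local chain ("block") trails the
// best-known header, and whether the node can be considered synced.
func syncLag(header, block uint64) (uint64, bool) {
	if block >= header {
		return 0, true
	}
	return header - block, false
}

func main() {
	lag, synced := syncLag(13000100, 13000000)
	fmt.Println(lag, synced) // prints: 100 false
}
```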

&lt;p&gt;&lt;strong&gt;Tip&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A good addition to this chart is the &lt;code&gt;header&lt;/code&gt; dimension from another node (or perhaps some service). Having the view of another node in the network can help us understand if our node is seeing what the majority of nodes are seeing. &lt;/p&gt;

&lt;h3&gt;
  
  
  P2P bandwidth &amp;amp; peers charts
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FQucZpQV.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FQucZpQV.png"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://i.imgur.com/QucZpQV.png" rel="noopener noreferrer"&gt;full-resolution image&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The number of peers that your node has and the bandwidth between your node and its peers.&lt;/p&gt;

&lt;p&gt;In general, an optimal number of peers is around 30. This can be set as a command-line argument when running Geth.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reorgs charts
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2Fbv5hyTU.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2Fbv5hyTU.png"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://i.imgur.com/bv5hyTU.png" rel="noopener noreferrer"&gt;full-resolution image&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With all the recent talk about reorgs, these charts show the number of &lt;code&gt;reorgs&lt;/code&gt; that have been executed on our node, as well as the number of blocks that were added and dropped.&lt;/p&gt;

&lt;h3&gt;
  
  
  TX pool charts
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FENvgaCy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FENvgaCy.png"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://i.imgur.com/ENvgaCy.png" rel="noopener noreferrer"&gt;full-resolution image&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Metrics about the &lt;code&gt;tx pool&lt;/code&gt; of our Geth node are not particularly actionable, but rather informational about the kind of transactions that are happening.&lt;/p&gt;

&lt;h3&gt;
  
  
  Goroutines chart
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FnhfbwHZ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FnhfbwHZ.png"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://i.imgur.com/nhfbwHZ.png" rel="noopener noreferrer"&gt;full-resolution image&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The number of &lt;code&gt;goroutines&lt;/code&gt; is particularly important. With ~50 peers, you should expect about 500 &lt;code&gt;goroutines&lt;/code&gt;, while with ~100 you can expect around 1,500. If you have considerably more, there may be a bug in the Geth software and you should raise an issue on GitHub.&lt;/p&gt;

&lt;h3&gt;
  
  
  RPC chart
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FPAZ47FF.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FPAZ47FF.png"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://i.imgur.com/PAZ47FF.png" rel="noopener noreferrer"&gt;full-resolution image&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For now, it simply shows how many successful/failed &lt;code&gt;rpc calls&lt;/code&gt; are performed on our node per second.&lt;/p&gt;

&lt;p&gt;A sudden increase in &lt;code&gt;rpc calls&lt;/code&gt; can indicate malicious activity (e.g. a DDoS attack). Note that a high number of RPC calls can strain the system considerably, and a sudden increase in &lt;code&gt;CPU utilization&lt;/code&gt; and &lt;code&gt;CPU PSI&lt;/code&gt; will show up immediately.&lt;/p&gt;
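&lt;p&gt;Under the hood, per-second values are derived from Geth's monotonically increasing counters. A sketch of the arithmetic (the &lt;code&gt;ratePerSecond&lt;/code&gt; helper is mine, assuming two samples of a counter such as the successful-calls total):&lt;/p&gt;

```go
package main

import "fmt"

// ratePerSecond turns two samples of a monotonically increasing counter
// into a per-second rate, the way time-series tools chart raw counters.
func ratePerSecond(prev, curr, elapsedSeconds float64) float64 {
	if elapsedSeconds == 0 || curr < prev {
		return 0 // counter reset (e.g. Geth restarted) or no time elapsed
	}
	return (curr - prev) / elapsedSeconds
}

func main() {
	// 60 additional calls observed over 60 seconds: 1 call/s.
	fmt.Println(ratePerSecond(1000, 1060, 60)) // prints 1
}
```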

&lt;h2&gt;
  
  
  Default Alerts
&lt;/h2&gt;

&lt;p&gt;When monitoring a system, it's &lt;strong&gt;crucial&lt;/strong&gt; that you set up alerts. The best monitoring system is the one where you never have to open the dashboard, except when a warning alerts you of an impending incident.&lt;/p&gt;

&lt;p&gt;The good news is that Netdata comes with a slew of default alerts, so most probably you will not have to set anything up. &lt;/p&gt;

&lt;p&gt;To get a sense of the default alerts, visit the &lt;a href="http://163.172.166.66:19999/" rel="noopener noreferrer"&gt;test server&lt;/a&gt; I mentioned above and click on the alert button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FZZMNSeD.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FZZMNSeD.png"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://i.imgur.com/ZZMNSeD.png" rel="noopener noreferrer"&gt;full-resolution image&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How Geth affects the default alerts
&lt;/h2&gt;

&lt;p&gt;Geth affects the default alerts in 2 ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;There is a default alert for Geth that checks whether Geth is synced, by simply comparing &lt;code&gt;chainhead_block&lt;/code&gt; with &lt;code&gt;chainhead_header&lt;/code&gt;. This alert will be raised until the Geth node is synced.&lt;/li&gt;
&lt;li&gt;Geth may impose abnormal load on the disk. If Geth is functioning normally and a Netdata alert is raised, that means we need to change the alert's default thresholds. Not all workloads are the same, and Netdata ships sane defaults that might not suit every workload.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  How to change a default alert
&lt;/h3&gt;

&lt;p&gt;My test server is constantly triggering the &lt;code&gt;disk_space&lt;/code&gt; alert. Let's assume that we want to change that. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FrzIf5Et.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FrzIf5Et.png"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://i.imgur.com/rzIf5Et.png" rel="noopener noreferrer"&gt;full-resolution image&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the raised alert, we can see the &lt;code&gt;source&lt;/code&gt; field. From that field, I get three pieces of information:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Netdata's configuration lives in &lt;code&gt;/etc/netdata&lt;/code&gt; (it can live in other places, depending on the installation method).&lt;/li&gt;
&lt;li&gt;The configuration file that I care about is &lt;code&gt;health.d/disks.conf&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The alert starts at &lt;code&gt;line 12&lt;/code&gt; of the source file.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To change the alert:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;SSH into the machine: &lt;code&gt;ssh root@163.172.166.66&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;sudo /etc/netdata/edit-config health.d/disks.conf&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Find &lt;code&gt;line 12&lt;/code&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;template: disk_space_usage
       on: disk.space
    class: Utilization
     type: System
component: Disk
       os: linux freebsd
    hosts: *
 families: !/dev !/dev/* !/run !/run/* *
     calc: $used * 100 / ($avail + $used)
    units: %
    every: 1m
     warn: $this &amp;gt; (($status &amp;gt;= $WARNING ) ? (80) : (90))
     crit: $this &amp;gt; (($status == $CRITICAL) ? (90) : (98))
    delay: up 1m down 15m multiplier 1.5 max 1h
     info: disk $family space utilization
       to: sysadmin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above is the currently running alert. I can either comment out the entire alert definition by adding &lt;code&gt;#&lt;/code&gt; in front of every line, or change the values.&lt;/p&gt;
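&lt;p&gt;To make the &lt;code&gt;calc&lt;/code&gt;, &lt;code&gt;warn&lt;/code&gt; and &lt;code&gt;crit&lt;/code&gt; lines concrete, here is the same arithmetic as a Go sketch (the helper names are mine; the hysteresis on &lt;code&gt;$status&lt;/code&gt; is what prevents the alert from flapping around a threshold):&lt;/p&gt;

```go
package main

import "fmt"

// diskUsagePct mirrors the alert's calc line: $used * 100 / ($avail + $used).
func diskUsagePct(used, avail float64) float64 {
	return used * 100 / (avail + used)
}

// status evaluates the warn/crit lines: once a threshold is breached, a lower
// bound applies before the alert clears (hysteresis), which avoids flapping.
func status(pct float64, wasWarning, wasCritical bool) string {
	critLimit := 98.0
	if wasCritical {
		critLimit = 90.0
	}
	warnLimit := 90.0
	if wasWarning || wasCritical {
		warnLimit = 80.0
	}
	switch {
	case pct > critLimit:
		return "CRITICAL"
	case pct > warnLimit:
		return "WARNING"
	default:
		return "CLEAR"
	}
}

func main() {
	pct := diskUsagePct(95, 5) // a 95%-full disk
	fmt.Println(pct, status(pct, false, false)) // prints: 95 WARNING
}
```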

&lt;p&gt;The alert syntax is out of the scope of this blog post, but our &lt;a href="https://learn.netdata.cloud/docs/agent/health/reference" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; should offer everything you need.&lt;/p&gt;

&lt;h2&gt;
  
  
  Extending the Geth-Netdata integration
&lt;/h2&gt;

&lt;p&gt;It's trivial to extend the integration between Geth and Netdata, whether with more charts and alerts or with support for other Ethereum clients.&lt;/p&gt;

&lt;p&gt;If you want to learn how, read &lt;a href="https://dev.to/odyslam/how-to-extend-the-geth-netdata-integration-4o68"&gt;Part 2: Extending the Geth-Netdata integration&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  More Netdata goodies
&lt;/h2&gt;

&lt;p&gt;If you have read this far, you might be interested in other Netdata collectors relevant to the operation of a Geth node:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://learn.netdata.cloud/docs/agent/collectors/python.d.plugin/smartd_log" rel="noopener noreferrer"&gt;smartd monitoring&lt;/a&gt; with NVME capabilities being implemented by our community as we speak.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.netdata.cloud/docs/agent/collectors/python.d.plugin/nvidia_smi" rel="noopener noreferrer"&gt;Nvidia GPU monitoring&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.netdata.cloud/docs/agent/collectors/python.d.plugin/anomalies" rel="noopener noreferrer"&gt;Experimental - automatic Anomaly detection&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.netdata.cloud/docs/cloud/insights/metric-correlations" rel="noopener noreferrer"&gt;Metric Correlations&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  In conclusion
&lt;/h2&gt;

&lt;p&gt;System and performance monitoring is an extremely complex subject, as there is a high number of interdependencies between the system and the workload. A single issue may surface in a dozen different places, from system metrics to logs and user-facing issues. &lt;/p&gt;

&lt;p&gt;Geth in particular is a critical piece of infrastructure, and any possible downtime may have serious repercussions for both the operator and the end user.&lt;/p&gt;

&lt;p&gt;For this reason, I want to dig deeper into the matter and publish more content that helps users and node operators understand their systems and work proactively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I need you!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you are a Node Operator, I would love to talk to you and learn more about the challenges that you are facing. My goal is to install Netdata on large nodes in production and observe the effects that Geth's incidents have on the underlying systems.&lt;/p&gt;

&lt;p&gt;By understanding the deeper interdependencies that Geth has with the underlying system, I hope to educate more users and operators in monitoring their systems and avoiding incidents.&lt;/p&gt;

&lt;p&gt;You can find me on the &lt;a href="https://discord.gg/mPZ6WZKKG2" rel="noopener noreferrer"&gt;Netdata Discord&lt;/a&gt;, our &lt;a href="https://community.netdata.cloud" rel="noopener noreferrer"&gt;netdata community forums&lt;/a&gt; and on &lt;a href="https://twitter.com/odysseas_lam" rel="noopener noreferrer"&gt;twitter&lt;/a&gt;!&lt;/p&gt;

&lt;h2&gt;
  
  
  Kudos
&lt;/h2&gt;

&lt;p&gt;I want to give some kudos to my fellow colleagues @ilyam8, @ferroin and, @kkaskavelis for making all this work possible 🚀&lt;/p&gt;

</description>
      <category>monitoring</category>
      <category>ethereum</category>
      <category>blockchain</category>
      <category>devops</category>
    </item>
    <item>
      <title>How to extend the Geth-Netdata integration</title>
      <dc:creator>Odysseas Lamtzidis</dc:creator>
      <pubDate>Mon, 02 Aug 2021 15:28:49 +0000</pubDate>
      <link>https://dev.to/netdata/how-to-extend-the-geth-netdata-integration-4o68</link>
      <guid>https://dev.to/netdata/how-to-extend-the-geth-netdata-integration-4o68</guid>
      <description>&lt;h1&gt;
  
  
  How to extend the Geth collector
&lt;/h1&gt;

&lt;p&gt;This is the last of a 2-part blog post series regarding Netdata and Geth. If you missed the first, be sure to check it out &lt;a href="https://dev.to/odyslam/how-to-monitor-ethereum-node-in-under-5m-3n"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Geth is short for Go-Ethereum and is the official implementation of the Ethereum client in Go. Currently, it's one of the most widely used implementations and a core piece of infrastructure for the Ethereum ecosystem.&lt;/p&gt;

&lt;p&gt;With this proof of concept I wanted to showcase how easy it really is to gather data from any Prometheus endpoint and visualize it in Netdata. This has the added benefit of leveraging all the other features of Netdata, namely its per-second data collection, automatic deployment and configuration, and superb system monitoring.&lt;/p&gt;

&lt;p&gt;The most challenging aspect is making sense of the metrics and organizing them into meaningful charts. In other words, it takes expertise to understand what each metric means and whether it makes sense to surface it for the user.&lt;/p&gt;

&lt;p&gt;Note that some metrics make sense for some users, and other metrics for others. We want to surface &lt;strong&gt;all metrics that make sense&lt;/strong&gt;. When developing an application, you need much lower-level metrics (e.g. &lt;a href="https://containerjournal.com/topics/container-management/using-ebpf-monitoring-to-know-what-to-measure-and-why/"&gt;eBPF&lt;/a&gt;) than when operating the application.&lt;/p&gt;

&lt;p&gt;Let's get down to it. &lt;/p&gt;

&lt;h3&gt;
  
  
  A note on collectors
&lt;/h3&gt;

&lt;p&gt;First, let's do a very brief intro to what a collector is. &lt;/p&gt;

&lt;p&gt;In Netdata, every collector is composed of a plugin and a module. The plugin is an orchestrator process that is responsible for running jobs; each job is an instance of a module.&lt;/p&gt;

&lt;p&gt;When we are "creating" a collector, in essence we select a plugin and we develop a module for that plugin. &lt;/p&gt;

&lt;p&gt;For Geth, since we are using the Prometheus endpoint, it's easier to use our Golang plugin, as it has internal libraries for gathering data from Prometheus endpoints.&lt;/p&gt;

&lt;p&gt;The following image is useful:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PuFSqLHQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://aws1.discourse-cdn.com/business5/uploads/netdata2/original/1X/3cc1ef3cb489e7d3146d73bedefb812e49631cc3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PuFSqLHQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://aws1.discourse-cdn.com/business5/uploads/netdata2/original/1X/3cc1ef3cb489e7d3146d73bedefb812e49631cc3.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want to dive into the Netdata Collector framework:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://community.netdata.cloud/docs?topic=1189"&gt;FAQ: What are collectors and how do they work?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.netdata.cloud/docs/agent/collectors/plugins.d"&gt;External plugins overview&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Geth collector structure
&lt;/h3&gt;

&lt;p&gt;So, in essence, the Geth collector is the Geth module of the go.d.plugin.&lt;/p&gt;

&lt;p&gt;As you can see on &lt;a href="https://github.com/netdata/go.d.plugin/tree/master/modules/geth"&gt;GitHub&lt;/a&gt;, the module is composed of four files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;charts.go&lt;/code&gt;: Chart definitions&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;collect.go&lt;/code&gt;: Actual data collection, using the metric variables defined in &lt;code&gt;metrics.go&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;geth.go&lt;/code&gt;: Main structure, mostly boilerplate. &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;metrics.go&lt;/code&gt;: Define metric variables to the corresponding Prometheus values&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  How to extend the Geth collector with a new metric
&lt;/h3&gt;

&lt;p&gt;It's very simple, really.&lt;/p&gt;

&lt;p&gt;Open your Prometheus endpoint and find the metrics that you want to visualize with Netdata. &lt;/p&gt;

&lt;p&gt;e.g &lt;code&gt;p2p_ingress_eth_65_0x08&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Open &lt;code&gt;metrics.go&lt;/code&gt; and define a new variable&lt;/p&gt;

&lt;p&gt;e.g &lt;code&gt;const p2pIngressEth650x08 = "p2p_ingress_eth_65_0x08"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Open &lt;code&gt;collect.go&lt;/code&gt; and create a new function, identical to the ones that already exist. Although it doesn't really make a difference in our case, we strive to organize the metrics into sensible functions (e.g. gathering all &lt;code&gt;p2pEth65&lt;/code&gt; metrics in one function). This is the function where we perform any computation on the raw values that we gather.&lt;/p&gt;

&lt;p&gt;Note that Netdata will automatically take care of units such as &lt;code&gt;bytes&lt;/code&gt; and will show the most human-readable unit in the dashboard (e.g. MB, GB, etc.).&lt;/p&gt;

&lt;p&gt;e.g&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func (v *Geth) collectP2pEth65(mx map[string]float64, pms prometheus.Metrics) {
    pms = pms.FindByNames(
        p2pIngressEth650x08
    )
    v.collectEth(mx, pms)
    mx[p2pIngressEth650x08] = mx[p2pIngressEth650x08] + 1234

}

func (v *Geth) collectEth(mx map[string]float64, pms prometheus.Metrics) {
    for _, pm := range pms {
        mx[pm.Name()] += pm.Value
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We also need to add the function to the central function that is called by the module at the defined interval.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;g&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Geth&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;collectGeth&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pms&lt;/span&gt; &lt;span class="n"&gt;prometheus&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Metrics&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;map&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="kt"&gt;float64&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;mx&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;map&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="kt"&gt;float64&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;g&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;collectChainData&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;pms&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;g&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;collectP2P&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;pms&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;g&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;collectTxPool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;pms&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;g&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;collectRpc&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;pms&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;g&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;collectP2pEth65&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;pms&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;mx&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Lastly, now that we have the value inside the module, we need to create the chart for that value. We do that in &lt;code&gt;charts.go&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="n"&gt;chartReorgs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Chart&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;ID&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;    &lt;span class="s"&gt;"reorgs_executed"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;Title&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"Executed Reorgs"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;Units&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"reorgs"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;Fam&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;   &lt;span class="s"&gt;"reorgs"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;Ctx&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;   &lt;span class="s"&gt;"geth.reorgs"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;Dims&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Dims&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;ID&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;reorgsExecuted&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Name&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"executed"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;chartReorgsBlocks&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Chart&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;ID&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;    &lt;span class="s"&gt;"reorgs_blocks"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;Title&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"Blocks Added/Removed from Reorg"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;Units&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"blocks"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;Fam&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;   &lt;span class="s"&gt;"reorgs"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;Ctx&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;   &lt;span class="s"&gt;"geth.reorgs_blocks"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;Type&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;  &lt;span class="n"&gt;Line&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
        &lt;span class="n"&gt;Dims&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Dims&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;ID&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;reorgsAdd&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Name&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"added"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Algorithm&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"absolute"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;ID&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;reorgsDropped&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Name&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"dropped"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's explain the fields of the structure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;ID&lt;/code&gt;: The unique identifier for the chart.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Title&lt;/code&gt;: A human-readable title for the front-end.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Units&lt;/code&gt;: The units for the dimension. Notice that Netdata can automatically scale certain units, so that the raw collector value stays in &lt;code&gt;bytes&lt;/code&gt; but the user sees &lt;code&gt;Megabytes&lt;/code&gt; on the dashboard. You can find a list of supported "automatically scaled" units on this &lt;a href="https://github.com/netdata/dashboard/blob/068bbbb975db7871920406be56af5a641c79a08e/src/utils/units-conversion.ts"&gt;file&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Fam&lt;/code&gt;: The submenu title, used to group multiple charts together.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Ctx&lt;/code&gt;: The context of the chart, an identifier much like &lt;code&gt;ID&lt;/code&gt;. Use the convention &lt;code&gt;&amp;lt;collector_name&amp;gt;.&amp;lt;chart_id&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Type&lt;/code&gt;: &lt;code&gt;Line&lt;/code&gt; (default), &lt;code&gt;Area&lt;/code&gt;, or &lt;code&gt;Stacked&lt;/code&gt;. &lt;code&gt;Area&lt;/code&gt; is best used with dimensions that signify "bandwidth". &lt;code&gt;Stacked&lt;/code&gt; fits when it makes sense to visually observe the &lt;code&gt;sum&lt;/code&gt; of the dimensions (e.g. the &lt;code&gt;system.ram&lt;/code&gt; chart is stacked).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Dims&lt;/code&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;ID&lt;/code&gt;: The variable name for that dimension.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Name&lt;/code&gt;: The human-readable name for the dimension.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Algorithm&lt;/code&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;absolute&lt;/code&gt;: The default (if omitted). Netdata shows the value exactly as it receives it from the collector.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;incremental&lt;/code&gt;: Netdata shows the per-second rate of the value. It automatically takes the delta between two data collections and converts it to a per-second value.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;percentage&lt;/code&gt;: Netdata shows the percentage of the dimension in relation to the &lt;code&gt;sum&lt;/code&gt; of all the dimensions of the chart. If four dimensions each have value &lt;code&gt;1&lt;/code&gt;, each is shown as &lt;code&gt;25%&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Mul&lt;/code&gt;: Multiply the collected value by some integer.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Div&lt;/code&gt;: Divide the collected value by some integer.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
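&lt;p&gt;To make the dimension fields concrete, here is a minimal, hypothetical sketch of how &lt;code&gt;Mul&lt;/code&gt; and &lt;code&gt;Div&lt;/code&gt; scale a collected value. The &lt;code&gt;Dim&lt;/code&gt; type below is an illustrative subset, not the actual go.d.plugin type; a dimension that collects bytes but should chart KiB would set &lt;code&gt;Div: 1024&lt;/code&gt;:&lt;/p&gt;

```go
package main

import "fmt"

// Dim is an illustrative subset of the dimension fields discussed
// above; it is not the actual go.d.plugin type.
type Dim struct {
	ID   string
	Name string
	Algo string // "absolute", "incremental", or "percentage"
	Mul  int    // multiply the collected value (0 means "unset")
	Div  int    // divide the collected value (0 means "unset")
}

// scale applies Mul and Div to a raw collected value, treating an
// unset (zero) factor as 1, i.e. no scaling.
func (d Dim) scale(raw int64) float64 {
	mul, div := d.Mul, d.Div
	if mul == 0 {
		mul = 1
	}
	if div == 0 {
		div = 1
	}
	return float64(raw) * float64(mul) / float64(div)
}

func main() {
	// Collect a database size in bytes, chart it in KiB.
	d := Dim{ID: "db_size", Name: "size", Algo: "absolute", Div: 1024}
	fmt.Println(d.scale(4096)) // prints 4
}
```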

&lt;h3&gt;
  
  
  A final note on extending Geth
&lt;/h3&gt;

&lt;p&gt;The Prometheus endpoint is not the only way to monitor Geth, but it's the simplest.&lt;/p&gt;

&lt;p&gt;If you feel adventurous, you can try to implement a collector that also uses Geth's RPC endpoint to pull data (e.g. to chart specific contracts in real time), or even Geth's logs.&lt;/p&gt;

&lt;p&gt;To use Geth's RPC endpoint with Golang, take a look at &lt;a href="https://geth.ethereum.org/docs/dapp/native"&gt;Geth's documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To monitor Geth's logs, you can use our &lt;a href="https://github.com/netdata/go.d.plugin/tree/ec9980149c3d32e4a90912826edd344dfb0413ac/modules/weblog"&gt;weblog collector&lt;/a&gt; as a template. It monitors Apache and NGINX servers by parsing their logs. &lt;/p&gt;

&lt;h3&gt;
  
  
  Add alerts to Geth charts
&lt;/h3&gt;

&lt;p&gt;Now that we have defined the new charts, we may want to define alerts for them. The full alert syntax is out of scope for this tutorial, but it shouldn't be difficult once you get the hang of it.&lt;/p&gt;

&lt;p&gt;For example, here is a simple alarm that tells me if Geth is synced or not, based on whether &lt;code&gt;header&lt;/code&gt; and &lt;code&gt;block&lt;/code&gt; values are the same:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  1 #chainhead_header is expected momenterarily to be ahead. If its considerably ahead (e.g more than 5 blocks), then the node is definetely out of sync.
  2  template: geth_chainhead_diff_between_header_block
  3        on: geth.chainhead
  4     class: Workload
  5      type: ethereum_node
  6 component: geth
  7     every: 10s
  8      calc: $chain_head_block -  $chain_head_header
  9     units: blocks
 10      warn: $this != 0
 11      crit: $this &amp;gt; 5
 12     delay: up 5s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;You can read the above example as follows:&lt;/strong&gt;&lt;br&gt;
On the charts that have the context &lt;code&gt;geth.chainhead&lt;/code&gt; (thus all the Geth nodes that we may monitor with a single Netdata Agent), every 10s, calculate the difference between the &lt;code&gt;chain_head_block&lt;/code&gt; and &lt;code&gt;chain_head_header&lt;/code&gt; dimensions. If it's not 0, raise a &lt;code&gt;warn&lt;/code&gt; alert. If the difference is more than 5 blocks, raise it to &lt;code&gt;critical&lt;/code&gt;.&lt;/p&gt;


&lt;p&gt;Note that if you create an alert and it works for you, it's a great idea to open a PR against the main &lt;code&gt;netdata/netdata&lt;/code&gt; &lt;a href="https://github.com/netdata/netdata"&gt;repository&lt;/a&gt;. That way, the alert definition will ship with every Netdata installation, and you will help countless other Geth users.&lt;/p&gt;

&lt;p&gt;Here are some useful resources to create new alerts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=aWYj9VT8I5A"&gt;Youtube - Creating your first health alarm in Netdata&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.netdata.cloud/docs/monitor/configure-alarms"&gt;Docs - Configure health alert
&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.netdata.cloud/docs/agent/health/reference"&gt;Docs - alert configuration reference&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.netdata.cloud/docs/monitor/enable-notifications"&gt;Docs - Enable alert notifications&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Extend Geth collector for other clients
&lt;/h2&gt;

&lt;p&gt;The beauty of this solution is that it's &lt;strong&gt;trivial&lt;/strong&gt; to duplicate the collector and gather metrics from all Ethereum clients that support the Prometheus endpoint:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.nethermind.io/nethermind/ethereum-client/metrics/setting-up-local-metrics-infrastracture"&gt;Nethermind&lt;/a&gt;,&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://besu.hyperledger.org/en/stable/HowTo/Monitor/Metrics/"&gt;Besu&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/ledgerwatch/erigon"&gt;Erigon&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The only difference between a Geth collector and a &lt;a href="https://nethermind.io/client"&gt;Nethermind&lt;/a&gt; collector is that the clients might expose different metrics, or the same metrics under different Prometheus metric names. So, we just need to change the Prometheus metric names in the &lt;code&gt;metrics.go&lt;/code&gt; source file and propagate any changes to the other source files.&lt;/p&gt;

&lt;p&gt;The logic that I described above stays exactly the same. &lt;/p&gt;
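&lt;p&gt;To make the renaming concrete, here is a minimal sketch in the spirit of &lt;code&gt;metrics.go&lt;/code&gt;. Everything except the Geth name &lt;code&gt;chain_head_block&lt;/code&gt; is a hypothetical placeholder; you would replace the placeholders after inspecting each client's actual &lt;code&gt;/metrics&lt;/code&gt; output:&lt;/p&gt;

```go
package main

import "fmt"

// chainHeadMetric maps an Ethereum client to the Prometheus series
// name it uses for the chain head block. Only the Geth name comes
// from this guide; the others are illustrative placeholders.
var chainHeadMetric = map[string]string{
	"geth":       "chain_head_block",
	"nethermind": "nethermind_chain_head_block", // placeholder
	"besu":       "besu_chain_head_block",       // placeholder
}

// metricName returns the series name to scrape for a given client,
// falling back to the Geth name for unknown clients.
func metricName(client string) string {
	if name, ok := chainHeadMetric[client]; ok {
		return name
	}
	return chainHeadMetric["geth"]
}

func main() {
	fmt.Println(metricName("geth")) // prints chain_head_block
}
```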

&lt;h2&gt;
  
  
  In conclusion
&lt;/h2&gt;

&lt;p&gt;Extending the Geth collector with more metrics is trivial.&lt;/p&gt;

&lt;p&gt;As you may suspect, this guide is applicable to any data source that exposes its metrics in the Prometheus format.&lt;/p&gt;
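&lt;p&gt;To see why the Prometheus text format generalizes so well, here is a minimal parser sketch for its simplest case: &lt;code&gt;metric_name value&lt;/code&gt; lines, skipping comments. It deliberately ignores labels and timestamps, which a real parser (like the one go.d.plugin uses) of course handles:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseLine extracts a metric name and value from a simple
// Prometheus exposition line, skipping blank lines and comments.
// Labels and timestamps are intentionally not handled here.
func parseLine(line string) (name string, value float64, ok bool) {
	line = strings.TrimSpace(line)
	if line == "" || strings.HasPrefix(line, "#") {
		return "", 0, false
	}
	fields := strings.Fields(line)
	if len(fields) != 2 {
		return "", 0, false
	}
	v, err := strconv.ParseFloat(fields[1], 64)
	if err != nil {
		return "", 0, false
	}
	return fields[0], v, true
}

func main() {
	name, v, ok := parseLine("chain_head_block 1.2971e+07")
	fmt.Println(name, v, ok)
}
```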

</description>
      <category>ethereum</category>
      <category>go</category>
      <category>devops</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>The Network State community
</title>
      <dc:creator>Odysseas Lamtzidis</dc:creator>
      <pubDate>Wed, 30 Jun 2021 11:26:46 +0000</pubDate>
      <link>https://dev.to/odyslam/the-network-state-community-3ief</link>
      <guid>https://dev.to/odyslam/the-network-state-community-3ief</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FxDEzFrA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/Un9r78o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FxDEzFrA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/Un9r78o.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Meet the Network State community
&lt;/h1&gt;

&lt;p&gt;They say that if you want to go fast, you go alone, but if you want to go far, go together. &lt;/p&gt;

&lt;p&gt;Over the last months, there have been more and more voices around the ideas of the Network state.&lt;/p&gt;

&lt;p&gt;I was first introduced to this brand of ideas while reading the book "The Sovereign Individual". In the new cyber-world, it claims, societies are bound to organize once again into merchant republics and city-states. The book was written in the late 1990s, hence the use of "cyber".&lt;/p&gt;

&lt;p&gt;It argues that our obsession with nation-states will seem as bizarre in the future as medieval oaths seem today.&lt;/p&gt;

&lt;p&gt;The Internet turns everything into a frontier, as it brings together people from all over the globe. In a world where people and capital can move freely, nation-states will have to compete, in the sense of keeping capital and people from fleeing. Although violence may work against people, capital in the form of cryptocurrency will move either way. Voting with your feet, as &lt;a href="https://twitter.com/balajis"&gt;Balaji S. Srinivasan&lt;/a&gt; notes, will be far more effective than voting at the ballot box or with your wallet.&lt;/p&gt;

&lt;p&gt;With that in mind, the ideas championed by futurist, founder, and investor Balaji S. Srinivasan are not that bizarre. Actually, they fit right in.&lt;/p&gt;

&lt;p&gt;Balaji's thesis is simple. Geographical proximity no longer equates to cultural proximity. Thanks to the Internet, we can communicate and form online communities, as we already have. Using blockchain as a consensus layer to codify the group's rules, we can form a state. A network state, that is. &lt;/p&gt;

&lt;p&gt;First online, then on land. &lt;/p&gt;

&lt;p&gt;As he underlines, there is a very simple reason to do this.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We want to be able to peacefully start a new country for the same reason we want a bare plot of earth, a blank sheet of paper, an empty text buffer, a fresh startup, or a clean slate. Because we want to build something new without historical constraint.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  So, how do we go about it?
&lt;/h2&gt;

&lt;p&gt;As you can imagine, the domain is vast, covering many different fields, from legal to construction. Thankfully, there are a handful of people already working on such ideas. &lt;a href="https://www.creatorcabins.com/"&gt;Creator Cabins&lt;/a&gt; is a great example of an MVP project, exploring the actual creation of small facilities.&lt;/p&gt;

&lt;p&gt;There are other, larger-scale examples, like &lt;a href="https://culdesac.com/"&gt;Culdesac&lt;/a&gt;. They are building a whole neighborhood from scratch, right in the US.&lt;/p&gt;

&lt;p&gt;Of course, a neighborhood is a far cry from actual sovereign land, but it's a start. &lt;/p&gt;
&lt;h2&gt;
  
  
  The community
&lt;/h2&gt;

&lt;p&gt;Now, seeing all these disparate discussions online, I thought that a community would be a great place for people to come together and talk about this domain.&lt;/p&gt;

&lt;p&gt;This is a by-definition multi-disciplinary domain. The more diverse the community is, the more fruitful the discussions will be. &lt;/p&gt;

&lt;p&gt;The long run is particularly important, especially if we want real societal change: a geographically distributed group of people who nevertheless share common values and culture.&lt;/p&gt;

&lt;p&gt;We want to bring these people together.&lt;/p&gt;

&lt;p&gt;To interact with the network-state community, you will need &lt;a href="https://urbit.org/"&gt;Urbit&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you haven't heard about it, no worries. It's a complete re-definition of modern computing, so let's talk about Urbit for a bit.&lt;/p&gt;
&lt;h2&gt;
  
  
  Urbit
&lt;/h2&gt;

&lt;p&gt;Urbit is an entirely new paradigm in personal computers. It's a compact system for an individual to run their own permanent personal server on any Unix machine with an internet connection.&lt;/p&gt;

&lt;p&gt;Urbit was created as an &lt;strong&gt;exit&lt;/strong&gt; from a world where applications and services are controlled by big companies. It's not only that they monetize your data; it's also the very fact that they can deplatform you at any time.&lt;/p&gt;

&lt;p&gt;Urbit believes that the only way forward is for us to run our own services and applications. It achieves this by creating an entirely new OS and peer-to-peer network that is &lt;em&gt;simple&lt;/em&gt; by design and 100% owned by its users.&lt;/p&gt;

&lt;p&gt;It's an entirely new stack, an integrated tool for people to communicate and build communities. A tool that the people themselves control, they can trust and extend to their liking.&lt;/p&gt;
&lt;h3&gt;
  
  
  Urbit OS
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GAFvCuYg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://media.urbit.org/site/understanding-urbit/technical-overview/technical-overview-kernel%402x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GAFvCuYg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://media.urbit.org/site/understanding-urbit/technical-overview/technical-overview-kernel%402x.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Urbit OS is a new, carefully architected software stack: a VM, programming language, and kernel designed to run software for an individual.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It's completely sealed from the system it runs on. In the Urbit world, every person has their own Urbit OS node. &lt;/p&gt;

&lt;p&gt;Read more about &lt;a href="https://urbit.org/understanding-urbit/urbit-os/"&gt;Urbit OS&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Urbit ID
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AJeh_dRh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://media.urbit.org/site/understanding-urbit/uu-intro-3.svg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AJeh_dRh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://media.urbit.org/site/understanding-urbit/uu-intro-3.svg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Urbit ID is an identity and authentication system specifically designed to work with Urbit OS. When you boot or log in to Urbit OS, you use your Urbit ID.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The Urbit ID is a short, memorable name (e.g. ~ravmel-ropdyl) that is a username, network address, and cryptocurrency wallet all in one. Since it's registered on the Ethereum blockchain, it's yours forever. &lt;strong&gt;It's cryptographic property.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Read more about &lt;a href="https://urbit.org/understanding-urbit/urbit-id/"&gt;Urbit ID&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  OS 1
&lt;/h3&gt;

&lt;p&gt;In early 2020, Urbit released OS 1, a minimal interface for group communication and the first complete Urbit interface. This is the Eden of our community.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://urbit.org/understanding-urbit/interface/"&gt;Lanscape&lt;/a&gt;, as the interface is called, will be the place where we will collaborate and have our discussions. A calm place for people who search for the truth.&lt;/p&gt;
&lt;h2&gt;
  
  
  Install Urbit
&lt;/h2&gt;

&lt;p&gt;To join the community, you will have to install Urbit. &lt;/p&gt;

&lt;p&gt;The easiest way is to follow the &lt;a href="https://urbit.org/getting-started/"&gt;Getting Started Guide&lt;/a&gt; and use the Port Application:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9ZyUGy6m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/vHG3zPd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9ZyUGy6m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/vHG3zPd.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Alternative -- Using the CLI
&lt;/h3&gt;

&lt;p&gt;Although the CLI is more technically involved, it's an older, more stable client. It's a good alternative in case you face any challenges with the Port. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;macOS&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; ~/urbit
&lt;span class="nb"&gt;cd&lt;/span&gt; ~/urbit
curl &lt;span class="nt"&gt;-JLO&lt;/span&gt; https://urbit.org/install/mac/latest
&lt;span class="nb"&gt;tar &lt;/span&gt;zxvf ./darwin.tgz &lt;span class="nt"&gt;--strip&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
~/urbit/urbit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Linux&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; ~/urbit
&lt;span class="nb"&gt;cd&lt;/span&gt; ~/urbit
wget &lt;span class="nt"&gt;--content-disposition&lt;/span&gt; https://urbit.org/install/linux64/latest
&lt;span class="nb"&gt;tar &lt;/span&gt;zxvf ./linux64.tgz &lt;span class="nt"&gt;--strip&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
~/urbit/urbit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Linux users may need to run the following commands in another terminal window to access their Urbit on port 80:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;libcap2-bin
&lt;span class="nb"&gt;sudo &lt;/span&gt;setcap &lt;span class="s1"&gt;'cap_net_bind_service=+ep'&lt;/span&gt; ~/urbit/urbit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After installation, you should see a block of output that begins with the following line:&lt;br&gt;
&lt;code&gt;Urbit: a personal server operating function&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Welcome to your &lt;em&gt;last&lt;/em&gt; computer. &lt;/p&gt;
&lt;h2&gt;
  
  
  Identities
&lt;/h2&gt;

&lt;p&gt;A core concept of Urbit is its Identity. Although we won't go into detail here, it's important to remember that IDs have a hierarchy in the Urbit universe. &lt;/p&gt;

&lt;p&gt;There are galaxy IDs, where each galaxy is the parent of many star IDs, and each star is the parent of many planet IDs. We, the users, are concerned with planets (and comets).&lt;/p&gt;

&lt;p&gt;You can read more about Urbit identities in the docs.&lt;/p&gt;
&lt;h2&gt;
  
  
  Planets and comets
&lt;/h2&gt;

&lt;p&gt;Planets are deliberately scarce, which, among other things, prevents spam. This artificial scarcity makes planets valuable and thus not free.&lt;/p&gt;

&lt;p&gt;If you have already acquired a planet, you can &lt;a href="https://urbit.org/getting-started/planet/"&gt;follow this guide&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Comets&lt;/strong&gt;, on the other hand, are practically unlimited and free, which makes them a great way to try out the network. They have very long and unwieldy names, such as:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;~dasres-ragnep-lislyt-ribpyl--mosnyx-bisdem-nidful-marzod&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;As we read in the Urbit documentation:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You can continue using this comet indefinitely. There are currently few differences between using a comet-level identity and a planet-level one. However, some groups will not allow comets entry in order to maintain a certain level of quality, and changes may be made in the future that further devalue comets. They will always, however, be able to access the basic functions of the network. &lt;/p&gt;

&lt;p&gt;A comet also comes with a long and fairly unmemorable name whereas a planet has a short name and a "sigil" (avatar) associated with it that makes it more identifiable on the network. You may notice all this within the first few minutes of using Urbit.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  Using Landscape
&lt;/h2&gt;

&lt;p&gt;Landscape is the UI of this new computer. To open Landscape, just visit &lt;code&gt;localhost&lt;/code&gt; in your browser.&lt;/p&gt;

&lt;p&gt;If it doesn't work, try &lt;code&gt;localhost:8080&lt;/code&gt;, which is the fallback &lt;code&gt;port&lt;/code&gt; in case the default HTTP port &lt;code&gt;80&lt;/code&gt; is already taken by another app. &lt;/p&gt;

&lt;p&gt;Search for the following output in your terminal; it will tell you which port your Urbit is listening on.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eyre: canceling ~[//http-server/0vu.34ksf/2/3]
http: web interface live on http://localhost:8080
http: loopback live on http://localhost:12322
pier (18836): live
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Welcome to the clean-slate OS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bM3kEr8b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/pOCLllx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bM3kEr8b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/pOCLllx.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Run Urbit: Port application
&lt;/h2&gt;

&lt;p&gt;Open the application and choose your comet. Click Launch and that's it!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--59WpL5Pk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/beJ4XGo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--59WpL5Pk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/beJ4XGo.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: You can find your Access Key by pressing the bottom left button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--z-Ik5AE0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/5URamae.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--z-Ik5AE0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/5URamae.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Run Urbit: CLI
&lt;/h2&gt;

&lt;p&gt;We are now going to boot our free identity, or &lt;em&gt;comet&lt;/em&gt;. It's very simple.&lt;/p&gt;

&lt;p&gt;Run &lt;code&gt;~/urbit/urbit -c mycomet&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Caveat&lt;/strong&gt;: When you run urbit again, you will &lt;strong&gt;not&lt;/strong&gt; use the flag &lt;code&gt;-c&lt;/code&gt;. Thus, &lt;code&gt;~/urbit/urbit mycomet&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;It can take a while to load the comet, since a unique ID needs to be generated. Be patient.&lt;/p&gt;

&lt;p&gt;Once it finishes loading, you should see the following line in the terminal:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;~sampel_marzod:dojo&amp;gt;&lt;/code&gt;, where &lt;code&gt;dojo&lt;/code&gt; is the Urbit command line. &lt;/p&gt;

&lt;p&gt;Before proceeding, go ahead and type &lt;code&gt;+code&lt;/code&gt; in the terminal. Copy the result; we will need it later.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gULmRXe9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/kVySCFy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gULmRXe9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/kVySCFy.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Network State group
&lt;/h2&gt;

&lt;p&gt;To join the community, click on "&lt;strong&gt;Join Group&lt;/strong&gt;" and type &lt;code&gt;~pitrup-nosfyl/network-state&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YlGHUKu_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/nMwXWZu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YlGHUKu_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/nMwXWZu.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You should land on an empty page with all the channels greyed out. Welcome to our community.&lt;/p&gt;

&lt;p&gt;Simply choose a channel that you wish to join and click on "Join Channel".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--W2vlevuy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/uDIZJo9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--W2vlevuy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/uDIZJo9.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before proceeding to discover the discussions, make sure you leave a comment in the &lt;strong&gt;intro&lt;/strong&gt; channel. It's a great place to get to know each other, not necessarily on a personal level, as we live in the pseudonymous age. We would simply like to know why you joined the group, whether you are engaged in relevant projects, and anything else you find relevant.&lt;/p&gt;

&lt;p&gt;For updates about the community, keep an eye on the Group Feed.&lt;/p&gt;

&lt;p&gt;The community is very new, so everything that you see is open for change. If you feel that something is missing or should be altered, please do say so. &lt;/p&gt;

&lt;p&gt;The community is only as good as its members are, so if you think you can improve the shared experience, it's up to you to improve it. Sending me a message is a good start. &lt;/p&gt;

&lt;p&gt;Welcome to the future of communities.&lt;/p&gt;

&lt;h2&gt;
  
  
  1729
&lt;/h2&gt;

&lt;p&gt;A small note before I let you go. Although this group is focused on ideas around sovereign individuals and the network state, there is already a &lt;strong&gt;Discord&lt;/strong&gt; group of 1729ers.&lt;/p&gt;

&lt;p&gt;1729 is a project started by Balaji. As he describes it:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It's a newsletter for technological progressives. That means people who are into cryptocurrencies, startup cities, mathematics, transhumanism, space travel, reversing aging, and initially-crazy-seeming-but-technologically-feasible ideas. Basically, if you like @balajis' past work, you will enjoy this&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For now, we don't expect to create an Urbit group for 1729, but if there are users who prefer Urbit for its more private, decentralised experience, we will consider offering the choice. The goal is to bring people together, not split them up.&lt;/p&gt;

&lt;p&gt;To join the Discord group: &lt;a href="https://discord.gg/WPN3XftmkV"&gt;https://discord.gg/WPN3XftmkV&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Relevant Resources
&lt;/h2&gt;

&lt;p&gt;Here is a helpful list of resources that might prove useful, as you enter this domain's rabbit hole.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.chartercitiesinstitute.org/post/charter-cities-podcast-episode-15-a-city-in-the-cloud-with-balaji-srinivasan"&gt;Charter Cities Podcast Episode 15: A City in the Cloud with Balaji Srinivasan&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=P5UAtAOV66c&amp;amp;t=1843s"&gt;Balaji S. Srinivasan: The Network State&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://1729.com/how-to-start-a-new-country/"&gt;How to Start a New Country - 1729.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://creators.mirror.xyz/-lNPJRz2GLWIcsuMTZqklGNEWRrY7Nk0Y33Qn6Lw4q4"&gt;Tech stack for decentralized cities - Jonathan Hillis&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Update #1 on Learning in Public </title>
      <dc:creator>Odysseas Lamtzidis</dc:creator>
      <pubDate>Sun, 13 Jun 2021 13:50:53 +0000</pubDate>
      <link>https://dev.to/odyslam/update-1-on-learning-in-public-57e6</link>
      <guid>https://dev.to/odyslam/update-1-on-learning-in-public-57e6</guid>
      <description>&lt;p&gt;Hello everyone,&lt;/p&gt;

&lt;p&gt;It's been 2 weeks since I launched my [Learning in Public] Roam Research graph. &lt;/p&gt;

&lt;p&gt;As I wrote in the initial blog post, the goal of the graph is to start documenting my learnings in a manner that is usable by others. Moreover, I hope that this graph will result in a multiplayer learning experiment. That means that multiple people will log their learnings on the graph and enrich the various &lt;strong&gt;Locations of Knowledge&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;After running with this for 2 weeks, here are a couple of learnings:&lt;/p&gt;

&lt;h2&gt;
  
  
  New Locations of Knowledge
&lt;/h2&gt;

&lt;p&gt;You can find all the Locations of Knowledge on the &lt;a href="https://roamresearch.com/#/app/Symposium/page/mG9aABxkw"&gt;Roam Research Graph&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Solidity
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Baseline architecture for an oracle&lt;/li&gt;
&lt;li&gt;What is a library?&lt;/li&gt;
&lt;li&gt;What is the &lt;code&gt;delete&lt;/code&gt; keyword?&lt;/li&gt;
&lt;li&gt;Fallback functions&lt;/li&gt;
&lt;li&gt;What is &lt;code&gt;delegatecall()&lt;/code&gt;?&lt;/li&gt;
&lt;li&gt;How to force ether into a contract&lt;/li&gt;
&lt;li&gt;Security implications for

&lt;ul&gt;
&lt;li&gt;Interfaces&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;transfer()&lt;/code&gt;, &lt;code&gt;send()&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Solidity has been going great: I have finished &lt;a href="https://cryptozombies.io/"&gt;CryptoZombies&lt;/a&gt; and invested considerable time in &lt;a href="https://ethernaut.openzeppelin.com/"&gt;Ethernaut&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As you will see, the new additions cover vastly more advanced concepts than the first iteration of the Location of Knowledge for Solidity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Mechanism Design
&lt;/h3&gt;

&lt;p&gt;I am happy to share with you all that I have started a new course on Mechanism Design. Notes from the first 2 lectures have been added to the Graph.&lt;/p&gt;

&lt;p&gt;Mechanism Design is hugely relevant to blockchains, since it's about designing systems that achieve a certain goal in the presence of strategic players. &lt;/p&gt;

&lt;p&gt;It's a science that complements psychology, computer science and game theory. Since blockchains have an intrinsic economic component, being able to create systems that align the interests of different players is as challenging as it is important.&lt;/p&gt;

&lt;p&gt;MakerDAO is a very good example of sound mechanism design. The system has been able to work for all these years, providing both lending services and a fully transparent stablecoin. At its very essence, the stability is due to good mechanism design, not the obscure storage of dollar equivalents.&lt;/p&gt;

&lt;h3&gt;
  
  
  DeFi
&lt;/h3&gt;

&lt;p&gt;I have started learning more about DeFi, and a couple of interesting insights have already been logged into the Graph.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What is MakerDao&lt;/li&gt;
&lt;li&gt;What is Uniswap&lt;/li&gt;
&lt;li&gt;How do AMMs work?&lt;/li&gt;
&lt;li&gt;What is slippage?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is a first pass, and I will enrich the content as I read more about these topics.&lt;/p&gt;
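&lt;p&gt;To make the AMM and slippage entries concrete, here is a minimal sketch of a constant-product market maker (the x * y = k model popularized by Uniswap). The pool sizes and trade amounts are hypothetical, and real AMMs also charge a fee, which is omitted here.&lt;/p&gt;

```python
def swap_output(reserve_in: float, reserve_out: float, amount_in: float) -> float:
    """Tokens received for amount_in, keeping reserve_in * reserve_out constant."""
    k = reserve_in * reserve_out
    new_reserve_out = k / (reserve_in + amount_in)
    return reserve_out - new_reserve_out


def slippage(reserve_in: float, reserve_out: float, amount_in: float) -> float:
    """Relative shortfall of the trade versus the pre-trade spot price."""
    spot_output = amount_in * (reserve_out / reserve_in)
    return 1 - swap_output(reserve_in, reserve_out, amount_in) / spot_output


# The bigger the trade relative to the pool, the worse the slippage.
print(round(slippage(1_000, 1_000, 10), 4))   # ~0.0099 (about 1%)
print(round(slippage(1_000, 1_000, 200), 4))  # ~0.1667 (about 17%)
```

&lt;p&gt;Intuitively, each token bought makes the next one more expensive, because the product of the two reserves must stay constant.&lt;/p&gt;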

&lt;h2&gt;
  
  
  System
&lt;/h2&gt;

&lt;p&gt;Minor changes have been implemented in the personal pages. &lt;/p&gt;

&lt;p&gt;There are 2 new categories: one to log the &lt;strong&gt;Open Questions&lt;/strong&gt; that come up while taking &lt;strong&gt;fleeting notes&lt;/strong&gt;, and another to keep track of videos that are relevant to a &lt;strong&gt;fleeting note&lt;/strong&gt; and that the person wants to view later.&lt;/p&gt;

&lt;p&gt;I plan to re-evaluate the system on a monthly basis.&lt;/p&gt;

&lt;h2&gt;
  
  
  Want to engage?
&lt;/h2&gt;

&lt;p&gt;Take a look inside the graph. If it's interesting to you, drop into Discord and let's have a chat!&lt;/p&gt;

&lt;p&gt;Discord: &lt;a href="https://discord.gg/CqpaY9FAgU"&gt;https://discord.gg/CqpaY9FAgU&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;See you in 2 weeks :) &lt;/p&gt;

</description>
      <category>roamresearch</category>
    </item>
    <item>
      <title>Multiplayer Learning in Public with Roam Research</title>
      <dc:creator>Odysseas Lamtzidis</dc:creator>
      <pubDate>Sun, 30 May 2021 10:23:35 +0000</pubDate>
      <link>https://dev.to/odyslam/a-system-for-learning-in-public-with-roam-research-62h</link>
      <guid>https://dev.to/odyslam/a-system-for-learning-in-public-with-roam-research-62h</guid>
      <description>&lt;p&gt;I have started an exciting new project, that will go in parallel with &lt;strong&gt;everything&lt;/strong&gt; that I am doing. In short, it's called &lt;strong&gt;Learning in Public&lt;/strong&gt; and I have written about it in this blog post [[Learning in Public intro post]].&lt;/p&gt;

&lt;p&gt;Learning in public serves 3 main functions, which I am refining every day that I use this system.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It forces me to write down every &lt;em&gt;single&lt;/em&gt; thing that I learn. Although it slows things down, it lets me document my learnings later.&lt;/li&gt;
&lt;li&gt;It's a call for like-minded people. Learning is simply much more fun when learning with others. Moreover, I find it difficult to participate in 10 different domain-specific communities. &lt;strong&gt;People are multi-faceted, and I want this community to be too.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;It's a great repository to help others get up to speed in the domains that I cover.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Visit the Graph on &lt;a href="https://roamresearch.com/#/app/Symposium/page/t9PFemV3W" rel="noopener noreferrer"&gt;Roam Research&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Read the blog post on &lt;a href="https://roamresearch.com/#/app/Symposium/page/XCjPplbNs" rel="noopener noreferrer"&gt;Roam Research&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Roam Research
&lt;/h2&gt;

&lt;p&gt;As I hinted in the title, I opted to go for Roam Research. The choice was a no-brainer, as it's by far the lowest-friction tool I have found for writing things down. It's snappy, yet powerful.&lt;/p&gt;

&lt;p&gt;With the power of &lt;code&gt;[[pages]]&lt;/code&gt; and &lt;code&gt;((blocks))&lt;/code&gt;, you can do project management and any kind of organization you want. It needs some experimentation and time, but it's possible. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foos2hu52lhycip5swgu7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foos2hu52lhycip5swgu7.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For those who don't know Roam, the TL;DR version is the following.&lt;/p&gt;

&lt;p&gt;In Roam Research, every bullet that you see is a block, and blocks form a hierarchical structure. While writing, you can reference either entire pages or individual blocks. When you reference a page, you embed a link to that page, and a &lt;code&gt;backlink&lt;/code&gt; is created in that page pointing back to the page where you made the reference. The same goes for blocks.&lt;/p&gt;

&lt;p&gt;In essence, you can use &lt;code&gt;[[pages]]&lt;/code&gt; either as a page to write into or as a tag to anchor its children. For example, if I keep the systems that I use inside a &lt;code&gt;[[project system]]&lt;/code&gt; page, I can go to that page and get an overview of &lt;strong&gt;all the systems&lt;/strong&gt; that I am using.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frsltmrli9zb2kcamxdmp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frsltmrli9zb2kcamxdmp.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;
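&lt;p&gt;As a toy illustration (this is not Roam's actual implementation), the backlink mechanic described above can be sketched in a few lines of Python: every &lt;code&gt;[[page]]&lt;/code&gt; reference found in a source page is also indexed on the target page, pointing back at the source.&lt;/p&gt;

```python
import re
from collections import defaultdict

PAGE_REF = re.compile(r"\[\[([^\]]+)\]\]")  # matches [[page]] references

def build_backlinks(pages: dict) -> dict:
    """Map each referenced page to the set of pages that reference it."""
    backlinks = defaultdict(set)
    for source, text in pages.items():
        for target in PAGE_REF.findall(text):
            backlinks[target].add(source)
    return backlinks

# Hypothetical two-page graph.
pages = {
    "project system": "Systems beat goals, see [[system vs goal]].",
    "daily notes": "Worked on [[project system]] and [[system vs goal]].",
}
print(sorted(build_backlinks(pages)["system vs goal"]))
# ['daily notes', 'project system']
```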

&lt;p&gt;For the same reason, I use tags as column names for the kanban: I can easily get a view of all the items that are currently in the &lt;code&gt;DOING&lt;/code&gt; column. Moreover, filters enable advanced queries, for example all the &lt;em&gt;blocks&lt;/em&gt; that have both a &lt;code&gt;project DOING&lt;/code&gt; parent and a &lt;code&gt;Learning in Public&lt;/code&gt; one. Of course, this particular example is not very helpful, since I could simply visit the project page for &lt;code&gt;Learning in Public&lt;/code&gt;, but you get the idea.&lt;/p&gt;

&lt;p&gt;At this point it's important to note that a &lt;code&gt;#tag&lt;/code&gt; and a &lt;code&gt;[[page]]&lt;/code&gt; work exactly the same way. The difference is only visual, which makes it natural to reserve them implicitly for different uses.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff2gtqwuz8bgr3z5gnobv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff2gtqwuz8bgr3z5gnobv.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With these primitives in mind, let's see the actual system that I am using. &lt;/p&gt;

&lt;h2&gt;
  
  
  The structure
&lt;/h2&gt;

&lt;p&gt;The system can be divided into 2 main groups. &lt;/p&gt;

&lt;p&gt;The System: how I use the graph every day, and how I hope others will too.&lt;/p&gt;

&lt;p&gt;The Ontology: How knowledge is organized in this repository.&lt;/p&gt;

&lt;p&gt;Let's start with the ontology, which is basically a cool word for "structure". We will see what the building blocks of this repository are and why I chose to organize it that way.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ontology
&lt;/h2&gt;

&lt;p&gt;The repository uses a set of &lt;strong&gt;tags&lt;/strong&gt; and &lt;strong&gt;pages&lt;/strong&gt; to organize knowledge. &lt;/p&gt;

&lt;h2&gt;
  
  
  Tags
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;[[fleeting notes]]&lt;/strong&gt;: raw notes about something, meant to be converted into notes for longer storage (evergreen). Using queries inside a [[project]], we aggregate all the &lt;em&gt;fleeting notes&lt;/em&gt; for that particular project. They are meant to be removed once they have been either discarded or converted to evergreen notes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;[[external resources]]&lt;/strong&gt;: Tags external content for further research. All links should be anchored by this tag so that we can quickly view all the external links for a particular [[Location of Knowledge]].&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;[[open question]]&lt;/strong&gt;: An unanswered question that I save so that we don't forget about it. The goal is to have an inbox of all our open questions about a particular project so that we can quickly address them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;[[project version]]&lt;/strong&gt;: You can define different project versions so that you can group notes that concern the current version or the next iteration.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Pages
&lt;/h2&gt;

&lt;h3&gt;
  
  
  [[project]]
&lt;/h3&gt;

&lt;p&gt;A project page. Although we don't need to narrowly define a project, let's just say that it's a defined set of actions to achieve a particular goal, e.g. [[Odysseas learns Solidity]] or [[odyslam.com v3]].&lt;/p&gt;

&lt;p&gt;[[kanban]]: It's the main way to store items that I need to go through for that particular project. &lt;/p&gt;

&lt;p&gt;We have 3 different groups: [[project TODO]], [[project DOING]] and [[project DONE]].&lt;/p&gt;

&lt;p&gt;Items are divided into &lt;code&gt;IMP:&amp;lt;item_name&amp;gt;&lt;/code&gt; and &lt;code&gt;RE:&amp;lt;item_name&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;IMP&lt;/strong&gt;: implement something. We have already decided to do this. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RE&lt;/strong&gt;: Research into something. We don't know yet if we will implement it or not. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every item can have a START DATE and an END DATE. This helps us keep track of when we start/finish our project items.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;[[project system]]&lt;/strong&gt;: We define the system that supports this project. Systems are much more effective than goals because they define habitual behavior. Read more on [[system vs goal]] and this &lt;a href="https://jamesclear.com/goals-systems" rel="noopener noreferrer"&gt;blog post&lt;/a&gt; by [[james clear]]&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;[[project brainstorming]]&lt;/strong&gt;: A general category to write thoughts and ideas for a particular project. After we refine them, we should transfer them into the #kanban as project items.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;[[project NEXT ITERATION]]&lt;/strong&gt;: A general category for things that we want to do, but not in the current [[project version]]. We write them here to keep track of them and not forget them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;[[project persona]]&lt;/strong&gt;: When doing a project, it is helpful to remember whom this project is for. We assume that the project has a consumer, a human at the other end who will use the outcome of the project (e.g. &lt;em&gt;a tool&lt;/em&gt;). We should always have an end-user in mind, and this category is the perfect place to take a few notes about them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;[[project inbox]]&lt;/strong&gt;: A query that returns all the fleeting notes about a particular project. This enables the user to keep a log of all their learnings under the same top-level block (e.g. a date) and be certain that every fleeting note will end up in the appropriate project.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;[[project open questions]]&lt;/strong&gt;: The same functionality as [[project inbox]], but for &lt;em&gt;open questions&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0dq1z5jb3579gc4ye6su.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0dq1z5jb3579gc4ye6su.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  [[Location of Knowledge]]
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;page name:&lt;/strong&gt; "Learn X", where X can be anything&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Type:&lt;/strong&gt; Is it a [[tooling]] or an entire [[domain]]? This is rather ill-defined and we will reassess as more LoCs are created.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Domain&lt;/strong&gt;: Can we group this into a higher-order [[domain]]? e.g. [[Learn Solidity]] is in the domain of [[Learn Ethereum]]&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;How to read this category&lt;/strong&gt;: An introduction on how to read a Location of Knowledge. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;[[insights]]&lt;/strong&gt;: This is the primary category: a set of curated insights about this category. Usually, they are complementary to some original [[external resources]]. We don't need to mirror the entire original content, only the "insight" or note that we believe is important. This way, we super-charge our learning by coupling original content with "hints" that might be unintuitive or hard to grasp. It's learning 10x.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;[[external resources]]&lt;/strong&gt;: This group is for external resources that are not tied to a particular insight, but are useful to have around.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;[[capstone project ideas]]&lt;/strong&gt;: A capstone project solidifies the understanding of the subject. This is a category for ideas for capstone projects that people can take up.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0memwj411smjlfevm90r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0memwj411smjlfevm90r.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  [[personal page]]:
&lt;/h3&gt;

&lt;p&gt;This is the personal space for every user of this repository. It is the &lt;strong&gt;source&lt;/strong&gt; of every page that is created by that user. An anchor for their content.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;meta&lt;/strong&gt;: meta information about the user (e.g twitter account)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;fleeting notes&lt;/strong&gt;: the learning log of that particular user. Only they should be writing in it. It is structured as follows:

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;date&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;A project page

&lt;ul&gt;
&lt;li&gt;raw learnings, ideally with timestamps (to get a view of how long each took)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;example:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;[[May 29th, 2021]]&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[[Odysseas learns Solidity]]

&lt;ul&gt;
&lt;li&gt;22:54 Solidity is just a language for writing smart contracts on [[Ethereum]]. Actually, there are alternatives, such as [[Vyper]]&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;strong&gt;learning projects&lt;/strong&gt;: A list of project pages that concern learning. e.g [[Odysseas learns Solidity]]&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;strong&gt;capstone projects&lt;/strong&gt;: A list of capstone projects that the user would like to do. This helps the community to get visibility into what every user is working on. Visibility translates into more shared knowledge and enables collaboration.&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg2izldckr4oq5ka1yys1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg2izldckr4oq5ka1yys1.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The use of pages is supported by a very handy #roamcult add-on, called &lt;a href="https://roamstack.com/workflow-automation-roam42-smartblocks/" rel="noopener noreferrer"&gt;42SmartBlocks&lt;/a&gt;. To install the add-on, you have to copy-paste some JavaScript code into your graph, which will run every time you open your Roam Research app. The add-on is very powerful, but for now we use it as a templating engine for our pages.&lt;/p&gt;

&lt;p&gt;Instead of copy-pasting the blocks into every new LoC or project page, we run the SmartBlock command, which auto-populates the blocks. Templates are already available in vanilla Roam Research, but SmartBlocks makes them dynamic. For example, instead of manually adding the date, we can add the SmartBlock code &lt;code&gt;&amp;lt;%DATE:Today%&amp;gt;&lt;/code&gt;, which is replaced with today's date when the template is expanded. It's like typing &lt;code&gt;/today&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#42SmartBlock project 

Tags: #project

Creation Date: **&amp;lt;%DATE:**Today**%&amp;gt;**

## [[project description]]

### [[project inbox]]

{{[[query]]: {and: [[fleeting notes]] [[&amp;lt;%CURRENTPAGENAME%&amp;gt;]]}}}

### [[project open questions]]

{{[[query]]: {and: [[open question]] [[TODO]] [[&amp;lt;%CURRENTPAGENAME%&amp;gt;]]}}}

### [[project brainstorming]]



### [[project persona]]



### [[project system]]



### [[project NEXT ITERATION]]



#kanban {{[[kanban]]}}

[[project TODO]]

[[project DOING]]

[[project DONE]]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
#42SmartBlock loc

## [[Location of Knowledge]] template

Type: 

Creation Date: **&amp;lt;%DATE:**Today**%&amp;gt;**

Domain: 

## Introduction

## How to read this category 

The insights are not meant to be all-inclusive, but complementary resources. Follow the instructions at the start of the Insights.

Insights are meant to **greatly accelerate** your learning process of the original material that they accompany. 

This should be collaborative. If you have any questions, just jump into **Discord:** https://discord.gg/CqpaY9FAgU. 

If you want to enrich the content, let's chat in the **Discord** channel.

If you are unfamiliar with Roam Research, visit [[Learn Roam Research]] to learn more about it.

Think of this as a trunk of a tree. A body of knowledge that branches into different subjects. All subjects (branches) have [[external resources]], so you can research further into the subject.

For example, click on the following image for an example from [[Learn Solidity]], where the insights were generated while following [Crypto Zombies](https://cryptozombies.io/) 

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zb53e16m98ppptss9772.png)



## Insights

## [[external resources]]

## [[capstone projects]] ideas

## People who [[&amp;lt;%CURRENTPAGENAME%&amp;gt;]]

{{[[query]]: {and: [[person]] [[&amp;lt;%CURRENTPAGENAME%&amp;gt;]]}}}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  System
&lt;/h2&gt;

&lt;p&gt;Creating an ontology is cool. But you know what's cooler? &lt;/p&gt;

&lt;p&gt;Having a system that can support such an ontology, for multiple concurrent users and a considerable amount of time. &lt;/p&gt;

&lt;p&gt;The system is the result of many hours of "doing" &lt;strong&gt;Learning in Public&lt;/strong&gt; and thinking about the best way to keep track of my learnings. It's not perfect, but I feel it's refined enough to consider it an &lt;strong&gt;alpha version&lt;/strong&gt;. I do not have a set of explicit requirements, but there are some principles that I kept in mind when designing this system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It should have the minimum possible friction. The user should focus on the note-taking itself, rather than on the system. The system's success, in a way, is measured by its transparency: the more transparent it is, the more natural it feels to the user. This is important, as reducing friction is the best way to help yourself develop a habit.&lt;/li&gt;
&lt;li&gt;It should support multiple concurrent users. We need a system that multiple people can use in parallel without it turning into a chaotic graph.&lt;/li&gt;
&lt;li&gt;It should be forward-facing. We probably use more tags than we currently need, but that's OK. I am sure that in the future new use-cases will emerge and the advanced filtering options will be really helpful. It is impossible to foresee all the possibilities, since we are still at the start of the project.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With this in mind, this is how the system should work:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;[[Locations of Knowledge]]&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;They are the treasure of this graph. Only curated insights and refined blocks should end up there. They should be treated as a &lt;strong&gt;book&lt;/strong&gt;, rather than a &lt;strong&gt;notebook&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;They should follow a specific name pattern: [[Learn X]], where X is something to be learned.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;[[project]]&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you are in doubt, it's a project. A &lt;strong&gt;capstone project&lt;/strong&gt;  is a project. Your &lt;strong&gt;personal&lt;/strong&gt; learning page is a project. &lt;/p&gt;

&lt;p&gt;A project can have any name, except if it is about learning a particular Location of Knowledge. In that case, use the corresponding naming pattern.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0p8eof5dsy0xhhowi8lx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0p8eof5dsy0xhhowi8lx.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  How to keep notes
&lt;/h3&gt;

&lt;p&gt;The core of this graph is the habit of taking notes as we learn something. The following diagram sums up the process:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxlxsnntlno1h43oxb4yc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxlxsnntlno1h43oxb4yc.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The process described above results in the following Personal Page, with one personal learning project named "Odysseas Learns Solidity":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhu0ktgml4vrdhnhc5q2l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhu0ktgml4vrdhnhc5q2l.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqszfi1ehfnzcvaiupf5y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqszfi1ehfnzcvaiupf5y.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  How to add insights in a Location of Knowledge
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyqt9ppmpd8jmetyzfl3b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyqt9ppmpd8jmetyzfl3b.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This results in the following Location of Knowledge:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F63aih93gicinkur4y0zj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F63aih93gicinkur4y0zj.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What about tags and pages
&lt;/h3&gt;

&lt;p&gt;At this point, I haven't defined a specific "rule" for when to make something a page. It is ad-hoc, based on what I expect to want to refer to in the future. For example, it makes sense to group together all the various notes and information that I gather around the Ethereum Virtual Machine. It's a good rule of thumb to be cautious, so that the graph is not riddled with meaningless tags that make it visually impossible to read.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl0l1jqwoj9p2k05gy47j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl0l1jqwoj9p2k05gy47j.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Unlinked References
&lt;/h3&gt;

&lt;p&gt;Roam Research has a great feature where it shows &lt;strong&gt;unlinked references&lt;/strong&gt; on a page. That means that if I later create a page "EVM", it will surface all the occurrences of the word "EVM" in my graph, assuming that they refer to this page even though they don't link to it. This is very helpful for backfilling links when you add a tag later in the "life" of the graph.&lt;/p&gt;
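&lt;p&gt;Conceptually (this is a simplification, not Roam's real matching logic), finding unlinked references amounts to searching blocks for the page name while ignoring mentions that are already wrapped in &lt;code&gt;[[...]]&lt;/code&gt;:&lt;/p&gt;

```python
import re

def unlinked_references(page_name: str, blocks: list) -> list:
    """Blocks that mention page_name without linking to [[page_name]]."""
    linked = re.compile(r"\[\[" + re.escape(page_name) + r"\]\]")
    plain = re.compile(re.escape(page_name))
    hits = []
    for block in blocks:
        without_links = linked.sub("", block)  # drop already-linked mentions
        if plain.search(without_links):
            hits.append(block)
    return hits

# Hypothetical blocks: only the first mentions "EVM" without linking it.
blocks = [
    "The EVM executes smart contract bytecode.",
    "Read the [[EVM]] page for details.",
]
print(unlinked_references("EVM", blocks))
# ['The EVM executes smart contract bytecode.']
```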

&lt;h2&gt;
  
  
  Future
&lt;/h2&gt;

&lt;p&gt;The graph is still very early in its conception, with less than a month of active usage, so I expect this system to evolve constantly. The more we use the system and structure, the more optimizations we will find. In the future, I hope to have a community of people who all learn in parallel on the same graph.&lt;/p&gt;

&lt;p&gt;I believe that people are multi-faceted, thus we shouldn't necessarily cluster in domain-specific communities. Symposium will be a rich community, where people learn about many different things, in a way that the learnings of one can benefit everyone. A community where people love learning and helping one another like good comrades. What we share is our love of learning.&lt;/p&gt;

&lt;h2&gt;
  
  
  CTA
&lt;/h2&gt;

&lt;p&gt;If you find this interesting, make sure to check out the graph on &lt;a href="https://roamresearch.com/#/app/Symposium/page/t9PFemV3W" rel="noopener noreferrer"&gt;Roam Research&lt;/a&gt; and join the Discord community. If you want to participate, send a message and we will talk it through. At the moment, Roam Research demands a high degree of trust, so we will need to work things out first.&lt;/p&gt;

&lt;p&gt;I regularly tweet about technology, communities, learning and decentralisation, so make sure you &lt;a href="https://twitter.com/odysseas_lam" rel="noopener noreferrer"&gt;follow&lt;/a&gt; me for more.&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>roamresearch</category>
      <category>learning</category>
      <category>community</category>
    </item>
    <item>
      <title>Learning Solidity, in Public</title>
      <dc:creator>Odysseas Lamtzidis</dc:creator>
      <pubDate>Sun, 23 May 2021 16:22:11 +0000</pubDate>
      <link>https://dev.to/odyslam/learning-solidity-in-public-58a3</link>
      <guid>https://dev.to/odyslam/learning-solidity-in-public-58a3</guid>
      <description>&lt;p&gt;A couple of weeks ago, I finished reading &lt;a href="https://www.nateliason.com/notes/sovereign-individual"&gt;The Sovereign Individual&lt;/a&gt;. Written in the late 90s, it talks about the ground-breaking changes that are coming to our society. Things like the Internet and computer technology will radically change our societies.&lt;/p&gt;

&lt;p&gt;What struck me most profoundly was the fact that the authors predicted blockchains. Of course, I am not talking about the technology, but rather the principle. They predicted the coming of the permissionless, decentralized exchange of information and value.&lt;/p&gt;

&lt;p&gt;After finishing the book, I had quite a few things to reflect upon. The most important question was about my place in a world where winner-takes-all effects are becoming the norm. I concluded, amongst other things, that I had to give the whole blockchain domain a second chance. The first chance was back in 2017-2018, when I invested time researching not-so-great &lt;a href="https://scholar.google.gr/citations?user=LFNHtTgAAAAJ&amp;amp;hl=en"&gt;blockchain projects&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Being familiar with the basic principles of blockchain, I opted to start with a practical aspect of the space. After consuming a couple of books and podcasts, Solidity seemed the obvious start. It is one of the first languages written for a general-application blockchain: &lt;a href="https://en.wikipedia.org/wiki/Ethereum"&gt;Ethereum&lt;/a&gt;. In Solidity, you create programs that run on the Ethereum blockchain itself. These programs, the &lt;em&gt;smart contracts&lt;/em&gt; as we call them, run on every &lt;a href="https://ethereum.org/en/developers/docs/nodes-and-clients/"&gt;Ethereum full node&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The language is &lt;a href="https://simple.wikipedia.org/wiki/Turing_complete#:~:text=Turing%20complete%20is%20a%20term,programming%20languages%20are%20Turing%2Dcomplete."&gt;Turing-complete&lt;/a&gt;, so we can write programs for a great number of use cases: from running decentralized exchanges (&lt;a href="https://academy.binance.com/en/articles/what-is-a-decentralized-exchange-dex"&gt;DEXes&lt;/a&gt;) to defining the rules of an organization (&lt;a href="https://academy.binance.com/en/articles/decentralized-autonomous-organizations-daos-explained"&gt;DAOs&lt;/a&gt;). Once we launch these programs on the blockchain, it is impossible for others to censor or stop them. These "decentralized applications" run on &lt;strong&gt;every&lt;/strong&gt; Ethereum miner node.&lt;/p&gt;

&lt;p&gt;Let's take a moment to reflect on this unique characteristic.&lt;/p&gt;

&lt;p&gt;We can define a system, where: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the inner workings of the system are completely transparent &lt;/li&gt;
&lt;li&gt;everyone abides by a pre-defined set of rules&lt;/li&gt;
&lt;li&gt;all the rules are visible to everyone and known from the start &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Moreover, smart-contract code is open source by default, because nobody will take part in a system whose source code hasn't been shared. And once the source code is shared, it is trivial to verify it.&lt;/p&gt;

&lt;p&gt;This means that we have a groundbreaking way to create software and design systems. A way that is open source and fair, with limited asymmetry of information (no hidden rules).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://twitter.com/naval"&gt;Naval Ravikant&lt;/a&gt;, also known as the philosopher investor, has said about Ethereum:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;So if Bitcoin is a shared ledger, then Ethereum is a shared computer for the entire world to run its most important applications. &lt;a href="https://tim.blog/2021/03/09/vitalik-buterin-naval-ravikant-transcript/"&gt;Source&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Perspective
&lt;/h2&gt;

&lt;p&gt;With that in mind, one can see why people are falling in love with Ethereum and Solidity. It is true, though, that there are limitations (e.g. the network can support up to &lt;a href="https://blockchair.com/ethereum/charts/transactions-per-second"&gt;20 transactions/second&lt;/a&gt;). Currently, there are many interesting domains, such as DeFi (Decentralized Finance). DeFi is one of the few use cases where using a blockchain actually makes sense. Always remember that a blockchain is a very expensive way for counter-parties to agree on something.&lt;/p&gt;

&lt;h2&gt;
  
  
  Learning Solidity in Public
&lt;/h2&gt;

&lt;p&gt;While learning Solidity, I decided to start another project, called &lt;a href="https://dev.to/odyslam/learning-in-public-14ap"&gt;Learning in Public&lt;/a&gt;. In that project, I share my learnings about different subjects as I learn them.&lt;/p&gt;

&lt;p&gt;Thus, I decided to share my learnings of Solidity.&lt;/p&gt;

&lt;p&gt;Think about it. Why spend hours on the internet chasing down the same answers as I did?&lt;/p&gt;

&lt;p&gt;Follow the guide that I have compiled, and the insights will cover most of the questions that you may have. &lt;strong&gt;The goal is to greatly accelerate your learning process&lt;/strong&gt;. Thus, it is not about creating original content, but rather about curating the already available content.&lt;/p&gt;

&lt;p&gt;I will simply lay out the material for you, as I did for myself.&lt;/p&gt;

&lt;h2&gt;
  
  
  Outline
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Insights from &lt;a href="https://cryptozombies.io/"&gt;Cryptozombies&lt;/a&gt; Solidity path, lessons 1-5
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;_variable&lt;/code&gt; means that the variable is a function's argument. It's a convention to make the code more readable. &lt;/li&gt;
&lt;li&gt;Storing a string in Memory vs Storage&lt;/li&gt;
&lt;li&gt;When to use memory keyword?&lt;/li&gt;
&lt;li&gt;uint variable types&lt;/li&gt;
&lt;li&gt;Ethereum gas cost considerations.&lt;/li&gt;
&lt;li&gt;When defining a Solidity Struct you don't have to remember the order. There are different ways to call a specific attribute.&lt;/li&gt;
&lt;li&gt;What are events?&lt;/li&gt;
&lt;li&gt;String comparison&lt;/li&gt;
&lt;li&gt;Function Modifiers&lt;/li&gt;
&lt;li&gt;Internal Transactions&lt;/li&gt;
&lt;li&gt;Transactions vs Calls&lt;/li&gt;
&lt;li&gt;Function Visibility internal, external, public and private&lt;/li&gt;
&lt;li&gt;Interfaces&lt;/li&gt;
&lt;li&gt;calldata variable storage&lt;/li&gt;
&lt;li&gt;What is abi.encodepacked?&lt;/li&gt;
&lt;li&gt;Assert vs Require&lt;/li&gt;
&lt;li&gt;What is contract inheritance? &lt;/li&gt;
&lt;li&gt;Import contract&lt;/li&gt;
&lt;li&gt;Security Considerations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;📍 Sounds simple enough? Then, &lt;a href="https://roamresearch.com/#/app/Symposium/page/_QiKXQKiD"&gt;Enter the rabbit hole&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>solidity</category>
      <category>ethereum</category>
      <category>blockchain</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Learning in Public</title>
      <dc:creator>Odysseas Lamtzidis</dc:creator>
      <pubDate>Sun, 23 May 2021 16:17:31 +0000</pubDate>
      <link>https://dev.to/odyslam/learning-in-public-14ap</link>
      <guid>https://dev.to/odyslam/learning-in-public-14ap</guid>
      <description>&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;Just visit the &lt;a href="https://roamresearch.com/#/app/Symposium/page/t9PFemV3W"&gt;Symposium graph&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/odyslam/learning-solidity-in-public-58a3"&gt;Learn Solidity, in Public&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Building in Public
&lt;/h2&gt;

&lt;p&gt;The &lt;em&gt;building in public&lt;/em&gt; movement is something that I only recently discovered. You pick a medium, like Twitter, and you treat it as your personal "building" journal. You log as you build, thus you build in public.&lt;/p&gt;

&lt;p&gt;There are a lot of very successful startups and founders who have perfected this &lt;em&gt;art&lt;/em&gt;. For example, @shl from @gumroad shares his insights from running a startup and being a creator. Not only do I get to share the thrill of doing something with a sense of mission, but I also learn about startups.&lt;/p&gt;

&lt;p&gt;In a lot of ways, building in public shares a lot of attributes with &lt;em&gt;open source&lt;/em&gt; projects. You get to build in a way where everyone can see your victories and defeats.&lt;/p&gt;

&lt;p&gt;The first upside of all this is that you gather a following which is rooting for your success. Because they care, they will provide feedback, creating a virtuous feedback loop. The positive effects of this constant back and forth fall into two categories: it is invigorating for the spirit, and it is a source of inspiration. I am also certain that it is a necessity for effective rapid iteration. It's rocket fuel 🚀 for learning.&lt;/p&gt;

&lt;p&gt;With that in mind, today I am announcing an experiment of mine. A system to &lt;strong&gt;Learn in Public&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Learning in Public
&lt;/h2&gt;

&lt;p&gt;It's an effort to bring all my learning efforts out in the open. Make them available for everyone to leverage.&lt;/p&gt;

&lt;p&gt;I spend a considerable part of every day learning new things. Instead of learning in a vacuum, I will be logging my thoughts, insights, and "Aha!" moments as I research a particular subject.&lt;/p&gt;

&lt;p&gt;Every subject will have a corpus of knowledge, like a &lt;strong&gt;tree trunk&lt;/strong&gt;, which is the foundation of many insights. Think of them as branches. Each insight clarifies some part of the trunk and offers links for further reading.&lt;/p&gt;

&lt;p&gt;That way, you should be able to speed up your learning on the particular subject. This effort should accumulate into a rich repository of interlinked knowledge. I hope that people will enrich it, creating a community in the process.&lt;/p&gt;

&lt;p&gt;Learning is fun, especially when you share it with other people. This mental tango is so much more useful and productive than banging your head against the wall. Feedback is a crucial ingredient of learning. &lt;/p&gt;

&lt;p&gt;The idea is that it will be beneficial to everyone. For starters, it will force me to approach learning in an even more systematic manner. Moreover, it will benefit others by laying out a path for learning a particular subject.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why learning?
&lt;/h2&gt;

&lt;p&gt;I have always been good at learning new ideas, new concepts. It's something that I enjoy and I suspect that is the very reason that I am good at it. &lt;/p&gt;

&lt;p&gt;Everyone knows a person that always seems to know a Wikipedia-style fact about a subject. Well, I am that person. I literally read Wikipedia about everything that piqued my interest. I remember vividly that once Russia was mentioned in a discussion and I was baffled because I didn't know anything about the country. I thought to myself: "Awesome, let's spend the next hour learning some facts about Russia".&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How is it organized? &lt;/li&gt;
&lt;li&gt;What is its geography?&lt;/li&gt;
&lt;li&gt;What about the economy?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You see, this love of learning about things was not visible to me. It was a natural part of my psyche, &lt;em&gt;distinct&lt;/em&gt; &lt;em&gt;but&lt;/em&gt; &lt;em&gt;invisible&lt;/em&gt;. Like a pane of glass, it exists but you don't "see" it, you see through it. It served me well, as I was able to excel in school and later at university without studying too much. Even with the number of unrelated projects that I took on, I was gliding through courses.&lt;/p&gt;

&lt;p&gt;But, &lt;strong&gt;there is no such thing as a free lunch&lt;/strong&gt;. This "aptitude" to learn things was only available when I &lt;em&gt;wanted&lt;/em&gt; to learn something. My discipline &lt;del&gt;was&lt;/del&gt; is terrible. It was particularly terrible when I had to do something that I considered unimportant.&lt;/p&gt;

&lt;p&gt;For that reason, I was never &lt;em&gt;great&lt;/em&gt;. I was good if and whenever I decided to be. I assume this is because I never observed my inclination for learning: this ability to learn and place knowledge in a mental map, retrieving it at will.&lt;/p&gt;

&lt;p&gt;For me, a big chunk of &lt;a href="https://www.coursera.org/learn/learning-how-to-learn"&gt;learning how to learn&lt;/a&gt; has been the ability to quickly find the right content. Being able to leverage the power of the internet to identify and consume the next piece of the puzzle.&lt;/p&gt;

&lt;h2&gt;
  
  
  A revelation
&lt;/h2&gt;

&lt;p&gt;In a way, this is still the way that I am operating, working day and night for the things that I feel passionate about. Thankfully, life is malleable. I am able to work on what I love, doing what I love, and learning about the things that I love.&lt;/p&gt;

&lt;p&gt;I do not need to remember the content of a piece of information. I only need to remember the fact that it exists and the metadata that will help with its retrieval.&lt;/p&gt;

&lt;p&gt;With Google always at arm's reach, I can vastly increase the amount of data available to me. I need only remember the metadata to retrieve it. In essence, I compress the entirety of the information into a handful of keywords.&lt;/p&gt;

&lt;p&gt;The thing is that I was completely unaware of this system until it was pointed out by a colleague and friend.&lt;/p&gt;

&lt;p&gt;During a discussion, he mentioned that I always seemed to have at hand material to support my thesis. He thought that I had a library or something.&lt;/p&gt;

&lt;p&gt;I was puzzled. For me, finding things on the internet has always been so normal and easy, that I thought it was trivial. Well, apparently it wasn't. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I spent a couple of days reflecting on that fact.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Everything fell into place. I decided to start taking detailed notes while researching or learning about things. It was hard, because taking notes is far slower than just &lt;strong&gt;doing™️&lt;/strong&gt;, but it was well worth the effort. After a couple of weeks, I had a log of all the things that I had learned. All these insights, nicely stored, filled with comments and links to resources. It was magic really, as you could see the main corpus of knowledge, but also dive into a particular insight at will.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sharing is caring
&lt;/h2&gt;

&lt;p&gt;Having compiled a "learning log", I thought to myself: "that could be useful to people!". Chances are that I am not the only one who had these questions while reading &lt;em&gt;that&lt;/em&gt; article or tutorial.&lt;/p&gt;

&lt;p&gt;You see, I have a thing for tools for knowledge. Others are into BDSM, I am into Roam Research.&lt;/p&gt;

&lt;p&gt;Why not create a living, breathing graph of knowledge, I thought to myself.&lt;/p&gt;

&lt;p&gt;Instead of creating static artifacts, I will create an interconnected, living repository. This interconnection is &lt;a href="https://zenkit.com/en/blog/a-beginners-guide-to-the-zettelkasten-method/"&gt;particularly important&lt;/a&gt;: it is knowledge on steroids.&lt;/p&gt;

&lt;p&gt;On top of that, we could even bootstrap a community. A community about learning, spanning many different topics. Humans are multi-dimensional beings. We can read about &lt;a href="https://en.wikipedia.org/wiki/Solidity"&gt;Solidity&lt;/a&gt; and at the same time reflect on the writings of &lt;a href="https://en.wikipedia.org/wiki/Jos%C3%A9_Ortega_y_Gasset"&gt;José Ortega y Gasset&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enter the Symposium
&lt;/h2&gt;

&lt;p&gt;We start with &lt;a href="https://dev.to/odyslam/learning-solidity-in-public-58a3"&gt;Learning Solidity&lt;/a&gt; but more will follow.&lt;/p&gt;

&lt;p&gt;Over the next weeks, I will be transferring the bulk of my notes, both from various learning domains and from books that I have read.&lt;/p&gt;

&lt;p&gt;Follow me here or on &lt;a href="https://twitter.com/odysseas_lam"&gt;Twitter&lt;/a&gt; to be up to date!&lt;/p&gt;

&lt;p&gt;I opted to name this public space after the &lt;a href="https://en.wikipedia.org/wiki/Symposium_Plato"&gt;Symposium&lt;/a&gt;, the famous work of Plato. As we read in Wikipedia,&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It depicts a friendly contest of extemporaneous speeches given by a group of notable men attending a banquet&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Well, you can think of this space as our banquet, where we chat and share our learnings about the things that we are interested in.&lt;/p&gt;

&lt;p&gt;I feel that I have written enough. Just visit the &lt;a href="https://roamresearch.com/#/app/Symposium/page/t9PFemV3W"&gt;Symposium graph&lt;/a&gt; and &lt;br&gt;
I will see you there ✌️&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>learning</category>
      <category>roamcult</category>
      <category>buildinpublic</category>
    </item>
    <item>
      <title>Migrating from Nodebb to Discourse</title>
      <dc:creator>Odysseas Lamtzidis</dc:creator>
      <pubDate>Thu, 22 Apr 2021 14:07:07 +0000</pubDate>
      <link>https://dev.to/odyslam/migrating-from-nodebb-to-discourse-3ael</link>
      <guid>https://dev.to/odyslam/migrating-from-nodebb-to-discourse-3ael</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1dulva0qji50dne5tuz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1dulva0qji50dne5tuz.png" alt="cover image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this blog post, I note some interesting things that I discovered during the migration of the &lt;a href="https://community.netdata.cloud" rel="noopener noreferrer"&gt;Netdata Community&lt;/a&gt; from Nodebb to Discourse. &lt;/p&gt;

&lt;p&gt;I had to deviate a bit from the &lt;a href="https://meta.discourse.org/t/importing-nodebb-mongodb-to-discourse/126553/11" rel="noopener noreferrer"&gt;"official" instructions&lt;/a&gt;, thus I thought it would be interesting to share my notes. &lt;/p&gt;

&lt;h1&gt;
  
  
  Some background
&lt;/h1&gt;

&lt;p&gt;When I joined Netdata in early August, we had just released our forum, based on Nodebb.&lt;/p&gt;

&lt;p&gt;According to Nodebb creators: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;NodeBB is a next-generation discussion platform that utilizes web sockets for instant interactions and real-time notifications. NodeBB forums have many modern features out of the box such as social network integration and streaming discussions.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The premise of a well-engineered product proved true: Nodebb is an extremely pleasant piece of software to work with, offering a modern technological stack which is easily extensible, using npm to vendor its plugins.&lt;/p&gt;

&lt;p&gt;On top of that, the plugins can be installed without rebuilding the forum, meaning that you can swap them on-the-fly. Great!&lt;/p&gt;

&lt;p&gt;The reason we chose to move away from Nodebb is that, although the project is awesome and the community is vibrant, it's not as popular as Discourse. This translates to Discourse having a greater number of available plugins and themes. Our community quickly grew, with new requirements that would be hard to accommodate with the existing tools at hand.&lt;/p&gt;

&lt;p&gt;In other words, bringing the forum up to shape would require more manual work and maintenance in Nodebb than in Discourse. That would have been fine, since the customization options are really wonderful, but I am the only Developer Relations team member at the moment. Whatever development we need, I have to do myself, which considerably limits the time I can invest.&lt;/p&gt;

&lt;p&gt;Moreover, having now spent considerable time in Discourse, I do prefer some elements of NodeBB, such as the technology stack or the plugin system, but Discourse as a whole offers much more control and many more options. &lt;strong&gt;It's simply more mature.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;All in all, there is a reason why the great majority of forums look identical nowadays, and that reason is that Discourse is hard to beat. (Although there are a couple of interesting options oriented more towards SaaS products; maybe in another blog post.)&lt;/p&gt;

&lt;h2&gt;
  
  
  The migration
&lt;/h2&gt;

&lt;p&gt;In order to migrate, the good folks at Discourse, with the &lt;a href="https://meta.discourse.org/t/importing-nodebb-mongodb-to-discourse/126553/11" rel="noopener noreferrer"&gt;help of the community&lt;/a&gt;, have released a migration tool which parses the MongoDB/Redis database of Nodebb and extracts what can be extracted.&lt;/p&gt;

&lt;p&gt;There are a few gotchas which I learned the hard way. Let's see them:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You have to build Discourse from source. As it's a large piece of monolithic Ruby software, this is not as trivial as one would hope.

&lt;ol&gt;
&lt;li&gt;macOS does not play nice with Ruby; avoid it to reduce unneeded complexity.&lt;/li&gt;
&lt;li&gt;Go for Ubuntu (or an Ubuntu VM) and prepare to spend quite some time setting everything up.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;In case of a hosted NodeBB, make sure that you request the &lt;strong&gt;entirety&lt;/strong&gt; of the dump from their support. I spent quite some time trying to figure out why the migration did not work, and the reason was that I had simply requested half the dump.&lt;/li&gt;

&lt;li&gt;In case you encounter the &lt;code&gt;don't know what to do with the file, skipping&lt;/code&gt; error, it means that you are using a newer version of mongorestore, which has a slightly different syntax than the one shown in the migration instructions.

&lt;ol&gt;
&lt;li&gt;Try &lt;code&gt;mongorestore -d &amp;lt;database_name&amp;gt; /directory&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;This will restore the database in a newly created database called &lt;code&gt;&amp;lt;database_name&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Make sure you change the &lt;code&gt;nodebb.rb&lt;/code&gt; line concerning the connection to MongoDB to include the database you defined above: &lt;code&gt;@client = adapter.new('mongodb://127.0.0.1:27017/&amp;lt;database_name&amp;gt;')&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;It is possible that NodeBB has created some orphan posts or other bad state inside the database. These abnormalities can break the migration script, bringing the process to a halt. This is what happened in my case, and it's the crux of this blog post.&lt;/li&gt;

&lt;/ol&gt;

&lt;h2&gt;
  
  
  The culprit is 3.14
&lt;/h2&gt;

&lt;p&gt;I used to get some bad states in the forum: some topics, once created, would be impossible to retrieve, resulting in a 404 instead.&lt;/p&gt;

&lt;p&gt;As there were already internal discussions about migrating from NodeBB to another platform, I did not spend time pinning down the issue, as it was very intermittent and I had myriad other things to look after.&lt;/p&gt;

&lt;p&gt;After some time, I discovered that the reason for this was the keyword &lt;code&gt;Raspberry pi&lt;/code&gt; in the topic name. Every time a topic name included that keyword, the whole topic would enter a bad state, with 404 every single time and manual deletion of the topic (or change of the topic name) as the only solution.&lt;/p&gt;

&lt;p&gt;Again, I chose not to debug the issue any further. That was a mistake.&lt;/p&gt;

&lt;h3&gt;
  
  
  We found it!
&lt;/h3&gt;

&lt;p&gt;After talking with the good folks at NodeBB for this migration, I informed them of the bug, in case they wanted to dig deeper to evaluate if this is a wider problem or specific to my instance. They were very happy to investigate, and indeed they found the culprit.&lt;/p&gt;

&lt;p&gt;They use a specific regex on their reverse proxy that checks for common file extensions that they don't serve. One of these extensions is &lt;code&gt;.asp&lt;/code&gt; which, without the &lt;code&gt;^&lt;/code&gt; or &lt;code&gt;$&lt;/code&gt; qualifier, can match &lt;code&gt;raspberry pi&lt;/code&gt;. By updating their rules, they were able to fix this.&lt;/p&gt;
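&lt;p&gt;My reading of the bug (the exact NodeBB proxy rule isn't public, so the pattern below is only a plausible reconstruction): an unescaped dot in the extension check acts as a wildcard, so the "rasp" in "raspberry" matches. A quick Ruby sketch:&lt;/p&gt;

```ruby
# Hypothetical reconstruction of the proxy bug. In a regex, an
# unescaped "." matches ANY character, so ".asp" also matches the
# "rasp" inside "raspberry".
blocked = /.asp/     # buggy: the dot is a wildcard, matches "rasp"
fixed   = /\.asp$/   # dot escaped and anchored to the end of the path

blocked.match?("monitor-a-raspberry-pi-with-netdata")   # true
fixed.match?("monitor-a-raspberry-pi-with-netdata")     # false
fixed.match?("default.asp")                             # true
```

Escaping the dot and anchoring the pattern to the end of the path removes the false positive.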

&lt;p&gt;But even with the culprit found, the bad states had already been created inside the database and, unbeknownst to me, they would come back to haunt me during the migration.&lt;/p&gt;

&lt;h2&gt;
  
  
  The migration
&lt;/h2&gt;

&lt;p&gt;When I tried to test the migration, the script would crash, consistently and without giving much information. With the help of some debugging "print" statements that I put in the script, I found that it would crash in the area of the topics import.&lt;/p&gt;

&lt;p&gt;First, it fetches a list of topics; then it starts pulling each topic from the database, and for each topic it pulls all its posts.&lt;/p&gt;

&lt;p&gt;It's simple really.&lt;/p&gt;

&lt;p&gt;But, to my disappointment, the script would crash when it tried to pull the posts of a particular topic: the one with the &lt;code&gt;raspberry pi&lt;/code&gt; keyword in its title.&lt;/p&gt;

&lt;p&gt;When I tried to fetch the topic from the database, using MongoCLI, I was successful, but when I tried to fetch the first post of that particular topic, I couldn't.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bingo!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Apparently, for the &lt;code&gt;Raspberry pi&lt;/code&gt; topics, while the topic existed, the post did not, resulting in a &lt;code&gt;null&lt;/code&gt; return for &lt;code&gt;post_id&lt;/code&gt; and crashing the whole script. In Nodebb, the &lt;code&gt;post_id&lt;/code&gt; of the first post of a topic is the &lt;code&gt;mainPid&lt;/code&gt;.&lt;/p&gt;
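&lt;p&gt;The failure mode can be reproduced in a few lines of Ruby. This is an illustrative sketch with made-up data, not the actual importer code:&lt;/p&gt;

```ruby
# A topic whose mainPid points at a post that does not exist: the
# lookup returns nil, and calling a method on nil raises NoMethodError.
posts  = { 1 => { content: "hello world" } }   # post_id => post record
topics = [
  { tid: 98, mainPid: 1 },   # healthy topic
  { tid: 99, mainPid: 0 },   # orphaned: post 0 does not exist
]

# Naive lookup, as the importer effectively does: crashes on topic 99.
def first_post_naive(topic, posts)
  posts[topic[:mainPid]][:content]
end

# Defensive variant: skip topics whose main post is missing.
def first_post_guarded(topic, posts)
  post = posts[topic[:mainPid]]
  post.nil? ? nil : post[:content]
end

first_post_naive(topics[0], posts)     # "hello world"
first_post_guarded(topics[1], posts)   # nil, the orphan is skipped safely
```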

&lt;p&gt;Here is the data structure of the bad topic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;ObjectId(&lt;/span&gt;&lt;span class="s2"&gt;"5f621635d53e46aab957f13b"&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"_key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"topic:99"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"cid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"lastposttime"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1600263733946&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"mainPid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"postcount"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"slug"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"99/monitor-pi-hole-and-a-raspberry-pi-with-netdata"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"tid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;99&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1600263733946&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Monitor Pi-hole (and a Raspberry Pi) with Netdata"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"uid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"viewcount"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"thumb"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"deleted"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"deletedTimestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1600263902138&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"deleterUid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;37&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"mergeIntoTid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"mergedTimestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1600263902160&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"mergerUid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;37&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that I knew the exact problem, I had to think of a quick solution. &lt;/p&gt;

&lt;p&gt;Here is the function in &lt;code&gt;mongo.rb&lt;/code&gt; that fetches the list of topics from MongoDB.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;topics&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;offset&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;page_size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2000&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="n"&gt;topic_keys&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;mongo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;_key: &lt;/span&gt;&lt;span class="s1"&gt;'topics:tid'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;skip&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;offset&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;limit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;page_size&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;pluck&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="n"&gt;topic_keys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;topic_key&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;topic&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;topic_key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I assumed that if I removed the particular topic from that list of available topics, even though it still existed in the database, the script would not try to import it and thus the migration would succeed.&lt;/p&gt;

&lt;p&gt;I ran: &lt;code&gt;db.objects.find({_key:"topics:tid"})&lt;/code&gt; which returned a large number of documents, like this one: &lt;code&gt;{ "_id" : ObjectId("5fb48402d53e46aab9f70b07"), "_key" : "topics:tid", "value" : "198", "score" : 1605665794715 }&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Thus, I needed to delete the document with &lt;code&gt;"value":"99"&lt;/code&gt; by running &lt;code&gt;db.objects.remove({_key:"topics:tid", "value": "99"})&lt;/code&gt; and &lt;em&gt;voila&lt;/em&gt;, the migration script was able to progress normally.&lt;/p&gt;
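
&lt;p&gt;The same idea can be sketched in code. Below is a &lt;em&gt;hypothetical&lt;/em&gt; pre-filter (written in Python rather than the migration script's Ruby) that drops known-bad values from the &lt;code&gt;topics:tid&lt;/code&gt; list before an import; the skip set and document shapes are illustrative, not taken from the actual script.&lt;/p&gt;

```python
# Hypothetical pre-filter: drop topic documents whose tid is known to
# break the import. The skip set below is illustrative.
SKIP_VALUES = {"99"}

def filter_topics(topic_documents):
    """Keep only the topic documents that are safe to import."""
    return [doc for doc in topic_documents if doc["value"] not in SKIP_VALUES]

docs = [
    {"_key": "topics:tid", "value": "99"},   # the orphaned topic
    {"_key": "topics:tid", "value": "198"},  # a healthy topic
]
print(filter_topics(docs))  # only the "198" document remains
```

&lt;p&gt;Deleting the document in MongoDB, as I did, achieves the same effect one layer lower, without touching the script at all.&lt;/p&gt;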

&lt;p&gt;In case you are interested, you can find the database schema on the NodeBB &lt;a href="https://docs.nodebb.org/development/database-structure/" rel="noopener noreferrer"&gt;website&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Here are the steps for the migration in a bite-sized format:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Import the mongodump into a local instance of MongoDB.&lt;/li&gt;
&lt;li&gt;Build Discourse from source and set up the Discourse instance.&lt;/li&gt;
&lt;li&gt;Run the script, which exports the data from MongoDB, transforms it, and imports it into PostgreSQL.&lt;/li&gt;
&lt;li&gt;Export Discourse instance through the Admin panel and import it to the production server.

&lt;ol&gt;
&lt;li&gt;In case you are using a Hosted version of Discourse, you will need to contact support in order to restore the uploaded backup.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;Make modifications and prepare the forum for production use.&lt;/li&gt;

&lt;/ol&gt;

&lt;h1&gt;
  
  
  Final words
&lt;/h1&gt;

&lt;p&gt;The last problem that I encountered during the migration was related to the profile pictures that the script tried to import. Apparently, some had been pulled from the Internet and contained special characters which the script could not parse.&lt;/p&gt;

&lt;p&gt;In the spirit of the &lt;a href="https://www.investopedia.com/terms/p/paretoprinciple.asp" rel="noopener noreferrer"&gt;Pareto Principle&lt;/a&gt; of 80/20, I am pretty happy with 80% of the perfect result for only 20% of the total effort. I simply &lt;em&gt;commented out&lt;/em&gt; the profile picture functions and went on with my life, and the migration.&lt;/p&gt;

&lt;p&gt;Although this results in a &lt;em&gt;worse&lt;/em&gt; user experience, as existing users will have to re-upload their profile pictures, the other option was not viable: I would have had to understand the exact issue and change the script, possibly learning some Ruby in the process. I simply didn't have the time.&lt;/p&gt;

&lt;p&gt;Because I am laying the foundations of a Developer Relations program, I have to be ruthless with prioritization, something I have not always done particularly successfully. But since this is a process I am continuously learning, I opted for the quick-and-dirty solution here.&lt;/p&gt;

&lt;p&gt;In the end, what really matters is to improve the experience of the community as a whole, making sure that it grows steadily.&lt;/p&gt;

&lt;p&gt;Consider following me on &lt;a href="https://twitter.com/odysseas_lam" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; for more stuff on tech, philosophy and startups ✌️&lt;/p&gt;

</description>
      <category>devrel</category>
      <category>devops</category>
      <category>tutorial</category>
      <category>mongodb</category>
    </item>
    <item>
      <title>Introduction to StatsD</title>
      <dc:creator>Odysseas Lamtzidis</dc:creator>
      <pubDate>Mon, 15 Feb 2021 14:00:14 +0000</pubDate>
      <link>https://dev.to/netdata/introduction-to-statsd-1ci9</link>
      <guid>https://dev.to/netdata/introduction-to-statsd-1ci9</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9Qgh66Uk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5160ztstkjwl8ng46eoj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9Qgh66Uk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5160ztstkjwl8ng46eoj.png" alt="Intro image"&gt;&lt;/a&gt;StatsD is an industry-standard technology stack for monitoring applications and instrumenting any piece of software to deliver custom metrics. The StatsD architecture is based on delivering the metrics via UDP packets from any application to a central statsD server. Although the original StatsD server was written in Node.js, there are many implementations today, with Netdata being one of them.&lt;/p&gt;

&lt;p&gt;StatsD makes it easier for you to instrument your applications, delivering value around three main pillars: open-source, control, and modularity. That’s a real windfall for full-stack developers who need to code quickly, troubleshoot application issues on the fly, and often don’t have the necessary background knowledge to use complex monitoring platforms.&lt;/p&gt;

&lt;p&gt;First and foremost, StatsD is an open-source standard, meaning that vendor lock-in is simply not possible. With most of the monitoring solutions offering a StatsD server, you know that your instrumentation will play nicely with any solution you might want to use in the future.&lt;/p&gt;

&lt;p&gt;The second is that you have absolute control over the data you send, since the StatsD server just listens for metrics. You can choose how, when, or why to send data from any application you build, whether it’s in aggregate or as highly cardinal data points. You also don’t need to spend any time configuring the StatsD server, since it will accept any metrics in any form you choose via your instrumentation.&lt;/p&gt;

&lt;p&gt;Finally, there is a complete decoupling of each component of the stack. The client doesn’t care about the implementation of the server, and the server is agnostic about the backend. You can mix and match any combination of client, server, and backend that works best for you, or migrate between them as your needs change.&lt;/p&gt;

&lt;p&gt;Historically, it has always been easier to measure and collect metrics about systems and networks than applications. In 2011, Erik Kastner developed StatsD while working at Etsy, to collect metrics from instrumented code. The original implementation, in Node.js, listened on a UDP port for incoming metrics data, extracted it, and periodically sent batches of metrics to Graphite. Since then, countless applications have implemented StatsD and can be configured to send their metrics to any StatsD server, while the number of available libraries makes it trivial to use the protocol in any language.&lt;/p&gt;

&lt;h1&gt;
  
  
  How does StatsD work?
&lt;/h1&gt;

&lt;p&gt;The architecture of StatsD is divided into 3 main pieces: client, server, and backend. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;client&lt;/strong&gt; is what creates and delivers metrics. In most cases, this is a StatsD library, added to your application, that pushes metrics at the specific points where you add the relevant code.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;server&lt;/strong&gt; is a daemon process responsible for listening for metric data as it's pushed from the client, batching it, and sending it to the backend.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;backend&lt;/strong&gt; is where metric data is stored for analysis and visualization.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;StatsD uses UDP packets because the client/server both reside on the same host, where packet loss is minimal and you can get the maximum throughput with the least amount of overhead. TCP is also an option, in case the client/server implementations reside on different hosts and the deliverability of metrics is a primary concern; in that case, the metrics collection speed will be lower due to the overhead of TCP.&lt;/p&gt;
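
&lt;p&gt;The wire format itself is trivial: each metric is a single &lt;code&gt;name:value|type&lt;/code&gt; line sent as a UDP datagram. Here is a minimal sketch in Python; the metric names are illustrative, and 8125 is the conventional StatsD port.&lt;/p&gt;

```python
import socket

def statsd_packet(name, value, metric_type):
    """Format a metric in the StatsD line protocol: name:value|type."""
    return f"{name}:{value}|{metric_type}".encode()

# "c" = counter, "g" = gauge, "ms" = timer
packet = statsd_packet("myapp.requests", 1, "c")
print(packet)  # b'myapp.requests:1|c'

# Fire-and-forget delivery over UDP to a StatsD server on the default port.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 8125))
sock.close()
```

&lt;p&gt;Because the send is fire-and-forget, the instrumented application never blocks waiting on the monitoring stack, which is exactly the property that makes UDP attractive here.&lt;/p&gt;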

&lt;p&gt;In case you are wondering about the difference between TCP and UDP, this image is most illustrative:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AkgNvpTd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://ydevern.files.wordpress.com/2018/09/tcp-vs-udp.png%3Fw%3D809" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AkgNvpTd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://ydevern.files.wordpress.com/2018/09/tcp-vs-udp.png%3Fw%3D809" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://ydevern.wordpress.com/2018/09/26/ccna-udp-vs-tcp/"&gt;Source&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;More often than not, an HTTP-based connection is used to send the metrics from the server to the backend, and because the backend is used for long-term storage and analysis, it often resides on a different host than the server and clients.&lt;/p&gt;

&lt;h1&gt;
  
  
  StatsD in &lt;a href="https://netdata.cloud"&gt;Netdata&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;Netdata is a fully featured StatsD server, meaning it collects formatted metrics from any application that you instrumented with your library of choice. Netdata is also its own backend implementation, as it offers instant visualization and long-term storage using the embedded time-series database (TSDB). When you install Netdata, you immediately get a fully functional StatsD implementation running on port 8125.&lt;/p&gt;

&lt;p&gt;Since StatsD uses UDP or TCP to send instrumented metrics, either across localhost or between separate nodes, you’re free to deploy your application in whatever way works best for you, and it can still connect to Netdata’s server implementation. As soon as your application exposes metrics and starts sending packets on port 8125, Netdata turns the incoming metrics into charts and visualizes them in a meaningful fashion. &lt;/p&gt;

&lt;p&gt;Since there are a myriad of different setups, the default charts may not always be organized the way you want. Netdata's robust server implementation can be configured to group incoming metrics into charts that make sense for your application, so you can easily improve the visualization with some simple modifications. &lt;/p&gt;

&lt;p&gt;Because StatsD is a robust, mature technology, developers have built libraries to easily instrument applications in most popular languages.&lt;/p&gt;

&lt;p&gt;Python:  &lt;a href="https://github.com/jsocol/pystatsd"&gt;https://github.com/jsocol/pystatsd&lt;/a&gt;&lt;br&gt;
Python Django: &lt;a href="https://github.com/WoLpH/django-statsd"&gt;https://github.com/WoLpH/django-statsd&lt;/a&gt;&lt;br&gt;
Java: &lt;a href="https://github.com/tim-group/java-statsd-client"&gt;https://github.com/tim-group/java-statsd-client&lt;/a&gt;&lt;br&gt;
Clojure: &lt;a href="https://github.com/pyr/clj-statsd"&gt;https://github.com/pyr/clj-statsd&lt;/a&gt;&lt;br&gt;
Node.js/JavaScript: &lt;a href="https://github.com/sivy/node-statsd"&gt;https://github.com/sivy/node-statsd&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Taking the example from pystatsd, you only need a reachable Netdata Agent (locally or over the internet) and a couple of lines of code. This hello_world example illustrates just how simple it is to send any metric you care about to Netdata and instantly visualize it. &lt;/p&gt;

&lt;p&gt;Even with no configuration at all, Netdata automatically creates charts for you. Netdata, being a robust monitoring agent, is also capable of organizing incoming metrics in any way you find most meaningful.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;statsd&lt;/span&gt;
&lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;statsd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;StatsClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'localhost'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;8125&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;incr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'foo'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# Increment the 'foo' counter.
&lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nb"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;100000000&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
   &lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;incr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'bar'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;incr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'foo'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
       &lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;decr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'bar'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
       &lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;timing&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'stats.timed'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;320&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# Record a 320ms 'stats.timed'.
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Netdata’s StatsD server is also quite performant, which means you can monitor applications where they run without concerns over bottlenecks or restricting resources:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Netdata StatsD is fast. It can collect more than 1,200,000 metrics per second on modern hardware, more than 200Mbps of sustained StatsD traffic, using one CPU core.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Netdata does this on top of gathering metrics from other data sources. Netdata monitors an application’s full stack, from hardware to operating system to underlying services, organized automatically into meaningful categories. Every available metric is nicely organized automatically into a single dashboard.&lt;/p&gt;

&lt;p&gt;Ready to get started?&lt;br&gt;
In the next part of the StatsD series, we are going to illustrate how to configure Netdata to organize the metrics of any application, using K6 as our use-case. &lt;/p&gt;

&lt;p&gt;If you can’t wait until then, join our Community Forums where we have kickstarted a discussion around StatsD.&lt;/p&gt;

&lt;p&gt;Here are a couple of interesting resources to get you started with StatsD:&lt;/p&gt;

&lt;p&gt;StatsD GitHub &lt;a href="https://github.com/statsd/statsd"&gt;repository&lt;/a&gt;&lt;br&gt;
&lt;a href="https://medium.com/@DoorDash/scaling-statsd-84d456a7cc2a"&gt;Scaling StatsD in DoorDash&lt;/a&gt;&lt;br&gt;
Netdata StatsD reference &lt;a href="https://learn.netdata.cloud/docs/agent/collectors/statsd.plugin"&gt;documentation&lt;/a&gt;&lt;/p&gt;

</description>
      <category>netdata</category>
      <category>statsd</category>
      <category>monitoring</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Introduction to community repository: Consul, Ansible, ML</title>
      <dc:creator>Odysseas Lamtzidis</dc:creator>
      <pubDate>Mon, 16 Nov 2020 16:46:02 +0000</pubDate>
      <link>https://dev.to/netdata/introduction-to-community-repository-consul-ansible-ml-4hma</link>
      <guid>https://dev.to/netdata/introduction-to-community-repository-consul-ansible-ml-4hma</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;The post was originally posted on the &lt;a href="https://www.netdata.cloud/blog/welcome-to-netdatas-community-repository-consul-ansible-ml/"&gt;Netdata blog&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QYd9Z7L8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/csg9ef98iqi6g78y785g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QYd9Z7L8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/csg9ef98iqi6g78y785g.png" alt="Cover Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On our journey to democratize monitoring, we are proud to have open source at the core of both our products and our company values. What started as a project born out of frustration with the lack of existing alternatives (see &lt;a href="https://www.rexfeng.com/blog/2016/01/anger-driven-development/"&gt;anger-driven development&lt;/a&gt;) quickly became one of the most starred open-source projects on all of GitHub. &lt;/p&gt;

&lt;p&gt;Fast-forward a couple of years, and the Netdata Agent, our open-source monitoring agent, is maturing into the best single-node monitoring experience, offering unparalleled efficiency and thousands of metrics per second. At the same time, we have gathered a considerable community on our &lt;a href="https://github.com/netdata/netdata"&gt;GitHub repository&lt;/a&gt; and new forums.&lt;/p&gt;

&lt;p&gt;As the community grows, and considering our belief that extensibility is key to adoption, it was only natural to start brainstorming a way to share code and sample applications that supercharge the user experience and the Netdata Agent’s capabilities. &lt;/p&gt;

&lt;p&gt;Thus, without further ado, please say hello to our &lt;a href="https://github.com/netdata/community"&gt;Community Repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Z1WkbeYr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.netdata.cloud/wp-content/uploads/2020/11/netdata-community.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Z1WkbeYr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.netdata.cloud/wp-content/uploads/2020/11/netdata-community.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Although still in its infancy, we expect this repository to be filled by community members who want to share their experience of running Netdata in a production environment or integrated into a technological stack. At the moment, the repository will be used to house all sample applications, which are divided into categories, depending on the use case.&lt;/p&gt;

&lt;p&gt;Currently, there are three example applications, all contributed by the Netdata team, which were originally developed for internal use. Let’s take a look at them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuration Management
&lt;/h2&gt;

&lt;p&gt;The first sample application is one I built that focuses on the issue of configuration management of an arbitrary number of Netdata Agents. More specifically, I opted to use &lt;a href="https://www.consul.io/"&gt;Consul&lt;/a&gt;, an amazing open-source project by HashiCorp, to dynamically manage the configuration of a Netdata Agent. The keyword is “dynamically”: Whenever I choose to change a configuration variable, the Netdata Agent restarts automatically so that it can pick up the change from the configuration files.&lt;/p&gt;

&lt;p&gt;Consul, per their documentation, is a “service mesh solution providing a full-featured control plane with service discovery, configuration, and segmentation functionality”. As such, Consul is routinely used already in cloud-native applications, and it’s ideal for a simple key/value store that we can use to house the configuration variables that we wish to dynamically change. Since Netdata can’t pick up configuration from a RESTful interface, we use consul-template, again an open-source tool by HashiCorp, which watches a Consul node for a specific number of keys, picks up the changes to their values and places them into the templates, generating the changed configuration files in the process.&lt;/p&gt;
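
&lt;p&gt;As a sketch, a consul-template configuration for this pattern might look like the following; the paths, key name, and restart command are illustrative, not copied from the actual sample application.&lt;/p&gt;

```hcl
# consul-template watches the keys referenced in the template and re-renders
# the file every time one of them changes. All names below are illustrative.
template {
  # Template with {{ key "service/netdata/history" }} style placeholders
  source      = "/etc/netdata/netdata.conf.ctmpl"

  # Rendered configuration file that the Netdata Agent reads
  destination = "/etc/netdata/netdata.conf"

  # Executed after rendering, so the Agent picks up the new configuration
  command     = "systemctl restart netdata"
}
```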

&lt;p&gt;The code and documentation for this sample application can be found in the specific &lt;a href="https://github.com/netdata/community/tree/main/configuration-management/consul-quickstart"&gt;consul-quickstart directory&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Machine Learning and Netdata Agent’s API
&lt;/h2&gt;

&lt;p&gt;The second contribution came from &lt;a href="https://www.netdata.cloud/author/amaguire/"&gt;Andrew Maguire&lt;/a&gt;, who contributed a few examples built on the &lt;a href="https://registry.my-netdata.io/swagger/#/default/get_data"&gt;Netdata Agent’s API&lt;/a&gt;. The API offers anyone the ability to extract data from the Netdata Agent in an extremely efficient way and build real-time applications on top of it. He leveraged his in-house &lt;a href="https://github.com/netdata/netdata-pandas/tree/master/"&gt;Python library&lt;/a&gt; to automatically extract data, load it into pandas DataFrames, and enable live ML capabilities, such as anomaly detection.&lt;/p&gt;

&lt;p&gt;You can find the examples in the &lt;a href="https://github.com/netdata/community/tree/main/netdata-agent-api/netdata-pandas"&gt;appropriate directory&lt;/a&gt; of the community repository and open them in Google Colab. We suggest Google Colab not only because it’s free, but also because they spin up a VM and install all the required dependencies, making it the fastest way to try out the examples and play with the API. To open them on Google Colab, simply open a notebook on GitHub, and click on the Open in Colab button.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automatic provisioning of Netdata Agents
&lt;/h2&gt;

&lt;p&gt;Last but not least, &lt;a href="https://www.netdata.cloud/author/joel/"&gt;Joel Hans&lt;/a&gt; pulled together the scripts he had created to automatically provision and claim any number of Netdata Agents on remote servers. The sample application is enabled by Ansible, a popular system provisioning, configuration management, and infrastructure-as-code tool. The user defines a set of steps in a &lt;code&gt;.yaml&lt;/code&gt; file, called a playbook, and then Ansible is responsible for running this playbook against a number of hosts, with SSH as the only requirement. &lt;/p&gt;

&lt;p&gt;With &lt;a href="https://www.ansible.com/"&gt;Ansible&lt;/a&gt;, Joel can install and claim any number of Netdata Agents automatically, so that he can access and monitor his nodes in a matter of minutes, through Netdata Cloud. It’s that easy. You can learn more in the &lt;a href="https://learn.netdata.cloud/guides/deploy/ansible"&gt;guide&lt;/a&gt;.&lt;/p&gt;
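
&lt;p&gt;As a sketch, such a playbook might look like the following; the host group, file paths, and claim-token variable are illustrative rather than copied from Joel's actual playbook.&lt;/p&gt;

```yaml
# playbook.yml -- a minimal, illustrative playbook; run with:
#   ansible-playbook -i inventory playbook.yml
- hosts: netdata_nodes        # host group from your inventory (illustrative)
  become: true
  tasks:
    - name: Download the Netdata kickstart script
      get_url:
        url: https://my-netdata.io/kickstart.sh
        dest: /tmp/netdata-kickstart.sh
        mode: "0755"

    - name: Install Netdata and claim the node to Netdata Cloud
      command: >
        /tmp/netdata-kickstart.sh --non-interactive
        --claim-token {{ claim_token }}
```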

&lt;h2&gt;
  
  
  Now, it’s your turn
&lt;/h2&gt;

&lt;p&gt;The repository is up and running, but we need you to participate. If you are using any of the aforementioned tools and platforms and feel that we could have done something in a better way, please do let us know and make a pull request with your suggestions. &lt;/p&gt;

&lt;p&gt;If, on the other hand, you are using Netdata with another application in a way that greatly improves the experience, please do create a README about the project and PR it to the appropriate category. The value of this repository is of a compounding nature: the more examples we get, the more value our users (like you) receive, and the popularity of the repository will invite even more sample applications.&lt;/p&gt;

&lt;p&gt;See you all on our repo!&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>monitoring</category>
      <category>machinelearning</category>
      <category>devops</category>
    </item>
    <item>
      <title>Home webserver setup on a Raspberry pi, using balena and Nginx</title>
      <dc:creator>Odysseas Lamtzidis</dc:creator>
      <pubDate>Sun, 26 Apr 2020 16:22:27 +0000</pubDate>
      <link>https://dev.to/odyslam/home-webserver-setup-on-a-raspberry-pi-using-balena-and-nginx-2kj5</link>
      <guid>https://dev.to/odyslam/home-webserver-setup-on-a-raspberry-pi-using-balena-and-nginx-2kj5</guid>
      <description>&lt;p&gt;The post was originally posted on my &lt;a href="https://odyslam.me/blog/balena-nginx-rpi/"&gt;blog&lt;/a&gt;, which is actually hosted on this setup, at my home.&lt;/p&gt;

&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;In this post, we will be using a spare Raspberry Pi 4 to host our very own &lt;em&gt;website&lt;/em&gt; using the internet connection of our house.&lt;/p&gt;

&lt;p&gt;We will start with some introductory terms to get a lay of the land and then we will continue with the tutorial itself. &lt;/p&gt;

&lt;p&gt;If you are familiar with the relevant terms (&lt;em&gt;IP, Domain Name&lt;/em&gt;, etc.), go ahead and jump to Let's get to it.&lt;/p&gt;

&lt;h1&gt;
  
  
  Table of Contents
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;
Table of Contents

&lt;ul&gt;
&lt;li&gt;Static Website&lt;/li&gt;
&lt;li&gt;Webserver&lt;/li&gt;
&lt;li&gt;Blogging software - Jekyll&lt;/li&gt;
&lt;li&gt;Domain&lt;/li&gt;
&lt;li&gt;What's an IP&lt;/li&gt;
&lt;li&gt;But what it has to do with domains?&lt;/li&gt;
&lt;li&gt;Dynamic IPs aka "The Plot Thickens"&lt;/li&gt;
&lt;li&gt;Wait a minute, I don't have a domain name&lt;/li&gt;
&lt;li&gt;balena.io&lt;/li&gt;
&lt;li&gt;So what's the deal, exactly?&lt;/li&gt;
&lt;li&gt;Complimentary Software:&lt;/li&gt;
&lt;li&gt;Certbot&lt;/li&gt;
&lt;li&gt;Netdata&lt;/li&gt;
&lt;li&gt;Architecture&lt;/li&gt;
&lt;li&gt;Components&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
Let's get to it

&lt;ul&gt;
&lt;li&gt;Provisioning the Device&lt;/li&gt;
&lt;li&gt;Installing the Software&lt;/li&gt;
&lt;li&gt;Jekyll&lt;/li&gt;
&lt;li&gt;Installing nginx-in-balena&lt;/li&gt;
&lt;li&gt;Configuring the Software&lt;/li&gt;
&lt;li&gt;Environment Variables&lt;/li&gt;
&lt;li&gt;nginx configuration&lt;/li&gt;
&lt;li&gt;ddclient configuration&lt;/li&gt;
&lt;li&gt;Configuring the environment&lt;/li&gt;
&lt;li&gt;Static IP&lt;/li&gt;
&lt;li&gt;Port Forwarding&lt;/li&gt;
&lt;li&gt;Deploy&lt;/li&gt;
&lt;li&gt;Deploy the project to the device&lt;/li&gt;
&lt;li&gt;Generating the SSL certificate&lt;/li&gt;
&lt;li&gt;Updating your certificates&lt;/li&gt;
&lt;li&gt;Push new content to the website&lt;/li&gt;
&lt;li&gt;Push new content to the website - For advanced users&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Comments&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Static Website
&lt;/h2&gt;

&lt;p&gt;So, this idea came to me when I was considering alternatives for a new blog I wanted to start. Up to this point, I had used GitHub Pages, which hosts for free any &lt;em&gt;static&lt;/em&gt; website that belongs to an organization, a project, or a person.&lt;/p&gt;

&lt;p&gt;For those who are not familiar with web programming, as we read from Wikipedia:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A static web page (sometimes called a flat page or a stationary page) is a web page that is delivered to the user's web browser exactly as stored, in contrast to dynamic web pages which are generated by a web application.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So, the website is not supported by a back-end web application; it's only a set of &lt;code&gt;.html&lt;/code&gt;, &lt;code&gt;.css&lt;/code&gt;, and &lt;code&gt;.js&lt;/code&gt; files that the server sends to the browser for the user to view. WordPress sites, for example, &lt;strong&gt;are not&lt;/strong&gt; static, since they are supported by a &lt;code&gt;PHP&lt;/code&gt; server, a &lt;code&gt;SQL&lt;/code&gt; database, and various other components.&lt;/p&gt;

&lt;h2&gt;
  
  
  Webserver
&lt;/h2&gt;

&lt;p&gt;In our case, to keep things as simple and &lt;strong&gt;as lightweight&lt;/strong&gt; as possible, we will be serving a static website using a static webserver, &lt;strong&gt;Nginx&lt;/strong&gt;. It is one of the oldest and most performant webservers, allowing us to serve up to 1000 users without our Raspberry Pi breaking a sweat.&lt;/p&gt;

&lt;p&gt;Nginx is a very robust web server that supports a myriad of different uses, from serving a static website to acting as a reverse proxy. Its configuration is very straightforward, as we will see below: in essence, we simply instruct the server to serve a set of static files (our website) whenever there is a connection on a specific port. (&lt;a href="https://www.tutorialspoint.com/computer_fundamentals/computer_ports.htm"&gt;What is a computer port&lt;/a&gt;?)&lt;/p&gt;
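
&lt;p&gt;To make this concrete, the entire job can be described in a handful of lines of nginx configuration; the domain and root path below are placeholders.&lt;/p&gt;

```nginx
server {
    # Listen for plain HTTP on port 80; the domain is a placeholder
    listen 80;
    server_name example.com;

    # Directory containing the generated static files
    root /var/www/blog;
    index index.html;

    # Serve the requested file or directory, otherwise return 404
    location / {
        try_files $uri $uri/ =404;
    }
}
```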

&lt;h2&gt;
  
  
  Blogging software - Jekyll
&lt;/h2&gt;

&lt;p&gt;As we are &lt;em&gt;lazy&lt;/em&gt;, we don't want to write a blog website from scratch, as it would entail considerable overhead for each new post we want to make. What we want, is a framework that will have a certain &lt;em&gt;theme&lt;/em&gt; and which will generate the static files of the blog &lt;strong&gt;for us&lt;/strong&gt;, allowing us to focus solely on the content of the blog.&lt;/p&gt;

&lt;p&gt;Luckily for us, there is a very easy-to-use framework called &lt;strong&gt;Jekyll&lt;/strong&gt;. It was created by GitHub co-founder Tom Preston-Werner. As we read from the project's &lt;a href="https://github.com/jekyll/jekyll"&gt;repository&lt;/a&gt; &lt;code&gt;readme.md&lt;/code&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Jekyll is a simple, blog-aware, static site generator perfect for personal, project, or organization sites. Think of it like a file-based CMS, without all the complexity. Jekyll takes your content, renders Markdown and Liquid templates, and spits out a complete, static website ready to be served by Apache, Nginx or another web server. Jekyll is the engine behind &lt;a href="https://pages.github.com/"&gt;GitHub Pages&lt;/a&gt;, which you can use to host sites right from your GitHub repositories.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The power of Jekyll is that it is super easy to use, so easy, that you don't even need programming knowledge (Verified from personal experience). In essence, you configure a Jekyll &lt;code&gt;theme&lt;/code&gt; using a central configuration file and then you write the blog posts in &lt;code&gt;markdown&lt;/code&gt; format (More on markdown &lt;a href="https://www.markdownguide.org/getting-started/"&gt;here&lt;/a&gt;).&lt;/p&gt;
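
&lt;p&gt;Getting a first site running takes only a few commands (assuming Ruby and RubyGems are already installed); the site name is a placeholder.&lt;/p&gt;

```shell
gem install bundler jekyll    # install Jekyll and its dependency manager
jekyll new my-blog            # scaffold a new site with the default theme
cd my-blog
bundle exec jekyll serve      # build and preview at http://localhost:4000
```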

&lt;h2&gt;
  
  
  Domain
&lt;/h2&gt;

&lt;p&gt;Now that we have the core pieces of the website, it is time to think about the &lt;code&gt;domain name&lt;/code&gt; and the &lt;em&gt;potential&lt;/em&gt; issue of &lt;code&gt;dynamic IP&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If this doesn't sound familiar, let's spend a minute for a computer science 101 super-mini-course.&lt;/p&gt;

&lt;h3&gt;
  
  
  What's an IP
&lt;/h3&gt;

&lt;p&gt;Each computer that is connected to a network is identified by a unique address, or &lt;code&gt;IP&lt;/code&gt;, very much like your home address. The Internet is a global network of computers, thus each server has an &lt;code&gt;IP&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;As your home router is connected to the Internet, it has its own &lt;code&gt;IP&lt;/code&gt; address, which you can find using a service like &lt;a href="https://whatismyipaddress.com/"&gt;whatismyipaddress&lt;/a&gt;. At the same time, the router creates a local network to which all the computers in your home are connected. Each computer therefore has a &lt;code&gt;local IP&lt;/code&gt;, and since they all reach the Internet &lt;strong&gt;through the router&lt;/strong&gt;, they share the same global &lt;code&gt;IP&lt;/code&gt;: that of the router.&lt;/p&gt;

&lt;h3&gt;
  
  
  But what does this have to do with domains?
&lt;/h3&gt;

&lt;p&gt;Because it is hard for a person to remember an &lt;code&gt;IP&lt;/code&gt;, there are services that maintain &lt;strong&gt;huge&lt;/strong&gt; registries in which an &lt;code&gt;IP&lt;/code&gt; is tied to a human-readable &lt;strong&gt;word&lt;/strong&gt;, or &lt;strong&gt;Domain&lt;/strong&gt;. When you pay for a &lt;code&gt;domain name&lt;/code&gt;, you pay to register your &lt;code&gt;IP&lt;/code&gt; in these registries, tying it to the &lt;code&gt;domain name&lt;/code&gt; that you have bought.&lt;/p&gt;

&lt;p&gt;Now, each time someone enters that domain name, the computer automatically queries several Domain Name Servers, searching for an &lt;code&gt;IP&lt;/code&gt; tied to that specific &lt;code&gt;domain name&lt;/code&gt;. When one is found, the browser connects to the server using the &lt;code&gt;IP&lt;/code&gt; and loads the content.&lt;/p&gt;
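&lt;p&gt;You can watch this resolution happen yourself (an illustrative transcript; &lt;code&gt;dig&lt;/code&gt; ships with most Linux distributions):&lt;/p&gt;

```shell
# Ask the DNS for the IP(s) behind a domain name
$ dig +short dev.to

# Or, on systems without dig:
$ nslookup dev.to
```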

&lt;p&gt;The problem arises when our IP is not &lt;code&gt;static&lt;/code&gt; but changes continuously; in other words, when it's &lt;code&gt;dynamic&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dynamic IPs aka "The Plot Thickens"
&lt;/h3&gt;

&lt;p&gt;Many ISPs (Internet Service Providers) around the world offer a &lt;code&gt;dynamic IP&lt;/code&gt;, meaning that the &lt;code&gt;IP&lt;/code&gt; doesn't stay the same but changes now and then, according to the policies of each ISP. This creates a challenge, as the domain name will have to point to a new &lt;code&gt;IP&lt;/code&gt; each time our &lt;code&gt;IP&lt;/code&gt; changes.&lt;/p&gt;

&lt;p&gt;Luckily for us, most &lt;code&gt;domain name&lt;/code&gt; providers offer a service called &lt;strong&gt;Dynamic DNS&lt;/strong&gt;. This service allows the customer to use their API to update the &lt;code&gt;IP&lt;/code&gt; to which the &lt;code&gt;domain name&lt;/code&gt; must point. We will be using a small program called &lt;a href="https://ddclient.net/"&gt;ddclient&lt;/a&gt;, which supports most of the well-known &lt;code&gt;domain name&lt;/code&gt; providers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Wait a minute, I don't have a domain name
&lt;/h3&gt;

&lt;p&gt;If you don't have a domain, go ahead and grab one from one of the major Domain Name providers (just google &lt;code&gt;buy domain name&lt;/code&gt;). Make sure that the Domain Name provider that you choose supports &lt;code&gt;Dynamic DNS&lt;/code&gt; and &lt;code&gt;ddclient&lt;/code&gt;. This &lt;code&gt;guide&lt;/code&gt; was tested using &lt;code&gt;Namecheap&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  balena.io
&lt;/h2&gt;

&lt;p&gt;At this point, it is apparent that we will need to install and configure a bunch of software, not only to bootstrap the website but also to keep it up to date. To do that, we will be using &lt;a href="https://balena.io"&gt;balena.io&lt;/a&gt; to &lt;em&gt;develop&lt;/em&gt; and &lt;em&gt;deploy&lt;/em&gt; our software to the Raspberry Pi with the ease and speed of a &lt;em&gt;cloud-service&lt;/em&gt; provider.&lt;/p&gt;

&lt;p&gt;That's right: by using balena to &lt;em&gt;provision&lt;/em&gt; and &lt;em&gt;manage&lt;/em&gt; our embedded IoT device, we get the same tools and workflows that one would expect from &lt;em&gt;AWS&lt;/em&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  So what's the deal, exactly?
&lt;/h3&gt;

&lt;p&gt;The team behind balena.io were the first people to port Docker to the Raspberry Pi family, showcasing how the &lt;code&gt;container&lt;/code&gt; virtualization paradigm could serve the domain of the Internet of Things.&lt;/p&gt;

&lt;p&gt;Balena now offers a full feature set that enables us to manage devices such as a Raspberry Pi (literally thousands of them) as easily as ever.&lt;/p&gt;

&lt;p&gt;We will develop our application as a &lt;code&gt;multi-container&lt;/code&gt; application, meaning that the distinct services from which the project is constructed will run as distinct containers, completely isolated one from another.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You can think of the &lt;del&gt;docker engine&lt;/del&gt; balena-engine (our IoT-optimized version of docker) as an &lt;strong&gt;oven&lt;/strong&gt; where you can bake both a &lt;em&gt;fish&lt;/em&gt; and a &lt;em&gt;cake&lt;/em&gt; at the same time, and each will taste and smell just fine when you take them out.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In the same sense, each service can run independently, without having to worry about incompatible libraries or different versions. &lt;strong&gt;They will&lt;/strong&gt; &lt;del&gt;taste&lt;/del&gt; &lt;strong&gt;work just fine.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Finally, balena allows us to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;easily access the device's logs&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ssh&lt;/code&gt; into the host OS or one of the containers&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;push&lt;/code&gt; a new release by simply running a command.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This last part is pure &lt;strong&gt;black magic&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/AisOYaOZdrS1i/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/AisOYaOZdrS1i/giphy.gif" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You simply define your application in a &lt;code&gt;docker-compose.yaml&lt;/code&gt;, write a couple of &lt;code&gt;Dockerfiles&lt;/code&gt;, and then just push your project. &lt;strong&gt;balena&lt;/strong&gt; takes care of building the project specifically for your device on its build servers and then simply sends the built project to the device. The smart &lt;code&gt;supervisor&lt;/code&gt; is responsible for downloading and setting up your application according to the &lt;code&gt;docker-compose&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;Developing and managing IoT devices has &lt;strong&gt;never&lt;/strong&gt; been so easy and &lt;em&gt;beautiful&lt;/em&gt;. Here is a sneak peek of the dashboard for our device:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cm5cPFH6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/Mp9pB7M.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cm5cPFH6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/Mp9pB7M.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disclaimer&lt;/strong&gt;: I work at balena.io in the product team.  Thus, you &lt;em&gt;could&lt;/em&gt; say that I am a bit biased.&lt;/p&gt;

&lt;h2&gt;
  
  
  Complementary Software
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Certbot
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;Certbot&lt;/code&gt; is a tool that obtains TLS certificates from &lt;a href="https://letsencrypt.org/"&gt;letsencrypt&lt;/a&gt;, a nonprofit Certificate Authority that provides certificates to anyone who asks. This way, users will be able to connect securely to our website using &lt;code&gt;https&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We will be using the certbot CLI to request a certificate for our website. In essence, certbot places a special file for our webserver to serve. When the authority checks the website, it finds that specific file and verifies that the website (and thus the domain) is indeed ours, &lt;strong&gt;giving us a certificate valid for 90 days&lt;/strong&gt;.&lt;/p&gt;
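&lt;p&gt;For reference, a webroot-based request with the certbot CLI looks roughly like this (a hedged sketch; the exact invocation in this project is wrapped inside the container's scripts, and &lt;code&gt;example.com&lt;/code&gt; is a placeholder):&lt;/p&gt;

```shell
# Ask Let's Encrypt for a certificate, proving domain ownership via a
# file served from the webserver's root directory
$ certbot certonly --webroot \
    -w /usr/share/nginx/html \
    -d example.com -d www.example.com \
    -m hi@example.com --agree-tos
```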

&lt;h3&gt;
  
  
  Netdata
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;Netdata&lt;/code&gt; is a monitoring agent that runs on the device and aggregates, visualizes, and presents various data about the operation of the machine: from fairly simple metrics, such as &lt;code&gt;RAM&lt;/code&gt; usage, to more complicated ones, such as &lt;code&gt;CPU interrupts&lt;/code&gt;. Moreover, it has collectors for specific apps that can auto-detect whether the app is running and start gathering data.&lt;/p&gt;

&lt;p&gt;Although Netdata is fairly complex and customizable, we will be using it because:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;It's super light (about 5% CPU consumption) and thus ideal for the constrained nature of a Raspberry Pi 4.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It can auto-detect &lt;code&gt;nginx&lt;/code&gt; and start gathering data using the webserver's &lt;code&gt;stub_status&lt;/code&gt; page. You can read more about &lt;code&gt;stub_status&lt;/code&gt; &lt;a href="https://easyengine.io/tutorials/nginx/status-page/"&gt;here&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
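&lt;p&gt;For the curious, the &lt;code&gt;nginx&lt;/code&gt; side of that auto-detection is just a tiny status endpoint (an illustrative config fragment, not this project's exact file):&lt;/p&gt;

```conf
# Expose nginx's basic counters (connections, requests) locally
location /stub_status {
    stub_status;
    allow 127.0.0.1;   # only local collectors, e.g. Netdata
    deny all;
}
```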

&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;We will be using the multi-container functionality of the platform, thus the project will consist of several different containers, each one running a specific component. This architecture enables us to isolate one component from another, facilitating the management and configuration of the application.&lt;/p&gt;

&lt;h3&gt;
  
  
  Components
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;webserver:&lt;/strong&gt; Runs the &lt;code&gt;nginx&lt;/code&gt; service and the &lt;code&gt;certbot&lt;/code&gt; for SSL generation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ddclient:&lt;/strong&gt; Runs the &lt;code&gt;ddclient&lt;/code&gt; service&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;netdata monitoring:&lt;/strong&gt; Runs an instance of &lt;a href="https://github.com/netdata/netdata"&gt;Netdata Monitoring software&lt;/a&gt; to overview the load of the server.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;
  
  
  Let's get to it
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Provisioning the Device
&lt;/h2&gt;

&lt;p&gt;The first order of business is to provision our device, a Raspberry Pi 4.&lt;/p&gt;

&lt;p&gt;To do that, create a new account at balena.io, then head over to the &lt;a href="https://www.balena.io/docs/learn/getting-started/raspberrypi4-64/python/"&gt;Get Started Guide&lt;/a&gt; of the balena platform and finish it. It will prompt you to install a &lt;code&gt;demo-app&lt;/code&gt; on your device, but that's ok; we need you to get familiar with the platform before we continue.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disclaimer:&lt;/strong&gt; After you finish with the guide, don't turn off the device, we will need it for later. Just leave it be. Ok? Cool.&lt;/p&gt;

&lt;p&gt;Go ahead, &lt;strong&gt;I'll wait.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JlTxFvn1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://media0.giphy.com/media/pFZTlrO0MV6LoWSDXd/giphy.gif%3Fcid%3Decf05e47767a6b2830a15c5a45f4bad24ff65678e446b4d5%26rid%3Dgiphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JlTxFvn1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://media0.giphy.com/media/pFZTlrO0MV6LoWSDXd/giphy.gif%3Fcid%3Decf05e47767a6b2830a15c5a45f4bad24ff65678e446b4d5%26rid%3Dgiphy.gif" alt="Waiting.."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One Note&lt;/strong&gt;: The first 10 devices on balena.io are &lt;strong&gt;free&lt;/strong&gt;, so your new account will be more than enough for this project.&lt;/p&gt;

&lt;p&gt;Before going forward, we assume that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;You have balena-CLI installed&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You have balena-etcher installed&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You have logged in to balena-CLI&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You have logged in to the balena dashboard&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Installing the Software
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Jekyll
&lt;/h3&gt;

&lt;p&gt;Although this blog post focuses mainly on setting up a balena-powered raspberry pi webserver, we want to give some insight into how to create a website in the first place.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Visit&lt;/strong&gt; Jekyll's website and follow the &lt;a href="https://jekyllrb.com/docs/"&gt;Get started guide&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Get yourself familiar with the Jekyll templating engine&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Search&lt;/strong&gt; for a Jekyll theme that is appealing to you:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="http://jekyllthemes.org/"&gt;http://jekyllthemes.org/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://jekyllthemes.io/"&gt;https://jekyllthemes.io/&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Download&lt;/strong&gt; the theme locally and configure it according to the &lt;strong&gt;theme's&lt;/strong&gt; and &lt;strong&gt;Jekyll's documentation&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Build&lt;/strong&gt; the website according to the &lt;strong&gt;theme's documentation&lt;/strong&gt;; the generated static files will be placed in a directory called &lt;code&gt;_site&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Upload the files inside &lt;code&gt;_site&lt;/code&gt; to a &lt;em&gt;Github Repository&lt;/em&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Disclaimer:&lt;/strong&gt; If you haven't used GitHub before:&lt;/p&gt;

&lt;p&gt;1) Follow this &lt;a href="https://help.github.com/en/enterprise/2.13/user/articles/creating-a-new-repository"&gt;guide&lt;/a&gt; to create a new &lt;code&gt;repository&lt;/code&gt; on GitHub.&lt;/p&gt;

&lt;p&gt;2) Follow this &lt;a href="https://help.github.com/en/github/managing-files-in-a-repository/adding-a-file-to-a-repository"&gt;guide&lt;/a&gt; to upload all your website's source files to the &lt;code&gt;repository&lt;/code&gt; you just created. Simply drag and drop all of them as it is shown in the guide.&lt;/p&gt;

&lt;p&gt;3) &lt;strong&gt;Congratulations!&lt;/strong&gt; You have your very first project on Github!&lt;/p&gt;
&lt;h3&gt;
  
  
  Installing nginx-in-balena
&lt;/h3&gt;

&lt;p&gt;To install the software, we did all the heavy lifting for you: we aggregated all the relevant software and made sure it supports the Raspberry Pi 4 with 64-bit balenaOS. You only have to download a local copy of the project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
git clone https://github.com/balena-io-playground/balena-nginx.git

&lt;span class="nb"&gt;cd &lt;/span&gt;balena-nginx

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are now in the project's directory. &lt;strong&gt;Let's configure it!&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Possibly, the same software will run without problems on a Raspberry pi 3 with either 32 or 64 bit OS. If you test it successfully on a Raspberry pi 3, please do leave a comment and we will update the blog-post accordingly.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Configuring the Software
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Environment Variables
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Go &lt;a href="https://github.com/balena-io-playground/balena-nginx"&gt;here&lt;/a&gt; and download the project's repository locally. If you are not familiar with Github, please refer to their &lt;a href="https://help.github.com/en/github/creating-cloning-and-archiving-repositories/cloning-a-repository"&gt;documentation&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;cd&lt;/code&gt; into the repository using a terminal program (cmd in windows).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open the file &lt;code&gt;.balena/balena.yml&lt;/code&gt; using your favorite code editor. If you don't have one, go ahead and install &lt;a href="https://code.visualstudio.com/?wt.mc_id=vscom_downloads"&gt;Visual Studio Code&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In this file, we need to change a couple of &lt;code&gt;environment variables&lt;/code&gt; which we make available to the services via their respective &lt;code&gt;Dockerfiles&lt;/code&gt;. Go ahead and change the variables:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In order to find the &lt;code&gt;REPO_ZIP_URL&lt;/code&gt;, go to your Github repository, &lt;em&gt;click&lt;/em&gt; on &lt;strong&gt;clone or download&lt;/strong&gt; and then &lt;em&gt;right-click&lt;/em&gt; on &lt;strong&gt;Download ZIP&lt;/strong&gt; and &lt;em&gt;click&lt;/em&gt; on &lt;strong&gt;Copy Link Address&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Fill in the domain of your website according to the certbot example below. Please note that &lt;code&gt;www.domain.com&lt;/code&gt; and &lt;code&gt;domain.com&lt;/code&gt; are two different domains; it is best to include both.
&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;REPO_ZIP_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://github.com/OdysLam/odyslam.github.io/archive/master.zip

&lt;span class="nv"&gt;REPO_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;odyslam.github.io

&lt;span class="nv"&gt;CERTBOT_MAIL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;hi@odyslam.me

&lt;span class="nv"&gt;CERTBOT_DOMAIN_1&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;www.example.com

&lt;span class="nv"&gt;CERTBOT_DOMAIN_2&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;example.com

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These environment variables are not expected to change while the server runs, so we prefer to define them at build time.&lt;/p&gt;

&lt;p&gt;On the other hand, there are two environment variables that can be set from the &lt;code&gt;balena dashboard&lt;/code&gt;; when the &lt;code&gt;nginx&lt;/code&gt; container restarts, it will pick them up.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SYNC_WEBSITE:&lt;/strong&gt; If this environment variable is set to "1", the container will always download the latest version of the website every time it restarts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CERTBOT_FORCED:&lt;/strong&gt; If this environment variable is set to "1", the container will request a new certificate every time it restarts. If the current certificate is still valid, it will simply inform the user that the certificate is up to date and exit.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
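&lt;p&gt;Conceptually, the container's start script only has to check those two values. Here is a minimal sketch of the idea (not the project's actual script; the variables are hard-coded purely so the sketch is self-contained):&lt;/p&gt;

```shell
#!/bin/sh
# Sketch: how a start script could honor the two dashboard variables.
# In the real container they arrive from the balena dashboard; here we
# hard-code them for demonstration.
SYNC_WEBSITE=1
CERTBOT_FORCED=0

if [ "$SYNC_WEBSITE" = "1" ]; then
  echo "SYNC_WEBSITE=1: downloading the latest website release"
fi

if [ "$CERTBOT_FORCED" = "1" ]; then
  echo "CERTBOT_FORCED=1: requesting a new certificate"
else
  echo "CERTBOT_FORCED!=1: keeping the existing certificate"
fi
```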

&lt;p&gt;You can read more about &lt;code&gt;environment variables&lt;/code&gt; in balena, in the &lt;a href="https://www.balena.io/docs/learn/manage/serv-vars/"&gt;documentation&lt;/a&gt;.&lt;br&gt;
You can read more about &lt;code&gt;build-time secrets&lt;/code&gt; in balena, in the &lt;a href="https://www.balena.io/docs/learn/more/masterclasses/cli-masterclass/#81-build-time-secrets"&gt;documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  nginx configuration
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Run the commands below to generate the Diffie-Hellman parameters that &lt;code&gt;nginx&lt;/code&gt; will use for &lt;em&gt;SSL&lt;/em&gt;-related functionality. As this might take a few minutes, go ahead and read about them in this &lt;a href="https://security.stackexchange.com/questions/94390/whats-the-purpose-of-dh-parameters"&gt;Security Stack Exchange question&lt;/a&gt;. Welcome to the world of &lt;em&gt;cryptography&lt;/em&gt;.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;nginx

openssl dhparam &lt;span class="nt"&gt;-out&lt;/span&gt; dhparam.pem 2048

&lt;span class="nb"&gt;cd&lt;/span&gt; ..

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Using a text editor, open &lt;code&gt;nginx.conf&lt;/code&gt;, which you will find in the &lt;code&gt;nginx&lt;/code&gt; directory. Head over to the following excerpt and replace &lt;code&gt;www.example.com&lt;/code&gt; and &lt;code&gt;example.com&lt;/code&gt; with your domain name.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;
&lt;span class="n"&gt;server&lt;/span&gt; {
&lt;span class="n"&gt;listen&lt;/span&gt; &lt;span class="m"&gt;443&lt;/span&gt; &lt;span class="n"&gt;ssl&lt;/span&gt; &lt;span class="n"&gt;http2&lt;/span&gt;;
&lt;span class="n"&gt;server_name&lt;/span&gt; &lt;span class="n"&gt;www&lt;/span&gt;.&lt;span class="n"&gt;example&lt;/span&gt;.&lt;span class="n"&gt;com&lt;/span&gt; &lt;span class="n"&gt;example&lt;/span&gt;.&lt;span class="n"&gt;com&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {

listen 80;
listen [::]:80;
server_name www.example.com example.com;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;If you want to read more about the &lt;code&gt;nginx&lt;/code&gt; configuration file and what the various fields mean, you can read more about it here:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Digital ocean &lt;a href="https://www.digitalocean.com/community/tutorials/understanding-the-nginx-configuration-file-structure-and-configuration-contexts"&gt;article&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Nginx &lt;a href="http://nginx.org/en/docs/beginners_guide.html"&gt;Beginner's Guide&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  ddclient configuration
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create a configuration file for &lt;code&gt;ddclient&lt;/code&gt; using a text editor:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You can find examples in the &lt;code&gt;ddclient&lt;/code&gt;'s &lt;a href="https://ddclient.net/#configuration"&gt;documentation&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;If you used Namecheap, you can find an example configuration in the &lt;a href="https://www.namecheap.com/support/knowledgebase/article.aspx/583/11/how-do-i-configure-ddclient"&gt;namecheap documentation&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Place the configuration file you just created into the &lt;code&gt;ddclient&lt;/code&gt; folder&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
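&lt;p&gt;For orientation, a Namecheap-style &lt;code&gt;ddclient.conf&lt;/code&gt; looks roughly like this (an illustrative fragment based on the linked docs; always copy the exact template from your own provider's documentation):&lt;/p&gt;

```conf
# How often (in seconds) to check whether the public IP changed
daemon=300
# Discover the current public IP via an external web service
use=web, web=dynamicdns.park-your-domain.com/getip
protocol=namecheap
server=dynamicdns.park-your-domain.com
login=example.com                     # your domain
password='your-dynamic-dns-password'  # from the Namecheap dashboard
@, www                                # hosts to update
```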
&lt;h2&gt;
  
  
  Configuring the environment
&lt;/h2&gt;

&lt;p&gt;It's time to move on to configuring our environment. In our case, we need to allow incoming connections through the router and make sure that the server will always have a &lt;code&gt;static IP&lt;/code&gt; in the local network.&lt;/p&gt;
&lt;h3&gt;
  
  
  Static IP
&lt;/h3&gt;

&lt;p&gt;Balena allows us to give our device a static IP in a breeze.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Find out the &lt;code&gt;IP&lt;/code&gt; format that the router uses to assign &lt;code&gt;IP&lt;/code&gt;s.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to your balenaCloud dashboard; you can find the IP of your device on the summary page.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3Kvo8EWZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/BgMJUfv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3Kvo8EWZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/BgMJUfv.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use a text editor to open the file &lt;code&gt;static-ip&lt;/code&gt; that you will find in the directory &lt;code&gt;tools&lt;/code&gt; of the repository you downloaded.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Replace the field &lt;code&gt;ROUTER_IP&lt;/code&gt; with the &lt;code&gt;IP&lt;/code&gt; of the router.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Replace the field &lt;code&gt;DEVICE_IP&lt;/code&gt; with the &lt;code&gt;IP&lt;/code&gt; of your balena device with a small change. Change the digits after the last &lt;code&gt;.&lt;/code&gt; to &lt;code&gt;100&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Close the text editor.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here is an example; note that here the &lt;code&gt;IP&lt;/code&gt; has the format &lt;code&gt;192.168.1.X&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
[connection]

id=my-ethernet

type=ethernet

interface-name=eth0

permissions=

secondaries=



[ethernet]

mac-address-blacklist=



[ipv4]

#This is the important line
address1=192.168.1.100/24,192.168.1.1 

dns=8.8.8.8;8.8.4.4;

dns-search=

method=manual



[ipv6]

addr-gen-mode=stable-privacy

dns-search=

method=auto

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Port Forwarding
&lt;/h3&gt;

&lt;p&gt;Since we are building a server, we need to allow people to connect to our Raspberry Pi. Normally, a home router blocks all incoming connections (connections that are not initiated from a device inside the local network), so we need to create a rule telling the router that each connection made to a specific &lt;strong&gt;port&lt;/strong&gt; should be &lt;strong&gt;forwarded&lt;/strong&gt; to the same &lt;strong&gt;port&lt;/strong&gt; of the &lt;strong&gt;Raspberry&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This is because when someone attempts to connect to our &lt;strong&gt;IP&lt;/strong&gt;, they are in essence connecting to our &lt;em&gt;router&lt;/em&gt;, since it functions as the gateway between the Internet and our local network. We therefore want to tell the &lt;em&gt;router&lt;/em&gt; that anyone connecting to the ports we specify really wants to reach our server, so the router must forward the connection to the Raspberry Pi.&lt;/p&gt;

&lt;p&gt;In other words, we need to forward &lt;strong&gt;ports&lt;/strong&gt; &lt;code&gt;80&lt;/code&gt; and &lt;code&gt;443&lt;/code&gt; to the Raspberry pi.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Visit&lt;/strong&gt; &lt;a href="https://portforward.com/"&gt;portforward&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Find your router&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Follow instructions and forward the ports to the device's IP&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
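&lt;p&gt;Once the rule is in place (and your domain points at your public IP), you can check that both ports answer from outside your network (an illustrative transcript; replace &lt;code&gt;example.com&lt;/code&gt; with your own domain):&lt;/p&gt;

```shell
# Port 80 (http) should respond, even if only with a redirect to https
$ curl -I http://example.com

# Port 443 (https) should serve the site once the certificate exists
$ curl -I https://example.com
```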

&lt;p&gt;We are almost there; let's deploy the device.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy
&lt;/h2&gt;

&lt;p&gt;Now that we have everything configured, let's start deploying the various components.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deploy the project to the device
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Open balenaCloud and create a new application for &lt;code&gt;Raspberry pi 4&lt;/code&gt;. You can name it whatever you want. We will assume that you name it &lt;code&gt;bananas&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Download a &lt;code&gt;development&lt;/code&gt; image for that application. Don't bother filling in your &lt;code&gt;wifi&lt;/code&gt; credentials; we will be using &lt;code&gt;ethernet&lt;/code&gt;, as &lt;code&gt;wifi&lt;/code&gt; is not very reliable for a webserver.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/5hc2bkC60heU/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/5hc2bkC60heU/giphy.gif" alt=""&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Burn the image using the best image-flashing app in the world, &lt;strong&gt;balena-etcher&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pull out the sd card and re-insert it in the computer. Copy the file named &lt;code&gt;static-ip&lt;/code&gt; we created earlier into the &lt;code&gt;system-connections&lt;/code&gt; directory of the sd card.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Insert the card into the Raspberry pi and connect it to power.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Go to the &lt;code&gt;balena-nginx&lt;/code&gt; directory, where your &lt;code&gt;docker-compose.yaml&lt;/code&gt; is located, and push the project&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
balena push bananas

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Generating the SSL certificate
&lt;/h3&gt;

&lt;p&gt;To generate the SSL certificate, we don't have to do anything.&lt;/p&gt;

&lt;p&gt;The server will detect the absence of the certificates and will run the &lt;code&gt;certbot&lt;/code&gt; service to register your website. Afterward, it will simply start the server and will serve your website.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;certificates&lt;/code&gt; will be saved into a persistent directory using a functionality called &lt;code&gt;named volumes&lt;/code&gt;. This enables the device to persist your certificates (or any data in that directory).&lt;/p&gt;

&lt;h3&gt;
  
  
  Updating your certificates
&lt;/h3&gt;

&lt;p&gt;You are very organized and want to plan ahead? Sure: certbot will email you a couple of days before your certificates expire so you can renew them.&lt;/p&gt;

&lt;p&gt;When you receive that e-mail, you only need to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Log in&lt;/strong&gt; to balena dashboard&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Go to&lt;/strong&gt; device summary&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Open ssh session&lt;/strong&gt; to the &lt;code&gt;nginx&lt;/code&gt; container&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Type&lt;/strong&gt; &lt;code&gt;certbot renew&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Done&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Push new content to the website
&lt;/h3&gt;

&lt;p&gt;All we have to do in order to push new content to the website is to update the &lt;code&gt;source files&lt;/code&gt; in the directory from which we have configured &lt;code&gt;nginx&lt;/code&gt; to serve our website; in our case, &lt;code&gt;/usr/share/nginx/html/&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To do that, we have a script inside the &lt;code&gt;nginx&lt;/code&gt; container, which downloads the latest version of our website from the &lt;code&gt;GitHub&lt;/code&gt; repository we have defined. &lt;/p&gt;
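&lt;p&gt;In spirit, the script does something like the following (a hypothetical sketch; &lt;code&gt;update-blog.sh&lt;/code&gt; in the repository is the authoritative version, and &lt;code&gt;REPO_ZIP_URL&lt;/code&gt;/&lt;code&gt;REPO_NAME&lt;/code&gt; are the build-time variables we defined earlier):&lt;/p&gt;

```shell
#!/bin/sh
# Hypothetical sketch of the update flow: fetch the latest ZIP of the
# website repository and unpack it into nginx's web root.
curl -L "$REPO_ZIP_URL" -o /tmp/site.zip
unzip -o /tmp/site.zip -d /tmp
cp -r "/tmp/${REPO_NAME}-master/." /usr/share/nginx/html/
```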

&lt;p&gt;To run the script:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;SSH into &lt;code&gt;nginx&lt;/code&gt; container, preferably using &lt;code&gt;balena dashboard&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/update-blog.sh&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;But, what happens if we want to add new content to our website? &lt;/p&gt;

&lt;p&gt;We have to upload the new source files into our &lt;code&gt;GitHub Repository&lt;/code&gt; and then we have to run the &lt;code&gt;update-blog.sh&lt;/code&gt; script in order to download the new version in the server.&lt;/p&gt;

&lt;p&gt;So, in order to push new content to the website:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Change the website's source files&lt;/li&gt;
&lt;li&gt;Upload the new files to the Repository and add a &lt;code&gt;commit message&lt;/code&gt; to describe the changes&lt;/li&gt;
&lt;li&gt;From &lt;code&gt;balena dashboard&lt;/code&gt;, &lt;code&gt;ssh&lt;/code&gt; into the &lt;code&gt;nginx&lt;/code&gt; container and run: &lt;code&gt;/update-blog.sh&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Push new content to the website - For advanced users
&lt;/h3&gt;

&lt;p&gt;The whole &lt;em&gt;process&lt;/em&gt; has been &lt;strong&gt;automated&lt;/strong&gt;: you can simply run the script &lt;code&gt;deploy-dev-local.sh&lt;/code&gt; from the local directory of the project, like this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;./deploy-dev-local.sh "&amp;lt;commit message&amp;gt;"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The field &lt;code&gt;&amp;lt;commit message&amp;gt;&lt;/code&gt; must be replaced with a commit message for the addition to the &lt;code&gt;GitHub&lt;/code&gt; repository, just like you did when you uploaded the files using the website of &lt;code&gt;GitHub&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Before you can use the script, you have to open the file using your favorite text editor and replace the following fields:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# If you use &lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;J_OUTPUT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt; Absolute path to the directory of the website&lt;span class="s1"&gt;'s source files

export REPO = Absolute path to the directory of the website'&lt;/span&gt;s &lt;span class="nb"&gt;source &lt;/span&gt;files

&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;DEV_UUID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt; UUID of the device, can be found from the device&lt;span class="s1"&gt;'s dashboard
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>nginx</category>
      <category>balena</category>
      <category>raspberrypi</category>
      <category>webserver</category>
    </item>
  </channel>
</rss>
