<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Andrew Pillar</title>
    <description>The latest articles on DEV Community by Andrew Pillar (@andrewpillar).</description>
    <link>https://dev.to/andrewpillar</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F196209%2F236ef041-1f7e-42b9-b2ad-c1eb19712802.jpeg</url>
      <title>DEV Community: Andrew Pillar</title>
      <link>https://dev.to/andrewpillar</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/andrewpillar"/>
    <language>en</language>
    <item>
      <title>systemd.timer, an alternative to cron</title>
      <dc:creator>Andrew Pillar</dc:creator>
      <pubDate>Thu, 08 Dec 2022 19:21:40 +0000</pubDate>
      <link>https://dev.to/andrewpillar/systemdtimer-an-alternative-to-cron-d52</link>
      <guid>https://dev.to/andrewpillar/systemdtimer-an-alternative-to-cron-d52</guid>
      <description>&lt;p&gt;There will come a point in time during your time administering a Linux server where you will want to perform a job on a schedule. Perhaps you want to rotate some TLS certificates before they expire, or delete old files that are no longer needed. Typically for this, you would use &lt;a href="https://en.wikipedia.org/wiki/Cron"&gt;cron&lt;/a&gt;, perhaps the most widely used job scheduler for UNIX like systems. You would fire up the crontab for the user, punch in the schedule followed by the command, then write and quit. If you wanted to monitor the job, then you would add &lt;code&gt;MAILTO&lt;/code&gt; at the top to receive the cron logs should the job fail.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://systemd.io/"&gt;systemd&lt;/a&gt; offers an alternative to cron via &lt;a href="https://www.freedesktop.org/software/systemd/man/systemd.timer.html"&gt;systemd.timer&lt;/a&gt;, one that I prefer over cron for reasons I will get into later. With systemd.timer, you specify a &lt;code&gt;*.timer&lt;/code&gt; file and a corresponding &lt;code&gt;*.service&lt;/code&gt;, what with the latter being the job you want to perform. For example, using the example of certificate rotation, we might have a &lt;code&gt;certrotate.service&lt;/code&gt; file that looks like this,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Unit]
Description=Rotate TLS certificates

[Service]
Type=oneshot
ExecStart=/usr/local/bin/uacme -d /etc/uacme.d -h /usr/local/share/uacme/uacme.sh issue example.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;and the corresponding &lt;code&gt;certrotate.timer&lt;/code&gt; file,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Unit]
Description=Weekly rotation of TLS certificates

[Timer]
OnCalendar=weekly
# Set to true so we can store when the timer last triggered
# on disk.
Persistent=true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;with both of these in place we can then start the timer.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo systemctl start certrotate.timer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
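&lt;p&gt;Note that starting the timer only arms it for the current boot; to have it come back after a reboot you would also enable it, which &lt;code&gt;systemctl&lt;/code&gt; can do in one step,&lt;/p&gt;

```
$ sudo systemctl enable --now certrotate.timer
```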

&lt;p&gt;The above is certainly more verbose than cron, the main difference being that two files are required, one for the job itself, and another for the timer. The implementation of timers in systemd offers up the following benefits over that of cron:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It allows for independent job execution via &lt;code&gt;systemctl start&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Jobs can be configured to have dependencies&lt;/li&gt;
&lt;li&gt;Job output will automatically be written to systemd-journald&lt;/li&gt;
&lt;li&gt;Templated unit files&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;On the first point, you could argue that this is already possible with cron: since cron just executes arbitrary commands, you could run the same commands from the crontab in a terminal. This is true, however systemd offers some convenience in this regard. With the above example for certificate rotation, I can fire off the job like so,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo systemctl start certrotate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;without having to memorise the full sequence of flags and arguments to pass. This makes debugging jobs easier, and should one fail you can check the status via &lt;code&gt;systemctl status&lt;/code&gt;, or the entire log with &lt;code&gt;journalctl -u&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;With the second point, since jobs are just regular &lt;code&gt;*.service&lt;/code&gt; files, they can be configured to have dependencies via &lt;code&gt;Wants&lt;/code&gt; and &lt;code&gt;Requires&lt;/code&gt;, more details can be found under &lt;a href="https://www.freedesktop.org/software/systemd/man/systemd.unit.html"&gt;systemd.unit&lt;/a&gt;. This can allow for more sophisticated orchestration between jobs on a system, should any of them depend on another job being completed first.&lt;/p&gt;
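&lt;p&gt;As an illustration, if the rotation job should only run once the network is up, the &lt;code&gt;[Unit]&lt;/code&gt; section of &lt;code&gt;certrotate.service&lt;/code&gt; could declare that dependency (a hypothetical addition, not part of the example above),&lt;/p&gt;

```
[Unit]
Description=Rotate TLS certificates
Wants=network-online.target
After=network-online.target
```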

&lt;p&gt;Regarding the final point, systemd offers the ability to have templated unit files via &lt;a href="https://www.freedesktop.org/software/systemd/man/systemd.unit.html#Specifiers"&gt;specifiers&lt;/a&gt;. This reduces the overhead of managing jobs that have repetitive logic. With the above example we currently only rotate the certificate for the domain &lt;code&gt;example.com&lt;/code&gt;, but what if we also want to rotate the certificates for any subdomains? First we would rename both the &lt;code&gt;*.timer&lt;/code&gt; and &lt;code&gt;*.service&lt;/code&gt; files so a &lt;code&gt;@&lt;/code&gt; precedes the suffix, giving &lt;code&gt;certrotate@.timer&lt;/code&gt; and &lt;code&gt;certrotate@.service&lt;/code&gt;. Then, in &lt;code&gt;certrotate@.service&lt;/code&gt; we would modify &lt;code&gt;ExecStart&lt;/code&gt; to use &lt;code&gt;%i&lt;/code&gt; for the instance name of the service,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ExecStart=/usr/local/bin/uacme -d /etc/uacme.d -h /usr/local/share/uacme/uacme.sh issue %i
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;then we can perform the following to have the changes applied,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo systemctl daemon-reload
$ sudo systemctl start certrotate@example.com.timer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;and this would be the same for each subsequent domain too.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo systemctl start certrotate@api.example.com.timer
$ sudo systemctl start certrotate@about.example.com.timer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And my favourite thing about systemd.timer is how it provides an easy overview of the jobs running on your server, and when they will next execute,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ systemctl list-timers
NEXT                         LEFT          LAST                         PASSED       UNIT                                    ACTIVATES
Fri 2022-12-09 00:00:00 UTC  4h 57min left Thu 2022-12-08 00:00:00 UTC  19h ago      logrotate.timer                         logrotate.service
Fri 2022-12-09 00:00:00 UTC  4h 57min left Thu 2022-12-08 00:00:00 UTC  19h ago      man-db.timer                            man-db.service
Fri 2022-12-09 06:06:34 UTC  11h left      Thu 2022-12-08 06:06:43 UTC  12h ago      apt-daily-upgrade.timer                 apt-daily-upgrade.service
Fri 2022-12-09 08:19:01 UTC  13h left      Thu 2022-12-08 18:34:00 UTC  28min ago    apt-daily.timer                         apt-daily.service
Fri 2022-12-09 09:55:30 UTC  14h left      Thu 2022-12-08 09:55:30 UTC  9h ago       systemd-tmpfiles-clean.timer            systemd-tmpfiles-clean.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, one thing that systemd.timer lacks is the &lt;code&gt;MAILTO&lt;/code&gt; functionality offered by cron. But this can easily be recreated with another oneshot service that is invoked via &lt;code&gt;OnFailure&lt;/code&gt;. For further information, see the &lt;a href="https://wiki.archlinux.org/title/Systemd/Timers#MAILTO"&gt;MAILTO&lt;/a&gt; section of the Arch wiki.&lt;/p&gt;
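&lt;p&gt;The shape of that approach, roughly: a templated oneshot service that sends the journal output of whichever unit failed, hooked into the job via &lt;code&gt;OnFailure&lt;/code&gt;. A sketch, with a hypothetical mail script standing in for the real thing,&lt;/p&gt;

```
# status-email@.service
[Unit]
Description=Send a status email for %i

[Service]
Type=oneshot
ExecStart=/usr/local/bin/send-status-email %i
```

&lt;p&gt;then in the &lt;code&gt;[Unit]&lt;/code&gt; section of &lt;code&gt;certrotate.service&lt;/code&gt; you would add &lt;code&gt;OnFailure=status-email@%n.service&lt;/code&gt;, where &lt;code&gt;%n&lt;/code&gt; expands to the full name of the failing unit.&lt;/p&gt;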

&lt;p&gt;If you are someone who administers a Linux server, one that uses systemd specifically, I encourage you to invest some time in using systemd timers. Sure, they're more verbose than cron, but I've found the tradeoff in that regard to be one worth making. They're easy to manage, since they are just standalone services at the end of the day, make deduplication of jobs a cinch with unit specifiers, and offer easy monitoring via &lt;code&gt;systemctl list-timers&lt;/code&gt; as shown above.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>programming</category>
    </item>
    <item>
      <title>Fast embedded templates in Go with quicktemplate</title>
      <dc:creator>Andrew Pillar</dc:creator>
      <pubDate>Tue, 29 Nov 2022 18:34:26 +0000</pubDate>
      <link>https://dev.to/andrewpillar/fast-embedded-templates-in-go-with-quicktemplate-1jl2</link>
      <guid>https://dev.to/andrewpillar/fast-embedded-templates-in-go-with-quicktemplate-1jl2</guid>
      <description>&lt;p&gt;Wrote a new post on my site about using quicktemplate in Go to generate efficient HTML documents. I would post it here but the syntax of the example code in the post is breaking the rich content embedding dev.to provides, so you'll have to read it there at: &lt;a href="https://andrewpillar.com/programming/2022/11/29/fast-embedded-templates-in-go-with-quicktemplate/"&gt;https://andrewpillar.com/programming/2022/11/29/fast-embedded-templates-in-go-with-quicktemplate/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>go</category>
      <category>programming</category>
    </item>
    <item>
      <title>From WampServer, to Vagrant, to QEMU</title>
      <dc:creator>Andrew Pillar</dc:creator>
      <pubDate>Tue, 08 Nov 2022 19:20:43 +0000</pubDate>
      <link>https://dev.to/andrewpillar/from-wampserver-to-vagrant-to-qemu-1bdb</link>
      <guid>https://dev.to/andrewpillar/from-wampserver-to-vagrant-to-qemu-1bdb</guid>
      <description>&lt;p&gt;When I first dipped my toe into web development it was with PHP. Building quick and dirty inelegant websites, where the HTML and PHP would blur between one another. At the time, I did have a very firm grip on what HTTP was, where exactly PHP sat in the stack, other than it being in the backend, whatever that was, nor did I understand what a database was, other than a place to query data from. On top of this, I was writing this inelegant PHP code on Windows. If you've ever done PHP development on Windows, then you are most likely aware of &lt;a href="https://wampserver.com/en/"&gt;WampServer&lt;/a&gt;. This is a solution stack for Windows that provides an Apache web server, OpenSSL, MySQL and PHP.&lt;/p&gt;

&lt;p&gt;WampServer was exactly what I needed: something that would allow me to create a proper web application with a web server, a database, and even OpenSSL, locally on my machine. It even provided a user interface to help with managing all of this, no need to muck around with configuration files. As time went on my understanding of programming grew, as did my knowledge in the realm of web development. I eventually stumbled upon a new, growing web framework that made developing in PHP even easier - &lt;a href="https://laravel.com"&gt;Laravel&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Laravel wanted to make the entire PHP development process as seamless as possible. At the time, Laravel achieved this with Laravel Homestead, a pre-packaged &lt;a href="https://www.vagrantup.com"&gt;Vagrant&lt;/a&gt; box. Everything you would need would be bundled into a virtual machine, nice and neat, away from your OS. And this was how I was introduced to Vagrant. A means of packaging virtual machines into a portable format, so you could easily create local development environments. Vagrant worked by using a Vagrantfile to describe the virtual machine you would want to use, how it would be provisioned, the ports you would forward, and the filepaths you would want shared from host to guest. All of these tasks would be handled by the provider, the underlying virtual machine program itself, in my case, VirtualBox. Perhaps the thing I liked most about Vagrant was the ability to provision my machine, tear it down whenever I wanted, and have it back in a clean state for development.&lt;/p&gt;

&lt;p&gt;This was perhaps my earliest introduction to Linux in the context of hosting server software, or rather, hosting a local development environment. I was finally getting a glimpse at how all the pieces of my web development stack pieced together, though there was still some mystery behind it all.&lt;/p&gt;

&lt;p&gt;It was around the time of Windows 10 being forced onto everyone that I made the switch to Linux. Windows 8 was something I had tolerated, though I wasn't really a fan of the whole pseudo-tablet interface it offered. I had wanted to try out Linux anyway as my daily driver, so I saw this as the perfect excuse. I installed Arch on my desktop, went through the typical cycle of trying to glue together one of those fancy desktop environments with shell scripts, then settled on GNOME, as it just works and has what I consider some sensible defaults.&lt;/p&gt;

&lt;p&gt;I acclimated to my new environment, and figured out how to get Vagrant and VirtualBox installed so I could continue my hobbyist hacking as I had done before. I wanted to continue to keep my development environments separate from one another, and from the host OS itself. However, I started to have issues with VirtualBox, nothing major, just paper cuts here and there. But, over time these cuts added up. Furthermore, over the years I had noticed Vagrant becoming slower, or perhaps it was me realising how slow Vagrant had always been.&lt;/p&gt;

&lt;p&gt;As someone who enjoys playing video games, and a recent convert to Linux, I was well aware of the dearth of support for games. I was also aware of some of the solutions, one of those being GPU passthrough to this thing called &lt;a href="https://qemu.org"&gt;QEMU&lt;/a&gt;. QEMU is a fast and lightweight machine emulator and virtualizer. This was of course something that interested me, so I went about exploring QEMU and playing with it. When I first started using it, I was put off by it: no slick UI to use, a myriad of flags to pass it, some of which even took further options. But when I figured out the right incantation of flags, it worked, and I noticed its speed in comparison to VirtualBox.&lt;/p&gt;

&lt;p&gt;At this point when it came to my hobbyist development, I had moved past PHP and started learning Go, and was looking to do some serious development with this for a CI platform I had an idea for. By now, I had a firmer grasp of the software stack I wanted to work with, a better understanding of how everything pieced together. And so I went about developing that CI platform, that would later become &lt;a href="https://about.djinn-ci.com"&gt;Djinn CI&lt;/a&gt;. I uninstalled VirtualBox and Vagrant and fully committed to using QEMU, booting up the local machine was as simple as hitting &lt;code&gt;CTRL + R&lt;/code&gt; in my terminal, searching for &lt;code&gt;qemu&lt;/code&gt; and hitting enter, an elegant solution I know.&lt;/p&gt;

&lt;p&gt;QEMU worked fine for what I needed. However there were a few things I missed from Vagrant:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The ability to spin up a virtual machine based on a box&lt;/li&gt;
&lt;li&gt;The ability to provision machines&lt;/li&gt;
&lt;li&gt;Easy portforwarding (at least something easier than a &lt;code&gt;hostfwd=tcp:127.0.0.1:&amp;lt;src&amp;gt;-:&amp;lt;dst&amp;gt;&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I was tempted to build a full-fledged tool to solve these issues. Instead, I found a better, simpler solution: a small shell script, called &lt;code&gt;qemu&lt;/code&gt;. It is used something like this,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ qemu debian/stable
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;if given one argument, then that single argument is treated as the name of the QCOW2 image to use. The image would be relative to the directory specified via the &lt;code&gt;QCOW2PATH&lt;/code&gt; environment variable. Here's what mine looks like,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ls $QCOW2PATH
alpine  arch  debian  djinn-dev  freebsd  ubuntu
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;if any leading arguments are given before the image name, then these are passed to the underlying QEMU program as flags,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ qemu -snapshot debian/stable
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;if a &lt;code&gt;provision.sh&lt;/code&gt; script is detected in the current directory from where &lt;code&gt;qemu&lt;/code&gt; is executed then this is checked for the number of CPUs to use and ports to forward, for example,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ head provision.sh
#!/bin/sh
# cpus: 2
# portfwd: 5432
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;the amount of memory given to the virtual machine will be half of the available memory reported by &lt;code&gt;MemAvailable&lt;/code&gt; in &lt;code&gt;/proc/meminfo&lt;/code&gt;. The &lt;code&gt;provision.sh&lt;/code&gt; script will be copied to the machine and executed once booted.&lt;/p&gt;

&lt;p&gt;This script will automatically forward port &lt;code&gt;2222&lt;/code&gt; to &lt;code&gt;22&lt;/code&gt;, and assumes that passwordless root is configured on the virtual machine for SSH access.&lt;/p&gt;
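&lt;p&gt;To make that concrete, here is a rough sketch of how a wrapper like this might read those settings; this is illustrative only, the real script is linked below, and the file and flag names here are my own,&lt;/p&gt;

```shell
#!/bin/sh
# Write out a sample provision.sh like the one shown above.
printf '#!/bin/sh\n# cpus: 2\n# portfwd: 5432\n' > /tmp/provision_demo.sh

# Pull the CPU count and forwarded port out of the header comments.
cpus=$(sed -n 's/^# cpus: //p' /tmp/provision_demo.sh)
port=$(sed -n 's/^# portfwd: //p' /tmp/provision_demo.sh)

# Give the guest half of MemAvailable from /proc/meminfo (value is in kB).
mem=$(awk '/^MemAvailable:/ { print int($2 / 2 / 1024) }' /proc/meminfo)

# Echo the QEMU flags that would be assembled from the above.
echo "-smp $cpus -m ${mem}M -nic user,hostfwd=tcp:127.0.0.1:$port-:$port"
```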

&lt;p&gt;I found this script to be a simple, rather nice solution to the things I missed from Vagrant. The QCOW2 image files are simply files kept in a single location, so I can use the filesystem to manage them, and refer to them via the aforementioned environment variable. Provisioning is done simply by copying the provision script to the machine and executing it once booted, and port forwarding is hacked on top of that too. You can find this script &lt;a href="https://github.com/andrewpillar/bin/blob/main/qemu"&gt;here&lt;/a&gt; if you're interested in making use of it yourself.&lt;/p&gt;

&lt;p&gt;This is a pretty long post, and perhaps a tangential one, but I wanted to look back at the different tooling I've used over my time programming. I've found that some development tools helped with my understanding of programming, whilst others perhaps hindered it. When I first started out I didn't really understand what was happening; I was simply looking for an end result, and would do anything to get there. I'm still guilty of this to a certain degree to this day.&lt;/p&gt;

</description>
      <category>programming</category>
    </item>
    <item>
      <title>A simple CRUD library for PostgreSQL with generics in Go</title>
      <dc:creator>Andrew Pillar</dc:creator>
      <pubDate>Tue, 25 Oct 2022 11:12:46 +0000</pubDate>
      <link>https://dev.to/andrewpillar/a-simple-crud-library-for-postgresql-with-generics-in-go-cl8</link>
      <guid>https://dev.to/andrewpillar/a-simple-crud-library-for-postgresql-with-generics-in-go-cl8</guid>
      <description>&lt;p&gt;I've &lt;a href="https://andrewpillar.com/programming/2019/07/13/orms-and-query-building-in-go"&gt;written&lt;/a&gt; previously about my thoughts on ORMs in Go and how the typical Active Record style ORMs seem to be a poor fit as a whole. With the advent of generics however, I've decided to revisit this topic, and see how generics could be utilied to make working with databases easier when it comes to modelling data. The parts I'm mostly interested in is providing a thin abstraction over the typical CRUD (create, read, update, delete) operations an ORM would provide for each model. Ideally, we want to provide this abstraction in a way that makes as few assumptions about the data being modelled as possible, and that builds on top of what has already been explored with regards to query building. I'm not going to cover what generics are in this post, as the authors of the language have &lt;a href="https://go.dev/blog/intro-generics"&gt;already done that&lt;/a&gt;, instead we're going to dive right in.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; For what we're going to be building we will be utilising the Go libraries &lt;code&gt;github.com/jackc/pgx&lt;/code&gt; and &lt;code&gt;github.com/andrewpillar/query&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Models and stores&lt;/li&gt;
&lt;li&gt;Model creation&lt;/li&gt;
&lt;li&gt;Model updating&lt;/li&gt;
&lt;li&gt;Model deletion&lt;/li&gt;
&lt;li&gt;Model reading&lt;/li&gt;
&lt;li&gt;Implementing a model&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Models and stores&lt;/h2&gt;

&lt;p&gt;Pretty much every ORM has a concept of a model, that is, a structure that represents some data from a database. These typically offer the ability to seamlessly create, update, and delete the records they map to. For our library, we will also want a way of representing our data, and a mechanism by which to perform CRUD operations. For this, we will have the concept of a Model, for modelling data, and a Store, for performing the CRUD operations. So, let's implement both of these, first the Model,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type ScanFunc func(dest ...any) error

type Model interface {
    Primary() (string, any)

    Scan(fields []string, scan ScanFunc) error

    Params() map[string]any
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;the Model is implemented as an interface with the methods &lt;code&gt;Primary&lt;/code&gt;, &lt;code&gt;Scan&lt;/code&gt;, and &lt;code&gt;Params&lt;/code&gt;. The &lt;code&gt;Primary&lt;/code&gt; method returns the name of the column used as the primary key for the Model, and the value for that column if any. The &lt;code&gt;Scan&lt;/code&gt; method is invoked when scanning data from the Model's underlying table into said Model; it is given the fields being scanned, and the function to call to actually perform the scan. The &lt;code&gt;Params&lt;/code&gt; method returns the parameters of the Model to be used during create and update operations. With the Model defined, we can use this as a type parameter for our Store implementation,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type Store[M Model] struct {
    *pgxpool.Pool

    table string
    new   func() M
}

func NewStore[M Model](pool *pgxpool.Pool, table string, new func() M) *Store[M] {
    return &amp;amp;Store[M]{
        Pool:  pool,
        table: table,
        new:   new,
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;the Store implementation is a struct with a type parameter for anything that implements Model. This allows the same Store to be used across multiple implementations of the Model interface. It takes a PostgreSQL connection pool from which to take connections, the name of the table we operate on, and a callback function for instantiating new Models, which is used when scanning models from the database.&lt;/p&gt;

&lt;h2&gt;Model creation&lt;/h2&gt;

&lt;p&gt;With the basic implementation of both Model and Store done, we can now expand the Store to support create operations for any Model we give it. First, let's implement the query building part of creation,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func (s *Store[M]) Create(ctx context.Context, m M)) error {
    p := m.Params()

    cols := make([]string, 0, len(p))
    vals := make([]any, 0, len(p))

    for k, v := range p {
        cols = append(cols, k)
        vals = append(vals, v)
    }

    primary, _ := m.Primary()

    q := query.Insert(
        s.table,
        query.Columns(cols...),
        query.Values(vals...),
        query.Returning(primary),
    )
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;in this bit, we iterate over the parameters returned from &lt;code&gt;m.Params&lt;/code&gt; and place them in &lt;code&gt;[]string&lt;/code&gt; and &lt;code&gt;[]any&lt;/code&gt; slices for the columns and values respectively. We also call &lt;code&gt;m.Primary&lt;/code&gt; to get the name of the primary key column which we want returned from the query, this will allow us to scan the value back into the created model if it doesn't already have it, for example if the column is an auto-incrementing integer. Now, let's implement the rest where we invoke the query, and scan the results back in,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
    rows, err := s.Query(ctx, q.Build(), q.Args()...)

    if err != nil {
        return err
    }

    defer rows.Close()

    if !rows.Next() {
        if err := rows.Err(); err != nil {
            return err
        }
        return nil
    }

    if err := m.Scan(s.fields(rows), rows.Scan); err != nil {
        return err
    }
    return nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;here, we pass the built query to the underlying database pool and scan the row that is returned. You'll notice how we invoke &lt;code&gt;s.fields&lt;/code&gt; to get the fields from the rows that are returned, this is something that needs to be implemented so let's do that next,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func (s *Store[M]) fields(rows pgx.Rows) []string {
    descs := rows.FieldDescriptions()
    fields := make([]string, 0, len(descs))

    for _, d := range descs {
        fields = append(fields, string(d.Name))
    }
    return fields
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;the reason we pass the field names to the &lt;code&gt;Scan&lt;/code&gt; method, is so that the Model will know which fields are being scanned. This reduces the assumptions that the Store has to make about the Model and its fields, and opens up the ability for more flexibility on the implementation of the Model.&lt;/p&gt;

&lt;h2&gt;Model updating&lt;/h2&gt;

&lt;p&gt;Next, let's implement the logic for updating a Model. This will be somewhat similar to what we did for creation,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func (s *Store[M]) Update(ctx context.Context, m M) error {
    p := m.Params()

    opts := make([]query.Option, 0, len(p))

    for k, v := range p {
        opts = append(opts, query.Set(k, query.Arg(v)))
    }

    col, id := m.Primary()

    opts = append(opts, query.Where(col, "=", query.Arg(id)))

    q := query.Update(s.table, opts...)

    if _, err := s.Exec(ctx, q.Build(), q.Args()...); err != nil {
        return err
    }
    return nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;simpler than what we did for create. Here, we use the values returned from the &lt;code&gt;Primary&lt;/code&gt; method to construct the &lt;code&gt;WHERE&lt;/code&gt; clause, ensuring only that Model will be updated. We did make one assumption though: that there is no data that needs to be scanned back into the Model, and that the Model given to us already contains all the necessary data for the update.&lt;/p&gt;

&lt;h2&gt;Model deletion&lt;/h2&gt;

&lt;p&gt;Now, let's implement the logic for deleting a Model. Again, this will be similar to what we did for update, but simpler,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func (s *Store[M]) Delete(ctx context.Context, m M) error {
    col, id := m.Primary()

    q := query.Delete(s.table, query.Where(col, "=", query.Arg(id)))

    if _, err := s.Exec(ctx, q.Build(), q.Args()...); err != nil {
        return err
    }
    return nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;again, we use &lt;code&gt;Primary&lt;/code&gt; to construct the &lt;code&gt;WHERE&lt;/code&gt; clause for that specific Model.&lt;/p&gt;

&lt;h2&gt;Model reading&lt;/h2&gt;

&lt;p&gt;With the create, update, and delete operations implemented, we now need to implement the read operations. First, let's implement the &lt;code&gt;Select&lt;/code&gt; and &lt;code&gt;All&lt;/code&gt; methods for reading multiple Models,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func (s *Store[M]) Select(ctx context.Context, cols []string, opts ...query.Option) ([]M, error) {
    opts = append([]query.Option{
        query.From(s.table),
    }, opts...)

    q := query.Select(query.Columns(cols...), opts...)

    rows, err := s.Query(ctx, q.Build(), q.Args()...)

    if err != nil {
        return nil, err
    }

    defer rows.Close()

    fields := s.fields(rows)

    mm := make([]M, 0)

    for rows.Next() {
        m := s.new()

        if err := m.Scan(fields, rows.Scan); err != nil {
            return nil, err
        }

        mm = append(mm, m)
    }

    if err := rows.Err(); err != nil {
        return nil, err
    }
    return mm, nil
}

func (s *Store[M]) All(ctx context.Context, opts ...query.Option) ([]M, error) {
    return s.Select(ctx, []string{"*"}, opts...)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;the &lt;code&gt;Select&lt;/code&gt; method is what does most of the leg work, and will only select the given columns via &lt;code&gt;cols&lt;/code&gt;, useful if you only want to load in a handful of parameters for a given Model. &lt;code&gt;Select&lt;/code&gt; also makes use of the &lt;code&gt;s.new&lt;/code&gt; callback for creating new Models to be scanned. The &lt;code&gt;All&lt;/code&gt; method simply calls &lt;code&gt;Select&lt;/code&gt; and specifies that every column should be selected.&lt;/p&gt;

&lt;p&gt;With both &lt;code&gt;Select&lt;/code&gt; and &lt;code&gt;All&lt;/code&gt; implemented for returning multiple Models, let's now implement &lt;code&gt;Get&lt;/code&gt; for returning a single Model,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func (s *Store[M]) Get(ctx context.Context, opts ...query.Option) (M, bool, error) {
    var zero M

    opts = append([]query.Option{
        query.From(s.table),
    }, opts...)

    q := query.Select(query.Columns("*"), opts...)

    rows, err := s.Query(ctx, q.Build(), q.Args()...)

    if err != nil {
        return zero, false, err
    }

    defer rows.Close()

    if !rows.Next() {
        if err := rows.Err(); err != nil {
            return zero, false, err
        }
        return zero, false, nil
    }

    m := s.new()

    if err := m.Scan(s.fields(rows), rows.Scan); err != nil {
        return zero, false, err
    }
    return m, true, nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;for &lt;code&gt;Get&lt;/code&gt; we return a &lt;code&gt;bool&lt;/code&gt; as the second value, which acts as a simple flag denoting whether a Model was found. At the very top of the method we define &lt;code&gt;var zero M&lt;/code&gt;; this is the zero value of the Model, which we can return alongside an error, or &lt;code&gt;false&lt;/code&gt;. Again, we use the &lt;code&gt;s.new&lt;/code&gt; callback to create the Model for scanning.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementing a model
&lt;/h2&gt;

&lt;p&gt;We have our Store implemented, and a Model interface. Now, let's implement a simple Model that can utilise the Store,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func Scan(desttab map[string]any, fields []string, scan ScanFunc) error {
    dest := make([]any, 0, len(fields))

    for _, fld := range fields {
        if p, ok := desttab[fld]; ok {
            dest = append(dest, p)
        }
    }
    return scan(dest...)
}

type User struct {
    ID        int64
    Email     string
    Username  string
    Password  []byte
    CreatedAt time.Time
}

func (u *User) Primary() (string, any) {
    return "id", u.ID
}

func (u *User) Scan(fields []string, scan ScanFunc) error {
    return Scan(map[string]any{
        "id":         &amp;amp;u.ID,
        "email":      &amp;amp;u.Email,
        "username":   &amp;amp;u.Username,
        "password":   &amp;amp;u.Password,
        "created_at": &amp;amp;u.CreatedAt,
    }, fields, scan)
}

func (u *User) Params() map[string]any {
    return map[string]any{
        "email":      u.Email,
        "username":   u.Username,
        "password":   u.Password,
        "created_at": u.CreatedAt,
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;you'll notice that the above implementation invokes a function called &lt;code&gt;Scan&lt;/code&gt; within the &lt;code&gt;User.Scan&lt;/code&gt; method. This is a utility function that can be reused across multiple Model implementations for scanning in fields. It ensures that only the fields we are given will be scanned, provided they exist in the given &lt;code&gt;desttab&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;With the above Model implementation we can now create a Store for it like so,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;users := NewStore[*User](pool, "users", func() *User {
    return &amp;amp;User{}
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;then, we can put the Store to use,&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;err := users.Create(ctx, &amp;amp;User{&lt;br&gt;
    Email:     "&lt;a href="mailto:gordon.freeman@blackmesa.com"&gt;gordon.freeman@blackmesa.com&lt;/a&gt;",&lt;br&gt;
    Username:  "gordon.freeman",&lt;br&gt;
    Password:  []byte("secret"),&lt;br&gt;
    CreatedAt: time.Now(),&lt;br&gt;
})

&lt;p&gt;if err != nil {&lt;br&gt;
    // Handle the error.&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;fmt.Println("user created with id of", u.ID)&lt;/p&gt;

&lt;p&gt;u, ok, err := users.Get(ctx, query.Where("username", "=", query.Arg("gordon.freeman")))&lt;/p&gt;

&lt;p&gt;if err != nil {&lt;br&gt;
    // Handle the error.&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;if !ok {&lt;br&gt;
    fmt.Println("user not found")&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;u.Password = []byte("unforseenconsequences")&lt;/p&gt;

&lt;p&gt;if err := users.Update(ctx, u); err != nil {&lt;br&gt;
    // Handle the error.&lt;br&gt;
}&lt;br&gt;
&lt;/p&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I know that I dove in pretty quickly with this. What I wanted to explore was the possibility of creating an ORM-like experience in Go with generics. I wanted to implement something that was simple and extensible, and that did not rely on direct reflection to achieve what an ORM typically does, whilst making minimal assumptions about the data being modelled.&lt;/p&gt;

&lt;p&gt;The above code can be found in the following GitHub &lt;a href="https://gist.github.com/andrewpillar/9f834b76dd2ba37dda79dca4a9a137f7"&gt;Gist&lt;/a&gt;, licensed under MIT; feel free to drop it into your code base and modify it as you see fit. This won't be published as a conventional library, since it is less than 200 lines of code and, in my opinion, does not warrant being one. Furthermore, as I have stated, I think an ORM, if you could call this that, should make as few assumptions as possible about the data being worked with. Allowing you, the developer, to drop this into your code base opens it up for extending even further according to your needs.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>go</category>
    </item>
    <item>
      <title>Using multiple repositories in your CI builds</title>
      <dc:creator>Andrew Pillar</dc:creator>
      <pubDate>Tue, 16 Aug 2022 21:38:40 +0000</pubDate>
      <link>https://dev.to/andrewpillar/using-multiple-repositories-in-your-ci-builds-5bac</link>
      <guid>https://dev.to/andrewpillar/using-multiple-repositories-in-your-ci-builds-5bac</guid>
      <description>&lt;p&gt;&lt;a href="https://about.djinn-ci.com"&gt;Djinn CI&lt;/a&gt; makes working with multiple repositoriesin a build simple via the &lt;a href="https://docs.djinn-ci.com/user/manifest/#sources"&gt;sources&lt;/a&gt;parameter in the build manifest. This allows you to specify multiple Git respositories to clone into your build environment. Each source would be a URL that could be cloned via &lt;code&gt;git clone&lt;/code&gt;. With most CI platforms, a build's manifest is typically tied to the source code repository itself. With Djinn CI, whilst you can have a build manifest in a source code repository, the CI server itself doesn't really have an understanding of that repository. Instead, it simply looks at the sources in the manifest that is specified, and clones each of them into the build environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Defining multiple sources in a manifest
&lt;/h2&gt;

&lt;p&gt;Like with the &lt;a href="https://blog.djinn-ci.com/showcase/2022/08/06/running-your-ci-builds-without-the-server"&gt;previous&lt;/a&gt; post, we're going to use &lt;a href="https://github.com/djinn-ci/imgsrv"&gt;djinn-ci/imgsrv&lt;/a&gt; as an example of using multiple sources in a build manifest. If we look at the top of the manifest file, we will see that it requires three repositories to build. These are, the source code for djinn-ci/imgsrv itself, &lt;a href="https://github.com/golang/tools"&gt;golang/tools&lt;/a&gt;, and &lt;a href="https://github.com/valyala/quicktemplate"&gt;valyala/quicktemplate&lt;/a&gt;, defined like so,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sources:
- https://github.com/djinn-ci/imgsrv.git
- https://github.com/golang/tools
- https://github.com/valyala/quicktemplate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;during execution of the manifest, all of the above source repositories will be cloned. If submitted through the user interface for the build server to run, each of these will have its own &lt;code&gt;clone.n&lt;/code&gt; job associated with it, where &lt;code&gt;n&lt;/code&gt; is a number counting up from &lt;code&gt;1&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  SSH keys and private repositories
&lt;/h2&gt;

&lt;p&gt;As previously mentioned, the URLs specified in the &lt;code&gt;sources&lt;/code&gt; parameter will be given to &lt;code&gt;git clone&lt;/code&gt; for cloning in the build environment. This means you can use the SSH format when specifying a source URL,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git@github.com:org/private-repo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;this would be preferable in instances where you would want to clone from a private repository. This would however require creating an &lt;a href="https://docs.djinn-ci.com/user/keys/"&gt;SSH key&lt;/a&gt; to use as a sort of deployment key when cloning into the environment.&lt;/p&gt;

&lt;p&gt;Let's consider the above example, when cloning from &lt;code&gt;github.com&lt;/code&gt;. We would create a &lt;a href="https://docs.github.com/en/developers/overview/managing-deploy-keys"&gt;deploy key&lt;/a&gt; for that repository, and upload it to Djinn CI. When doing so, we are able to specify some custom SSH configuration to associate with that key; this is what allows us to make use of it when cloning in the build,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Host github.com
    User git
    IdentityFile /root/.ssh/id_deploy_private_repo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;with the key in place, along with the config, we would then be able to use the repository &lt;code&gt;org/private-repo&lt;/code&gt; in our builds. However, an issue can arise here. Assume we want to clone from another private repository: we could add the same deployment key there, but this might not be preferable given the security implications. So instead, you would create another key for that repository. With two deploy keys in place, one for each repository, this raises the question: how would you associate each key with its repository?&lt;/p&gt;

&lt;p&gt;This could be done by making use of the &lt;code&gt;ssh_config&lt;/code&gt; file format and &lt;code&gt;Host&lt;/code&gt; matching. Assume the two private repositories are &lt;code&gt;private-repo&lt;/code&gt; and &lt;code&gt;acme-repo&lt;/code&gt;, both existing under &lt;code&gt;org&lt;/code&gt; on &lt;code&gt;github.com&lt;/code&gt;, and each has a deployment key, named &lt;code&gt;id_deploy_private_repo&lt;/code&gt; and &lt;code&gt;id_deploy_acme_repo&lt;/code&gt; respectively. Then we could give each key the following SSH configuration,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Host private-repo.github.com
    User git
    IdentityFile /root/.ssh/id_deploy_private_repo

Host acme-repo.github.com
    User git
    IdentityFile /root/.ssh/id_deploy_acme_repo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;then in the build manifest we could give the following sources,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sources:
- git@private-repo.github.com:org/private-repo
- git@acme-repo.github.com:org/acme-repo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;it's a bit of a hacky workaround, but it works if you would prefer to use different deploy keys for different private repositories.&lt;/p&gt;

&lt;h3&gt;
  
  
  More information
&lt;/h3&gt;

&lt;p&gt;To learn more about Djinn CI, visit the &lt;a href="https://about.djinn-ci.com"&gt;about page&lt;/a&gt;, which gives an overview of the features of the platform, and be sure to check out the &lt;a href="https://docs.djinn-ci.com/user"&gt;user documentation&lt;/a&gt; too.&lt;/p&gt;

</description>
      <category>programming</category>
    </item>
    <item>
      <title>Running your CI builds without the server</title>
      <dc:creator>Andrew Pillar</dc:creator>
      <pubDate>Sat, 06 Aug 2022 15:48:00 +0000</pubDate>
      <link>https://dev.to/andrewpillar/running-your-ci-builds-without-the-server-4p9n</link>
      <guid>https://dev.to/andrewpillar/running-your-ci-builds-without-the-server-4p9n</guid>
      <description>&lt;p&gt;Perhaps the one feature that sets &lt;a href="https://about.djinn-ci.com"&gt;Djinn CI&lt;/a&gt; out from other CI platforms is the fact that is has an &lt;a href="https://docs.djinn-ci.com/user/offline-runner"&gt;offline runner&lt;/a&gt;. The offline runner allows for CI builds to be run without having to send them to the server. There are some limitations around this, of course, but it provides a useful mechanism for sanity checking build manifests, testing custom images, and for building software without the need for a CI server.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using the offline runner
&lt;/h2&gt;

&lt;p&gt;We've given a high-level overview of the offline runner and how it works. But let's see it in action. As part of this demonstration, we're going to build the &lt;a href="https://github.com/djinn-ci/imgsrv"&gt;djinn-ci/imgsrv&lt;/a&gt; image server. This is what's used for serving the base QEMU images at &lt;a href="https://images.djinn-ci.com"&gt;https://images.djinn-ci.com&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The following demonstration assumes all commands will be executed on a Linux distribution.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Installing the runner
&lt;/h3&gt;

&lt;p&gt;First, the offline runner needs to be installed. We can grab the binary for this right from the &lt;a href="https://djinn-ci.com/n/djinn-ci/djinn"&gt;Djinn CI&lt;/a&gt; namespace &lt;a href="https://djinn-ci.com/b/djinn-ci/344/artifacts"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once downloaded, we can move the binary into our &lt;code&gt;PATH&lt;/code&gt;,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo mv djinn /usr/local/bin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, we need to configure the offline runner, and the drivers it will use. So let's create the necessary configuration directory,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ mkdir ~/.config/djinn
$ touch ~/.config/djinn/driver.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now we can configure the &lt;code&gt;qemu&lt;/code&gt; driver for the offline runner to use,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ~/.config/djinn/driver.conf
driver qemu {
    disks  "/var/lib/djinn/images"
    cpus   2
    memory 2048
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;More information about how to configure drivers can be found in the &lt;a href="https://docs.djinn-ci.com/user/offline-runner/#configuring-drivers"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Finally, let's create the necessary sub-directories for the images we want to use, and download a base image for our build,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ mkdir -p /var/lib/djinn/images/qemu/x86_64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;For building the &lt;code&gt;imgsrv&lt;/code&gt; program we would need the &lt;a href="https://images.djinn-ci.com/?group=Debian"&gt;debian/oldstable&lt;/a&gt; image. We can download this from the image server previously mentioned.&lt;/p&gt;

&lt;p&gt;With the image downloaded, we can create a sub-directory in our images directory to store it,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ mkdir /var/lib/djinn/images/qemu/x86_64/debian
$ mv oldstable /var/lib/djinn/images/qemu/x86_64/debian
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Building the image server
&lt;/h3&gt;

&lt;p&gt;With the offline runner set up, let's clone the &lt;code&gt;djinn-ci/imgsrv&lt;/code&gt; repository, and build it,&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git clone https://github.com/djinn-ci/imgsrv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once cloned, all we need to do to execute the build locally is change into the repository and execute &lt;code&gt;djinn&lt;/code&gt;,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cd imgsrv
$ djinn
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;When it starts running, we should see some output like this,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Running with Driver qemu...
Creating machine with arch x86_64...
Booting machine with image debian/oldstable...
Established SSH connection to machine as root...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once done, the &lt;code&gt;djinn-imgsrv&lt;/code&gt; artifact will be collected into the root of the repository.&lt;/p&gt;

&lt;p&gt;Now, this raises the question: why build it using the offline runner, when you could build it normally using the &lt;code&gt;make.sh&lt;/code&gt; script in the repository? For the following reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The build manifest handles downloading and building dependencies needed&lt;/li&gt;
&lt;li&gt;The build manifest is executed on the Linux distribution the server is deployed to&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The second point is perhaps the most important. The &lt;code&gt;djinn-imgsrv&lt;/code&gt; binary is dynamically linked. This is because the program makes use of SQLite as the in-memory database to keep track of the images to serve, so Cgo is used to interface Go with the C code in the SQLite library, and the result is a dynamically linked binary.&lt;/p&gt;

&lt;p&gt;So, if the typical approach of building the program were taken, a binary linking against the distribution's version of libc would be produced. In some scenarios this is no good, as you may be developing on a different distribution than the one you deploy to.&lt;/p&gt;

&lt;p&gt;In this case, I develop on Arch, and deploy the image server to Debian 10. So, if I were to build this in the typical way, the binary may not execute on Debian. To remedy this, the offline runner is used instead, whereby I know the binary will be built against the distribution onto which it is deployed.&lt;/p&gt;

&lt;p&gt;This is perhaps one of the benefits of the Djinn CI platform as a whole, and not just the offline runner itself: the ability to utilise the &lt;code&gt;qemu&lt;/code&gt; driver in builds, to build and test software on various Linux distributions.&lt;/p&gt;

&lt;h3&gt;
  
  
  More information
&lt;/h3&gt;

&lt;p&gt;To learn more about Djinn CI, visit the &lt;a href="https://about.djinn-ci.com"&gt;about page&lt;/a&gt;, which gives an overview of the features of the platform, and be sure to check out the &lt;a href="https://docs.djinn-ci.com/user"&gt;user documentation&lt;/a&gt; too.&lt;/p&gt;

</description>
      <category>programming</category>
    </item>
    <item>
      <title>v1.1.0 of req released</title>
      <dc:creator>Andrew Pillar</dc:creator>
      <pubDate>Tue, 14 Jun 2022 15:01:29 +0000</pubDate>
      <link>https://dev.to/andrewpillar/v110-of-req-released-599m</link>
      <guid>https://dev.to/andrewpillar/v110-of-req-released-599m</guid>
      <description>&lt;p&gt;&lt;a href="https://github.com/andrewpillar/req/releases/tag/v1.1.0"&gt;v1.1.0&lt;/a&gt; of req was just released. For those wondering, req is an HTTP scripting language designed with making HTTP scripting easier. I've written about this before, which you can read about &lt;a href="https://andrewpillar.com/programming/2022/02/26/req-an-http-scripting-language/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This latest release supports working with cookies, which should open up the language for some useful scraping of webpages. Feedback is always appreciated, either via the GitHub page or email.&lt;/p&gt;

</description>
      <category>showdev</category>
      <category>go</category>
      <category>programming</category>
    </item>
    <item>
      <title>Djinn CI v1.2</title>
      <dc:creator>Andrew Pillar</dc:creator>
      <pubDate>Sun, 10 Apr 2022 10:28:46 +0000</pubDate>
      <link>https://dev.to/andrewpillar/djinn-ci-v12-4ldc</link>
      <guid>https://dev.to/andrewpillar/djinn-ci-v12-4ldc</guid>
      <description>&lt;p&gt;I've &lt;a href="https://dev.to/andrewpillar/djinn-ci-a-simple-continuous-integration-platform-1job"&gt;written&lt;/a&gt; about Djinn CI before. Since that post, more work has gone into it to weed out bugs, and implement more features, specifically, &lt;a href="https://docs.djinn-ci.com/user/variables#masked-variables"&gt;variable masking&lt;/a&gt;. You can read more about the full release on the blog &lt;a href="https://blog.djinn-ci.com/announcements/2022/04/05/v1-2-released/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Djinn CI is an &lt;a href="https://github.com/djinn-ci/djinn"&gt;open source&lt;/a&gt; continuous integration platform designed with simplicity in mind. There is of course a hosted version you can pay to use should you not wish to maintain your own infrastructure.&lt;/p&gt;

</description>
      <category>showdev</category>
      <category>go</category>
      <category>programming</category>
    </item>
    <item>
      <title>Structured configuration in Go</title>
      <dc:creator>Andrew Pillar</dc:creator>
      <pubDate>Sat, 09 Apr 2022 11:25:53 +0000</pubDate>
      <link>https://dev.to/andrewpillar/structured-configuration-in-go-4ap7</link>
      <guid>https://dev.to/andrewpillar/structured-configuration-in-go-4ap7</guid>
      <description>&lt;p&gt;There comes a point in time during the development of a piece of software when a configuration language needs to be used, you can only do so much via flags before it becomes too tenuous. The language chosen should provide a format that is easy for a person parse as well as a computer. Typically, most people would reach for &lt;a href="https://yaml.ord"&gt;YAML&lt;/a&gt;, &lt;a href="https://toml.io/en"&gt;TOML&lt;/a&gt;, or sometimes even &lt;a href="https://www.lucidchart.com/techblog/2018/07/16/why-json-isnt-a-good-configuration-language"&gt;JSON&lt;/a&gt;. For the development of &lt;a href="https://about.djinn-ci.com"&gt;Djinn CI&lt;/a&gt;, none of these fitted my needs, so I developed my own, specifically for Go.&lt;/p&gt;

&lt;h2&gt;
  
  
  What a configuration language should be
&lt;/h2&gt;

&lt;p&gt;In my opinion, a configuration language should allow for a declarative way of configuring a piece of software. The syntax of the language should be easy for a person to parse, since they will be spending a good amount of time reading and writing said configuration. A configuration language should be light on visual noise, that is, anything that might impair a person's ability to read the language. It should also allow for comments, so the person writing the configuration can explain what the configuration is for.&lt;/p&gt;

&lt;p&gt;The last points, visual noise and comments, rule out JSON as a configuration language. It is fine for serializing data and exchanging it between programs, but should be avoided as a primary configuration format. This does not rule out YAML or TOML though, which are fine configuration languages depending on what is being configured. I should stress that there is no single configuration language that will meet the requirements of every piece of software. The language chosen will vary depending on how you want to expose your software for configuration.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; When I use the term "primary configuration format" I am referring to the configuration that a person would need to edit themselves. JSON is fine for storing configuration that is edited by a program. My main gripes with JSON as a configuration format arise when I have to edit it myself.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Starting with TOML
&lt;/h2&gt;

&lt;p&gt;When starting out with the development of Djinn, I initially settled on TOML. It's simpler than &lt;a href="https://www.arp242.net/yaml-config.html"&gt;YAML&lt;/a&gt;, and much stricter, no assumptions will be made about the string &lt;code&gt;yes&lt;/code&gt; for example. Below is an example of some of the configuration in TOML,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[net]
listen = ":8443"

[net.tls]
cert = "/var/lib/ssl/server.crt"
key  = "/var/lib/ssl/server.key"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This was fine to start with; however, Djinn requires a certain level of nested structure in its configuration. Take provider configuration for example: a provider can be configured for each 3rd party you want to integrate with. In TOML this looked like so,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[[provider]]

[[provider.github]]
client_id     = "..."
client_secret = "..."

[[provider.gitlab]]
client_id     = "..."
client_secret = "..."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;this was not great, mainly because it made the configuration less readable and harder to parse at a glance. There were other instances of this throughout the configuration of Djinn too, such as the configuration of drivers,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[[driver]]

[[driver.qemu]]
disks  = "/var/lib/djinn/qemu"
cpus   = 1
memory = 2048
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I wanted Djinn and its components to have configuration that was easy to read. With TOML, I was quickly running up against its limitations with regard to nested configuration structures. So I started exploring other options.&lt;/p&gt;

&lt;h2&gt;
  
  
  HCL
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/hashicorp/hcl"&gt;HCL&lt;/a&gt; is the configuration language from HashiCorp, if you've worked with &lt;a href="https://www.terraform.io/"&gt;Terraform&lt;/a&gt; then you will be familiar with it. As stated in the readme for the project,&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;HCL attempts to strike a compromise between generic serialization formats such as JSON and configuration formats built around full programming languages such as Ruby. HCL syntax is designed to be easily read and written by humans, and allows declarative logic to permit its use in more complex applications.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;this appears to fulfill my needs, making it easier to work with nested structures of configuration. Let's take a look at how Djinn might be configured with HCL,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;net {
    listen = ":8443"

    tls {
        cert = "/var/lib/ssl/server.crt"
        key  = "/var/lib/ssl/server.key"
    }
}

provider "github" {
    client_id      = "..."
client_secret  = "..."
}

provider "gitlab" {
    client_id      = "..."
client_secret  = "..."
}

driver "qemu" {
    disks  = "/var/lib/djinn/qemu"
    cpus   = 1
    memory = 2048
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;this is better, however the requirement for &lt;code&gt;=&lt;/code&gt; to assign a value to a parameter and the quotes around the labels irk me. Furthermore, it would be nice if I could use size units when specifying an amount of something in bytes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Structured configuration
&lt;/h2&gt;

&lt;p&gt;Structured configuration is the type of configuration language I wanted for Djinn, whereby parameters can be grouped together into blocks, and nested within each other. Hence, the structure. The language I came up with was heavily influenced by HCL and &lt;a href="https://github.com/vstakhov/libucl"&gt;libucl&lt;/a&gt;, and has support for duration and size literal values. Below is what the language looks like,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;net {
    listen ":8443"

    tls {
        cert "/var/lib/ssl/server.crt"
        key  "/var/lib/ssl/server.key"
    }
}

provider github {
    client_id     "..."
    client_secret "..."
}

driver qemu {
    disks  "/var/lib/djinn/qemu"
    cpus   1
    memory 2KB
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;as you can see, it is very similar to HCL, but with less of what I call visual noise. The library developed for this is called &lt;a href="https://github.com/andrewpillar/config"&gt;config&lt;/a&gt;, and is used for decoding the configuration; there is no support yet for encoding. With this library you can enable environment variable expansion and support for includes. I have found that this strikes the balance I require of a configuration language: declarative, with limited visual noise, and easy for people to read. It is hardly a silver bullet, and will no doubt demonstrate its limitations depending on what it is you're trying to configure. Nonetheless, I have found it to be flexible for my use cases. You can see examples of this language in the &lt;a href="https://github.com/djinn-ci/djinn"&gt;djinn-ci/djinn&lt;/a&gt; repository itself, in the &lt;code&gt;dist&lt;/code&gt; directory.&lt;/p&gt;

</description>
      <category>go</category>
      <category>programming</category>
    </item>
    <item>
      <title>req, an HTTP scripting language</title>
      <dc:creator>Andrew Pillar</dc:creator>
      <pubDate>Sat, 26 Feb 2022 14:41:55 +0000</pubDate>
      <link>https://dev.to/andrewpillar/req-an-http-scripting-language-3o81</link>
      <guid>https://dev.to/andrewpillar/req-an-http-scripting-language-3o81</guid>
      <description>&lt;p&gt;Programming languages are always something that have fascinated me, how they're&lt;br&gt;
designed, how they're implemented, and how they're used. Whether they're a DSL&lt;br&gt;
(domain specific language) or more of a generic programming language. A&lt;br&gt;
programming language is always something I had wanted to take a stab at&lt;br&gt;
creating, even if it ended up being terrible, or being of no true utility, but&lt;br&gt;
only for the sake of learning. Well, over the Christmas break I decided to&lt;br&gt;
occupy my time on developing a language, one that was small and simple, designed&lt;br&gt;
for a specific use case that I had encountered but hadn't found a solution for.&lt;br&gt;
The language that I ended up developing was &lt;a href="https://github.com/andrewpillar/req"&gt;req&lt;/a&gt;, a language designed&lt;br&gt;
only for HTTP scripting.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Overview&lt;/li&gt;
&lt;li&gt;Why&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;What do I mean when I say HTTP scripting? Perhaps an example would be best to&lt;br&gt;
demonstrate, followed by an explanation,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Token = env "GH_TOKEN";
Headers = (
    Authorization: "Bearer $(Token)",
);

Resp = GET "https://api.github.com/user" $Headers -&amp;gt; send;

if $Resp.StatusCode == 200 {
    User = decode json $Resp.Body;
    writeln _ "Hello $(User["login"])";
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Above is what req looks like. It looks like your typical language, however it&lt;br&gt;
offers first-class support for making HTTP requests and working with their&lt;br&gt;
responses. The language makes use of builtin commands to handle the sending&lt;br&gt;
of requests, the encoding/decoding of data, and the reading/writing of data.&lt;br&gt;
These commands also return values that can be stored in variables. The output&lt;br&gt;
of one command can be sent as the input of another command via the &lt;code&gt;-&amp;gt;&lt;/code&gt;&lt;br&gt;
operator. There is no support for user defined commands.&lt;/p&gt;

&lt;p&gt;That's what the language looks like, and this is how it is run,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ req user.req
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;the above example makes use of the &lt;code&gt;GH_TOKEN&lt;/code&gt; environment variable, so if we&lt;br&gt;
wanted it to actually function we would need to make sure that was set&lt;br&gt;
before invocation,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ GH_TOKEN=&amp;lt;token&amp;gt; req user.req
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;So, even from this brief overview, you can see some familiarity with other&lt;br&gt;
languages out there. I call req a scripting language, as opposed to a&lt;br&gt;
programming language, because it is interpreted and extremely limited in scope&lt;br&gt;
and capabilities.&lt;/p&gt;

&lt;p&gt;req can also be used via the REPL, which is accessed simply by invoking the&lt;br&gt;
binary and passing no arguments to it. This can be used as a scratchpad to plan&lt;br&gt;
out what you want your scripts to do, or as a means of exploring an HTTP service&lt;br&gt;
and its endpoints,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ req
req devel a5ddbe7 Sat Jan 29 11:34:38 2022 +0000
&amp;gt; Resp = GET "https://httpbin.org/json" -&amp;gt; send
&amp;gt; writeln _ $Resp
HTTP/2.0 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Content-Length: 429
Content-Type: application/json
Date: Sat, 26 Feb 2022 14:14:56 GMT
Server: gunicorn/19.9.0

{
  "slideshow": {
    "author": "Yours Truly",
    "date": "date of publication",
    "slides": [
      {
        "title": "Wake up to WonderWidgets!",
        "type": "all"
      },
      {
        "items": [
          "Why &amp;lt;em&amp;gt;WonderWidgets&amp;lt;/em&amp;gt; are great",
          "Who &amp;lt;em&amp;gt;buys&amp;lt;/em&amp;gt; WonderWidgets"
        ],
        "title": "Overview",
        "type": "all"
      }
    ],
    "title": "Sample Slide Show"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Why
&lt;/h2&gt;

&lt;p&gt;Now, why did I want to create yet another scripting language? Two reasons.&lt;/p&gt;

&lt;p&gt;The first was for the sake of learning. Some time ago I had received copies of&lt;br&gt;
the books &lt;a href="https://interpreterbook.com/"&gt;Writing An Interpreter In Go&lt;/a&gt; and&lt;br&gt;
&lt;a href="https://compilerbook.com/"&gt;Writing A Compiler In Go&lt;/a&gt;. I worked through the&lt;br&gt;
first book at the start of 2020, enjoyed it, and wanted to put what I had&lt;br&gt;
learned into practice. At the time, however, I couldn't think of a fun&lt;br&gt;
or interesting language that I would have wanted to develop, so I shelved the&lt;br&gt;
prospect for some time, which brings me to my second reason...&lt;/p&gt;

&lt;p&gt;I think most developers have to interact with an HTTP service at some point&lt;br&gt;
during their day job, in a way which would require some form of scripting.&lt;br&gt;
Perhaps you're trying to debug an API, so you pull open a terminal and fire off&lt;br&gt;
a few &lt;a href="https://curl.se/"&gt;curl&lt;/a&gt; requests, and see what response comes back. Or maybe you&lt;br&gt;
want to scrape a site for its information. Either way, you're doing something&lt;br&gt;
that involves some tinkering.&lt;/p&gt;

&lt;p&gt;I have been there too. And it is this scenario that made me wonder if there was&lt;br&gt;
a tool out there that allowed for easily working with HTTP requests and their&lt;br&gt;
responses in a programmatic way. Sure, you could use curl and shell scripting,&lt;br&gt;
and wrangle the data through sed, awk, and jq to get what you need, but this&lt;br&gt;
approach can be fragile. On the other hand, you could use a full-fledged&lt;br&gt;
programming language. This way you would have more control, but it can be&lt;br&gt;
a bit too verbose at times if all you want to do is send off an HTTP&lt;br&gt;
request.&lt;/p&gt;

&lt;p&gt;This is what prompted my development of req: a high-level scripting language&lt;br&gt;
that allows you to easily send HTTP requests and work with their responses. The&lt;br&gt;
main benefit of the language, in my opinion, is that it tries to make working&lt;br&gt;
with HTTP requests as semantic as possible. Take the following,&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Resp = GET "https://httpbin.org/json" -&amp;gt; send;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;here we want to send a &lt;code&gt;GET&lt;/code&gt; request to the &lt;code&gt;https://httpbin.org/json&lt;/code&gt; endpoint.&lt;br&gt;
Writing this out, either in a script or the REPL, can feel more natural than&lt;br&gt;
what you might otherwise write when using curl, for example. Then let's say we&lt;br&gt;
want to decode the JSON response body,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Data = decode json $Resp.Body;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;again, it's like we're describing what we want to do with the response. This is&lt;br&gt;
what I wanted to achieve with this language: keep it limited in scope, and&lt;br&gt;
hopefully offer some utility in the realm of HTTP scripting.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;What I've covered in this post is a simple overview of the language, and the&lt;br&gt;
reasons behind its implementation. I haven't gone into my justifications&lt;br&gt;
as to how and why the language was designed the way it was, but that could&lt;br&gt;
perhaps be another post down the line. If what I've shown so far interests you,&lt;br&gt;
then feel free to take a look at the code on GitHub:&lt;br&gt;
&lt;a href="https://github.com/andrewpillar/req"&gt;https://github.com/andrewpillar/req&lt;/a&gt;, and feel free to look through the&lt;br&gt;
&lt;a href="https://github.com/andrewpillar/req/tree/main/docs"&gt;documentation&lt;/a&gt; there too, to get a sense of the language.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>go</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Djinn CI a simple continuous integration platform</title>
      <dc:creator>Andrew Pillar</dc:creator>
      <pubDate>Tue, 26 Oct 2021 17:20:34 +0000</pubDate>
      <link>https://dev.to/andrewpillar/djinn-ci-a-simple-continuous-integration-platform-1job</link>
      <guid>https://dev.to/andrewpillar/djinn-ci-a-simple-continuous-integration-platform-1job</guid>
      <description>&lt;p&gt;Djinn CI is a simple continuous integration platform that I have been developing in my free time. It has reached the stage where I'm happy enough with its stability to start debuting it to people.&lt;/p&gt;

&lt;p&gt;Djinn CI is &lt;a href="https://github.com/djinn-ci/djinn"&gt;open source&lt;/a&gt; software, so you can host it on your own infrastructure. There is, however, also a &lt;a href="https://about.djinn-ci.com"&gt;hosted&lt;/a&gt; version that you can pay to use should you not wish to host it yourself. To start using Djinn CI you can either create an account or sign in with GitHub or GitLab.&lt;/p&gt;

&lt;p&gt;Detailed below is a list of some of the features on offer in Djinn CI,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build objects (files placed inside of a build environment), and build artifacts&lt;/li&gt;
&lt;li&gt;Multi-repository builds&lt;/li&gt;
&lt;li&gt;Cron jobs (repeatable builds on a schedule)&lt;/li&gt;
&lt;li&gt;Namespaces for organizing builds and their resources and for working with collaborators&lt;/li&gt;
&lt;li&gt;Build tagging and auto ref-tagging of builds submitted via push events&lt;/li&gt;
&lt;li&gt;Custom QCOW2 images for builds that use QEMU&lt;/li&gt;
&lt;li&gt;Integrates with GitHub and GitLab&lt;/li&gt;
&lt;li&gt;Namespace webhooks for namespace events&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Whilst I am happy with the stability of the software and its feature set, there is always room for improvement. So, if you're looking for a new CI tool to use, please check this out, and feel free to reach out to me if you have any questions. You can find my contact details in my profile, via my website or Twitter.&lt;/p&gt;

</description>
      <category>showdev</category>
      <category>go</category>
      <category>ci</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Working with SQL Relations in Go - Part 5</title>
      <dc:creator>Andrew Pillar</dc:creator>
      <pubDate>Mon, 13 Apr 2020 16:12:09 +0000</pubDate>
      <link>https://dev.to/andrewpillar/working-with-sql-relations-in-go-part-5-54oo</link>
      <guid>https://dev.to/andrewpillar/working-with-sql-relations-in-go-part-5-54oo</guid>
      <description>&lt;p&gt;Over these series of posts I have been exploring an approach that could be taken when working with SQL relationships in Go. The precursor to all of this was the initial &lt;a href="https://andrewpillar.com/programming/2019/07/13/orms-and-query-building-in-go/"&gt;ORMs and Query Building in Go&lt;/a&gt;. This explored one aspect of ORMs, the query building, but it didn't address how relationships could also be handled in an idiomatic way. So before I wrap up this series in this final post, let us address the code we have currently.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Finishing up the Application&lt;/li&gt;
&lt;li&gt;Callbacks and Interfaces&lt;/li&gt;
&lt;li&gt;A Note on Generics&lt;/li&gt;
&lt;li&gt;Why Not Make this a Library&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you're interested in taking a look at the code for the example application I put together, then take a look at it online here: &lt;a href="https://github.com/andrewpillar/blogger"&gt;https://github.com/andrewpillar/blogger&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Finishing up the Application
&lt;/h2&gt;

&lt;p&gt;Previously, we successfully implemented the &lt;code&gt;Index&lt;/code&gt;, and &lt;code&gt;Show&lt;/code&gt; methods for the Post entity. However, for the Category entity we need to update the &lt;code&gt;Show&lt;/code&gt; method so that we return a list of posts for that category. This can be done by utilising the &lt;code&gt;model.Binder&lt;/code&gt; interface we implemented on the &lt;code&gt;post.Store&lt;/code&gt; struct.&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// category/handler.go
package category

import (
...
    "blogger/post"
...
)
...
func (h Handler) Show(w http.ResponseWriter, r *http.Request) {
...
    pp, paginator, err := post.NewStore(h.DB, c).Index(r.URL.Query())

    if err != nil {
        // handle error
    }

    data := struct{
        Category *Category
        Prev     string
        Next     string
        Posts    []*post.Post
    }{
        Category: c,
        Prev:     fmt.Sprintf("/category/%d?page=%d", c.ID, paginator.Prev),
        Next:     fmt.Sprintf("/category/%d?page=%d", c.ID, paginator.Next),
        Posts:    pp,
    }
    w.Header().Set("Content-Type", "application/json; charset=utf-8")
    json.NewEncoder(w).Encode(data)
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You'll notice here that we imported the &lt;code&gt;blogger/post&lt;/code&gt; package, which will result in an import cycle. This can be easily fixed by creating a sub-package called &lt;code&gt;web&lt;/code&gt; in both the &lt;code&gt;post&lt;/code&gt; and &lt;code&gt;category&lt;/code&gt; packages to hold the web handler implementations.&lt;/p&gt;
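&lt;p&gt;Concretely, the fix amounts to a layout along these lines (a sketch; the actual file names in blogger may differ),&lt;/p&gt;

```
blogger/
  category/
    category.go    // category.Store and the Category model
    web/
      handler.go   // imports blogger/category and blogger/post
  post/
    post.go        // post.Store and the Post model
    web/
      handler.go   // imports blogger/post and blogger/category
```

&lt;p&gt;With the handlers pushed down into the &lt;code&gt;web&lt;/code&gt; sub-packages, the &lt;code&gt;post&lt;/code&gt; and &lt;code&gt;category&lt;/code&gt; packages themselves no longer need to import each other.&lt;/p&gt;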

&lt;p&gt;The application at this point is mostly finished. If you wish to see a complete example of this, then take a look at the repository on GitHub, &lt;a href="https://github.com/andrewpillar/blogger"&gt;https://github.com/andrewpillar/blogger&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now let me go about trying to justify the approach I took to this problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  Callbacks and Interfaces
&lt;/h2&gt;

&lt;p&gt;When it comes to working with SQL relationships in Go, there are going to be similarities between how things are done. For example, we want to load relationships, as well as bind them. Not to mention the obvious similarities between the entities we have: they all have 64-bit integer primary keys, and they each have different relations.&lt;/p&gt;

&lt;p&gt;Because of these similarities it is only natural to look to an interface to implement what we need when it comes to relationship loading. So when it comes to writing the actual code we can just take the interface we have, and tell it "load in the relationships I want", without necessarily caring how they're loaded in. Furthermore, we also implemented a light interface to represent our entity models.&lt;/p&gt;

&lt;p&gt;The actual logic for binding the models to one another is deferred to a function callback. This makes sense to do, since different models could be bound in different ways. However, with the implementation of the &lt;code&gt;model.Model&lt;/code&gt; interface we were able to implement the &lt;code&gt;model.Bind&lt;/code&gt; function to have a generic way of binding our models together, assuming that the models have 64-bit integer keys.&lt;/p&gt;

&lt;p&gt;We take these function callbacks a step further, and allow for the description of how these models are bound together via the &lt;code&gt;model.Relation&lt;/code&gt; function and &lt;code&gt;model.RelationFunc&lt;/code&gt; type.&lt;/p&gt;

&lt;p&gt;As you can see, when coupled together, callbacks and interfaces can achieve what we want in a way that is fairly idiomatic. And we managed to do this without having to dip into the &lt;code&gt;reflect&lt;/code&gt; package.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Note on Generics
&lt;/h2&gt;

&lt;p&gt;I touched on generic behaviour briefly, so I may as well add my two cents on the whole generics situation in Go.&lt;/p&gt;

&lt;p&gt;When I first approached Go I rather liked the lack of support for generics, and wouldn't have minded if the language had continued without them. This belief mainly arose from the fear of people abusing generics to write god code (code that is so generic and arbitrary it could do anything, and yet is hard to understand). However, I think my fears in this regard are unfounded, mainly because some people like abusing &lt;code&gt;interface{}&lt;/code&gt; and &lt;code&gt;reflect&lt;/code&gt; to achieve this instead.&lt;/p&gt;

&lt;p&gt;That being said, I cannot deny that certain things would be easier with generics in Go. It is comforting to see some of the &lt;a href="https://blog.tempus-ex.com/generics-in-go-how-they-work-and-how-to-play-with-them/"&gt;performance gains&lt;/a&gt; that can be made via the use of generics in Go.&lt;/p&gt;

&lt;p&gt;So I would welcome generics in Go, and hope that people use them responsibly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Not Make this a Library
&lt;/h2&gt;

&lt;p&gt;One final thing I should address before wrapping this up is why I didn't take what I have written and turn it into a library. Well, I didn't for a multitude of reasons.&lt;/p&gt;

&lt;p&gt;The first is that I don't want to make any assumptions about how people go about modelling their data. For example, you may use something other than an integer for your primary key, perhaps a string, or a byte array. I think this is another area where ORMs fall short: making assumptions about the data being worked with.&lt;/p&gt;

&lt;p&gt;Second of all, this implementation only contains a handful of functions and interfaces. And, because of the first point above, it makes a number of assumptions about the data.&lt;/p&gt;

&lt;p&gt;Finally, since the implementation is only a handful of functions and interfaces, I don't think this would make for a very substantial library. Also, I would defer to one of the &lt;a href="https://go-proverbs.github.io/"&gt;Go proverbs&lt;/a&gt; here too, "A little copying is better than a little dependency".&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I hope the ideas presented throughout this series of posts will help you when it comes to working with SQL relationships in Go. This is something I have struggled with, especially since the solutions out there for modelling data are lacking, perhaps due to the points I made earlier. I also want to say, that these ideas are not gospel, just an approach that I have found that works for me. As always feel free to contact me to discuss this further.&lt;/p&gt;

</description>
      <category>go</category>
      <category>sql</category>
      <category>api</category>
    </item>
  </channel>
</rss>
