<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jilles van Gurp</title>
    <description>The latest articles on DEV Community by Jilles van Gurp (@jillesvangurp).</description>
    <link>https://dev.to/jillesvangurp</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F717%2Ff6e0ffba-950d-4e9b-b493-b9dd0ac1d751.jpg</url>
      <title>DEV Community: Jilles van Gurp</title>
      <link>https://dev.to/jillesvangurp</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jillesvangurp"/>
    <language>en</language>
    <item>
      <title>Escaping DevOps hell with Codex</title>
      <dc:creator>Jilles van Gurp</dc:creator>
      <pubDate>Thu, 12 Mar 2026 14:35:39 +0000</pubDate>
      <link>https://dev.to/jillesvangurp/escaping-devops-hell-with-codex-5ap7</link>
      <guid>https://dev.to/jillesvangurp/escaping-devops-hell-with-codex-5ap7</guid>
      <description>&lt;p&gt;If you are a developer, you are probably well aware of all the AI goodness that has been happening. I won't bore you with the hyperbole.&lt;/p&gt;

&lt;p&gt;My weapon of choice is &lt;strong&gt;Codex&lt;/strong&gt;. Other AI coding tools are available, but debating which one is best at what is not a very interesting conversation to me. What I want to focus on here is &lt;strong&gt;DevOps&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I'm the CTO of a small company, which means I switch hats a lot. I used to call myself a backend developer, but these days I do everything vaguely tech-related in our company, plus marketing, sales, and a bunch of other supporting roles. There are only a few of us in the company. If it needs doing, one of us needs to do it. ChatGPT and Codex allow us to get shit done.&lt;/p&gt;

&lt;h2&gt;
  
  
  DevOps sucks. It really does.
&lt;/h2&gt;

&lt;p&gt;One of the hats I end up wearing is &lt;strong&gt;DevOps&lt;/strong&gt;. I hate doing DevOps. I've been exposed to this stuff for well over two decades. I know how to do it properly. I've done everything from uploading zip files over ISDN lines and remotely restarting Tomcat, as you did in the early 2000s, to automating deployments with Puppet, using CloudFormation in AWS, faffing about with Kubernetes, Docker Swarm, Terraform, and much more. Lately, my tools of choice are Ansible, Docker, and Docker Compose. I've invested countless hours in learning all that stuff and trying to apply it.&lt;/p&gt;

&lt;p&gt;Why do I think DevOps sucks? To me, it feels like dropping out of warp, to use a Star Trek analogy. You have all these grand plans to get some big feature out, and then you find yourself micromanaging some insanely arcane shit in Linux to get it to tell the time correctly, deal with some convoluted networking thing, or whatever. You get blocked for weeks on end. All that to solve the age-old problem of &lt;strong&gt;"put this fucking thing over there and run it!"&lt;/strong&gt; (pardon my French). I call this problem inception. You start with a grand vision: "Our shiny new backend is ready to go, let's deploy it and announce it to the world." Somehow, that escalates into: "I need to figure out how to set up a bastion and private networking so I don't expose my database to the public internet." One thing leads to another, and before you know it, you've sunk three months into the project.&lt;/p&gt;

&lt;h2&gt;
  
  
  DevOps should be simple but isn't
&lt;/h2&gt;

&lt;p&gt;DevOps is supposed to be about automating what should be automated. So, why does DevOps still feel so manual? The answer is that this stuff is genuinely complicated, and over decades we have built systems full of bear traps with terrible failure modes: data loss, security breaches, downtime, and worse. There is just a lot of stuff that a DevOps person needs to know and do. Taking shortcuts can lead to disaster. That's why it often ends up being a full-time job.&lt;/p&gt;

&lt;p&gt;Every once in a while, I get sucked into doing a stretch of DevOps that makes me feel stupid, because it should be simple. Instead, I end up pulling my hair out for days trying to solve weird shit that refuses to work without ritualistic bullshit, magical command-line incantations, and configuration files that need to be exactly right. I know some truly excellent operations and DevOps people and, honestly, I suffer a bit from imposter syndrome whenever I have to do this stuff myself. I'm skilled enough to be dangerous, and I know it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Codex takes the pain away
&lt;/h2&gt;

&lt;p&gt;I got pulled into the latest round of this two weeks ago. We took a hard look at our setup in Google and concluded that we were spending about 10K per year on hosting. It works great, but it's a lot, and we'd prefer to give ourselves a little raise instead of donating to Google. So, I embarked on a plan to migrate to Hetzner.&lt;/p&gt;

&lt;p&gt;But I used &lt;strong&gt;Codex&lt;/strong&gt; to do it. We also have a second setup that, for customer reasons, runs in Telekom Cloud, which is basically an OpenStack-based environment. I already had a lot of Ansible scripts to provision that.&lt;/p&gt;

&lt;p&gt;I started by telling Codex to refactor and modernize that codebase and set up a new inventory for my brand-new Hetzner setup. I created a few VMs in Hetzner, a private network, and a load balancer. One of the VMs acts as a bastion so you can SSH into it to reach the other ones that don't have a public IP address.&lt;/p&gt;
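&lt;p&gt;In practice, the bastion pattern is mostly SSH configuration. A minimal sketch of the idea (all host names and addresses here are placeholders, not our actual inventory):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ~/.ssh/config -- hop through the bastion to reach the private network
Host bastion
    HostName 203.0.113.10      # the one VM with a public IP (placeholder)
    User ubuntu

Host app1 app2 db1             # VMs without public IPs (placeholder names)
    ProxyJump bastion
    User ubuntu
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With something like that in place, both plain &lt;code&gt;ssh&lt;/code&gt; and Ansible can address the private machines by name; SSH handles the proxying transparently.&lt;/p&gt;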

&lt;p&gt;In small steps, I fixed, upgraded, and modernized the Ansible scripts, using the new Hetzner setup as the test bed. I let Codex do all the work. I got it to fix the Ansible code and drive the provisioning through the tools on my laptop and over SSH. I set up skills and guardrails around the process.&lt;/p&gt;

&lt;p&gt;When the Ansible scripts failed, I got it to debug why and implement fixes. I got it to research workarounds. A lot of this was me guiding it along the way. I was leaning on my 2+ decades of experience, but I did not touch a single line of code.&lt;/p&gt;

&lt;p&gt;This is actually key. As you go, you will see it struggle and figure things out. When that happens, you just ask it to record what it just did as a skill. You read over the markdown it generates, suggest further improvements, and the next time it does better. This feedback loop between you and the AI is super important. It's the difference between doing things your way, i.e. the right way, and doing them in whatever way it dreams up. Knowing what the right way is comes down to experience. And that is now our added value as software makers.&lt;/p&gt;

&lt;p&gt;After &lt;strong&gt;3 days&lt;/strong&gt;, I switched over the DNS and shut down the last of our Google VMs. That's 3 days from creating the Hetzner account, not counting the extra day I spent getting them to raise my VM limit because I was still in my first month.&lt;/p&gt;

&lt;h2&gt;
  
  
  Doing rolling restarts with Elasticsearch
&lt;/h2&gt;

&lt;p&gt;Part of our setup is Elasticsearch. If you've never used it, it's a clustered search engine that achieves high availability through node and cluster coordination. To upgrade it, you have to do a so-called rolling restart. You can't just yank the plug and upgrade all nodes because that would make the cluster unavailable and cause all sorts of issues with split brains, data loss, and worse. This is exactly the kind of thing you normally want an experienced DevOps person to handle responsibly. DevOps is full of tasks where experience matters, because the wrong shortcut can quietly turn into an outage or data loss.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;The XXX Elasticsearch cluster upgrade is complete.

All three nodes are now on 9.3.1:

es1 9.3.1
es2 9.3.1
es3 9.3.1
Final cluster state:

green
3 nodes / 3 data nodes
0 relocating shards
0 initializing shards
0 unassigned shards
Notable preflight/result details:

Latest 9.3.x release verified: 9.3.1
Snapshot health passed before rollout; latest successful snapshot was 2026_03_12_05_33
System feature migration was required and completed before the first node restart
I did the rollout one node at a time in this order:

es3
es2
es1
No code changes were needed in the repo.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This morning, I asked Codex to reprovision the cluster in our old data center. I last did this in September, manually. It first did a quick inventory check and a dry run on one of the nodes. It verified that the cluster was in a healthy state (green) and that the latest snapshot had completed successfully. Then it went ahead and executed the rollout, following the plan to the letter. The block above is the report it gave me when it finished. The whole thing took about 20 minutes. I was on standby to approve the next step the few times it came back to ask, as my skill for this specifies.&lt;/p&gt;

&lt;p&gt;Before kicking this off, I iterated with Codex on writing a skill for rolling restarts as part of my Hetzner migration. The skill covers what an experienced DevOps person would normally do: preflight checks before kickoff, confirmation gates that ask me for permission, and guidance for "what if this happens" scenarios. There's plenty of advice on the internet, and I even wrote about this exact topic years ago in &lt;a href="https://www.jillesvangurp.com/blog/2016-08-26-running-elasticsearch-in-a-docker-1-12-swarm.html" rel="noopener noreferrer"&gt;Running Elasticsearch in a Docker 1.12 Swarm&lt;/a&gt;. Writing blog articles was something I used to do more regularly as a way of saying, "I should remember this weird thing I just spent 2 days figuring out so I don't have to spend that time again." It's the pre-AI way of creating and recording skills.&lt;/p&gt;
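&lt;p&gt;The preflight and per-node gating in such a skill boil down to a handful of cluster API calls. A rough sketch of the kind of commands it encodes (assuming a plain, unauthenticated endpoint on localhost; a real cluster will have auth and different hosts):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ES=http://localhost:9200   # assumption: plain local endpoint, no auth

# Preflight: refuse to start unless the cluster is green
curl -fsS "$ES/_cluster/health?wait_for_status=green" --max-time 60

# Before taking a node down: restrict shard allocation so nothing relocates
curl -fsS -XPUT "$ES/_cluster/settings" -H 'Content-Type: application/json' \
  -d '{"persistent": {"cluster.routing.allocation.enable": "primaries"}}'

# ... upgrade and restart the node here, e.g. via Ansible ...

# Afterwards: re-enable allocation and wait for green before the next node
curl -fsS -XPUT "$ES/_cluster/settings" -H 'Content-Type: application/json' \
  -d '{"persistent": {"cluster.routing.allocation.enable": null}}'
curl -fsS "$ES/_cluster/health?wait_for_status=green" --max-time 600
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The value of the skill is not the individual commands but the discipline of running them in the right order, for one node at a time, and stopping when anything looks off.&lt;/p&gt;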

&lt;p&gt;If you are interested, you can find the skill file I used &lt;a href="https://gist.github.com/jillesvangurp/d4ea63b3686d56e517524192db033be6" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's next?
&lt;/h2&gt;

&lt;p&gt;Over the past few weeks, I've been planning and executing a ridiculous amount of work. I've launched two new websites via Cloudflare. I've created a few new OSS projects. I've done some major surgery on our two live deployments. I've also shipped a few major new features. Somehow, I have also found some time to try out OpenClaw and play with some new AI stuff. I've compressed months of work into a few weeks. I'm not going to lie: I'm exhausted, but I'm also energized. This is crazy fun.&lt;/p&gt;

&lt;p&gt;Next on my agenda for modernizing DevOps bullshit that I don't want to deal with is getting some &lt;strong&gt;world-class AI monitoring and alerting&lt;/strong&gt; in place. I need telemetry, logging, and all the rest. I have some of that already, but having it and actually using it are two different things. I want an AI to handle the operational discipline part: checking uptimes, verifying backups, watching resource usage, and making sure everything is working as it should. I want it to give me daily reports, summarize what matters, and escalate issues. I don't want to take a sabbatical to set all this up manually. I just want to get this shit done.&lt;/p&gt;

&lt;h2&gt;
  
  
  If this sounds like something your team needs
&lt;/h2&gt;

&lt;p&gt;One of the other things I did with Codex recently was launch our AI services and consulting site: &lt;a href="https://formationxyz.com/" rel="noopener noreferrer"&gt;formationxyz.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The pitch is simple. A lot of companies can see the opportunity with AI, but they struggle to turn that into practical systems, useful workflows, and actual leverage for their teams. That is exactly the gap we want to help close.&lt;/p&gt;

&lt;p&gt;At &lt;a href="https://formationxyz.com/" rel="noopener noreferrer"&gt;FORMATION XYZ&lt;/a&gt;, we help small teams automate repetitive work, build practical AI systems, and put agentic workflows in place that reduce manual effort and create more capacity. If the kind of work I described above sounds interesting to you, and you want help applying AI inside your company in a pragmatic way, we can help.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>ai</category>
      <category>ansible</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Kts Scripting of Yaml &amp; Json Dialects</title>
      <dc:creator>Jilles van Gurp</dc:creator>
      <pubDate>Tue, 09 Aug 2022 11:14:00 +0000</pubDate>
      <link>https://dev.to/jillesvangurp/kts-scripting-of-yaml-json-dialects-55if</link>
      <guid>https://dev.to/jillesvangurp/kts-scripting-of-yaml-json-dialects-55if</guid>
      <description>&lt;p&gt;I've been using Kotlin for quite a few years now. And while I've been using Gradle with the Kotlin scripting support, I've not done much else with Kotlin's scripting ability until fairly recently.&lt;/p&gt;

&lt;p&gt;Kotlin scripting (kts) allows you to write scripts with the slightly unfortunate file-name ending of &lt;code&gt;.main.kts&lt;/code&gt; that can be interpreted by the Kotlin compiler on the command line. Adding &lt;code&gt;#!/usr/bin/env kotlin&lt;/code&gt; to your script tells your shell to use Kotlin to execute the script. Any dependencies needed by the script are cached after the first use. So, running these scripts is generally pretty quick once you have all the jars you need.&lt;/p&gt;
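&lt;p&gt;A minimal example of what such a script looks like (the dependency here is just an illustration; any Maven coordinates work):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;#!/usr/bin/env kotlin
// hello.main.kts -- dependencies are resolved on the first run and cached
@file:DependsOn("org.apache.commons:commons-lang3:3.12.0")

import org.apache.commons.lang3.StringUtils

println(StringUtils.capitalize("hello from kotlin scripting"))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;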

&lt;h2&gt;
  
  
  Scripting Github Actions
&lt;/h2&gt;

&lt;p&gt;One of my team members, &lt;a href="https://twitter.com/nikkyai"&gt;Nikky&lt;/a&gt;, got annoyed with the verbosity and insane amount of copy-paste reuse needed to drive Github Actions. And, true to her nature, she promptly fixed it by using and contributing to the &lt;a href="https://krzema12.github.io/github-actions-kotlin-dsl/"&gt;GitHub Actions Kotlin DSL&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This Kotlin DSL allows you to script github actions using kts. The idea is you write your actions as a &lt;code&gt;.main.kts&lt;/code&gt; file, give it execute permissions and then it spits out Yaml files when you run it (one for each workflow that you configure). All the repetitive stuff? Use functions or variables or constants. So much nicer.&lt;/p&gt;

&lt;p&gt;Here's a short example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="p"&gt;!/&lt;/span&gt;&lt;span class="n"&gt;usr&lt;/span&gt;&lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="n"&gt;bin&lt;/span&gt;&lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="n"&gt;env&lt;/span&gt; &lt;span class="n"&gt;kotlin&lt;/span&gt;
&lt;span class="err"&gt;@&lt;/span&gt;&lt;span class="n"&gt;file&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nc"&gt;DependsOn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"it.krzeminski:github-actions-kotlin-dsl:0.21.0"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;it.krzeminski.githubactions.actions.appleboy.SshActionV0&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;it.krzeminski.githubactions.domain.RunnerType&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;it.krzeminski.githubactions.domain.triggers.Push&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;it.krzeminski.githubactions.domain.triggers.WorkflowDispatch&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;it.krzeminski.githubactions.dsl.expressions.expr&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;it.krzeminski.githubactions.dsl.workflow&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;it.krzeminski.githubactions.Yaml.writeToFile&lt;/span&gt;

&lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;branch&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"production"&lt;/span&gt;
&lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;environment&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"our-environment"&lt;/span&gt;

&lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;appServers&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;mapOf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;"app1"&lt;/span&gt; &lt;span class="n"&gt;to&lt;/span&gt; &lt;span class="s"&gt;"192.168.0.152"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s"&gt;"enrich1"&lt;/span&gt; &lt;span class="n"&gt;to&lt;/span&gt; &lt;span class="s"&gt;"192.168.0.248"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s"&gt;"app2"&lt;/span&gt; &lt;span class="n"&gt;to&lt;/span&gt; &lt;span class="s"&gt;"192.168.0.43"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;bastionIp&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"xxxxxxxx"&lt;/span&gt;
&lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;deployKey&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;expr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"secrets.DEPLOY_KEY"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;sshAction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;script&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nc"&gt;SshActionV0&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;SshActionV0&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;host&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;port&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;22&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;username&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"ubuntu"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;key&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;deployKey&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;proxyHost&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bastionIp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;proxyPort&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;22&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;proxyUsername&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"ubuntu"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;proxyKey&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;deployKey&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;script&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;script&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nf"&gt;workflow&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"Deploy to $branch"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;on&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;listOf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="nc"&gt;Push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;branches&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;listOf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;branch&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;
        &lt;span class="nc"&gt;WorkflowDispatch&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;sourceFile&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;__FILE__&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toPath&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="n"&gt;targetFileName&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"deploy_${branch}_$environment.yml"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

    &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

    &lt;span class="nf"&gt;job&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"publish-telekom-$branch"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;runsOn&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;RunnerType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;UbuntuLatest&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

        &lt;span class="n"&gt;appServers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;forEach&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ip&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt;
            &lt;span class="nf"&gt;uses&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"restart-$name-$branch-$environment"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;action&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;sshAction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ip&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"/opt/formation/bin/restart"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}.&lt;/span&gt;&lt;span class="nf"&gt;writeToFile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note how we use a loop to call the sshAction function three times. The generated Yaml file is much longer and repeats the full action configuration three times; maintaining that by hand would be anything but DRY and super brittle. And that's on top of the complete lack of type checks, auto-completion, etc. Doing this with kts is so much nicer, much less error prone, and much quicker to figure out.&lt;/p&gt;

&lt;p&gt;The little &lt;code&gt;writeToFile(true)&lt;/code&gt; writes out the Yaml and adds a consistency check that ensures the committed Yaml matches the script output. So, modifying the kts script and then forgetting to run it will fail the action.&lt;/p&gt;

&lt;p&gt;There is much more to this library of course. A lot of github actions are supported out of the box (see &lt;a href="https://krzema12.github.io/github-actions-kotlin-dsl/supported-actions/"&gt;here&lt;/a&gt; for an overview) and you can add support for additional extensions pretty easily; or just pass in a map.&lt;/p&gt;

&lt;p&gt;Also, you can of course produce more than one action from a single kts script. We use this to configure actions for e.g. pull requests and merges to master. The latter has some continuous deployment related actions but of course they share a lot of code. Likewise, most of our actions include a slack notification and share a lot of configuration. With Yaml, you end up with a lot of duplication. With kts, you can get rid of all that duplication.&lt;/p&gt;

&lt;p&gt;Finally, not having to deal with Yaml's weird syntax is a big plus. Manually editing Yaml seems very brittle, verbose, and error prone.&lt;/p&gt;

&lt;h2&gt;
  
  
  Applying my new knowledge to kt-search
&lt;/h2&gt;

&lt;p&gt;I've been working on &lt;a href="https://github.com/jillesvangurp/kt-search"&gt;kt-search&lt;/a&gt;, my Kotlin Multi-Platform client for Opensearch and Elasticsearch for a while. Somehow, it never occurred to me that using that in combination with kts is such an obvious thing to do.&lt;/p&gt;

&lt;p&gt;So, that was easily remedied and I now have a &lt;a href="https://github.com/jillesvangurp/kt-search-kts"&gt;companion library&lt;/a&gt; that combines that with &lt;code&gt;kotlinx-cli&lt;/code&gt; to make writing scripts very straightforward.&lt;/p&gt;

&lt;p&gt;Here's a little script that checks status of your Elasticsearch/Opensearch cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="p"&gt;!/&lt;/span&gt;&lt;span class="n"&gt;usr&lt;/span&gt;&lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="n"&gt;bin&lt;/span&gt;&lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="n"&gt;env&lt;/span&gt; &lt;span class="n"&gt;kotlin&lt;/span&gt;

&lt;span class="err"&gt;@&lt;/span&gt;&lt;span class="n"&gt;file&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nc"&gt;Repository&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"https://jitpack.io"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="err"&gt;@&lt;/span&gt;&lt;span class="n"&gt;file&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nc"&gt;Repository&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"https://maven.tryformation.com/releases"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="err"&gt;@&lt;/span&gt;&lt;span class="n"&gt;file&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nc"&gt;DependsOn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"com.github.jillesvangurp:kt-search-kts:0.1.3"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;com.jillesvangurp.ktsearch.ClusterStatus&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;com.jillesvangurp.ktsearch.clusterHealth&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;com.jillesvangurp.ktsearch.kts.addClientParams&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;com.jillesvangurp.ktsearch.kts.searchClient&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;com.jillesvangurp.ktsearch.root&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;kotlinx.cli.ArgParser&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;kotlinx.coroutines.runBlocking&lt;/span&gt;

&lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;parser&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ArgParser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"script"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;searchClientParams&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;parser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addClientParams&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;parser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;client&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;searchClientParams&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;searchClient&lt;/span&gt;

&lt;span class="c1"&gt;// now use the client as normally in a runBlocking block&lt;/span&gt;
&lt;span class="nf"&gt;runBlocking&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;clusterStatus&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;clusterHealth&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;root&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;let&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;rootResp&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt;
        &lt;span class="nf"&gt;println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="s"&gt;"""
                Cluster name: ${rootResp.clusterName}
                Search Engine distribution: ${rootResp.version.distribution}
                Version: ${rootResp.version.number}
                Status: ${clusterStatus.status}
            """&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;trimIndent&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Several things are happening here. The script pulls in the &lt;code&gt;kt-search-kts&lt;/code&gt; jar and its dependency &lt;code&gt;kt-search&lt;/code&gt;, as well as their transitive dependencies.&lt;/p&gt;

&lt;p&gt;Then it creates a &lt;code&gt;kotlinx-cli&lt;/code&gt; parser and adds the parameters for host, port, and the other settings needed to construct a search client. And then it calls the &lt;code&gt;searchClient&lt;/code&gt; extension property on that to create the client with those settings.&lt;/p&gt;

&lt;p&gt;And then we use it. You can do whatever you want with this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run some queries&lt;/li&gt;
&lt;li&gt;Do some bulk indexing&lt;/li&gt;
&lt;li&gt;Use the snapshot APIs&lt;/li&gt;
&lt;li&gt;Configure cluster settings&lt;/li&gt;
&lt;li&gt;Index management&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I've seen other scripting languages used for this; Python and Go seem to be popular options. IMHO, this is nicer: more type safety, less guessing, less verbosity.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to run these scripts
&lt;/h2&gt;

&lt;p&gt;To run these scripts, you need to install Kotlin 1.7.x via your package manager of choice. Homebrew works, there's a snap package, there's an Arch package, etc. Whatever OS and package manager you use, you can probably make it run. And of course, you can also use Docker for this.&lt;/p&gt;
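&lt;p&gt;For example (package names may differ per distribution):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# pick whichever matches your setup
brew install kotlin                  # macOS / Homebrew
sudo snap install --classic kotlin   # Ubuntu / snap
sdk install kotlin                   # any OS with SDKMAN

kotlin -version                      # sanity check
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;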

&lt;p&gt;After that, just make sure the shebang is set correctly ('#!/usr/bin/env kotlin') and give your script execute permission:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;chmod &lt;/span&gt;755 myscript.main.kts
./myscript.main.kts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Is it perfect?
&lt;/h2&gt;

&lt;p&gt;Of course this is far from perfect. In my opinion, JetBrains can and should make a big effort to make this way nicer.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;.main.kts&lt;/code&gt; filename ending is silly. This should be just &lt;code&gt;.kts&lt;/code&gt;. I'm sure there's a reason for this but I doubt it's a very good reason.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kotlin has a native compiler now that works on Linux, Mac, and Windows. And a growing ecosystem of multi-platform libraries. Pre-compiling scripts to binaries would be a nice option. Even pre-compiling them to executable jar files might be a nice thing and it would simplify the run-time dependencies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The import mechanism is a bit flaky and brittle and doesn't understand multi-platform dependencies. If you look at the build file of &lt;code&gt;kt-search-kts&lt;/code&gt;, you will see it depends on &lt;code&gt;kt-search-jvm&lt;/code&gt;. Adding the &lt;code&gt;-jvm&lt;/code&gt; forces it to depend on the jvm variant. Reason: the dependency resolution in kts is not smart enough to add this by itself.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Final verdict
&lt;/h2&gt;

&lt;p&gt;These are just two examples of what you can do with kts. Out of the box, you can use the full Java standard library and, as I show above, adding additional dependencies is pretty easy. One of Kotlin's killer features is defining so-called internal Domain Specific Languages (DSLs). Basically, you abuse the Kotlin syntax to turn whatever framework you have into a mini DSL. My search client has DSLs for querying, index mappings, bulk indexing, etc. And the GitHub Actions library I use of course provides a DSL for GitHub Actions workflows.&lt;/p&gt;

&lt;p&gt;Whatever you are dealing with, you can create a Kotlin DSL for it. If you have any JSON dialect, check out my &lt;a href="https://github.com/jillesvangurp/kt-search"&gt;JsonDsl&lt;/a&gt; library, which is part of kt-search. With that you can create simple Kotlin classes to model your DSL using type-safe properties, with a run-time modifiable map to cover anything the model doesn't. Creating a Yaml version of this is very straightforward and something I might do at some point (pull requests welcome).&lt;/p&gt;

&lt;p&gt;Once you have that, you can script whatever: Amazon CloudFormation, Ansible, Elasticsearch queries, etc. So, while kts still has some rough edges, it is so much nicer than writing Yaml, Ansible, or whatever other type-unsafe, not-quite-a-scripting-language you are currently using. Minimal verbosity, maximum gains.&lt;/p&gt;

</description>
      <category>kotlin</category>
      <category>elasticsearch</category>
      <category>github</category>
      <category>devops</category>
    </item>
    <item>
      <title>Docker over SSH &amp; Qemu : Replacing Docker for Mac</title>
      <dc:creator>Jilles van Gurp</dc:creator>
      <pubDate>Wed, 01 Sep 2021 06:40:05 +0000</pubDate>
      <link>https://dev.to/jillesvangurp/docker-over-qemu-on-a-mac-1ajp</link>
      <guid>https://dev.to/jillesvangurp/docker-over-qemu-on-a-mac-1ajp</guid>
      <description>&lt;p&gt;Yesterday, Docker announced that Docker for Mac is going to require a paid account for large companies soon. While this does not immediately impact me, I have been relying on docker desktop for mac for a while and that annoys me for several reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;it's somewhat flaky. I've had loads of issues over the years where I had to wipe it out and reinstall.&lt;/li&gt;
&lt;li&gt;the update process is flaky&lt;/li&gt;
&lt;li&gt;it barfs a lot of stuff all over the file system making cleanup a PITA&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Alternative: DOCKER_HOST &amp;amp; ssh
&lt;/h2&gt;

&lt;p&gt;Docker is a simple binary that acts as a client to the Docker daemon. Normally it connects to that via a socket that the locally running Docker daemon creates.&lt;/p&gt;

&lt;p&gt;However, you can easily make it connect to a remotely running docker daemon by either using the &lt;code&gt;-H&lt;/code&gt; option or setting the &lt;code&gt;DOCKER_HOST&lt;/code&gt; environment variable. One of the supported protocols is &lt;code&gt;ssh&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;So, I installed &lt;code&gt;qemu&lt;/code&gt; via homebrew, and created a vm running a linux distribution (&lt;a href="https://manjaro.org/"&gt;Manjaro&lt;/a&gt;), installed ssh &amp;amp; docker on that, and set up an authorized key so I can ssh into that from my mac terminal. I configured the networking to forward port 5555 to the ssh port 22 on the vm.&lt;/p&gt;

&lt;p&gt;Then I simply set this environment variable on my mac:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;DOCKER_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;jilles@localhost:5555
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After this, all my docker commands (using the client that came with Docker for Mac, but of course you can install that via homebrew as well) go to the remote host, which is where all the containers run.&lt;/p&gt;
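&lt;p&gt;If you don't want to export &lt;code&gt;DOCKER_HOST&lt;/code&gt; in every shell, docker also supports named contexts that persist the endpoint. A minimal sketch, assuming the same VM ssh endpoint (the context name is made up):&lt;/p&gt;

```shell
# docker contexts persist the endpoint so you can switch with one command
DOCKER_ENDPOINT="ssh://jilles@localhost:5555"
# the actual commands are commented out since they need the VM to be running:
# docker context create qemu-vm --docker "host=${DOCKER_ENDPOINT}"
# docker context use qemu-vm
echo "context endpoint: ${DOCKER_ENDPOINT}"
```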

&lt;h2&gt;
  
  
  Docker Port Forwarding
&lt;/h2&gt;

&lt;p&gt;The point of running docker is launching things like databases and web servers with the goal of actually connecting to them. Those containers expose ports that you want to talk to. Normally you forward those ports with the &lt;code&gt;-p&lt;/code&gt; option. For example,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-p8080&lt;/span&gt;:80 nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;would run nginx and allow you to connect to it from localhost via port 8080. One minor problem: those ports will be inside the Linux VM and not on your mac's localhost.&lt;/p&gt;

&lt;p&gt;There are various ways to address this. An easy one is to use ssh for this. To forward a port, you can use the &lt;code&gt;-L&lt;/code&gt; option with ssh:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -L 8080:localhost:8080 -p 5555 jilles@localhost
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After this, you can access the forwarded port at localhost:8080 in your browser.&lt;/p&gt;
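&lt;p&gt;With more than one service running, the &lt;code&gt;-L&lt;/code&gt; flags add up. A small helper can build the ssh command for you (a sketch; &lt;code&gt;forward_ports&lt;/code&gt; is a made-up name, and it only prints the command so you can inspect it before running it):&lt;/p&gt;

```shell
# build an ssh command that forwards each given port from the VM to localhost
forward_ports() {
  local args=""
  for port in "$@"; do
    args="$args -L ${port}:localhost:${port}"
  done
  # printed rather than executed, so you can check it first
  echo ssh $args -p 5555 jilles@localhost
}
forward_ports 8080 5432 9200
```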

&lt;h2&gt;
  
  
  Installing Qemu on a Mac
&lt;/h2&gt;

&lt;p&gt;You can make this work with any remote ssh host, including cloud-based options. But I wanted a locally running VM. A nice, lightweight OSS option for this on a mac is qemu. Of course, you can probably make this work with Parallels, VirtualBox, VMware, or whatever else.&lt;/p&gt;

&lt;p&gt;You can install qemu via homebrew (or whatever else you prefer). Also make sure to install libvirt. I was not able to get networking going without it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;qemu libvirt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When that has finished and you've started the libvirt service (see instructions it dumps in your terminal when it installs), you can create a disk image and start qemu with a linux iso of your choice (or whatever OS you prefer).&lt;/p&gt;

&lt;p&gt;For reference, here's a script that I use to start qemu:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#  qemu-img create -f qcow2 manjaro.qcow2 30G
#   -cdrom  manjaro-xfce-21.1.1-210827-linux513.iso \

qemu-system-x86_64 \
  -m 4G \
  -vga virtio \
  -display default,show-cursor=on \
  -usb \
  -device usb-tablet \
  -machine type=q35,accel=hvf \
  -smp 2 \
  -drive file=manjaro.qcow2,if=virtio \
  -cpu Nehalem \
  -device e1000,netdev=net0 \
  -netdev user,id=net0,hostfwd=tcp::5555-:22 \
  -soundhw hda
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The commented first line is the command you use to create a disk image. The commented line below that is the qemu option for mounting the Linux iso. After you've installed to your disk image, you can remove that option and boot from the disk image. I went with Manjaro, which was pretty hassle-free to get going. The networking options are important as you need port forwarding for ssh.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits &amp;amp; Down Sides
&lt;/h2&gt;

&lt;p&gt;On the plus side:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I have a nice sandboxed linux vm that contains all my docker stuff. I can shut it down, upgrade it, wipe it out, etc.&lt;/li&gt;
&lt;li&gt;The docker command line works as normal, and things like docker-compose work as well&lt;/li&gt;
&lt;li&gt;qemu is reasonably fast and uses a similar virtualization strategy to what Docker for Mac uses&lt;/li&gt;
&lt;li&gt;I can uninstall docker for mac.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Downsides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This stuff needs memory and cpu. I noticed qemu struggling a bit with some of the things I normally run in docker for mac.&lt;/li&gt;
&lt;li&gt;While command line docker works fine, other things like some of our gradle build files assume docker is running locally. I may have to investigate forwarding a socket over ssh. Likewise docker port forwarding needs some manual work as well.&lt;/li&gt;
&lt;li&gt;You need some command line skills: setting up Linux, getting qemu going, etc. If you have those, great.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are all sorts of ways to make this more seamless but it works quite nicely like this already. Adding stuff like kubernetes support might be relevant for some. Making the port mapping less tedious might be doable with e.g. &lt;code&gt;sshuttle&lt;/code&gt; or setting up an ssh proxy.&lt;/p&gt;

&lt;p&gt;Let me know on twitter (&lt;a class="mentioned-user" href="https://dev.to/jillesvangurp"&gt;@jillesvangurp&lt;/a&gt;
) what you think about this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;UPDATE&lt;/strong&gt;: after running this for a week, I figured out that I needed to disable App Nap, a macOS feature that slows down applications you aren't looking at, like qemu when you are using it via ssh. Really annoying feature. You might not want to do this on a laptop. I added two aliases for this to my &lt;code&gt;.zshrc&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;alias &lt;/span&gt;&lt;span class="nv"&gt;appnapoff&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"defaults write NSGlobalDomain NSAppSleepDisabled -bool YES"&lt;/span&gt;
&lt;span class="nb"&gt;alias &lt;/span&gt;&lt;span class="nv"&gt;appnapon&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"defaults write NSGlobalDomain NSAppSleepDisabled -bool NO"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also, &lt;a href="https://kind.sigs.k8s.io/"&gt;kind&lt;/a&gt; is a nice option if you want to add Kubernetes to this mix.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>qemu</category>
    </item>
    <item>
      <title>Improving Build Speeds.</title>
      <dc:creator>Jilles van Gurp</dc:creator>
      <pubDate>Wed, 24 Mar 2021 09:01:54 +0000</pubDate>
      <link>https://dev.to/jillesvangurp/improving-build-speeds-262a</link>
      <guid>https://dev.to/jillesvangurp/improving-build-speeds-262a</guid>
      <description>&lt;p&gt;I wrote a &lt;a href="https://news.ycombinator.com/item?id=26564751"&gt;way too long HN comment&lt;/a&gt; this morning and realized that I probably should turn that into a proper article. The &lt;a href="http://dan.bodar.com/2012/02/28/crazy-fast-build-times-or-when-10-seconds-starts-to-make-you-nervous/"&gt;article that triggered me&lt;/a&gt; was a pretty old one on the importance of keeping builds fast. I could not agree more. And I have lots of wisdom to share on that front from having worked to keep builds fast for most of my career. Even though I develop mostly Kotlin these days, I also work with other tech stacks and pretty much all of the advice applies to almost any tech stack.&lt;/p&gt;

&lt;h2&gt;
  
  
  Build Performance Matters
&lt;/h2&gt;

&lt;p&gt;I've always been aggressive on trying to keep my Java and lately Kotlin builds fast. Anything over a few minutes in CI becomes a drain on the team. Basically, a productive team will have many pull requests open at any point and lots of commits happening on all of them. That means builds start piling up. People start hopping between tasks (or procrastinating) while builds are happening. Cheap laptops become a drain on developer productivity. Etc. All of this is bad. Maintain the flow and keep things as fast as you can. It's worth investing time in.&lt;/p&gt;

&lt;p&gt;Some of the overhead is unavoidable unfortunately. E.g. the Kotlin compiler is a bit of a slouch despite some improvements recently. Many integration tests these days involve using docker or docker compose. That's better than a lot of fakes and imperfect substitutes. But it sucks up time. A lot of Kotlin and Spring projects involve code generation. This adds to your build times. Breaking builds up into modules increases build times as well. Be mindful of all this.&lt;/p&gt;

&lt;p&gt;The rest of this article is a series of performance tips not covered in the Hacker News article. Most of it should apply to any tech stack, though some stacks may have limitations with e.g. concurrency.&lt;/p&gt;

&lt;h2&gt;
  
  
  Run Tests Concurrently
&lt;/h2&gt;

&lt;p&gt;Run your tests concurrently and write your tests such that you can do so. Running thousands of tests sequentially is stupid. When using JUnit 5, you need to set &lt;code&gt;junit.jupiter.execution.parallel.enabled=true&lt;/code&gt; in &lt;code&gt;junit-platform.properties&lt;/code&gt; (goes in your test resources). Use more threads than CPUs for this as your tests will likely be IO limited and not CPU limited. Use &lt;code&gt;junit.jupiter.execution.parallel.config.dynamic.factor=4&lt;/code&gt; to control this in JUnit 5.&lt;/p&gt;

&lt;p&gt;If you are not maxing out all your cores, throw more threads at it because you can go faster. If your tests don't pass when running in parallel, fix it. Yes, this is hard but it will make your tests better.&lt;/p&gt;
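&lt;p&gt;For reference, a sketch of generating that properties file; the property keys are the standard JUnit 5 ones, and the &lt;code&gt;mode.default=concurrent&lt;/code&gt; line (which makes concurrent execution the default) is my addition:&lt;/p&gt;

```shell
# write a junit-platform.properties into the test resources directory
mkdir -p src/test/resources
printf '%s\n' \
  'junit.jupiter.execution.parallel.enabled=true' \
  'junit.jupiter.execution.parallel.mode.default=concurrent' \
  'junit.jupiter.execution.parallel.config.strategy=dynamic' \
  'junit.jupiter.execution.parallel.config.dynamic.factor=4' \
  > src/test/resources/junit-platform.properties
cat src/test/resources/junit-platform.properties
```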

&lt;h2&gt;
  
  
  No Database or Other Expensive Cleanup
&lt;/h2&gt;

&lt;p&gt;Don't do expensive cleanup and setup in tests. Set it up once. Doing repeated cleanup and setup takes time. Also integration tests become more realistic if they don't operate in a vacuum: your production system is not an empty system.&lt;/p&gt;

&lt;p&gt;To enable this, randomize test data so that the same tests can run multiple times even if data already exists in your database. Docker will take care of cleaning up ephemeral data after your build. This also helps with running tests concurrently.&lt;/p&gt;
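&lt;p&gt;The idea, sketched in shell (the names are made up; in JVM tests you'd typically use a random UUID instead):&lt;/p&gt;

```shell
# prefix every test entity with a per-run id so reruns never collide with old data
RUN_ID="testrun-$(date +%s)-$RANDOM"
make_name() { echo "${RUN_ID}-$1"; }
make_name "customer"
make_name "order"
```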

&lt;h2&gt;
  
  
  Tests are Either Unit or Integration/Scenario Tests
&lt;/h2&gt;

&lt;p&gt;Distinguish between (proper) unit tests and scenario-driven integration tests as the two ideal forms of a test. Anything in between is going to be slow and imperfect in terms of what it does. This means you can either improve a test's coverage (of code, functionality, and edge cases) by making it a proper integration test, or make it faster by making it a proper unit test (runs in milliseconds because there is no expensive setup).&lt;/p&gt;

&lt;h2&gt;
  
  
  Integration Tests Should be Scenario Driven
&lt;/h2&gt;

&lt;p&gt;With integration tests, add to your scenarios to make the most of your sunk cost (the time to set up the scenario). Ensure they touch as much of your system as they can. You are looking for e.g. feature interaction bugs, Heisenbugs related to concurrency, and weird things that only happen in the real world. A unit test is not going to catch any of these; that's why they are called integration tests. So make them as real as you can get away with.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fix Flaky Tests
&lt;/h2&gt;

&lt;p&gt;Fix flaky tests. This usually means understanding why they are flaky and addressing that. If that's technical debt in your production code, that's a good thing. Flaky tests tend to be slow and waste a lot of time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fail Fast
&lt;/h2&gt;

&lt;p&gt;Separate your unit and integration tests and make your builds fail fast. Compile + unit tests should be under a minute tops. So, if somebody messed up, you'll know in a minute after the commit is pushed to CI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Don't Sleep
&lt;/h2&gt;

&lt;p&gt;Get rid of sleep calls in tests. This is an anti-pattern that indicates either flaky tests or naive strategies for dealing with testing asynchronous code (usually both). It's a mistake every time and it makes your tests slow. The solution is polling, ensuring that each test only takes as much time as it strictly needs.&lt;/p&gt;
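&lt;p&gt;The polling idea, sketched in shell (&lt;code&gt;wait_for&lt;/code&gt; is a made-up helper; JVM test libraries like awaitility do the same thing): retry the condition until it holds or a deadline passes, so the test takes exactly as long as it needs:&lt;/p&gt;

```shell
# retry a command until it succeeds, or give up after the timeout (in seconds)
wait_for() {
  local timeout=$1
  shift
  local deadline=$(( $(date +%s) + timeout ))
  until "$@"; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      echo "timed out waiting for: $*"
      return 1
    fi
    sleep 1  # short poll interval; total time is bounded by the deadline, not a guess
  done
}
wait_for 5 true; echo "condition met"
```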

&lt;h2&gt;
  
  
  Use More Threads Than CPUs
&lt;/h2&gt;

&lt;p&gt;Run with more threads than your system can handle to flush out flaky tests. Interesting failures happen when your system is under load. Things time out, get blocked, deadlock, etc. You want to learn why this happens. Fix the tests until they pass reliably with way more threads than CPUs. Then back it down until you hit the optimum test performance. You'll have rock-solid tests that run as fast as they can.&lt;/p&gt;

&lt;h2&gt;
  
  
  Keep Your Tools up to Date
&lt;/h2&gt;

&lt;p&gt;Keep your build tools up to date and learn how to use them. Most good build tools work on performance issues all the time because it's important. I use Gradle currently and the difference between now and even two years ago is substantial. Even good old Maven got better over time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get Fast Build Machines
&lt;/h2&gt;

&lt;p&gt;Pay for faster CI machines. Every second counts. If your laptop builds faster than CI, fix it. There's no excuse for that. I once quadrupled our CI performance by simply switching from Travis CI to AWS CodeBuild with a proper instance type: 20 minutes down to 5 minutes for the exact same build. And it removed the limits on concurrent builds as well. A massive performance boost for a rounding error on our IT cost.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Most of this advice should work for any language. Life is too short to wait for builds. With all of this, do as I say and not as I do: I am always battling slow builds in any project I join. Some of these things tend to be controversial in some teams. People get obnoxious and religious about using docker (or not), or using in-memory databases (or not). Adapt to your team. If you want fast builds, understand why they are slow and how you can fix that. The above advice is just a range of tools you can use. Or not. At least make using them a conscious choice. That's better than fatalistically accepting slow builds as a de-facto reality.&lt;/p&gt;

</description>
      <category>testing</category>
      <category>performance</category>
      <category>devops</category>
    </item>
    <item>
      <title>Reactive Security Filter with Spring &amp; Kotlin</title>
      <dc:creator>Jilles van Gurp</dc:creator>
      <pubDate>Tue, 25 Aug 2020 10:39:40 +0000</pubDate>
      <link>https://dev.to/jillesvangurp/reactive-security-filter-with-spring-kotlin-1ajo</link>
      <guid>https://dev.to/jillesvangurp/reactive-security-filter-with-spring-kotlin-1ajo</guid>
      <description>&lt;p&gt;Over the years, I've had to implement security filters a couple of time. Recently I had to add JWT token based API authentication to a Spring project.&lt;/p&gt;

&lt;p&gt;Some complicating factors:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It's the reactive variant of Spring, aka. Flux.&lt;/li&gt;
&lt;li&gt;Flux has a very complicated API surface.&lt;/li&gt;
&lt;li&gt;To avoid dealing with that, we use Kotlin &amp;amp; coroutines. This too still has a few rough edges as it is very new.&lt;/li&gt;
&lt;li&gt;Servlet Filters don't work when using Flux because they are inherently synchronous. So we have to do things the Flux way.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, I spent a good amount of time figuring out how to do this correctly, and this is another instance of me &lt;strong&gt;documenting by blogging&lt;/strong&gt; so I don't have to google my way through the maze of cryptic documentation again. Also, I hope others might find this useful.&lt;/p&gt;

&lt;h2&gt;
  
  
  High-level design
&lt;/h2&gt;

&lt;p&gt;The design for my solution is fairly simple. I never really liked Spring Security as it mainly gives me headaches. Also, I have custom requirements now, and more coming, that just won't fit into what it does that easily (I speak from experience). But I imagine it does something similar internally.&lt;/p&gt;

&lt;p&gt;So instead:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We use simple JWT tokens that are signed, have a payload with things like a userId, and need to be checked as part of our Authentication and Authorization logic. Standard stuff these days. &lt;/li&gt;
&lt;li&gt;Any request that includes an Authorization header, we want to grab the JWT token from there, validate it, and create a &lt;code&gt;SecurityContext&lt;/code&gt; object. This object forms the basis for our authorization logic.&lt;/li&gt;
&lt;li&gt;The authorization logic lives in an &lt;code&gt;AuthorizationService&lt;/code&gt; that is called to run checks from our business logic. When that happens, it needs to grab the &lt;code&gt;SecurityContext&lt;/code&gt;, check whether we authenticated, grab the userId, and figure out the set of roles and privileges (beyond the scope of this article) the principal has.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, in short, we need something that creates the security context and stuffs it in a place where the AuthorizationService can grab it. Since we use co-routines on top of Flux, that place is the Flux Reactor Context and we want to get to that via the &lt;code&gt;coroutineContext&lt;/code&gt; that is part of the co-routine scope all our logic executes in.&lt;/p&gt;

&lt;h2&gt;
  
  
  The filter
&lt;/h2&gt;

&lt;p&gt;Spring Flux offers two ways to implement something similar to the good old &lt;code&gt;ServletFilter&lt;/code&gt;, which is what you'd use when we were all still doing synchronous IO with Tomcat. One of these is called &lt;code&gt;WebFilter&lt;/code&gt;. It appears to be the more useful of the two since, crucially, it gives us a &lt;code&gt;ServerWebExchange&lt;/code&gt;, which in a somewhat convoluted way provides access to the request and the ability to interact with the Spring Reactor &lt;code&gt;Context&lt;/code&gt;. The best way to think of that context is as a &lt;code&gt;ThreadLocal&lt;/code&gt;-like construct for Flux where we can park custom data and access it downstream. Via the &lt;code&gt;kotlinx-coroutines-reactor&lt;/code&gt; library, we gain a few features to access this via the co-routine scope.&lt;/p&gt;

&lt;p&gt;The other way to filter is via &lt;code&gt;HandlerFilterFunction&lt;/code&gt; which looks like it's a bit more limited as it does not provide an obvious way to do anything with Flux (correct me if I'm wrong) but would be a better fit if you use the Spring's router DSL.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="nd"&gt;@Component&lt;/span&gt;
&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;AuthorizationWebFilter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;tokenService&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;TokenService&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nc"&gt;WebFilter&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;override&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;exchange&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;ServerWebExchange&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;chain&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;WebFilterChain&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nc"&gt;Mono&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Void&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;jwtToken&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tokenService&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parseHeader&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;exchange&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"Authorization"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;?.&lt;/span&gt;&lt;span class="nf"&gt;firstOrNull&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
        &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;context&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tokenService&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createSecurityContext&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;jwtToken&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;chain&lt;/span&gt;
            &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;exchange&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;subscriberContext&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;it&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;put&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;FormationSecurityContext&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="k"&gt;class&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;java&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is where Spring's API gets a little weird. It reads differently from what it actually does, and this threw me off for a while. The key here is that you call &lt;code&gt;subscriberContext&lt;/code&gt; on the return value of &lt;code&gt;.filter(exchange)&lt;/code&gt;. To me this reads like: first do the request logic and then mess with the context. Luckily, what it does is different, and the context gets modified before the logic kicks in. Just a bit of API weirdness.&lt;/p&gt;

&lt;p&gt;The put method is weirder, especially in combination with how we get values back out of the reactor Context. Intellij suggests a type of &lt;code&gt;Any&lt;/code&gt; for both key and value. This is a lie; it's just where the Java type system fell a bit short, I guess. The correct types are &lt;code&gt;Class&amp;lt;T&amp;gt;&lt;/code&gt; and &lt;code&gt;T&lt;/code&gt;. So, it's a map indexed by the class of the value. In our case that is &lt;code&gt;FormationSecurityContext&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;A final gotcha is that, unlike most &lt;code&gt;Map&lt;/code&gt; implementations, put does not modify the Context in place but creates a new one. I initially wrote the following broken version because I assumed put had no meaningful return value.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt; &lt;span class="nf"&gt;subscriberContext&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// this is wrong!&lt;/span&gt;
    &lt;span class="n"&gt;it&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;put&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;FormationSecurityContext&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="k"&gt;class&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;java&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;it&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, that looks like a deceptively easy bit of code, but it was made hard by a lack of documentation and by Spring not following the principle of least surprise, which makes all this hard to discover.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting the value out on the other side
&lt;/h2&gt;

&lt;p&gt;Now that we have our security context, we want to use it. For this I implemented a simple DSL to check auth in places where we need that. This is the Kotlin way and I prefer it over annotations and/or AOP based madness.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ReactorContext is still experimental&lt;/span&gt;
&lt;span class="nd"&gt;@ExperimentalCoroutinesApi&lt;/span&gt;
&lt;span class="nd"&gt;@Component&lt;/span&gt;
&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;AuthorizationService&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;roleRepository&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;RoleRepository&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="cm"&gt;/**
     * Runs the block if the authorization checks succeed or throws a `NotAuthorizedException`.
     */&lt;/span&gt;
    &lt;span class="k"&gt;suspend&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;T&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;authorize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;privilege&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Privilege&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;ownerId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;block&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;suspend&lt;/span&gt; &lt;span class="p"&gt;()-&amp;gt;&lt;/span&gt;&lt;span class="nc"&gt;T&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;&lt;span class="nc"&gt;T&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;reactorContext&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;coroutineContext&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nc"&gt;ReactorContext&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;securityContext&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;reactorContext&lt;/span&gt;&lt;span class="o"&gt;?.&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
            &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;FormationSecurityContext&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="k"&gt;class&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;java&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;securityContext&lt;/span&gt; &lt;span class="p"&gt;==&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="c1"&gt;// should not happen; means our AuthorizationWebFilter is broken&lt;/span&gt;
            &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="nc"&gt;IllegalStateException&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"no context"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="p"&gt;(!&lt;/span&gt;&lt;span class="n"&gt;securityContext&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;isAuthenticated&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 
              &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="nc"&gt;NotAuthorizedException&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;AuthProblemCode&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;JWT_MISSING&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="c1"&gt;// additional auth checks beyond the scope of this article&lt;/span&gt;
            &lt;span class="nf"&gt;checkAuth&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;privilege&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;ownerId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;securityContext&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;
              &lt;span class="o"&gt;?:&lt;/span&gt; &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="nc"&gt;IllegalStateException&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"no user"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;block&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To use this, you simply do something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="k"&gt;suspend&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;getUserProfile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nc"&gt;UserProfile&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt;
    &lt;span class="n"&gt;authorizationService&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;authorize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;UserProfilePrivilege&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;GET_USER_PROFILE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;userRepository&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;?.&lt;/span&gt;&lt;span class="nf"&gt;toUserProfile&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;?:&lt;/span&gt; &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="nc"&gt;NotFoundException&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's a suspend function because we call it from a Flux-based flow. In this case, we are actually using the Expedia GraphQL integration for Spring, which is definitely beyond the scope of this article but quite easy to set up.&lt;/p&gt;

&lt;p&gt;But if you weren't, you could do something like this to create an endpoint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="nf"&gt;coRouter&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nc"&gt;GET&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/user/{userId}"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;contentType&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;MediaType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;APPLICATION_JSON&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;bodyValueAndAwait&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;userProfileSerice&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getUserProfile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt; &lt;span class="n"&gt;it&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;pathVariable&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"userId"&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;bodyValueAndAwait&lt;/code&gt; extension function takes our suspending function and turns it into a Spring &lt;code&gt;Mono&lt;/code&gt;, so Spring Reactor can do the right thing with it.&lt;/p&gt;

</description>
      <category>kotlin</category>
      <category>security</category>
    </item>
    <item>
      <title>Publish Kotlin multiplatform jars to a private maven repo</title>
      <dc:creator>Jilles van Gurp</dc:creator>
      <pubDate>Tue, 11 Aug 2020 12:55:59 +0000</pubDate>
      <link>https://dev.to/jillesvangurp/publish-kotlin-multiplaform-jars-to-a-private-maven-2bdh</link>
      <guid>https://dev.to/jillesvangurp/publish-kotlin-multiplaform-jars-to-a-private-maven-2bdh</guid>
      <description>&lt;p&gt;A recurring topic on many projects for me is publishing private packages of some sort so that they can be used in other private projects. Historically, this has always been a PITA to set up: it involves either paying for some SaaS solution or reserving a chunk of time to do devops yourself. It's one reason I often dodge the issue by simply not bothering with maven repositories.&lt;/p&gt;

&lt;p&gt;I've been writing a lot of Kotlin in the last few years, and that means I get to deal with Gradle a lot. On many projects, people dodge the need for private repositories by using mono repositories with multi-module Gradle or Maven builds. I've always hated dealing with multi-module Gradle projects because things slow down a lot while you wait for builds. As soon as you have n modules, everything happens n times (compile, test, package, etc.). And while build tools are great for making things repeatable, they have the potential to suck the life out of you by consuming all your time. Three minutes may sound like nothing, but repeat that 10-20 times in a day and you have just lost an hour.&lt;/p&gt;

&lt;p&gt;So, especially for things that don't change a lot, wouldn't it be nice if you could park them in a separate project and just download the already compiled binary? The short answer to this completely rhetorical question is "well, duh". That's where private maven repositories come in.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multiplatform
&lt;/h2&gt;

&lt;p&gt;Recently, Kotlin multiplatform became a thing. With Kotlin multiplatform, you cross-compile Kotlin code to multiple platforms like iOS, Android, Linux, WebAssembly, JavaScript, etc. using the Kotlin multiplatform Gradle plugin. This makes a lot of sense if you are trying to reuse code between different platforms. A natural fit for this is extracting common code into a multiplatform library, which you then need to put somewhere so you can actually use it. Somewhere like a private maven repository.&lt;/p&gt;

&lt;p&gt;The last two weeks or so, I've been working on lots of new things since I joined &lt;a href="https://tryformation.com"&gt;Formation&lt;/a&gt;. One of those things is exactly this. We have Android code and some server code written in Kotlin and are currently writing a lot more code. So, obviously Kotlin multiplatform has our interest since we want to do iOS soon as well. When I say "has our interest"...&lt;/p&gt;

&lt;p&gt;What I really mean is that I was stuck in Gradle hell for the past week, trying to figure this stuff out from scraps of misleading, outdated, incomplete, or flat-out wrong documentation, Stack Overflow posts, etc. This stuff is very new and immature, and technically only available in beta so far. So, this is not unexpected.&lt;/p&gt;

&lt;p&gt;I've been trying to figure out an easy strategy to deploy multiplatform artifacts via a private repository. We started with GitHub Packages because we are on GitHub and are making full use of their freemium tier, which is actually really great these days. We pay $0 for a GitHub organization with as many private repositories as we need, we get CI/CD via GitHub Actions, and I've even managed to use GitHub Packages. Sadly, for Kotlin multiplatform it has evaded all my attempts to make it work. Given the awesome price tag, I still think it's pretty nice, but it seems to be a dead end (for now) for multiplatform at least. I also tried JitPack with a public repository and had issues with that as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  GCS based repository
&lt;/h2&gt;

&lt;p&gt;We currently deploy some server stuff in Google Cloud, which provided me with another false start in the form of Google Artifacts, a package repository with maven functionality that is currently in an alpha release and therefore not something you can actually use yet. That's a bit of a bummer because presumably Google is going to be all over Kotlin multiplatform given their Android involvement (or they should be; big corporations have trouble doing logical things like this).&lt;/p&gt;

&lt;p&gt;So, with two potential candidates for a private repository down, I was getting a bit frustrated until I remembered that I got some mileage out of setting up maven repositories via ssh some years ago (while I was still using maven) and, more recently, using an S3 bucket (also on a maven project). That led me down a rabbit hole of &lt;strong&gt;"I wonder if I can do something with Google Storage for this ..."&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In short, yes. Google Cloud Storage is a lot less popular than S3, which caused me a few headaches piecing together what I actually needed to do. But eventually I figured it out:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="nf"&gt;plugins&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// we're using the multiplatform plugin&lt;/span&gt;
    &lt;span class="nf"&gt;kotlin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"multiplatform"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;version&lt;/span&gt; &lt;span class="s"&gt;"1.3.72"&lt;/span&gt;
    &lt;span class="c1"&gt;// we want to publish our jars&lt;/span&gt;
    &lt;span class="nf"&gt;id&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"maven-publish"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// first you need set up your publishing repository&lt;/span&gt;
&lt;span class="nf"&gt;publishing&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;repositories&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;maven&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;url&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;publishLocal&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toBoolean&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="c1"&gt;// great for testing&lt;/span&gt;
                &lt;span class="c1"&gt;// gradle publish -PpublishLocal=true -Pversion=0.42&lt;/span&gt;
                &lt;span class="nf"&gt;uri&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"file:///${projectDir}/build/localrepo"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="c1"&gt;// this is what we do in github actions&lt;/span&gt;
                &lt;span class="c1"&gt;// GOOGLE_APPLICATION_CREDENTIALS env var must be set for this to work&lt;/span&gt;
                &lt;span class="c1"&gt;// either to a path with the json for the service account or with the base64 content of that.&lt;/span&gt;
                &lt;span class="c1"&gt;// in github actions we should configure a secret on the repository with a base64 version of a service account&lt;/span&gt;
                &lt;span class="c1"&gt;// export GOOGLE_APPLICATION_CREDENTIALS=$(cat /Users/jillesvangurp/.gcloud/jvg-admin.json | base64)&lt;/span&gt;
                &lt;span class="nf"&gt;uri&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"gcs://insert-your-bucket-name-here/releases"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nf"&gt;repositories&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// if you want to depend on jars from your repo, add it like so&lt;/span&gt;
    &lt;span class="nf"&gt;maven&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;url&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;uri&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"gcs://insert-your-bucket-name-here/releases"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nf"&gt;mavenCentral&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nf"&gt;kotlin&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;jvm&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// ...&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nf"&gt;js&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// ...&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Just add that to your &lt;code&gt;build.gradle.kts&lt;/code&gt; file. The multiplatform plugin integrates well with the publishing plugin, so things should just work without further configuration. You may need to set the artifactId, groupId, and version somewhere, of course.&lt;/p&gt;
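&lt;p&gt;For completeness, a minimal sketch of what that could look like in &lt;code&gt;build.gradle.kts&lt;/code&gt; (the values here are placeholders, not our actual coordinates):&lt;/p&gt;

```kotlin
// Sketch: project coordinates in build.gradle.kts (placeholder values).
// The multiplatform plugin derives per-target artifactIds from the
// project name (e.g. mylibrary-jvm, mylibrary-js).
group = "com.example"
version = "1.0.0" // typically overridden on the command line with -Pversion=...
```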

&lt;p&gt;The tricky bit is getting credentials to the plugin; see the comments in the build file for that. You need a service account credentials file stored locally and the &lt;code&gt;GOOGLE_APPLICATION_CREDENTIALS&lt;/code&gt; environment variable pointing to it. I did this via the console, but you should be able to do it via the command line if you prefer.&lt;/p&gt;

&lt;p&gt;For good measure, I also added a local repo so I can test it actually does the right things before creating a mess in my bucket.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up CI to do this automatically
&lt;/h2&gt;

&lt;p&gt;Then the next issue is setting up a github actions workflow to do this for us:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Publish package to GitHub Packages&lt;/span&gt;
&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;release&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;types&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;created&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;publish&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/setup-java@v1&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;java-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1.8&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Save google token&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo "${{ secrets.GOOGLE_CLOUD_KEY }}" | base64 -d &amp;gt; ${{ github.workspace }}/google_tok.json&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Publish package&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gradle -Pversion=${{ github.event.release.tag_name }} build publish&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="c1"&gt;#          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}&lt;/span&gt;
          &lt;span class="na"&gt;GOOGLE_APPLICATION_CREDENTIALS&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ github.workspace }}/google_tok.json&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Basically, the way this works is that whenever I tag a release via GitHub's releases feature, it triggers this workflow to publish a set of jars to my GCS bucket. I simply added my token, encoded as base64, as a secret to the GitHub project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;mytoken.json | &lt;span class="nb"&gt;base64&lt;/span&gt; | pbcopy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first workflow step then unpacks that secret and puts it into a file in the workspace. The second one sets the &lt;code&gt;GOOGLE_APPLICATION_CREDENTIALS&lt;/code&gt; environment variable to point at that file.&lt;/p&gt;
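&lt;p&gt;The encode/decode round trip is easy to sanity-check locally before wiring it into the workflow (the file names here are stand-ins for your real service account file):&lt;/p&gt;

```shell
# Round-trip check with a fake token: encode it the way it goes into the
# GitHub secret, then decode it the way the "Save google token" step does.
echo '{"type":"service_account"}' > mytoken.json
base64 < mytoken.json > token.b64
base64 -d < token.b64 > google_tok.json
diff mytoken.json google_tok.json && echo "round trip ok"
```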

&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;Setting up a private repository like this is easy and cheap. It was a PITA to piece together the relevant bits of documentation, so I wrote up an article documenting this both for my future self (I'll likely use this again) and for others to benefit from. I've been using this on two internal projects for a few weeks, and we are currently using the artifacts in our Android project. Soon, we'll likely start experimenting with iOS as well.&lt;/p&gt;

&lt;p&gt;The same instructions should also work for S3, although the credentials for that are a bit easier to deal with: just grab your key and secret, put them into GitHub secrets, and use them as is.&lt;/p&gt;
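&lt;p&gt;A sketch of what the S3 variant could look like in &lt;code&gt;build.gradle.kts&lt;/code&gt; (the bucket name is a placeholder; this assumes Gradle's built-in &lt;code&gt;s3://&lt;/code&gt; repository support with &lt;code&gt;AwsCredentials&lt;/code&gt;):&lt;/p&gt;

```kotlin
// Sketch: an S3-backed maven repository (bucket name is a placeholder).
// Gradle's s3:// scheme support uses the credentials configured below.
publishing {
    repositories {
        maven {
            url = uri("s3://insert-your-bucket-name-here/releases")
            credentials(AwsCredentials::class) {
                accessKey = System.getenv("AWS_ACCESS_KEY_ID")
                secretKey = System.getenv("AWS_SECRET_ACCESS_KEY")
            }
        }
    }
}
```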

</description>
      <category>kotlin</category>
      <category>multiplatform</category>
      <category>googlecloud</category>
      <category>gradle</category>
    </item>
    <item>
      <title>Using Pandoc to create a Website</title>
      <dc:creator>Jilles van Gurp</dc:creator>
      <pubDate>Mon, 25 May 2020 19:16:50 +0000</pubDate>
      <link>https://dev.to/jillesvangurp/using-pandoc-to-create-a-website-1gea</link>
      <guid>https://dev.to/jillesvangurp/using-pandoc-to-create-a-website-1gea</guid>
      <description>&lt;p&gt;I think I created my first web page around 1997. I was studying at the University of Utrecht at the time. There was a brief period around 1994 where using the World Wide Web meant queueing after lectures for the single unix terminal that had a gateway to the internet &amp;amp; Mosaic installed. This of course soon escalated. By 1997, all computer lab terminals (a mix of Sun and HP unix machines, and later some Windows PCs) had a browser. It was around then that I put up a little HTML page on the faculty web server announcing my presence to the world.&lt;/p&gt;

&lt;p&gt;In the years after, this thing migrated to different places, and around 2002 I registered my domain &lt;a href="https://www.jillesvangurp.com"&gt;jillesvangurp.com&lt;/a&gt; (I was already too late to register jilles.com). Around the same time, I got into blogging. First with something called Pivot, which I swapped out for Wordpress a couple of years later. For the past 16 or so years I've been using that to host my website. Recently, somebody pointed out that my web page was broken. Basically, my hosting provider did some infrastructure changes and this messed up https &amp;amp; http. Kind of embarrassing, and it annoyed me that I had to spend time on fixing it.&lt;/p&gt;

&lt;p&gt;So, I decided that enough was enough and it was time to retire Wordpress. This had been on my TODO list for probably the last five years or so. But it was one of those things that I never got around to. Over time, I've grown more uncomfortable with the notion of running a mess of PHP that is regularly in need of security updates and generally a vector for having your website defaced. Also, I like to write in markdown, and Wordpress seems to insist on not storing that natively and/or mangling it.&lt;/p&gt;

&lt;p&gt;After a survey of different tools for static website generation, I decided to keep it simple. There are a lot of these tools out there, but they all seem to be both opinionated and convoluted for what I want. And I got annoyed with their documentation. A problem I see with tools like this is that the happy path of using them is well documented but does less than half the job you need done. Piecing the rest together has about the same complexity as just rolling your own scripts.&lt;/p&gt;

&lt;p&gt;So, I picked the simplest tool that gets the job done: &lt;code&gt;pandoc&lt;/code&gt;. Pandoc is a nice command-line tool to convert different text file formats to other formats. I used it a few months ago on my &lt;a href="https://github.com/jillesvangurp/es-kotlin-wrapper-client/"&gt;Elasticsearch Kotlin Client&lt;/a&gt; to generate an &lt;a href="https://github.com/jillesvangurp/es-kotlin-wrapper-client/blob/devtoarticle/book.epub"&gt;epub&lt;/a&gt; version of the manual. Crucially, it supports the GitHub flavor of markdown as input and html5 as the output. Even better, it can process code samples, do syntax highlighting, and some simple templating. That's all I need. I can do the rest with bash.&lt;/p&gt;

&lt;p&gt;So, I started hacking a few bash scripts together, which I've been adding features to over the last weeks. At this point, it's good enough for what I need. Of course the whole setup is highly tailored to my needs, but that might be good enough for others as well. So, I decided to share the &lt;a href="https://github.com/jillesvangurp/www.jillesvangurp.com"&gt;source code on Github&lt;/a&gt;. If it doesn't do quite what you want, the scripts are simple enough to figure out that you can probably fix them to do whatever you need. Feel free to fork and adapt.&lt;/p&gt;

&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;

&lt;p&gt;Here's a &lt;a href="https://github.com/jillesvangurp/www.jillesvangurp.com/blob/devtoarticle/pd-pages.sh"&gt;script&lt;/a&gt; that I wrote to convert markdown:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#! /usr/bin/env bash&lt;/span&gt;
&lt;span class="k"&gt;for &lt;/span&gt;page &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;ls &lt;/span&gt;pages&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
  &lt;/span&gt;pandoc &lt;span class="nt"&gt;--from&lt;/span&gt; markdown_github+smart+yaml_metadata_block+auto_identifiers &lt;span class="s2"&gt;"pages/&lt;/span&gt;&lt;span class="nv"&gt;$page&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="s2"&gt;"public/&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;basename&lt;/span&gt; &lt;span class="nv"&gt;$page&lt;/span&gt; .md&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;.html"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--template&lt;/span&gt; templates/page.html&lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-V&lt;/span&gt; &lt;span class="nv"&gt;navigation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;navigation.html&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-V&lt;/span&gt; &lt;span class="nv"&gt;footer&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;footer.html&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This simply generates html for each of the pages in the pages directory and uses the &lt;a href="https://github.com/jillesvangurp/www.jillesvangurp.com/blob/devtoarticle/templates/page.html"&gt;page.html&lt;/a&gt; template. This in turn has a few variables that it mostly gets from the markdown metadata section; I've added a few more on the command line. Nice and simple.&lt;/p&gt;
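&lt;p&gt;To give an idea of how those variables are used, here is a minimal sketch of such a template (the real page.html has more to it): &lt;code&gt;$title$&lt;/code&gt; comes from the markdown metadata block, while &lt;code&gt;$navigation$&lt;/code&gt; and &lt;code&gt;$footer$&lt;/code&gt; are the &lt;code&gt;-V&lt;/code&gt; variables set by the script:&lt;/p&gt;

```html
<!-- Minimal sketch of a pandoc HTML template: $title$ comes from the
     markdown metadata block, $navigation$/$footer$ from -V flags, and
     $body$ is the converted markdown content. -->
<html>
<head><title>$title$</title></head>
<body>
$navigation$
$body$
$footer$
</body>
</html>
```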

&lt;p&gt;I have a similar script that processes articles that I migrated from my old Wordpress setup (in the &lt;code&gt;articles&lt;/code&gt; directory). Fixing the exported markdown was hard, as the Wordpress export makes a mess of it. But I got it done in the end with a lot of patience (and some regex replacing).&lt;/p&gt;

&lt;p&gt;Since running the above script takes quite a while on 300+ articles, I created &lt;a href="https://github.com/jillesvangurp/www.jillesvangurp.com/blob/devtoarticle/pd-articles.sh"&gt;another script&lt;/a&gt; with a bit of hackery to fork processes with bash:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="k"&gt;for &lt;/span&gt;blogpost &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;ls &lt;/span&gt;articles&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do 
  &lt;/span&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;publishdate&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$blogpost&lt;/span&gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-oE&lt;/span&gt; &lt;span class="s1"&gt;'[0-9]{4}-[0-9]{2}-[0-9]{2}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
  pandoc &lt;span class="nt"&gt;--from&lt;/span&gt; markdown_github+smart+yaml_metadata_block+auto_identifiers &lt;span class="s2"&gt;"articles/&lt;/span&gt;&lt;span class="nv"&gt;$blogpost&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="s2"&gt;"public/blog/&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;basename&lt;/span&gt; &lt;span class="nv"&gt;$blogpost&lt;/span&gt; .md&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;.html"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--template&lt;/span&gt; templates/article.html   &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-V&lt;/span&gt; &lt;span class="nv"&gt;publishdate&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$publishdate&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-V&lt;/span&gt; &lt;span class="nv"&gt;navigation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;navigation.html&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-V&lt;/span&gt; &lt;span class="nv"&gt;footer&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;footer.html&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&amp;amp;
&lt;span class="k"&gt;done
for &lt;/span&gt;pid &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;jobs&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
    &lt;/span&gt;&lt;span class="nb"&gt;wait&lt;/span&gt; &lt;span class="nv"&gt;$pid&lt;/span&gt;
&lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note the little &lt;code&gt;&amp;amp;&lt;/code&gt; on the &lt;code&gt;pandoc&lt;/code&gt; command. This causes bash to fork each invocation as a background job. Then, after all processes are forked, I wait for all the jobs to finish. It takes about 10 seconds to process everything. Good enough. &lt;/p&gt;

&lt;p&gt;Also note the &lt;code&gt;-V&lt;/code&gt; options for setting variables; those are used in the template. In this case, I've added the publication date, which is parsed from the file name using grep.&lt;/p&gt;
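&lt;p&gt;That date extraction is easy to see in isolation (the file name below is just an example):&lt;/p&gt;

```shell
# Example: pull the publication date out of a blog post's file name with
# grep, the same way the script above does (example file name).
blogpost="2020-05-25-using-pandoc-to-create-a-website.md"
publishdate=$(echo "$blogpost" | grep -oE '[0-9]{4}-[0-9]{2}-[0-9]{2}')
echo "$publishdate"
```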

&lt;p&gt;For the &lt;a href="https://github.com/jillesvangurp/www.jillesvangurp.com/blob/devtoarticle/sitemap.sh"&gt;sitemap&lt;/a&gt;, I simply cobbled together a &lt;code&gt;find&lt;/code&gt; command and a few &lt;code&gt;echo&lt;/code&gt; commands.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#! /usr/bin/env bash&lt;/span&gt;
&lt;span class="nv"&gt;sitemap&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"public/sitemap.xml"&lt;/span&gt;
&lt;span class="nv"&gt;baseurl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://www.jillesvangurp.com"&lt;/span&gt;
&lt;span class="nv"&gt;timestamp&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt; +&lt;span class="s2"&gt;"%Y-%m-%dT%H:%M:%SZ"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;function &lt;/span&gt;url&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;url&amp;gt;&amp;lt;loc&amp;gt;&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;baseurl&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;&amp;lt;loc&amp;gt;&amp;lt;lastmod&amp;gt;&lt;/span&gt;&lt;span class="nv"&gt;$timestamp&lt;/span&gt;&lt;span class="s2"&gt;&amp;lt;/lastmod&amp;gt;&amp;lt;/url&amp;gt;"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="nv"&gt;robots&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
User-agent: *
Allow: *
Sitemap: &lt;/span&gt;&lt;span class="nv"&gt;$baseurl&lt;/span&gt;&lt;span class="sh"&gt;/sitemap.xml
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$robots&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; public/robots.txt

&lt;span class="nv"&gt;header&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
&amp;lt;?xml version="1.0" encoding="UTF-8"?&amp;gt;
&amp;lt;urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"&amp;gt;
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt; 
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$header&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;$sitemap&lt;/span&gt;

&lt;span class="k"&gt;for &lt;/span&gt;file &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;find public &lt;span class="nt"&gt;-name&lt;/span&gt; &lt;span class="s2"&gt;"*.html"&lt;/span&gt; | &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s1"&gt;'s/public\///'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;url &lt;span class="nv"&gt;$file&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;$sitemap&lt;/span&gt;
&lt;span class="k"&gt;done

&lt;/span&gt;&lt;span class="nb"&gt;printf&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;/urlset&amp;gt;&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;$sitemap&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I also needed an &lt;a href="https://github.com/jillesvangurp/www.jillesvangurp.com/blob/devtoarticle/indexgenerator.sh"&gt;index page&lt;/a&gt; for my articles:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#! /usr/bin/env bash&lt;/span&gt;
&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; _index.md
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;HEADER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
--------
title: Article Index
author: Jilles van Gurp
--------

Intro text omitted ...
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$HEADER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; _index.md

&lt;span class="nv"&gt;current_year&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"-"&lt;/span&gt;
&lt;span class="k"&gt;for &lt;/span&gt;name &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;find articles &lt;span class="nt"&gt;-type&lt;/span&gt; f &lt;span class="nt"&gt;-exec&lt;/span&gt; &lt;span class="nb"&gt;basename&lt;/span&gt; &lt;span class="o"&gt;{}&lt;/span&gt; &lt;span class="se"&gt;\;&lt;/span&gt; | &lt;span class="nb"&gt;sort&lt;/span&gt; &lt;span class="nt"&gt;-ur&lt;/span&gt; | &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="s1"&gt;'s/\.md//'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
  &lt;/span&gt;&lt;span class="nv"&gt;year&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$name&lt;/span&gt; | &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-E&lt;/span&gt; &lt;span class="s1"&gt;'s/([0-9]{4})-([0-9]{2})-([0-9]{2})-(.*)/\1/'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
  &lt;span class="nv"&gt;month&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$name&lt;/span&gt; | &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-E&lt;/span&gt; &lt;span class="s1"&gt;'s/([0-9]{4})-([0-9]{2})-([0-9]{2})-(.*)/\2/'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
  &lt;span class="nv"&gt;day&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$name&lt;/span&gt; | &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-E&lt;/span&gt; &lt;span class="s1"&gt;'s/([0-9]{4})-([0-9]{2})-([0-9]{2})-(.*)/\3/'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
  &lt;span class="nv"&gt;title&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$name&lt;/span&gt; | &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-E&lt;/span&gt; &lt;span class="s1"&gt;'s/([0-9]{4})-([0-9]{2})-([0-9]{2})-(.*)/\4/'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
  &lt;span class="nv"&gt;nice_title&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$year&lt;/span&gt;&lt;span class="s2"&gt;-&lt;/span&gt;&lt;span class="nv"&gt;$month&lt;/span&gt;&lt;span class="s2"&gt;-&lt;/span&gt;&lt;span class="nv"&gt;$day&lt;/span&gt;&lt;span class="s2"&gt; - &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;title&lt;/span&gt;:0:1&lt;span class="k"&gt;}&lt;/span&gt; | &lt;span class="nb"&gt;tr&lt;/span&gt;  &lt;span class="s1"&gt;'[a-z]'&lt;/span&gt; &lt;span class="s1"&gt;'[A-Z]'&lt;/span&gt; &lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;title&lt;/span&gt;:1&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s1"&gt;'s/-/ /g'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;$year&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="nv"&gt;$current_year&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;## &lt;/span&gt;&lt;span class="nv"&gt;$year&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; _index.md
  &lt;span class="k"&gt;fi
  &lt;/span&gt;&lt;span class="nv"&gt;current_year&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$year&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"- [&lt;/span&gt;&lt;span class="nv"&gt;$nice_title&lt;/span&gt;&lt;span class="s2"&gt;](/&lt;/span&gt;&lt;span class="nv"&gt;$year&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="nv"&gt;$month&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="nv"&gt;$day&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="nv"&gt;$title&lt;/span&gt;&lt;span class="s2"&gt;)"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; _index.md

&lt;span class="k"&gt;done

&lt;/span&gt;pandoc &lt;span class="nt"&gt;--from&lt;/span&gt; markdown_github+smart+yaml_metadata_block+auto_identifiers &lt;span class="s2"&gt;"_index.md"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="s2"&gt;"public/blog/index.html"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--template&lt;/span&gt; templates/article.html &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-V&lt;/span&gt; &lt;span class="nv"&gt;navigation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;navigation.html&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-V&lt;/span&gt; &lt;span class="nv"&gt;footer&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;footer.html&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-V&lt;/span&gt; &lt;span class="nv"&gt;year&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; +%Y&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# cp _index.md generatedindex.md&lt;/span&gt;
&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; _index.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the most complicated script so far, mainly because I wanted to support the old WordPress link structure of &lt;code&gt;year/month/day/title&lt;/code&gt; so as not to break external URLs. I have an &lt;a href="https://github.com/jillesvangurp/www.jillesvangurp.com/blob/devtoarticle/.htaccess"&gt;.htaccess&lt;/a&gt; file with some redirects.&lt;/p&gt;
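&lt;p&gt;As an aside, the four repeated sed invocations in the loop above could be collapsed into a single bash regex match. This is a sketch of that alternative (with a sample file name), not what the script actually does:&lt;/p&gt;

```shell
# BASH_REMATCH captures all four filename components in one pass,
# replacing the four separate sed pipelines.
name="2019-06-25-libra-blockchains-and-the-meaning-of-it-all"
if [[ $name =~ ^([0-9]{4})-([0-9]{2})-([0-9]{2})-(.*)$ ]]; then
  year=${BASH_REMATCH[1]}
  month=${BASH_REMATCH[2]}
  day=${BASH_REMATCH[3]}
  title=${BASH_REMATCH[4]}
fi
echo "$year/$month/$day/$title"
```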

&lt;p&gt;There's &lt;a href="https://github.com/jillesvangurp/www.jillesvangurp.com/blob/devtoarticle/atom.sh"&gt;another very similar script&lt;/a&gt; that generates an Atom feed for those people still using feed readers. I picked Atom for mostly nostalgic reasons, as I was following its standardization while I was at Nokia Research. RSS stayed somewhat more dominant, and nowadays both are about equally irrelevant. But feeds are still around and still used by various news aggregators, so having one is probably a good idea from an SEO perspective.&lt;/p&gt;

&lt;p&gt;Finally, I have a &lt;a href="https://github.com/jillesvangurp/www.jillesvangurp.com/blob/devtoarticle/Makefile"&gt;Makefile&lt;/a&gt; that invokes all my little scripts in the right order; all the output goes to a public folder. The last step uses rsync to copy everything over to my hosting provider. The whole process takes about 30 seconds, which is a bit on the slow side but acceptable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Is this setup better than other tools? Probably not. But I have full control over what happens, and it's simple enough that I can use and maintain it. It's good enough for me, and it gets the job done.&lt;/p&gt;

&lt;p&gt;Note that I've added links to the actual files in my git repo. Inevitably, there will be changes on master, but all the links point to a tag created for this article.&lt;/p&gt;

</description>
      <category>pandoc</category>
      <category>html</category>
      <category>markdown</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Using Cloudfront, S3, and Route 53 for hosting</title>
      <dc:creator>Jilles van Gurp</dc:creator>
      <pubDate>Wed, 17 Jul 2019 15:41:06 +0000</pubDate>
      <link>https://dev.to/jillesvangurp/using-cloudfront-s3-and-route-53-for-hosting-395o</link>
      <guid>https://dev.to/jillesvangurp/using-cloudfront-s3-and-route-53-for-hosting-395o</guid>
      <description>&lt;ul&gt;
&lt;li&gt;
Basic setup
&lt;/li&gt;
&lt;li&gt;
Updating content
&lt;/li&gt;
&lt;li&gt;
Cloudfront invalidations
&lt;/li&gt;
&lt;li&gt;
Url rewrites using S3
&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the past few months, I've retired our nginx server and replaced it with Cloudfront, S3, Route 53, and Amazon-issued certificates to simplify hosting &lt;a href="https://www.inbot.io"&gt;www.inbot.io&lt;/a&gt;. This has been a bit more painful than I would have liked, and there have been a few issues. This is mainly because the AWS documentation sucks: it is fragmented over different products and generally represents a maze of misdirection.&lt;/p&gt;

&lt;p&gt;Anyone from Amazon reading this: a helpful guide for somebody looking to combine your products for a basic use case like "given a completely bog standard one page javascript website, this is how you host it on AWS" would be extremely helpful. Instead you have bits and pieces of documentation for each of your products, leaving all the other bits and pieces as an exercise to the reader. I had to refer to Stackoverflow because every documentation page you land on seems to be lacking some crucial detail. I wasted a serious amount of time figuring out this most basic use of these products.&lt;/p&gt;

&lt;p&gt;The good news is that it does work once you figure out all the workarounds in each of their products. The benefit is that it simplifies your infrastructure for a simple one page app: we have no self-hosted bits and pieces. Additionally, a CDN ensures that your users have a good experience downloading your website from a fast edge node instead of hitting your poor web server on the wrong side of the globe. The bad news is that the AWS UI is a bit lacking in usability and flexibility, and doing some common things that you would do in e.g. nginx is not that easy.&lt;/p&gt;

&lt;p&gt;So, to avoid me having to google this together again, I'm documenting what I had to do to make this work. Also, others may find this useful.&lt;/p&gt;

&lt;h2&gt;
  
  
  Basic setup
&lt;/h2&gt;

&lt;p&gt;First, thanks to Tiberiu Oprea for &lt;a href="https://medium.com/faun/how-to-host-your-static-website-with-s3-Cloudfront-and-set-up-an-ssl-certificate-9ee48cd701f9"&gt;this extremely helpful overview&lt;/a&gt;. It will get you pretty far. I'm not going to repeat everything he says there and will instead focus on some extra stuff I had to figure out separately. AWS people reading this: use that website as a reference on how to properly document your product.&lt;/p&gt;

&lt;p&gt;Assuming you followed his instructions to the letter, you would end up with a Cloudfront + s3 setup (just replace &lt;code&gt;inbot.io&lt;/code&gt; with your own domain):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;HTTPS certificates for &lt;code&gt;inbot.io&lt;/code&gt; and &lt;code&gt;www.inbot.io&lt;/code&gt; in the AWS certificate manager. One gotcha here is that it takes ages for Cloudfront to actually see those; allow a few hours for them to be picked up. If you get it wrong, delete and try again. I lost half a day over this thinking I was doing it wrong, when in fact Cloudfront is just a bit slow to refresh its list of certificates. Eventually they showed up in the relevant drop down. Give it some time.&lt;/li&gt;
&lt;li&gt;An s3 bucket with the domain &lt;code&gt;www.inbot.io.s3-website-eu-west-1.amazonaws.com&lt;/code&gt;. This is where you deploy your static content. I suggest using AWS cli for this.&lt;/li&gt;
&lt;li&gt;Another s3 bucket with the domain &lt;code&gt;inbot.io.s3-website-eu-west-1.amazonaws.com&lt;/code&gt;. This bucket is configured to redirect to &lt;code&gt;www.inbot.io&lt;/code&gt;. It has no content.&lt;/li&gt;
&lt;li&gt;Two matching Cloudfront setups for both buckets. Make sure you redirect http to https in Cloudfront.&lt;/li&gt;
&lt;li&gt;Two A records in Route 53 pointing the domains with and without &lt;code&gt;www&lt;/code&gt; at the two Cloudfront setups.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note that the &lt;strong&gt;bucket names are important&lt;/strong&gt; for getting Route 53 to do the right things. So match the domain name in the bucket name and don't get creative here.&lt;/p&gt;

&lt;p&gt;You can verify that everything redirects as required with a few simple curl commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Cloudfront redirects to https&lt;/span&gt;
curl &lt;span class="nt"&gt;-v&lt;/span&gt; http://inbot.io 
&lt;span class="c"&gt;# Cloudfront hits the S3 bucket and S3 redirects to https://www.inbot.io&lt;/span&gt;
curl &lt;span class="nt"&gt;-v&lt;/span&gt; https://inbot.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If all this works alright, your browser will redirect &lt;code&gt;http://inbot.io/whatever/path&lt;/code&gt; to &lt;code&gt;https://www.inbot.io/whatever/path&lt;/code&gt; via two permanent redirects. So users typing in &lt;code&gt;inbot.io&lt;/code&gt; end up on your website instead of a blank page, or worse, an S3 403 XML page. &lt;/p&gt;

&lt;p&gt;This seems like something the AWS product people need to get spanked a little for. Hosting a simple website is literally one of the most basic use cases a user of their products might have, and it totally sucks. There's literally no website in this world that would not want this out of the box: redirecting http to https and handling domains with and without &lt;code&gt;www&lt;/code&gt;. This should not require two separate buckets and Cloudfront setups. This is madness. But at least it works.&lt;/p&gt;

&lt;h2&gt;
  
  
  Updating content
&lt;/h2&gt;

&lt;p&gt;One problem with Cloudfront is that TTLs are pretty long. This is helpful for caching things in a globally distributed CDN, but annoying when you need to push a bug fix to your website and it takes hours or days for your users to actually get the fix. Addressing this requires some planning.&lt;/p&gt;

&lt;p&gt;If you use something like Webpack, you should ensure the file names are hashed, so this does not matter for most files. That leaves one file that is still a problem: &lt;code&gt;index.html&lt;/code&gt;. Since in our case this file is small, I ended up disabling caching for it.&lt;/p&gt;

&lt;p&gt;In our CI build we use the AWS CLI to interact with AWS. We upload our index.html like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws s3 &lt;span class="nb"&gt;cp&lt;/span&gt; ./build/index.html &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;S3_BUCKET&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--acl&lt;/span&gt; public-read &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--cache-control&lt;/span&gt; max-age&lt;span class="o"&gt;=&lt;/span&gt;0,no-cache,no-store,must-revalidate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;S3 will respect these headers and start serving your new index.html as soon as you fix something. Cloudfront passes these headers to the browser as well.&lt;/p&gt;

&lt;p&gt;For the other files, we use a reasonable TTL. We have a bunch of static files that rarely change and lots of webpack hashed artifacts that generate new files.&lt;/p&gt;

&lt;p&gt;One gotcha here is to not use &lt;code&gt;aws s3 sync --delete&lt;/code&gt;. The problem is that if you delete a file from S3 and a user still has an old &lt;code&gt;index.html&lt;/code&gt; pointing to the old file hash, they will run into 404s until they force reload the page in their browser. That's assuming they even know how to do this, which poses some unique challenges on mobile.&lt;/p&gt;
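&lt;p&gt;So the deploy step syncs without deleting, and old hashed artifacts simply accumulate in the bucket. A sketch of what that looks like (this obviously needs AWS credentials and a real &lt;code&gt;S3_BUCKET&lt;/code&gt; to run):&lt;/p&gt;

```shell
# Upload new and changed files but keep old hashed artifacts around so that
# stale index.html copies keep resolving. Deliberately no --delete flag.
aws s3 sync ./build "${S3_BUCKET}" --acl public-read
```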

&lt;h2&gt;
  
  
  Cloudfront invalidations
&lt;/h2&gt;

&lt;p&gt;You can force Cloudfront invalidations when you deploy new content. The main file to invalidate is &lt;code&gt;index.html&lt;/code&gt;. This ensures that Cloudfront updates the CDN nodes world wide within a few minutes, instead of doing so much more slowly after the next user tries to load the file. So, within minutes of updating, any browser that reloads our page will hit Cloudfront, which forwards the &lt;code&gt;max-age=0,no-cache,no-store,must-revalidate&lt;/code&gt; headers, and get the latest version.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws Cloudfront create-invalidation &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--distribution-id&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CLOUDFRONT_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--paths&lt;/span&gt; / /index.html /app/index.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Of course, keeping your index.html small helps, since loads from S3 are going to be a bit slower than cache hits from Cloudfront. This is OK since the bulk of our content is in the javascript and other files.&lt;/p&gt;

&lt;h2&gt;
  
  
  Url rewrites using S3
&lt;/h2&gt;

&lt;p&gt;We had a few url rewrites in our nginx config. These caused us some headaches until I figured out how to use S3 Routing Rules. Part of the problem was that some links we had distributed to users broke without us knowing, because things got lost during the redirects. For example, we have a reward program for referring new users. They click on a link that used to go to &lt;code&gt;http://inbot.io/join/XYZ&lt;/code&gt; (we've since fixed that to at least go to &lt;code&gt;https&lt;/code&gt;), where XYZ is a referral code that needs to be passed into the javascript that constructs the signup form on our website, so it can be forwarded to our server. To redirect this, I added a rule to our S3 bucket's static website properties:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;RoutingRules&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;RoutingRule&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;Condition&amp;gt;&lt;/span&gt;
      &lt;span class="nt"&gt;&amp;lt;KeyPrefixEquals&amp;gt;&lt;/span&gt;join/&lt;span class="nt"&gt;&amp;lt;/KeyPrefixEquals&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;/Condition&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;Redirect&amp;gt;&lt;/span&gt;
      &lt;span class="nt"&gt;&amp;lt;Protocol&amp;gt;&lt;/span&gt;https&lt;span class="nt"&gt;&amp;lt;/Protocol&amp;gt;&lt;/span&gt;
      &lt;span class="nt"&gt;&amp;lt;HostName&amp;gt;&lt;/span&gt;www.inbot.io&lt;span class="nt"&gt;&amp;lt;/HostName&amp;gt;&lt;/span&gt;
      &lt;span class="nt"&gt;&amp;lt;ReplaceKeyPrefixWith&amp;gt;&lt;/span&gt;app/index.html#/join/&lt;span class="nt"&gt;&amp;lt;/ReplaceKeyPrefixWith&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;/Redirect&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/RoutingRule&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/RoutingRules&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that the protocol and hostname are essential; otherwise S3 will happily redirect you to the bucket url without SSL.&lt;/p&gt;

&lt;p&gt;You can also use &lt;code&gt;RoutingRules&lt;/code&gt; to serve nicer HTTP errors, e.g. for the 403s that S3 throws when it can't find an object (how is that not a 404?). One of these days, I'll probably invest some time in doing this in one command with the AWS CLI, but for now I ended up clicking all of this together in the AWS UI.&lt;/p&gt;

&lt;p&gt;So this fixed our issues. Note we are redirecting to &lt;code&gt;app/index.html#/join/XYZ&lt;/code&gt;. I decided to get rid of our subdomain for the web app and simply host everything under &lt;code&gt;www.inbot.io&lt;/code&gt;. Inside our web app, we use &lt;code&gt;#&lt;/code&gt; paths (aka anchors) and everything is handled by our javascript.&lt;/p&gt;

&lt;h2&gt;
  
  
  CORS header for our stellar.toml
&lt;/h2&gt;

&lt;p&gt;Another issue was that we have one file on our site that needs CORS headers set correctly: the &lt;code&gt;stellar.toml&lt;/code&gt; file that Stellar uses to figure out metadata about our cryptocurrency, the InToken. The how and why are not important here, but setting up CORS is a common requirement and something that is straightforward in nginx. This one had me pondering for a while, because S3 does not provide a good solution for it. However, it turns out that in Cloudfront you can configure so-called cache behaviors for specific paths. In our case we added a behavior for &lt;code&gt;/.well-known/stellar.toml&lt;/code&gt; with the custom header. This location is where Stellar clients expect to find the metadata file, and since a lot of Stellar clients are browser applications, they need the CORS headers to be set correctly. If you want to read more about this, refer to the &lt;a href="https://www.stellar.org/developers/guides/concepts/stellar-toml.html"&gt;Stellar documentation&lt;/a&gt;.&lt;/p&gt;
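&lt;p&gt;A quick way to verify a behavior like this took effect is to inspect the response headers by hand (this assumes the site is live, so it is a manual check rather than something for CI):&lt;/p&gt;

```shell
# Once the Cloudfront cache behavior is in place, the response should
# include an Access-Control-Allow-Origin header.
curl -sI https://www.inbot.io/.well-known/stellar.toml | grep -i 'access-control'
```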

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This setup is working pretty OK. We have a simple CI job that does most of this. I did use the AWS UI to click together the buckets and Cloudfront setup. You could probably use e.g. Cloudformation or some AWS CLI incantations for this, but since it is a one time thing, I did not bother to automate it. The flip side of that is that it is kind of a long process with many steps and quite easy to mess up.&lt;/p&gt;

&lt;p&gt;Also, I'm not completely happy about having to do so many things in the AWS UI (painfully unusable). However, I can't bring myself to automate this hopefully one-time setup. There are diminishing returns when it comes to automating this stuff.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudfront</category>
      <category>s3</category>
    </item>
    <item>
      <title>Libra, blockchains, and the meaning of it all</title>
      <dc:creator>Jilles van Gurp</dc:creator>
      <pubDate>Tue, 25 Jun 2019 12:43:47 +0000</pubDate>
      <link>https://dev.to/jillesvangurp/libra-blockchains-and-the-meaning-of-it-all-1clc</link>
      <guid>https://dev.to/jillesvangurp/libra-blockchains-and-the-meaning-of-it-all-1clc</guid>
      <description>&lt;p&gt;Recently, Facebook announced their plans for launching their own blockchain platform and crypto currency, the Libra. As I have done the technical work for &lt;a href="https://inbot.io"&gt;Inbot&lt;/a&gt; to launch our own token on the Stellar blockchain, I'm of course very interested in what they are doing, why,  and how.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What problem does Libra solve?&lt;/li&gt;
&lt;li&gt;Byzantine Consensus vs. PoS &amp;amp; PoW&lt;/li&gt;
&lt;li&gt;Smart Contracts&lt;/li&gt;
&lt;li&gt;Libra consortium &lt;/li&gt;
&lt;li&gt;Privacy, KYC, and Financials&lt;/li&gt;
&lt;li&gt;Why is Facebook creating a new Blockchain?&lt;/li&gt;
&lt;li&gt;Monetization&lt;/li&gt;
&lt;li&gt;What happens next?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this article I want to explore what I think is happening and what it all means. As such, it is severely at risk of being both late and somewhat redundant as world plus dog seems to have an opinion on this. In writing this article, I want to reflect a bit on the bigger picture and also intend to distance myself a bit from the alarmist and populist hyperbole that has been circulating since Facebook announced Libra.&lt;/p&gt;

&lt;p&gt;This is going to be a longish article. In short, I'm cautiously optimistic that Facebook is serious about Libra and is doing exactly the kind of things technically, legally, and practically that they ought to be doing to make this a success for them. Of course, I cannot vouch for their motivations; though I would assume it involves enriching themselves in some way.&lt;/p&gt;

&lt;p&gt;First, let's get a few things out of the way. In debating this with others, I've noticed a lot of negative sentiment around the topic of crypto currencies, blockchains, and Facebook. A lot of this is for valid reasons and I partially share those sentiments. I'm a natural skeptic and despite my technical involvement with a blockchain based product, I'm not actually active as a crypto investor. I tend to look at this topic from both a technical and pragmatic angle. I don't consider myself a dreamer, utopian, or otherwise idealistic person.&lt;/p&gt;

&lt;p&gt;Another part of this negativity is simply ignorance and people assuming they have a firm grip on this when arguably they don't. Luddites have been there at every step of the way from the invention of the wheel, to the industrial revolution (where the term &lt;a href="https://en.wikipedia.org/wiki/Luddite"&gt;Luddites originates&lt;/a&gt;), the invention of radio, television, the internet, etc.&lt;/p&gt;

&lt;p&gt;It's good to be skeptical and conservative but it would be foolish to dismiss Libra as just a fad. It won't blow over and whether you like this or not it looks like blockchains are here to stay in one form or another. For us, developers, technologists, etc. the main thing is to make sense of what is happening and upgrade our skill sets so we can make ourselves useful gluing together all the new bits and pieces of technology.&lt;/p&gt;

&lt;p&gt;I think of the Libra announcement as a symptom of the entire market maturing and a necessary step for crypto currencies to become more mainstream. It's a combination of necessary, not unexpected, and inevitable for this to happen. More companies will follow and probably very soon now that there is a sense of urgency to not let Facebook 'win'.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disclaimer&lt;/strong&gt; I use the term blockchain loosely in this article and some people might take offense at that. Blockchains and blockchain-like platforms share the useful property of having some cryptographically secure way of storing and protecting a record of transactions. For the purpose of this article I'll use the word blockchain to refer to that type of system.&lt;/p&gt;

&lt;h2&gt;
  
  
  What problem does Libra solve?
&lt;/h2&gt;

&lt;p&gt;Simply put, the existing financial system has been around since the late 1600s. It has served us well; it is ingrained in our legal systems and we're used to it.&lt;/p&gt;

&lt;p&gt;However, it has issues. One of those issues is the level of control that resides in the hands of the banks and the governments that control them, powers that are occasionally abused. Another is the built-in inefficiency of our reliance on mechanisms that either predate the invention of computers (e.g. paperwork) or date back to the early days of computing. Additionally, poorly aligned incentives cause banks to profit from delaying transactions and allow them to exploit their controlling position by e.g. charging excessively for transactions, denying service to some, or otherwise benefiting from their exclusive position. Banks make stupendous amounts of money for essentially obstructing financial traffic. A notable thing about the Libra announcement is that banks do not feature at all in the consortium. This is not an accident. Libra is intended to make them redundant.&lt;/p&gt;

&lt;p&gt;Blockchains provide a solution for administering ledgers, i.e. records of transactions, that is by design impossible to cheat (assuming no bugs, compromised algorithms, etc.). With a blockchain, much of the need for banks goes away. Ledgers are of course nothing new. The ancient Sumerians had them and much of their surviving written material consists of ledgers. The same goes for the Egyptians, Romans, etc. Ledger technology is ancient.&lt;/p&gt;

&lt;p&gt;The modern financial system invented in the 17th century introduced the notion of banks, central banks, stock exchanges, etc. that are powered by ledgers. Additionally, it introduced the notion of bits of paper to represent money: an IOU by a bank or central bank that the bit of money is being administered correctly. The integrity of currency is core to our modern financial system and for the past century it has relied on governments and central banks overseeing the way people administer their ledgers. Necessarily, this involves independent bookkeeping, elaborate checks and balances when transacting, and rules and laws for conflict resolution when somebody inevitably cheats. Blockchains internalize much of this manual overhead by making it impossible to cheat. They remove the need for people to keep independent ledgers and thus reduce the cost and friction associated with financial transactions.&lt;/p&gt;

&lt;p&gt;Contrary to popular belief, a simple database is not a drop-in replacement. We've already had those for the last few centuries: first in paper form and, for the last century or so, in mechanical (the M in IBM stands for Machines) and then digital form (i.e. a database). Blockchains have one key feature these lack: they allow mutually distrusting users to have confidence that none of them is able to cheat or commit fraudulent transactions.&lt;/p&gt;

&lt;p&gt;Not only is it cheaper to use a blockchain, it is also faster. A good blockchain platform can transact in seconds or even faster. After that, the transaction is guaranteed to have happened. Additionally, because the cost is so low, it is possible to do very small transactions. This enables a whole range of use cases that currently sit mostly outside the financial system.&lt;/p&gt;

&lt;p&gt;Indeed, as Facebook notes in their marketing material, there are still billions of people that lack a bank account. Most of their business is conducted using cash, bartering, or one of a range of non-blockchain payment solutions that have emerged to address this.&lt;/p&gt;

&lt;h2&gt;
  
  
  Byzantine Consensus vs. PoS &amp;amp; PoW
&lt;/h2&gt;

&lt;p&gt;From reading the whitepaper, it looks like Facebook is broadly copying the approach used by the likes of Stellar (which I'm very familiar with), Ripple, and a few others. Stellar is using a consensus model where validators agree on which transactions to accept. This process is fast because it does not involve e.g. mining and it awards a special status to the owners of these validators in the sense that they control what happens in the network.&lt;/p&gt;

&lt;p&gt;Libra is implementing a so-called permissioned variant of &lt;a href="https://www.theblockcrypto.com/2019/06/19/a-technical-perspective-on-facebooks-librabft-consensus-algorithm/"&gt;Byzantine Fault Tolerant (BFT)&lt;/a&gt; consensus. Permissioned BFTs are emerging as a third alternative to the more established permissionless networks like Bitcoin and Ethereum, which come in two variants: Proof of Work (aka mining) and Proof of Stake. Respectively, the three approaches accept transactions based on who you are (permissioned BFT), what you did (PoW), or how much you own (PoS).&lt;/p&gt;

&lt;p&gt;The way consensus works in permissioned BFTs like Stellar is that if you choose to run a validator, you must configure a list of other validators that you trust and define a quorum of how many of those need to agree for a transaction to be valid for your node. Transactions on your validator are only accepted if there is consensus with the nodes you list and the validators they trust. This extended consensus network means that e.g. the 51% attacks possible in permissionless networks are very hard to perform unless you manage to hijack existing validators that are already trusted. It also means that as the number of validators that trust each other directly or indirectly grows, the network becomes more decentralized and it becomes harder to cheat. Facebook is following a similar model.&lt;/p&gt;
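&lt;p&gt;To make the quorum idea concrete, here is a deliberately simplified Python sketch of threshold-based agreement. The node names, trust lists, and thresholds are all made up for this illustration; the real Stellar and LibraBFT protocols involve quorum slices, voting rounds, and much more.&lt;/p&gt;

```python
# Toy model of trust-based quorum agreement (illustration only; the real
# Stellar and LibraBFT protocols are far more involved).

def node_accepts(node, votes, trust, threshold):
    """A node accepts once enough of the validators it trusts have voted yes."""
    agreeing = sum(1 for peer in trust[node] if votes.get(peer))
    return agreeing >= threshold[node]

# Hypothetical 4-validator network: each node trusts the other three
# and requires agreement from at least 2 of them.
trust = {n: [p for p in "ABCD" if p != n] for n in "ABCD"}
threshold = {n: 2 for n in "ABCD"}

votes = {"A": True, "B": True, "C": True, "D": False}
print(node_accepts("D", votes, trust, threshold))  # True: A, B and C agree
```

&lt;p&gt;The point is only that each node decides for itself whose agreement counts, rather than relying on mining or stake.&lt;/p&gt;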

&lt;p&gt;Like Stellar, Libra will launch with a consortium of validators and they intend to open up to more third party validators later. As the number of validators grows, the network will be more resilient against attempts by any of them to control it. While anyone can launch a validator, the only validators that matter are those that are trusted by the existing ones. This is what it means to be permissioned. Think of it as an invitation-only kind of thing, where whether you get an invite is based on your merit, reputation, and trustworthiness. In the Stellar network, important drivers for getting others to trust you are also factors such as your uptime, business relations with other validator-owning organizations, etc.&lt;/p&gt;

&lt;p&gt;So, it would be wrong to characterize this as centralized, with e.g. Facebook deciding who gets to join its network. This oversimplification seems to come up regularly in discussions of how Stellar and Ripple work and how Libra will work. Stellar, for example, actually has a growing number of validators that have to agree with each other. When there is no agreement (i.e. consensus), the network stops making progress.&lt;/p&gt;

&lt;p&gt;This actually happened &lt;a href="https://medium.com/stellar-developers-blog/may-15th-network-halt-a7b933103984"&gt;recently&lt;/a&gt; when several validator operators decided to do maintenance at the same time and took down some of their nodes. As a result, the remaining nodes failed to find enough nodes to agree with and stopped accepting new transactions until the nodes came back online. While this sounds bad, it is actually a good safety feature: Stellar will choose halting over forking. Nobody lost their money. In their response to this incident, the Stellar Foundation made clear that they want to reduce the importance of their own validators in the network and identified this as a root cause of the failure. Their stated goal is for the network to tolerate their own validators being taken offline without halting consensus.&lt;/p&gt;

&lt;p&gt;Similar incidents with e.g. Ethereum have instead resulted in unintentional forks or partitions (as opposed to the intentional forks that also happen) where transactions on the wrong branch of the fork were lost. To mitigate this, it is common to wait for several blocks before assuming a transaction has happened. Consequently, transactions can take a very long time to complete in Ethereum and Bitcoin.&lt;/p&gt;
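&lt;p&gt;As a rough back-of-envelope comparison (using commonly cited ballpark figures rather than measurements), the difference in settlement time is dramatic:&lt;/p&gt;

```python
# Back-of-envelope settlement times. The block times and confirmation
# counts below are rough rule-of-thumb figures, not measurements.

def settlement_seconds(block_time_s, confirmations):
    return block_time_s * confirmations

bitcoin = settlement_seconds(600, 6)   # ~10 minute blocks, 6 confirmations
ethereum = settlement_seconds(15, 12)  # ~15 second blocks, 12 confirmations
stellar = settlement_seconds(5, 1)     # ~5 second ledger close, final at once

print(bitcoin / 60, ethereum / 60, stellar)  # 60.0 3.0 5
```

&lt;p&gt;Waiting roughly an hour versus a handful of seconds is the practical difference between "settlement" in a fork-prone network and in one that halts rather than forks.&lt;/p&gt;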

&lt;p&gt;In &lt;a href="https://en.wikipedia.org/wiki/CAP_theorem"&gt;CAP theorem&lt;/a&gt; terms, permissioned Byzantine consensus favors Consistency and Partition Tolerance over Availability, where permissionless PoS and PoW based systems tend to sacrifice consistency instead. This is the key reason Facebook picked this mechanism: it is both safe and fast and therefore suits their intended use case. Also, not having their network invaded by hostile validators is likely more of a feature for them than a bug.&lt;/p&gt;

&lt;p&gt;In terms of the existing financial system, what Facebook is doing is essentially a more efficient digital equivalent, where the same types of entities responsible for making transactions happen now facilitate this by running validators that do so automatically. The same properties that made that good enough also make permissioned BFTs good enough, provided the network has enough independent validators to be resilient against some of them misbehaving. Inevitably, there are going to be a lot of politics around this topic; just like in the existing financial system.&lt;/p&gt;

&lt;p&gt;IMHO there will remain a role for permissionless systems as stores of value, and the interaction between different blockchain networks and a diversity of mutually integrating platforms is going to be key to the long term robustness of the ecosystem.&lt;/p&gt;

&lt;h2&gt;
  
  
  Smart Contracts
&lt;/h2&gt;

&lt;p&gt;Unlike Stellar and Ripple, Libra supports smart contracts. These are similar to what you'd find in e.g. Ethereum (i.e. Solidity). However, there are some differences. &lt;a href="https://hackernoon.com/move-programming-language-the-highlight-of-libra-122a910d6e0f"&gt;Hackernoon&lt;/a&gt; has written a great overview of the essentials. Move, Libra's contract language and VM, provides all the essentials for strong guarantees about correctness and integrity.&lt;/p&gt;

&lt;p&gt;The language, VM, and module system are designed to facilitate formal verification. This is very important because it allows for contracts and modules that are provably correct, which greatly simplifies auditing.&lt;br&gt;
Unlike Solidity, the main smart contract language in Ethereum and similar blockchains, Move does not support dynamic dispatch. This further simplifies reasoning about and auditing Move contracts.&lt;br&gt;
Auditing has proven to be a significant hurdle with Ethereum. When Inbot was preparing an ICO in early 2018, we were indeed preparing an ERC20 token. Ultimately we were too late; the market had already crashed by the time we were getting ready. Part of our concern was the significant cost of auditing and a nagging worry that there might still be bugs in our contracts. Smart contract bugs are no joke and can have significant financial consequences.&lt;/p&gt;

&lt;p&gt;The lack of smart contracts in Stellar is a feature, not a bug. Instead of smart contracts, Stellar (and Ripple) provide a set of primitive transaction types that you can combine to build financial products. However, this is inherently more limited than a full blown contract language. Libra actually makes a nice compromise here. By providing a language and module system with strong verification mechanisms, it allows an ecosystem of verified modules to emerge that users may combine to build smart contracts. So, auditing should be about as simple as with Stellar for the basic use cases while still allowing more complicated contracts to be written.&lt;/p&gt;
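&lt;p&gt;As an illustration of the primitives-based approach, here is a toy Python model of combining operation types into one atomic transaction. The class names and the &lt;code&gt;build_transaction&lt;/code&gt; helper are invented for this sketch and do not correspond to any real SDK:&lt;/p&gt;

```python
# Toy model of composing primitive operation types into a single atomic
# transaction, in the spirit of Stellar's design (not a real SDK).
from dataclasses import dataclass

@dataclass
class Payment:
    source: str
    dest: str
    asset: str
    amount: int

@dataclass
class ChangeTrust:
    source: str
    asset: str
    limit: int

def build_transaction(ops):
    # All operations in a transaction succeed or fail together.
    return {"operations": ops, "atomic": True}

# A simple issuance flow built from primitives instead of a contract:
tx = build_transaction([
    ChangeTrust(source="buyer", asset="FOO", limit=1000),
    Payment(source="issuer", dest="buyer", asset="FOO", amount=100),
])
print(len(tx["operations"]))  # 2
```

&lt;p&gt;Because the whole bundle is atomic, you get escrow-like behavior from a fixed vocabulary of operations, with nothing Turing-complete to audit.&lt;/p&gt;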

&lt;h2&gt;
  
  
  Libra consortium
&lt;/h2&gt;

&lt;p&gt;A blockchain is only as good as the people backing it. Facebook has gathered a strong consortium with major companies in it. In particular, the presence of major existing payment providers like Visa, Mastercard, and PayPal suggests that adoption could be quite rapid. Another thing worth pointing out is that these companies are competitors and that the financial stakes are very high for them. This further strengthens the argument that the consensus model Libra is taking here is not about centralizing power but about enabling mutually competing entities like these to collaborate.&lt;/p&gt;

&lt;p&gt;I imagine the many governments currently considering Facebook's actions will be scrutinizing these arrangements. I think it is safe to assume that Facebook has considered this and that much of the technical architecture is intentionally designed to provide strong guarantees here, aimed at preempting a lot of the concerns.&lt;/p&gt;

&lt;p&gt;The stated intention of Facebook is to create a legal entity called Calibra that will presumably control the platform. Crucially, this legal entity is a direct subsidiary of Facebook and will presumably control the financial reserve, intellectual property (i.e. patents), and technical direction. I am somewhat surprised that Facebook has chosen this path instead of setting up a foundation that is controlled by its members (like e.g. Stellar, Ethereum, and other blockchain platforms have done). This suggests that Facebook will have a special position in the platform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Privacy, KYC, and Financials
&lt;/h2&gt;

&lt;p&gt;Facebook is under a lot of pressure from the public, the press, and increasingly legal authorities over their lack of regard for their own users' privacy. A blockchain is by its nature very public and it is unclear how they are addressing privacy, despite their solemn promises that user transactions will be private.&lt;/p&gt;

&lt;p&gt;Potentially all the transactions of all their users would end up being public domain information and certainly Facebook would be gathering a lot of financial data about their users. This obviously raises some red flags for those concerned with their privacy. The ability to trace back user payments to ads is potentially a gold mine for Facebook.&lt;/p&gt;

&lt;p&gt;To comply with legal requirements, Facebook will have to apply anti-money-laundering (AML) and know-your-customer (KYC) policies. Arguably, they already know a lot about their users as it is, which gives them a strong starting position. Requiring documentation in the form of selfies, copies of identity documents, and proof of address will put them in a unique position of controlling a vast amount of very private data on billions of users worldwide.&lt;/p&gt;

&lt;p&gt;Worse, a centrally controlled platform like this is also of interest to intelligence agencies, and potentially a tool that may be used to control citizens. E.g. WeChat is a very popular payment platform in China and the Chinese government routinely punishes citizens by banning them from it, which severely limits their ability to do financial transactions in China because WeChat has largely replaced the use of traditional money. In creating Libra, Facebook seems to want to compete directly with platforms like that.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why is Facebook creating a new Blockchain?
&lt;/h2&gt;

&lt;p&gt;Many users of existing blockchains may have raised a few eyebrows at Facebook building their own platform. However, I think there is a very rational explanation: none of the existing platforms has a level of maturity that would make it suitable for operating at the scale Facebook needs it to run.&lt;/p&gt;

&lt;p&gt;Billions of users using Libra means an absolutely huge transaction volume. I doubt many of the established blockchains are anywhere near ready for such volumes. Both Ethereum and Bitcoin are still mining based and limited to a worldwide transaction volume in the single or, at best, double digits of transactions per second. Existing traditional payment platforms can handle thousands or tens of thousands of transactions per second.&lt;/p&gt;

&lt;p&gt;Additionally, these are still limited to relatively large transactions as there are transaction fees involved. Facebook seems to intend to have a very low transaction cost in order to facilitate very small transactions and micro payments as well as traditional payments. E.g. they highlight the lack of good payment solutions for third world countries where many people do not have banks or credit cards. All this suggests that Facebook needs to support a very high volume of transactions at a very low cost.&lt;/p&gt;
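&lt;p&gt;A quick bit of arithmetic shows why fees determine the minimum sensible payment size. The fee figures here are purely illustrative, not quoted prices:&lt;/p&gt;

```python
# Fee as a fraction of the payment amount (figures purely illustrative).

def fee_overhead(amount, fee):
    return fee / amount

# A 2.00 flat fee makes a 1.00 payment pointless (200% overhead),
# while a 0.00001 fee barely registers on a 0.10 micropayment.
print(fee_overhead(1.00, 2.00))
print(fee_overhead(0.10, 0.00001))
```

&lt;p&gt;Once overhead drops to a hundredth of a percent, payments far below the reach of card networks become economical.&lt;/p&gt;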

&lt;p&gt;So, clearly there is more going on here than just not-invented-here syndrome. Facebook inventing their own platform of course gives them control and, as pointed out, there are plenty of reasons to not trust them with that level of control. But it is not like there is anything out there that they could have just taken and used.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monetization
&lt;/h2&gt;

&lt;p&gt;Obviously, Facebook is a very valuable company and they are looking to make money through their platform. By design, transaction fees are going to be low. So, it would be wrong to assume that is going to be the primary revenue driver. Instead, it is likely that Facebook is looking to leverage their position as the controlling entity of Calibra to find alternative revenue streams.&lt;/p&gt;

&lt;p&gt;They are creating an exclusive ecosystem that generates a lot of data. Access to this ecosystem will likely require buying into it. Additionally, controlling the large reserve that is needed to stabilize the Libra means an opportunity to make lots of money from simply investing these funds. And day to day fluctuations in exchange rates and conversions to and from existing currencies create very lucrative opportunities to make money.&lt;/p&gt;

&lt;p&gt;Finally, the value of Facebook as a holder of KYC and AML information means that they can provide guarantees to others about whom they are dealing with. As governments will inevitably want to crack down on money laundering, financing of terrorism, and tax dodging, access to this information is going to be crucial for anyone looking to do business via their platform. The alternative of doing KYC and AML in house is unlikely to scale, and Facebook is uniquely positioned to provide very high quality information through their existing relationship with billions of users.&lt;/p&gt;

&lt;h2&gt;
  
  
  What happens next?
&lt;/h2&gt;

&lt;p&gt;I expect Libra will trigger a lot of scrutiny. But, it will also act as a wakeup call. So far, blockchains have been operating in the margins of the financial and legal system. A combination of libertarian utopianism, greed, anarchism, etc. seems to be what has been driving things forward. Lately, there have been some increasingly serious attempts by e.g. IBM with World Wire, Bit Bond, WireX, and similar efforts to use blockchains for payments, securities, etc. However, most of these products are still relatively new and limited to low transaction volumes.&lt;/p&gt;

&lt;p&gt;With Libra, Facebook will overnight create a new reality with potentially hundreds of millions of people getting involved directly or indirectly. This will represent a sharp increase in the use, and importance of blockchains for the financial plumbing of the world. We're now about to hit the elbow in the exponential curve where this will go from relatively small numbers to massive numbers. &lt;/p&gt;

&lt;p&gt;I expect that Facebook will get competition very shortly and fully expect most of the tech giants are already at an advanced stage of planning counter moves. I'm talking about the likes of Apple and Google, both of which operate mobile phone based payment platforms; and Microsoft, Amazon, traditional banks, and in some cases the governments of smaller nations not so eager to have their financial infrastructure hijacked by any of these.&lt;/p&gt;

&lt;p&gt;In short, I expect that we will have a frenzy of investments, speculation, and R&amp;amp;D activity around this topic over the next year. Anyone that matters in this space is going to want in on the action. If they were interested before, they will now be accelerating their attempts. None of these companies will be happy to sit back and watch Facebook eat their lunch.&lt;/p&gt;

&lt;p&gt;Additionally, I expect e.g. the US and EU to have a thing or two to say about financial oversight. Short term, Libra seems to be well positioned to dodge some of this. However, clearly, there is already widespread support for legal action against Facebook and they risk further scrutiny with this announcement. In a way, they are forcing the agenda early, which I think is actually smart because it catches everyone unprepared.&lt;/p&gt;

&lt;p&gt;My impression is that governments are too slow to respond in a timely fashion and that by the time they are ready to legislate, there will already be several too-big-to-fail type platforms as well as powerful lobbies aiming to prevent them from blocking this. Time is running out quickly and inaction just means the existing laws continue to apply.&lt;/p&gt;

&lt;p&gt;Assuming Libra gets through this initial phase, we will likely have several new blockchain based payment platforms starting to compete with each other within the next 1-2 years, each with large volumes of transactions and vast amounts of money flowing around.&lt;/p&gt;

&lt;p&gt;One interesting thing here is that this will inevitably also create a need for inter-blockchain traffic via exchanges. E.g. the Stellar platform already seems to be emerging as one of the platforms of choice for this kind of traffic. IBM's World Wire is using it, several exchanges connect to it (or rather make use of its built-in decentralized exchange), and several fintech companies are operating stable coins for both crypto and traditional currencies on it. The simplistic view of "there can be only one" is in my view not justified. There will be at least several platforms and they will inevitably be highly integrated with each other.&lt;/p&gt;

&lt;p&gt;In summary, I believe that Facebook's moves with Libra are logical and very smart. For their level of ambition, Libra is the right platform to be building technically and they seem to have a strong consortium with which they are launching this.&lt;/p&gt;

&lt;p&gt;I also believe that this actually strengthens the blockchain ecosystem and that it will lead to a lot of new things. Whether Facebook's Libra emerges as the dominant solution in this space is very much undecided. If that were to happen, it would be because of a lack of initiative and vision by their competitors. Personally, I don't believe in that doom scenario; instead, I expect they will be one of several platforms left standing after the dust settles in a few years.&lt;/p&gt;

</description>
      <category>libra</category>
      <category>blockchain</category>
      <category>opinion</category>
    </item>
    <item>
      <title>Explaining Stellar using Cliste</title>
      <dc:creator>Jilles van Gurp</dc:creator>
      <pubDate>Wed, 17 Oct 2018 17:35:03 +0000</pubDate>
      <link>https://dev.to/jillesvangurp/explaining-stellar-using-cliste-3men</link>
      <guid>https://dev.to/jillesvangurp/explaining-stellar-using-cliste-3men</guid>
      <description>&lt;h1&gt;
  
  
  Origins of Cliste
&lt;/h1&gt;

&lt;p&gt;A few months ago, we decided to create a token on Stellar at &lt;a href="//inbot.io"&gt;Inbot&lt;/a&gt;. So, I read up, explored the sdks and started trying to figure out how everything works in the Stellar world.&lt;/p&gt;

&lt;p&gt;Since I primarily use Kotlin these days, I decided that I wanted to adapt the official stellar sdk for Java and make it a bit more kotlin friendly. Kotlin and Java play really nice together so this basically boiled down to me creating a new Github project called &lt;a href="https://github.com/Inbot/inbot-stellar-kotlin-wrapper"&gt;inbot-stellar-kotlin-wrapper&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Initially I was just fooling around and trying to figure out different parts of the Horizon API. I figured out how to run a standalone chain, wrote some code, added some tests, etc. After a few days I realized that&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;this stuff is easy&lt;/li&gt;
&lt;li&gt;I really would like to use the command line instead of writing code and tests all the time.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So, I googled for a suitable command line argument parser for Kotlin and stumbled on com.xenomachina.kotlin-argparser. It's a really nice Kotlin DSL for picking apart command lines.&lt;/p&gt;

&lt;p&gt;The next step was basically coming up with a nice name: Command Line Interface for STellar, aka. cliste. At this point, I've been working on this project on and off for about 2 months. It's getting to a state where it is pretty useful.&lt;/p&gt;

&lt;h1&gt;
  
  
  Explaining Stellar using Cliste
&lt;/h1&gt;

&lt;p&gt;Now that I have cliste, I can show some simple examples to explain how Stellar works. Instead of giving you the usual marketing cliches, bad metaphors, and other verbose ways of communicating stuff, I'll highlight some key features using cliste:&lt;/p&gt;

&lt;h2&gt;
  
  
  Running a standalone stellar
&lt;/h2&gt;

&lt;p&gt;If you want to fool around a bit, firing up a standalone chain is the best way.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --rm --name stellarstandalone -p "8000:8000"  stellar/quickstart --standalone
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It will take a second to start. Since we pass &lt;code&gt;--rm&lt;/code&gt;, all your data will be lost when you kill the container. So, you can use this as a sandbox. You can swap out --standalone for --testnet or --public to target the testnet or the public Stellar net instead.&lt;/p&gt;

&lt;h1&gt;
  
  
  Creating accounts
&lt;/h1&gt;

&lt;p&gt;Let's create an account and give ourselves some XLM. In Stellar, accounts must have a minimum balance of 0.5 XLM. On the standalone chain in the quickstart image it is still 20 XLM though. So, to be safe, we give alice plenty.&lt;/p&gt;

&lt;p&gt;Since we are on a standalone network, we'll have to create an account from nothing. This works as follows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./cliste createAccount alice 1000
17:35:56.715 [main] INFO io.inbot.kotlinstellar.KotlinStellarWrapper - using standalone network
17:35:58.166 [main] INFO org.stellar.sdk.KotlinExtensions - 7 07067baf191a516c6434c84c6232c3672dd179883451641b417fb3090bc82d08 success:true fee:100 CREATE_ACCOUNT
17:35:58.166 [main] INFO io.inbot.kotlinstellar.KotlinStellarWrapper - created GBJR3JH4ZC5LZKGM2PPSVMJFPPWREVEKD4TQJL2RMUHI75P35N4JVABC
created account with secret key SCBB6IXATC2RFKCFIFUUIXGD5QDHRYAEQZBDDDMPRO3PVYXEMETCC67F
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What just happened here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;we generated a new key pair &lt;/li&gt;
&lt;li&gt;it was stored in keys.properties under the key alice. This allows us to use that as an alias in future commands. &lt;/li&gt;
&lt;li&gt;we gave ourselves some XLM. We can do this because we know the seed of the chain. &lt;/li&gt;
&lt;li&gt;it logged some details, like the private key&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This won't work on testnet but you can use the &lt;a href="https://www.stellar.org/laboratory/#account-creator?network=test"&gt;friendbot&lt;/a&gt; instead. On the public net, the only way to fund new accounts is through an exchange or via an existing account.&lt;/p&gt;

&lt;h2&gt;
  
  
  Alice can create more accounts now
&lt;/h2&gt;

&lt;p&gt;Now that alice has a valid account, she can create an account for bob and fund the base XLM balance for bob herself.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./cliste -a alice createAccount bob 50
17:40:07.888 [main] INFO org.stellar.sdk.KotlinExtensions - 57 e580f9e898a7151000e0228500d17846165aebcfed405411167cf5b1f41dd6b4 success:true fee:100 CREATE_ACCOUNT
17:40:07.892 [main] INFO io.inbot.kotlinstellar.KotlinStellarWrapper - created GAURP3FQ56BOF2PXF5DCLFS4QOJPTO5XM62G2AAMAQXB4PZWYNSS4RKA
created account with secret key SCYD6JC5GHFW63T5KTNVOT5FFJIW2R67VBEHFYIQZCMS2VLLBAKUI4YW
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So we simply add alice's key to the command with -a and define a new key.&lt;/p&gt;

&lt;h2&gt;
  
  
  Managing accounts
&lt;/h2&gt;

&lt;p&gt;If you want to know what keys you have you can list them as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./cliste listKeys
Defined keys (2):
alice: secretKey SCBB6I.... accountId: GBJR3JH4ZC5LZKGM2PPSVMJFPPWREVEKD4TQJL2RMUHI75P35N4JVABC
bob: secretKey SCYD6J.... accountId: GAURP3FQ56BOF2PXF5DCLFS4QOJPTO5XM62G2AAMAQXB4PZWYNSS4RKA
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The secret keys are abbreviated here. &lt;/p&gt;

&lt;p&gt;Note, you can also manually add keys in keys.properties, and you can mix public and private keys here. So, you can use this like an address book for public keys that you care about. You only need private keys if you are going to do transactions for the account.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Be careful storing private keys&lt;/strong&gt; you care about here. This tool is intended as a development tool only. I may add some more suitable protection in the future but right now it is all plain text. In a nutshell, don't do anything with this that I would not do either.&lt;/p&gt;

&lt;h1&gt;
  
  
  Payments and balances
&lt;/h1&gt;

&lt;p&gt;Let's give bob some more XLM:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./cliste -a alice pay bob 10 XLM
17:44:12.535 [main] INFO org.stellar.sdk.KotlinExtensions - 106 703c34d98162f5223805cdd54a48261ae73e0dae8d5fccc76caba59d492f228a success:true fee:100 PAYMENT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Bob and alice can each check their balance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./cliste -a alice balance
accountId: GBJR3JH4ZC5LZKGM2PPSVMJFPPWREVEKD4TQJL2RMUHI75P35N4JVABC subEntryCount: 0 home domain: null

thresholds: 0 0 0
signers:
    GBJR3JH4ZC5LZKGM2PPSVMJFPPWREVEKD4TQJL2RMUHI75P35N4JVABC 1
authRequired: false
authRevocable: false

Balances:
XLM b:939.9999800 l:- - sl: - - bl: -

$ ./cliste -a bob balance
accountId: GAURP3FQ56BOF2PXF5DCLFS4QOJPTO5XM62G2AAMAQXB4PZWYNSS4RKA subEntryCount: 0 home domain: null

thresholds: 0 0 0
signers:
    GAURP3FQ56BOF2PXF5DCLFS4QOJPTO5XM62G2AAMAQXB4PZWYNSS4RKA 1
authRequired: false
authRevocable: false

Balances:
XLM b:60.0000000 l:- - sl: - - bl: -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This returns a bit of metadata and a list of balances that currently only includes XLM.&lt;/p&gt;

&lt;p&gt;So, bob now has exactly 60 XLM. Alice has paid 50 and 10 XLM as well as the fees for both transactions: 2x 100 stroops. A stroop is 1/10,000,000th of an XLM. This means you can fund 100K transactions with a mere 1 XLM, which at the time of writing is roughly 20 euro cents.&lt;/p&gt;
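&lt;p&gt;The fee arithmetic, spelled out (using Stellar's ratio of 10,000,000 stroops per XLM and the 100 stroop base fee visible in the logs above):&lt;/p&gt;

```python
# 1 XLM = 10,000,000 stroops; the transaction logs above show a base
# fee of 100 stroops per transaction.
STROOPS_PER_XLM = 10_000_000
BASE_FEE_STROOPS = 100

transactions_per_xlm = STROOPS_PER_XLM // BASE_FEE_STROOPS
print(transactions_per_xlm)  # 100000

# alice paid for two transactions (the 50 XLM and the 10 XLM payments):
total_fee_xlm = 2 * BASE_FEE_STROOPS / STROOPS_PER_XLM
print(total_fee_xlm)  # 2e-05
```

&lt;p&gt;That 0.00002 XLM in fees is exactly why alice's balance shows 939.9999800 rather than a round 940.&lt;/p&gt;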

&lt;h1&gt;
  
  
  Issuing your own token
&lt;/h1&gt;

&lt;p&gt;To issue a new token, we need an issuing account. Let's create that, as well as a distribution account that we will use for distributing our FOO token. Alice gets to pick up the bill for funding these accounts as well.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./cliste -a alice createAccount issuing 100
$ ./cliste -a alice createAccount distribution 100
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Token management
&lt;/h1&gt;

&lt;p&gt;Now let's define our token:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./cliste defineAsset issuing FOO

$ ./cliste listAssets
Defined assets (1):
FOO     GBSR46WQNCDK7SPUW3XR3C663AE3QDG3MGG5S66GI6XQINQYGAZH7CF5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This does nothing more than add an alias to a file called assets.properties. In Stellar, an asset is always identified by its code plus the account that issued it.&lt;/p&gt;
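&lt;p&gt;Presumably the alias simply maps the asset code to the issuing account's public key. The exact layout of assets.properties is an assumption on my part, but it would look something like:&lt;/p&gt;

```properties
# assets.properties -- hypothetical layout: asset code = issuer account id
FOO=GBSR46WQNCDK7SPUW3XR3C663AE3QDG3MGG5S66GI6XQINQYGAZH7CF5
```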

&lt;p&gt;We will issue FOO from our issuing account to our distribution account. &lt;/p&gt;

&lt;h1&gt;
  
  
  Trust lines
&lt;/h1&gt;

&lt;p&gt;For this to work, there needs to be a trust line. In Stellar, you can only hold tokens that you trust. So for our distribution account to be able to hold FOO, it needs to trust FOO:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./cliste -a distribution trust FOO
17:58:07.565 [main] INFO org.stellar.sdk.KotlinExtensions - 273 f5ad81be61734ebfaef89233e5203a9f24cad1538312b17607b047da9de15a62 success:true fee:100 CHANGE_TRUST
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we now check the balance for the distribution account we'll see a new entry under balances:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./cliste -a distribution balance
accountId: GADRJ4IDLVMOFE5DNQF7UZGWAUABKO7MYM2FAKGEGVPZA2W4RFQFF6ZQ subEntryCount: 1 home domain: null

thresholds: 0 0 0
signers:
    GADRJ4IDLVMOFE5DNQF7UZGWAUABKO7MYM2FAKGEGVPZA2W4RFQFF6ZQ 1
authRequired: false
authRevocable: false

Balances:
FOO (GBSR46WQNCDK7SPUW3XR3C663AE3QDG3MGG5S66GI6XQINQYGAZH7CF5) b:0.0000000 l:922337203685.4775807 - sl: - - bl: -
XLM b:99.9999900 l:- - sl: - - bl: -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, our distribution account 'trusts' FOO to the extent of Long.MAX_VALUE stroops, or 922337203685.4775807 FOO. This is the maximum value that fits in Stellar's 64-bit balance field. If you want, you can limit your trust to something smaller.&lt;/p&gt;
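&lt;p&gt;That number is just Long.MAX_VALUE read as a stroop count, which a few lines of Kotlin can confirm (the function name is mine):&lt;/p&gt;

```kotlin
// The largest trust limit (and balance) is Long.MAX_VALUE stroops;
// dividing by 10^7 stroops per unit expresses it in whole FOO.
fun maxTrustLimit(): String {
    val stroopsPerUnit = 10_000_000L
    val whole = Long.MAX_VALUE / stroopsPerUnit
    val frac = Long.MAX_VALUE % stroopsPerUnit
    return "$whole." + frac.toString().padStart(7, '0')
}

fun main() {
    println(maxTrustLimit()) // 922337203685.4775807
}
```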

&lt;h2&gt;
  
  
  Magic happens ...
&lt;/h2&gt;

&lt;p&gt;Now let's do a magic trick and make the owner of our distribution account a billionaire (in FOO):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./cliste -a issuing pay distribution 10000000000 FOO
18:01:57.355 [main] INFO org.stellar.sdk.KotlinExtensions - 319 11d1f4533656ee7b7e009f6db8ad9c61159bd4012790ffc46eff7ed03000a85d success:true fee:100 PAYMENT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Simply paying FOO from the issuing account causes the FOO coins to come into existence. Checking the distribution account's balance confirms the tokens were created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./cliste -a distribution balance
accountId: GADRJ4IDLVMOFE5DNQF7UZGWAUABKO7MYM2FAKGEGVPZA2W4RFQFF6ZQ subEntryCount: 1 home domain: null

thresholds: 0 0 0
signers:
    GADRJ4IDLVMOFE5DNQF7UZGWAUABKO7MYM2FAKGEGVPZA2W4RFQFF6ZQ 1
authRequired: false
authRevocable: false

Balances:
FOO (GBSR46WQNCDK7SPUW3XR3C663AE3QDG3MGG5S66GI6XQINQYGAZH7CF5) b:10000000000.0000000 l:922337203685.4775807 - sl: - - bl: -
XLM b:99.9999900 l:- - sl: - - bl: -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Multi signatures
&lt;/h1&gt;

&lt;p&gt;Of course, in practice you might want to lock things down a bit. For this, we can modify the account options. A good practice is to protect important accounts with multiple signatures.&lt;/p&gt;

&lt;h2&gt;
  
  
  Adding signers to the issuing account
&lt;/h2&gt;

&lt;p&gt;Let's add Alice and Bob as signers to the issuing account.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./cliste -a issuing setOptions --signer-key alice --signer-weight 5
18:07:07.987 [main] INFO org.stellar.sdk.KotlinExtensions - 381 78d4572ef51900f088babf87a4e878771391950a24ca6f710a1deacde0967ad7 success:true fee:100 SET_OPTIONS

$ ./cliste -a issuing setOptions --signer-key bob --signer-weight 5
18:07:17.933 [main] INFO org.stellar.sdk.KotlinExtensions - 383 5c7b0ee0747c57fbe456781888b322ef7d93270af9db8daf010c263ad64b5c38 success:true fee:100 SET_OPTIONS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This adds Alice and Bob as signers; their keys each have a weight of 5. If you look at the balance output above, you'll see the thresholds default to 0. So either Alice or Bob alone has enough weight to do everything.&lt;/p&gt;

&lt;h2&gt;
  
  
  Managing thresholds and weights
&lt;/h2&gt;

&lt;p&gt;Let's lock down the issuing account:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./cliste -a issuing setOptions --low-threshold 8 --medium-threshold 8 --high-threshold 8 --master-key-weight 0
18:16:02.379 [main] INFO org.stellar.sdk.KotlinExtensions - 488 f1a347698c2b8ee6d9c891d6da7ae1c86fc367785b4f0dc1e71c54d78a7726e9 success:true fee:100 SET_OPTIONS

$ ./cliste -a issuing balance
accountId: GBSR46WQNCDK7SPUW3XR3C663AE3QDG3MGG5S66GI6XQINQYGAZH7CF5 subEntryCount: 2 home domain: null

thresholds: 8 8 8
signers:
    GBJR3JH4ZC5LZKGM2PPSVMJFPPWREVEKD4TQJL2RMUHI75P35N4JVABC 5
    GAURP3FQ56BOF2PXF5DCLFS4QOJPTO5XM62G2AAMAQXB4PZWYNSS4RKA 5
    GBSR46WQNCDK7SPUW3XR3C663AE3QDG3MGG5S66GI6XQINQYGAZH7CF5 0
authRequired: false
authRevocable: false

Balances:
XLM b:99.9999500 l:- - sl: - - bl: -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, this confirms our issuing account now has three signers. We set the master key's weight to 0, so it can no longer be used on its own to issue FOO:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./cliste -a issuing pay distribution 10000000000 FOO
java.lang.IllegalStateException: failure after 0 transaction failed tx_bad_auth - null
    at org.stellar.sdk.KotlinExtensionsKt.doTransactionInternal(KotlinExtensions.kt:180)
    at org.stellar.sdk.KotlinExtensionsKt.doTransaction(KotlinExtensions.kt:142)
    at io.inbot.kotlinstellar.KotlinStellarWrapper.pay(KotlinStellarWrapper.kt:322)
    at io.inbot.kotlinstellar.KotlinStellarWrapper.pay$default(KotlinStellarWrapper.kt:314)
    at io.inbot.kotlinstellar.cli.CommandsKt$doPay$1.invoke(Commands.kt:194)
    at io.inbot.kotlinstellar.cli.CommandsKt$doPay$1.invoke(Commands.kt)
    at io.inbot.kotlinstellar.cli.CommandContext.run(CommandContext.kt:54)
    at io.inbot.kotlinstellar.cli.CliSteMainKt.main(CliSteMain.kt:88)
com.xenomachina.argparser.SystemExitException: Problem running 'pay'. failure after 0 transaction failed tx_bad_auth - null
    at io.inbot.kotlinstellar.cli.CommandContext.run(CommandContext.kt:62)
    at io.inbot.kotlinstellar.cli.CliSteMainKt.main(CliSteMain.kt:88)
cliste: Problem running 'pay'. failure after 0 transaction failed tx_bad_auth - null
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Signing transactions
&lt;/h2&gt;

&lt;p&gt;To make this work, both Alice and Bob need to sign the transaction. So let's create an unsigned transaction:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./cliste -a issuing preparePaymentTX distribution 10000000000 FOO
Transaction envelope xdr:
tx hash: zf+uG/7ePiTLuoqLbqgMMyQDq+PlxEsJkVEKq/jEixs=
tx envelope xdr: AAAAAGUeetBohq/J9LbvHYve2Am4DNthjdl7xkevBDYYMDJ/AAAAZAAAAMYAAAAGAAAAAAAAAAAAAAABAAAAAAAAAAEAAAAABxTxA11Y4pOjbAv6ZNYFABU77MM0UCjENV+QatyJYFIAAAABRk9PAAAAAABlHnrQaIavyfS27x2L3tgJuAzbYY3Ze8ZHrwQ2GDAyfwFjRXhdigAAAAAAAAAAAAA=
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a transaction, but instead of submitting it to Stellar, it outputs the serialized binary representation in a format called XDR. This is actually what Stellar stores internally. &lt;/p&gt;

&lt;h2&gt;
  
  
  Adding signatures
&lt;/h2&gt;

&lt;p&gt;Stellar requires two signatures for this transaction because of the weights and thresholds we configured in the previous step. So both Alice and Bob must add their signatures before we can submit it.&lt;/p&gt;
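&lt;p&gt;The rule Stellar applies here boils down to comparing the summed weights of the signing keys against a threshold. A simplified Kotlin sketch (the real logic distinguishes low, medium, and high thresholds per operation type; the function name is mine):&lt;/p&gt;

```kotlin
// A transaction is authorized when the combined weight of the keys
// that signed it reaches the required threshold on the source account.
fun meetsThreshold(threshold: Int, vararg signerWeights: Int): Boolean =
    signerWeights.sum() >= threshold

fun main() {
    // Issuing account after lockdown: thresholds 8 8 8, Alice = 5, Bob = 5, master = 0.
    println(meetsThreshold(8, 5))    // false: Alice alone is not enough
    println(meetsThreshold(8, 0))    // false: the zeroed master key is useless
    println(meetsThreshold(8, 5, 5)) // true: Alice and Bob together
}
```

&lt;p&gt;This is also why the earlier pay attempt failed with tx_bad_auth: the master key's weight of 0 no longer meets the threshold of 8.&lt;/p&gt;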

&lt;p&gt;First, Alice signs the transaction:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./cliste -a alice signTx AAAAAGUeetBohq/J9LbvHYve2Am4DNthjdl7xkevBDYYMDJ/AAAAZAAAAMYAAAAGAAAAAAAAAAAAAAABAAAAAAAAAAEAAAAABxTxA11Y4pOjbAv6ZNYFABU77MM0UCjENV+QatyJYFIAAAABRk9PAAAAAABlHnrQaIavyfS27x2L3tgJuAzbYY3Ze8ZHrwQ2GDAyfwFjRXhdigAAAAAAAAAAAAA=
tx hash: zf+uG/7ePiTLuoqLbqgMMyQDq+PlxEsJkVEKq/jEixs=
tx envelope xdr: AAAAAGUeetBohq/J9LbvHYve2Am4DNthjdl7xkevBDYYMDJ/AAAAZAAAAMYAAAAGAAAAAAAAAAAAAAABAAAAAAAAAAEAAAAABxTxA11Y4pOjbAv6ZNYFABU77MM0UCjENV+QatyJYFIAAAABRk9PAAAAAABlHnrQaIavyfS27x2L3tgJuAzbYY3Ze8ZHrwQ2GDAyfwFjRXhdigAAAAAAAAAAAAH763iaAAAAQKZ+gYmDIqv9hUdZdC9+C4bUuX4RWmT8BnCI9wnb35IZ7IZIg5U8NIMvtodGEr4uv3NNB5/tbABEaNtDygihcws=
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that we get back a different XDR: it now includes Alice's signature. One signature is not enough, so Alice sends her signed XDR to Bob via email, Slack, etc., and Bob signs it as well:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./cliste -a bob signTx AAAAAGUeetBohq/J9LbvHYve2Am4DNthjdl7xkevBDYYMDJ/AAAAZAAAAMYAAAAGAAAAAAAAAAAAAAABAAAAAAAAAAEAAAAABxTxA11Y4pOjbAv6ZNYFABU77MM0UCjENV+QatyJYFIAAAABRk9PAAAAAABlHnrQaIavyfS27x2L3tgJuAzbYY3Ze8ZHrwQ2GDAyfwFjRXhdigAAAAAAAAAAAAH763iaAAAAQKZ+gYmDIqv9hUdZdC9+C4bUuX4RWmT8BnCI9wnb35IZ7IZIg5U8NIMvtodGEr4uv3NNB5/tbABEaNtDygihcws=
tx hash: zf+uG/7ePiTLuoqLbqgMMyQDq+PlxEsJkVEKq/jEixs=
tx envelope xdr: AAAAAGUeetBohq/J9LbvHYve2Am4DNthjdl7xkevBDYYMDJ/AAAAZAAAAMYAAAAGAAAAAAAAAAAAAAABAAAAAAAAAAEAAAAABxTxA11Y4pOjbAv6ZNYFABU77MM0UCjENV+QatyJYFIAAAABRk9PAAAAAABlHnrQaIavyfS27x2L3tgJuAzbYY3Ze8ZHrwQ2GDAyfwFjRXhdigAAAAAAAAAAAAL763iaAAAAQKZ+gYmDIqv9hUdZdC9+C4bUuX4RWmT8BnCI9wnb35IZ7IZIg5U8NIMvtodGEr4uv3NNB5/tbABEaNtDygihcws2w2UuAAAAQOboMFKz6sOnFPio17cuaOBLrHYN7k/DpFSGAaYVgYKg25YCMqZug2brTkh7LXaubChpFBYJHkF4vN/tUQNSCwM=
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Bob gets back an even bigger XDR, which now contains both signatures.&lt;/p&gt;

&lt;h2&gt;
  
  
  Examining the XDR
&lt;/h2&gt;

&lt;p&gt;Before submitting it, it might be a good idea to check what is inside:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./cliste txInfo AAAAAGUeetBohq/J9LbvHYve2Am4DNthjdl7xkevBDYYMDJ/AAAAZAAAAMYAAAAGAAAAAAAAAAAAAAABAAAAAAAAAAEAAAAABxTxA11Y4pOjbAv6ZNYFABU77MM0UCjENV+QatyJYFIAAAABRk9PAAAAAABlHnrQaIavyfS27x2L3tgJuAzbYY3Ze8ZHrwQ2GDAyfwFjRXhdigAAAAAAAAAAAAL763iaAAAAQKZ+gYmDIqv9hUdZdC9+C4bUuX4RWmT8BnCI9wnb35IZ7IZIg5U8NIMvtodGEr4uv3NNB5/tbABEaNtDygihcws2w2UuAAAAQOboMFKz6sOnFPio17cuaOBLrHYN7k/DpFSGAaYVgYKg25YCMqZug2brTkh7LXaubChpFBYJHkF4vN/tUQNSCwM=
850403524614 operations:
source account: GBSR46WQNCDK7SPUW3XR3C663AE3QDG3MGG5S66GI6XQINQYGAZH7CF5
10000000000.0000000 FOO to GADRJ4IDLVMOFE5DNQF7UZGWAUABKO7MYM2FAKGEGVPZA2W4RFQFF6ZQ
Signatures:
pn6BiYMiq/2FR1l0L34LhtS5fhFaZPwGcIj3CdvfkhnshkiDlTw0gy+2h0YSvi6/c00Hn+1sAERo20PKCKFzCw==
5ugwUrPqw6cU+KjXty5o4Eusdg3uT8OkVIYBphWBgqDblgIypm6DZutOSHstdq5sKGkUFgkeQXi83+1RA1ILAw==
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Submitting the signed transaction
&lt;/h2&gt;

&lt;p&gt;Now let's submit the transaction:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./cliste submitTx AAAAAGUeetBohq/J9LbvHYve2Am4DNthjdl7xkevBDYYMDJ/AAAAZAAAAMYAAAAGAAAAAAAAAAAAAAABAAAAAAAAAAEAAAAABxTxA11Y4pOjbAv6ZNYFABU77MM0UCjENV+QatyJYFIAAAABRk9PAAAAAABlHnrQaIavyfS27x2L3tgJuAzbYY3Ze8ZHrwQ2GDAyfwFjRXhdigAAAAAAAAAAAAL763iaAAAAQKZ+gYmDIqv9hUdZdC9+C4bUuX4RWmT8BnCI9wnb35IZ7IZIg5U8NIMvtodGEr4uv3NNB5/tbABEaNtDygihcws2w2UuAAAAQOboMFKz6sOnFPio17cuaOBLrHYN7k/DpFSGAaYVgYKg25YCMqZug2brTkh7LXaubChpFBYJHkF4vN/tUQNSCwM=
OK
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And check our distribution account again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./cliste -a distribution balance
accountId: GADRJ4IDLVMOFE5DNQF7UZGWAUABKO7MYM2FAKGEGVPZA2W4RFQFF6ZQ subEntryCount: 1 home domain: null

thresholds: 0 0 0
signers:
    GADRJ4IDLVMOFE5DNQF7UZGWAUABKO7MYM2FAKGEGVPZA2W4RFQFF6ZQ 1
authRequired: false
authRevocable: false

Balances:
FOO (GBSR46WQNCDK7SPUW3XR3C663AE3QDG3MGG5S66GI6XQINQYGAZH7CF5) b:20000000000.0000000 l:922337203685.4775807 - sl: - - bl: -
XLM b:99.9999900 l:- - sl: - - bl:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Alice and Bob can now start using FOO
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./cliste -a alice trust FOO
$ ./cliste -a bob trust FOO
$ ./cliste -a distribution pay alice 1000000 FOO
$ ./cliste -a alice pay bob 100000 FOO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Other topics
&lt;/h1&gt;

&lt;p&gt;There are many more things you can do with cliste and Stellar, but this should be enough for a not-so-gentle introduction. &lt;/p&gt;

&lt;p&gt;You might like using cliste, or playing with it to explore Stellar. You can do everything via the Stellar Laboratory UI as well, but as you will find, that involves a lot of clicking, and having cliste around definitely streamlines things. Also, the Stellar Laboratory does not work against standalone chains, so cliste is pretty awesome for that.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting cliste
&lt;/h2&gt;

&lt;p&gt;Follow the instructions on Github. It's currently part of a Kotlin library project. I might move cliste out to its own repository at some point. &lt;/p&gt;

&lt;h2&gt;
  
  
  Using cliste with public or test net
&lt;/h2&gt;

&lt;p&gt;All of the above commands run against a standalone network.&lt;/p&gt;

&lt;p&gt;You can run against the public network or the test network as well, with some command line options. Instead of setting these options manually, it is easier to use the included scripts. This is how the clistePublic script works. Note that it also uses different file names for the assets and accounts, so we can easily switch networks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat clistePublic
#! /bin/bash
export CLISTE_ARGS='--stellar-network public --horizon-url https://horizon.stellar.org/ --key-properties public-keys.properties --asset-properties=public-assets.properties'

./cliste $*
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Contributing to cliste
&lt;/h2&gt;

&lt;p&gt;I welcome pull requests, issues, feedback, etc.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note.&lt;/strong&gt; You can find a copy of this article in the Github repository as well, and I will probably keep adding more documentation there.&lt;/p&gt;

</description>
      <category>stellar</category>
    </item>
    <item>
      <title>Post Agile: embracing asynchronous processes</title>
      <dc:creator>Jilles van Gurp</dc:creator>
      <pubDate>Mon, 13 Aug 2018 14:56:58 +0000</pubDate>
      <link>https://dev.to/jillesvangurp/post-agile-embracing-asynchronous-processes-ifa</link>
      <guid>https://dev.to/jillesvangurp/post-agile-embracing-asynchronous-processes-ifa</guid>
      <description>&lt;p&gt;Like many people my age in this industry (software development, if that was not clear), I remember how things worked before the internet and before agile. Programming in these days meant using expensive tools that came with stacks of books that sat on your desk. Google and Stackoverflow did not exist. Having internet in the workplace was not very widespread. I learned to program using the Basic manual that came with my commodore 64. Agile was not a thing.&lt;/p&gt;

&lt;p&gt;Things have changed since then, and mostly for the better. Kent Beck's &lt;a href="http://www.extremeprogramming.org/"&gt;Extreme programming&lt;/a&gt; was a book and idea I was very enthusiastic about when it came out. Around that time I was doing a Ph.D. in the field of Software Engineering; I even referenced it in my 2003 Ph.D. thesis. The agile movement (Kent Beck was of course one of the &lt;a href="http://agilemanifesto.org/"&gt;Agile Manifesto&lt;/a&gt; signees) revolutionized development in many ways.&lt;/p&gt;

&lt;h1&gt;
  
  
  Bad Agile
&lt;/h1&gt;

&lt;p&gt;Everybody is claiming/pretending to be agile these days, so the word has become somewhat meaningless. Every bank, insurer, and shitty little software shop is doing Agile. Capital-A Agile, of course, because they are doing things "by the book". Anyone who knows anything about Agile would know this is exactly the wrong thing to do. I've been in the same room with some of the manifesto signees at various conferences, lectures, workshops, etc., and I've heard them spell this out. Agile is a set of tools and a way of working that you use and adapt to your needs. Use the books as a starting point, not the end state. If you know nothing, you might as well start there.&lt;/p&gt;

&lt;p&gt;I've grown somewhat tired of the "you're doing it wrong" camp that snuffs out any criticism of things like Scrum, which I would argue has become somewhat toxic in our industry. People either hate Scrum or love it, mostly for the wrong reasons either way. Broadly speaking, people dislike the widespread practice of its mindless application, the constant bickering in its pointless meetings, and the emotional drain and wasted energy over arguments on how to do it "right", whatever that means. This is something I've personally witnessed on pretty much every project I've been involved with in the last 15 years. Anecdotal evidence suggests I'm not alone. &lt;/p&gt;

&lt;p&gt;For better or worse, Scrum has nominally replaced waterfall as something that is easy enough to grasp for organizations to restyle themselves as Agile without actually changing that much. That doesn't mean that Scrum or Agile are bad, but Scrum in particular seems to have become the tool of choice for bad agile implementations. Scrum even has its own term for this: ScrumBut. &lt;/p&gt;

&lt;p&gt;There's a lot of bad agile going on in our industry. Most software developing companies are just as boring, stupid, and ineffective as 20 years ago. Government IT projects still go spectacularly wrong. Banks still sink tons of money in misguided projects. Companies like Lidl still allow themselves to get ripped off by companies like SAP (to the extent of a sweet &lt;a href="https://news.ycombinator.com/item?id=17541092"&gt;0.5 Billion&lt;/a&gt;). For the record, I blame both companies for this.&lt;/p&gt;

&lt;p&gt;Everybody prays to the church of Agile now, and there are hordes of self-styled Agilists/Agile coaches/etc. to help you figure out how to do it right. A lot of companies employ some of those people pre-emptively to ensure they do things "by the book", which of course defeats the purpose.&lt;/p&gt;

&lt;p&gt;Whatever your point of view on this, a trend in our industry is that things are changing again and people are looking beyond agile at different and relatively new ways of organizing software development. If only to distinguish themselves from all the people doing bad Agile. &lt;/p&gt;

&lt;p&gt;Agile is close to 20 years old now. You are not going to improve things without changing them and shaking things up a little. Some people have started referring to this as post agile. Whatever it is, Scrum is definitely not a part of it.&lt;/p&gt;

&lt;h1&gt;
  
  
  History and evolution of Agile
&lt;/h1&gt;

&lt;p&gt;When the Agile manifesto was signed, most people were in fact not doing Agile or anything remotely close to it. Agile was a new thing then and somewhat controversial. People were doing all sorts of things and were generally confusing and conflating processes and modeling techniques and requirements engineering methodology and tools. Universities mostly taught waterfall then. &lt;/p&gt;

&lt;p&gt;In the late nineties and early 2000s people attempted to standardize modeling languages. The result, UML, was widely popular for a while, and companies like Rational (later acquired by IBM) tried to make it the centerpiece of the development process. The result, the &lt;a href="https://en.wikipedia.org/wiki/Rational_Unified_Process"&gt;Rational Unified Process&lt;/a&gt;, was considered modern and hip at the time and, given the timing, was a bit of a counter-move against Agile. Rational and RUP ended up in the hands of IBM; arguably one of the least agile companies in existence at the time (and today). Blue suits/white shirts were very much still a thing at IBM then. Standards, certification, training, and related consulting businesses were booming. &lt;/p&gt;

&lt;p&gt;UML and RUP perpetuated the dogmas of waterfall: first do requirements specifications (using UML, of course), then detailed designs (also using UML, how convenient), then implementation work, and only then testing. Briefly, even implementation was supposed to be done using UML, with something called model-driven development. Thankfully, MDA and its associated range of (now) abandonware no longer come up a lot in serious discussions about software development these days.&lt;/p&gt;

&lt;p&gt;But RUP was also a stepping stone towards Agile. The Rational Unified Process tried to be iterative as well. This really meant you got to do waterfall multiple times in a project; once a quarter or so. Rational's premise was that this required tools, lots of tools. Very complex tools that required lots of consulting. This is why IBM bought them, and they made a lot of money getting organizations to implement RUP, training software architects, and selling expensive software licenses. Back in those days, any self-respecting software architect had some boxes and books with the Rational logo prominently in sight. They'd be wielding impressive-looking diagrams, and there would typically be a lot of architecture and design documentation with more diagrams.&lt;/p&gt;

&lt;p&gt;Agile was hugely disRUPtive, mostly in the sense that it popped that bubble. Sorry about the bad pun. It just melted away in the space of about 5 years. Between 2000 and 2005, UML slowly disappeared from our lives. I haven't used UML tools in ages and can't say I miss them.&lt;/p&gt;

&lt;p&gt;Iterative development is of course as old as waterfall. The original paper by Royce on waterfall &lt;a href="http://www-scf.usc.edu/~csci201/lectures/Lecture11/royce1970.pdf"&gt;Managing the development of large software systems&lt;/a&gt; from 1970 is actually still a pretty good read and was soon complemented by papers on spiral and iterative development.&lt;/p&gt;

&lt;p&gt;Feedback loops are a good thing; every engineer knows this. In fact, Royce brought up iterations in that paper! Literal quote from the paper: "Attempt to do the job twice - the first result provides an early simulation of the final product". Royce was trying to be Agile in 1970. Of course, his work got dumbed down to: let's first do requirements, set those in stone, turn them into design and implementation, and maybe do a bit of testing/bugfixing before we throw it over the wall and walk away. &lt;a href="https://pragtob.wordpress.com/2012/03/02/why-waterfall-was-a-big-misunderstanding-from-the-beginning-reading-the-original-paper/"&gt;Fun fact, the word waterfall does not appear in Royce's paper!&lt;/a&gt; The original paper on waterfall does not mention waterfalls, at all. Don't blame Royce for waterfall. Waterfall was never a thing, from day 1.&lt;/p&gt;

&lt;p&gt;What the agile manifesto and movement accomplished was that the waterfall bubble was burst and iterative development became the norm. Having a lot of design documentation in UML slows iteration down. If iterating is your goal, you can't afford the time to keep all of it up to date. People figured out that the added value of this typically incomplete and out-of-date documentation was questionable. &lt;/p&gt;

&lt;p&gt;The result was that UML retreated to whiteboards and from there became entirely optional; these days it is not a topic that comes up in software planning at all. The same happened with requirements documentation, which was a bit of a black art to begin with. With Agile, people figured out that it's much easier to specify small deltas against what you have right now than to specify the whole thing up front, which you would inevitably get wrong anyway. &lt;/p&gt;

&lt;p&gt;Getting rid of project bureaucracy like that allowed for shorter cycles and faster iterations and focused development around working prototypes. Extreme programming was about taking that to the (at the time) extreme of doing sprints that were as short as a few weeks. This was unheard of in an industry where projects could spend months or years without even producing working code.&lt;/p&gt;

&lt;h1&gt;
  
  
  Agile brought a revolution in tools
&lt;/h1&gt;

&lt;p&gt;Tools such as issue trackers empowered this. &lt;a href="https://www.bugzilla.org/"&gt;Bugzilla&lt;/a&gt; was the first one to get a lot of traction. This happened roughly around the time that Agile became a thing. Issue trackers enabled a new way of working with requirements: instead of writing specifications, you specify change requests in the form of tracked issues. Initially this was used for bugs, but it soon expanded to track essentially all sorts of changes. Similarly, wikis took the place of design and product documentation.&lt;/p&gt;

&lt;p&gt;Agile spawned the adoption and development of a lot of new tools, many of which had their origins in open source communities. These days any decent project has an issue tracker, some kind of decentralized version control system (usually Git), CI tooling, wikis, communication tools like IRC or Slack, etc. These tools are essential, and they continue to change how people work. Some of these tools are run in the cloud by companies like Atlassian, Gitlab, and Github. The recent acquisition of the latter by Microsoft shows how important these tools have become.&lt;/p&gt;

&lt;p&gt;The open source world was always distributed and could never rely on meetings. Consequently, these communities adopted tools and processes that supported their way of working. Early projects used things like mailing lists and news groups, and version control systems like CVS, RCS, and later Subversion and Git. Likewise, IRC predates Slack by several decades and continues to be the preferred way of communicating in some projects. The practice of pull requests on Github/Gitlab that is now common in enterprise projects emerged out of the practice of exchanging patches via mailing lists and news groups. Eventually Git was created to make this process easier, and Github created a UI for it.&lt;/p&gt;

&lt;p&gt;Many of these tools were not common (or even around) when the Agile manifesto was signed. However, Agile people embraced these tools and also spawned the Devops movement which integrated operations experience and responsibilities into teams. Devops brought even more tools to the table. Things like deployment automation, docker, kubernetes, PAAS, SAAS, chatops, etc.&lt;/p&gt;

&lt;p&gt;All these tools and their associated practices are slowly resulting in a post agile world. &lt;/p&gt;

&lt;h1&gt;
  
  
  Meetings are synchronization bottlenecks
&lt;/h1&gt;

&lt;p&gt;Agile replaced waterfall with structured processes to organize development in an iterative way. The above mentioned tools mostly came later. An unintended side effect of agile was lots of new meetings. Talk to any engineer about things like Scrum and you'll essentially be ticking off what meetings they have to do: sprint plannings, estimations, retrospectives, reviews, and of course standups. The schoolbook implementations of Scrum are essentially endless cycles of such meetings. Most engineers I know hate meetings and/or see them as a necessary evil at best.&lt;/p&gt;

&lt;p&gt;With post agile, people are discovering that the added value of meetings is questionable and that tools exist that facilitate getting rid of them. One of my favorite .com-era surviving companies, despair.com, still sells a great poster with the following slogan: &lt;a href="https://despair.com/products/meetings?variant=2457301507"&gt;Meetings. None of us is as dumb as all of us.&lt;/a&gt; I've been to more than a few Scrum-related meetings that reminded me of that poster.&lt;/p&gt;

&lt;p&gt;Meetings are inherently synchronous in both time and (usually) space. They require heads in a room at a specific moment. This is highly disruptive because people have to stop what they are doing, go to the meeting, discuss things together, and come to a decision. Video conferencing tools kind of suck for this, and remote attendees are typically at a huge disadvantage in such meetings.&lt;/p&gt;

&lt;h1&gt;
  
  
  Going asynchronous
&lt;/h1&gt;

&lt;p&gt;With post agile, people are keeping the tools they adopted from OSS teams and some of the practices of Agile. However, they are abandoning meetings and generally eliminating synchronization bottlenecks in their processes. This is as liberating as saying goodbye to convoluted, out-of-date UML diagrams, crappy requirements specifications, and the glacial pace of waterfall-style development. I have many friends and colleagues who are working in distributed teams that span the globe.&lt;/p&gt;

&lt;p&gt;There are a couple of practices that are key to post agile. The key enabler is continuous deployment or continuous releases. Simply put: if stuff is ready, you make it available right away in order to keep feedback loops as short as possible. The OSS community figured this out first. Mozilla nightly builds have been a thing for pretty much as long as their open source products have been around. Frequent releases are essential for gathering feedback via an issue tracker. You don't wait until after the next retrospective in two weeks or whatever arbitrary milestone you have to get that feedback. And you use tools and automated processes to make sure this happens in a controlled and predictable way.&lt;/p&gt;

&lt;p&gt;Continuous deployment requires a high degree of automation. It requires continuous integration: automatically assessing whether you are fit to ship right now. Every change triggers a CI build. CD eliminates release management as a role, and CI relieves product managers from having to manually test and approve releases.&lt;/p&gt;

&lt;p&gt;Continuous integration in turn requires having tests that can run automatically that cover enough of the product that people feel confident that things work as they intend. The goal of automated tests is to prevent having manual testing on the critical path of releasing any software.&lt;/p&gt;

&lt;p&gt;Another thing that continuous deployment requires is the ability to branch and merge changes. In order to keep the production branch stable and releasable, it is essential to keep work in progress on branches until it is good enough. &lt;/p&gt;

&lt;p&gt;Git is the key enabler for this and another tool that emerged out of the OSS world. Before Git, version control systems were huge organizational bottlenecks. Branching was a royal pain in the ass and sufficiently complicated that subsequent merging required lots of planning. Organizations were frequently bottlenecked on merges. This was mitigated with commit freezes and similar practices. &lt;a href="http://gph.is/1fkwrno"&gt;Linus Torvalds&lt;/a&gt; has been running the largest OSS project in the world (Linux) for nearly 25 years, and all this planning and coordination around branches and merges annoyed him enough that he invented Git. Git codifies and improves the asynchronous change management that the Linux development community has been practicing.&lt;/p&gt;

&lt;p&gt;Of course, Linux development still relies on mailing lists for using Git. Linux developers exchange Git patches (i.e. textual exports of commits, not just diffs) via email. However, a company called &lt;a href="https://github.com"&gt;Github&lt;/a&gt; emerged that made Git more user friendly for the masses and introduced us to a key tool for post agile: the Pull Request. It's essentially the same flow of exchanging and reviewing changes, but supported with a nice web UI.&lt;/p&gt;

&lt;p&gt;Pull requests are an asynchronous way to manage changes in software. Back when agile started out, version control was done using CVS (Subversion was still in beta) and branching was considered a dangerous ritual that was best avoided. Entire companies were getting by without either version control systems or branches.&lt;/p&gt;

&lt;p&gt;Pull requests, CI, and CD are powerful tools that can remove a lot of synchronization bottlenecks in software development. You initiate work on a topic via an issue tracker; after discussion in that tool, on mailing lists, or in slack/skype/irc/whatever, you create a branch and start work. When done, you create a pull request (PR). Relevant stakeholders provide feedback (after being assigned or @tagged). Meanwhile, CI confirms things are ready to merge. When the PR is approved, it is merged, which in turn triggers an automated deployment. Start to finish, the life of a software change can be completely asynchronous. People synchronize on tickets in issue trackers and on pull requests, and escalate via asynchronous communication tools. When stuff goes wrong, the relevant PR is identified, the git history and issue trackers are used to figure out what happened, and if needed a new PR is created to either revert or fix the issue. Tools facilitate decision making, planning, auditing, and automation, and touch every phase of the traditional waterfall life cycle.&lt;/p&gt;

&lt;h1&gt;Asynchronous enables distributed teams&lt;/h1&gt;

&lt;p&gt;Going asynchronous cuts down on meetings, eliminates Sprints as a necessity or a meaningful unit of work, and allows you to iterate in hours instead of weeks. Asynchronous also enables you to distribute the work geographically. Meetings are an obstacle here because they require people to be in the same place and timezone; neither is practical if you have people on all continents. By going asynchronous, people can coordinate work and synchronize via issues, pull requests, and slack while the decision making around releases, deployments, etc. is automated.&lt;/p&gt;

&lt;p&gt;This enables you to get rid of essentially all Agile related meetings. All of them. This is why I believe that going asynchronous and distributed effectively moves us to a post Agile world. Standups are not practical in a distributed team, so those go. Holding lengthy sprint plannings and estimation sessions over video calls is not practical either. Sprints themselves are no longer necessary. And so on.&lt;/p&gt;

&lt;h1&gt;Should you go post Agile?&lt;/h1&gt;

&lt;p&gt;Should everybody now jump on the bandwagon and start doing post agile? I'd argue that, no, you should only do things that fit your context. Just like you were supposed to do with Agile. What is helpful though is reflecting on where you are with your organization in terms of what you are doing, what you are trying to accomplish, what tools and processes you have, what your bottlenecks are, and how you are evolving your processes and tools over time. You probably already use a lot of the tools and practices mentioned above.&lt;/p&gt;

&lt;p&gt;I see distributed teams and companies as the key drivers of the post agile world, and asynchronous tools and practices as a key enabler for that. There are numerous economic advantages to going distributed that may make people interested in becoming more distributed and less bottlenecked on meetings. For example, hiring is easier when people don't have to move to your central office: you have access to a global pool of talent. Not having to maintain expensive offices in expensive places like San Francisco or London is also a huge plus. Neither is wasting double digit percentages of your R&amp;amp;D budget on meetings and the associated traveling: sticking everybody in meeting rooms for a whole day every two weeks of scrum related planning and ceremony is roughly 10% of your development budget. In places like San Francisco, where engineers are expensive, that is a significant chunk of investor cash.&lt;/p&gt;

&lt;p&gt;If you want to tap into these benefits, you need to think about how to integrate asynchronous work into your current processes. It's perfectly alright to continue to do Scrum and adopt post agile practices at the same time; many companies do. It's not about jumping ship but about reflecting on what you are doing, why, and how that is working. Personally, I've long preferred Kanban style processes because they are easier to align with doing things asynchronously. Whatever you do, please don't be dogmatic about it.&lt;/p&gt;

</description>
      <category>agile</category>
      <category>postagile</category>
      <category>scrum</category>
    </item>
    <item>
      <title>Streaming results from a JdbcTemplate in Kotlin</title>
      <dc:creator>Jilles van Gurp</dc:creator>
      <pubDate>Fri, 15 Jun 2018 13:01:30 +0000</pubDate>
      <link>https://dev.to/jillesvangurp/streaming-results-from-a-jdbctemplate-in-kotlin-474h</link>
      <guid>https://dev.to/jillesvangurp/streaming-results-from-a-jdbctemplate-in-kotlin-474h</guid>
      <description>&lt;p&gt;I've been transitioning from using Hibernate to using JdbcTemplate in a Kotlin based Spring Boot 2.x project recently. The why and how of that I wrote down in another &lt;a href="https://dev.to/jillesvangurp/ripping-out-hibernate-and-going-native-jdbc-1lf2"&gt;article&lt;/a&gt;. One of the repository methods that I needed to move over was something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight kotlin"&gt;&lt;code&gt;    &lt;span class="nd"&gt;@Query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"..."&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;streamUserIds&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;Stream&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;One of the nice things with Spring Boot is that it generates a useful implementation that does the right thing. You can return Streams, Optionals, Lists, etc. JdbcTemplate does not really have anything similar.&lt;/p&gt;

&lt;p&gt;IMHO that's a somewhat strange omission in the JdbcTemplate API, but it is easy to fix. After a bit of googling, I found a &lt;a href="https://blog.apnic.net/2015/08/05/using-the-java-8-stream-api-with-springs-jdbctemplate/"&gt;helpful page&lt;/a&gt; with a solution that nearly worked but not quite. It put me on the right track, though.&lt;/p&gt;

&lt;p&gt;So, I decided to share my implementation since I think it is a bit simpler and better:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight kotlin"&gt;&lt;code&gt;    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;T&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;queryStream&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sql&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;converter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;SqlRowSet&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nc"&gt;T&lt;/span&gt;&lt;span class="p"&gt;?,&lt;/span&gt; &lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Array&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Any&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;):&lt;/span&gt; &lt;span class="nc"&gt;Stream&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;T&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;rowSet&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;jdbcTemplate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;queryForRowSet&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sql&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;*&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;RowSetIter&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Iterator&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;T&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="py"&gt;current&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;T&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt;

            &lt;span class="k"&gt;override&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;hasNext&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;Boolean&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;current&lt;/span&gt; &lt;span class="p"&gt;!=&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;rowSet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;next&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                        &lt;span class="n"&gt;current&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;converter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;rowSet&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt;
                    &lt;span class="p"&gt;}&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
                &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;false&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;

            &lt;span class="k"&gt;override&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;next&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;T&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;hasNext&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;retVal&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;current&lt;/span&gt;
                    &lt;span class="n"&gt;current&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt;
                    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;retVal&lt;/span&gt;&lt;span class="o"&gt;!!&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="nc"&gt;NoSuchElementException&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;spliterator&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Spliterators&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;spliteratorUnknownSize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;RowSetIter&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="nc"&gt;Spliterator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;IMMUTABLE&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;StreamSupport&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;spliterator&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;false&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;All this does is wrap &lt;code&gt;SqlRowSet&lt;/code&gt; with a simple iterator. &lt;code&gt;SqlRowSet&lt;/code&gt; is a thin Spring wrapper around the JDBC &lt;code&gt;RowSet&lt;/code&gt; with sane (unchecked) exception handling, which makes the above a bit less tedious.&lt;/p&gt;
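&lt;p&gt;The last two lines of &lt;code&gt;queryStream&lt;/code&gt;, which turn an &lt;code&gt;Iterator&lt;/code&gt; into a &lt;code&gt;Stream&lt;/code&gt; via &lt;code&gt;Spliterators&lt;/code&gt;, are a generally useful trick on their own. As a self-contained sketch (the &lt;code&gt;toStream&lt;/code&gt; extension function is my naming, not part of the code above):&lt;/p&gt;

```kotlin
import java.util.Spliterator
import java.util.Spliterators
import java.util.stream.Collectors
import java.util.stream.Stream
import java.util.stream.StreamSupport

// Turn any Iterator into a lazy Stream, the same way queryStream
// does with its RowSetIter.
fun <T> Iterator<T>.toStream(): Stream<T> =
    StreamSupport.stream(
        Spliterators.spliteratorUnknownSize(this, Spliterator.IMMUTABLE),
        false // sequential, not parallel
    )

fun main() {
    val result = listOf(1, 2, 3).iterator().toStream()
        .map { it * 2 }
        .collect(Collectors.toList())
    println(result) // [2, 4, 6]
}
```

&lt;p&gt;Passing &lt;code&gt;false&lt;/code&gt; to &lt;code&gt;StreamSupport.stream&lt;/code&gt; keeps the stream sequential, which is what you want when the underlying iterator is backed by a single database cursor.&lt;/p&gt;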

&lt;p&gt;My implementation fixes a few issues that the code in the linked article has: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The exit condition was wrong and omitted the last row. &lt;/li&gt;
&lt;li&gt;I like to do the heavy lifting in the &lt;code&gt;hasNext()&lt;/code&gt; method instead of the &lt;code&gt;next()&lt;/code&gt; method and make &lt;code&gt;next&lt;/code&gt; rely on &lt;code&gt;hasNext()&lt;/code&gt;. This also removes the need to call next on the first row. I've implemented some iterators before and this seems a good pattern for iterators. &lt;/li&gt;
&lt;li&gt;It was attempting to stream the rowset itself. That doesn't really make sense: the rowset is a low-level object representing a db cursor, so all you would be doing is returning the same object over and over while calling &lt;code&gt;next()&lt;/code&gt; on it. Instead, I use a lambda to convert each row to a &lt;code&gt;T&lt;/code&gt;. So whether you are mapping some entity or just extracting strings, it will work. And you can always make &lt;code&gt;T&lt;/code&gt; &lt;code&gt;Unit&lt;/code&gt; if you really just want to iterate over the rows and not return anything.&lt;/li&gt;
&lt;/ul&gt;
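&lt;p&gt;The look-ahead pattern from the second bullet generalizes beyond JDBC: &lt;code&gt;hasNext()&lt;/code&gt; fetches and caches the next element, and &lt;code&gt;next()&lt;/code&gt; just hands out the cached value. As a minimal sketch (the &lt;code&gt;lineIterator&lt;/code&gt; function is hypothetical, not from this article), here is the same idea applied to reading lines from a &lt;code&gt;BufferedReader&lt;/code&gt;:&lt;/p&gt;

```kotlin
import java.io.BufferedReader
import java.io.StringReader

// Look-ahead iterator: hasNext() does the heavy lifting and caches
// the next element; next() relies on hasNext(), exactly like the
// RowSetIter above.
fun lineIterator(reader: BufferedReader): Iterator<String> = object : Iterator<String> {
    var current: String? = null

    override fun hasNext(): Boolean {
        if (current == null) {
            current = reader.readLine() // returns null at end of input
        }
        return current != null
    }

    override fun next(): String {
        if (!hasNext()) throw NoSuchElementException()
        val retVal = current!!
        current = null
        return retVal
    }
}

fun main() {
    val reader = BufferedReader(StringReader("a\nb\nc"))
    val lines = lineIterator(reader).asSequence().toList()
    println(lines) // [a, b, c]
}
```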

&lt;p&gt;Another gotcha is that you obviously can't close the connection before you stream, because the stream needs the database cursor to fetch results. The proper way to solve this is with a TransactionTemplate, so the connection stays open until you are done streaming results from the DB:&lt;br&gt;
&lt;/p&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="n"&gt;transactionTemplate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;dao&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;queryStream&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"select user_id from table"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;rs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt;
        &lt;span class="c1"&gt;// Map rows to a String&lt;/span&gt;
        &lt;span class="n"&gt;rs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getString&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"user_id"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}.&lt;/span&gt;&lt;span class="nf"&gt;forEach&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// do something with each user_id&lt;/span&gt;
        &lt;span class="nf"&gt;println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"User $it"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;    
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This will work whether you have 50 users or 50 million user ids. The code above should be easy to port back to Java if you need it to. I also shared this code as a &lt;a href="https://gist.github.com/codebje/58d1b12e7a2d0ed31b3a#gistcomment-2616705"&gt;comment&lt;/a&gt; to the Github gist that was linked from the article that inspired this.&lt;/p&gt;

</description>
      <category>jdbctemplate</category>
      <category>kotlin</category>
      <category>streams</category>
      <category>springboot</category>
    </item>
  </channel>
</rss>
