<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Florian Polster</title>
    <description>The latest articles on DEV Community by Florian Polster (@fpolster).</description>
    <link>https://dev.to/fpolster</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F135651%2F13093a85-da13-4948-811c-ba13e4d5d43d.jpg</url>
      <title>DEV Community: Florian Polster</title>
      <link>https://dev.to/fpolster</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/fpolster"/>
    <language>en</language>
    <item>
      <title>How to install only the CPU version of pytorch in pdm</title>
      <dc:creator>Florian Polster</dc:creator>
      <pubDate>Thu, 03 Jul 2025 17:13:32 +0000</pubDate>
      <link>https://dev.to/fpolster/how-to-install-only-the-cpu-version-of-pytorch-in-pdm-1me0</link>
      <guid>https://dev.to/fpolster/how-to-install-only-the-cpu-version-of-pytorch-in-pdm-1me0</guid>
      <description>&lt;p&gt;I've been chasing how to do this for days and Google couldn't help me. So here's me hoping I'm helping the next poor soul.&lt;/p&gt;

&lt;p&gt;In pyproject.toml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dependencies = [
    "torch&amp;gt;=2.7.1",
]

[[tool.pdm.source]]
type = "index"
url = "https://download.pytorch.org/whl/cpu/"
name = "torch"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
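&lt;p&gt;One caveat worth noting: as written, pdm may also consult the extra index for other packages. If you want only torch (and friends) to resolve from the CPU index, pdm's source table supports &lt;code&gt;include_packages&lt;/code&gt; - worth verifying against your pdm version (the package list below is just an illustration):&lt;/p&gt;

```toml
[[tool.pdm.source]]
name = "torch"
type = "index"
url = "https://download.pytorch.org/whl/cpu/"
# Hypothetical refinement: restrict this index to torch-related packages
# so everything else still resolves from PyPI.
include_packages = ["torch", "torchvision", "torchaudio"]
```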



</description>
      <category>pdm</category>
      <category>torch</category>
      <category>pytorch</category>
    </item>
    <item>
      <title>Long-running jobs in Temporal.io</title>
      <dc:creator>Florian Polster</dc:creator>
      <pubDate>Mon, 17 Mar 2025 17:00:00 +0000</pubDate>
      <link>https://dev.to/fpolster/long-running-jobs-in-temporalio-2e6a</link>
      <guid>https://dev.to/fpolster/long-running-jobs-in-temporalio-2e6a</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;TL;DR&lt;br&gt;&lt;br&gt;
A long-running job should be implemented as one long-running activity that&lt;br&gt;
uses heartbeats for resumption.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What I needed to do
&lt;/h2&gt;

&lt;p&gt;We had a data-sync backfill to run: copying data from one service's database into another. About 5.5M DB entities needed to be copied, so we expected the total running time to be on the order of hours.&lt;/p&gt;

&lt;p&gt;We wanted the job to keep some kind of progress record, i.e. a "cursor", so that if it failed at any point it could, on retry, continue where it left off.&lt;/p&gt;
&lt;h2&gt;
  
  
  First instinct: Parameterized activity
&lt;/h2&gt;

&lt;p&gt;After reading the Temporal docs, my first instinct was to implement one workflow&lt;br&gt;
with one parameterized activity which the workflow would call in a loop.&lt;/p&gt;

&lt;p&gt;I dismissed this idea after talking to my colleagues, because each activity invocation appends events to the workflow's event history. Temporal works under the assumption that there will be few events per workflow, not tens of thousands (there is also a hard cap on history size). We didn't want to risk exhausting Temporal's disk space.&lt;/p&gt;
&lt;h2&gt;
  
  
  The good solution: One long-running Activity with Heartbeats
&lt;/h2&gt;

&lt;p&gt;I'll just show you some code here.&lt;/p&gt;

&lt;p&gt;At the beginning of the Activity we use &lt;code&gt;GetHeartbeatDetails&lt;/code&gt; to get the cursor&lt;br&gt;
that was recorded by a previous, unsuccessful activity execution. After each&lt;br&gt;
batch we record the cursor via &lt;code&gt;RecordHeartbeat&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;You'll want to do some info logging throughout the activity execution. Errors don't need to be logged; just return them to the runtime and it will surface them.&lt;/p&gt;

&lt;p&gt;As you can see, the batch size is configurable. We ran it with a size of 100 and&lt;br&gt;
that worked well.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="n"&gt;workflows&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;"context"&lt;/span&gt;
    &lt;span class="s"&gt;"fmt"&lt;/span&gt;

    &lt;span class="s"&gt;"go.temporal.io/sdk/activity"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;BackfillingActivities&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;BatchSize&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;Heartbeat&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;Cursor&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;BackfillingActivities&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;CommentsBackfillingActivity&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Context&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;cursor&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;activity&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;HasHeartbeatDetails&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;heartbeat&lt;/span&gt; &lt;span class="n"&gt;Heartbeat&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;activity&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GetHeartbeatDetails&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;heartbeat&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Errorf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"error getting heartbeat: %w"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="n"&gt;cursor&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;heartbeat&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Cursor&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;cursor&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;processBatch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cursor&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;BatchSize&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;cursor&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;break&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;activity&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;RecordHeartbeat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Heartbeat&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;cursor&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;// processBatch processes one batch starting from cursor and returns the next cursor or an error.&lt;/span&gt;
&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="n"&gt;BackfillingActivities&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;processBatch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cursor&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;batchSize&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;logger&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;activity&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GetLogger&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Sprintf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Starting processing batch with cursor &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;%s&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cursor&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c"&gt;//-&amp;gt; batch := SELECT FROM WHERE primary_key &amp;gt; {cursor} ORDER BY primary_key ASC LIMIT {batchSize}&lt;/span&gt;

    &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Sprintf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Found %d items"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;batch&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

    &lt;span class="c"&gt;//-&amp;gt; Transform into target data structure&lt;/span&gt;

    &lt;span class="c"&gt;//-&amp;gt; Load into target DB&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nb"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;batch&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;batchSize&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;lastItem&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;batch&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;batch&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;lastItem&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>go</category>
      <category>temporal</category>
    </item>
    <item>
      <title>JSON in ClickHouse</title>
      <dc:creator>Florian Polster</dc:creator>
      <pubDate>Sat, 14 Jan 2023 10:12:38 +0000</pubDate>
      <link>https://dev.to/fpolster/json-in-clickhouse-2j0l</link>
      <guid>https://dev.to/fpolster/json-in-clickhouse-2j0l</guid>
      <description>&lt;p&gt;I just watched &lt;a href="https://www.youtube.com/watch?v=mWEPHX8rlSM" rel="noopener noreferrer"&gt;this talk&lt;/a&gt; about storing JSON in ClickHouse. It's quite insightful. My takeaways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;There are performance-trimmed JSON-value-extraction functions that avoid parsing the entire blob.&lt;/li&gt;
&lt;li&gt;JSON stored in String columns compresses pretty well.&lt;/li&gt;
&lt;li&gt;There are functions to convert simple JSON to maps, and when you store the same data from the talk's example as a Map instead of a JSON string, queries are 4-5x faster.&lt;/li&gt;
&lt;li&gt;You can insert data into a table with &lt;code&gt;ENGINE = Null&lt;/code&gt;. Such a table doesn't store any data, but it can be the source of a materialized view. &lt;/li&gt;
&lt;li&gt;It's pretty common for users to store the JSON in one column and then extract specific values into columns next to that. That will already boost performance if you're smart about it. You can further enhance performance by defining Data-Skipping Indexes on those columns.&lt;/li&gt;
&lt;li&gt;All the approaches explored in the talk are &lt;a href="https://clickhouse.com/docs/en/guides/developer/working-with-json/json-other-approaches" rel="noopener noreferrer"&gt;documented here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;There's a &lt;a href="https://clickhouse.com/docs/en/guides/developer/working-with-json/json-semi-structured" rel="noopener noreferrer"&gt;beta feature&lt;/a&gt; for the future of JSON data in ClickHouse with many under-the-hood optimizations: inserted data is schema-inferred, ClickHouse automatically creates columns for all fields (even nested ones), and it keeps creating new columns as JSON objects with new fields flow in.&lt;/li&gt;
&lt;/ul&gt;
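&lt;p&gt;To make the last few points concrete, here is a sketch of the Null-engine plus extracted-column pattern (table, column, and index names are made up; the engines and functions are standard ClickHouse, but check the docs for your version):&lt;/p&gt;

```sql
-- Ingest table: ENGINE = Null stores nothing, it only feeds the view
CREATE TABLE events_ingest (raw String) ENGINE = Null;

-- Target table: raw JSON plus an extracted column with a skipping index
CREATE TABLE events
(
    raw     String,
    user_id String,
    INDEX idx_user user_id TYPE bloom_filter GRANULARITY 4
)
ENGINE = MergeTree
ORDER BY user_id;

-- The materialized view extracts the field once, at insert time
CREATE MATERIALIZED VIEW events_mv TO events AS
SELECT
    raw,
    JSONExtractString(raw, 'user_id') AS user_id
FROM events_ingest;
```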

</description>
      <category>clickhouse</category>
    </item>
    <item>
      <title>How to host internal static websites protected by single-sign on (Oauth2, OpenID, or SAML) for free</title>
      <dc:creator>Florian Polster</dc:creator>
      <pubDate>Sat, 21 Nov 2020 14:28:04 +0000</pubDate>
      <link>https://dev.to/fpolster/how-to-host-internal-static-websites-protected-by-single-sign-on-oauth2-openid-or-saml-for-free-hhk</link>
      <guid>https://dev.to/fpolster/how-to-host-internal-static-websites-protected-by-single-sign-on-oauth2-openid-or-saml-for-free-hhk</guid>
      <description>&lt;p&gt;In this brief tutorial I will show you how to create a log-in protected static website using the Google Cloud Platform. We'll use App Engine to host the static (or dynamic if you want) website and we will add a log-in shield using Google's Identity-Aware Proxy. At Staffbase we use the G Suite so I'll set IAP up to only allow access to the website to people from our GSuite domain.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;The eventual website will cost nothing as long as you don't have huge amounts of traffic. Nevertheless, even to create just a free account, Google requires you to provide a credit card number. They do this to avoid fraud and misuse. So that's something to keep in mind - &lt;em&gt;you need a credit card&lt;/em&gt; to begin.&lt;/p&gt;

&lt;p&gt;You also need to install the &lt;a href="https://cloud.google.com/sdk/docs/install" rel="noopener noreferrer"&gt;Google Cloud SDK&lt;/a&gt; since you need the &lt;code&gt;gcloud&lt;/code&gt; CLI tool which comes with it.&lt;/p&gt;

&lt;p&gt;Lastly, you need a GCP project, so create one if you haven't already.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting up App Engine
&lt;/h3&gt;

&lt;p&gt;The only thing you need to turn your code into an app is adding an &lt;code&gt;app.yaml&lt;/code&gt; file. You can reuse my &lt;code&gt;app.yaml&lt;/code&gt; without having to make any modifications.&lt;/p&gt;

&lt;p&gt;
  My app.yaml
  &lt;br&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;runtime&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;python38&lt;/span&gt;

&lt;span class="na"&gt;handlers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# resolves example.com/blog/ to example.com/blog/index.html&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/(.+)/&lt;/span&gt;
    &lt;span class="na"&gt;static_files&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;static/\1/index.html&lt;/span&gt;
    &lt;span class="na"&gt;upload&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;static/(.+)/index.html&lt;/span&gt;

  &lt;span class="c1"&gt;# resolves example.com to example.com/index.html&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
    &lt;span class="na"&gt;static_files&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;static/index.html&lt;/span&gt;
    &lt;span class="na"&gt;upload&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;static/index.html&lt;/span&gt;

  &lt;span class="c1"&gt;# resolves example.com/blog to example.com/blog/index.html&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/([^\.]+)([^/])&lt;/span&gt;
    &lt;span class="na"&gt;static_files&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;static/\1\2/index.html&lt;/span&gt;
    &lt;span class="na"&gt;upload&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;static/(.+)&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/(.+)&lt;/span&gt;
    &lt;span class="na"&gt;static_files&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;static/\1&lt;/span&gt;
    &lt;span class="na"&gt;upload&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;static/(.+)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;Credits for the above YAML: &lt;a href="https://github.com/mattgartner/appengine-static-sites" rel="noopener noreferrer"&gt;https://github.com/mattgartner/appengine-static-sites&lt;/a&gt;. This configuration assumes that you have all your static files in a directory called &lt;code&gt;static&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Don't worry about the fact that Python is configured as the runtime. Specifying a runtime is required, and while all runtimes are capable of serving static files, there is no dedicated runtime for static sites.&lt;/p&gt;

&lt;p&gt;You don't need to understand the handler definitions. They might look complicated, but in essence all they do is make sure that URLs like &lt;code&gt;domain.com/url/path&lt;/code&gt; and &lt;code&gt;domain.com/url/path/&lt;/code&gt; get translated to &lt;code&gt;domain.com/url/path/index.html&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;/p&gt;



&lt;p&gt;There is also a &lt;a href="https://cloud.google.com/appengine/docs/standard/python/getting-started/hosting-a-static-website" rel="noopener noreferrer"&gt;guide&lt;/a&gt; from Google on this. Once you have an &lt;code&gt;app.yaml&lt;/code&gt; file you only need to run &lt;code&gt;gcloud app deploy --project=&amp;lt;project-id&amp;gt;&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuring Identity-Aware Proxy
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Besides Google Accounts, you can use a wide range of additional identity providers, such as OAuth, SAML, and OIDC.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Configuring IAP is very easy. I had no problems following the &lt;a href="https://cloud.google.com/iap/docs/app-engine-quickstart" rel="noopener noreferrer"&gt;official guide&lt;/a&gt;. There is this role called &lt;em&gt;IAP-secured Web App User&lt;/em&gt; that you have to assign to anyone you want to be able to see the pages. I assigned this role to the entire staffbase.com G Suite domain but theoretically you can also assign it to individual Google accounts. &lt;/p&gt;

</description>
      <category>beginners</category>
    </item>
    <item>
      <title>The Reasons Why People Do Mob and Pair Programming Even Though It's Less Effective Than Working Alone</title>
      <dc:creator>Florian Polster</dc:creator>
      <pubDate>Sun, 22 Mar 2020 13:50:24 +0000</pubDate>
      <link>https://dev.to/fpolster/the-reason-why-people-do-mob-and-pair-programming-ifn</link>
      <guid>https://dev.to/fpolster/the-reason-why-people-do-mob-and-pair-programming-ifn</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Explaining how mob programming works is not within the scope of this article. You can find &lt;a href="https://en.wikipedia.org/wiki/Mob_programming" rel="noopener noreferrer"&gt;that&lt;/a&gt; out &lt;a href="https://www.remotemobprogramming.org/" rel="noopener noreferrer"&gt;elsewhere&lt;/a&gt;. You might know pair programming. That's a special case of mob programming where &lt;code&gt;mob.size() == 2&lt;/code&gt; so to speak.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I read an &lt;a href="https://www.heise.de/hintergrund/Erfolgreich-im-Homeoffice-arbeiten-4681061.html?seite=all" rel="noopener noreferrer"&gt;interesting article&lt;/a&gt; (in German) yesterday and it made me realize something that I want to share with you. It's the justification for mob programming, which the article lays out really well, IMO. The benefits weren't so clear to me before.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;TL;DR&lt;br&gt;
Yes, in terms of throughput mob programming is less efficient than working independently and you should keep that in mind. However, mob programming enables faster time-to-market for single tasks/features. It's also perfect to spread knowledge among the team and that has valuable long-term benefits.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you've ever wondered 'why would anyone do mob programming at work, isn't it inefficient to have multiple people work on the same thing instead of everyone working in parallel?' - You're right. If every person is working on separate tasks you get more work done. That modus operandi does yield higher &lt;strong&gt;throughput&lt;/strong&gt; of work items.&lt;/p&gt;

&lt;p&gt;What mob programming optimizes for is &lt;strong&gt;time-to-market&lt;/strong&gt; -- getting one specific task done completely in the shortest amount of time possible. Here's how:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If one person were to work on the task alone they might come to a point where they need some information from a co-worker. Getting that question answered will introduce some latency. In mob programming that co-worker is right there and can respond immediately.&lt;/li&gt;
&lt;li&gt;Where one person working alone might have to look something up or google it, in a mob there might be someone who can help instantly.&lt;/li&gt;
&lt;li&gt;Mob-programmed code is already reviewed! If people work separately, they submit their code for review, which adds more latency, especially if the reviewer requests further changes. We do code reviews because individuals tend to suffer from tunnel vision, forget to write tests/documentation, or are too lazy to do quality refactorings. Basically, the problem is that one person can only look at the problem from one angle. With mob programming all these problems disappear, as multiple people provide multiple angles on the problem.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Another benefit is &lt;strong&gt;knowledge sharing&lt;/strong&gt;. While working on the same problem, knowledge about that particular domain is spread across the team. The first result is that team members become more &lt;strong&gt;independent&lt;/strong&gt; when working alone in the future, because fewer questions pop up. The second result is that everyone on the team can respond to requests from the &lt;strong&gt;stakeholders&lt;/strong&gt;. And if any of the members go on vacation, &lt;strong&gt;no hand-overs&lt;/strong&gt; are necessary.&lt;/p&gt;

&lt;p&gt;The last upside is that by spending time together the &lt;strong&gt;team grows together&lt;/strong&gt;, possibly with shared moments of accomplishment, which is probably the most effective bonding activity there is.&lt;/p&gt;

&lt;p&gt;Curiously, if you mix in mob programming from time to time the team becomes more effective in working independently.&lt;/p&gt;

&lt;p&gt;If you have programmed in a mob before please let me know how it went in the comments!&lt;/p&gt;

&lt;p&gt;The authors of the article I mentioned in the beginning have made a &lt;a href="https://www.remotemobprogramming.org/" rel="noopener noreferrer"&gt;website&lt;/a&gt; ... in case you want to check it out.&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>programming</category>
    </item>
    <item>
      <title>How to work with git - An overview of git workflows</title>
      <dc:creator>Florian Polster</dc:creator>
      <pubDate>Sun, 05 Jan 2020 19:43:34 +0000</pubDate>
      <link>https://dev.to/fpolster/how-to-work-with-git-an-overview-of-git-workflows-1icb</link>
      <guid>https://dev.to/fpolster/how-to-work-with-git-an-overview-of-git-workflows-1icb</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;If you're here looking for a recommendation on what workflow to use, I strongly support the &lt;a href="https://docs.microsoft.com/en-us/azure/devops/repos/git/git-branching-guidance?view=azure-devops" rel="noopener noreferrer"&gt;recommendation&lt;/a&gt; Microsoft hands out to their customers.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Git workflows differ from dev team to dev team and from git server to git server (Github, Gitlab, etc.). Git as a technology is so solid and flexible that people have found multiple viable workflows. Here is an extensive overview. It is assumed that you know git and &lt;a href="https://www.atlassian.com/git/tutorials/comparing-workflows/feature-branch-workflow" rel="noopener noreferrer"&gt;feature branches&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I plan on updating this post based on your feedback. So please, if you think something's wrong or missing, or if a link becomes outdated, leave a comment or contact me!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Another guide about the same topic: &lt;a href="https://www.codingblocks.net/podcast/comparing-git-workflows/" rel="noopener noreferrer"&gt;https://www.codingblocks.net/podcast/comparing-git-workflows/&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Forking vs Centralized Workflow
&lt;/h2&gt;

&lt;h5&gt;
  
  
  The centralized workflow - i.e. all developers on a team have read and write access to a shared repo.
&lt;/h5&gt;

&lt;p&gt;This is especially common for professional teams. Team members fetch from the central repo, work in topic branches and push their branches to the central repo. Code is reviewed by other devs in Pull Requests. Accepted PRs merge the branch into &lt;code&gt;master&lt;/code&gt;.&lt;/p&gt;

&lt;h5&gt;
  
  
  The forking workflow - i.e. only core developers can change the repo.
&lt;/h5&gt;

&lt;p&gt;This is the most common form of collaboration for open source projects. The goal of this workflow is that the publicly visible repository sustains a high quality. Only trusted people (i.e. the maintainers of a project) can modify it. Non-maintainers contribute to such projects through a process called &lt;em&gt;forking&lt;/em&gt;, which clones the entire git repo to your personal user account.&lt;/p&gt;

&lt;p&gt;Say I want to make a contribution to an open source project like the programming language Scala. I would go to the official repo github.com/scala/scala and click the &lt;em&gt;Fork&lt;/em&gt; button. That would create a copy of that repo under github.com/pofl/scala. I would have full control over that copy and could push to it however I want.&lt;/p&gt;

&lt;p&gt;When I want to submit a contribution to the official repo, I will first push my changes to my fork. Then I can create a Pull Request in the upstream project. Such a PR is essentially a request to merge a branch in my repo into the upstream repo. It's called Pull Request instead of Merge Request (which it is called in GitLab btw) simply because 'pull' is the gitnically correct term for fetching a branch from a remote and merging it into HEAD.&lt;/p&gt;

&lt;p&gt;The day-to-day workflow is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fork the repo.&lt;/li&gt;
&lt;li&gt;Clone your fork to your computer (&lt;code&gt;origin&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Add the official repo as a remote (&lt;code&gt;upstream&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Fetch &lt;code&gt;upstream&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Create a new branch from &lt;code&gt;upstream/master&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;When you're done, push it to &lt;code&gt;origin/topic-branch&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Then you can create a PR in the upstream project where you can request that your topic-branch be merged as if it was a branch in the upstream repo.&lt;/li&gt;
&lt;li&gt;Resolve all requested changes that the maintainers pose.&lt;/li&gt;
&lt;li&gt;The maintainers accept the PR and the branch is pulled from your fork into &lt;code&gt;upstream&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
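&lt;p&gt;The steps above can be sketched as shell commands. Since this is only an illustration, the example simulates both the official repo and the fork with local bare repositories; all paths, names, and the commit itself are made up:&lt;/p&gt;

```shell
# Simulated fork workflow: "upstream.git" stands in for the official
# repo and "fork.git" for your copy on GitHub. Both are local bare
# repos here so the sketch runs anywhere.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare upstream.git          # the official repo
git init -q --bare fork.git              # your fork ("Fork" button)

git clone -q fork.git work && cd work    # your fork becomes "origin"
git config user.email you@example.com && git config user.name you

git remote add upstream ../upstream.git  # add the official repo as "upstream"
git fetch -q upstream

git checkout -qb topic-branch            # work on a topic branch
echo "fix" > fix.txt && git add fix.txt && git commit -qm "Fix a thing"
git push -q origin topic-branch          # publish it to your fork
# From here you would open a PR asking upstream to merge topic-branch.
```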

&lt;p&gt;&lt;a href="https://guides.github.com/activities/forking/" rel="noopener noreferrer"&gt;GitHub's guide on forking&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Git history hygiene
&lt;/h2&gt;

&lt;p&gt;Developers strive for clean code. Clean and consistent code is easier and faster to work with. To accomplish that, teams author a code style guide and enforce it. Adding Git as a tool to your development setup brings with it a whole new dimension of things that need to be kept clean. This section explains these things.&lt;/p&gt;

&lt;p&gt;If you follow the developer community on the internet, you'll come across articles and blog posts talking about the Git history. Many people have strong opinions about what a clean history is. Curiously, when people say Git history, what they actually mean most of the time is the shape of the commit graph of &lt;code&gt;master&lt;/code&gt; and other permanent branches. We'll cover that in this section. We'll also cover under what conditions a commit is 'clean'.&lt;/p&gt;

&lt;h3&gt;
  
  
  Linearity of Git history
&lt;/h3&gt;

&lt;p&gt;When you merge a branch into another, a merge commit is usually created, but not always. Git defaults to not creating a merge commit when one isn't necessary (this is called &lt;a href="http://marklodato.github.io/visual-git-guide/index-en.html#merge" rel="noopener noreferrer"&gt;fast-forward merging&lt;/a&gt;). A merge is what happens any time a code contribution gets accepted into &lt;code&gt;master&lt;/code&gt;, and that's the point where some people go religious. Luckily, most people just accept the default behavior and live with it.&lt;/p&gt;

&lt;h5&gt;
  
  
  Linear history
&lt;/h5&gt;

&lt;p&gt;One philosophical tribe believes the commit graph should be linear. They want to be able to look at the Git history and immediately see what has been happening and who has been doing what. With a linear history, all you see is a noise-free &lt;code&gt;master&lt;/code&gt; and the feature branches that are in progress. A linear history is achieved by rebasing the feature branch onto master before merging/pulling and by enforcing fast-forward merges, which eliminates merge commits.&lt;/p&gt;
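&lt;p&gt;A minimal, self-contained sketch of that recipe (the repo, branch names, and commits are just examples):&lt;/p&gt;

```shell
# Keep history linear: rebase the topic branch onto master, then
# fast-forward merge so no merge commit is created.
set -e
cd "$(mktemp -d)" && git init -q repo && cd repo
git config user.email you@example.com && git config user.name you

echo a > a.txt && git add a.txt && git commit -qm "initial"
git branch -m master                 # normalize the branch name for the demo

git checkout -qb feature
echo f > f.txt && git add f.txt && git commit -qm "feature work"

git checkout -q master
echo b >> a.txt && git commit -qam "master moved on"

git checkout -q feature
git rebase -q master                 # replay the feature commits on top of master

git checkout -q master
git merge --ff-only feature          # fast-forward only: refuses to create a merge commit
git log --oneline                    # history is a straight line
```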

&lt;p&gt;The downside of this approach is that rebasing adds overhead, as it involves more steps than merging. It also becomes limiting when many people work on the same repo: there will be situations where devs have to re-rebase because master changed before the PR got reviewed. The advantage is that you really do get a cleaner look at your history.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://stackoverflow.com/questions/15316601/in-what-cases-could-git-pull-be-harmful" rel="noopener noreferrer"&gt;Here&lt;/a&gt; is a link that discusses and explains this philosphy further. &lt;a href="https://gitlab.gnome.org/GNOME/mutter/-/network/master" rel="noopener noreferrer"&gt;Here&lt;/a&gt; you can see the Git history of a project following the this philosophy.&lt;/p&gt;

&lt;h5&gt;
  
  
  Maximum historic information
&lt;/h5&gt;

&lt;p&gt;The other extreme is one where people disable fast-forward merges and enforce the creation of merge commits. The philosophy behind this is that the history is a source of information, and the fact that code was created on feature branches would be erased by a fast-forward merge. "A linear history is a lie" is their motto.&lt;/p&gt;
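&lt;p&gt;Locally, that convention boils down to one flag (demonstrated on a throwaway repo; all names are examples):&lt;/p&gt;

```shell
# Force a merge commit even when a fast-forward would be possible.
set -e
cd "$(mktemp -d)" && git init -q repo && cd repo
git config user.email you@example.com && git config user.name you

echo a > a.txt && git add a.txt && git commit -qm "initial"
git checkout -qb feature
echo f > f.txt && git add f.txt && git commit -qm "feature work"

git checkout -q -                            # back to the default branch
git merge --no-ff -m "Merge branch 'feature'" feature
git log --oneline --graph                    # the branch structure is preserved
```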

&lt;p&gt;&lt;a href="https://www.atlassian.com/blog/git/git-team-workflows-merge-or-rebase" rel="noopener noreferrer"&gt;Here&lt;/a&gt; is an article comparing the two approaches with favoring the latter. &lt;a href="https://gitlab.com/gitlab-com/www-gitlab-com/-/network/master" rel="noopener noreferrer"&gt;Here&lt;/a&gt;'s a project following the second approach. &lt;a href="https://www.youtube.com/watch?v=3XjeYfH2BBI&amp;amp;t=426" rel="noopener noreferrer"&gt;Here&lt;/a&gt; is (a section of a) talk proclaiming the second philosophy.&lt;/p&gt;

&lt;h5&gt;
  
  
  Squash merge
&lt;/h5&gt;

&lt;p&gt;This is another approach to keeping the history linear. &lt;a href="https://docs.microsoft.com/en-us/azure/devops/repos/git/merging-with-squash" rel="noopener noreferrer"&gt;This article&lt;/a&gt; explains everything.&lt;/p&gt;

&lt;p&gt;TL;DR a squash merge is a merge where the entire merge source branch is condensed to a single commit which is created on top of the merge target branch without creating a merge commit. Some teams like to do this because it keeps the Git history slim.&lt;/p&gt;
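&lt;p&gt;In plain Git that looks like this (toy repo for illustration; hosting platforms offer the same thing as a button on the PR):&lt;/p&gt;

```shell
# Squash merge: condense the whole feature branch into one commit on
# top of the target branch, with no merge commit.
set -e
cd "$(mktemp -d)" && git init -q repo && cd repo
git config user.email you@example.com && git config user.name you

echo a > a.txt && git add a.txt && git commit -qm "initial"
git checkout -qb feature
echo 1 > f.txt && git add f.txt && git commit -qm "WIP part 1"
echo 2 >> f.txt && git commit -qam "WIP part 2"

git checkout -q -
git merge --squash feature                   # stages the combined changes, no commit yet
git commit -qm "Add feature (squashed)"
git log --oneline                            # two commits, no merge commit
```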

&lt;h3&gt;
  
  
  Permissions
&lt;/h3&gt;

&lt;p&gt;Many Git servers allow restricting permissions to some extent. Here are a few use cases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Suppose you have a branch called &lt;code&gt;live&lt;/code&gt; or &lt;code&gt;prod&lt;/code&gt; which, on every new commit, gets automatically deployed into the live production environment used by your users. You probably don't want everyone to be able to make changes on such a branch.&lt;/li&gt;
&lt;li&gt;Say you want to maintain very high code quality in master; it should not be possible for people to bypass the code quality assurance measures.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In both examples, what you want to achieve is essentially to restrict write permissions on certain branches. Here are a few ways some Git hosting solutions allow you to do this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pretty much all of them allow you to protect branches against changes like pushing or merging, and to grant this permission to a select set of users.&lt;/li&gt;
&lt;li&gt;Pretty much all of them allow you to require that some CI/CD pipeline succeeds before PRs can be accepted.&lt;/li&gt;
&lt;li&gt;Some of them let you configure branches so they can only be modified through accepted PRs. This effectively makes code review mandatory and is therefore very cool. It is also sometimes possible to require the approval of PRs by specific persons.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here are some links that go into more detail: &lt;a href="https://about.gitlab.com/blog/2014/11/26/keeping-your-code-protected/" rel="noopener noreferrer"&gt;GitLab on the benefits of branch protection&lt;/a&gt;, &lt;a href="https://docs.gitlab.com/12.3/ee/user/project/protected_branches.html#using-the-allowed-to-merge-and-allowed-to-push-settings" rel="noopener noreferrer"&gt;GitLab documentation of allowing branches to be modified only through PRs&lt;/a&gt; (Merge Requests in the GitLab language) and &lt;a href="https://help.github.com/en/enterprise/2.18/user/articles/about-protected-branches" rel="noopener noreferrer"&gt;GitHub on their branch protection features&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Clean commits
&lt;/h3&gt;

&lt;p&gt;First, a link: &lt;a href="https://opensource.com/article/18/6/anatomy-perfect-pull-request" rel="noopener noreferrer"&gt;Anatomy of a perfect Pull Request&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Many teams have rules that determine under which conditions the commit history is 'clean'. First, many teams have rules for what the commit message should look like. One very popular set of rules is &lt;a href="https://chris.beams.io/posts/git-commit/" rel="noopener noreferrer"&gt;this&lt;/a&gt;. Here is another &lt;a href="https://wiki.openstack.org/wiki/GitCommitMessages#Summary_of_Git_commit_message_structure" rel="noopener noreferrer"&gt;write-up&lt;/a&gt; of such rules.&lt;/p&gt;

&lt;p&gt;Besides those, I've heard of projects which mandate that the commit title start with the name of the module or file in which the changes were made (in such cases there is usually a less restrictive rule on title length). Some teams also mandate that there must always be a body.&lt;/p&gt;

&lt;p&gt;One very popular rule: for every commit, compilation has to succeed and the code needs to be free of errors (at minimum, tests have to pass). The reason is that if a regression emerges at some point, you can easily find the commit that caused it with &lt;code&gt;git bisect&lt;/code&gt; (&lt;a href="https://americanexpress.io/git-bisect/" rel="noopener noreferrer"&gt;Link1&lt;/a&gt;, &lt;a href="https://stackoverflow.com/questions/4713088/how-to-use-git-bisect" rel="noopener noreferrer"&gt;Link2&lt;/a&gt;).&lt;/p&gt;
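&lt;p&gt;A tiny self-contained bisect session (the repo and the 'bug' are made up; the one-line test script plays the role of your test suite):&lt;/p&gt;

```shell
# Find the commit that introduced a regression with git bisect.
set -e
cd "$(mktemp -d)" && git init -q repo && cd repo
git config user.email you@example.com && git config user.name you

# Five commits; pretend commit 4 introduced the bug (n.txt > 3).
for i in 1 2 3 4 5; do
  echo "$i" > n.txt && git add n.txt && git commit -qm "commit $i"
done

git bisect start HEAD HEAD~4                      # HEAD is bad, 4 commits back was good
git bisect run sh -c '[ "$(cat n.txt)" -le 3 ]'   # exit 0 = good, nonzero = bad
first_bad=$(git rev-parse refs/bisect/bad)        # the first bad commit found
git bisect reset
git log -1 --format=%s "$first_bad"               # prints: commit 4
```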

&lt;p&gt;The reason why many teams have rules to enforce a clean history is to simplify code reviews. When all commit messages look the same, they can be processed faster. When all commits have a body detailing the changes the reviewer can grasp the context faster. It's easier for a reviewer to review three commits in succession as opposed to one commit containing three different logical changes.&lt;/p&gt;

&lt;p&gt;However, we all make that "WIP" commit from time to time. We make commits that don't build, that have an ugly commit message, etc. With Git you can always go back and clean your history up with &lt;code&gt;git rebase --interactive&lt;/code&gt;. See &lt;a href="https://git-rebase.io/" rel="noopener noreferrer"&gt;this guide&lt;/a&gt; to learn how to manipulate the Git history.&lt;/p&gt;
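&lt;p&gt;For example, a "WIP" commit can be melded into its predecessor. Normally you'd edit the rebase todo list in your editor; the sketch below scripts that step via &lt;code&gt;GIT_SEQUENCE_EDITOR&lt;/code&gt; so it runs unattended (GNU sed assumed, and the repo is a toy):&lt;/p&gt;

```shell
# Clean up a "WIP" commit by squashing it into the previous commit.
set -e
cd "$(mktemp -d)" && git init -q repo && cd repo
git config user.email you@example.com && git config user.name you

echo base > f.txt && git add f.txt && git commit -qm "initial"
echo a >> f.txt && git commit -qam "Add feature"
echo b >> f.txt && git commit -qam "WIP"

# Turn the second todo line ("pick <WIP>") into "fixup": the WIP commit
# is melded into "Add feature" and its message is discarded.
GIT_SEQUENCE_EDITOR='sed -i "2s/^pick/fixup/"' git rebase -i HEAD~2
git log --oneline                            # initial + "Add feature" only
```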

&lt;p&gt;PS: &lt;a href="https://lwn.net/Articles/328438/" rel="noopener noreferrer"&gt;Linus Torvalds on &lt;em&gt;clean history&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Branching strategies
&lt;/h2&gt;

&lt;p&gt;In this section I'll discuss guidelines for what branches to create and when to create them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Release handling
&lt;/h3&gt;

&lt;p&gt;How you handle releases depends on whether you have to support multiple software releases. If you only ever release one version and then develop the next, simply adding &lt;strong&gt;tags&lt;/strong&gt; to the master branch is sufficient.&lt;/p&gt;

&lt;p&gt;If you have to support multiple releases, create a &lt;strong&gt;release branch&lt;/strong&gt; for every release. On this branch you integrate bugfixes, and from it you prepare and release updates. Even if you only support one release, release branches can still be put to great use. They are an excellent way to perform a soft code freeze: on the release branch the code doesn't change during bug-hunting, while on master new code can be added at full speed.&lt;/p&gt;
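&lt;p&gt;A local sketch of the idea (branch, version, and tag names are only examples):&lt;/p&gt;

```shell
# Cut a release branch for the 1.2 series, fix a bug on it, tag the
# patch release, and keep developing on master meanwhile.
set -e
cd "$(mktemp -d)" && git init -q repo && cd repo
git config user.email you@example.com && git config user.name you

echo a > app.txt && git add app.txt && git commit -qm "initial"
git branch -m master

git checkout -qb release/1.2 master          # soft code freeze for the 1.2 series
echo fix >> app.txt && git commit -qam "Fix crash on startup"
git tag -a v1.2.1 -m "Release 1.2.1"         # updates are released from this branch

git checkout -q master                       # development continues at full speed
echo new > feature.txt && git add feature.txt && git commit -qm "New feature"
git log --oneline --all --decorate
```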

&lt;p&gt;Read &lt;a href="https://trunkbaseddevelopment.com/branch-for-release/" rel="noopener noreferrer"&gt;this great guide&lt;/a&gt; on release branching.&lt;/p&gt;

&lt;p&gt;Sometimes you detect a bug that affects multiple releases or your master. There are different approaches to tackling this situation.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Git Flow recommends creating a branch from the release and merging the fixing commits into every affected release and the develop branch.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can implement the fix for every release manually.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When you have fixed the issue in one branch you can cherry-pick the commits onto the other branches.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
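&lt;p&gt;The cherry-pick variant, sketched on a toy repo (all names and commits are examples):&lt;/p&gt;

```shell
# Fix a bug on master, then cherry-pick the fix onto a release branch.
set -e
cd "$(mktemp -d)" && git init -q repo && cd repo
git config user.email you@example.com && git config user.name you

echo a > app.txt && git add app.txt && git commit -qm "initial"
git branch -m master
git branch release/1.1                       # release branch from the same point

echo feature > feature.txt && git add feature.txt && git commit -qm "Add feature"
echo patched > bugfix.txt && git add bugfix.txt && git commit -qm "Fix bug"
fix=$(git rev-parse HEAD)                    # remember the fix commit

git checkout -q release/1.1
git cherry-pick "$fix"                       # apply the same fix to the release
git log --oneline                            # initial + "Fix bug"
```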

&lt;h3&gt;
  
  
  Popular Branching Strategies
&lt;/h3&gt;

&lt;p&gt;There are a few branching strategy guides out there. &lt;a href="http://nvie.com/posts/a-successful-git-branching-model/" rel="noopener noreferrer"&gt;Git Flow&lt;/a&gt; was one of the first detailed branching guidelines written up on the internet and gained a lot of popularity because of that. It's an interesting read, but in my humble opinion it's overkill for a branching strategy. Here are two reddit threads &lt;a href="https://www.reddit.com/r/git/comments/bkvo0h/lpt_dont_go_overboard_with_your_branching_strategy/" rel="noopener noreferrer"&gt;[1]&lt;/a&gt; &lt;a href="https://www.reddit.com/r/programming/comments/a8n44j/a_successful_git_branching_model/" rel="noopener noreferrer"&gt;[2]&lt;/a&gt; where this strategy is discussed and its downsides are laid out.&lt;/p&gt;

&lt;p&gt;Another very popular workflow is &lt;a href="https://trunkbaseddevelopment.com/" rel="noopener noreferrer"&gt;Trunk-based development&lt;/a&gt; (TBD). The website has many great articles about the details of the workflow, and some parts of it can be applied to other workflows. Microsoft has published a &lt;a href="https://docs.microsoft.com/en-us/azure/devops/repos/git/git-branching-guidance?view=azure-devops" rel="noopener noreferrer"&gt;recommendation&lt;/a&gt; for Git workflows that is very similar to TBD while being a much shorter read. That article, and TBD in general, is my personal recommendation as a basis for your workflow.&lt;/p&gt;

&lt;p&gt;Once in a while some company invents a new Git workflow, gives it a fancy name and releases it as an article. Here are those that I'm aware of.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.microsoft.com/en-us/azure/devops/learn/devops-at-microsoft/release-flow" rel="noopener noreferrer"&gt;Release Flow&lt;/a&gt; by one team at Microsoft&lt;/li&gt;
&lt;li&gt;&lt;a href="https://guides.github.com/introduction/flow/" rel="noopener noreferrer"&gt;GitHub Flow&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.gitlab.com/ee/workflow/gitlab_flow.html" rel="noopener noreferrer"&gt;GitLab Flow&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, reading these articles is not very satisfying because you'll realize they're mostly the same as TBD. Which again speaks for TBD, IMO.&lt;/p&gt;

&lt;h3&gt;
  
  
  Long-lived topic branches
&lt;/h3&gt;

&lt;p&gt;Topic branches should be short-lived: ideally, only one or a few days should pass between branching and merging back. If branches live longer, problems arise because the code in master may change in the meantime, causing a divergence between the current master codebase and the version on which you created your branch. Merging/rebasing becomes very complicated.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ways to prevent long-lived branches:

&lt;ul&gt;
&lt;li&gt;Reduce the scope of the branch/issue (try to split the problem you are trying to solve into smaller problems)&lt;/li&gt;
&lt;li&gt;Use &lt;em&gt;feature toggles&lt;/em&gt; &lt;a href="https://martinfowler.com/articles/feature-toggles.html" rel="noopener noreferrer"&gt;[1]&lt;/a&gt;. TL;DR: Add some form of toggle (an &lt;code&gt;if&lt;/code&gt; statement most of the time) to disable and enable the new code. With this approach the new code resides next to the old, and both evolve close together.&lt;/li&gt;
&lt;li&gt;Use &lt;a href="https://trunkbaseddevelopment.com/branch-by-abstraction/" rel="noopener noreferrer"&gt;branch by abstraction&lt;/a&gt;, a variant of feature toggles where you first introduce an abstraction that wraps the part of the code you want to change. Then you implement the new code as an implementation of that abstraction while the rest of the team uses the old implementation.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  E-Mail based workflow (Linux kernel)
&lt;/h2&gt;

&lt;p&gt;The Linux kernel is the birthplace of Git. The lead developer of Linux, Linus Torvalds, made Git because he needed a source code management system that suits the development workflow of the kernel. That workflow is centered around mailing lists: code changes ("patches") are shared via mailing lists, and there is no central Git server like GitHub. &lt;a href="https://git-scm.com/book/en/v2/Distributed-Git-Distributed-Workflows" rel="noopener noreferrer"&gt;Here&lt;/a&gt; is an overview of distributed Git workflows. &lt;a href="https://git-send-email.io/" rel="noopener noreferrer"&gt;Here&lt;/a&gt; is a guide explaining how to use Git with e-mail.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;That's it for this workflow guide. Thanks for reading, and please give me feedback :)&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>git</category>
    </item>
    <item>
      <title>My vision for the missing LaTeX dependency manager</title>
      <dc:creator>Florian Polster</dc:creator>
      <pubDate>Mon, 29 Apr 2019 06:43:08 +0000</pubDate>
      <link>https://dev.to/fpolster/my-vision-for-the-missing-latex-dependency-manager-275n</link>
      <guid>https://dev.to/fpolster/my-vision-for-the-missing-latex-dependency-manager-275n</guid>
      <description>&lt;h2&gt;
  
  
  Abstract
&lt;/h2&gt;

&lt;p&gt;In this article I claim that the LaTeX developer experience can be improved by learning and copying from programming languages. Especially automated package management (including automatic package acquisition based on &lt;code&gt;\usepackage&lt;/code&gt; directives) would be a big gain.&lt;/p&gt;

&lt;h2&gt;
  
  
  My user experience with LaTeX
&lt;/h2&gt;

&lt;p&gt;TL;DR&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I shouldn't have to install the packages that my document requires for compilation one by one manually. This should be automated.&lt;/li&gt;
&lt;li&gt;The TeX Live installer asks too many questions.&lt;/li&gt;
&lt;li&gt;The LaTeX compiler is not user-friendly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'm a Linux user and I use LaTeX for university assignments. I acquired TeX Live through my distro's package manager; installing it was a cumulative download of about 2 GB. I got a ready-made LaTeX template from a fellow student and ran pdflatex. It reported that some .sty file could not be found and entered some weird interactive mode. I figured out how to prevent this interactive mode for good so that the compiler just terminates upon error. I then had to manually install all the packages providing the .sty files that were not present on my system.&lt;/p&gt;

&lt;p&gt;These are the first two problems I want to point out. The user should never be confronted with this strange interactive mode of the compiler. The worst part of the interactive mode is that it is so hard to exit (is it Ctrl+C, q, quit or exit?). Also, the retrieval of packages should be automated; installing them manually is tedious. An alternative is installing all texlive-* packages, so that none would have to be installed manually. But that is a download of over 6 GB...&lt;/p&gt;
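&lt;p&gt;For reference, the interactive prompt can be suppressed with compiler flags (assuming a standard TeX Live pdflatex; &lt;code&gt;main.tex&lt;/code&gt; is a stand-in for your document):&lt;/p&gt;

```shell
# Fail fast instead of dropping into the interactive prompt:
# -interaction=nonstopmode stops the compiler from prompting, and
# -halt-on-error makes it exit with a nonzero status on the first error.
pdflatex -interaction=nonstopmode -halt-on-error main.tex
```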

&lt;p&gt;Then I wanted to work on my thesis on a university lab computer. It was missing a distribution package that was necessary to build my thesis. Lacking admin privileges, I could not install it the regular way. What I could have done was download the package manually from CTAN and put it into my source path.&lt;/p&gt;

&lt;p&gt;At that time I was not aware that TeX Live could be installed manually, independently of the system package manager. I could have installed TeX Live in my home directory and used tlmgr to install all packages. That would have worked. However, having recently tried this out, manually installing TeX Live is also tedious: the installer gives the user many choices to make, such as the installation location, the size of the set of packages to install, and many other less relevant things. Also, after installation, all programs (compilers, tlmgr, etc.) need to be added to the PATH environment variable to make them executable from the command line.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dependency management in programming languages
&lt;/h2&gt;

&lt;p&gt;Programmers prefer using libraries and frameworks over writing everything from scratch, because this approach has many obvious advantages. Relying on externally developed code has some consequences: The foreign code needs to be locally available to (depending on the type of language) build or run the software. Not only that, but the foreign code may rely on other libraries to perform its tasks. Manually downloading, integrating and setting up external code and its dependencies is tedious. And the process needs to be repeated for every new PC that is set up to build/run the software and for every version update of the dependencies.&lt;/p&gt;

&lt;p&gt;Long story short: There exist dependency management tools for most programming languages. As an example let me describe what the JavaScript dependency manager and build tool &lt;em&gt;npm&lt;/em&gt; does for the user.&lt;/p&gt;

&lt;h4&gt;
  
  
  NPM
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;There is an online registry/repository where libraries/frameworks (the more general term being 'packages') are published. &lt;a href="http://npmjs.com/" rel="noopener noreferrer"&gt;http://npmjs.com/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;There is a CLI program &lt;code&gt;npm&lt;/code&gt; to find, download and update packages from that registry.&lt;/li&gt;
&lt;li&gt;If you use npm you put a file &lt;em&gt;package.json&lt;/em&gt; in the root folder of your code project. This file provides the following information:

&lt;ul&gt;
&lt;li&gt;It has a list of all packages that the code project depends on. You also specify the versions of the packages.&lt;/li&gt;
&lt;li&gt;It has a list of (build) scripts. These consist of a script name and a CLI command.&lt;/li&gt;
&lt;li&gt;And, less importantly, this file contains all the metadata of your project in case you want to publish the project as an npm package. As such you specify title, authors, current version etc.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;When somebody new starts working on a project all they need to do is get the project code from version control (only the internal code and the &lt;code&gt;package.json&lt;/code&gt; file is in version control - not the external dependencies) and then inside the root directory run &lt;code&gt;npm install&lt;/code&gt;. This reads the dependency list in &lt;code&gt;package.json&lt;/code&gt;, downloads them all and all their dependencies from the registry and puts them into a directory in the project root.&lt;/p&gt;

&lt;h4&gt;
  
  
  What LaTeX should learn and adopt from this
&lt;/h4&gt;

&lt;p&gt;LaTeX is in a good position to easily establish its own tooling for automated dependency management. With CTAN we already have an online registry of packages. Also, LaTeX already has the means for describing the dependencies of a document: the &lt;code&gt;\usepackage&lt;/code&gt; directives in the preamble.&lt;/p&gt;

&lt;p&gt;My vision for a LaTeX dependency manager is a little program that reads the main LaTeX source file to install all required packages automatically. Installing can be done in any of the following ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;If texlive is installed on the system then use the executables from that install. Download packages that are not present but required by the document directly from CTAN to ...&lt;br&gt;
a. a directory in the project&lt;br&gt;
b. a project-agnostic directory located in the user's home directory&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Download and install TeX Live. Offer express installation, i.e. do not ask for install location etc., and use sane defaults. Install no LaTeX packages except the absolute bare minimum. After that, use tlmgr to install all packages that are needed for the document. No more. There are multiple ways this can be done:&lt;/p&gt;

&lt;p&gt;a. npm style: install in the project directory.&lt;br&gt;
b. Install in the user's home directory. This will be enough for many users because oftentimes a computer is used by only one person anyway.&lt;br&gt;
c. Install globally for all users. This will require fiddling with admin privileges.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use the system package manager. I think it should be doable to find a way to map LaTeX package names to the names of the corresponding packages in the system package manager.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These options are ordered by increasing difficulty.&lt;/p&gt;
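&lt;p&gt;As a proof of concept, the core of option 1 fits in a few lines of shell. Everything here is hypothetical: the document is generated on the spot, and real LaTeX package names don't always map one-to-one to CTAN/TeX Live package names, so the install command is only printed rather than executed:&lt;/p&gt;

```shell
# Toy sketch: scan the preamble for \usepackage directives and print
# the tlmgr command that would install the required packages.
set -e
cd "$(mktemp -d)"

cat > main.tex <<'EOF'
\documentclass{article}
\usepackage{booktabs}
\usepackage[utf8]{inputenc}
\usepackage{siunitx}
\begin{document}
\end{document}
EOF

# Extract the package name from each \usepackage[...]{name} line.
pkgs=$(sed -n 's/\\usepackage\(\[[^]]*\]\)\{0,1\}{\([^}]*\)}.*/\2/p' main.tex)
echo "tlmgr install" $pkgs           # dry run: print instead of executing
```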

&lt;h2&gt;
  
  
  Build management
&lt;/h2&gt;

&lt;p&gt;Revisiting NPM build scripts: for a typical frontend JavaScript project there are two very common scripts defined in package.json:&lt;br&gt;
    - A &lt;code&gt;dev&lt;/code&gt; script: this typically serves the frontend to the browser by means of a locally running webserver. It also watches the files in the project for changes and automatically rebuilds the code and reloads the served page.&lt;br&gt;
    - A &lt;code&gt;build&lt;/code&gt; script: this builds the project for deployment.&lt;/p&gt;

&lt;p&gt;Scripts are run by invoking &lt;code&gt;npm run &amp;lt;scriptname&amp;gt;&lt;/code&gt; from the command line.&lt;/p&gt;

&lt;p&gt;'Build tool' is a common term for programs that automate the build process. With LaTeX, I think many people use Makefiles. The build process of a LaTeX document typically comprises bibliography processing and up to three LaTeX compilation runs. Additionally, you might want to perform some actions to keep the working directory clean of compilation byproducts. Maybe you're writing the body of the document in Markdown or some other markup and need a Markdown-to-LaTeX translation step. Maybe there is some data that has to be plotted during the build.&lt;/p&gt;

&lt;p&gt;TL;DR: build automation is pretty much the norm. Some thought should be given to whether there are benefits to be gained from including a build scripting feature in a possible LaTeX dependency manager, thereby turning it into a full-blown build tool like sbt or Maven. Maybe a plain Makefile does the job well enough. However, I would love to save TeX Live users from having to find out about the nonstop mode of the TeX compiler. Also, an authoring mode with automatic recompilation upon file change would be neat.&lt;/p&gt;
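&lt;p&gt;Worth noting: latexmk, which ships with TeX Live, already covers much of this ground (file names here are examples):&lt;/p&gt;

```shell
latexmk -pdf main.tex        # runs pdflatex/bibtex as many times as needed
latexmk -pdf -pvc main.tex   # preview-continuous mode: recompile on every file change
latexmk -c                   # clean up compilation byproducts
```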

</description>
      <category>latex</category>
      <category>tools</category>
    </item>
  </channel>
</rss>
