<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: José Miguel Parrella</title>
    <description>The latest articles on DEV Community by José Miguel Parrella (@bureado).</description>
    <link>https://dev.to/bureado</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F125553%2F2ecb0d33-8cfc-420a-a61a-b860db097190.jpg</url>
      <title>DEV Community: José Miguel Parrella</title>
      <link>https://dev.to/bureado</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bureado"/>
    <language>en</language>
    <item>
      <title>Transparency and user agency as principles for distributing and consuming open source software packages</title>
      <dc:creator>José Miguel Parrella</dc:creator>
      <pubDate>Tue, 08 Jun 2021 19:19:47 +0000</pubDate>
      <link>https://dev.to/bureado/transparency-and-user-agency-as-principles-for-distributing-and-consuming-open-source-software-packages-lea</link>
      <guid>https://dev.to/bureado/transparency-and-user-agency-as-principles-for-distributing-and-consuming-open-source-software-packages-lea</guid>
      <description>&lt;p&gt;More people are building more kinds of software that is consumed more often and in more places than ever, and the likelihood of that software being open source or having an open source dependency is very high compared to just a decade ago.&lt;/p&gt;

&lt;p&gt;These forces (software &lt;em&gt;publishers&lt;/em&gt;, software &lt;em&gt;kinds&lt;/em&gt; and software &lt;em&gt;use cases&lt;/em&gt;), along with network evolution, how software is monetized, and how people and organizations use technology, put pressure on software distribution technologies and techniques such as software packages, package managers, and software repositories.&lt;/p&gt;

&lt;p&gt;I've been &lt;a href="https://gist.github.com/bureado/792037b71229db3c37975e70e8a9c54a"&gt;researching Linux and open source package management&lt;/a&gt; for a while and I'm very excited about many of those technologies and their applications, from &lt;a href="https://distr1.org/"&gt;distri&lt;/a&gt; and &lt;a href="https://github.com/systemd/mkosi"&gt;systemd/mkosi&lt;/a&gt; to &lt;a href="https://ostreedev.github.io/ostree/"&gt;libostree&lt;/a&gt; and &lt;a href="https://github.com/spack/spack"&gt;spack&lt;/a&gt;. Unsurprisingly, many of these are prompting us to revise how we think about distributions.&lt;/p&gt;


&lt;blockquote class="ltag__twitter-tweet"&gt;

  &lt;div class="ltag__twitter-tweet__main"&gt;
    &lt;div class="ltag__twitter-tweet__header"&gt;
      &lt;img class="ltag__twitter-tweet__profile-image" src="https://res.cloudinary.com/practicaldev/image/fetch/s--ajGmT_lx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/profile_images/1312402601305165825/ttrKQo_H_normal.jpg" alt="Dirk RIEHLE profile image"&gt;
      &lt;div class="ltag__twitter-tweet__full-name"&gt;
        Dirk RIEHLE
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__username"&gt;
        @dirkriehle
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__twitter-logo"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ir1kO05j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-f95605061196010f91e64806688390eb1a4dbc9e913682e043eb8b1e06ca484f.svg" alt="twitter logo"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__body"&gt;
      What is an open source distribution?
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__date"&gt;
      10:25 AM - 28 May 2021
    &lt;/div&gt;


    &lt;div class="ltag__twitter-tweet__actions"&gt;
      &lt;a href="https://twitter.com/intent/tweet?in_reply_to=1398223906813657090" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fFnoeFxk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-reply-action-238fe0a37991706a6880ed13941c3efd6b371e4aefe288fe8e0db85250708bc4.svg" alt="Twitter reply action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/retweet?tweet_id=1398223906813657090" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k6dcrOn8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-retweet-action-632c83532a4e7de573c5c08dbb090ee18b348b13e2793175fea914827bc42046.svg" alt="Twitter retweet action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/like?tweet_id=1398223906813657090" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SRQc9lOp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-like-action-1ea89f4b87c7d37465b0eb78d51fcb7fe6c03a089805d7ea014ba71365be5171.svg" alt="Twitter like action"&gt;
      &lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/blockquote&gt;


&lt;p&gt;Inspired by &lt;a href="https://fearthecowboy.com/informational/2020/11/19/Laws-of-Software-Installation.html"&gt;The Laws of Software Installation&lt;/a&gt;, here's an attempt to elaborate on the "contract" between software authors and users, particularly in open source software where the volume and composition possibilities create a vast and complex ecosystem of its own.&lt;/p&gt;

&lt;p&gt;Our expectations of &lt;strong&gt;transparency&lt;/strong&gt; for software packages have evolved. What used to be minimal metadata about the publisher, a description of the software, and signatures has been enriched with things like licensing information and is now evolving into the &lt;a href="https://www.ntia.gov/SBOM"&gt;Software Bill of Materials&lt;/a&gt;: the &lt;em&gt;Nutrition Facts&lt;/em&gt; label that lets every software package explain how it came to be, where it comes from, and who and what was involved in making that happen.&lt;/p&gt;
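&lt;p&gt;As a sketch of where this is heading, a minimal SBOM in CycloneDX-style JSON might look like the following (the component, version and license here are made up for illustration):&lt;/p&gt;

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.2",
  "components": [
    {
      "type": "library",
      "name": "left-pad",
      "version": "1.3.0",
      "purl": "pkg:npm/left-pad@1.3.0",
      "licenses": [ { "license": { "id": "MIT" } } ]
    }
  ]
}
```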

&lt;p&gt;In an open source world, consent builds upon transparency. We have been influenced by certain use cases, such as mobile apps, where "permissions" (both stated by the developer and enforced by the platform) have become the expectation.&lt;/p&gt;

&lt;p&gt;All of this means different things to different people: some users might want to know if the package lays itself out in the filesystem according to the FHS; others need to know if the application will self-update or install publisher certificates, if it needs network egress to ship telemetry at runtime, if it pulls additional dependencies out-of-band into a local cache, if it changes environment variables, starts automatically on boot, needs to run in a privileged container, ships with and relies on LSM integration, and more.&lt;/p&gt;

&lt;p&gt;In general, we lack a standardized mechanism to carry behavioral ("permissions") information in a package, let alone &lt;em&gt;prove&lt;/em&gt; those claims or make policy decisions based on them, such as allowing a pull into a build or into production based on the organization's choices around, say, network egress or vendoring.&lt;/p&gt;
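&lt;p&gt;To make that concrete, here's a minimal sketch in Python of what policy-gating a pull on declared behaviors could look like; the manifest schema and behavior names are invented for illustration, since no such standard exists:&lt;/p&gt;

```python
# Hypothetical "behaviors" manifest a package could carry as metadata.
# The schema and the behavior names below are invented for illustration.
manifest = {
    "name": "example-agent",
    "behaviors": ["network-egress", "self-update", "starts-on-boot"],
}

# Organizational policy: declared behaviors that block a pull into a build.
denied_behaviors = {"self-update", "installs-certificates"}

def policy_violations(manifest, denied):
    """Return the sorted declared behaviors that violate policy."""
    return sorted(set(manifest.get("behaviors", [])).intersection(denied))

violations = policy_violations(manifest, denied_behaviors)
if violations:
    print("pull denied: " + ", ".join(violations))
else:
    print("pull allowed")
```

&lt;p&gt;Of course, this presumes the behaviors are both declared and provable, which is exactly the gap described above.&lt;/p&gt;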

&lt;p&gt;&lt;strong&gt;Interfaces&lt;/strong&gt; are another interesting aspect of software packages, one that is easily overlooked. Beyond the most basic operations (install, remove and maybe update), package managers don't share semantics, let alone have a harmonized approach to automation: allowing themselves to be handled by the system in user-defined ways.&lt;/p&gt;

&lt;p&gt;Hooks, triggers and other artifacts are regularly abused to achieve automation goals such as preseeding configuration or performing provisioning steps right after install, sometimes overreaching in their use of administrative privileges, with &lt;a href="https://github.com/cncf/tag-security/tree/main/supply-chain-security/compromises"&gt;broad security implications&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Package managers inevitably grow in sophistication to meet certain user needs (see &lt;a href="https://wiki.debian.org/DpkgTriggers"&gt;&lt;code&gt;dpkg&lt;/code&gt; triggers&lt;/a&gt;), but in general, whether you are a software publisher, an integrator, an IT organization, a developer or a system administrator, you are expected to learn the inner workings and nuances of each package system instead of relying on standardized interfaces, which results in wildly varying user experiences and thinly spread resources.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A good way to illustrate today's complexity is to parse through the several thousand lines of Ansible code devoted to dealing with &lt;a href="https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/apt.py"&gt;APT&lt;/a&gt; or &lt;a href="https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/dnf.py"&gt;DNF&lt;/a&gt;, or to look at how basic operations such as listing &lt;a href="https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/package_facts.py"&gt;Linux packages&lt;/a&gt; or &lt;a href="https://github.com/CycloneDX/cyclonedx-gomod/issues/20#issuecomment-847163027"&gt;Go modules&lt;/a&gt; are handled.&lt;/p&gt;
&lt;/blockquote&gt;
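&lt;p&gt;To sketch what a harmonized interface could feel like, here's an entirely hypothetical contract in Python; the method names and the in-memory backend are made up, and real backends (APT, DNF, etc.) would implement the same surface:&lt;/p&gt;

```python
from abc import ABC, abstractmethod

class PackageManager(ABC):
    """Hypothetical harmonized surface for package operations."""

    @abstractmethod
    def install(self, name, version=None):
        ...

    @abstractmethod
    def remove(self, name):
        ...

    @abstractmethod
    def list_installed(self):
        """Return a sorted list of (name, version) tuples."""

class FakeBackend(PackageManager):
    """In-memory stand-in that shows the shape of the contract."""

    def __init__(self):
        self.db = {}

    def install(self, name, version=None):
        self.db[name] = version or "latest"

    def remove(self, name):
        self.db.pop(name, None)

    def list_installed(self):
        return sorted(self.db.items())

pm = FakeBackend()
pm.install("htop", "2.2.0")
print(pm.list_installed())  # prints [('htop', '2.2.0')]
```

&lt;p&gt;An automation tool could then target &lt;code&gt;PackageManager&lt;/code&gt; instead of reimplementing each manager's nuances.&lt;/p&gt;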

&lt;p&gt;These two attributes, transparency and interfaces, are only worth investing in if they give users &lt;strong&gt;agency&lt;/strong&gt;. Of course, there are other attributes that make a high quality software package, particularly a judicious use of resources and testing for alternative configurations while providing sensible defaults.&lt;/p&gt;

&lt;p&gt;Finally, I think the jury's still out on at least two topics: vendoring and component duplication, and automatic upgrades. While there will always be very good arguments against both in several use cases, I expect more of each as software "kinds" keep growing in a world where there isn't much contention for storage or networking.&lt;/p&gt;

&lt;p&gt;It seems there's only one thing we love more than the open source components we use and build upon daily, and that's the &lt;em&gt;package manager&lt;/em&gt; we use to acquire those components. There's so much exciting activity happening in this space that it's easy to focus on technical differences, such as package formats or installation mechanics, when looking at package systems. With this post, I've tried to make the case for considering user agency, quality, transparency and interfaces as key tenets of the "contract" between software publishers, distributors and users.&lt;/p&gt;

&lt;h2&gt;Summary&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The growth of software publishers, software &lt;em&gt;kinds&lt;/em&gt; and software use cases, along with changes in software monetization and network evolution, is changing what users expect from software packages&lt;/li&gt;
&lt;li&gt;While &lt;a href="https://michael.stapelberg.ch/posts/2019-08-17-linux-package-managers-are-slow/"&gt;fun&lt;/a&gt; and necessary innovation is happening, package formats, implementation choices and the intricacies of how a software package is installed aren't necessarily where we'll meet future user expectations or how we give them agency&lt;/li&gt;
&lt;li&gt;Packages should not only describe what the software is, but how it was made and where it came from; packages should also describe their behaviors: metadata becomes a contract&lt;/li&gt;
&lt;li&gt;Package operations (across the entire lifecycle and well beyond installation) should be automatable via interfaces and said interfaces should allow for user-defined options: alternatives, configurations, installation paths, etc.&lt;/li&gt;
&lt;li&gt;Quality remains critical: packages must guarantee their removability, keep promises across the lifecycle and give users control over mutating logic (e.g., triggers) and resource usage, noting that users' attention is also a finite resource that increases security risk when depleted&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Are you a software publisher or distributor? What steps are you taking to make your packages more transparent and give users more control? Are you a developer? I always love to hear new things that people have learned about software packaging and distribution. Input is always welcome!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>packaging</category>
      <category>supplychain</category>
    </item>
    <item>
      <title>Dapr, the hard way</title>
      <dc:creator>José Miguel Parrella</dc:creator>
      <pubDate>Thu, 13 Feb 2020 00:23:31 +0000</pubDate>
      <link>https://dev.to/bureado/dapr-the-hard-way-52g2</link>
      <guid>https://dev.to/bureado/dapr-the-hard-way-52g2</guid>
      <description>&lt;p&gt;This weekend I wanted to catch up with &lt;a href="https://dapr.io/"&gt;Dapr&lt;/a&gt;, the Distributed Application Runtime. As a sysadmin by trade, I have little knowledge of application model theory and limited hands-on experiences with some of the challenges that &lt;em&gt;Dapr&lt;/em&gt; seeks to address, so I was a little bit more inclined to figure out how things were put together, which proved useful for me to understand the value of &lt;em&gt;Dapr&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; what the &lt;em&gt;Dapr&lt;/em&gt; &lt;strong&gt;software&lt;/strong&gt; does is stand up a &lt;code&gt;localhost&lt;/code&gt; endpoint next to your application. This endpoint, which speaks HTTP and gRPC, provides a standard, simple API (the &lt;em&gt;Dapr&lt;/em&gt; &lt;strong&gt;spec&lt;/strong&gt;) through which your application and any other application that interacts with yours can invoke methods, store and retrieve state, publish and subscribe to events, and more.&lt;/p&gt;

&lt;h2&gt;Why &lt;em&gt;Dapr&lt;/em&gt;?&lt;/h2&gt;

&lt;p&gt;The reasons &lt;em&gt;why&lt;/em&gt; you'd want such a thing are best described on the &lt;a href="https://dapr.io/"&gt;website&lt;/a&gt; and in the &lt;a href="https://www.youtube.com/watch?v=mPVnu4W0xzQ"&gt;Azure Fridays&lt;/a&gt; two-part interview with the team, but I'll give it a shot.&lt;/p&gt;

&lt;p&gt;At first glance, none of the things &lt;em&gt;Dapr&lt;/em&gt; can do are necessarily new to developers. Everyone consumes events and stores state. What's new is the ability to build large, complex, distributed applications without much focus on the (traditionally hard) implementation details.&lt;/p&gt;

&lt;p&gt;Historically, ways through which developers bring things like session stores into their applications include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Framework facilities&lt;/li&gt;
&lt;li&gt;Middleware&lt;/li&gt;
&lt;li&gt;External services&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Examples of framework facilities that can achieve what &lt;em&gt;Dapr&lt;/em&gt; does include the &lt;a href="https://laravel.com/docs/5.0/session"&gt;session management capabilities in Laravel&lt;/a&gt; for PHP developers, or things like &lt;a href="https://metacpan.org/pod/Catalyst::Plugin::Session"&gt;Catalyst::Plugin::Session&lt;/a&gt; for Perl developers.&lt;/p&gt;

&lt;p&gt;Those facilities offer the most idiomatic option for developers, but they come at a cost: it's harder to part ways with the framework; in some cases the facility isn't ready for a distributed world (the most simplistic ones are instance-bound, live in memory, etc.); and even for the more sophisticated ones, someone still has to write and maintain the code that persists state, interacts with the backends, and so on.&lt;/p&gt;

&lt;p&gt;Middlewares are a generalization of this. They tend to be less language dependent, but they also tend to require more specialized operator knowledge. Some have rich language bindings, like the Java bindings for many Apache projects and Java-adjacent middlewares. With externally managed services, the issue tends to be around the need to import and keep track of SDKs, which can introduce challenges from poor developer experience to severe API lag, missing documentation and more.&lt;/p&gt;

&lt;p&gt;Imagine a world where your application can be written in any language (or many) and where you can always count on a local endpoint that offers you the building blocks to make your application highly decoupled and distributed (including in Kubernetes).&lt;/p&gt;

&lt;p&gt;This would be without taking any SDK dependencies (and perhaps even dropping some!), just using HTTP/gRPC and JSON. All you need to know as a developer is how to target the &lt;em&gt;Dapr&lt;/em&gt; &lt;em&gt;spec&lt;/em&gt;.&lt;/p&gt;
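&lt;p&gt;For example, targeting the spec is little more than URL construction plus plain HTTP. Here's a sketch in Python of the endpoints used later in this post (the port is whatever your sidecar reports at startup):&lt;/p&gt;

```python
# Build the Dapr sidecar URLs an application targets; these paths follow
# the v1.0 HTTP API as used in the samples discussed in this post.
def invoke_url(dapr_port, app_id, method):
    """URL to invoke a method on a Dapr-enabled app."""
    return f"http://localhost:{dapr_port}/v1.0/invoke/{app_id}/method/{method}"

def state_url(dapr_port, store):
    """URL to save or retrieve state through the sidecar."""
    return f"http://localhost:{dapr_port}/v1.0/state/{store}"

print(invoke_url(34651, "alpha", "order"))
# prints http://localhost:34651/v1.0/invoke/alpha/method/order
```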

&lt;p&gt;And even if ultimately all state must be persisted somewhere, and/or a pub/sub server must be somehow available, you can delegate all of these decisions to someone else in your team, and those things can change from underneath you without you changing your application: want to use a managed CosmosDB instance instead of a bring-your-own Redis one? &lt;em&gt;Dapr&lt;/em&gt; can do that.&lt;/p&gt;
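&lt;p&gt;That swap lives in a component definition rather than in your code. Here's a sketch of what such a definition looks like; the names and connection values are placeholders, and the exact fields (e.g., a &lt;code&gt;state.azure.cosmosdb&lt;/code&gt; type) should be checked against the current Dapr docs:&lt;/p&gt;

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis      # swap for another store, e.g. state.azure.cosmosdb
  metadata:
  - name: redisHost
    value: "localhost:6379"
  - name: redisPassword
    value: ""
```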
&lt;h2&gt;Dapr and &lt;code&gt;podman&lt;/code&gt;&lt;/h2&gt;

&lt;p&gt;I'm running Debian &lt;code&gt;sid&lt;/code&gt; and I wanted to run &lt;code&gt;dapr&lt;/code&gt; in my local machine. I followed the &lt;a href="https://dapr.io/#download"&gt;installation instructions&lt;/a&gt; (the install script basically fetches the latest release from GitHub) and ran &lt;code&gt;dapr init&lt;/code&gt;. This failed (at least in version 0.3.0) because I don't have &lt;code&gt;docker&lt;/code&gt; in my machine. &lt;a href="https://dev.to/bureado/a-quick-guide-to-podman-and-toolbox-in-debian-5672"&gt;Since I wanted to use &lt;code&gt;podman&lt;/code&gt;&lt;/a&gt;, I proceeded to take a look at the code and see how I could make that happen.&lt;/p&gt;

&lt;p&gt;In standalone mode (i.e., not Kubernetes) &lt;code&gt;dapr init&lt;/code&gt; does &lt;a href="https://github.com/dapr/cli/blob/master/pkg/standalone/standalone.go#L54"&gt;three key things&lt;/a&gt;: check that &lt;code&gt;docker&lt;/code&gt; is installed, fetch the &lt;code&gt;daprd&lt;/code&gt; binary and prepare the runtime. So I went ahead and changed the logic that checks for and calls &lt;code&gt;docker&lt;/code&gt; (see &lt;a href="https://github.com/dapr/cli/issues/257"&gt;#257&lt;/a&gt;) in my cloned repo, and rebuilt &lt;code&gt;dapr&lt;/code&gt; with &lt;code&gt;go build&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Once I ran &lt;code&gt;dapr init&lt;/code&gt; with the resulting binary, I had local containers running Dapr's &lt;a href="https://github.com/dapr/cli/blob/master/pkg/standalone/standalone.go#L200"&gt;placement service&lt;/a&gt; and &lt;a href="https://github.com/dapr/cli/blob/master/pkg/standalone/standalone.go#L143"&gt;Redis&lt;/a&gt; as the default state store in standalone mode:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ dapr init 
⌛  Making the jump to hyperspace...
✅  Downloading binaries and setting up components...
✅  Success! Dapr is up and running

$ podman ps | grep dapr
231a26a31000  docker.io/library/redis:latest   redis-server          4 minutes ago  Up 4 minutes ago  0.0.0.0:6379-&amp;gt;6379/tcp    dapr_redis
f49eb9753492  docker.io/daprio/dapr:latest                           4 minutes ago  Up 4 minutes ago  0.0.0.0:50005-&amp;gt;50005/tcp  dapr_placement
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Dapr&lt;/em&gt; also fetches &lt;a href="https://github.com/dapr/dapr"&gt;daprd&lt;/a&gt;, the component that actually starts with each application instance. The Redis container backs it for storing state, and the placement container is used for actors -- more on this later.&lt;/p&gt;

&lt;h2&gt;Dapr in action&lt;/h2&gt;

&lt;p&gt;The easiest way to see &lt;em&gt;Dapr&lt;/em&gt; in action in standalone mode is to run one of the &lt;a href="https://github.com/dapr/samples/tree/master/1.hello-world"&gt;samples&lt;/a&gt; (don't forget to install sample dependencies with &lt;code&gt;npm install&lt;/code&gt;), for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ dapr run --app-id alpha --log-level error --app-port 3000 -- node app.js
ℹ️  Starting Dapr with id alpha. HTTP Port: 34651. gRPC Port: 34655
✅  You're up and running! Both Dapr and your app logs will appear here.
== APP == Node App listening on port 3000!
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;As you can see, I start my Node.js application as an argument to &lt;code&gt;dapr&lt;/code&gt;, and I use &lt;code&gt;--app-port&lt;/code&gt; to bind &lt;code&gt;daprd&lt;/code&gt; to the application port. What I can do now is query the &lt;em&gt;Dapr&lt;/em&gt; endpoint at &lt;code&gt;localhost:34651&lt;/code&gt;, calling my Node.js application's methods and persisting and querying state, and I can do this over HTTP or gRPC, with the &lt;code&gt;dapr&lt;/code&gt; CLI or a tool like &lt;code&gt;curl&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ dapr invoke --app-id alpha --method neworder --payload '{"data": { "orderId": "42" } }'
$ curl http://localhost:34651/v1.0/invoke/alpha/method/order ; echo
{"orderId":"42"}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The interesting thing is that when you &lt;a href="https://github.com/dapr/samples/blob/master/1.hello-world/app.js#L34"&gt;read the code&lt;/a&gt; of the &lt;code&gt;/neworder&lt;/code&gt; method in my application, it &lt;em&gt;also&lt;/em&gt; calls &lt;em&gt;Dapr&lt;/em&gt; for persistence (and yes, &lt;code&gt;daprPort&lt;/code&gt; is passed as an environment variable), which makes the entire programming model decoupled and simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app.post('/neworder', (req, res) =&amp;gt; {
    const state = [{
      key: req.body.data,
      value: req.body.data.orderId
    }];

    fetch(stateUrl, { // http://localhost:${daprPort}/v1.0/state/${stateStoreName}
        method: "POST",
        body: JSON.stringify(state),
        headers: {
            "Content-Type": "application/json"
        }
    }).then((response) =&amp;gt; {
    ...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;And this also means that instead of &lt;code&gt;curl&lt;/code&gt; or a CLI, I could have written a consumer in &lt;a href="https://github.com/dapr/samples/tree/master/3.distributed-calculator"&gt;any other language&lt;/a&gt;. In fact, Dapr's &lt;em&gt;components&lt;/em&gt; extend to &lt;a href="https://github.com/dapr/samples/tree/master/4.pub-sub"&gt;pub-sub&lt;/a&gt;, &lt;a href="https://github.com/dapr/samples/tree/master/5.bindings"&gt;input/output bindings&lt;/a&gt; (great for triggers and other events), state and secret stores, tracing exporters, OAuth authorization and, perhaps most dramatically, &lt;a href="https://github.com/dapr/docs/blob/master/concepts/actor/actor_overview.md"&gt;Virtual Actors&lt;/a&gt; support, helping abstract the implementation details of things such as concurrency control.&lt;/p&gt;

&lt;p&gt;Check out all the supported integrations in the &lt;a href="https://github.com/dapr/components-contrib"&gt;components-contrib&lt;/a&gt; repo!&lt;/p&gt;

&lt;h2&gt;Beyond standalone &lt;code&gt;daprd&lt;/code&gt;&lt;/h2&gt;

&lt;p&gt;Of course, things get far more interesting when you deploy this to Kubernetes. A simple &lt;code&gt;dapr init --kubernetes&lt;/code&gt; will use your current kubeconfig to deploy the &lt;em&gt;Dapr&lt;/em&gt; elements to your cluster. You can deploy a &lt;code&gt;redis&lt;/code&gt; chart, or use a managed Redis service or other state store. I end up with a Dapr-enabled cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods  | grep dapr
dapr-operator-68f7dcb454-zjhdj           1/1     Running   0          4d1h
dapr-placement-6d77d54dc6-ww5rb          1/1     Running   0          4d1h
dapr-sidecar-injector-86d6ccf956-7r85k   1/1     Running   0          4d1h
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;a href="https://github.com/dapr/samples/tree/master/3.distributed-calculator"&gt;distributed calculator&lt;/a&gt; sample is a great way to see &lt;em&gt;Dapr&lt;/em&gt; in action in a Kubernetes cluster. You'll get a React-based calculator that can persist state to &lt;em&gt;Dapr&lt;/em&gt; and that calls services in multiple languages for each operation, &lt;a href="https://github.com/dapr/samples/blob/master/3.distributed-calculator/react-calculator/server.js#L21"&gt;all over Dapr&lt;/a&gt;. Once you deploy this sample, you'll end up with a bunch of services:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get svc
NAME                        TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)            AGE
addapp-dapr                 ClusterIP      10.0.66.128    &amp;lt;none&amp;gt;           80/TCP,50001/TCP   4d1h
calculator-front-end        LoadBalancer   10.0.15.166    158.51.155.210   80:30366/TCP       4d1h
calculator-front-end-dapr   ClusterIP      10.0.79.13     &amp;lt;none&amp;gt;           80/TCP,50001/TCP   4d1h
divideapp-dapr              ClusterIP      10.0.160.36    &amp;lt;none&amp;gt;           80/TCP,50001/TCP   4d1h
...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You'll notice that each service has a &lt;code&gt;-dapr&lt;/code&gt; sidecar, and that the React frontend doesn't know the cluster IP or service names of &lt;a href="https://github.com/dapr/samples/blob/master/3.distributed-calculator/react-calculator/server.js#L21"&gt;each operation's corresponding service&lt;/a&gt;. This is a key aspect of &lt;em&gt;Dapr&lt;/em&gt; beyond standalone mode: &lt;a href="https://github.com/dapr/components-contrib/tree/master/servicediscovery"&gt;Kubernetes-aware service discovery&lt;/a&gt;, along with mDNS capabilities for non-Kubernetes environments, that developers don't need to implement in their code.&lt;/p&gt;

&lt;p&gt;And what do you say? Passing around the output of something like &lt;a href="https://github.com/kellyjonbrazil/jc"&gt;jc&lt;/a&gt;? Using &lt;code&gt;dapr&lt;/code&gt; as the backend for something like &lt;a href="https://github.com/deskconn/deskconn"&gt;deskconn&lt;/a&gt;? Running &lt;code&gt;dapr run --app-id ncapp --app-port 3000 -- socat -v tcp-l:3000,fork exec:'/bin/cat'&lt;/code&gt;? Sure! Why not?&lt;/p&gt;

&lt;h2&gt;Summary&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Dapr&lt;/em&gt; offers an excellent path to getting rid of redundant logic and brittle implementations, and enables large teams to start doing things like service invocation, decoupled state stores and event-driven programming while reducing the third-party code and SDK footprint in the codebase.&lt;/p&gt;

&lt;p&gt;It could be particularly exciting when coupled with OAM/Rudr (watch Mark Russinovich's &lt;a href="https://www.youtube.com/watch?v=LAUDVk8PaCY"&gt;Dapr, Rudr, OAM interview at Microsoft Ignite&lt;/a&gt;) to help further separate developer and operator concerns in Kubernetes clusters.&lt;/p&gt;

&lt;p&gt;I learned a thing (or five) in the process of checking it out. Give it a try, &lt;a href="https://twitter.com/daprdev"&gt;reach out to the team&lt;/a&gt; and say hi!&lt;/p&gt;

</description>
      <category>dapr</category>
      <category>kubernetes</category>
      <category>distributedsystems</category>
    </item>
    <item>
      <title>A quick guide to podman and toolbox in Debian (and maybe Ubuntu)</title>
      <dc:creator>José Miguel Parrella</dc:creator>
      <pubDate>Sat, 01 Feb 2020 19:11:27 +0000</pubDate>
      <link>https://dev.to/bureado/a-quick-guide-to-podman-and-toolbox-in-debian-5672</link>
      <guid>https://dev.to/bureado/a-quick-guide-to-podman-and-toolbox-in-debian-5672</guid>
      <description>&lt;p&gt;Over the last couple years I've been spending a lot of time playing with containerized development environments such as WSL2 and Crostini. I also run Fedora in a NUC to try and keep up with &lt;code&gt;systemd&lt;/code&gt;, &lt;code&gt;cgroup2&lt;/code&gt;, &lt;code&gt;podman&lt;/code&gt; and other technologies. But since I brought my Debian laptop to &lt;a href="http://fosdem.org/2020/"&gt;FOSDEM20&lt;/a&gt;, I wanted to play with &lt;a href="https://podman.io/"&gt;Podman&lt;/a&gt; and &lt;a href="https://github.com/containers/toolbox"&gt;Toolbox&lt;/a&gt; natively.&lt;/p&gt;

&lt;h2&gt;Why &lt;code&gt;podman&lt;/code&gt; is important&lt;/h2&gt;

&lt;p&gt;Remember when &lt;code&gt;docker&lt;/code&gt; bundled daemon &lt;em&gt;and&lt;/em&gt; tools? Although it was eventually decoupled, many of us learned a formulaic usage of the &lt;code&gt;docker&lt;/code&gt; command and it's not unusual to find the legacy packages in many of our systems today.&lt;/p&gt;

&lt;p&gt;That was certainly the case for me: I &lt;em&gt;knew&lt;/em&gt; that we needed to decouple to achieve rootless containers, registry-side building and interchangeable runtimes. But the options expanded rapidly and since I was wary of keeping too many tools with overlapping experiences in my system, I continued to rely on the &lt;code&gt;docker-ce&lt;/code&gt; packages.&lt;/p&gt;

&lt;p&gt;I believe &lt;code&gt;podman&lt;/code&gt; is a credible replacement for my needs, and if you want to play around with &lt;code&gt;toolbox&lt;/code&gt;, &lt;code&gt;podman&lt;/code&gt; is a requirement anyway, even if one you'll be happy with. All of this stack is integrated and tested in Fedora before other RPM-based distros (let alone Debian derivatives), so it does take a little bit of work, but the results and basic functionality are pretty much comparable. Let's begin.&lt;/p&gt;

&lt;h2&gt;Installing &lt;code&gt;podman&lt;/code&gt; and &lt;code&gt;toolbox&lt;/code&gt; in Debian&lt;/h2&gt;

&lt;p&gt;While I was attending FOSDEM, two speakers (one from SUSE, one from Red Hat) wondered if &lt;code&gt;podman&lt;/code&gt; and &lt;code&gt;toolbox&lt;/code&gt; were available for Debian and derivatives, such as Ubuntu. They assumed so, but weren't quite sure. (A word of warning: there's a &lt;a href="https://blueprints.launchpad.net/~ondrak/+snap/toolbox"&gt;snap&lt;/a&gt; by Ondrej called "toolbox", but this is not what we're discussing here.)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;code&gt;podman&lt;/code&gt;, openSUSE's Kubic builds &lt;code&gt;deb&lt;/code&gt; packages that work in Debian and Ubuntu. This is the &lt;a href="https://podman.io/getting-started/installation#debian"&gt;current installation method&lt;/a&gt;, and it's what I used. Though the road has been &lt;a href="https://github.com/containers/libpod/issues/1742"&gt;rocky&lt;/a&gt;, there's &lt;a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=930440"&gt;ongoing work&lt;/a&gt; to get this package officially into Debian.&lt;/li&gt;
&lt;li&gt;For &lt;code&gt;toolbox&lt;/code&gt;, which is a shell script, you can fetch a &lt;a href="https://github.com/containers/toolbox/releases"&gt;release&lt;/a&gt; and place the script in your &lt;code&gt;PATH&lt;/code&gt;. Make sure you install &lt;code&gt;flatpak&lt;/code&gt;, as it's needed (there could be other dependencies, but on my reasonably vanilla desktop system I was only missing a &lt;code&gt;sudo apt install flatpak -y&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One last thing: part of the rootless magic relies on &lt;a href="https://lwn.net/Articles/673597/"&gt;user namespaces&lt;/a&gt;, so make sure you &lt;code&gt;echo 1 | sudo tee /proc/sys/kernel/unprivileged_userns_clone&lt;/code&gt; and understand the security implications of doing so.&lt;/p&gt;

&lt;h2&gt;What can &lt;code&gt;toolbox&lt;/code&gt; do?&lt;/h2&gt;

&lt;p&gt;In Fedora, &lt;code&gt;toolbox&lt;/code&gt; is used to provide a mutable working environment on top of a (mostly) immutable operating system such as Silverblue or CoreOS (you can watch &lt;a href="https://www.youtube.com/watch?v=BGXs0W6NRBM"&gt;rishi's presentation&lt;/a&gt; for the full impact).&lt;/p&gt;

&lt;p&gt;In our case, since we're running Debian in an environment I regularly mutate, we care less about that aspect, but we still want a working environment that's easy to step in and out of.&lt;/p&gt;

&lt;p&gt;Either way, you might be wondering how this is any different from a pet &lt;code&gt;docker run -it ... /bin/bash&lt;/code&gt;, a playground VM (whether with &lt;code&gt;libvirt&lt;/code&gt; or &lt;code&gt;lxd&lt;/code&gt;, in a public cloud, on a VPS provider or somewhere else), or something custom you've built with a mix of &lt;code&gt;pyenv&lt;/code&gt; or &lt;code&gt;nix-shell&lt;/code&gt;...&lt;/p&gt;

&lt;p&gt;The difference with &lt;code&gt;toolbox&lt;/code&gt; is that it overlays this environment on top of your profile, carries over your shell settings and resolves users just as on the host. So you don't need to worry about things like mounting a &lt;code&gt;9p&lt;/code&gt; filesystem, syncing files, adjusting ownership, etc.&lt;/p&gt;

&lt;p&gt;So when you enter the &lt;code&gt;toolbox&lt;/code&gt; environment, you feel like you're in your regular environment, but anything you change beyond your profile stays in the container. Here's an example:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bureado@crucia:~$ echo hi-from-host &amp;gt; hello
bureado@crucia:~$ htop -v

Command 'htop' not found, but can be installed with:

sudo apt install htop

bureado@crucia:~$ toolbox enter --container debian-toolbox-latest
bureado@toolbox:~$ cat hello &amp;amp;&amp;amp; echo hi-from-container &amp;gt; hello
hi-from-host
bureado@toolbox:~$ htop -v
htop 2.2.0 - (C) 2004-2019 Hisham Muhammad
Released under the GNU GPL.

bureado@toolbox:~$ logout
bureado@crucia:~$ cat hello
hi-from-container
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;In this example you can see how I switched from my laptop (&lt;code&gt;crucia&lt;/code&gt;) to the container (&lt;code&gt;toolbox&lt;/code&gt;), sharing files and changes in my profile while keeping any additions (in this case, an earlier &lt;code&gt;apt install htop&lt;/code&gt; I ran in &lt;code&gt;toolbox&lt;/code&gt;) inside the container.&lt;/p&gt;

&lt;h2&gt;The role of &lt;code&gt;podman&lt;/code&gt;&lt;/h2&gt;

&lt;p&gt;As I mentioned earlier, &lt;code&gt;toolbox&lt;/code&gt; is a script: a 2.5K+ SLOC script with close to 60 mentions of &lt;code&gt;podman&lt;/code&gt;. So &lt;code&gt;podman&lt;/code&gt; is the real hero here. If I run &lt;code&gt;toolbox list&lt;/code&gt;, I get:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bureado@crucia:~$ toolbox list
IMAGE ID      IMAGE NAME                       CREATED
b31c77acc328  localhost/debian-toolbox:latest  About an hour ago
1787a6a86277  localhost/ubuntu-toolbox:latest  3 hours ago

CONTAINER ID  CONTAINER NAME         CREATED            STATUS                IMAGE NAME
267d9c17c3f8  debian-toolbox-latest  About an hour ago  Up About an hour ago  localhost/debian-toolbox:latest
baf2ed3ece9b  ubuntu-toolbox-latest  3 hours ago        Up 2 hours ago        localhost/ubuntu-toolbox:latest
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This lists two OCI container images and two running containers. Doesn't that look like the output of &lt;code&gt;docker ps&lt;/code&gt;? Well, that's exactly what &lt;code&gt;podman&lt;/code&gt; can replicate:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bureado@crucia:~$ podman images
REPOSITORY                 TAG        IMAGE ID       CREATED             SIZE
localhost/debian-toolbox   latest     b31c77acc328   About an hour ago   445 MB
localhost/ubuntu-toolbox   latest     1787a6a86277   3 hours ago         340 MB
docker.io/library/ubuntu   19.04      c88ac1f841b7   2 weeks ago         72.4 MB
docker.io/library/debian   unstable   0e26bcfa03fc   5 weeks ago         122 MB
bureado@crucia:~$ podman ps
CONTAINER ID  IMAGE                            COMMAND               CREATED            STATUS                PORTS  NAMES
267d9c17c3f8  localhost/debian-toolbox:latest  toolbox --verbose...  About an hour ago  Up About an hour ago         debian-toolbox-latest
baf2ed3ece9b  localhost/ubuntu-toolbox:latest  toolbox --verbose...  3 hours ago        Up 3 hours ago               ubuntu-toolbox-latest
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;And of course, &lt;code&gt;podman&lt;/code&gt; allows me to perform operations like &lt;code&gt;start&lt;/code&gt;, &lt;code&gt;stop&lt;/code&gt; and &lt;code&gt;run&lt;/code&gt; -- all rootless. In fact, I can take a Dockerfile and build it with &lt;code&gt;podman&lt;/code&gt;, as you can see below.&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bureado@crucia:~$ podman build . -t debian-toolbox
STEP 1: FROM docker.io/library/debian:unstable
STEP 2: ENV NAME=debian-toolbox VERSION=unstable
--&amp;gt; Using cache 96545d7a49c3a47a39cb9f2fc8c6b40d5240b02dfa1a0c2ac9efcf976d67d44c
...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;I encourage you to take a look at &lt;code&gt;podman&lt;/code&gt;, which also runs on Mac (and even &lt;a href="https://www.redhat.com/sysadmin/podman-windows-wsl2"&gt;WSL2&lt;/a&gt;) by virtue of its &lt;a href="https://github.com/containers/libpod/blob/master/docs/tutorials/remote_client.md"&gt;remote-client support&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;There are also technologies underlying &lt;code&gt;podman&lt;/code&gt;, such as &lt;code&gt;conmon&lt;/code&gt;; you can learn more by replaying this FOSDEM session: &lt;a href="https://fosdem.org/2020/schedule/event/containers_podman/"&gt;Podman - The Powerful Container Multi-Tool&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;Where did the image come from?&lt;/h2&gt;

&lt;p&gt;You probably noticed from my output that there are two &lt;code&gt;(debian|ubuntu)-toolbox-latest&lt;/code&gt; &lt;em&gt;containers&lt;/em&gt; that are using two &lt;code&gt;(debian|ubuntu)-toolbox&lt;/code&gt; &lt;em&gt;images&lt;/em&gt;. Where did those come from? (And yes, this also means this article is probably helpful if you use Ubuntu instead of Debian.)&lt;/p&gt;

&lt;p&gt;This image is supposed to be functionally close to your actual host working environment (to provide consistency) and, in fact, it needs two special &lt;code&gt;LABEL&lt;/code&gt;s asserting so in order for &lt;code&gt;toolbox&lt;/code&gt; to recognize it.&lt;/p&gt;

&lt;p&gt;Here's an example of a &lt;a href="https://github.com/containers/toolbox/blob/e27d7cafa45303100db91797179ecec1c4abb9a3/images/debian/unstable/Dockerfile"&gt;Debian image for toolbox&lt;/a&gt; where you can see the additional packages being installed and the labels being declared.&lt;/p&gt;
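&lt;p&gt;As a sketch, the interesting bits of such a Dockerfile look roughly like this. The two &lt;code&gt;LABEL&lt;/code&gt;s are the ones used by the containers/toolbox images at the time of writing; the package list is purely illustrative.&lt;/p&gt;

```dockerfile
# Illustrative sketch of a toolbox-compatible Debian image; see the
# linked Dockerfile in the containers/toolbox repo for the real thing.
FROM docker.io/library/debian:unstable

ENV NAME=debian-toolbox VERSION=unstable

# These labels are what tell toolbox the image is meant for it.
LABEL com.github.containers.toolbox="true"
LABEL com.github.debarshiray.toolbox="true"

# A few comfort packages so the container feels like a workstation
# (purely illustrative).
RUN apt-get update
RUN apt-get install -y --no-install-recommends sudo procps
```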

&lt;p&gt;Once you've built such an image (with &lt;code&gt;podman&lt;/code&gt;), you instruct &lt;code&gt;toolbox&lt;/code&gt; to recognize it and create a standby container which you can &lt;code&gt;enter&lt;/code&gt;. Altogether, it looks like this:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;podman build -t debian-toolbox -f Dockerfile
toolbox create -i localhost/debian-toolbox:latest
toolbox enter -c debian-toolbox-latest
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;And every time you &lt;code&gt;toolbox enter&lt;/code&gt;, you can mutate that system without polluting your main one, except for files in your profile. That's it! I learned a good deal about the underlying technology just by going through this process and reading the code.&lt;/p&gt;

&lt;p&gt;In the coming months, I'll be evaluating this against my current setup of using &lt;code&gt;nix&lt;/code&gt; and Python's &lt;code&gt;venv&lt;/code&gt; while looking at more emerging technology that can be applied to this space, from other tools in this stack like &lt;code&gt;buildah&lt;/code&gt; and &lt;code&gt;skopeo&lt;/code&gt; to things like &lt;a href="https://microk8s.io/"&gt;MicroK8s&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://twitter.com/bureado"&gt;Let me know&lt;/a&gt; if you find this useful and/or interesting, and comments always welcome!&lt;/p&gt;

</description>
      <category>debian</category>
      <category>docker</category>
      <category>containers</category>
    </item>
    <item>
      <title>Getting started with distri on Azure</title>
      <dc:creator>José Miguel Parrella</dc:creator>
      <pubDate>Wed, 04 Sep 2019 17:55:33 +0000</pubDate>
      <link>https://dev.to/bureado/getting-started-with-distri-on-azure-1kik</link>
      <guid>https://dev.to/bureado/getting-started-with-distri-on-azure-1kik</guid>
      <description>&lt;p&gt;You might have come across Michael Stapelberg's &lt;a href="https://michael.stapelberg.ch/posts/2019-08-17-linux-package-managers-are-slow/" rel="noopener noreferrer"&gt;Linux package managers are slow&lt;/a&gt; write-up earlier this month. His hypothesis is that indexes, hooks/triggers and archive operations are a big source of issues with conventional Linux package managers.&lt;/p&gt;

&lt;p&gt;Concurrently, &lt;a href="https://michael.stapelberg.ch/posts/2019-08-17-introducing-distri/" rel="noopener noreferrer"&gt;Michael announced &lt;strong&gt;distri&lt;/strong&gt;&lt;/a&gt;, a Linux distro designed to experiment with different approaches in these areas. I've been playing around with &lt;a href="https://github.com/distr1/distri" rel="noopener noreferrer"&gt;&lt;strong&gt;distri&lt;/strong&gt;&lt;/a&gt; after getting it up and running on Azure. Here's what I learned.&lt;/p&gt;

&lt;h2&gt;What's in it?&lt;/h2&gt;

&lt;p&gt;In its current release, &lt;code&gt;jackherer&lt;/code&gt;, &lt;strong&gt;distri&lt;/strong&gt; ships Linux 5.1.9, systemd, and 170+ other packages including Python 2.7, Python 3, Perl, Golang and Docker, all the way to Xorg and i3.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;distri&lt;/code&gt; is also a command-line tool used to build packages and the distro itself (images, repos). The &lt;a href="https://repo.distr1.org/distri/jackherer/" rel="noopener noreferrer"&gt;online repo&lt;/a&gt; contains those images and binary packages. The binary packages are SquashFS files with accompanying metadata.&lt;/p&gt;

&lt;h2&gt;What makes it unique?&lt;/h2&gt;

&lt;p&gt;In addition to the &lt;code&gt;distri&lt;/code&gt; tool and the &lt;a href="https://github.com/distr1/distri/tree/master/pkgs" rel="noopener noreferrer"&gt;packaging definitions&lt;/a&gt; for 400+ packages, the &lt;a href="https://github.com/distr1/distri" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt; contains the code (Go) for a FUSE filesystem that assembles images from the local software store as a working, read-only filesystem for the running instance.&lt;/p&gt;

&lt;p&gt;So if you inspect the root of the filesystem, you'll notice that &lt;code&gt;/bin&lt;/code&gt;, &lt;code&gt;/share&lt;/code&gt; and &lt;code&gt;/lib&lt;/code&gt; (and their &lt;code&gt;/usr&lt;/code&gt; counterparts) point to &lt;code&gt;/ro&lt;/code&gt;. And, you guessed it, &lt;code&gt;/ro&lt;/code&gt; is handled by this FUSE component.&lt;/p&gt;

&lt;p&gt;Each package:version pair ships in a separate directory under &lt;code&gt;/ro&lt;/code&gt;. &lt;a href="https://distr1.org/#exchange-directories" rel="noopener noreferrer"&gt;Exchange directories&lt;/a&gt; are used for places where each package is expected to contribute files, such as &lt;code&gt;/usr/include&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The part that makes &lt;strong&gt;distri&lt;/strong&gt; unique is that this FUSE component lazily loads the images as needed. The SquashFS images, roughly equivalent to binary packages such as debs and rpms, live in a separate &lt;em&gt;store&lt;/em&gt; under &lt;code&gt;/roimg&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;What all of this implies is that package manager operations are very fast, because they only involve transporting a SquashFS file (and its &lt;code&gt;textproto&lt;/code&gt; metadata) to the local software store, and then letting the FUSE component work its magic by assembling it into the filesystem "view". Here's &lt;a href="https://asciinema.org/a/cwHaOq7LnY01lFB7kpQbAOVua" rel="noopener noreferrer"&gt;Michael showing how that works&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Neither the one-directory-per-package concept nor lazy loading via FUSE is new (Nix, &lt;a href="http://appfs.rkeene.org/web/index" rel="noopener noreferrer"&gt;AppFS&lt;/a&gt;), but I don't think I've seen the two put together in such a modern way. Michael says that this method provides &lt;a href="https://michael.stapelberg.ch/posts/2019-08-17-introducing-distri/#fhs-compat" rel="noopener noreferrer"&gt;just enough FHS compatibility&lt;/a&gt; for third-party applications to work, such as Chrome or Visual Studio Code.&lt;/p&gt;

&lt;h2&gt;Behind the scenes&lt;/h2&gt;

&lt;p&gt;About 80% of the packages in the &lt;strong&gt;distri&lt;/strong&gt; repo use the &lt;code&gt;cbuilder&lt;/code&gt; build method, but other available and popular methods include &lt;code&gt;gomodbuilder&lt;/code&gt; and &lt;code&gt;perlbuilder&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;One of Michael's topics of interest is &lt;a href="https://michael.stapelberg.ch/posts/2019-07-20-hooks-and-triggers/" rel="noopener noreferrer"&gt;hooks and triggers&lt;/a&gt;, which he argues make deserialization, idempotency, upstream standardization, etc., much harder. Only one package in &lt;code&gt;distri&lt;/code&gt; has a trigger, and that is &lt;code&gt;openssh&lt;/code&gt;, for host key generation.&lt;/p&gt;

&lt;p&gt;The packaging is very streamlined (relative to upstream), with only about 60 patches, 130 manual extra build flags, 115 manually declared runtime dependencies and 295 custom build steps across over 400 packages.&lt;/p&gt;

&lt;p&gt;Conceptually similar repos that you might want to compare include &lt;a href="https://github.com/NixOS/nixpkgs" rel="noopener noreferrer"&gt;nixpkgs&lt;/a&gt;, &lt;a href="https://github.com/spack/spack/tree/develop/var/spack/repos/builtin/packages" rel="noopener noreferrer"&gt;Spack&lt;/a&gt;, &lt;a href="https://formulae.brew.sh/formula/" rel="noopener noreferrer"&gt;Homebrew formulae&lt;/a&gt; or the &lt;a href="https://github.com/vmware/photon/tree/master/SPECS" rel="noopener noreferrer"&gt;Photon&lt;/a&gt; specs.&lt;/p&gt;

&lt;p&gt;This is me building &lt;code&gt;nano&lt;/code&gt; in distri. I use the &lt;code&gt;distri scaffold&lt;/code&gt; tool to create a &lt;code&gt;textproto&lt;/code&gt; file, declare my two build dependencies and go:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://asciinema.org/a/266011" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fasciinema.org%2Fa%2F266011.svg" alt="asciicast"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;...this will end with something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2019/09/04 16:59:04   step 0: 2m31.51560212s (command: [${DISTRI_SOURCEDIR}/configure --host=x86_64-pc-linux-gnu --prefix=${DISTRI_PREFIX} --sysconfdir=/etc --disable-dependency-tracking])
2019/09/04 16:59:04   step 1: 18.112800177s (command: [make -j8 V=1])
2019/09/04 16:59:04   step 2: 2.581871913s (command: [make install DESTDIR=${DISTRI_DESTDIR} PREFIX=${DISTRI_PREFIX}])
[...]
2019/09/04 16:59:05 nano runtime deps: ["file-amd64-5.34-3" "glibc-amd64-2.27-3" "zlib-amd64-1.2.11-3" "ncurses-amd64-6.1-5"]
[...]
2019/09/04 16:59:05 package successfully created in /root/distri/build/distri/pkg/nano-amd64-4.3-1.squashfs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can then place this package in your local store, where the FUSE component will pick it up, making the &lt;code&gt;nano&lt;/code&gt; command available in your &lt;code&gt;$PATH&lt;/code&gt;. Neat!&lt;/p&gt;

&lt;h2&gt;Beyond packaging&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;distri&lt;/strong&gt; runs on Azure pretty much out of the box. Images can be converted to the VHD format, and the serial console can optionally be enabled, but because it doesn't let &lt;code&gt;root&lt;/code&gt; SSH in by default (which is good), you might want to &lt;a href="https://github.com/distr1/distri/issues/29#issuecomment-524669776" rel="noopener noreferrer"&gt;consider the steps I documented&lt;/a&gt;, which include uploading a disk and creating a VM with the Azure CLI.&lt;/p&gt;
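&lt;p&gt;In broad strokes, that flow can be sketched with the Azure CLI as follows. All resource names and the VHD filename below are placeholders, and the commands assume an authenticated &lt;code&gt;az&lt;/code&gt; session; see the linked issue for the exact steps I used.&lt;/p&gt;

```shell
# Rough sketch: upload a distri VHD and boot a VM from it.
# distri-rg, distristorage, distri-os, distri0 and distri.vhd are placeholders.
az group create --name distri-rg --location westus2
az storage account create --name distristorage --resource-group distri-rg
az storage container create --name vhds --account-name distristorage
az storage blob upload --account-name distristorage --container-name vhds \
    --type page --file distri.vhd --name distri.vhd
az disk create --resource-group distri-rg --name distri-os \
    --source https://distristorage.blob.core.windows.net/vhds/distri.vhd
az vm create --resource-group distri-rg --name distri0 \
    --attach-os-disk distri-os --os-type linux
```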

&lt;p&gt;I also wrote a basic &lt;code&gt;/etc/os-release&lt;/code&gt; for &lt;strong&gt;distri&lt;/strong&gt;, because one is &lt;a href="https://github.com/distr1/distri/issues/33" rel="noopener noreferrer"&gt;currently not included&lt;/a&gt;. Here's what that looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@distri0:~# hostnamectl 
   Static hostname: distri0
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 201aa7a551c14f299bb973f1ce206503
           Boot ID: e511d5411c804a7e8f24288025219796
    Virtualization: microsoft
  Operating System: distri (jackherer)
            Kernel: Linux 5.1.9
      Architecture: x86-64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;More demos and &lt;a href="https://github.com/distr1/distri#cool-things-to-try" rel="noopener noreferrer"&gt;cool things to try&lt;/a&gt; are on the GitHub repo. If you're interested in &lt;strong&gt;distri&lt;/strong&gt;, I suggest joining the &lt;a href="https://www.freelists.org/list/distri" rel="noopener noreferrer"&gt;mailing list&lt;/a&gt; or the conversation on &lt;a href="https://github.com/distr1/distri/issues" rel="noopener noreferrer"&gt;GitHub Issues&lt;/a&gt; for the repo.&lt;/p&gt;

&lt;p&gt;If you're in Europe and attending &lt;a href="https://all-systems-go.io/" rel="noopener noreferrer"&gt;All Systems Go&lt;/a&gt; I'd love to meet and talk more about Linux and package management. And if you're in the USA and attending &lt;a href="https://allthingsopen.org/" rel="noopener noreferrer"&gt;All Things Open&lt;/a&gt; join me for a session on &lt;a href="https://allthingsopen.org/talk/2-for-1-the-future-of-linux-distros-in-the-cloud/" rel="noopener noreferrer"&gt;The Future of Linux Distros in the Cloud&lt;/a&gt;, October 14th, 2019 in Raleigh.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>packagemanagement</category>
      <category>azure</category>
      <category>go</category>
    </item>
    <item>
      <title>5 things I learned at KubeCon Barcelona</title>
      <dc:creator>José Miguel Parrella</dc:creator>
      <pubDate>Mon, 03 Jun 2019 21:04:44 +0000</pubDate>
      <link>https://dev.to/bureado/5-cosas-que-aprendi-en-kubecon-barcelona-mcj</link>
      <guid>https://dev.to/bureado/5-cosas-que-aprendi-en-kubecon-barcelona-mcj</guid>
      <description>&lt;p&gt;Hace un par de semanas visité Barcelona para participar en la &lt;a href="https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2019/"&gt;KubeCon + CloudNativeCon Europe 2019&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you couldn't make it, many of the slide decks as well as the &lt;a href="https://www.youtube.com/playlist?list=PLj6h78yzYM2PpmMAnvpvsnR4c27wJePh3"&gt;session videos&lt;/a&gt; are already available.&lt;/p&gt;

&lt;p&gt;Here are 5 things (some of them quite unexpected) I learned at KubeCon Europe, Barcelona 2019.&lt;/p&gt;

&lt;h2&gt;Kubernetes uses controllers to scale better&lt;/h2&gt;

&lt;p&gt;Kubernetes is a very complex software project, with more than 2 million lines of code, multiple components and several extension mechanisms. And it keeps growing.&lt;/p&gt;

&lt;p&gt;In his talk &lt;a href="https://www.youtube.com/watch?v=zCXiXKMqnuE"&gt;The Kubernetes Control Plane for Busy People Who Like Pictures&lt;/a&gt;, Daniel Smith explains why, in a distributed system like Kubernetes, monitoring is very hard to scale with a conventional state machine.&lt;/p&gt;

&lt;p&gt;By using control loops, Kubernetes can add new features and scale linearly rather than exponentially.&lt;/p&gt;

&lt;p&gt;The talk also describes several categories of controllers (classic, injection, bijection, union) and uses some of the most popular controllers as examples. I wasn't expecting to get interested in control theory at a Kubernetes talk in Barcelona...&lt;/p&gt;

&lt;h2&gt;&lt;code&gt;topologyKey&lt;/code&gt; helps improve the availability of your cloud applications&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=Cd7aJiQLIpM&amp;amp;list=PLj6h78yzYM2PpmMAnvpvsnR4c27wJePh3&amp;amp;index=190&amp;amp;t=0s"&gt;Improving Availability for Stateful Applications in Kubernetes&lt;/a&gt; de Michelle Au fue mi charla favorita de este evento.&lt;/p&gt;

&lt;p&gt;That's because, rather than assuming the audience is already choosing the best storage mechanism for their Kubernetes applications, Michelle describes how each type of storage (whether traditional or cloud) fits (or doesn't) with Kubernetes.&lt;/p&gt;

&lt;p&gt;Whether you have a NAS, a SAN, a cloud managed disk or something else, Kubernetes uses different concepts and logic to guarantee that your application state is distributed and resilient.&lt;/p&gt;

&lt;p&gt;For example, using the &lt;code&gt;failure-domain&lt;/code&gt; value in &lt;code&gt;topologyKey&lt;/code&gt; lets you &lt;a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature"&gt;assign zones to resources&lt;/a&gt; so that Kubernetes knows where your storage lives and what to do in case of failure.&lt;/p&gt;
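&lt;p&gt;As a sketch, spreading the replicas of a stateful workload across zones with pod anti-affinity looked roughly like this at the time (&lt;code&gt;failure-domain.beta.kubernetes.io/zone&lt;/code&gt; was the beta name of the zone label back then; all other names are illustrative):&lt;/p&gt;

```yaml
# Illustrative sketch: ask the scheduler not to place two "db" pods
# in the same availability zone, using the (then beta) zone label.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  selector:
    matchLabels:
      app: db
  serviceName: db
  replicas: 3
  template:
    metadata:
      labels:
        app: db
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: db
              topologyKey: failure-domain.beta.kubernetes.io/zone
      containers:
        - name: db
          image: postgres:11
```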

&lt;h2&gt;CNAB bundles the elements that describe a complex modern application&lt;/h2&gt;

&lt;p&gt;A topic I discuss in &lt;a href="https://speakerdeck.com/bureado/the-future-of-linux-packaging"&gt;The Future of Linux Packaging&lt;/a&gt; is that these days an application is no longer just a package or a source-code project; it has become a complex concept that sometimes spans multiple servers or systems, managed services and more.&lt;/p&gt;

&lt;p&gt;Chris Crone gave an excellent introduction to &lt;a href="https://cnab.io/"&gt;CNAB&lt;/a&gt; in Barcelona, explaining that deploying an application today takes several steps and tools, including &lt;code&gt;apt&lt;/code&gt;, &lt;code&gt;terraform&lt;/code&gt;, &lt;code&gt;helm&lt;/code&gt;, &lt;code&gt;kubectl&lt;/code&gt;, &lt;code&gt;aws|gcloud|az&lt;/code&gt; and more.&lt;/p&gt;

&lt;p&gt;CNAB tackles this problem by bundling the resources that describe an application and introducing the concept of a "launcher" image that deploys it. Be sure to take a look at &lt;a href="https://www.youtube.com/watch?v=r6aqKhvdsRs"&gt;his presentation&lt;/a&gt; and give the &lt;a href="https://twitter.com/cnab_spec"&gt;CNAB team&lt;/a&gt; your feedback.&lt;/p&gt;

&lt;p&gt;(By the way, have you tried &lt;a href="https://www.youtube.com/watch?v=lYzrhzLAxUI&amp;amp;feature=youtu.be"&gt;Helm 3&lt;/a&gt; yet?)&lt;/p&gt;

&lt;h2&gt;Just dropping &lt;code&gt;root&lt;/code&gt; in your containers is not enough&lt;/h2&gt;

&lt;p&gt;If you already work with containers, you've surely wondered how you can trust all the &lt;em&gt;layers&lt;/em&gt; your &lt;em&gt;runtime&lt;/em&gt; pulls down when you build your images. It's an important topic because, at the end of the day, it's your application, your users' data and your production systems that malicious or defective code could be reaching.&lt;/p&gt;

&lt;p&gt;En &lt;a href="https://www.youtube.com/watch?v=IpMPRC-ybJI"&gt;Rootless, Reproducible, and Hermetic: Secure Container Build Showdown&lt;/a&gt;, Andrew describe varios vectores de ataque que incluyen el uso de un &lt;code&gt;FROM&lt;/code&gt; malicioso, ataques al servidor donde se construye la imagen a través de &lt;code&gt;RUN&lt;/code&gt;, uso de &lt;code&gt;--privileged&lt;/code&gt; e incluso ataques dentro del contenedor.&lt;/p&gt;

&lt;p&gt;Andrew also reviews nine different tools (other than &lt;code&gt;docker build&lt;/code&gt;) that, while they work without &lt;code&gt;root&lt;/code&gt;, offer varying levels of Docker and Kubernetes support as well as varying levels of protection. He also discusses the effort to standardize the image build interface through the &lt;a href="https://github.com/containerbuilding/cbi"&gt;Container Build Interface&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;Slack is Kubernetes' main collaboration mechanism&lt;/h2&gt;

&lt;p&gt;Finally, I stopped by &lt;a href="https://www.youtube.com/watch?v=a17FLjVDUOc"&gt;State of Kubernetes Contributor Community&lt;/a&gt;, where Paris Pittman talked about the health of the project.&lt;/p&gt;

&lt;p&gt;That's where I discovered that Slack is Kubernetes' main collaboration medium (mailing lists, websites and even GitHub or Twitter trail far behind...). Since I had already watched Rael García's presentation on &lt;a href="https://www.youtube.com/watch?v=tMCeY71o8aA"&gt;the documentation working group&lt;/a&gt;, which is also home to the &lt;a href="https://kubernetes.slack.com/messages/CH7GB2E3B/"&gt;kubernetes-docs-es&lt;/a&gt; team, I decided to drop by Slack.&lt;/p&gt;

&lt;p&gt;And what a community! Within a couple of days I had made &lt;a href="https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&amp;amp;q=is%3Apr+author%3Abureado+"&gt;my first &lt;em&gt;pull requests&lt;/em&gt;&lt;/a&gt; and started writing a &lt;a href="https://gist.github.com/bureado/3c6ebccf3f70f2ade84d5f3ab399a821"&gt;tips and tricks&lt;/a&gt; document that might encourage you to contribute to the project too.&lt;/p&gt;

&lt;p&gt;What have you learned about Kubernetes lately? &lt;a href="https://twitter.com/bureado"&gt;Share it on Twitter&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>spanish</category>
      <category>security</category>
      <category>packaging</category>
    </item>
    <item>
      <title>Open source sustainability is a 🌎 debate</title>
      <dc:creator>José Miguel Parrella</dc:creator>
      <pubDate>Sat, 04 May 2019 20:27:52 +0000</pubDate>
      <link>https://dev.to/bureado/open-source-sustainability-is-a-debate-37p1</link>
      <guid>https://dev.to/bureado/open-source-sustainability-is-a-debate-37p1</guid>
      <description>&lt;p&gt;For the past decade, I've received a daily ping with open source news coverage. It's a lot. One minute I'm reading a Reddit thread on distros, then LWN coverage on Python 2.7 or a Jepsen report on a new product when SJVN tweets a new ZDNet article on corporate contributions.&lt;/p&gt;

&lt;p&gt;This coverage is, almost always, written in English by and for US audiences. At first glance, this seems inconsequential, as anyone who started with open source in non-English-speaking countries two decades ago made do even with the sorry state of documentation, i18n and l10n back then.&lt;/p&gt;

&lt;p&gt;But during the second half of 2018, as the debate dug deeper into open source sustainability (a broad collection of issues including licensing, business models, ethics, diversity &amp;amp; inclusion, etc.) it became apparent that the premises and analysis could use a global perspective.&lt;/p&gt;

&lt;p&gt;Earlier this year I set about testing whether there is a homogeneous understanding of the underlying concepts of this debate.&lt;/p&gt;

&lt;p&gt;I interviewed a couple dozen open source influencers and community leaders from around the globe, and &lt;a href="https://gist.github.com/bureado/5e2152ac6e6a8d920baeb5f2678b97d3"&gt;shared my findings&lt;/a&gt; at the Open Source Leadership Summit.&lt;/p&gt;

&lt;p&gt;My goal was not to test or even find the answers. For that, the sample needs to be several orders of magnitude larger. My goal was to test the &lt;em&gt;questions&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;A key takeaway was that even the words don't translate well. What do "income inequality", "wealth redistribution", "freeloading", "loss-leader", "strip mining" or "sustainability" translate to in Arabic, French or Spanish? It's "free" vs. "gratis" all over again! On top of that, many discussions assume a finance/economics background which in turn is a reflection of the VC-driven reality of open source in the US.&lt;/p&gt;

&lt;p&gt;I wondered how much weight non-US influencers put on themes such as open source licensing or business models compared to income inequality or survival of the maintainer. Maybe the quality of VC funding isn't a big concern simply because it isn't available to begin with, but do non-US audiences place any weight on the role of freeloaders or the age differences between contributors? I discovered my Twitter network polled in a very different way than my non-US friends.&lt;/p&gt;

&lt;p&gt;I also wanted to understand whether some of the prevailing models, such as open source foundations or codes of conduct, met the expectations of this group. Was this a classic hammer-in-search-of-a-nail situation? I wondered what people who agree open source has an income inequality problem believe about the role of foundations and competition: if I strongly believe in only one problem, do I also believe in only one "savior"?&lt;/p&gt;

&lt;p&gt;In the US and other developed markets, there's been no shortage of proposed solutions to the components of the sustainability problem, from new licenses and subscription models to distributed funding and foundations.&lt;/p&gt;

&lt;p&gt;But are those solutions enforceable or viable outside of the US? Are there any discussion forums for this problem south of the equator? Can a distributed funding model disburse funds in countries like Egypt? What about Nepal, or Venezuela? (All are in the Top 10 for largest public GitHub org growth.)&lt;/p&gt;

&lt;p&gt;There are areas where I plan to research more, including in the role of government and regulators. My upbringing in the open source world had a lot of this, back in the OpenXML and open source legislation days south of the equator, and I wonder how that has evolved in a cloud world.&lt;/p&gt;

&lt;p&gt;Possibly the most sobering takeaway for me was that when talking to non-US audiences, it seems impossible to talk about project sustainability without talking about maintainer survival. Whether that was meant monetarily or in terms of recognition, lots of verbatims pointed to the expectation that project health should sustain the individual developer's life.&lt;/p&gt;

&lt;p&gt;Nowadays, most project leaders understand that they need to listen to and care for a global audience if they want their project to achieve global scale: there are more GitHub contributors outside of the US than in the US and this gap &lt;a href="https://octoverse.github.com/people.html#location"&gt;continues to increase every year&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Similarly, I believe open source thought leaders addressing questions of open source sustainability need to bring this discussion to a global audience.&lt;/p&gt;

&lt;p&gt;If you have a sample you'd like to poll on this topic, check out the &lt;a href="https://gist.github.com/bureado/5e2152ac6e6a8d920baeb5f2678b97d3"&gt;resources&lt;/a&gt; from my talk. I would love to hear your findings or chat with you on this topic, so feel free to reach out on Twitter or elsewhere!&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>sustainability</category>
      <category>inclusion</category>
      <category>diversity</category>
    </item>
    <item>
      <title>Getting started with Photon OS on Azure</title>
      <dc:creator>José Miguel Parrella</dc:creator>
      <pubDate>Sun, 10 Mar 2019 00:54:17 +0000</pubDate>
      <link>https://dev.to/bureado/getting-started-with-photon-os-on-azure-32h8</link>
      <guid>https://dev.to/bureado/getting-started-with-photon-os-on-azure-32h8</guid>
      <description>&lt;p&gt;&lt;a href="https://vmware.github.io/photon/"&gt;Photon OS&lt;/a&gt; is an open source, minimal Linux container host that is optimized for cloud-native applications. On Azure, it ships with just over a hundred packages including systemd, cloud-init and docker but Photon offers over a thousand packages in their repos like Go, .NET Core, Postgres, Tomcat, Zookeeper or Kubernetes!&lt;/p&gt;

&lt;p&gt;This is a simplified version of the Azure quickstart experience in the &lt;a href="https://vmware.github.io/photon/assets/files/html/3.0/photon_installation/Running-Photon-OS-on-Microsoft-Azure.html"&gt;official Photon OS documentation&lt;/a&gt;. It was tested with Photon OS 3.0 GA and &lt;code&gt;azure-cli&lt;/code&gt; version 2.0.38.&lt;/p&gt;

&lt;h2&gt;Prerequisites&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Download the VHD for Azure from &lt;a href="https://github.com/vmware/photon/wiki/Downloading-Photon-OS"&gt;GitHub&lt;/a&gt; and extract with &lt;code&gt;tar xf&lt;/code&gt; in your local system&lt;/li&gt;
&lt;li&gt;Ensure you have the &lt;a href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest"&gt;latest Azure CLI&lt;/a&gt; (&lt;code&gt;az&lt;/code&gt;) available in your working system and that you've &lt;a href="https://docs.microsoft.com/en-us/cli/azure/authenticate-azure-cli?view=azure-cli-latest"&gt;logged in&lt;/a&gt; with &lt;code&gt;az login&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;
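
&lt;p&gt;For step 1, the download is a tarball and the exact file name varies per build; for the 3.0 GA image used throughout this post, the extraction looks roughly like this:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ tar xf photon-azure-3.0-26156e2.vhd.tar.gz
$ ls -lh photon-azure-3.0-26156e2.vhd
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;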

&lt;h2&gt;Getting started&lt;/h2&gt;

&lt;p&gt;Just like any other custom Linux VHD, you'll create a few workspaces, upload the VHD and create a new VM based on it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/bureado/ff42e1c612a71a6fcd940a19845ef051"&gt;Here's a script&lt;/a&gt; that simplifies the VM creation. It assumes that an SSH keypair is available in the profile of whatever user runs this script. It expects the downloaded and extracted VHD as an argument, for example:  &lt;code&gt;./script.sh photon-azure-3.0-26156e2.vhd&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./script.sh ./photon-azure-3.0-26156e2.vhd                                                      
+ set -e                                                                                                                  
+ GROUP=photon-rg                                                                                                         
+ STORAGE=phrg999                
+ LOCATION=southcentralus                                      
+ VM_NAME=photon-vm                                                           
+ STORAGE_CONTAINER=vhds                                                                                                  
+ IMAGE_PATH=./photon-azure-3.0-26156e2.vhd                                                                               
+ basename ./photon-azure-3.0-26156e2.vhd                                                                                
+ IMAGE_NAME=photon-azure-3.0-26156e2.vhd                                                                                 
+ az group create -n photon-rg -l southcentralus                     
+ az storage account create -n phrg999 -g photon-rg  
...                                  
+ az vm create -n photon-vm -g photon-rg --os-type linux --image https://phrg999.blob.core.windows.net/vhds/photon-azure-3.0-26156e2.vhd --use-unmanaged-disk --storage-account phrg999                                                             
{                                                          
...                     
  "powerState": "VM running",
...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
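
&lt;p&gt;If you'd rather not fetch the gist, the flow in the transcript above boils down to something like the sketch below. The resource names (&lt;code&gt;photon-rg&lt;/code&gt;, &lt;code&gt;phrg999&lt;/code&gt; and so on) are placeholders, and storage account names must be globally unique, so adjust them for your environment:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/sh
# Sketch only: mirrors the transcript above; tweak names before running
set -e

IMAGE_PATH="$1"                       # e.g. ./photon-azure-3.0-26156e2.vhd
IMAGE_NAME=$(basename "$IMAGE_PATH")
GROUP=photon-rg
STORAGE=phrg999                       # must be globally unique
LOCATION=southcentralus
STORAGE_CONTAINER=vhds
VM_NAME=photon-vm

az group create -n "$GROUP" -l "$LOCATION"
az storage account create -n "$STORAGE" -g "$GROUP"
az storage container create --account-name "$STORAGE" -n "$STORAGE_CONTAINER"
az storage blob upload --account-name "$STORAGE" -c "$STORAGE_CONTAINER" \
    -f "$IMAGE_PATH" -n "$IMAGE_NAME" --type page
az vm create -n "$VM_NAME" -g "$GROUP" --os-type linux \
    --image "https://$STORAGE.blob.core.windows.net/$STORAGE_CONTAINER/$IMAGE_NAME" \
    --use-unmanaged-disk --storage-account "$STORAGE"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;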



&lt;p&gt;The &lt;code&gt;az vm create&lt;/code&gt; command will output (in JSON, by default) information about the newly created VM, including a public IP address you can use to SSH into the VM. If you missed it, run &lt;code&gt;az vm list-ip-addresses -o table&lt;/code&gt;. Once you've SSHed in, you can verify the environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bureado@photon4 [ ~ ]$ systemd-analyze 
Startup finished in 1.278s (kernel) + 2.177s (initrd) + 14.546s (userspace) = 18.002s
multi-user.target reached after 14.532s in userspace
bureado@photon4 [ ~ ]$ cloud-init --version
/usr/bin/cloud-init 18.3
bureado@photon4 [ ~ ]$ uname -a
Linux photon4 4.19.15-3.ph3 #1-photon SMP Mon Feb 25 14:48:35 UTC 2019 x86_64 GNU/Linux
bureado@photon4 [ ~ ]$ curl -s -H Metadata:true "http://169.254.169.254/metadata/instance?api-version=2017-08-01" | jq .compute.location
"westus2"
bureado@photon4 [ ~ ]$ 
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;Cleaning up&lt;/h2&gt;

&lt;p&gt;If you used a unique resource group name in the &lt;code&gt;$GROUP&lt;/code&gt; variable above, you can remove all resources by running &lt;code&gt;az group delete -g &amp;lt;group name&amp;gt; --yes&lt;/code&gt;. This removes everything the script created, including the uploaded blob, but it won't delete the VHD from your local working folder.&lt;/p&gt;

&lt;p&gt;Have fun!&lt;/p&gt;

</description>
      <category>photon</category>
      <category>linux</category>
      <category>distros</category>
      <category>azure</category>
    </item>
    <item>
      <title>A few of Microsoft's snow tracks in open source engagement</title>
      <dc:creator>José Miguel Parrella</dc:creator>
      <pubDate>Tue, 12 Feb 2019 05:16:00 +0000</pubDate>
      <link>https://dev.to/bureado/open-source-engagement-snow-tracks-2lkh</link>
      <guid>https://dev.to/bureado/open-source-engagement-snow-tracks-2lkh</guid>
      <description>&lt;p&gt;My colleague Jeff &lt;a href="https://twitter.com/jeffmcaffer/status/1093678093066096640"&gt;recently shared&lt;/a&gt; an outstanding model to assess organizational readiness, challenges and aspirations when it comes to participating in open source.&lt;/p&gt;

&lt;p&gt;I was already familiar with this model, since Jeff has driven it at Microsoft for a while and I've had a chance to learn from him and his team, so I was immensely happy to see him &lt;a href="https://mcaffer.com/2019/02/Open-source-engagement"&gt;share it in long form with the world&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Shortly after, there was a &lt;a href="https://twitter.com/ibrahimatlinux/status/1093923896816918528"&gt;provocative follow-up question&lt;/a&gt; from LFDL's Ibrahim Haddad: how do you advance up this ladder? Ibrahim offered &lt;a href="https://twitter.com/ibrahimatlinux/status/1093924496010964992"&gt;some ideas&lt;/a&gt;, ranging from securing executive support to being involved in the TODO Group and keeping a comprehensive component inventory.&lt;/p&gt;

&lt;p&gt;Inspired by all of this, I wanted to share some of my experiences after ~10 years driving open source change at Microsoft. Many learnings are our own; some come from our partners and even from customers who often cross-pollinate our mutual open source journeys.&lt;/p&gt;

&lt;h2&gt;Embrace the branches&lt;/h2&gt;

&lt;p&gt;The journey to pervasive open source engagement in an organization isn't linear. Imagine this. &lt;/p&gt;

&lt;p&gt;Some years from now we'll reflect back on how hard it was to move past the initial frustration with open source. You helped hire a dream team of open sourcerers but not much change was happening. Everyone was on their own mission, and at times you felt hopeless. But today, what a difference! You operate proactively, control your own destiny and people know you and come to you for guidance.&lt;/p&gt;

&lt;p&gt;Until they don't. Oftentimes, a new "hype" branch will stem from a more mature stage in your open source journey, triggered by a competitor, a new product, a new market or just the &lt;a href="https://redmonk.com/sogrady/2018/12/21/cycles-oss/"&gt;ups and downs&lt;/a&gt; of this crazy industry of ours.&lt;/p&gt;

&lt;p&gt;Resist the urge to exercise extreme control over branches. Ask your mentors what a reasonable level of structure to offer the branch would be, then aim even lower: anything growing rapidly or moving a lot of resources carries politics that will only drag you down.&lt;/p&gt;

&lt;p&gt;At Microsoft, I've had the privilege to collaborate on projects that range from bland components in places like Windows Update to cutting-edge research projects and anything in between whether under the Office brand, the Microsoft brand or even no brand.&lt;/p&gt;

&lt;p&gt;I work on Azure, but in my head I've known at least 4 Azures since PDC09. I lived through Microsoft Open Technologies, Port 25, the Open Source Technology Center, the Enterprise Open Source Group and many other, more obscure acronyms.&lt;/p&gt;

&lt;p&gt;Each one of those branched in hype &lt;em&gt;more than once&lt;/em&gt; but &lt;strong&gt;all&lt;/strong&gt; of those branches contributed to the corporate culture and organizational knowledge and proficiency we have on open source today.&lt;/p&gt;

&lt;p&gt;So branches eventually merge. And the best that you can do is to showcase the business value of engagement maturity and capture the learnings: at times, I've had to engage my colleagues at the Microsoft Library or the Microsoft Archives to "deposit" some knowledge that is too fragile for our document management and retention systems.&lt;/p&gt;

&lt;h2&gt;Secure a bench&lt;/h2&gt;

&lt;p&gt;There's no escaping it: executive support is core to the success of your open source strategy, regardless of the organization. Maybe the CEO is your sponsor &lt;em&gt;and&lt;/em&gt; the ideologist behind your open source strategy. But even if she isn't, you don't always need "inorganic" (authoritative, mandated, dictated) executive edicts.&lt;/p&gt;

&lt;p&gt;Sometimes you can create the conditions for executive support without calling in the air cover. You should still advocate actively, but by bringing an external perspective, maybe shining a light on what a competitor or an adjacent industry is doing with open source, you can generate enough energy for executive support.&lt;/p&gt;

&lt;p&gt;That's why I often suggest that internal open source advocates remain fully informed of their context: industry, trends, market research and intelligence. I fully believe a market research engine focused on open source technologies is part of the charter of those in charge of open source strategy, in no small part because they can attract and retain not only tactical executive support but a support &lt;em&gt;bench&lt;/em&gt; that can serve the strategy long term and across the spectrum.&lt;/p&gt;

&lt;h2&gt;Invest in the program&lt;/h2&gt;

&lt;p&gt;You can have the vision and the sponsors, maybe even the hero hires, but if you don't have the program and the resources for the program, you'll always depend on heroics and coin tosses - and open source always outpaces chance.&lt;/p&gt;

&lt;p&gt;Ibrahim recently shared his take on the &lt;a href="https://twitter.com/ibrahimatlinux/status/1094660138277920769"&gt;responsibilities of an open source program office&lt;/a&gt;. I don't see program offices as the place "&lt;em&gt;where open source happens&lt;/em&gt;" but rather the place "&lt;strong&gt;where we invest so that open source happens&lt;/strong&gt;".&lt;/p&gt;

&lt;p&gt;That doesn't mean it's where all the decisions are made, or where all the talent is hired. It's not where other teams dump or outsource their duties. It shouldn't be a bottleneck, and it only makes sense if it scales to "community scale". It's not only a place for people to come to, but a launchpad from which people go to the rest of the organization.&lt;/p&gt;

&lt;p&gt;And even with the program and the funding, it's in the &lt;em&gt;execution&lt;/em&gt; of an open source program (the tooling, the processes, the compliance efforts...) that a competitive differentiator can be made. No wonder so many in the industry collaborate in the TODO Group: to elevate the initial bar.&lt;/p&gt;

&lt;p&gt;There is no shortage of debate on what success looks like for an open source program office. But I think the program must enable the organization to know where they are in the engagement spectrum at any given point in time, anticipate where they want to be and what is in the critical path to get there.&lt;/p&gt;

&lt;p&gt;Without the program, the organization is instrument flying and everything is opportunistic. You wouldn't shortchange your organization in security response or customer support - don't do so in the open source program either.&lt;/p&gt;

&lt;h2&gt;Care about the product&lt;/h2&gt;

&lt;p&gt;Whether it's disruption, competitiveness, engineering economics or something else, organizations look at open source to derive value from it and make them better at doing what they do. This is why open source programs should deeply care about the product, whether that's a solution, a technology portfolio, a societal value proposition or something else.&lt;/p&gt;

&lt;p&gt;The program will often need to have an opinion on whether open sourcing is appropriate for a particular initiative, or on which types of open source components to adopt, and how, to get something done. It needs to understand the business imperatives for the product, and to read the product tea leaves.&lt;/p&gt;

&lt;p&gt;To me, this means spending quality time with product leaders, developers/engineers and everyone else that makes the business tick. You might not establish a functional relationship with those individuals, but a personal one is worth the time investment.&lt;/p&gt;

&lt;p&gt;What are their top technical dilemmas? What keeps them up at night? What resource challenges do they have? What kinds of fire drills or distractions do they face? And then, more specifically, which communities are their members part of, and where do they get external direction or pressure?&lt;/p&gt;

&lt;p&gt;I might be biased since I'm a product person, but in growing open source programs I believe having product management types can be really helpful, maybe even developing "product advocates" (on a rotational basis) that can represent and advocate for product when the program conducts business with their natural partners.&lt;/p&gt;

&lt;p&gt;This might turn into actual work for the open source program office. For example, if different teams are building on diverging build systems, obtaining components from disparate sources, or lacking alignment in tooling, the open source program office is often expected to resource, develop and maintain services for product units. Product is always a customer of the program office.&lt;/p&gt;

&lt;p&gt;Finally, caring about the product as an open source subject matter expert also means being able to discourage the business from doing things that only look good in form.&lt;/p&gt;

&lt;p&gt;I spent the first half of my career at Microsoft advocating for open source by hyping it up, and the second half advocating for it by keeping teams away from pitfalls. For example, our team has written "why NOT" guidance that sets realistic expectations for teams.&lt;/p&gt;

&lt;p&gt;This raises the bar so that the projects where we invest become clearer and our participation in the community grows deeper. It also strengthens the case for resource asks, or makes it clear there wasn't a case to begin with.&lt;/p&gt;

&lt;p&gt;There is business value in pushing for maturity and sophistication: teams can no longer afford to be the "hype laggards" of an organization, let alone an industry, so you're helping them hit the market at an advanced stage, too.&lt;/p&gt;

&lt;h2&gt;Develop the culture&lt;/h2&gt;

&lt;p&gt;I'm often reminded by &lt;a href="https://twitter.com/stephenrwalli"&gt;Stephen Walli&lt;/a&gt; of the famous "culture eats strategy for breakfast" maxim. At some point you realize your organization is well disciplined and resourced to execute most of the things we expect an open source program to do (it might even be on autopilot), but there's no purchase order you can approve for &lt;em&gt;culture&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;An open source culture permeates the entire organization. It should be an integral part of recruitment and talent development: that certainly hasn't gone unnoticed at &lt;a href="https://angel.co/blog/want-to-recruit-better-engineers-open-source-your-code"&gt;Facebook&lt;/a&gt;. Conversely, the absence of it, or a lack of authenticity, can alienate your value chain. If you sell software, it certainly aggravates the &lt;a href="https://lukekanies.com/my-losing-battle-with-enterprise-sales/"&gt;enterprise sales problem&lt;/a&gt; described by Luke Kanies.&lt;/p&gt;

&lt;p&gt;At Microsoft, I spend a significant amount of my time talking to others about open source culture and taking concrete steps towards developing said culture. Developing an open source culture should be an observable event, and while you can measure many aspects of it, I've often found success best described in terms of &lt;em&gt;moments of truth&lt;/em&gt; that should be as easy to digest and understand as the culture itself.&lt;/p&gt;

&lt;p&gt;A powerful mechanism I've found is to structure most training and communications around the culture theme. Oftentimes this takes the form of sharing &lt;em&gt;learnings&lt;/em&gt;: finding culturally relevant events in our open source journey that illustrate what we mean by a fledgling (or failing) culture, instead of trying to enumerate or dictate the aspects of the culture we find strategically convenient. It is true: culture is on a low-strategy diet.&lt;/p&gt;

&lt;p&gt;In times of change it's hard for organizations to find their internal or external voice and this is why functions like PR or Marketing are such an important part of developing an open source culture. In absence of cultural tenets and a strong "why we do this", those functions operate opportunistically with the results that we all love to critique.&lt;/p&gt;

&lt;p&gt;Something that has worked wonders for us is developing talent as ambassadors of the culture, equipping them with that voice. PR and Marketing and other functions can facilitate this, understanding that we lead with a transparency and enablement spirit.&lt;/p&gt;

&lt;p&gt;When speaking, I spend little time introducing myself before showing a &lt;a href="https://twitter.com/bureado/status/860207419766558721"&gt;collage&lt;/a&gt; of thousands of Microsoft contributors on GitHub. When it comes to culture, my goal is to speak on their behalf, and the other way around. Over time, I've found this speaks louder and more institutionally than one-offs, which can go wrong at any time, especially when your maturity tree has a lot of hype branches. People organically stay on message, less because of policing and more because the message makes sense.&lt;/p&gt;

&lt;h2&gt;Don't go in alone&lt;/h2&gt;

&lt;p&gt;Yours is not the only organization trying to derive value from open source. Your competitors certainly are, but so is everyone else in your value chain. Chances are that anyone Accounts Receivable or Accounts Payable touches is also trying to do something with open source. Therefore, units like partner and business development, customer success, customer support, etc., must all think ecosystem first.&lt;/p&gt;

&lt;p&gt;And by ecosystem I of course mean commercial and community partnerships. We talk about open source foundations and some working groups as if they were extensions of the standards work of the nineties, but the ecosystem is far more complicated than that. &lt;a href="https://medium.com/memory-leak/2018-the-biggest-year-for-open-source-software-ever-68d01b4751a7"&gt;Venture capital is a reality&lt;/a&gt; in open source today, and we can expect &lt;a href="https://os2g.unl.edu/"&gt;academia&lt;/a&gt;, &lt;a href="https://www.forbes.com/sites/federicoguerrini/2018/12/30/eu-to-offer-almost-1m-in-bug-bounties-on-open-source-software/#7d84c57011be"&gt;government&lt;/a&gt; and &lt;a href="https://www.wipro.com/en-US/open-source/"&gt;system integrators&lt;/a&gt; to play an increasingly stronger role in this phase of the cycle.&lt;/p&gt;

&lt;h2&gt;Final thoughts&lt;/h2&gt;

&lt;p&gt;If you've read this far, you've probably noticed I've mentioned a good number of functions that, in any medium- to large-sized organization, are performed by different (maybe even competing) divisions and disciplines, so you might be wondering how open source change agents can or should work with their colleagues.&lt;/p&gt;

&lt;p&gt;I've been fortunate enough to have worked with open source at Microsoft across the entire org chart: field sales, corporate marketing, business planning, talent development, partnerships, product strategy... and with and across disciplines ranging from legal to customer support. In those capacities, I've interacted with different shapes of an "open source program" sometimes running in parallel and sometimes not running at all. And I've done so since 2010, under evolving leadership and evolving internal perceptions about open source at Microsoft.&lt;/p&gt;

&lt;p&gt;I can talk at length about those experiences in the hallway track at many upcoming open source conferences and I certainly look forward to &lt;a href="https://twitter.com/bureado/status/1094803383758680066"&gt;collaborating&lt;/a&gt; with thought leaders to document some learnings. But one of the most inspiring leaders I know at Microsoft summarized what best describes the mechanics of an impactful cross-boundary open source collaboration: &lt;strong&gt;being humble, helpful and harmless&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This is why sharing cultural learnings is so powerful: you let someone else tell the story for you, and it's invariably helpful because it's immediately relatable. This also means I proactively look for opportunities to surface those learnings: I seek to participate in RCAs and post-mortems, and in those efforts I mostly stay silent (= harmless).&lt;/p&gt;

&lt;p&gt;Yet I can confidently say that it is in and through those &lt;a href="https://www.theregister.co.uk/2018/06/14/microsoft_r_open_debian_dev/"&gt;painful moments&lt;/a&gt; that I've felt we've made the broadest, most authentic and longest-lasting cultural changes with open source at Microsoft.&lt;/p&gt;

&lt;p&gt;In summary:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The journey to pervasive open source engagement isn't linear. Don't try to control a hype event, just observe it.&lt;/li&gt;
&lt;li&gt;Sometimes the case for executive support is self-evident. Develop an open source market research engine.&lt;/li&gt;
&lt;li&gt;You can have the vision and the sponsors, but you can't shortchange your program. Operational excellence is the next frontier of differentiation.&lt;/li&gt;
&lt;li&gt;Product empathy isn't optional for an open source program office. Knowing what's a non-ideal case for open source can be very valuable: people can't afford to lag in the engagement model.&lt;/li&gt;
&lt;li&gt;Lead with culture and don't go in alone!&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>opensource</category>
    </item>
  </channel>
</rss>
