<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Daniel Moch</title>
    <description>The latest articles on DEV Community by Daniel Moch (@djmoch).</description>
    <link>https://dev.to/djmoch</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F132105%2F2d47e19d-51d3-497b-b2b7-e47e9a5b839d.jpeg</url>
      <title>DEV Community: Daniel Moch</title>
      <link>https://dev.to/djmoch</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/djmoch"/>
    <language>en</language>
    <item>
      <title>Acme: The Un-Terminal</title>
      <dc:creator>Daniel Moch</dc:creator>
      <pubDate>Mon, 20 Jan 2025 09:05:43 +0000</pubDate>
      <link>https://dev.to/djmoch/acme-the-un-terminal-36f2</link>
      <guid>https://dev.to/djmoch/acme-the-un-terminal-36f2</guid>
<description>&lt;p&gt;If you are not a software developer, you might be surprised to learn that a large subset of that community loves terminal-based user interfaces (TUIs). These are applications that run, for instance, in Terminal.app on macOS or Windows PowerShell&lt;sup id="fnref:1"&gt;1&lt;/sup&gt;. Developers who prefer TUIs will usually insist on using Emacs or Vim to write their code. They might also try to find TUIs to do &lt;em&gt;everything&lt;/em&gt; in the terminal: &lt;a href="http://www.mutt.org" rel="noopener noreferrer"&gt;Mutt&lt;/a&gt; for email, the &lt;a href="https://vifm.info" rel="noopener noreferrer"&gt;Vifm&lt;/a&gt; file manager.&lt;/p&gt;

&lt;p&gt;I was one of those folks for a time. Then I realized the magic wasn’t the terminal &lt;em&gt;per se&lt;/em&gt;, but rather what is best described as a highly-regular, text-based user interface with a clear interface to other utilities. Modern GUIs tend to each be created from scratch and are designed to operate hermetically. But treating every application like its own blank slate and creating every user interaction from scratch has the effect, in aggregate, of making modern computers overwhelming to use. The heavy use of iconography makes the uphill climb all that much steeper.&lt;/p&gt;

&lt;p&gt;Terminal-based programs don’t suffer from this. Their inability to display arbitrary bitmaps (pictures) essentially narrows their bandwidth. With text as the dominant mode of communication, behavioral patterns have emerged across programs, making the second terminal program easier to learn than the first and scaling from there.&lt;/p&gt;

&lt;h2&gt;
  
  
  Text-Based GUIs?
&lt;/h2&gt;

&lt;p&gt;So can we create GUI-based applications that rely only (or significantly) on text and provide a standardized mode of interaction? We can, and, in fact, it has already been done in the area of text editors. The editor I have in mind is Acme&lt;sup id="fnref:2"&gt;2&lt;/sup&gt;, originally released in the 1990’s as part of the Plan 9 operating system and brought to other Unix-like systems by Russ Cox’s &lt;a href="https://9fans.github.io/plan9port/" rel="noopener noreferrer"&gt;plan9port&lt;/a&gt; project.&lt;/p&gt;

&lt;p&gt;The power of the Acme editor is twofold. First is its seamless way of integrating the command-line tools programmers are already familiar with into its environment. Users can select any text and pipe&lt;sup id="fnref:3"&gt;3&lt;/sup&gt; it to a command. The output of that command can in turn either populate a new window within Acme, or it can replace the original, selected text. Second is Acme’s use of the &lt;a href="https://en.wikipedia.org/wiki/9P_(protocol)" rel="noopener noreferrer"&gt;9P protocol&lt;/a&gt; under the hood. Without diving into the details of the protocol, this allows for a variety of interactions, including the use of what would in other programming environments be called plugins&lt;sup id="fnref:4"&gt;4&lt;/sup&gt;. What makes these so powerful is the free-form nature of the interaction model. The 9P communication is exposed on POSIX systems via &lt;a href="https://en.wikipedia.org/wiki/Unix_domain_socket" rel="noopener noreferrer"&gt;Unix domain sockets&lt;/a&gt;, but the protocol’s semantics are simple enough that plan9port includes a general-purpose client, allowing for helper programs to theoretically be written even as shell scripts.&lt;/p&gt;
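&lt;p&gt;To make the pipe-a-selection model concrete: outside of Acme, the same round trip can be sketched with ordinary subprocess plumbing. The Python below illustrates the interaction model only; it is not Acme’s implementation, which does this natively.&lt;/p&gt;

```python
import subprocess

def pipe_selection(selection, command):
    """Send selected text through an external command and return its
    output, which would replace the selection in an Acme-style editor."""
    result = subprocess.run(
        command,
        input=selection,
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

# Replace a selection with its sorted lines, as if piping through sort(1).
print(pipe_selection("banana\napple\ncherry\n", ["sort"]))
```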

&lt;p&gt;This is all a bit unusual and hard to imagine, so I recommend viewing this &lt;a href="https://www.youtube.com/watch?v=dP1xVpMPn8M" rel="noopener noreferrer"&gt;brief(-ish) tour of Acme&lt;/a&gt; by Russ Cox.&lt;/p&gt;

&lt;h2&gt;
  
  
  Acme’s Un-Strengths
&lt;/h2&gt;

&lt;p&gt;While I was taken with all of this when I first became acquainted with it, there are a couple of things to say about what Acme does &lt;em&gt;not&lt;/em&gt; do that have also contributed to its staying power for me. To summarize, there is no configuration to speak of: no color themes, no syntax highlighting, no attempt (save for an optional, primitive auto-indent) to automatically format code&lt;sup id="fnref:5"&gt;5&lt;/sup&gt;. Prior to Acme, I had spent far too much time concerned with how my Vim setup &lt;em&gt;looked&lt;/em&gt;, e.g. the color scheme. Configuration files running into the hundreds of lines do not and cannot exist in Acme. Not everyone will want such a thing&lt;sup id="fnref:6"&gt;6&lt;/sup&gt;, but I cannot emphasize enough what a breath of fresh air it was for me to enter an environment with no knobs to tweak.&lt;/p&gt;

&lt;h2&gt;
  
  
  An Integrating Development Environment
&lt;/h2&gt;

&lt;p&gt;Acme is remarkable for what it represents: a class of application that leverages a simple, text-based GUI to create a compelling model of interacting with all of the tools available in the Unix (or Plan 9) environment. Cox calls it an “integrating development environment,” distinguishing it from the more hermetic “integrated development environment” developers will be familiar with. The simplicity of its interface is important. It is what has allowed Acme to age gracefully over the past 30 or so years, without the constant churn of adding support for new languages, compilers, terminals, or color schemes.&lt;/p&gt;




&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Windows also has an antediluvian terminal called Command Prompt if that’s more your speed. ↩︎&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Visual Studio Code might come close to achieving this ideal as well. If it succeeds, I think it is mainly because it integrates a text-based terminal into a text-heavy GUI environment, something Acme does by a different method. As we shall see shortly, whereas VS Code allows you to open a terminal window within the editor, Acme’s integration goes deeper, allowing the use of standard CLI-style commands in &lt;em&gt;any&lt;/em&gt; window. ↩︎&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The word “pipe” here should be understood in the standard, Unix sense. ↩︎&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cox calls these “helper programs” in the video linked above. ↩︎&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;There are examples of plugins that will format code on save, similar to VS Code, and even a full-blown &lt;a href="https://en.wikipedia.org/wiki/Language_Server_Protocol" rel="noopener noreferrer"&gt;Language Server Protocol&lt;/a&gt; client in &lt;a href="https://github.com/9fans/acme-lsp" rel="noopener noreferrer"&gt;acme-lsp&lt;/a&gt;. ↩︎&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The lack of syntax highlighting in particular is likely to draw ire. This is not something I wish to advocate for or against, so I will limit myself to saying that, if you are like me, you will not miss it once it is gone. Instead, you will find that you have discarded a whole panel of “knobs” with which you no longer need to concern yourself. While I found there was almost no cost to giving up syntax highlighting, the alternative—endlessly tweaking syntax highlighting themes—was, for me, an extraordinary waste of time. ↩︎&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>programming</category>
      <category>editors</category>
    </item>
    <item>
      <title>Systems Programming</title>
      <dc:creator>Daniel Moch</dc:creator>
      <pubDate>Mon, 13 Jan 2025 09:41:53 +0000</pubDate>
      <link>https://dev.to/djmoch/systems-programming-4j17</link>
      <guid>https://dev.to/djmoch/systems-programming-4j17</guid>
<description>&lt;p&gt;I’m not sure what &lt;em&gt;systems programming&lt;/em&gt; means. It used to mean something like designing operating systems. The creators of the Go programming language sought to redefine the term (although they may not have seen it that way) to mean something like designing networked systems, and particularly web servers. This is all to say that the term’s usefulness is debatable.&lt;/p&gt;

&lt;p&gt;And yet I find myself attracted to it. I even like to think of myself as a &lt;em&gt;systems programmer&lt;/em&gt; of a sort. I’m not content to simply use the layers of abstraction provided to me. I always want to know what’s going on above and especially below the layer I am programming. Over the course of my career I’ve had the privilege of working on everything from device drivers, to power-on self-test (POST), all the way up the stack to large, web-based systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  An Actually-Useful Definition of Systems Programming
&lt;/h2&gt;

&lt;p&gt;I think this approach to programming is about the best definition of &lt;em&gt;systems programming&lt;/em&gt; I can come up with. Systems programmers want all of the layers in their software system to cooperate. They don’t think abstraction should be used to hide messes in lower layers. Those messes should be cleaned up. The whole system will be better for it.&lt;/p&gt;

&lt;p&gt;This all came to mind as I read Fernando Hurtado Cardenas’s blog post “&lt;a href="https://fhur.me/posts/2024/thats-not-an-abstraction" rel="noopener noreferrer"&gt;That’s Not an Abstraction, That’s Just a Layer of Indirection&lt;/a&gt;.” In noticing and commenting on the difference between abstraction and indirection, Cardenas is behaving like a good &lt;em&gt;systems programmer&lt;/em&gt; (at least by my definition). They recognize it is the whole system that needs to be optimized and not just the topmost layer.&lt;/p&gt;

&lt;p&gt;With my own experience and Cardenas’s post in mind, maybe we can start to sketch a useful definition of &lt;em&gt;systems programming&lt;/em&gt; as applying the same thoughtful approach to the entire stack that one does to the software they are actually writing&lt;sup id="fnref:1"&gt;1&lt;/sup&gt;. This mindset is not a given within the software community, and with well-designed systems it does not need to be ubiquitous. But I would argue it does need to be &lt;em&gt;present&lt;/em&gt; in any large-scale software team.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Is This Useful Though?
&lt;/h2&gt;

&lt;p&gt;What does this mean for the makeup of software teams, particularly in large organizations that embrace DevOps, Platform Engineering, and the like? I actually think this understanding of &lt;em&gt;systems programming&lt;/em&gt; fits quite naturally within these kinds of organizations. The application developers tasked with writing the value-add business logic are usefully abstracted away from the system as much as possible. The &lt;em&gt;systems programmers&lt;/em&gt; are the architects, platform engineers, and DevOps/SRE staff who concern themselves with the rest of the stack (or, in the case of architects, the &lt;em&gt;whole&lt;/em&gt; stack). Indeed, this model comports quite well with the division of labor advocated by the &lt;em&gt;&lt;a href="https://teamtopologies.com" rel="noopener noreferrer"&gt;Team Topologies&lt;/a&gt;&lt;/em&gt; crowd.&lt;/p&gt;

&lt;p&gt;At this point I imagine one might respond to all of this with, “So what?” I have basically invented a definition of &lt;em&gt;systems programming&lt;/em&gt; out of the blue that just confirms that &lt;em&gt;Team Topologies&lt;/em&gt; got it right? Yes. And …&lt;/p&gt;

&lt;p&gt;I think this definition of &lt;em&gt;systems programming&lt;/em&gt; sheds light on something I have struggled with. As someone who &lt;em&gt;wants&lt;/em&gt; to understand and appreciate the entire software stack, I am tempted to look down on folks who just want to write their code and be done with it. This whole thought experiment has brought to light the fact that application programmers are necessary too. Of course they are! They are the ones writing the value-add business logic that will make or break the business. A &lt;em&gt;systems programmer&lt;/em&gt;’s job is to support an application programmer. How can that happen if we look down our noses at them? They are our customers! So elitism within the Platform Engineering community should be verboten.&lt;/p&gt;

&lt;p&gt;This systems/application programmer model might also be a helpful heuristic for hiring managers. Ask questions that tease out whether a candidate wants to understand the entire software stack, or if they would rather focus on writing their business logic. Doing so will give you an idea of where they will fit more naturally.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;For me this started as a thought experiment to understand why, despite the lack of clarity, I still find myself attracted to the term &lt;em&gt;systems programming&lt;/em&gt;. I then used my own experience and intuitions to induce a definition for the term that put words to my gut-level intuitions about it. I rather doubt that framing the term this way will take off. The term has fallen out of regular use anyway. Still, I think it is helpful to have a catchall term to distinguish application programmers from all the other supporting disciplines, and I am not aware of any other term doing that today.&lt;/p&gt;




&lt;ol&gt;
&lt;li&gt;It is perhaps useful to identify an opposite of this kind of &lt;em&gt;systems programming&lt;/em&gt; in certain kinds of hacking. Let me say that I have a lot of respect for the hacker ethos (or perhaps &lt;em&gt;mythos&lt;/em&gt;). Still, I struggle to see its place in large-scale software &lt;em&gt;engineering&lt;/em&gt;, where the technical requirements are always (at least implicitly) complemented with non-technical requirements around maintainability, technical debt and the like. I want to be clear that it would be possible to push this too far. I genuinely think software engineers stand to learn a lot from hackers. I just think that software engineering and, more to the point, &lt;em&gt;systems programming&lt;/em&gt; are very different mindsets. ↩︎
&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>programming</category>
      <category>platformengineering</category>
      <category>devops</category>
      <category>sre</category>
    </item>
    <item>
      <title>Complexity Is Leaky, or, Roger Peppé Gets It</title>
      <dc:creator>Daniel Moch</dc:creator>
      <pubDate>Sat, 27 Jul 2024 12:19:00 +0000</pubDate>
      <link>https://dev.to/djmoch/complexity-is-leaky-or-roger-peppe-gets-it-24e5</link>
      <guid>https://dev.to/djmoch/complexity-is-leaky-or-roger-peppe-gets-it-24e5</guid>
      <description>&lt;p&gt;Over the past year or so I’ve been collecting quotes about simplicity. Here is a short collection of my favorites:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Einstein: “Everything should be as simple as possible, but no simpler.”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;“Even the simplest solution is bound to have something wrong with it.”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Grady Booch: “The function of good software is to make the complex appear to be simple.”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;C.A.R. Hoare: “There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies.”&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The pretty clear impression anyone would get from listening to &lt;a href="https://about.sourcegraph.com/blog/dev-tool-time-roger-peppe/" rel="noopener noreferrer"&gt;Roger Peppé discuss his work environment&lt;/a&gt; is that he values simplicity. If you think I’m overstating things, consider the title the folks at Sourcegraph opted to give their conversation with him. The video is worth viewing. I’ll use the rest of this post to talk about what benefit I’ve personally seen since deciding to pay the cost of simplicity for myself.&lt;/p&gt;

&lt;p&gt;To begin with, my approach to simplicity is not too different from Peppé’s. I have plan9port installed on my laptop. I pretty much live in Acme, having embraced its pared back approach to editing for a little more than two years now. Before that I was an avid Vim user.&lt;/p&gt;

&lt;p&gt;The thing about Vim is that it’s complicated, and that complexity has a cost. Most of the cost is maintenance. Vim has support for syntax highlighting, so that needs to be updated every time a new language comes out or an old one gets updated. The same goes for the compiler plugins, each of which needs to understand how to parse output from its compiler into Vim’s Quickfix window. I’m not saying these are hard things to do given some background knowledge. But they need to be done. And I haven’t even mentioned all of the code in Vim to deal with different types of terminals and terminal emulators&lt;sup id="fnref:1"&gt;1&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;Vim has a lot of maintainers, so if you use it, you’ll probably never see any of that complexity. It’s the maintainers who do. And they’re working for free, because they presumably love Vim. So it’s a win-win, right? You get the joy of using Vim, and the maintainers get the joy of maintaining it.&lt;/p&gt;

&lt;p&gt;Except it’s not really true that you’ll never see that complexity. You almost definitely will. You’ll realize Vim has a plugin system, and so you’ll want to learn how to leverage that to make the experience a little better. You’ll learn that there are a lot of third-party color themes out there, so you’ll put time into finding just the right one. You’ll join the ranks of folks online who compete to have the longest &lt;code&gt;.vimrc&lt;/code&gt; file. Or maybe you’ll join the group that rolls their eyes at those people and competes for the shortest.&lt;/p&gt;

&lt;p&gt;I can say this with some confidence, because this was me. And Vim is just one example. I’ve used &lt;a href="https://awesomewm.org/" rel="noopener noreferrer"&gt;plenty&lt;/a&gt; &lt;a href="https://irssi.org/" rel="noopener noreferrer"&gt;of&lt;/a&gt; &lt;a href="https://www.ocf.berkeley.edu/~ckuehl/tmux/" rel="noopener noreferrer"&gt;tools&lt;/a&gt; that boast being hyper-configurable. My experience has been that that very configurability is a source of cognitive burden.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.techopedia.com/definition/15511/yak-shaving" rel="noopener noreferrer"&gt;Yak shaving&lt;/a&gt; has a hallowed tradition in computer programming, but it seems to have expanded to encompass tasks that bear no obvious relationship to the task at hand &lt;em&gt;and add no value to it&lt;/em&gt;. Looking back, I can see that having endlessly configurable tools gave me endless opportunity to fiddle with them. And this felt important! Maybe I’m more prone to distraction than most, but I got to a point where I couldn’t really handle language keywords being displayed to me in the wrong shade of blue. If I know I can change the color, then why not change it?&lt;/p&gt;

&lt;p&gt;Fast-forward to today: I use a text editor that has no syntax highlighting and doesn’t even have a config file. Meanwhile, professionally I’ve had probably the most productive years of my life. Simplicity has a cost, but I’ve found it to be much less than the cost of complexity.&lt;/p&gt;




&lt;ol&gt;
&lt;li&gt;I could talk more about developers romanticizing doing everything possible in a terminal window, but that’s a post for another time. ↩︎
&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>technology</category>
      <category>simplicity</category>
    </item>
    <item>
      <title>I Am Done With Self-Hosting</title>
      <dc:creator>Daniel Moch</dc:creator>
      <pubDate>Thu, 04 Jul 2024 18:43:00 +0000</pubDate>
      <link>https://dev.to/djmoch/i-am-done-with-self-hosting-595h</link>
      <guid>https://dev.to/djmoch/i-am-done-with-self-hosting-595h</guid>
      <description>&lt;p&gt;This is a personal post on why, after almost ten years, I am no longer self-hosting my blog, mail and other servers.&lt;/p&gt;

&lt;p&gt;First, a clarification: up until now a lot of my data has been hosted on various virtual private server (VPS) providers. This may walk up to the line between proper self-hosting and ... something else. Still, I continue to call what I was doing self-hosting, not least because data I felt needed to stay private remained on servers physically under my control. I also think it qualifies because I was still fully responsible for OS-level maintenance of my VPS.&lt;/p&gt;

&lt;p&gt;Pedantry aside, that maintenance was a primary driver for moving to a different hosting model. I have spent long enough maintaining Linux and OpenBSD servers that the excitement has long since worn off, and so giving time to it on weekends and holidays became untenable in the face of other options. There are services run by companies I trust that will host my email and most other data I might care to access remotely. They will even apply end-to-end encryption to the majority of that data. This is far from a zero trust arrangement, I admit. Still, the companies I have migrated to have cleared a threshold that I am comfortable with for the data they are hosting.&lt;/p&gt;

&lt;p&gt;Plus, these companies have better availability guarantees than I could ever hope to achieve on my own. For example, my VPS was configured to forward incoming mail through a Wireguard VPN into a mail server hosted at my home. But as a result, if the power went out at my house, email service would go down. This was fine most of the time, but when I was on vacation the issue could persist until I got home to reboot the necessary hardware. Spending my vacation anxious about my servers came to seem like a ridiculous trade in exchange for more control over my data.&lt;/p&gt;

&lt;p&gt;Probably the biggest surprise was that providers often have free tiers that are sufficient for my needs, meaning I'm actually paying less in hosting fees than I was before. I stand to make back even more if I sell hardware I'm not using at the moment (although I'm pretty good at coming up with new uses).&lt;/p&gt;

&lt;p&gt;But mostly I'm glad to get the time back. I'll use it to spend more time with my family, or maybe write more. At the very least I'll be more present on vacation.&lt;/p&gt;

</description>
      <category>personal</category>
      <category>selfhost</category>
      <category>update</category>
    </item>
    <item>
      <title>Regarding Semantic Versioning</title>
      <dc:creator>Daniel Moch</dc:creator>
      <pubDate>Fri, 11 Sep 2020 12:51:18 +0000</pubDate>
      <link>https://dev.to/djmoch/regarding-semantic-versioning-hhk</link>
      <guid>https://dev.to/djmoch/regarding-semantic-versioning-hhk</guid>
<description>&lt;p&gt;So as not to bury the lede, I'll get to my point: &lt;a href="https://semver.org/" rel="noopener noreferrer"&gt;Semantic Versioning&lt;/a&gt; is a meta-API, and maintainers who are cavalier about violating it can't be trusted to create stable contracts. I've lost patience with breaking changes making their way to my code bases without the maintainers incrementing the major version of their projects, especially in language ecosystems where Semantic Versioning is expected, and in such cases I'm going to begin exploring alternative options so I can ban such libraries from my projects—personal and professional—altogether.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Even Is Semantic Versioning?
&lt;/h2&gt;

&lt;p&gt;When developers adopt an external library into their code bases, they do so knowing they will be bound in their use of the library by the application programming interface (API). In this sense, an API can be seen as a kind of contract between a library's maintainer and its consumers. If a maintainer makes frequent changes to a library's API, then that API is considered unstable. In that situation, consumers either use the library anyway, accepting the risk that things will break as a result of a change in the library, or they avoid it.&lt;/p&gt;

&lt;p&gt;Semantic Versioning seeks to ease this picture by embedding notions of backward- and forward-compatibility into software version numbers. If a library maintainer adheres to it, then consumers are able to upgrade to newer versions of the library (say, to pick up bug fixes) without fear of breaking changes, provided they aren't moving to a new, major version. In terms of backward- and forward-compatibility, Semantic Versioning creates an expectation that a given version of a library is forward-compatible with any future version up to the next, major release. A library is also backward-compatible down to the most recent, minor release (beyond which point consumers' code &lt;em&gt;might&lt;/em&gt; break if they are using newer library features).&lt;/p&gt;

&lt;p&gt;There are several benefits to using Semantic Versioning. One benefit is that it becomes easy to codify dependency requirements into automated dependency tools. By &lt;em&gt;assuming&lt;/em&gt; Semantic Versioning, users of tools like NodeJS's &lt;code&gt;npm&lt;/code&gt; and Rust's &lt;code&gt;cargo&lt;/code&gt; are able to specify dependency &lt;em&gt;ranges&lt;/em&gt; rather than hard-coded versions. So if a new release of a library comes out, these tools are able to decide automatically whether or not it can be used in a given project. In other words, Semantic Versioning creates an opportunity for downstream developers to easily decide whether or not to upgrade to a new version of a library, potentially picking up important bug fixes in the process.&lt;/p&gt;
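&lt;p&gt;The upgrade decision those tools automate can be sketched in a few lines. The following is an illustration assuming strict Semantic Versioning, not a reimplementation of &lt;code&gt;npm&lt;/code&gt;'s or &lt;code&gt;cargo&lt;/code&gt;'s actual resolvers:&lt;/p&gt;

```python
def parse(version):
    """Split a 'MAJOR.MINOR.PATCH' string into a tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def safe_upgrade(installed, candidate):
    """True when SemVer promises the candidate is a drop-in replacement:
    the same major version, with an equal-or-newer minor/patch."""
    i, c = parse(installed), parse(candidate)
    if c[0] != i[0]:
        return False          # major bump: breaking changes are allowed
    return c[1:] >= i[1:]     # newer minor/patch stays backward-compatible

print(safe_upgrade("1.4.2", "1.5.0"))  # True: a minor release is safe
print(safe_upgrade("1.4.2", "2.0.0"))  # False: a major release may break
```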

&lt;h2&gt;
  
  
  Semantic Versioning As A Meta-API
&lt;/h2&gt;

&lt;p&gt;Let me go back and unpack what I mean by calling Semantic Versioning a meta-API. As I said above, APIs represent a sort of contract between library maintainers and downstream consumers. Semantic Versioning then represents a sort of contract-about-the-contract. It's an agreement regarding when and how the API will change. In a situation where Semantic Versioning is the &lt;em&gt;de facto&lt;/em&gt; norm, as it is in the language ecosystems mentioned above, a maintainer who chooses not to follow it is breaking this contract, creating the risk of needless downstream breakage.&lt;/p&gt;

&lt;p&gt;Because Semantic Versioning requires more contextual knowledge than any compiler or tool chain can boast, the process is largely manual. This means mistakes happen, and breaking changes are introduced without rolling the major version number. Responsible maintainers will own such mistakes and issue bug fixes to correct them, implicitly acknowledging that the meta-API is as important as the API itself.&lt;/p&gt;

&lt;p&gt;Other maintainers aren't as interested in Semantic Versioning, and seem to view it as a sort of straitjacket they would rather break free of than a tool to promote software stability. These folks fight against their tool chains, and indeed their entire language ecosystems, arguing that Semantic Versioning doesn't work for them and they should be free to work however they want. Some of their arguments are likely stronger than others, but none of them is ultimately compelling.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;If you work in a language ecosystem where Semantic Versioning is the &lt;em&gt;de facto&lt;/em&gt; norm, where violating it can wreak havoc downstream, then please play nice and follow its dictates. Instead of viewing it as a straitjacket, try to see it as an algorithm to determine what your next release number should be. We should all like algorithms!&lt;/p&gt;
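&lt;p&gt;Taken that way, the algorithm really does fit in a few lines. A minimal sketch, ignoring the pre-release and build metadata that the full specification also covers:&lt;/p&gt;

```python
def next_version(current, change):
    """Return the next SemVer number for a release, given the kind of
    change shipped: 'major' for breaking changes, 'minor' for new
    backward-compatible features, 'patch' for bug fixes."""
    major, minor, patch = (int(part) for part in current.split("."))
    if change == "major":
        return f"{major + 1}.0.0"
    if change == "minor":
        return f"{major}.{minor + 1}.0"
    if change == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change type: {change}")

print(next_version("1.4.2", "patch"))  # 1.4.3
print(next_version("1.4.2", "minor"))  # 1.5.0
print(next_version("1.4.2", "major"))  # 2.0.0
```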

&lt;p&gt;If you refuse to be persuaded, then understand I will not work downstream from you &lt;a href="https://www.danielmoch.com/posts/2020/09/regarding-semantic-versioning/#id2" rel="noopener noreferrer"&gt;1&lt;/a&gt;. I'll find a different upstream to work with because I cannot trust you to create a stable contract. Your willingness to conform to the meta-API is something I will take into consideration in the future before adopting a library into any project that I work on. I wish you well; I hope you have fun; I'll be sure to give you a wide berth.&lt;/p&gt;

&lt;dl&gt;
&lt;dt id="id2"&gt;&lt;span&gt;&lt;a href="https://www.danielmoch.com/posts/2020/09/regarding-semantic-versioning/#id1" rel="noopener noreferrer"&gt;1&lt;/a&gt;&lt;/span&gt;&lt;/dt&gt;
&lt;dd&gt;
&lt;p&gt;I'll note here that I'm more forgiving in environments where
Semantic Versioning is not a &lt;em&gt;de facto&lt;/em&gt; norm.&lt;/p&gt;
&lt;/dd&gt;
&lt;/dl&gt;

</description>
      <category>semver</category>
      <category>versioning</category>
    </item>
    <item>
      <title>Using QEMU Without Pulling Your Hair Out</title>
      <dc:creator>Daniel Moch</dc:creator>
      <pubDate>Tue, 16 Jul 2019 01:25:02 +0000</pubDate>
      <link>https://dev.to/djmoch/using-qemu-without-pulling-your-hair-out-o2a</link>
      <guid>https://dev.to/djmoch/using-qemu-without-pulling-your-hair-out-o2a</guid>
      <description>&lt;p&gt;I make it a rule to choose my tools carefully and to invest the time to learn them deeply. QEMU has been one of those tools that I've wanted to learn how to use for a long time, but was always a bit intimidated. I actually had been able to use it indirectly via libvirt, but it felt like I was cheating my rule by using one tool to manage another. Despite my vague sense of guilt, things continued this way until I read a recent(ish) &lt;a href="https://drewdevault.com/2018/09/10/Getting-started-with-qemu.html" rel="noopener noreferrer"&gt;introductory post on QEMU&lt;/a&gt; by Drew DeVault. The article is well written (as per usual for DeVault), and you'd do well to read it before continuing here. The point is that it was the kick in the pants I needed to finally roll up my sleeves and learn some QEMU.&lt;/p&gt;

&lt;p&gt;The process of gaining some level of mastery over QEMU ended up being a fair bit more painful than I had anticipated, and so I wanted to capture some of my lessons learned over and above the introductory-level topics. The hard lessons were, by and large, not related directly to QEMU per se, but more to how to manage QEMU VMs. I use my virtual machines to isolate environments for various reasons, and so I need ways to automate their management. In particular, I had two needs that took some time to work out satisfactorily.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Starting VMs automatically on system startup (and cleanly shutting them down with the host system).&lt;/li&gt;
&lt;li&gt;Securely and remotely accessing VM consoles.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let's take each of those two issues in turn.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automatic VM Management
&lt;/h2&gt;

&lt;p&gt;For our purposes, what I mean by automatic management of VMs is just what I said above. If I need to restart the host server, I want the VMs to cleanly shut down with the host system, and come back up automatically after the host restarts. Since this is the kind of thing init systems are designed to do, it's only natural that we start there as a place to design our VM management infrastructure.&lt;/p&gt;

&lt;p&gt;So we just tell our init system to signal QEMU to shut down any running VMs and we're good, right? In theory yes, but in reality QEMU's management interface is a bit tricky to script interactions with. There is a &lt;code&gt;-monitor&lt;/code&gt; switch that allows you to configure a very powerful management interface, and you'll need to use it because the default is to attach that interface to a virtual console in the VM itself (or stdio, if you're not running a graphical interface locally). There are several options for configuring the monitor and the device it's connected to, but the best compromise I found between convenience and security was to make it available via a UNIX socket.&lt;/p&gt;
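&lt;p&gt;Once the monitor is listening on a UNIX socket, scripting a clean shutdown is just a matter of connecting and sending a monitor command. This Python sketch uses a hypothetical socket path; substitute wherever your &lt;code&gt;-monitor unix:...,server,nowait&lt;/code&gt; switch actually points:&lt;/p&gt;

```python
import os
import socket

# Hypothetical path -- use the one passed to QEMU's -monitor switch.
MONITOR_SOCKET = "/run/qemu/myvm-mon.sock"

def monitor_command(path, command):
    """Connect to a QEMU monitor UNIX socket and send one command."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(path)
        sock.sendall(command.encode() + b"\n")

# Ask the guest to ACPI-powerdown cleanly, as an init script would do on
# host shutdown. system_powerdown is a standard QEMU monitor command.
if os.path.exists(MONITOR_SOCKET):
    monitor_command(MONITOR_SOCKET, "system_powerdown")
```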

&lt;p&gt;If you've read DeVault's entry already, then you know that QEMU allows you to configure anything you could want via the command line. After deciding how to expose the monitor to the init system (systemd in my case), the rest came together pretty quickly. Here's what my service file looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="nn"&gt;[Unit]&lt;/span&gt;
&lt;span class="py"&gt;Description&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;QEMU virtual machine&lt;/span&gt;
&lt;span class="py"&gt;Wants&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;network-online.target&lt;/span&gt;
&lt;span class="py"&gt;After&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;network-online.target&lt;/span&gt;

&lt;span class="nn"&gt;[Service]&lt;/span&gt;
&lt;span class="py"&gt;User&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;qemu&lt;/span&gt;
&lt;span class="py"&gt;Group&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;qemu&lt;/span&gt;
&lt;span class="py"&gt;UMask&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;0007&lt;/span&gt;
&lt;span class="py"&gt;Environment&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;SMP=1&lt;/span&gt;
&lt;span class="py"&gt;EnvironmentFile&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;/etc/conf.d/qemu.d/%I&lt;/span&gt;
&lt;span class="py"&gt;ExecStart&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;/bin/sh -c "/usr/bin/qemu-${TYPE} -enable-kvm -smp ${SMP} -spice unix,disable-ticketing,addr=/run/qemu/%I.sock -m ${MEMORY} -nic bridge,br=${BRIDGE},mac=${MAC&lt;/span&gt;&lt;span class="se"&gt;\_&lt;/span&gt;&lt;span class="s"&gt;ADDR},model=virtio -kernel /var/lib/qemu/%I/vmlinuz-linux -initrd /var/lib/qemu/%I/initramfs-linux-fallback.img -drive file=/var/lib/qemu/%I/%I.qcow2,media=disk,if=virtio -append 'root=/dev/vda rw' -monitor unix:/run/qemu/%I-mon.sock,server,nowait -name %I"&lt;/span&gt;
&lt;span class="py"&gt;ExecStop&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;/bin/sh -c "echo system_powerdown | nc -U /run/qemu/%I-mon.sock"&lt;/span&gt;

&lt;span class="nn"&gt;[Install]&lt;/span&gt;
&lt;span class="py"&gt;WantedBy&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;multi-user.target&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The %I should clue you in that this is a &lt;a href="https://www.freedesktop.org/software/systemd/man/systemd.service.html#Service%20Templates" rel="noopener noreferrer"&gt;service template&lt;/a&gt;, which is a nice convenience if you plan to run more than one VM as a service. This allows multiple VMs to use the same service file via a symlink. For example, a symlink from &lt;code&gt;qemu@webhost.service&lt;/code&gt; to &lt;code&gt;qemu@.service&lt;/code&gt; would cause systemd to replace %I with webhost. An in-depth description of service templates is beyond the scope of this post, but the link above should be sufficient to answer additional questions. The last point I'll make here is that the netcat (nc) implementation used in ExecStop must be OpenBSD netcat, otherwise the service will not shut down cleanly. Other implementations disconnect immediately after sending the system_powerdown message, while OpenBSD netcat waits for the socket to close.&lt;/p&gt;
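
&lt;p&gt;As a concrete sketch of how the template is instantiated (the VM name "webhost" is just an illustration), the following mimics the symlink that &lt;code&gt;systemctl enable qemu@webhost.service&lt;/code&gt; would create, using a throwaway directory:&lt;/p&gt;

```shell
# Illustrative only: show how a systemd template unit is instantiated.
# The text after "@" in the link name becomes %I inside the unit file.
dir=$(mktemp -d)

# Stand-in for /etc/systemd/system/qemu@.service
touch "$dir/qemu@.service"

# Instantiate the template for a VM named "webhost"
ln -s "$dir/qemu@.service" "$dir/qemu@webhost.service"

# The instance resolves back to the shared template unit
basename "$(readlink "$dir/qemu@webhost.service")"   # prints qemu@.service

rm -r "$dir"
```

&lt;p&gt;With that link in place, systemd expands %I to webhost, so the monitor socket in the unit above ends up at /run/qemu/webhost-mon.sock.&lt;/p&gt;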

&lt;p&gt;It's also worth taking a moment to stress how important the UMask directive in the above service template is. QEMU uses this to set permissions for the files it creates (including sockets), so we use this to secure our monitor and console sockets. A umask of 0007 directs QEMU to create any files with full permissions for the qemu user and group, while giving no global permissions.&lt;/p&gt;
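
&lt;p&gt;A quick way to convince yourself of what a umask of 0007 does (this is plain shell behavior, not anything QEMU-specific):&lt;/p&gt;

```shell
# Regular files are created with mode 0666 minus the umask bits,
# so a umask of 0007 yields 0660: read/write for the qemu user and
# group, and no permissions for anyone else.
dir=$(mktemp -d)
(
  umask 0007
  touch "$dir/example"
)
stat -c '%a' "$dir/example"   # prints 660
rm -r "$dir"
```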

&lt;p&gt;All that's missing then is the environment file, and that looks like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;TYPE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;system-x86_64
&lt;span class="nv"&gt;SMP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;cores&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1,threads&lt;span class="o"&gt;=&lt;/span&gt;2
&lt;span class="nv"&gt;MEMORY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4G
&lt;span class="nv"&gt;MAC_ADDR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;52:54:BE:EF:00:01
&lt;span class="nv"&gt;BRIDGE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;br0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The point of the environment file is to be tailored to your needs, so don't just blindly copy this. In particular, the BRIDGE device will need to exist, otherwise the service will fail. It bears mentioning that we use a bridge device so that the VM can appear like its own machine to the outside world (and thus we can route traffic to it).&lt;/p&gt;
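
&lt;p&gt;Setting up the bridge itself is host configuration rather than QEMU's job, and the details vary by distribution. As a rough sketch (the device and file names here are assumptions for a systemd-networkd host), the pieces look something like this:&lt;/p&gt;

```ini
# /etc/systemd/network/br0.netdev -- define the bridge device
[NetDev]
Name=br0
Kind=bridge

# /etc/qemu/bridge.conf -- permit qemu-bridge-helper to attach VMs,
# one "allow" line per bridge:
#   allow br0
```

&lt;p&gt;A real setup also needs .network files binding the physical NIC to the bridge and assigning the bridge an address; consult your network manager's documentation for those.&lt;/p&gt;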

&lt;p&gt;So much for automating VM startup and shutdown; let's talk about how to access the console.&lt;/p&gt;

&lt;h2&gt;
  
  
  Accessing Your VM Console
&lt;/h2&gt;

&lt;p&gt;Again, QEMU has a plethora of options for accessing the VM console, both locally and remotely. Since I run my VMs on a server, I wanted something that would allow remote access, but I also wanted something reasonably secure. UNIX sockets end up being a good middle-of-the-road solution again. They're treated like files, with all of the standard UNIX permissions, but it's also easy to route traffic from a remote machine to a UNIX socket via SSH.&lt;/p&gt;

&lt;p&gt;The applicable switch to achieve this configuration is &lt;span&gt;-spice&lt;/span&gt;. In the above service template, you see the full switch reads:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-spice unix,disable-ticketing,addr=/run/qemu/%I.sock
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;span&gt;unix&lt;/span&gt; configures QEMU to use a UNIX socket (as opposed to, say, a TCP port), &lt;span&gt;disable-ticketing&lt;/span&gt; configures the console without an additional password (this is okay since we're relying on UNIX file permissions), and &lt;span&gt;addr&lt;/span&gt; gives the socket path.&lt;/p&gt;

&lt;p&gt;Now if you want to access the console remotely, it's as simple as setting up a forwarding socket via SSH and connecting your local SPICE client to it. Here's a shell script I whipped up to wrap that behavior:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/sh

uid=$(id -u)
path=/run/user/$uid/qemu

# Create a private directory to hold the forwarded socket
if [ ! -d "$path" ]
then
 mkdir -p "$path"
 chmod 700 "$path"
fi

# Forward the remote console socket over SSH in the background
ssh -NL "$path/$1.sock:/run/qemu/$1.sock" urbock.djmoch.org &amp;amp;
pid=$!

# Wait for the forwarded socket to appear before connecting
while [ ! -S "$path/$1.sock" ]
do
 sleep 1
done

spicy --uri="spice+unix://$path/$1.sock"
kill $pid
rm "$path/$1.sock"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that's how I learned to use QEMU without pulling my hair out. It's a great tool, and I'm glad I took the time to learn how to use it. I suggest you do the same!&lt;/p&gt;

</description>
      <category>qemu</category>
      <category>systemd</category>
    </item>
    <item>
      <title>You Should Be Using Tags In Vim</title>
      <dc:creator>Daniel Moch</dc:creator>
      <pubDate>Sun, 02 Dec 2018 22:33:09 +0000</pubDate>
      <link>https://dev.to/djmoch/you-should-be-using-tags-in-vim-2c41</link>
      <guid>https://dev.to/djmoch/you-should-be-using-tags-in-vim-2c41</guid>
      <description>&lt;p&gt;&lt;em&gt;Note: This is a crosspost of an entry I wrote in this year's &lt;a href="http://vimways.org" rel="noopener noreferrer"&gt;vimways.org&lt;/a&gt; advent calendar. If you're interested in Vim, I recommend you pop on over there and read the other articles too.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I love you; you complete me.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dr. Evil&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;I first came to Vim by recommendation. I was looking for a good Python IDE (at the time I was new to the language) and one recommendation was to use Vim with a variety of plugins added on top. That Vim could do a lot of the things I thought only an IDE could do came as a bit of a shock. I spent a summer as an intern using Emacs at a Unix terminal, but didn't have enough curiosity at the time to use it any differently from &lt;code&gt;notepad.exe&lt;/code&gt;. I spent that summer wishing I had automatic features for completion, indentation, and all the things that made me appreciate the IDEs I used in college. How naive I was!&lt;/p&gt;

&lt;p&gt;So how was I directed to achieve powerful programming completions in Vim? By the use of a plugin called YouCompleteMe. My experience with it was okay, at least to start with. It took a while to install and get working, but that didn't bother me at the time since I was just playing around with it at home and the stakes were low if it suddenly broke. I did notice it slowed Vim down. Like a lot. But that was mainly when it was starting up and I didn't know enough to find it frustrating. Probably the first thing that really bothered me about the plugin was that the embedded Jedi language server used more memory than Vim itself. The other recommended plugins were similarly laggy, and I eventually went in search of something better.&lt;/p&gt;

&lt;p&gt;What I found was Vim itself.&lt;/p&gt;

&lt;p&gt;Did you know that Vim has built-in facilities for completions? It works admirably well out of the box too, but with a little bit of additional setup it can be great. Let's take a look at what Vim has on offer regarding completion and see what it takes to fully leverage it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Completion in Vim
&lt;/h2&gt;

&lt;p&gt;Completion in Vim is powerful, but not necessarily straightforward. Read &lt;a href="http://vimdoc.sourceforge.net/htmldoc/insert.html#ins-completion" rel="noopener noreferrer"&gt;:h ins-completion&lt;/a&gt; and you'll see what I mean:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Completion can be done for:
1. Whole lines                                |i_CTRL-X_CTRL-L|
2. keywords in the current file               |i_CTRL-X_CTRL-N|
3. keywords in 'dictionary'                   |i_CTRL-X_CTRL-K|
4. keywords in 'thesaurus', thesaurus-style   |i_CTRL-X_CTRL-T|
5. keywords in the current and included files |i_CTRL-X_CTRL-I|
6. tags                                       |i_CTRL-X_CTRL-]|
7. file names                                 |i_CTRL-X_CTRL-F|
8. definitions or macros                      |i_CTRL-X_CTRL-D|
9. Vim command-line                           |i_CTRL-X_CTRL-V|
10. User defined completion                   |i_CTRL-X_CTRL-U|
11. omni completion                           |i_CTRL-X_CTRL-O|
12. Spelling suggestions                      |i_CTRL-X_s|
13. keywords in 'complete'                    |i_CTRL-N| |i_CTRL-P|
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Vim is smart enough to pull completion data from a variety of sources, but in turn expects users to know which source will provide the best answer and to invoke the correct keymap to draw the desired completions. It's not a huge hurdle in terms of a learning curve, but it's not as simple as hitting tab either.&lt;/p&gt;

&lt;p&gt;The first thing one should do when trying to learn Vim's completion system is to disable any completion plugins and learn these keymaps. Getting comfortable with them will also help you learn and remember where Vim can pull completion information from. You should also read &lt;a href="http://vimdoc.sourceforge.net/htmldoc/options.html#'completefunc'" rel="noopener noreferrer"&gt;:h 'completefunc'&lt;/a&gt; and &lt;a href="http://vimdoc.sourceforge.net/htmldoc/options.html#'complete'" rel="noopener noreferrer"&gt;:h 'complete'&lt;/a&gt; for more information on user-defined completion and the &lt;code&gt;complete&lt;/code&gt; option.&lt;/p&gt;
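
&lt;p&gt;As a small illustration of the latter (the values shown are illustrative, not a recommendation), the &lt;code&gt;'complete'&lt;/code&gt; option controls which sources the plain &lt;code&gt;CTRL-N&lt;/code&gt;/&lt;code&gt;CTRL-P&lt;/code&gt; keymaps scan:&lt;/p&gt;

```vim
" 'complete' is a comma-separated list of sources for CTRL-N/CTRL-P:
"   .  current buffer        w  buffers in other windows
"   b  other loaded buffers  u  unloaded buffers
"   t  tags files
set complete=.,w,b,u,t
```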

&lt;p&gt;Now that we have a cursory understanding of completion in Vim, let's take a deeper look at tags and how they figure into completion.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction to &lt;code&gt;tags&lt;/code&gt; in Vim
&lt;/h2&gt;

&lt;p&gt;One source of completion in Vim is tag completion, which pulls from a special file called, appropriately, a tags file. Tags files are collections of identifiers (e.g., function names) that are compiled into a single file along with references to their location in a source tree. Vim is capable of using a (properly formatted) tags file for a variety of use cases, among them navigating your source code à la Visual Studio and completion.&lt;/p&gt;

&lt;p&gt;By default Vim doesn't do anything with a tags file except read it. See &lt;a href="http://vimdoc.sourceforge.net/htmldoc/options.html#'tags'" rel="noopener noreferrer"&gt;:h 'tags'&lt;/a&gt; to learn how to configure where Vim looks for tags files. Vimdoc also contains a very good &lt;a href="http://vimdoc.sourceforge.net/htmldoc/usr_29.html#29.1" rel="noopener noreferrer"&gt;introduction&lt;/a&gt; to tags more generally, so I won't spend any more time here introducing them. Let's move on and take a look at how we generate tags files.&lt;/p&gt;
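
&lt;p&gt;For example, a common &lt;code&gt;'tags'&lt;/code&gt; setting in a vimrc (illustrative, not the only sensible choice) searches upward from the current file's directory:&lt;/p&gt;

```vim
" Look for a tags file beside the current file, then walk up through
" parent directories (the trailing semicolon enables the upward
" search), then fall back to a tags file in the working directory.
set tags=./tags;,tags
```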

&lt;h2&gt;
  
  
  Introduction to &lt;code&gt;ctags&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;Tags files solve the problem of navigating and completing code in a given project, but they also create a problem: how do we create the tags file, and how do we keep it up to date? It would be a pain to manually maintain the tags file even for a small project; it would be all but impossible to do it for a large project like the Linux kernel. Luckily no one has to maintain a tags file. There are plenty of utilities to do that for you, usually bearing the name ctags, or some variant. One very popular choice is called &lt;a href="http://ctags.sourceforge.net/" rel="noopener noreferrer"&gt;Exuberant Ctags&lt;/a&gt;, which has the virtue of being extendable via regular expressions placed into a &lt;code&gt;.ctags&lt;/code&gt; file, but the drawback of not having been updated since 2009. Another increasingly popular option is &lt;a href="https://ctags.io/" rel="noopener noreferrer"&gt;Universal Ctags&lt;/a&gt;, which functions as a drop-in replacement for Exuberant Ctags and is actively maintained. I've had good luck with both.&lt;/p&gt;

&lt;p&gt;Tags files and the tools that generate them have a long history alongside advanced text editors. The history-of-computing nerd in me likes knowing that I'm using the same tool programmers have used since the early days of BSD Unix. It's also a testament to how strong of a solution they provide that folks are still using them 40 years later.&lt;/p&gt;

&lt;h2&gt;
  
  
  Generating &lt;code&gt;tags&lt;/code&gt; Files
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Manually
&lt;/h3&gt;

&lt;p&gt;When we speak of manually generating tags files, we're talking about using any one of the aforementioned tags utilities to generate the tags file. If you're the type of person who takes pleasure in at least understanding how to do things from the command line, you should consult the manual page for your selected tags utility. Take special note of the flags necessary to recurse through all of the subdirectories if you want to generate a tags file for an entire project in one shot.&lt;/p&gt;
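
&lt;p&gt;With Exuberant or Universal Ctags, the whole-project invocation is typically &lt;code&gt;ctags -R .&lt;/code&gt;, with &lt;code&gt;-R&lt;/code&gt; doing the recursion and &lt;code&gt;-f&lt;/code&gt; naming the output file. The output format is nothing exotic, just sorted, tab-separated text, which the following demonstrates (the entry is hand-written for illustration, not real ctags output):&lt;/p&gt;

```shell
# A tags file entry is: identifier, source file, and an Ex search
# command, with extension fields after the ;" marker. This line is
# hand-crafted; "ctags -R -f tags ." generates the real thing.
printf 'main\tsrc/main.c\t/^int main(void)$/;"\tf\n' > tags.example

# Count the tab-separated fields and echo the first two back
awk -F'\t' '{print NF, $1, $2}' tags.example   # prints: 4 main src/main.c
rm tags.example
```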

&lt;h3&gt;
  
  
  Automatically
&lt;/h3&gt;

&lt;p&gt;You can always use your ctags utility to generate your tags files from the command line, but that's a heck of a lot of back and forth between your text editor and your shell, and I doubt anyone who tries to do that will enjoy the experience for long. So let's look at ways to generate them automatically.&lt;/p&gt;

&lt;p&gt;If you only ever see yourself using tags files with Vim, then maybe a plugin will interest you. I used &lt;a href="https://bolt80.com/gutentags/" rel="noopener noreferrer"&gt;Gutentags&lt;/a&gt; for a long time, and found it "just works" as advertised. It has sane defaults, but lots of opportunities to customize its behavior, which you'll see if you visit the above link.&lt;/p&gt;

&lt;p&gt;In spite of that, I ended up moving in a different direction with managing my tags files. There were several reasons, but the main one is that I like to think of tags files as separate from Vim, something the text editor consumes without having to manage. It's an opinionated view of things, but I increasingly didn't like to configure my text editor to manage my tags files. So I went in search of another method, and what I found was the &lt;a href="https://tbaggery.com/2011/08/08/effortless-ctags-with-git.html" rel="noopener noreferrer"&gt;Tim Pope&lt;/a&gt; method, which I've since implemented myself. Rather than using Vim itself to manage tags files, this method uses local &lt;a href="https://git-scm.com/docs/githooks" rel="noopener noreferrer"&gt;Git hooks&lt;/a&gt; to rebuild the tags whenever any of a handful of common Git operations are performed. The result is a system that also just works, but does so in a half-a-dozen lines of shell script rather than a few &lt;em&gt;hundred&lt;/em&gt; lines of Vimscript. Gotta keep that Vim config tight.&lt;/p&gt;

&lt;p&gt;As a bonus, if you already use Tim Pope's &lt;a href="https://github.com/tpope/vim-fugitive" rel="noopener noreferrer"&gt;Fugitive Git plugin&lt;/a&gt; (and you should), this method handily places your tags file where that plugin tells Vim to look for it—in the &lt;code&gt;.git&lt;/code&gt; folder. Of course the shell-script approach is infinitely configurable, so you can ultimately place the tags file wherever you want. One could also tailor this for other distributed SCM tools (e.g., Mercurial).&lt;/p&gt;
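
&lt;p&gt;For flavor, here is a minimal sketch of that hook-based approach. The paths and ctags flags are assumptions patterned after Tim Pope's post, not a drop-in implementation; the helper lives at &lt;code&gt;.git/hooks/ctags&lt;/code&gt; and is invoked from the post-commit, post-merge, and post-checkout hooks:&lt;/p&gt;

```shell
# Sketch: install a tags-rebuilding helper into a repository's hook
# directory. (A throwaway directory stands in for your real repo here.)
dir=$(mktemp -d)
mkdir -p "$dir/.git/hooks"

cat > "$dir/.git/hooks/ctags" <<'EOF'
#!/bin/sh
# Rebuild tags inside .git, where Fugitive tells Vim to look for it
set -e
trap 'rm -f "$$.tags"' EXIT
ctags --tag-relative -R -f "$$.tags" --exclude=.git .
mv "$$.tags" .git/tags
EOF
chmod +x "$dir/.git/hooks/ctags"

# post-commit, post-merge, and post-checkout would each then just run:
#   .git/hooks/ctags >/dev/null 2>&1 &
test -x "$dir/.git/hooks/ctags" && echo "hook installed"

rm -r "$dir"
```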

&lt;p&gt;Generically speaking, there are other options as well. You could set a filesystem watcher to watch your project tree and run ctags any time a file changes. A task runner like &lt;cite&gt;Grunt&lt;/cite&gt; might be a viable option too, especially for web developers. The goal is to automate the task of (re)generating your tags file, so there is likely to be no shortage of options.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tying It All Together
&lt;/h2&gt;

&lt;p&gt;That brings us back to where we started, to the issue of code completion in Vim. Yes, Vim does offer native code completion (completing from tags is done with &lt;code&gt;C-x, C-]&lt;/code&gt; in insert mode). No, it's probably not as powerful as what you could get with something like a Jedi plugin à la YouCompleteMe, but I've found it satisfies my needs more often than not, with &lt;code&gt;:grep&lt;/code&gt; (or my own &lt;a href="https://git.danielmoch.com/vim-makejob" rel="noopener noreferrer"&gt;:GrepJob&lt;/a&gt;) filling the gap nicely in a more native fashion.&lt;/p&gt;

&lt;p&gt;There's more you can do here too. For instance, if you find yourself instinctively reaching for the tab key in order to complete a word, there is &lt;a href="https://github.com/ajh17/VimCompletesMe" rel="noopener noreferrer"&gt;VimCompletesMe&lt;/a&gt;, which takes advantage of all of Vim's built-in completions through the clever use of an &lt;a href="http://vimdoc.sourceforge.net/htmldoc/options.html#'omnifunc'" rel="noopener noreferrer"&gt;omni completion function&lt;/a&gt;. It works, but users do give up some control over selecting what data source Vim uses for a particular completion. I used this plugin for a while after I gave up on YouCompleteMe, but ultimately removed it because it effectively made the tab key ambiguous in Insert mode. Sometimes I wanted to insert an actual tab character, but got a completion instead.&lt;/p&gt;

&lt;p&gt;With all of this in place, it's natural to ask whether a language server is even necessary with Vim. I don't intend here to suggest an answer to that question, but I will say that many of the solutions to date for language server integration in Vim have seemed like more trouble than they're worth. That said, with the advent of Vim 8 and its asynchronous capabilities, there is headroom for these solutions to improve, and I expect the best among them to become more compelling in the near future.&lt;/p&gt;

&lt;p&gt;I do not recommend coming to Vim with a mindset of creating an IDE in your terminal. That said, Vim is a very powerful tool and if you invest the time to learn how it works it will take you very far. In other words, use Vim for all it's worth &lt;em&gt;before&lt;/em&gt; looking for a plugin to help you out. Anyone who (like me) jumps right to installing a bunch of plugins—whether in a spree of grabbing anything that looks interesting or just to copy someone else's configuration—will likely end up with an unmaintainable mess of a tool that doesn't work consistently, may not work at all, or works about as slow as the IDE you wanted to break free of.&lt;/p&gt;

</description>
      <category>vim</category>
      <category>ctags</category>
    </item>
    <item>
      <title>Zsh Compinit ... RTFM</title>
      <dc:creator>Daniel Moch</dc:creator>
      <pubDate>Fri, 09 Nov 2018 11:04:08 +0000</pubDate>
      <link>https://dev.to/djmoch/zsh-compinit--rtfm-47kg</link>
      <guid>https://dev.to/djmoch/zsh-compinit--rtfm-47kg</guid>
      <description>&lt;p&gt;This week I dealt with a problem that had been bugging me. I noticed that the time a took to start a new &lt;a href="https://www.zsh.org" rel="noopener noreferrer"&gt;Zsh&lt;/a&gt; terminal session went from essentially instant to around 4 seconds &lt;a href="https://danielmoch.com/blog/2018/11/zsh-compinit-rtfm/#id5" rel="noopener noreferrer"&gt;[1]&lt;/a&gt;. I take some pride in running a lightweight system, so the thought of having to wait a few seconds for my terminal emulator to display a prompt feels like a personal affront. My system wasn't just behaving badly, it was challenging me by way of insult.&lt;/p&gt;

&lt;p&gt;Accepting the challenge laid before me, I took to &lt;a href="https://duckduckgo.com" rel="noopener noreferrer"&gt;my favorite search engine&lt;/a&gt; to see what tools were available to help me understand what was suddenly performing so poorly. Oh, okay. &lt;a href="https://xebia.com/blog/profiling-zsh-shell-scripts/" rel="noopener noreferrer"&gt;This post&lt;/a&gt; says that Zsh includes a script profiler. All I need to do is turn it on in my &lt;code&gt;.zshrc&lt;/code&gt; file, like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;zmodload zsh/zprof
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Maybe this challenge won't be so challenging after all. So I restart my shell and run the &lt;code&gt;zprof&lt;/code&gt; command, and I see this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   num  calls                time                       self            name
   -----------------------------------------------------------------------------------
    1)    1         203.91   203.91   45.30%    203.91   203.91   45.30%  compdump
    2)  744         109.84     0.15   24.40%    109.84     0.15   24.40%  compdef
    3)    1         448.41   448.41   99.62%    108.01   108.01   24.00%  compinit
    4)    2          26.75    13.38    5.94%     26.75    13.38    5.94%  compaudit
    5)    1           1.37     1.37    0.30%      1.37     1.37    0.30%  colors
    6)    2           0.12     0.06    0.03%      0.12     0.06    0.03%  set_title
    7)    1           0.15     0.15    0.03%      0.08     0.08    0.02%  preexec
    8)    1           0.07     0.07    0.02%      0.03     0.03    0.01%  precmd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Okay, so initializing the completion system (i.e., all of the items with names beginning with "comp") is taking a combined 99.64% of the startup time. No need to do any fancy Pareto analysis here. Something is getting borked initializing Zsh's much-vaunted completion system.&lt;/p&gt;

&lt;p&gt;So I &lt;a href="https://duckduckgo.com" rel="noopener noreferrer"&gt;DuckDuckWent&lt;/a&gt; &lt;a href="https://danielmoch.com/blog/2018/11/zsh-compinit-rtfm/#id6" rel="noopener noreferrer"&gt;[2]&lt;/a&gt; and found some results that purported to fix the issue, except that they really only half fixed it for me (literally cutting my start time in half ... an improvement for sure, but not good enough). All of the advice &lt;a href="https://danielmoch.com/blog/2018/11/zsh-compinit-rtfm/#id7" rel="noopener noreferrer"&gt;[3]&lt;/a&gt; seemed to point in the same direction, which was basically:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   autoload &lt;span class="nt"&gt;-Uz&lt;/span&gt; compinit
   &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; +&lt;span class="s1"&gt;'%j'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;stat&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s1"&gt;'%Sm'&lt;/span&gt; &lt;span class="nt"&gt;-t&lt;/span&gt; &lt;span class="s1"&gt;'%j'&lt;/span&gt; ~/.zcompdump&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
     &lt;/span&gt;compinit
   &lt;span class="k"&gt;else
     &lt;/span&gt;compinit &lt;span class="nt"&gt;-C&lt;/span&gt;
   &lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The effect of which is to only check &lt;code&gt;.zcompdump&lt;/code&gt; when it's more than 24 hours old, and otherwise to simply initialize the completion system without referring to it. But wait, why should this be necessary? The whole point of &lt;code&gt;.zcompdump&lt;/code&gt; is to speed up compinit. If ignoring it becomes the fix for slow compinit, then why should I ever use it?&lt;/p&gt;

&lt;p&gt;So I read the effing manual, and indeed my instincts were correct. To quote:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;To speed up the running of compinit, it can be made to produce a dumped configuration that will be read in on future invocations. ... The dumped file is .zcompdump ...&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But why is it not working the way the manual describes? And why do so many other people seem to see similar behavior to me? Let's read a little further.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If the number of completion files changes, compinit will recognise this and produce a new dump file.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Aha! So by implication one needs to be careful how they go about initializing the completion system. If you do something stupid like, say, eval a completion script (effectively initializing the completion system) before you update your &lt;code&gt;fpath&lt;/code&gt; &lt;a href="https://danielmoch.com/blog/2018/11/zsh-compinit-rtfm/#id8" rel="noopener noreferrer"&gt;[4]&lt;/a&gt; and otherwise run &lt;code&gt;compinit&lt;/code&gt;, then the number of completion files Zsh sees will differ by one between when you eval the completion script and when you later call &lt;code&gt;compinit&lt;/code&gt;, meaning you'll fully run &lt;code&gt;compdump&lt;/code&gt; every time you source your &lt;code&gt;.zshrc&lt;/code&gt; ... twice.&lt;/p&gt;

&lt;p&gt;Once I understood this, the fix was obvious: take the completion script I was manually eval-ing, turn it into a function script, and add it to my &lt;code&gt;fpath&lt;/code&gt; before running &lt;code&gt;compinit&lt;/code&gt; so it gets picked up with the rest of the completion system initialization. Now &lt;code&gt;.zcompdump&lt;/code&gt; isn't updated every time I spawn a new shell, and the initialization time dropped to 60-90 ms, an improvement of over 90% in the worst case.&lt;/p&gt;
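
&lt;p&gt;In .zshrc terms, the fix looks something like this (the directory and function names are illustrative):&lt;/p&gt;

```shell
# ~/.zshrc fragment -- order matters: extend fpath first, then run
# compinit exactly once. ~/.zsh/completions holds autoloadable
# completion functions, e.g. a file named _mytool.
fpath=(~/.zsh/completions $fpath)

autoload -Uz compinit
compinit   # .zcompdump now stays valid between shells

# Anti-pattern this replaces: eval-ing a completion script before (or
# instead of) compinit, which changes the completion-file count on
# every startup and forces a full compdump each time.
```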

&lt;h2&gt;
  
  
  Sidebar: Oh My Zsh
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://ohmyz.sh/" rel="noopener noreferrer"&gt;Oh My Zsh&lt;/a&gt; seems by all accounts to be a very popular framework for managing your Zsh configuration, but while I don't personally use it, my impression is that it gives you enough rope to hang yourself with in this respect. I could see how it would be easy were I to source a bunch of third-party helper scripts in my &lt;code&gt;.zshrc&lt;/code&gt; that were written by a half-dozen different people for the completion system initialization to end up in a similar state. Just something to keep in mind if the themes, plugins, and other candy offered by Oh My Zsh is too much for you to pass up.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://danielmoch.com/blog/2018/11/zsh-compinit-rtfm/#id1" rel="noopener noreferrer"&gt;[1]&lt;/a&gt; Later measurement indicated the actual time was 1.0-1.5s, but either way it felt like forever.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://danielmoch.com/blog/2018/11/zsh-compinit-rtfm/#id2" rel="noopener noreferrer"&gt;[2]&lt;/a&gt; &lt;a href="https://octodon.social/@jordyd/100189663891719203" rel="noopener noreferrer"&gt;https://octodon.social/@jordyd/100189663891719203&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://danielmoch.com/blog/2018/11/zsh-compinit-rtfm/#id3" rel="noopener noreferrer"&gt;[3]&lt;/a&gt; Probably the best written example I found was at &lt;a href="https://carlosbecker.com/posts/speeding-up-zsh/" rel="noopener noreferrer"&gt;https://carlosbecker.com/posts/speeding-up-zsh/&lt;/a&gt;, from which I pulled the above quote.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://danielmoch.com/blog/2018/11/zsh-compinit-rtfm/#id4" rel="noopener noreferrer"&gt;[4]&lt;/a&gt; &lt;code&gt;fpath&lt;/code&gt; is the Zsh function path, which is similar to your system path, but for sourcing function scripts rather than executables.&lt;/p&gt;

</description>
      <category>zsh</category>
      <category>shell</category>
    </item>
    <item>
      <title>Getting Started On Mastodon</title>
      <dc:creator>Daniel Moch</dc:creator>
      <pubDate>Thu, 18 Oct 2018 10:29:45 +0000</pubDate>
      <link>https://dev.to/djmoch/getting-started-on-mastodon-1mhk</link>
      <guid>https://dev.to/djmoch/getting-started-on-mastodon-1mhk</guid>
      <description>&lt;p&gt;If I've identified a trend in my social media preferences, it's that I prefer not to use social media. That's not to say that I &lt;em&gt;don't&lt;/em&gt; use it, just that I often feel conflicted when I do. On the one hand, this is where my friends are, and online networks have become a sort of pseudo-public square. (My choice of words there is deliberate ... "pseudo" as in "fake." I actually don't think online networks work as a true replacement for a public square, but that's a post for another time.) Skip out on social media altogether and you basically opt-out of a lot of opportunities to rub elbows with people, which, despite all of the good and the bad that entails, I still think is worthwhile.&lt;/p&gt;

&lt;p&gt;On the other hand, popular social networks are for-profit companies that invariably make their money by turning their users into their product, which is packaged and sold to online advertisers. I don't know about you, but to me that feels a bit dehumanizing. Sure, that model of business existed long before social networks did, at least in the abstract, but let's not kid ourselves—the way we're packaged and sold to advertisers is far different in the hands of social networks than at any time in history. Magazines and television networks could guess at the kinds of readers and viewers they attracted, and companies like Nielsen could even provide some hard data to back up their guesswork, but what they didn't have was gobs of very personal data from which to draw conclusions about us. Apart from our reading/viewing habits, older forms of media had comparatively little to work with.&lt;/p&gt;

&lt;p&gt;(As an aside, this is why Google and Facebook are such valuable companies with such obscenely high market caps. It's not because of the value they provide to their users. It's because of the value they provide to advertisers. If what they were doing wasn't such a marked departure from the way ad targeting was done in the past, then these companies wouldn't be so financially successful. Don't buy the argument that what these companies do is the same as magazines and television networks before them.)&lt;/p&gt;

&lt;p&gt;So here's the bind: I can either participate in social media in order to "stay connected," and deal with the icky feeling of being someone's product, or I can opt out and look like an increasingly irrelevant Luddite, telling Facebook and Google to get off my lawn. But here we are, this is just the deal we're all being given, and there's nothing we can do about it, right?&lt;/p&gt;

&lt;p&gt;Wrong.&lt;/p&gt;

&lt;h2&gt;Enter Mastodon&lt;/h2&gt;

&lt;p&gt;What's Mastodon? There are a few perspectives from which to tackle that question. To wit:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Technically speaking, Mastodon is one of a few social media networks that participate in what's sometimes referred to as the "Fediverse," which is a term used to describe the common technologies/protocols underpinning them. These protocols allow Mastodon and its sister networks to be both decentralized and federated. Taken together, it's useful to think of federated, decentralized networks as being like email. I might have an email address with Google, but I can use that address to send email to anyone at any other email provider. Loosely speaking, multiple providers/operators is what we mean by "decentralized;" and the fact that they can all talk to each other is what we mean by "federated."&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To users, Mastodon works a lot like Twitter. You can tweet (which in Mastodon is called—rather unfortunately in my opinion—tooting), retweet (or boost, in Mastodon parlance), and send "private" messages to other users. Toots in Mastodon can be longer than tweets, 500 characters compared to 280. Hashtags work too.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In terms of look-and-feel, the user experience on a typical Mastodon server's website reminds me a lot of Tweetdeck. That said, there are custom user interfaces available as well (not to mention mobile apps).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Because of the above—and particularly because of Mastodon's decentralized, federated nature—there's a lot to recommend it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Did I Mention No Ads?&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;Getting Started&lt;/h2&gt;

&lt;p&gt;I actually started this post because I wanted to have something to send to folks either asking what Mastodon is or who have just joined and are a bit confused about next steps. There's a lot already out there, so my approach with what remains will be to link to other introductions. If I find another page that fills a gap, I'll update this page with a link.&lt;/p&gt;

&lt;p&gt;Without further ado:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://github.com/tootsuite/documentation/blob/master/Using-Mastodon/FAQ.md" rel="noopener noreferrer"&gt;Mastodon FAQ&lt;/a&gt; – A more in-depth explainer of what Mastodon is and how it works.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/tootsuite/documentation#using-mastodon" rel="noopener noreferrer"&gt;Using Mastodon&lt;/a&gt; – Links to the above FAQ, and list of instances, a list of mobile apps, and a user guide&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/joyeusenoelle/GuideToMastodon/" rel="noopener noreferrer"&gt;Guide to Mastodon&lt;/a&gt; – A more in-depth user guide and FAQ&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>mastodon</category>
    </item>
    <item>
      <title>Hardening Services With Systemd</title>
      <dc:creator>Daniel Moch</dc:creator>
      <pubDate>Fri, 05 Oct 2018 13:13:03 +0000</pubDate>
      <link>https://dev.to/djmoch/hardening-services-with-systemd-2md7</link>
      <guid>https://dev.to/djmoch/hardening-services-with-systemd-2md7</guid>
      <description>&lt;p&gt;Systemd gets a lot of hate. There's a lot of heat and very little light in those discussions, in my opinion, and I don't expect that this post will change the mind of anyone who has already decided to hate Systemd. My goal here is far more modest. I want to share a feature of the new init system that I find really compelling, and that I hadn't seen discussed pretty much anywhere: Systemd's native ability to leverage the Linux kernel's namespacing features to run services in a sandboxed environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Linux_namespaces" rel="noopener noreferrer"&gt;Linux namespaces&lt;/a&gt; are the kernel feature that enables partitioning everything from processes, to the filesystem, to the network stack. A process operating in one namespace will have a different view of the system's resources than a process operating in another namespace. Probably the best known application of Linux namespaces are container platforms such as &lt;a href="https://www.docker.com/" rel="noopener noreferrer"&gt;Docker&lt;/a&gt;. While Docker is ostensibly a devops platform enabling rapid deployment of applications, the underlying kernel namespace features can be applied to security as well, allowing processes to be effectively partitioned off from one another and, to varying degrees, the underlying system. That's what folks mean when they talk about running a process or service in a sandboxed environment.&lt;/p&gt;

&lt;p&gt;So what if I'm a sysadmin who wants to run a service in a sandbox? This would traditionally be done by setting up a &lt;a href="https://wiki.archlinux.org/index.php/Change_root" rel="noopener noreferrer"&gt;chroot&lt;/a&gt; environment. But another option, one that offers a bit more flexibility, would be to run the service in a mount namespace, and then reconfigure the existing filesystem within the namespace to apply least privilege to the data the service needs and hide the data the service doesn't need access to.&lt;/p&gt;

&lt;p&gt;With Systemd, you can configure your service according to either of the above scenarios by simply adding a couple of lines to the service file. Say I want my service to run within a chroot located at &lt;code&gt;/srv/http&lt;/code&gt;. Assuming the chroot is set up appropriately so that the service has access to all of the data it needs within its folder hierarchy, then all I need to do is add the line &lt;code&gt;RootDirectory=/srv/http&lt;/code&gt; to the &lt;code&gt;[Service]&lt;/code&gt; section of the Systemd service file.&lt;/p&gt;
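
&lt;p&gt;As a sketch of that first scenario (the service name and &lt;code&gt;ExecStart&lt;/code&gt; path here are hypothetical; only the &lt;code&gt;RootDirectory&lt;/code&gt; line is the point), the unit file might look like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;[Unit]
Description=Example HTTP daemon confined to a chroot

[Service]
# Paths below, including ExecStart, are resolved relative to the new root,
# so the binary and any libraries it needs must exist under /srv/http.
RootDirectory=/srv/http
ExecStart=/bin/httpd

[Install]
WantedBy=multi-user.target
&lt;/code&gt;&lt;/pre&gt;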

&lt;p&gt;The second scenario is a bit more interesting. Say I'm running a web front-end for my Git service, and that my service needs access to &lt;code&gt;/dev/urandom&lt;/code&gt;, &lt;code&gt;/tmp&lt;/code&gt;, and read-only access to &lt;code&gt;/home/git&lt;/code&gt;. Systemd offers several options that allow you to do this in a way that exposes little else to the service. Take the below service file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="err"&gt;...&lt;/span&gt;
&lt;span class="nn"&gt;[Service]&lt;/span&gt;
&lt;span class="py"&gt;PrivateDevices&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;yes&lt;/span&gt;
&lt;span class="py"&gt;PrivateTmp&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;yes&lt;/span&gt;
&lt;span class="py"&gt;ProtectHome&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;tmpfs&lt;/span&gt;
&lt;span class="py"&gt;BindReadOnlyPaths&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;/home/git&lt;/span&gt;
&lt;span class="err"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These options implicitly place the service within a mount namespace, meaning we can set up our file hierarchy however we like. In the above example, &lt;code&gt;PrivateDevices&lt;/code&gt; creates a private &lt;code&gt;/dev&lt;/code&gt; structure that only provides access to pseudo-devices like &lt;code&gt;random&lt;/code&gt;, &lt;code&gt;null&lt;/code&gt;, etc. Critically, disk devices are not visible to the service when the &lt;code&gt;PrivateDevices&lt;/code&gt; option is used. Likewise, &lt;code&gt;PrivateTmp&lt;/code&gt; creates a &lt;code&gt;/tmp&lt;/code&gt; folder that is empty except for what the service itself writes. &lt;code&gt;ProtectHome&lt;/code&gt; has a few options, but the &lt;code&gt;tmpfs&lt;/code&gt; option, according to the documentation, is designed for pairing with the &lt;code&gt;BindPaths/BindReadOnlyPaths&lt;/code&gt; options in order to selectively provide access to folders within &lt;code&gt;/home&lt;/code&gt;. Since all we need there is read-only access to the &lt;code&gt;git&lt;/code&gt; user, that's exactly what we provide, nothing more and nothing less.&lt;/p&gt;

&lt;p&gt;This is all great, but it admittedly only provides mount namespacing for the service. This is probably sufficient in most cases, but Systemd does offer options for network and user namespacing. Readers looking to utilize those, or looking for a comprehensive description of the mount namespacing options, are encouraged to read the &lt;a href="http://jlk.fjfi.cvut.cz/arch/manpages/man/core/systemd/systemd.exec.5.en" rel="noopener noreferrer"&gt;systemd.exec man page&lt;/a&gt;.&lt;/p&gt;
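
&lt;p&gt;For the curious, a sketch of what those look like (both are documented &lt;code&gt;systemd.exec&lt;/code&gt; settings; the rest of the unit is elided):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;...
[Service]
# Private network namespace: the service only sees a loopback interface.
PrivateNetwork=yes
# Private user namespace: users other than root and the service's own
# user appear as "nobody" inside the sandbox.
PrivateUsers=yes
...
&lt;/code&gt;&lt;/pre&gt;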

</description>
      <category>linux</category>
      <category>systemd</category>
      <category>security</category>
    </item>
    <item>
      <title>Structural Problems With For-Profit Social Media</title>
      <dc:creator>Daniel Moch</dc:creator>
      <pubDate>Thu, 22 Mar 2018 20:30:06 +0000</pubDate>
      <link>https://dev.to/djmoch/structural-problems-with-for-profit-social-media-2hg</link>
      <guid>https://dev.to/djmoch/structural-problems-with-for-profit-social-media-2hg</guid>
      <description>&lt;p&gt;Many of you who know me personally know that I've become increasingly concerned with online privacy over the past few years. It's a topic that is near and dear to my heart, because I think the privacy, including online privacy, is very important to a healthy civic life. I hope to write a lot more about that going forward, but for now I want to introduce you to someone whose work I'm only beginning to dig into myself, but who has an awful lot of really smart things to say on the topic of online privacy (among other things): Zeynep Tufekci.&lt;/p&gt;

&lt;p&gt;I'll spare you my attempt at a mini-biography, mostly because I'm about the least qualified person in the world to deliver one. Rather, I'll point you to this &lt;a href="https://www.ted.com/talks/zeynep_tufekci_we_re_building_a_dystopia_just_to_make_people_click_on_ads" rel="noopener noreferrer"&gt;TED talk&lt;/a&gt; that she did in October of last year, where she talks about the infrastructure that companies like Facebook and YouTube control and how much they're able to do with the data they collect on us.&lt;/p&gt;

&lt;p&gt;Here are a couple of quotes I took down as I was listening:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It's not that the people that run Facebook or Google are maliciously and deliberately trying to make the world more polarized and encourage extremism. ... But it's not the intent and the statements that [these folks] are making that matter; it's the structures and the business models they are building.&lt;/li&gt;
&lt;li&gt;Either Facebook is a half-a-trillion dollar con and ads don't work on the site ... or its power of influence is of great concern.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Her prescription seems on point: "We need a digital economy where our data and our attention is not for sale to the highest bidding authoritarian or demagogue." That's actually just the sound-bite synopsis of her more nuanced call for businesses, consumers, and governments all to work together to determine what our collective vision is for all of the algorithmic magic (read: artificial intelligence) underlying these systems, how they should operate, and the moral systems that ultimately must be coded into them.&lt;/p&gt;

&lt;p&gt;Evangelical Christians in the circles I run in have begun in the past few years to seriously confront the fact that cultural structures often exist to keep the powerful in power. I think a lot of us are going to have to take a look at what the combination of free markets and the internet has caused to grow up, and then have a very serious discussion about what might need to change.&lt;/p&gt;

</description>
      <category>technology</category>
      <category>policy</category>
      <category>privacy</category>
      <category>leadership</category>
    </item>
  </channel>
</rss>
