<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Philipp Trommler</title>
    <description>The latest articles on DEV Community by Philipp Trommler (@ferruck).</description>
    <link>https://dev.to/ferruck</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F6900%2F5089996.jpeg</url>
      <title>DEV Community: Philipp Trommler</title>
      <link>https://dev.to/ferruck</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ferruck"/>
    <language>en</language>
    <item>
      <title>Sitemaps for (Small) Static Sites</title>
      <dc:creator>Philipp Trommler</dc:creator>
      <pubDate>Sat, 23 May 2020 16:57:05 +0000</pubDate>
      <link>https://dev.to/ferruck/sitemaps-for-small-static-sites-583</link>
      <guid>https://dev.to/ferruck/sitemaps-for-small-static-sites-583</guid>
      <description>&lt;p&gt;You may know that I run a &lt;a href="https://cv.philipp-trommler.me"&gt;small CV site&lt;/a&gt; and since I'd like our precious observers to regularly scan my toy sites, I tend to provide a sitemap for them. I've implemented the updates of those sitemaps with a git commit hook I'd like to share with you here.&lt;/p&gt;

&lt;p&gt;First of all, I've created a standard sitemap in the root of the repository that feeds my CV website. It looks like the following and should hold no surprises if you're used to sitemaps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="cp"&gt;&amp;lt;?xml version="1.0" encoding="utf-8"?&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;urlset&lt;/span&gt; &lt;span class="na"&gt;xmlns=&lt;/span&gt;&lt;span class="s"&gt;"http://www.sitemaps.org/schemas/sitemap/0.9"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;url&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;loc&amp;gt;&lt;/span&gt;https://cv.philipp-trommler.me/&lt;span class="nt"&gt;&amp;lt;/loc&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;lastmod&amp;gt;&lt;/span&gt;2020-05-21&lt;span class="nt"&gt;&amp;lt;/lastmod&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;changefreq&amp;gt;&lt;/span&gt;monthly&lt;span class="nt"&gt;&amp;lt;/changefreq&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;priority&amp;gt;&lt;/span&gt;1.0&lt;span class="nt"&gt;&amp;lt;/priority&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;/url&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/urlset&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Updating this file by hand whenever I change something on my site seems like a cumbersome and error-prone task, so I've written a really small shell script that I use as a pre-commit hook by placing it in &lt;code&gt;.git/hooks/pre-commit&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/sh&lt;/span&gt;
&lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s2"&gt;"s#&amp;lt;lastmod&amp;gt;.&lt;/span&gt;&lt;span class="se"&gt;\*&lt;/span&gt;&lt;span class="s2"&gt;&amp;lt;/lastmod&amp;gt;#&amp;lt;lastmod&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; &lt;span class="nt"&gt;--rfc-3339&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;&amp;lt;/lastmod&amp;gt;#g"&lt;/span&gt; sitemap.xml
git add sitemap.xml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This script replaces everything inside &lt;code&gt;&amp;lt;lastmod&amp;gt;&amp;lt;/lastmod&amp;gt;&lt;/code&gt; with the output of &lt;code&gt;date --rfc-3339=date&lt;/code&gt;, which is the current date in the &lt;code&gt;YYYY-MM-DD&lt;/code&gt; format expected by the search engines, and adds the resulting change to the index. The commit I'm currently creating therefore automatically contains the current date in the sitemap.&lt;/p&gt;
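&lt;p&gt;To see the substitution in isolation, here's a small sketch that runs the same &lt;code&gt;sed&lt;/code&gt; command against a scratch copy (the &lt;code&gt;/tmp&lt;/code&gt; file name is just for illustration; GNU &lt;code&gt;sed&lt;/code&gt; and &lt;code&gt;date&lt;/code&gt; are assumed):&lt;/p&gt;

```shell
# Reproduce the hook's substitution on a throwaway file (path is illustrative).
printf '<lastmod>2020-05-21</lastmod>\n' > /tmp/sitemap-demo.xml
# Replace the stale date with today's date in YYYY-MM-DD format (GNU sed -i, GNU date).
sed -i "s#<lastmod>.*</lastmod>#<lastmod>$(date --rfc-3339=date)</lastmod>#g" /tmp/sitemap-demo.xml
cat /tmp/sitemap-demo.xml
```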

&lt;p&gt;Of course this solution doesn't scale to bigger sites; there you'd be better off with a full-fledged static site generator. But maybe you, too, have some kind of one-pager, portfolio site or online CV that you'd like to upgrade with an automatic sitemap.&lt;/p&gt;
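&lt;p&gt;If your site grows to a handful of pages but a full static site generator still feels like overkill, the hook could also regenerate the whole sitemap from a list of paths. A minimal sketch under that assumption (the page paths are made up, and the XML declaration is omitted for brevity):&lt;/p&gt;

```shell
# Sketch: regenerate a multi-URL sitemap from a hard-coded list of page paths.
# The paths below are hypothetical; adjust them to your own site.
{
  printf '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
  for page in "" "about/" "projects/"; do
    printf '    <url>\n'
    printf '        <loc>https://cv.philipp-trommler.me/%s</loc>\n' "$page"
    printf '        <lastmod>%s</lastmod>\n' "$(date --rfc-3339=date)"
    printf '    </url>\n'
  done
  printf '</urlset>\n'
} > /tmp/sitemap-multi.xml
cat /tmp/sitemap-multi.xml
```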

&lt;p&gt;This was a rather short blog entry, but since I haven't written anything in quite some time I thought I'd share it with you nonetheless. If you have any improvements or found an error, please let me know!&lt;/p&gt;

</description>
      <category>web</category>
      <category>git</category>
    </item>
    <item>
      <title>A Response to a Response to Hello World</title>
      <dc:creator>Philipp Trommler</dc:creator>
      <pubDate>Sat, 11 Jan 2020 21:07:32 +0000</pubDate>
      <link>https://dev.to/ferruck/a-response-to-a-response-to-hello-world-5cb8</link>
      <guid>https://dev.to/ferruck/a-response-to-a-response-to-hello-world-5cb8</guid>
      <description>&lt;p&gt;Recently a blog post entitled "&lt;a href="https://www.doxsey.net/blog/a-response-to-hello-world"&gt;A Response to Hello World&lt;/a&gt;" by Caleb Doxsey has been making the rounds. In it he tries to dissect Drew DeVault's arguments for software simplicity in his own blog post "&lt;a href="https://drewdevault.com/2020/01/04/Slow.html"&gt;Hello World&lt;/a&gt;".&lt;/p&gt;

&lt;p&gt;First things first: This isn't meant to be a personal attack on Caleb Doxsey in any way. The points he makes are all reasonable and – depending on your domain and background – right. I just want to shed light on views and opinions that are, in my opinion, declining in popularity and acceptance. And I want to point out that this is probably all Drew wanted to do with his post as well.&lt;/p&gt;

&lt;p&gt;Meanwhile, Drew himself has posted a &lt;a href="https://drewdevault.com/2020/01/08/Re-Slow.html"&gt;follow-up&lt;/a&gt; on the topic in which he explains the motivation behind his initial article in a bit more detail. The biggest point he makes is that his first post was in no way meant as a comparison or – as he puts it – a benchmark between the different programming languages he mentioned, but simply as an emphasis on one point: complexity. As his measure of complexity he used the number of syscalls issued by a program for a specific task.&lt;/p&gt;

&lt;p&gt;I agree with Drew that the software complexity induced by the abstraction and layering done by nearly all developers in recent years is eating up resources everywhere, and that only awareness of the problem can lead us to a better and more performant future in software development. I want to elaborate on this by commenting on Caleb's blog post, because I think he gives pretty good responses to the points brought up by Drew, just from a different point of view.&lt;/p&gt;

&lt;h1&gt;
  
  
  Perceptible Performance Improvements
&lt;/h1&gt;

&lt;p&gt;In his first part, Caleb talks about the three downsides of complexity that Drew mentions and elaborates on them one after another.&lt;/p&gt;

&lt;h2&gt;
  
  
  Harder to Debug Programs
&lt;/h2&gt;

&lt;p&gt;I have to fully agree with Caleb here that higher-level languages such as Go are not necessarily harder to debug than lower-level ones just because they issue more syscalls. Debuggability is influenced much more by the overall architectural complexity of the software, by whether or not parallelism is involved and – most importantly! – by the tooling available.&lt;/p&gt;

&lt;p&gt;That a high-level language gets you more Stack Overflow answers for your problems is a questionable argument, though.&lt;/p&gt;

&lt;h2&gt;
  
  
  More Disk Space is Used
&lt;/h2&gt;

&lt;p&gt;Caleb talks this argument down, but I have to stand up for it. The ever-increasing size of programs is not just annoying; for me as an embedded developer it's a real problem. eMMCs are &lt;em&gt;always&lt;/em&gt; too small, and they won't get bigger just because you want to ship that fancy Go application. Binary size &lt;em&gt;heavily&lt;/em&gt; impacts the startup performance of embedded devices – yes, all these bits and bytes have to be read from a slow eMMC and put into memory when the device starts up! Storing build artifacts gets more and more expensive, and the demands on the network capacity of your build infrastructure increase steadily.&lt;/p&gt;

&lt;p&gt;Have you ever been annoyed by the way-too-small integrated memory and the always-full RAM of your Android device? Well...&lt;/p&gt;

&lt;h2&gt;
  
  
  Worse User Experience
&lt;/h2&gt;

&lt;p&gt;Drew points out that more complex programs lead to longer execution times and that, in turn, leads to a worse user experience. And this is true, full stop.&lt;/p&gt;

&lt;p&gt;Caleb tries to relativize this argument by saying that, while some of the programs initially posted by Drew do in fact need more time to finish, all the times are still fast &lt;em&gt;enough&lt;/em&gt;. This may be right in this special case, but it's definitely not true in general. Further, he tries to invalidate the argument by improving the Go version of the test provided by Drew, and misses the actual point in multiple ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;He compares his optimized version against a not-in-the-same-way-optimized version of the assembly test.&lt;/li&gt;
&lt;li&gt;By optimizing the program in the first place he acknowledges Drew's main point that there's complexity in that program that the programmer needs to be aware of and that needs to be worked around in order to achieve acceptable performance.&lt;/li&gt;
&lt;li&gt;He works around startup complexity and argues that it's thus negligible. But startup performance in particular is essential! (Have you ever sat in front of your under-powered Windows machine waiting for the first call to Python that day to finish?)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;He's right, though, that higher-level languages make it easier to develop multi-threaded programs and thus to actually utilize your whole hardware. But multi-threading isn't the solution to all problems, in particular when a more efficient environment could solve the same problem easily with just one thread.&lt;/p&gt;

&lt;h1&gt;
  
  
  Compilation Trade-Offs
&lt;/h1&gt;

&lt;p&gt;In his next part, Caleb talks about the costs of more aggressive compiler optimization. Of course it's true that optimization costs time, can disrupt your workflow and may even cost you a nice little feature in your language. And indeed that's an argument I hear quite often. Still, I disagree.&lt;/p&gt;

&lt;p&gt;The code I've written professionally may run millions to billions of times. If a compiler optimization takes me one or two seconds longer but improves the performance of every one of those countless runs, is that really that high a cost? Maybe folks should move away from compiler-driven development and try to write (nearly) error-free code in the first place? That would probably reduce compile times far more than leaving out some optimizations.&lt;/p&gt;

&lt;h1&gt;
  
  
  An Analysis of Syscalls
&lt;/h1&gt;

&lt;p&gt;Here, Caleb breaks down the syscall-induced complexity found by Drew. He categorizes and classifies the different syscalls found in the Go binary and comes to the conclusion that they're all useful. Again, their usefulness depends on what you're expecting, but nevertheless I have some comments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scheduling
&lt;/h2&gt;

&lt;p&gt;Whilst it's nice that Go handles multi-threading at its core and makes it a first-class programming paradigm, forcing the developer to use it is, in my opinion, not a good thing. Many developers would be really surprised by what a single-threaded application can achieve when you know the right tools and patterns. Multi-threading comes with its own costs and its own complexity, and it increases the possibilities for errors &lt;em&gt;by magnitudes&lt;/em&gt;. And all this just to find most of your threads hanging in &lt;code&gt;poll(2)&lt;/code&gt; all of the time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Garbage Collection
&lt;/h2&gt;

&lt;p&gt;Yes, it's true, garbage collection can protect you from all kinds of memory errors. But it comes at a cost, and the developer should have the right to choose. Additionally, many of the bugs handled by garbage collection can be found by static analysis and/or &lt;code&gt;valgrind&lt;/code&gt; without decreasing the runtime performance of your shipped binaries. Yes, most programmers don't use these tools, so maybe convincing them would be more appropriate than passing that debt off to the end user.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stdin, Stdout, Stderr
&lt;/h2&gt;

&lt;p&gt;I don't get Caleb's point here. Yes, these file descriptors are non-blocking. Still they can be &lt;code&gt;poll(2)&lt;/code&gt;ed and if you need blocking behaviour you can still use &lt;code&gt;fcntl(2)&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Signals
&lt;/h2&gt;

&lt;p&gt;Converting all signals into run-time panics indeed sounds useful since proper signal handling is a permanent source of errors and the cost seems appropriate. Still, this should be optional (maybe opt-out).&lt;/p&gt;

&lt;h2&gt;
  
  
  Executable Path
&lt;/h2&gt;

&lt;p&gt;This seems like solving a non-problem to me.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;As I said in the beginning, all of the above, all the arguments made and countered, &lt;em&gt;heavily&lt;/em&gt; depend on who you are, on the domain you're in and on your goals. I don't want to say that my points are right, I just want to point out that &lt;em&gt;these are valid points, too&lt;/em&gt;. Today, talking about software bloat puts you into a niche that's hard to get out of. You're not one of the cool kids. And that's not justified: Not every computer is your big iMac; in fact, IoT devices and edge computing are on the upswing. Software bloat is &lt;em&gt;real&lt;/em&gt; in embedded development.&lt;/p&gt;

&lt;p&gt;The one takeaway of this article is: Whenever you're introducing complexity, step back and take a moment to think about whether it's necessary; think about your users and think about side effects. Often software bloat just passes the bill for ease of development on to the end user, and that's not fair. Making your users buy new hardware just so you can use that fancy new language feature is mean and destructive. As a developer you're a service provider; software development is not an end in itself.&lt;/p&gt;

</description>
      <category>opinion</category>
      <category>developerexpectations</category>
      <category>efficiency</category>
      <category>userexperience</category>
    </item>
    <item>
      <title>My Opinion on the "Blauer Engel" for Software</title>
      <dc:creator>Philipp Trommler</dc:creator>
      <pubDate>Mon, 30 Dec 2019 21:58:57 +0000</pubDate>
      <link>https://dev.to/ferruck/my-opinion-on-the-blauer-engel-for-software-433p</link>
      <guid>https://dev.to/ferruck/my-opinion-on-the-blauer-engel-for-software-433p</guid>
      <description>&lt;p&gt;For some time now, a group of experts has been working on evaluation criteria for the "Blauer Engel" seal for software. As usual with this award, the goal is to reward low resource consumption. So far, so reasonable. But...&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Background:&lt;/strong&gt; The seal called "Blauer Engel" is a more or less prominent German governmental award given to sustainable products and services after a voluntary application by the manufacturer, for fulfilling product-category-specific standards.&lt;/p&gt;

&lt;p&gt;The German Federal Ministry for the Environment, Nature Conservation and Nuclear Safety (sic!) wants to award its "Blauer Engel" environmental seal to software in the future. To receive this more or less coveted award, the software must meet minimum energy efficiency requirements in previously defined and standardized usage scenarios (including an idle scenario). In addition, advertising is taboo and the exchange of data across open interfaces must be possible. The final decisive point for the award of the seal is the runnability of the software on a five-year old, so-called "reference system".&lt;/p&gt;

&lt;p&gt;In principle, it is of course to be welcomed that the ever more widespread, often senseless growth of software is to be counteracted at least a little. In my opinion, however, the commission misses its target with its eyes wide open.&lt;/p&gt;

&lt;h1&gt;
  
  
  Scope of Comparison
&lt;/h1&gt;

&lt;p&gt;First of all, for the award of the label, only software from the same segment is compared. What at first sight sounds like a sensible restriction is, for me, already a critical flaw, because it will probably make little difference to resource consumption (both in terms of energy consumption and in terms of hardware obsolescence resulting from software growth) whether Microsoft Word or LibreOffice is used (although I actually fear that the open-source variant will do even worse here – thanks to Java). The much more fundamental question that should be asked here is: Does it even have to be a full-fledged word processing program?&lt;/p&gt;

&lt;p&gt;Government employees all over Germany (and let's face it, they are the ones most likely to be affected by the seal in some form) use huge office suites every day to create the simplest documents (letters, forms, etc.). These almost exclusively use strict templates and often consist of text modules. A specialized program using a Markdown renderer, LaTeX or similar in the background would probably even run on my router, assuming the appropriate GUI (TUI). In other words, every desktop computer sold in this millennium should be able to handle this workload with customized software. The typical office suite, however, is not custom software, but the exact opposite: a generalist. So it helps the environment little that Microsoft Office is 1 % more energy efficient than LibreOffice.&lt;/p&gt;

&lt;h1&gt;
  
  
  Operating systems
&lt;/h1&gt;

&lt;p&gt;This thought also leads directly to the next point of criticism of the criteria for the "Blauer Engel": Operating systems are not evaluated, neither directly nor indirectly. What does that mean? First of all, operating systems are not awarded with the "Blauer Engel" (direct consideration), because there are simply too few of them to allow a meaningful evaluation. I can agree with that.&lt;/p&gt;

&lt;p&gt;But the indirect consideration, i.e. the evaluation of the required operating system when looking at an application, is essential for me, yet it too is left out by the jury. The operating system is a necessary prerequisite for the program running on it. Let's take the fictitious word processing program from the previous section: What use is it to me if I programmed a highly specialized and efficient program for an operating system that was up-to-date at the beginning of the 2000s, but that operating system has since changed incompatibly or is not usable on newer hardware (which I had to buy, for example, due to failures)? What good is the efficiency of that same application if the original operating system is still up-to-date but requires hardware upgrades at regular intervals due to its constantly increasing resource requirements? It is also important to keep in mind that the choice of a specific operating system necessarily influences the choice of all other applications.&lt;/p&gt;

&lt;p&gt;For these reasons, in my opinion, the required operating system MUST be part of the evaluation criteria for an application, unless it has been programmed to be platform-independent – and that does not necessarily mean in Java! Otherwise, the following applies: Only if the underlying operating system meets the same requirements as the evaluated application, the seal can be awarded.&lt;/p&gt;

&lt;h1&gt;
  
  
  Reference System
&lt;/h1&gt;

&lt;p&gt;The assessment standard of the five-year-old "reference system" seems particularly arbitrary to me. Of course, the underlying idea is clear: the software should be so economical that it does not lead to hardware obsolescence even after five years. But the implementation misses this goal in several ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The requirement is more than vague: What exactly does this "reference system" look like? Does it correspond to the middle class of the time? What exactly is middle class (measured in terms of sales, power consumption, price, ...)?&lt;/li&gt;
&lt;li&gt;A look into the past has never been a guarantee for the future. Well-known rules in computer science, such as Moore's Law, also show that the development of computers is anything but linear. Runnability on a five-year-old reference system therefore says next to nothing about runnability in five years.&lt;/li&gt;
&lt;li&gt;Ensuring runnability on a five-year-old system means either writing special code for the test (Hello, Volkswagen!) or having to work without current achievements.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The last point in particular is interesting. Assuming that the programmer does not cheat by writing special code for the test, he has to do without current achievements in order to pass it. On the one hand, this means that he cannot use any libraries or the like that were not available on a "reference system" from five years ago unless he ships them himself. This leads to unnecessarily large, statically linked applications and bundles, as is common under Windows. One and the same code is present multiple times on every PC, because each application provides its own dependencies just to be sure, which wastes disk space; not very efficient. On the other hand, the developer also has no access to new hardware features, which often handle typical workloads not only faster but also much more efficiently than a pure software solution.&lt;/p&gt;

&lt;p&gt;In summary, the evaluation using this "reference system" leads to the fact that particularly efficient programming is penalized. If I were to develop a video decoder today that works with available libraries on a current Linux system and uses hardware acceleration to decode the imaginary new H285 data stream, I could not get a "Blauer Engel" no matter how efficient my program is.&lt;/p&gt;

&lt;h1&gt;
  
  
  Embedded Devices
&lt;/h1&gt;

&lt;p&gt;The same problem also affects almost all embedded devices, or – better – the software running on them, as it is often much more hardware-specific than desktop programs. However, it is precisely this product group (in addition to all the IoT, I also include smartphones in it, for example) that is "shining" with strong growth on the one hand, but also with a particularly rapid obsolescence on the other. Accordingly, it is also the product group for which special attention should be paid to sustainability. But the current draft of the "Blauer Engel" guidelines for software simply cannot be applied to embedded devices.&lt;/p&gt;

&lt;p&gt;Of course, the problem is well known, and it is not for nothing that the "Blauer Engel" jury has standards for &lt;a href="https://www.blauer-engel.de/de/produktwelt/elektrogeraete/router"&gt;routers&lt;/a&gt;, &lt;a href="https://www.blauer-engel.de/de/produktwelt/elektrogeraete/set-top-boxen"&gt;set-top boxes&lt;/a&gt; and &lt;a href="https://www.blauer-engel.de/de/produktwelt/elektrogeraete/mobiltelefone"&gt;mobile phones&lt;/a&gt;, but these are far too specific. Why not generalize these rules and make them available for all embedded devices? The same basic things always matter: battery life, repair and recycling friendliness, and the supply of updates. And after the last update has shipped, the only question that remains is whether and how easily the customer can put his own software on the hardware. It would then be easy to develop extensions for special product categories – for example, for the maximum radio wave exposure of mobile phones.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;I am not as much at war with the "Blauer Engel" as it may appear from the criticism that has been made. In principle, I am a proponent of an independent, governmental seal that assesses the environmental friendliness of products and services. Such a label is probably more important today than ever before.&lt;/p&gt;

&lt;p&gt;Unfortunately, the "Blauer Engel" lacks public awareness. Apart from governmental institutions, almost no one cares about the label, which means that very few manufacturers undergo the voluntary certification. A vicious circle. I'm also afraid that even with the extension to software, not much will change, especially not with this rather moderate design. The basic assumption that useless growth and unfounded, incompatible changes lead to hardware obsolescence is correct, but my perception is that these are problems of certain operating systems rather than application programs. However, the former are explicitly excluded from the assessment.&lt;/p&gt;

&lt;p&gt;What is much more serious in my opinion, however, is that embedded devices, which are responsible for an ever-increasing share of hardware sales (and disposal!) today, are not just out of focus but completely excluded by the choice of evaluation criteria. Why people prefer to ask whether Microsoft Office or LibreOffice is more efficient, and which of the two runs on a mid-range computer from five years ago, is difficult for me to understand. I can't remember the last time I heard someone talk about buying a new computer because their desktop program no longer ran smoothly. But I do remember people telling me about their new smartphones all the time.&lt;/p&gt;

&lt;p&gt;The only things I can fully welcome are the ban on advertising and the required support for open data exchange formats. I invite the jury to start all over again based on these two points.&lt;/p&gt;

</description>
      <category>opinion</category>
      <category>award</category>
      <category>blauerengel</category>
      <category>environment</category>
    </item>
  </channel>
</rss>
