<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Henrik Warne</title>
    <description>The latest articles on DEV Community by Henrik Warne (@henrikwarne).</description>
    <link>https://dev.to/henrikwarne</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F80537%2F2c6c67d6-a9ce-4b54-abc1-008867379f33.JPG</url>
      <title>DEV Community: Henrik Warne</title>
      <link>https://dev.to/henrikwarne</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/henrikwarne"/>
    <language>en</language>
    <item>
      <title>Lessons From 9 More Years of Tricky Bugs</title>
      <dc:creator>Henrik Warne</dc:creator>
      <pubDate>Sun, 15 Jun 2025 08:17:28 +0000</pubDate>
      <link>https://dev.to/henrikwarne/lessons-from-9-more-years-of-tricky-bugs-5ece</link>
      <guid>https://dev.to/henrikwarne/lessons-from-9-more-years-of-tricky-bugs-5ece</guid>
      <description>&lt;p&gt;Since 2002, I have been &lt;a href="https://henrikwarne.com/2016/04/28/learning-from-your-bugs/" rel="noopener noreferrer"&gt;keeping track of all the tricky bugs&lt;/a&gt; I have come across. Nine years ago, I wrote a blog post with the &lt;a href="https://henrikwarne.com/2016/06/16/18-lessons-from-13-years-of-tricky-bugs/" rel="noopener noreferrer"&gt;lessons learned from the bugs&lt;/a&gt; up till then. Now I have reviewed all the bugs I have tracked since then. I wanted to see if I have learnt the lessons I listed in the first review. I also wanted to see what kind of bugs I have encountered since then. Like before, I have divided the lessons into the categories of &lt;em&gt;coding&lt;/em&gt;, &lt;em&gt;testing&lt;/em&gt; and &lt;em&gt;debugging&lt;/em&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://henrikwarne.com/wp-content/uploads/2025/05/path.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fni35ycpb14d5v0z7udhq.jpg" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Coding&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Empty cases.&lt;/strong&gt; Five bugs had to do with empty lines, empty files, spaces, or values of zero. For example, lines with one space (not zero) should have been skipped as empty, but were not. In another case, empty headers in csv files caused problems. In a recent example, reminder mails were sent out, even though there were zero missing mappings that needed to be fixed. I noticed in my previous post that I failed to consider cases of zero and null. Evidently, I need to be even more vigilant in spotting these kinds of errors.&lt;/p&gt;
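&lt;p&gt;The single-space pitfall is easy to reproduce in Python (the sample lines here are invented for illustration):&lt;/p&gt;

```python
# A line containing one space is not equal to "", so a naive emptiness
# check misses it; stripping whitespace first catches both cases.
lines = ["header", " ", "", "data"]

naive_empty = [ln for ln in lines if ln == ""]
robust_empty = [ln for ln in lines if ln.strip() == ""]

assert naive_empty == [""]           # the " " line slipped through
assert robust_empty == [" ", ""]     # both empty-ish lines caught
```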

&lt;p&gt;&lt;strong&gt;2. Days.&lt;/strong&gt; Four bugs had to do with days in one way or another. For example, logic that looks at the previous day needs to consider what should happen if the previous day is on a weekend. If you make assumptions about how many holidays there can be in a row, remember the &lt;a href="https://en.wikipedia.org/wiki/Golden_Week_(Japan)" rel="noopener noreferrer"&gt;Golden Week&lt;/a&gt; in Japan. Also, checking if the end date is after today is not enough to see if an agreement is active – the start date may also be after today.&lt;/p&gt;
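&lt;p&gt;The two day-related checks above can be sketched in a few lines of Python (the function names and dates are mine, for illustration only):&lt;/p&gt;

```python
from datetime import date, timedelta

def previous_business_day(d: date) -> date:
    """Step back one day, then keep stepping while we land on a weekend."""
    d -= timedelta(days=1)
    while d.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        d -= timedelta(days=1)
    return d

def agreement_active(start: date, end: date, today: date) -> bool:
    """Checking only the end date is not enough; the start may be in the future."""
    return today >= start and end >= today

# Monday 2025-06-16: the previous business day is Friday the 13th, not Sunday.
assert previous_business_day(date(2025, 6, 16)) == date(2025, 6, 13)
# An agreement starting tomorrow is not active, even though its end date is after today.
assert not agreement_active(date(2025, 6, 17), date(2026, 1, 1), date(2025, 6, 16))
```

A real version would also need a holiday calendar – as the Golden Week example shows, weekends are not the only gaps.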

&lt;p&gt;&lt;strong&gt;3. Old data formats.&lt;/strong&gt; Upgrading logic to use a changed data format is always tricky. You have to consider that old data in the database may have to be converted to the new format. Also, there can be transient cases where ongoing operations may still use the old format, even though the new logic has been deployed. There was also a case where we stopped agreement names from ending in whitespace, but the 4-eye approval logic failed on old names.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Aliased dicts/HashMaps.&lt;/strong&gt; More than once, I accidentally created a second dict that was just an alias to an already existing dict. This meant that a change in one of them also showed up in the other. This led to very confusing effects when running the code.&lt;/p&gt;
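&lt;p&gt;The aliasing effect is easy to demonstrate in Python (the dict contents are invented):&lt;/p&gt;

```python
# Assignment does not copy a dict; it creates a second name for the
# same object, so a change through one name shows up in the other.
defaults = {"retries": 3}
config = defaults             # alias, NOT a copy
config["retries"] = 5
assert defaults["retries"] == 5   # the "other" dict changed too

# An explicit copy keeps them independent.
defaults = {"retries": 3}
config = dict(defaults)       # shallow copy; use copy.deepcopy for nested data
config["retries"] = 5
assert defaults["retries"] == 3
```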

&lt;p&gt;&lt;strong&gt;5. Local changes.&lt;/strong&gt; Sometimes, I had local changes that I forgot to push, so what was tested locally was not what was deployed. Ideally it should have been caught in CI tests, but there were no tests for these specific cases. A related case: while working locally, I commented out some code, made some changes, then uncommented the code again. But now some other logic had changed (while the code was commented out), leading to bugs.&lt;/p&gt;

&lt;h3&gt;Testing&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;6. Exploratory testing.&lt;/strong&gt; I caught many bugs when doing some exploratory testing before I finished a feature. Often it was related to feature interactions, where various features happened to be turned on or off, which revealed bugs. In another case, I thought the customer was using a feature in a specific way. But when that didn’t work, I asked them, and they told me they used the feature in a completely different way. Also, some things become obvious when you look at them in a GUI. For example, one change I made accidentally added &lt;em&gt;“hasApiKey=false”&lt;/em&gt; in all records displayed, but the idea was to hide anything set to &lt;em&gt;false&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Smaller config in test.&lt;/strong&gt; Usually, the test system is smaller than the prod system in many ways. For example, the test system may only have one event handler, but the prod system has two. This led to a bug where two events that should have been handled in sequence were handled in parallel in prod. The events went to two different event handlers, but in test (with only one event handler), they were always handled sequentially. These kinds of bugs are naturally very hard (or impossible) to discover in test.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. Access rights.&lt;/strong&gt; Sometimes I tested features with a user with too much access. This made it seem like the feature worked, when in fact it only worked if the user had certain features enabled.&lt;/p&gt;

&lt;h3&gt;Debugging&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;9. Good logging.&lt;/strong&gt; For many of the bugs, the key to solving them was looking at the logs to figure out what had happened. For example, when one of three (supposedly identical) calendar services gave the wrong answer, I could see in the logs that the faulty one had received only a fraction of the data at start-up (with no error indication). Reading logs and error messages carefully is also important – often I would assume I knew what had happened, without checking carefully in the logs. The timestamps in them are also very helpful. For the “How found” section of several of the bugs, I wrote something along the lines of: “Then I searched in Kibana around the minute the dead letter happened”.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;10. Discussing with colleagues.&lt;/strong&gt; As before, discussing with a colleague is an incredibly effective way of solving difficult bugs. In one recent case, we were all in the office together when we were troubleshooting. Normally we work remotely three days a week, but being physically close makes cooperating even more effective.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;11. Alerting.&lt;/strong&gt; Some errors would not have been noticed at all, or not early enough, if it wasn’t for alerting. Setting up good alarms really pays off.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;12. Reproducing with the smallest case.&lt;/strong&gt; In many cases, I had a working case and a failing case (maybe in the main branch and in a feature branch). Commenting out code (or otherwise reducing the functionality) was key to finding what the cause of the problem was.&lt;/p&gt;

&lt;h3&gt;Reflections&lt;/h3&gt;

&lt;p&gt;Going through the notes of all these bugs was quite fun. Some of the bugs I would have remembered even without the notes. Many of them I remembered when I read the notes, and some I had no memory of, even after reading the notes. It was quite nostalgic to remember the colleagues I used to work with, and the systems we worked on together (in different programming languages). What really struck me was the amount of detail each system is made up of. It made me think (again) about how much of software engineering is actually learning about the domain.&lt;/p&gt;

&lt;p&gt;Looking back at my post from nine years ago, have I avoided the problems I highlighted there? For the most part, I have. But I have still failed many times to handle cases with empty, zero or null. This is something I have to pay even more attention to. There was also one potentially really bad bug caused by a faulty if-statement. Luckily, I caught it when doing some exploratory testing, and noticed something weird in the logs. As for reading the logs, I should follow my old advice of “pay close attention” more often. But on the whole, I have managed to avoid many of the types of bugs I used to cause in the past.&lt;/p&gt;

&lt;h3&gt;Analysis&lt;/h3&gt;

&lt;p&gt;The diagram below shows how many bugs I have recorded each year since the start. For the past nine years, I have encountered one tricky bug every two months on average.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://henrikwarne.com/wp-content/uploads/2025/05/trickybugsperyear.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F54da2uoq4ju60m334txy.png" width="800" height="655"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Not every bug was caused by me. Sometimes bugs caused by other people are so interesting that I include them too. For the past nine years, around 70% of the bugs were caused by me. I also keep notes on how much time was spent on fixing the bug. This includes troubleshooting it, fixing it and testing the fix. Below is a diagram of how long it took. Note that anything over 8 hours means multiple days. So 24 hours is 3 days, not 24 hours nonstop.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://henrikwarne.com/wp-content/uploads/2025/06/timetakentoresolve.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkej9x33cf22qwcmg32o0.jpg" width="800" height="631"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Conclusion&lt;/h3&gt;

&lt;p&gt;Many errors seem quite inexplicable, until you figure out what the problem is. For example, there was an SQL error that none of us could explain. In the end, it turned out that one node (that did the database queries) had not been restarted, so it ran an old version of the software. Other times, they are not hard to figure out once you see them, but interesting nevertheless. Several years ago there was an overflow in Cassandra. The variable in question was an int in both Python and Cassandra, but in Python integers can be arbitrarily large, whereas in Cassandra, an int is 32 bits.&lt;/p&gt;
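&lt;p&gt;The Python/Cassandra mismatch can be illustrated in a few lines. The wrap-around arithmetic below simulates two’s-complement truncation to show the 32-bit limit; it is not necessarily exactly what Cassandra does with an out-of-range value:&lt;/p&gt;

```python
# Python ints are arbitrary precision, so nothing overflows on the Python side.
# A Cassandra "int" column, however, is a signed 32-bit value.
INT32_MAX = 2**31 - 1  # 2147483647

value = 3_000_000_000        # fine as a Python int
assert value > INT32_MAX     # but out of range for a 32-bit column

# Simulate truncation to a signed 32-bit slot (two's complement):
wrapped = (value + 2**31) % 2**32 - 2**31
assert wrapped == -1294967296  # silently wraps to a negative number
```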

&lt;p&gt;Whatever the cause, it is always satisfying to figure out what happened. Bugs are great sources for learning, and by tracking the trickiest ones, I am trying to learn as much as possible from each one of them.&lt;/p&gt;

</description>
      <category>debugging</category>
      <category>learning</category>
      <category>programming</category>
      <category>testing</category>
    </item>
    <item>
      <title>More Good Programming Quotes, Part 6</title>
      <dc:creator>Henrik Warne</dc:creator>
      <pubDate>Sun, 02 Mar 2025 09:09:49 +0000</pubDate>
      <link>https://dev.to/henrikwarne/more-good-programming-quotes-part-6-23nj</link>
      <guid>https://dev.to/henrikwarne/more-good-programming-quotes-part-6-23nj</guid>
      <description>&lt;p&gt;Here are more good programming quotes I have found since my &lt;a href="https://dev.to/henrikwarne/more-good-programming-quotes-part-5-5hae"&gt;last post&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://henrikwarne.com/wp-content/uploads/2025/03/dsc_2645.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F01h8ly4vg1hqnhoge79i.jpg" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Programming&lt;/h3&gt;

&lt;p&gt;“Configuration is coding in a poorly designed programming language without tests, version control, or documentation.”&lt;br&gt;&lt;br&gt;
Gregor Hohpe&lt;/p&gt;

&lt;p&gt;“It’s the developers misunderstanding, not the expert knowledge, that gets released in production”&lt;br&gt;&lt;br&gt;
@ziobrando&lt;/p&gt;

&lt;p&gt;“The key to performance is elegance, not battalions of special cases.”&lt;br&gt;&lt;br&gt;
Jon Bentley and Doug McIlroy&lt;/p&gt;

&lt;p&gt;“It is not usually until you’ve built and used a version of the program that you understand the issues well enough to get the design right.”&lt;br&gt;&lt;br&gt;
Kernighan and Pike&lt;/p&gt;

&lt;p&gt;“Fools ignore complexity. Pragmatists suffer it. Some can avoid it. Geniuses remove it.”&lt;br&gt;&lt;br&gt;
Alan Perlis&lt;/p&gt;

&lt;p&gt;“The best performance improvement is the transition from the nonworking state to the working state.”&lt;br&gt;&lt;br&gt;
John Ousterhout&lt;/p&gt;

&lt;p&gt;“We make, not to have, but to know.”&lt;br&gt;&lt;br&gt;
Alan Kay&lt;/p&gt;

&lt;p&gt;“You don’t pay engineers to write code, you pay them to understand subtleties and edges of the problem. The code is incidental.”&lt;br&gt;&lt;br&gt;
@dozba&lt;/p&gt;

&lt;p&gt;“Scope doesn’t creep, understanding grows”&lt;br&gt;&lt;br&gt;
@jeffpatton&lt;/p&gt;

&lt;p&gt;“Testing leads to failure, and failure leads to understanding.”&lt;br&gt;&lt;br&gt;
Burt Rutan&lt;/p&gt;

&lt;p&gt;“The art of debugging is figuring out what you really told your program to do rather than what you thought you told it to do.”&lt;br&gt;&lt;br&gt;
Andrew Singer&lt;/p&gt;

&lt;p&gt;“The function of good software is to make the complex appear to be simple.”&lt;br&gt;&lt;br&gt;
Grady Booch&lt;/p&gt;

&lt;h3&gt;Programming Jokes&lt;/h3&gt;

&lt;p&gt;What’s the difference between C and C++? 1&lt;br&gt;&lt;br&gt;
Unknown&lt;/p&gt;

&lt;p&gt;“If it wasn’t for C, we’d be writing programs in BASI, PASAL, and OBOL.”&lt;br&gt;&lt;br&gt;
Unknown&lt;/p&gt;

&lt;p&gt;How to solve Windows problems: reboot&lt;br&gt;&lt;br&gt;
How to solve Linux problems: be root&lt;br&gt;&lt;br&gt;
@CarlaNotarobot&lt;/p&gt;

&lt;h3&gt;Other&lt;/h3&gt;

&lt;p&gt;“Every job looks easy when you’re not the one doing it.”&lt;br&gt;&lt;br&gt;
Jeff Immelt&lt;/p&gt;

</description>
      <category>programming</category>
      <category>quotes</category>
    </item>
    <item>
      <title>Programming Conference – Jfokus Stockholm 2025</title>
      <dc:creator>Henrik Warne</dc:creator>
      <pubDate>Sat, 08 Feb 2025 14:41:39 +0000</pubDate>
      <link>https://dev.to/henrikwarne/programming-conference-jfokus-stockholm-2025-c67</link>
      <guid>https://dev.to/henrikwarne/programming-conference-jfokus-stockholm-2025-c67</guid>
      <description>&lt;p&gt;This week I attended the &lt;a href="https://www.jfokus.se/jfokus25/" rel="noopener noreferrer"&gt;Jfokus&lt;/a&gt; software development conference in Stockholm, Sweden. I first went in 2011, and I have been back many times through the years. The conference has a Java focus (duh!), but many talks cover general topics as well.&lt;/p&gt;

&lt;p&gt;The whole development team at &lt;a href="https://www.ngm.se/en/" rel="noopener noreferrer"&gt;NGM&lt;/a&gt; got tickets. It is really nice to be able to discuss and compare notes with your colleagues. The big theme this year, apart from Java, was of course AI and LLMs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://henrikwarne.com/wp-content/uploads/2025/02/gatherers.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmuiuu5ofcd8908c89cmz.jpg" width="800" height="541"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Talks I Liked&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.jfokus.se/talks/2178" rel="noopener noreferrer"&gt;The First 80% of Reading One Billion Rows Fast Enough&lt;/a&gt;&lt;/strong&gt; by &lt;a href="https://bsky.app/profile/reneschwietzke.bsky.social" rel="noopener noreferrer"&gt;René Schwietzke&lt;/a&gt;. This is a talk on Java optimization, and I really enjoyed it! I had not heard about the challenge before. The input is one billion rows of simple weather csv data, and the idea is to see how fast it can be processed in Java. Of course there are solutions that use incredibly weird and obscure techniques, but this talk goes through some basic optimizations that together add up to a run time of 20% of the original solution.&lt;/p&gt;

&lt;p&gt;After setting up the problem, René shows a baseline solution, and gives its run time. Then he goes through a number of optimizations, and shows how much each saves. Examples of optimizations are: replace split() by indexOf(), use int instead of double (we know there is only ever one decimal digit), mutate existing objects instead of creating new ones, only read bytes (not Strings, doubles etc) and delay Unicode processing, simpler Min/Max, create the hash code while traversing the line.&lt;/p&gt;
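&lt;p&gt;As an illustration of the “int instead of double” idea (the talk’s code is Java; this is my own Python sketch of the trick): since each measurement has exactly one decimal digit, it can be parsed and accumulated as tenths in plain integers, avoiding floating point entirely:&lt;/p&gt;

```python
def parse_tenths(s: str) -> int:
    """Parse a value like "-12.3" (exactly one decimal digit) into tenths."""
    sign = -1 if s.startswith("-") else 1
    if sign == -1:
        s = s[1:]
    whole, _, frac = s.partition(".")
    return sign * (int(whole) * 10 + int(frac))

assert parse_tenths("12.3") == 123
assert parse_tenths("-0.7") == -7
# Convert back to a decimal only once, at output time.
assert parse_tenths("12.3") / 10 == 12.3
```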

&lt;p&gt;René used a flamegraph from a profiler to guide what areas of the program should be optimized. Some general rules he followed were: replace standard JDK functionality (it may be more general than what is needed), low or no memory allocation, avoid wrappers and immutability, reduce branching (if, loops), exploit what you know about the input data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://henrikwarne.com/wp-content/uploads/2025/02/onebillionrows.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8c52jk2cjees7p04wwad.jpg" width="800" height="722"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.jfokus.se/talks/2536" rel="noopener noreferrer"&gt;The Future of Work&lt;/a&gt;&lt;/strong&gt; by &lt;a href="https://www.ymnig.ai/trainers/henrik-kniberg" rel="noopener noreferrer"&gt;Henrik Kniberg&lt;/a&gt;. This was second of two keynote talks that opened the conference. Henrik did a live demo where he used several AI agents to accomplish tasks, such as making code changes, creating a branch in git, and creating a PR with the changes. The AI agents have instructions in the form of short text documents, and these instructions can be updated, even by the agents themselves (subject to human approval).&lt;/p&gt;

&lt;p&gt;The agents appear as their own users in Slack. They can also be given recurring tasks, for example to create a report each morning on a given subject, and to mail out the report and post a summary of it in Slack. He also demonstrated how they can troubleshoot if something goes wrong, for example if it can’t create a git branch. The LLM used in the demo was Claude, and Henrik used it in voice-input mode.&lt;/p&gt;

&lt;p&gt;He ended the presentation with some reflections on the implications of this way of working. He, like me, has always loved programming. Will this way of working, with agents writing a large chunk of the code, or all code, mean the end of software development for humans? First he noted that this is similar to moving from punch cards, to assembler, and to compiled higher level languages. You move up one abstraction level. He also noted that what he liked about programming was making and creating things, not necessarily writing the actual code.&lt;/p&gt;

&lt;p&gt;A very good and thought-provoking talk. Actually seeing agents in action, instead of merely being told what they can do, really brings the points home.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.jfokus.se/talks/2526" rel="noopener noreferrer"&gt;Ask the Architect&lt;/a&gt;&lt;/strong&gt; with &lt;a href="https://www.jfokus.se/speakers/14844" rel="noopener noreferrer"&gt;Brian Goetz&lt;/a&gt; (Java Language Architect) and &lt;a href="https://www.jfokus.se/speakers/14854" rel="noopener noreferrer"&gt;Mark Reinhold&lt;/a&gt; (Chief Architect), both at Oracle. This was a Q&amp;amp;A session, where the audience had a chance to ask Brian and Mark Java questions. I didn’t really know what to expect from this session, since it will depend a lot on the questions asked. But I was pleasantly surprised. There was quite a variety of questions.&lt;/p&gt;

&lt;p&gt;There were questions about serialization, GraalVM vs HotSpot, records, streams, Lombok, different deprecation modes, and more. Quite interesting. Before we started, Brian and Mark cautioned us not to begin questions with “Why don’t you just…”. Modifying a language that has been around for so long, with so many existing programs, is not easy. This became very clear when hearing some of the answers.&lt;/p&gt;

&lt;h2&gt;The Value of Conferences&lt;/h2&gt;

&lt;p&gt;Going to a conference is different from watching talks on YouTube, or reading books or blog posts about software development. It is nice to meet and talk to other developers. My standard question when chatting with other attendees is “What is your favorite talk so far, and why?”. At Jfokus, your name and your company are printed on your badge. This gives you another set of good icebreaker questions: “What do you do at Company X? What does the company do? What tech stack do you use?”. Also, when listening to a talk live, you have a chance to ask questions, either during the talk, or afterwards.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://henrikwarne.com/wp-content/uploads/2025/02/crowd.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj8yn8li0yioav7exehe5.jpg" width="800" height="546"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is very convenient to attend a conference in your home city or country. It is cheaper and there is less travel. From the badges I saw at Jfokus, most people were from Sweden, and a few from Germany.&lt;/p&gt;

&lt;p&gt;At Jfokus, there are usually six parallel talks, so it can sometimes be hard to pick what to listen to. When attending a conference, and listening to many talks in a row, you notice things that are mentioned more than once. Examples this year were LLMs checking the output of other LLMs, RAG (Retrieval-Augmented Generation), AI-assisted coding (with IDE plugins), GraalVM, and the Quarkus framework. After a conference I always end up with a long list of things to look up: techniques and tools I hadn’t heard about before, and books and articles to look into.&lt;/p&gt;

&lt;p&gt;I also like to see which companies have exhibition booths at the conference. Even if I am not interested in their exact service, it gives me a general sense of what is popular right now. All the exhibitors get a little gold star in my book for sponsoring a conference.&lt;/p&gt;

&lt;p&gt;Finally, it is always inspiring to go to a conference. Meeting people and learning about new ideas is exciting.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://henrikwarne.com/wp-content/uploads/2025/02/waterfront.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fokarzmtd9s35d2qoq6ua.jpg" width="800" height="552"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Odds and Ends&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Tobias Modig, in his talk The Developer Rhapsody, talked about “the mediocre developer”: someone competent who stays at the same company for 15 years. The contrast is the brilliant developer, who gets bored after a year and a half and moves on. Who is more valuable for a company?&lt;/li&gt;
&lt;li&gt;“We think in generalities, but we live in detail” – Alfred North Whitehead. One of many good quotes Kevlin Henney mentioned in his talk Keeping It Simple.&lt;/li&gt;
&lt;li&gt;The venue, Stockholm Waterfront Congress Centre, is great. Everything worked smoothly, the food and the “fika” were great, and it is very easy to get to.&lt;/li&gt;
&lt;li&gt;The Jfokus web site could be improved. First of all, it would be good if the back button in the browser worked. Going back after clicking into a talk, you lose where you were in the schedule. Also, the link for rating the talk would be good to have next to the talk description.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Another great conference, with a good variety of talks and speakers. If you haven’t been to a conference in a while, try to find one to attend. It is really inspiring, and a nice way of keeping up with new ideas and technology.&lt;/p&gt;

</description>
      <category>learning</category>
      <category>programming</category>
      <category>conference</category>
      <category>java</category>
    </item>
    <item>
      <title>My Simple Knowledge Management and Time Tracking System</title>
      <dc:creator>Henrik Warne</dc:creator>
      <pubDate>Sat, 09 Nov 2024 16:10:58 +0000</pubDate>
      <link>https://dev.to/henrikwarne/my-simple-knowledge-management-and-time-tracking-system-4g2k</link>
      <guid>https://dev.to/henrikwarne/my-simple-knowledge-management-and-time-tracking-system-4g2k</guid>
      <description>&lt;p&gt;I am using a very simple system for remembering commands and procedures, and for tracking what I work on. I have two plain text files called &lt;em&gt;notes.txt&lt;/em&gt; and &lt;em&gt;worktime.txt&lt;/em&gt;. In the notes file, I write down things that are important to remember. For example: various shell commands, steps when creating a new release, how to install and configure tools, company procedures for time reporting etc.&lt;/p&gt;

&lt;p&gt;In the worktime file, I write down the hours I worked that day, and what I worked on. I also have a python script that calculates the number of hours worked for the day and the week.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://henrikwarne.com/wp-content/uploads/2024/11/stones.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq27dq5n1eqijz832ejzc.jpg" width="800" height="310"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the past few years, I have started at four different companies. At each company, there are many things to remember. Which repositories do I clone? How do I build, test and deploy the system? How do I report time? The first time I do these things, I typically write down some notes about it. The next time, I can do it without asking anybody. If I do the task often enough, I will usually remember how to do it without having to refer to my notes. But the first few times, it is good to have the steps written down.&lt;/p&gt;

&lt;h2&gt;Notes&lt;/h2&gt;

&lt;p&gt;I usually add new stuff at the top of the file. The only exception is if there are already a few related items further down in the file. Then I will make an addition there. However, it doesn’t matter if there are duplicate entries. When I look for something, I usually just search from the top of the file. The most recent entry is usually the one I am looking for.&lt;/p&gt;

&lt;p&gt;Occasionally, I have &lt;em&gt;grep&lt;/em&gt;ped through the whole file, so I can see all occurrences of some term at once, for example all git commands I have written down.&lt;/p&gt;

&lt;h3&gt;Commands&lt;/h3&gt;

&lt;p&gt;Some examples of commands I have saved in the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# To see versions (images) of what's running:
kubectl -n trade-sched get deployments -o custom-columns="NAME:.metadata.name, IMAGE:.spec.template.spec.containers[0].image"

gcloud artifacts docker images list --include-tags docker.pkg.dev/edab-platform-cicd/kbt/tee --format=json | jq | grep -C2 'v5.11.20' # Find images with a given tag

grep -i 'error\|panic\|fail\|fatal\|SIGSEGV\|back-pressured' *

git rebase -i HEAD~3 # Interactively rebase the last 3 commits
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Some of these commands are not hard to recreate. But it is nice not having to think about what the path is, or what the arguments are, and instead just copy the command. Of course, if I used it recently, I will just find it in my shell history, so no need to look it up in the file then.&lt;/p&gt;

&lt;p&gt;Recently, I have been using &lt;em&gt;curl&lt;/em&gt; to test various APIs. Then I will save the commands, including headers, body, URL and query parameters. It is much quicker to look up a working example than to recreate it.&lt;/p&gt;

&lt;p&gt;With LLMs, it is quite easy to just ask for custom shell commands when you need them. I still think it is worthwhile to write important ones down. My goal is to learn them by heart, so I don’t have to look them up in my notes. Writing them down helps with that. Case in point: I know &lt;em&gt;git rebase -i&lt;/em&gt; by heart now, so I don’t need it in the file anymore.&lt;/p&gt;

&lt;h3&gt;Setting Things Up&lt;/h3&gt;

&lt;p&gt;Setting up a new MacBook takes some time. Because it is something I don’t do very often, I have written down what to install, and where to get it from. This is an (incomplete) example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Install XCode from Appstore (needed for homebrew)
Install homebrew (from homepage, with curl)
Install MacVim: brew install macvim
Install GoLand
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Other things include shortcuts in the IDE, other apps to install, how to set access tokens, settings in applications like Chrome, Slack etc.&lt;/p&gt;

&lt;h3&gt;Procedures&lt;/h3&gt;

&lt;p&gt;For example, how to make a release, how to deploy to the test and production environments, how to start and stop systems, and where to find the logs.&lt;/p&gt;

&lt;p&gt;Typically, these kinds of activities are documented somewhere else too, such as on a Wiki-page. However, I can write down a version that is tailored to my needs (more or less information). If the official documentation is exactly what I need, I just save the link to it. Even this is worthwhile to have, since sometimes it is not easy to find. Ideally, all official documentation should be comprehensive and easy to find, but that is rarely the case.&lt;/p&gt;

&lt;h3&gt;Company Procedures&lt;/h3&gt;

&lt;p&gt;This includes how to report time, vacation days and sick days, how to create an internal IT support ticket, how to book a meeting room, and where contact lists are located. I include both the URL, and the steps I need to take in the system (unless it is obvious).&lt;/p&gt;

&lt;h2&gt;Time Tracking&lt;/h2&gt;

&lt;h3&gt;Time&lt;/h3&gt;

&lt;p&gt;This is what it looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;240322 7:30-11:25 13:15-17:50 SNE-1635 Unzipping files from google cloud bucket ... 

240321 8:10-12:25 13:30-18:00 Made tag v5.11.13 for tee, and deployed ...

240320 8:25-11:55 12:25-17:45 Refactored Kraken futures websocket code ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The file is in reverse chronological order, so I always add the new day at the top of the file. I use an empty line to separate the weeks. I have a Python script that calculates the hours and minutes worked each day, and the total and average for the week. The script makes it easy to see if I work my 40 hours per week, or if it is less or more.&lt;/p&gt;
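&lt;p&gt;The core of such a script is small. This is a minimal sketch (not the actual script; the function names are made up), assuming the day-line format shown above:&lt;/p&gt;

```python
import re
from datetime import datetime

# Matches work intervals like "7:30-11:25" on a day line.
INTERVAL = re.compile(r"(\d{1,2}:\d{2})-(\d{1,2}:\d{2})")

def minutes_worked(line):
    """Sum all HH:MM-HH:MM intervals on one day line, in minutes."""
    total = 0
    for start, end in INTERVAL.findall(line):
        delta = datetime.strptime(end, "%H:%M") - datetime.strptime(start, "%H:%M")
        total += int(delta.total_seconds()) // 60
    return total

def week_summary(day_lines):
    """Per-day minutes, weekly total, and daily average for one week block."""
    days = [minutes_worked(line) for line in day_lines if line.strip()]
    total = sum(days)
    return days, total, total // len(days)
```

&lt;p&gt;Running it on the first example line above gives 510 minutes, i.e. 8 hours and 30 minutes.&lt;/p&gt;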

&lt;h3&gt;
  
  
  What I Worked On
&lt;/h3&gt;

&lt;p&gt;I shortened the descriptions in the example lines above. Typically, I write a few sentences on what I worked on that day. If there is a Jira ticket number, I try to include that. Sometimes I just take the commit messages for the day.&lt;/p&gt;

&lt;p&gt;If I don’t write down what I worked on at the end of the day, I will not remember what it was. Fortunately, it only takes a few minutes to write down when it is fresh in my memory. Having it written down can sometimes be useful. For example, to answer “How much time did you spend on task X?”, or “Why did that take so long?”. When it is time for the yearly performance review, I usually look through the notes to see what main projects I worked on.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I have been using this system for over fifteen years now, and it works very well for me. It is only two plain text files, nothing more. There is no configuration. There is no vendor lock-in. The data format will not be obsolete. I don’t have to spend time organizing or labeling the information I put in.&lt;/p&gt;

&lt;p&gt;I know many people, especially programmers, like to use more elaborate systems. No doubt such systems can do more than mine can. But I think it is useful to consider if maybe a simpler system will give you almost the same benefits, at a lower cost.&lt;/p&gt;

</description>
      <category>work</category>
      <category>knowledgemanagement</category>
      <category>timetracking</category>
    </item>
    <item>
      <title>Programming With ChatGPT</title>
      <dc:creator>Henrik Warne</dc:creator>
      <pubDate>Sun, 25 Aug 2024 15:50:51 +0000</pubDate>
      <link>https://dev.to/henrikwarne/programming-with-chatgpt-1h73</link>
      <guid>https://dev.to/henrikwarne/programming-with-chatgpt-1h73</guid>
      <description>&lt;p&gt;Using ChatGPT when I code has been a real productivity boost for me. Instead of reading an example on Stack Overflow and figuring out how to adapt it to my particular case, I immediately get code tailored to my specific needs. I my mind, generating code is a perfect use case for LLMs, since I will always test the generated code. If it isn’t working, I’ll find out right away, so hallucinations is not a problem.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://henrikwarne.com/wp-content/uploads/2024/08/img_20240814_200343.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fhenrikwarne.com%2Fwp-content%2Fuploads%2F2024%2F08%2Fimg_20240814_200343.jpg%3Fw%3D1024"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Programming
&lt;/h3&gt;

&lt;p&gt;A while ago, I needed to write code to download data from a Google bucket. I had never done that before, so I started with this query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Give me a Python program that connects to a Google Bucket and downloads all files that have an ISO date string in the name.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There was some trouble with authentication, so I wrote a few queries about how that works. Then, when the basic download worked, I used:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;For files that are zip-files, give code that unzips them (in memory), then iterates through all the lines in those files.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
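&lt;p&gt;The answer to that second query looks roughly like the following (my own standard-library reconstruction, not ChatGPT’s verbatim output):&lt;/p&gt;

```python
import io
import zipfile

def iter_lines_from_zip(zip_bytes):
    """Unzip an archive held in memory and yield decoded lines from each member."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as archive:
        for name in archive.namelist():
            with archive.open(name) as member:
                for raw_line in member:
                    yield raw_line.decode("utf-8").rstrip("\r\n")
```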



&lt;p&gt;Being able to get code tailored to what I want is really helpful, and speeds up my work a lot. I like to work in small steps – getting a basic case working first, then adding functionality bit by bit. This workflow works well with ChatGPT. It is quite rare that I get an answer with an error in it. More common is that the code doesn’t do exactly what I want, or that it is using one framework, and I would prefer another. But then I just modify my query accordingly.&lt;/p&gt;

&lt;p&gt;In all cases though, I will test the code I get back. Both to make sure it does what I want, and to make sure I understand how it works. The reason I want to understand how it works is that I want to be able to troubleshoot my application if it doesn’t work as expected. If I don’t understand how the code works, I can’t troubleshoot it.&lt;/p&gt;

&lt;p&gt;I see ChatGPT as another useful tool programmers can use – it makes us more efficient. However, it is still a tool controlled by the developer. I have seen blog posts projecting that all coding in the future will be done by LLMs, with no need for programmers. I am skeptical of this claim, both because it is hard to specify the behavior of a system only in English, and because I think it would be difficult to figure out why a system is not doing what it is supposed to do.&lt;/p&gt;

&lt;p&gt;I have paid for ChatGPT for about a year now. For a few months when ChatGPT-4o came out, I stopped paying, since there didn’t seem to be any benefit to paying for it. However, once I started getting notices of rate limiting, I started paying again. I think it is a bargain at $25 a month, given how much more productive I am.&lt;/p&gt;

&lt;p&gt;Often when I look at the answer to a query, I want to use page up and page down (because there is more than one page of it). One small annoyance is that those keys often don’t work. The fix is to press the tab key to make them work again.&lt;/p&gt;

&lt;h3&gt;
  
  
  Other Tools
&lt;/h3&gt;

&lt;p&gt;Many companies are careful not to expose their source code to any outside vendors. My workflow of asking for specific pieces of code works well in this regard. It does not depend on using any existing code for context.&lt;/p&gt;

&lt;p&gt;I have tried GitHub &lt;strong&gt;Copilot&lt;/strong&gt; a bit on some personal projects, but I prefer using ChatGPT. I also tried &lt;strong&gt;Claude&lt;/strong&gt; briefly, but I found that I have become used to the way ChatGPT formats its answers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Other Uses
&lt;/h3&gt;

&lt;p&gt;I also often use ChatGPT instead of Google, or man-pages, for shell commands. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Give me the jq command that counts the number of objects in a json array.

Give me the curl command to send a POST with an empty body.

Give me a sed command that removes the beginning time stamps from lines looking like this:
2023-08-25 10:15:32,104 - INFO - User logged in from IP address 192.168.1.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I have tried using ChatGPT to generate texts, but I have always been disappointed at the result. It always sounds fake to me. Using it to summarize texts has also been disappointing. When I knew the source texts well, the summaries always sounded too generic, without any real insights. However, I am using ChatGPT more and more as a substitute for Google. For example, summarizing concepts, or asking questions about language use.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Using ChatGPT for programming is almost the perfect use case. Since I always test the result, any hallucinations will quickly be discovered. And hallucinations are rare. Most of the time I get code that does what I want really fast.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>chatgpt</category>
    </item>
    <item>
      <title>John von Neumann – The Man from the Future</title>
      <dc:creator>Henrik Warne</dc:creator>
      <pubDate>Sun, 07 Jul 2024 10:40:15 +0000</pubDate>
      <link>https://dev.to/henrikwarne/john-von-neumann-the-man-from-the-future-5050</link>
      <guid>https://dev.to/henrikwarne/john-von-neumann-the-man-from-the-future-5050</guid>
      <description>&lt;p&gt;Before I read &lt;a href="https://www.goodreads.com/book/show/61089520-the-man-from-the-future" rel="noopener noreferrer"&gt;The Man from the Future&lt;/a&gt; by Ananyo Bhattacharya, I only knew about John von Neumann in two contexts: that computers use the &lt;em&gt;von Neumann architecture&lt;/em&gt;, and that he appeared in a story about a mathematical problem I remember from many years ago. After reading it, I understand what a genius he was, and how much of science in the 20th century he influenced. He deserves to be better known than I think he is, and this is a great book to learn about him.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://henrikwarne.com/wp-content/uploads/2024/05/img_20240511_114134.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fhenrikwarne.com%2Fwp-content%2Fuploads%2F2024%2F05%2Fimg_20240511_114134.jpg%3Fw%3D1024"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;von Neumann architecture&lt;/em&gt; means instructions and data are both stored in the same kind of memory, and instructions are fetched from memory and executed in order. This is taken for granted now, but this way of organizing computers was not a given when computers were invented.&lt;/p&gt;

&lt;p&gt;The story of the mathematical problem I remember is this:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Two trains are 60 kilometers apart, traveling towards each other on the same track. Each train is moving at a constant speed of 30 kilometers per hour. A fly starts at the front of one train and flies back and forth between the two trains at a constant speed of 60 kilometers per hour. The problem is to determine the total distance the fly travels before the trains collide and the fly is crushed between them.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;One way to solve it is by summing the infinite series representing the fly’s back-and-forth trips. However, there is an easier way to solve it. Since each train is moving towards the other at 30 kilometers per hour, the combined speed at which the distance between them is closing is 60 kilometers per hour. Given that they are 60 kilometers apart, they will collide in 1 hour. Because the fly is flying at 60 kilometers per hour, and the trains will collide in 1 hour, the fly will travel a total distance of 60 kilometers.&lt;/p&gt;
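&lt;p&gt;The series approach can even be checked numerically. Each leg shrinks the gap between the trains to a third of its previous value, so the fly’s legs form a geometric series (40 + 40/3 + 40/9 + …) that converges to 60 kilometers. A small sketch (my illustration, not from the book) sums the legs one by one:&lt;/p&gt;

```python
def fly_distance(gap=60.0, train_speed=30.0, fly_speed=60.0, legs=60):
    """Sum the fly's back-and-forth legs the 'hard way'. On each leg, the
    fly and the oncoming train close the gap at fly_speed + train_speed."""
    total = 0.0
    for _ in range(legs):
        t = gap / (fly_speed + train_speed)  # time until the fly meets the train
        total += fly_speed * t               # distance flown on this leg
        gap -= 2 * train_speed * t           # both trains kept moving meanwhile
    return total
```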

&lt;p&gt;When this problem was posed to von Neumann, it didn’t take long for him to come up with the correct answer. The person who posed the problem was impressed by von Neumann’s quick response: “Ah, you came up with the short-cut for calculating the answer, instead of summing the series”. And von Neumann responded: “No, I summed the series in my head”.&lt;/p&gt;

&lt;p&gt;I read this story a long time ago. It is probably apocryphal, but I like the math problem, and I’ve always remembered the story.&lt;/p&gt;

&lt;h2&gt;
  
  
  Early Years
&lt;/h2&gt;

&lt;p&gt;János Neumann was born in Budapest in 1903 into a Jewish family. In 1913, his father Max was awarded a hereditary title from the Austrian emperor Franz Joseph I. This is the origin of the &lt;em&gt;von&lt;/em&gt; in the name. When János moved to the United States, he anglicized his name, and became John von Neumann.&lt;/p&gt;

&lt;p&gt;His extraordinary mind for mathematics was apparent very early on, and he was only 17 when he published his first mathematical paper (on zeros of Chebyshev polynomials). In 1919, there was a communist coup in Hungary. The communist reign lasted only 133 days, but von Neumann would remember that chaotic period, and he remained a life-long anti-communist.&lt;/p&gt;

&lt;h2&gt;
  
  
  Mathematical Upheaval
&lt;/h2&gt;

&lt;p&gt;At the beginning of the 20th century, there was a foundational crisis in mathematics. The roots of it came from discovering a flaw in Euclid’s &lt;em&gt;Elements&lt;/em&gt;, the standard textbook on geometry for centuries. In it, there were five axioms believed to be self-evident. By building on those axioms in logical steps, more advanced results (like Pythagoras’ theorem) could be proved. This axiomatic method was the cornerstone of mathematics.&lt;/p&gt;

&lt;p&gt;In the 1830s, Euclid’s fifth postulate, the parallel postulate, was shown not to be universally true. It states that if two lines are drawn that intersect a third line so that the sum of the interior angles (a and b in the picture) is less than 180 degrees (two right angles), then the lines must intersect at some point. On the other hand, if a and b do sum to 180 degrees, then they never meet (and are thus parallel).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://henrikwarne.com/wp-content/uploads/2024/05/fifth-postulate.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fhenrikwarne.com%2Fwp-content%2Fuploads%2F2024%2F05%2Ffifth-postulate.jpg%3Fw%3D1024"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The parallel postulate is true on flat surfaces, i.e. in Euclidean geometry, but it is not true in hyperbolic geometry, where the surface can curve, like a saddle. In the 1850s, Bernhard Riemann introduced spaces with any number of dimensions, hyperspace.&lt;/p&gt;

&lt;p&gt;By the end of the 19th century, many other theorems and proofs from Euclid’s geometry were being questioned. David Hilbert set out to rebuild geometry theory from scratch in a much more systematic and rigorous way. In 1899 he published his book &lt;em&gt;The Foundations of Geometry&lt;/em&gt;. At that time, some scientists were of the opinion that some questions could not be answered. Hilbert was of the opposite opinion – “we can know and we will know”. Having successfully tackled geometry, he wanted to do the same for all of mathematics: make sure it was on a solid base of irrefutable axioms and theorems.&lt;/p&gt;

&lt;p&gt;However, this project ran into problems almost immediately. In 1901, the British philosopher Bertrand Russell struggled with a paradox in set theory. Some sets are straightforward, for example the set of possible cheesecakes. This set does not contain itself, because the set is not a literal cheesecake. Let’s call these kinds of sets &lt;strong&gt;normal&lt;/strong&gt;. But when you consider the complement – the set of everything that isn’t a cheesecake – that set is a member of itself. Let’s call sets that are members of themselves &lt;strong&gt;abnormal&lt;/strong&gt;. So far, so good.&lt;/p&gt;

&lt;p&gt;Now let’s form the set of all normal sets, and call it &lt;strong&gt;R&lt;/strong&gt;. If R is normal, it should be contained in R, and thus be abnormal (because it would contain itself). On the other hand, if R is abnormal (i.e. it is a set that contains itself), it would not be contained in the set of all normal sets (itself), and therefore be normal. This is Russell’s paradox.&lt;/p&gt;

&lt;p&gt;This has the same structure as The Liar’s Paradox: “This sentence is false”. If the sentence is true, it is false, but if it is false, it is true. In both cases, the problem stems from the self-referential part of the paradox.&lt;/p&gt;

&lt;p&gt;Russell’s paradox threatened Hilbert’s project of putting mathematics on more rigorous grounds. “If mathematical thinking is defective, where are we to find truth and certitude?” he asked. Von Neumann’s solution to the problem came out in a paper in 1925. In it, he lists, on one page, all the axioms needed to build up set theory. To avoid Russell’s paradox, he introduces sets and classes. A class is defined as a collection of sets that share a property. There is no “set of all sets that are not members of themselves”, but there is a “class of all sets that are not members of themselves”. This class is not a member of itself, because it is not a set (it’s a class).&lt;/p&gt;

&lt;p&gt;This development was to Hilbert’s liking. In 1928, he challenged mathematicians to prove that mathematics is &lt;strong&gt;complete&lt;/strong&gt;, &lt;strong&gt;consistent&lt;/strong&gt; and &lt;strong&gt;decidable&lt;/strong&gt;. &lt;strong&gt;Complete&lt;/strong&gt; meant that all mathematical theorems can be proved from a finite set of axioms. In other words, given some fixed set of axioms, is there a proof for every true statement? By &lt;strong&gt;consistent&lt;/strong&gt;, Hilbert meant that the axioms would not lead to any contradictions. That is, can only the true statements be proved? And by &lt;strong&gt;decidable&lt;/strong&gt;, he meant that there should be a step-by-step procedure (an algorithm) that, in finite time, can be used to show if a particular mathematical statement is true or false. This last property became known as the &lt;em&gt;Entscheidungsproblem&lt;/em&gt; (decision problem in German, since German was the language of science in those days). Within a decade, the answers were in: mathematics is incomplete, its consistency cannot be proved from within, and it is not decidable!&lt;/p&gt;

&lt;h2&gt;
  
  
  Quantum Mechanics
&lt;/h2&gt;

&lt;p&gt;At the same time, physics was going through its own crisis. In 1900, the German physicist Max Planck proposed that energy might be absorbed or emitted in discrete quantities, &lt;em&gt;quanta&lt;/em&gt;. In 1905, Einstein theorized that light might be composed of a stream of particles, the first hint that quantum entities had both wave-like and particle-like properties. The Danish physicist Niels Bohr came up with a model of the atom, where electrons could only occupy special orbits, and jumps between orbits corresponded to fixed differences in energy.&lt;/p&gt;

&lt;p&gt;To try to describe how atoms behaved, Werner Heisenberg came up with “matrix mechanics” in 1925. He wanted a theory that would explain experimental results. In experiments, scientists would “excite” atoms, by for example vaporizing a sliver of material in a flame, or by passing a current through a gas. As a result, light was produced. There would be characteristic spectral lines (with given frequencies and intensities) for each element. Heisenberg proposed that the difference in the initial and final energy levels of the electrons accounted for the frequencies in the atomic emission lines. The possible transitions between levels could be represented in matrix form (with an infinitely large matrix). Since there could be different paths from the initial to the final levels (for example via intermediary levels), to get the probabilities of all possible transitions, he multiplied the individual transitions with their respective probabilities.&lt;/p&gt;

&lt;p&gt;At about the same time, Erwin Schrödinger came up with an entirely different way of describing atoms, as an infinite sum of superpositions of a wave function. Just like Heisenberg’s model, this worked well to describe how atoms behaved experimentally. But how could two wildly different models both describe reality so well? Could they be shown to be the same? This is exactly what von Neumann did, with some help from Hilbert. I won’t pretend to understand all the mathematics, but it relates to operators, eigenfunctions, Hilbert spaces, and square integrable functions. In the end, von Neumann was able to show that the coefficients of the expanded wave function were the elements that appear in the state matrix, in other words: they were fundamentally the same theory!&lt;/p&gt;

&lt;h2&gt;
  
  
  Bombs
&lt;/h2&gt;

&lt;p&gt;In 1930, von Neumann moved to the United States, after receiving an offer to work at Princeton University (and later at the Institute for Advanced Study). The same year, the Nazis became the second largest party in Germany. When they came to power in 1933, Jews were forced out from all parts of society. University maths and physics departments were particularly hard hit, with some 15 – 18 percent dismissed. Twenty of the ousted researchers were former or future Nobel laureates. Many of the researchers (including virtually all of the founders of quantum mechanics) moved to the United States. Almost instantly, the balance shifted from Germany to the US in terms of quality of scientific output. Before the Nazis, almost all important physics and mathematics results were published in German. After the war, the United States was the dominant force.&lt;/p&gt;

&lt;p&gt;Von Neumann soon moved from Princeton to the Institute for Advanced Study. One of his achievements there was to prove the ergodic hypothesis. The hypothesis essentially bridges the gap between the microscopic behavior of individual particles in a system and the macroscopic properties observed in thermodynamics. In 1935, von Neumann became a father, when his daughter Marina was born. In 1936, Turing was visiting Princeton, and von Neumann read his paper “On Computable Numbers”. During the 1930s, von Neumann predicted there would be a war in Europe. In September 1941, before the United States entered the war, von Neumann wrote to his congressman: “The present war against Hitlerism is not a foreign war, since the principles for which it is being fought are common to all civilized mankind, and since even a compromise with Hitler would mean the greatest peril to the future of the United States.”&lt;/p&gt;

&lt;p&gt;To calculate artillery projectile trajectories, many variables need to be considered. For example, long range projectiles fly through progressively thinner air as they gain altitude, so they experience less resistance to their motion. This, and a lot of other factors, meant that hundreds of multiplications were needed to calculate a single trajectory. Calculating the shock waves of bombs also required advanced mathematics. Von Neumann had been involved in this research for a while when he became an official consultant for the Army in 1937. He then quickly became more and more involved, as he showed what he could do. One of his contributions was to show that the damage from a bomb is far greater if it is detonated in the air above the target, rather than on the ground. This is due to the destruction caused by the airburst. The general principle was known before, but he showed that the effect was much larger than previously thought. He also improved the accuracy of the calculations for the optimal altitude of a bomb’s detonation.&lt;/p&gt;

&lt;p&gt;In 1943, Robert Oppenheimer wrote to von Neumann, asking for help with the atom bomb project. At that time, there were two alternative ways of triggering the nuclear explosion: a gun-type design (a “bullet” of fissile material is fired into a target piece to start a nuclear chain reaction), and an implosion design (high explosives around a core are detonated to compress the core to trigger the nuclear reaction). For the implosion device to work, it was important that the core was compressed evenly from all sides. When von Neumann arrived at Los Alamos, the gun-type was the primary option. His main contribution was a design of wedge-shaped charges around the plutonium core that would ensure the compression happened fast enough for the implosion device. This also meant that less plutonium would be needed for an equivalent yield, compared to the gun-type design, and the focus switched to the implosion design.&lt;/p&gt;

&lt;h2&gt;
  
  
  Computers
&lt;/h2&gt;

&lt;p&gt;The calculations needed when developing the atom bomb were getting out of hand. Von Neumann knew that automatic calculating machines were being developed, and he became the chief advocate for using them at Los Alamos. He became aware of the ENIAC (Electronic Numerical Integrator and Computer) project after a chance meeting while waiting for a train in 1944. ENIAC was then more than a year away from being completed, but von Neumann immediately saw the usefulness of it. He was by then the second most famous scientist in the United States (after Einstein), and he had considerable influence in government and military circles. His first contribution was to ensure continued funding for the project.&lt;/p&gt;

&lt;p&gt;He not only had the necessary connections and influence, and saw the need for a computer to perform calculations. He was also perhaps the one person with the best understanding of the mathematical and logical underpinnings of the modern computer, since he was very familiar with both Gödel’s and Turing’s work.&lt;/p&gt;

&lt;h3&gt;
  
  
  Gödel
&lt;/h3&gt;

&lt;p&gt;In 1931, Gödel showed that if arithmetic is consistent, then there are true statements in arithmetic that cannot be proved – i.e. arithmetic is incomplete. The proof uses a variation of the liar’s paradox. Consider the statement “&lt;em&gt;This statement is not provable&lt;/em&gt;“. Let’s call this statement &lt;em&gt;Statement A&lt;/em&gt;. Suppose Statement A could be proved. Then it would be false (since it asserts that it cannot be proved). That would mean a false statement could be proved (which would be inconsistent). On the other hand, if Statement A can &lt;em&gt;not&lt;/em&gt; be proved, then the statement is true (because it says that it is not provable). So then we have a true statement that cannot be proved. This means either arithmetic is inconsistent (which would make it useless, since false statements could be proved), or it is incomplete (i.e. it has true statements that can not be proved).&lt;/p&gt;

&lt;p&gt;So far, the above is just logic statements, not arithmetic. But Gödel cleverly expressed the above idea in arithmetic, using what is now called &lt;em&gt;Gödel numbers&lt;/em&gt;. He came up with a system of numbering all axioms and theorems in &lt;em&gt;Principia Mathematica&lt;/em&gt;. Furthermore, in his system, a certain operation would always correspond to the same arithmetic operation. For example, assume that the Gödel number of the statement “All swans are white” is 122. The negation of the statement is “Not all swans are white”, and might have the Gödel number of double that, i.e. 244. This property (the Gödel number doubles) would hold for &lt;em&gt;all&lt;/em&gt; negations, not just for this specific statement. Furthermore, each Gödel number can be unambiguously decoded back to its original expression.&lt;/p&gt;

&lt;p&gt;All logical operations in this system had corresponding arithmetical operations. A proof is a series of logical statements linked together. With Gödel’s system, he was able to turn the proofs into their equivalent arithmetic operations. So any proof can be checked by simple math. With this in place, he produced an arithmetic statement that mirrored the phrase “&lt;em&gt;This statement is not provable&lt;/em&gt;“, and showed the result above using arithmetic. In other words, the language &lt;em&gt;of&lt;/em&gt; mathematics could be used to make meta-statements &lt;em&gt;about&lt;/em&gt; mathematics.&lt;/p&gt;
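&lt;p&gt;Gödel’s actual scheme encoded a sequence of symbol codes as the exponents of successive primes. A toy version (my illustration, with made-up helper names) shows why the decoding is unambiguous:&lt;/p&gt;

```python
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19]

def encode(symbol_codes):
    """Goedel-style encoding: the k-th symbol code becomes the exponent
    of the k-th prime, so the product factors back uniquely."""
    n = 1
    for prime, code in zip(PRIMES, symbol_codes):
        n *= prime ** code
    return n

def decode(n):
    """Recover the symbol codes (assumed positive) by counting each
    prime's exponent in the factorization."""
    codes = []
    for prime in PRIMES:
        if n == 1:
            break
        exponent = 0
        while n % prime == 0:
            n //= prime
            exponent += 1
        codes.append(exponent)
    return codes
```

&lt;p&gt;Because factorization into primes is unique, every encoded number decodes back to exactly one sequence of symbols.&lt;/p&gt;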

&lt;p&gt;By making arithmetic talk about arithmetic, Gödel dissolved the distinction between syntax and data. He also showed that numbers can represent logical operations, just like instructions in modern day computers. And the memory addresses of instructions are reminiscent of Gödel numbers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Turing
&lt;/h3&gt;

&lt;p&gt;Gödel showed in 1931 that Hilbert’s first two questions had negative answers. Five years later, Turing demonstrated that mathematics is not decidable. To do this, he invented an imaginary machine, which we now recognize as a computer, long before computers were invented.&lt;/p&gt;

&lt;p&gt;The Turing machine, as it is now called, is very simple. It consists of an infinite tape, with squares that can each contain a symbol, or be empty. The machine has a read/write head, that can read a symbol from the tape, erase it, and write a new symbol. The head can also move the tape one square to the left or the right. The head also has an &lt;em&gt;m-configuration&lt;/em&gt;, which consists of its current state and the instructions on what to do. For example, if reading a 0, erase it and write a 1, then move left. Today, we recognize the &lt;em&gt;m-configuration&lt;/em&gt; to be its program.&lt;/p&gt;
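&lt;p&gt;A Turing machine is simple enough to simulate in a few lines. This sketch (my own, not from the book) uses a table mapping (state, symbol) to (symbol to write, head movement, next state):&lt;/p&gt;

```python
def run_turing_machine(rules, tape, state="start", max_steps=1000):
    """Run a Turing machine until it halts (or max_steps is reached).
    rules: dict mapping (state, symbol) to (write, move, next_state),
    where move is -1 for left and +1 for right. Blank squares are ' '."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = rules[(state, cells.get(head, " "))]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells))
```

&lt;p&gt;For example, a machine with three rules that flips every bit and halts at the first blank square turns the tape 0110 into 1001.&lt;/p&gt;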

&lt;p&gt;Using this very simple machine, Turing built a set of instruction tables (i.e. subroutines, although he didn’t call them that) to search for and replace a symbol, to erase all symbols of a particular kind, etc. Using these, he shows how to build a “universal computing machine”, itself a Turing machine, which is capable of simulating any other Turing machine. Given the other Turing machine’s &lt;em&gt;m-configuration&lt;/em&gt; (encoded as symbols on a tape), and that Turing machine’s input tape, the universal computing machine will output the same result as that other Turing machine.&lt;/p&gt;

&lt;p&gt;Armed with this universal Turing machine, Turing shows that the &lt;em&gt;Entscheidungsproblem&lt;/em&gt; is not possible to solve. To do so, he comes up with the halting problem. Assume that there is a Turing machine that can reliably answer whether another Turing machine will eventually halt on a given input. Call this H. Now create a new Turing machine, H’, that has H inside it. H’ will use H on its input, and if H answers “it will halt”, H’ goes into an infinite loop. On the other hand, if H answers “it will not halt”, H’ halts. Now run H’ with itself as the input. This leads to the logical impossibility of H’ both halting and not halting. Thus H’ cannot exist, which means H cannot exist. In other words, it is not possible to have a general procedure for deciding if a mathematical statement is true or false.&lt;/p&gt;
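&lt;p&gt;The diagonal construction can be sketched in code. Given any concrete candidate oracle, you can build H’ from it and exhibit an input where the oracle answers wrongly (my illustration, with hypothetical names):&lt;/p&gt;

```python
def make_h_prime(h):
    """Build the diagonal machine H' from a claimed halting oracle.
    h(prog, data) is supposed to answer whether prog halts on data."""
    def h_prime(prog):
        if h(prog, prog):
            while True:  # h said "halts", so do the opposite: loop forever
                pass
        return "halted"  # h said "does not halt", so halt immediately
    return h_prime
```

&lt;p&gt;Feeding H’ to itself forces the contradiction: whatever the oracle answers about H’ run on H’, H’ does the opposite.&lt;/p&gt;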

&lt;h3&gt;
  
  
  The von Neumann Architecture
&lt;/h3&gt;

&lt;p&gt;ENIAC became operational at the end of 1945. Instead of being used for artillery firing tables, it was used for solving partial differential equations for the hydrogen bomb project at Los Alamos. ENIAC’s program was fixed, but it could be reconfigured using patch cables (like in old telephone exchanges). But the scientists behind it were already working on a successor. In June 1945, von Neumann wrote the &lt;em&gt;First Draft of a Report on the EDVAC&lt;/em&gt;. Whereas ENIAC was not easily reprogrammable, the new architecture described in the report was. This was the first instance of the modern design, where program and data are both stored in the same way in memory. This means it can easily be reprogrammed.&lt;/p&gt;

&lt;p&gt;Von Neumann was not the only one thinking about organizing a computer this way, but his clear description crystallized the thinking. His familiarity with both Gödel’s and Turing’s work was most likely helpful. The report was widely distributed, and helped keep this type of design in the public domain (open source), as opposed to being patented. This undoubtedly helped speed up the development and adoption of the design.&lt;/p&gt;

&lt;h3&gt;
  
  
  Other Firsts
&lt;/h3&gt;

&lt;p&gt;Stanislaw Ulam, a friend and colleague of von Neumann, invented the &lt;strong&gt;Monte Carlo&lt;/strong&gt; method of calculating probabilities using repeated simulations. While convalescing in hospital, he began to play solitaire to relieve the boredom. He started to try to calculate the probability of playing out a hand to completion, but the calculation quickly became intractable. Then he realized he could get a good estimate simply by keeping track of the number of successful plays in 100 tries. And so Monte Carlo simulations were born. Von Neumann was quick to apply this new technique to simulate chain reactions for bomb calculations at Los Alamos.&lt;/p&gt;
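&lt;p&gt;The same idea is easy to demonstrate: estimate a quantity by counting successes in repeated random trials. A minimal sketch (my example, not from the book) estimates pi this way:&lt;/p&gt;

```python
import random

def monte_carlo_pi(trials, seed=0):
    """Estimate pi: sample random points in the unit square and count
    the fraction that land inside the quarter circle of radius 1."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(trials):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4 * inside / trials
```

&lt;p&gt;With 100,000 trials the estimate is typically within a few hundredths of pi, and the accuracy improves with more trials.&lt;/p&gt;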

&lt;p&gt;To run Monte Carlo simulations, you need a source of randomness. Von Neumann came up with a way of producing a sequence of &lt;strong&gt;random numbers&lt;/strong&gt; he called “the method of middle-squares”. His method consisted of squaring an 8- or 10-digit binary number, then using its middle digits as the next random number. That number would in turn be squared, and its middle digits would become the next random number, and so on. Of course, this is not truly random, but it is good enough. Von Neumann later remarked “Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin”.&lt;/p&gt;
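&lt;p&gt;The method is easy to sketch in Python (my own illustration, using 4 decimal digits for readability rather than the longer numbers von Neumann worked with):&lt;/p&gt;

```python
def middle_square(seed, digits=4):
    """One step of the method of middle squares: square the current
    number and keep its middle `digits` digits as the next number."""
    squared = str(seed ** 2).zfill(2 * digits)   # pad to full width
    start = (len(squared) - digits) // 2
    return int(squared[start:start + digits])

# A short pseudo-random sequence starting from 1234:
x, sequence = 1234, []
for _ in range(5):
    x = middle_square(x)
    sequence.append(x)
# 1234² = 01522756 → middle digits 5227, then 3215, 3362, 3030, 1809
```

A known weakness of the method is that some seeds quickly fall into short cycles or reach zero, which is part of why it was later abandoned.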

&lt;p&gt;Von Neumann and his colleague Herman Goldstine invented &lt;strong&gt;flowcharts&lt;/strong&gt; to describe how the simulation programs worked. Von Neumann’s wife Klári worked as a programmer on the project. In a program in April 1948, Klári used a &lt;strong&gt;subroutine&lt;/strong&gt; to generate random numbers by von Neumann’s “method of middle squares”. The invention of the subroutine is generally credited to computer scientist David Wheeler, but Klári’s code made use of one at least a year earlier. In 1955, von Neumann noted that computing capacity had nearly doubled every year since 1945, and he was of the opinion that this trend would continue. This is noteworthy, since it predates &lt;strong&gt;Moore’s law&lt;/strong&gt; by ten years.&lt;/p&gt;

&lt;h2&gt;
  
  
  Game Theory
&lt;/h2&gt;

&lt;p&gt;Von Neumann also helped found the field of &lt;em&gt;Game Theory&lt;/em&gt;. What does game theory mean? The game of chess is not a game in the game theory sense. As von Neumann explained, chess is a form of computation. In every situation, there is an optimal move. We may not be able to find the optimal move, but it exists. On the other hand, in a game-theoretic game, you must consider that the other player may bluff, and you have to consider how other players will react to your actions. The field started in 1928, when von Neumann published the paper &lt;em&gt;On the Theory of Parlour Games&lt;/em&gt;. In it he proves the &lt;em&gt;minimax&lt;/em&gt; theorem. There he considers a two-person game, where one person’s gain is the other person’s loss. For this he coined the term &lt;strong&gt;zero-sum&lt;/strong&gt; game. The strategy is to minimize a player’s maximum loss. He proves that in every two-player zero-sum game, there is a strategy that guarantees the best possible worst-case outcome.&lt;/p&gt;
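&lt;p&gt;For pure strategies, the idea of minimizing your maximum loss is easy to sketch in Python (a made-up 2×2 payoff matrix, not one from the book; the full minimax theorem also covers mixed strategies):&lt;/p&gt;

```python
def maximin_value(payoffs):
    """The row player's guaranteed payoff with pure strategies: assume
    the opponent inflicts the worst case in every row, then pick the
    row whose worst case is best (minimizing the maximum loss)."""
    return max(min(row) for row in payoffs)

# A made-up 2x2 zero-sum game (entries are the row player's winnings):
game = [[3, -1],
        [2,  1]]
# Row 0 risks losing 1; row 1 guarantees winning at least 1. Here the
# column player's min-max is also 1, so the value of the game is 1.
```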

&lt;p&gt;In 1944, von Neumann and Oskar Morgenstern published the book &lt;em&gt;Theory of Games and Economic Behavior&lt;/em&gt;, which became a bestseller, despite weighing in at a hefty 1200 pages. At the time of its writing, economists didn’t have a way to account for a person’s preferences. Von Neumann came up with a happiness/utility scale from 0 to 100 that could be used to account for them. The influence of utility theory and the notion of the rational, calculating individual introduced here spread far and wide.&lt;/p&gt;

&lt;p&gt;In the book von Neumann also analyzes the game of poker. He simplifies the analysis by assigning a score to each hand between 1 and 100. He also simplifies bids to just be either high or low. With these simplifications he is able to show the importance of bluffing. The purpose of bluffing is not so much to try to win with a bad hand, as it is to encourage the opponent to bet high with middle-range hands when you have a good hand.&lt;/p&gt;

&lt;p&gt;The book found applications in a lot of areas, for example in how to handle monopolies, and eventually helped in many fields, such as evolutionary biology. Another application became cold war planning, for example nuclear strategies, especially at the RAND (Research ANd Development) corporation in Santa Monica, California. RAND was started as a think tank for the US Army Air Forces. The subjects researched at RAND were completely aligned with von Neumann’s three obsessions at the time: computing, game theory and the bomb. One example of the research done at RAND is the duel, for example two aircraft about to shoot at each other. Each one wants to hold off fire until it is close enough to have a good chance of hitting the opponent. At the same time, it still wants to fire first. This is a good example of a two-player zero-sum game. Another problem worked on at RAND was that of nuclear war. I was surprised to learn that right after World War II, there were many advocates of a pre-emptive nuclear strike against the Soviet Union.&lt;/p&gt;

&lt;h2&gt;
  
  
  Self-Replication
&lt;/h2&gt;

&lt;p&gt;As if he wasn’t busy enough with all his various projects, von Neumann also took an interest in comparing biological machines to synthetic ones. This resulted in the book &lt;em&gt;Theory of Self-Reproducing Automata&lt;/em&gt;. He was thinking about whether a machine could be constructed that could create a copy of itself. To those who doubted it was possible, he replied that organisms in nature reproduce themselves all the time.&lt;/p&gt;

&lt;p&gt;Von Neumann used the universal Turing machine as his starting point. It can imitate any other Turing machine if it is given its instructions and input. What would it take for it to instead copy itself? Von Neumann’s answer is that three things are needed. First, it needs a set of instructions on how to build a new machine. Second, the machine needs a construction unit that can build the new machine by following the instructions. Finally, the machine needs a unit that can copy the instructions and insert them into the new machine. It is interesting to note that five years before the discovery of the structure of DNA in 1953, and before scientists had a good understanding of the details of cell replication, von Neumann identified the key steps needed for an entity to replicate itself.&lt;/p&gt;

&lt;p&gt;After a while, von Neumann wondered if three dimensions were necessary, or if self-replication was possible in two dimensions. After getting inspiration from Stanislaw Ulam, he developed what would be known as his &lt;strong&gt;cellular model of automata&lt;/strong&gt;. His self-reproducing automaton lives on an endless two-dimensional grid, consisting of squares, or &lt;em&gt;cells&lt;/em&gt;. Each cell can be in one of twenty-nine different states. Further, each cell can only communicate with its four contiguous neighbors. There are many rules governing how and when cells communicate. Most cells start in a dormant state, but can be brought to “life” by neighboring cells, and can later also be “killed”.&lt;/p&gt;

&lt;p&gt;Using this setup, he then builds units that perform the three tasks he identified as necessary for self-reproduction. He constructs a tape consisting of dormant cells (representing 0) and live cells (representing 1). Adding a control unit that can read from and write to tape cells, he is able to reproduce a universal Turing machine in two dimensions. He then designs a constructing arm that can snake out to any cell, set it to the right state, then withdraw. The design took a lot longer than he expected, and other pressing government tasks meant that he was never able to finish his design.&lt;/p&gt;

&lt;p&gt;After von Neumann’s death in 1957, Arthur Burks, who had worked with him on the ENIAC, completed the automaton by going through von Neumann’s notes. The complete machine fits in an 80×400-cell box, but has an enormous tail, 150,000 squares long, containing the instructions needed to clone itself. Start the clock, and step by step the behemoth starts work, reading and executing each tape instruction to create a copy of itself some distance away. The construction arm grows until it reaches a predefined point in the matrix, then it starts depositing row upon row of cells to create its offspring. The first attempt to actually run his design on a computer was made in 1994, but at the time it took impossibly long. On a modern laptop though, it takes minutes to run. The most famous example of a cellular automaton was developed in 1970 – &lt;strong&gt;Conway’s Game of Life&lt;/strong&gt;.&lt;/p&gt;
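&lt;p&gt;Conway’s rules are simple enough to sketch in a few lines of Python (a standard implementation of the Game of Life, far simpler than von Neumann’s 29-state design):&lt;/p&gt;

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life on an unbounded grid;
    `live` is a set of (x, y) coordinates of live cells."""
    # Count how many live neighbors every candidate cell has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell lives next step if it has 3 live neighbors,
    # or 2 live neighbors and was already alive.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The "blinker" oscillates between a horizontal and a vertical bar:
blinker = {(0, 1), (1, 1), (2, 1)}
# life_step(blinker) → {(1, 0), (1, 1), (1, 2)}, and back again.
```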

&lt;h2&gt;
  
  
  Another Story
&lt;/h2&gt;

&lt;p&gt;A few weeks after I finished reading the book, I came across the following quote. The book has many great stories about von Neumann, but not this one, so I think it is fitting to include it as well:&lt;/p&gt;

&lt;p&gt;“There was a seminar for advanced students in Zürich that I was teaching and von Neumann was in the class. I came to a certain theorem, and I said it is not proved and it may be difficult. Von Neumann didn’t say anything but after five minutes he raised his hand. When I called on him, he went to the blackboard and proceeded to write down the proof. After that I was afraid of von Neumann.”&lt;br&gt;&lt;br&gt;
―  &lt;strong&gt;George Pólya&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Von Neumann is such a fascinating character. He was involved in so many important scientific projects and breakthroughs that it truly boggles my mind. I particularly liked reading about the birth of the modern computer, and his influence on that. I also really enjoyed learning about the background and theoretical underpinnings from Gödel’s and Turing’s work.&lt;/p&gt;

&lt;p&gt;The book is very well written, and full of interesting anecdotes. In addition to all the math, science and technology, there is also a good bit of history of the first half of the 20th century. Highly recommended if you are interested in the history of computing and science in general!&lt;/p&gt;

</description>
      <category>learning</category>
      <category>book</category>
      <category>bookreview</category>
      <category>history</category>
    </item>
    <item>
      <title>Finding a New Software Developer Job</title>
      <dc:creator>Henrik Warne</dc:creator>
      <pubDate>Sun, 11 Feb 2024 16:57:27 +0000</pubDate>
      <link>https://dev.to/henrikwarne/finding-a-new-software-developer-job-869</link>
      <guid>https://dev.to/henrikwarne/finding-a-new-software-developer-job-869</guid>
      <description>&lt;p&gt;For the first time ever, I was laid off, and had to find a new software developer job. I managed to find a new one, but it took longer than I thought, and it was a lot of work. I was in contact with 30 companies, got a no from 8 companies, no reply from 6 companies, and offers from 3 companies. Here is what I learnt from the process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://henrikwarne1.files.wordpress.com/2024/01/dsc_2037-3938421749-e1706351475537.jpg"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FVJU7rh1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://henrikwarne1.files.wordpress.com/2024/01/dsc_2037-3938421749-e1706351475537.jpg%3Fw%3D1024" alt="" width="800" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Laid Off
&lt;/h2&gt;

&lt;p&gt;At the end of October last year, I lost my job. I was completely surprised, but in retrospect, maybe I shouldn’t have been that surprised. The times were getting tougher, and many companies had been laying off people during all of 2023. If the company is not making enough money, eventually there will be layoffs. In my case, we were 17 people let go that day, including 8 developers.&lt;/p&gt;

&lt;p&gt;A few minutes after I had the Zoom call with the CEO, my access to all company resources was cut off. Apart from not being able to finish what I was working on (I had several unpushed changes), it also became harder to say goodbye to everybody. Many of my colleagues reached out on LinkedIn, which was great. Some even set up Zoom calls so we could talk about what had happened, and say a proper goodbye, which I really appreciated.&lt;/p&gt;

&lt;p&gt;Being let go was a new experience for me. The closest I have been in the past is during the dot com bust, when the project I was working on at Ericsson was cut. They were not yet laying people off (that came a bit later), so we were offered other roles within Ericsson. But I decided to change to another company instead (they reached out directly to me, since I had worked with them at Ericsson). All the other times I have changed jobs, it’s been on my own initiative, while still being employed. I typically stayed on for three months (the standard notice period in Sweden), finishing up what I was working on, before starting at the new company.&lt;/p&gt;

&lt;p&gt;The upside of being cut off immediately is that I could immediately spend all of my time looking for a new job, while still getting paid for some time. Even though I was surprised that I was let go, I didn’t panic. &lt;a href="https://henrikwarne.com/2017/03/12/programmer-career-planning/"&gt;My philosophy&lt;/a&gt; has always been that I should be prepared to find a new job at any time, since you never know what will happen. So I keep a list of companies that I would like to work at. I also stay friendly with recruiters that contact me, in case I need to get back to them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Finding Roles
&lt;/h2&gt;

&lt;p&gt;I started looking for another job immediately. The best source for this was LinkedIn, but I found some jobs through other means.&lt;/p&gt;

&lt;h3&gt;
  
  
  LinkedIn
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Open to work.&lt;/strong&gt; The first thing I did was to change my LinkedIn “open to work” status to &lt;em&gt;“Immediately, I’m actively applying”&lt;/em&gt; (from &lt;em&gt;“Flexible, I am casually looking”&lt;/em&gt;, which I normally use). I kept the visibility to recruiters only, not all LinkedIn members (which would set the green &lt;strong&gt;#OpenToWork&lt;/strong&gt; photo frame). I have seen arguments for and against using &lt;strong&gt;#OpenToWork&lt;/strong&gt; – it lets more people know you are looking, versus it makes you look desperate. It is hard to know which is better, but I decided to only let recruiters know.&lt;/p&gt;

&lt;p&gt;As soon as I changed this setting, I got contacted by maybe five recruiters per day for the first week or so. I suppose the LinkedIn algorithm alerts recruiters to people that have just changed their status. The quality of the roles was about the same as I normally get (albeit at a much higher rate) – some I really liked, some were OK, and some were definitely not for me.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Applying to known companies.&lt;/strong&gt; Next, I went through my list of companies I would like to work for, and looked to see if they had any open developer roles. I first looked on the company page on LinkedIn, then clicked on the Jobs tab. Many of the companies were actively recruiting. A good thing when you click on an ad is that you can see how many people have applied, and how old the ad is. Sometimes I would also go to the company home page and look at their career page. But I found it convenient to go through LinkedIn, where the format is the same, and you can see if any of your contacts work there. Sometimes you can also see who has posted the ad.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Job ads.&lt;/strong&gt; I also looked through job ads on LinkedIn. There is a search function, and I tried different searches, for example “Golang Stockholm”. It works well enough, and I would click on anything that looked interesting. LinkedIn also has a “Recommended for you”-section and “Jobs where you’d be a top applicant”-section (only if you have Premium), and I guess they use your skills and previous searches to populate these. These also showed a good selection of job ads.&lt;/p&gt;

&lt;p&gt;There are however two problems. The first is that searching for fully remote jobs is unreliable. Sometimes it turns up good ads, but sometimes it turns up roles that are e.g. remote only in the UK. It would be good to be able to search for fully remote roles in Sweden, fully remote in EU, or fully remote worldwide.&lt;/p&gt;

&lt;p&gt;The second problem is that it is not possible to get only the latest ads, for example ads that are less than a week old. So I ended up having to page through a lot of ads I had already seen. After I had found a new job, I saw a good solution for this in a tweet: use Google, set the time interval to last week, and search for e.g. &lt;em&gt;“golang fully remote site:linkedin.com/jobs”&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reaching out.&lt;/strong&gt; I also reached out to around 15 recruiters that had contacted me on LinkedIn in the past, but nothing came out of that. I knew that the roles they contacted me for would not be open, but I thought that they might be recruiting for something similar. On a few occasions I sent a direct message (InMail) to managers that were recruiting (some encouraged you to do this in their bios), but I don’t think I got a single response. Perhaps I was not a good enough fit, but it was still disappointing.&lt;/p&gt;

&lt;h3&gt;
  
  
  Other Sources
&lt;/h3&gt;

&lt;p&gt;In Sweden there is a site called &lt;a href="https://demando.io/sv"&gt;Demando&lt;/a&gt; that matches developers with employers. You fill out a profile, listing your skills, and giving a minimum salary you will accept. Companies advertise jobs there too, and you get a message if there is a match. I already had an inactive profile there that I activated, setting a relatively high minimum salary. I got contacted by one company there, which I later got an offer from. I also found a job ad there for a company I would like to work for. I did not find that ad on LinkedIn. I applied to them, and got an offer from them too. So pretty good payoff from using that site.&lt;/p&gt;

&lt;p&gt;I also briefly looked at a site called &lt;a href="https://remoteok.com/"&gt;RemoteOK&lt;/a&gt;, but didn’t find anything that I applied to there. My general sense was that the quality of job ads there was much lower than on LinkedIn. I also had a look at &lt;a href="https://www.efinancialcareers.co.uk/"&gt;eFinancialCareers&lt;/a&gt;, but there are almost no fully remote roles there (and they are hard to search for). On the first of every month, there is a Hacker News thread called &lt;a href="https://hn.algolia.com/?dateRange=pastMonth&amp;amp;page=0&amp;amp;prefix=true&amp;amp;query=Who%27s%20hiring&amp;amp;sort=byPopularity&amp;amp;type=story"&gt;Who’s hiring&lt;/a&gt;. I looked there briefly too, but I found it too hard to find something relevant there.&lt;/p&gt;

&lt;p&gt;I later found another good way of finding companies to check to see if they have any open roles: google “competitor to” or “alternative to” and a company name, to find similar companies.&lt;/p&gt;

&lt;h3&gt;
  
  
  Applying and Tracking
&lt;/h3&gt;

&lt;p&gt;All recruiters I was in contact with asked for my CV, even though it is mostly the same information that is already on my LinkedIn profile. It is almost as if it is a sign that you are serious. This is fine with me, since once I had an up-to-date CV, I just attached that one. When applying to companies directly, most companies ask for a CV (even when including your LinkedIn profile), and many also asked for a cover letter. I saved all the cover letters I wrote, and when I needed to write a new one, I copied the most similar previous one I had, and modified it to fit the new application.&lt;/p&gt;

&lt;p&gt;Soon after I started to send out applications, I created an Excel sheet to keep track of my applications. I included company name, date the application was sent, recruiter or contact person, and a column for general notes. Looking at it now, it has 30 entries, but I didn’t send applications to all of them. In some cases, I added an entry after speaking to a recruiter, but then nothing came of it.&lt;/p&gt;

&lt;p&gt;6 companies I applied to never responded at all. In some cases, the ad was more than a month old. But if the ad is no longer relevant, they should take it down instead of leaving it up and not responding. In some cases, I tried to contact the recruiter that had posted the ad, but I didn’t get a response that way either.&lt;/p&gt;

&lt;p&gt;Ideally, all companies should respond. But I don’t mind too much if I don’t get a response if I haven’t been in contact with a person at the company. However, if I have had an interview with them, I think they should at least let you know if they are not interested. At one company, I had an initial interview with a recruiter at the company. She said she would set up an interview with a manager. Then crickets. I mailed her after two weeks, asking what was happening, but didn’t get a response. Two weeks later, I sent another mail saying I was no longer interested, and got a half-hearted apology back (but no reason for why she never got back to me). So this exchange now colors my view of that company.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tools I Paid For
&lt;/h3&gt;

&lt;p&gt;I have never had &lt;strong&gt;LinkedIn Premium&lt;/strong&gt; before, but I decided this was a good time to try it, so I paid for a month. However, it was quite disappointing. Maybe I could see more profile viewers, I am not sure, but it definitely does not show who all of them are. And even if it had, that has limited value in a job search. I also got a number of InMails to send each month (maybe 5?). I sent a couple, but they were not very useful for me either. Then there was the “Jobs where you’d be a top applicant”, but that too wasn’t very useful for me. So I cancelled after one month. Before cancelling, I had the option of extending it for two months for the price of one.&lt;/p&gt;

&lt;p&gt;I also paid for a GoLand license for two months (quite expensive), since I was looking for Go jobs and wanted to practice in an environment I am used to. I signed up for &lt;strong&gt;GitHub Copilot&lt;/strong&gt; too, because I wanted to try it. It’s quite good, but I didn’t use it much, because I wanted to make sure I did all coding by myself at interviews. I already had a subscription to &lt;strong&gt;ChatGPT&lt;/strong&gt;, and that came in very handy for many take-home assignments.&lt;/p&gt;

&lt;p&gt;I signed up for &lt;strong&gt;Leetcode&lt;/strong&gt; too, as I have done in the past when preparing for interviews. Mostly I like the paid version because you can get the editorial explanation for the solutions. I practiced a bit in Leetcode, so it was worth the expense. One company used an IQ test from Alva Labs, so I paid for a practice course called &lt;strong&gt;Alva Logic Cram Course&lt;/strong&gt; from 12minprep (there were lots of vendors, but this was relatively cheap). It was definitely worth it: I did much better on the test than I would have had I not practiced beforehand.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reflections
&lt;/h3&gt;

&lt;p&gt;Times are definitely harder now compared to the previous decade. There were still many open positions to apply for, but it looked like there were many more applicants for each role than in the past. On LinkedIn, where you can see how many applicants there are, many job ads would have more than a hundred applicants. Further evidence of times being tougher is the number of companies that I never heard back from, even though I believe I would be great for the role.&lt;/p&gt;

&lt;p&gt;There is also a big focus on having experience in a given language. In the past, I have started a job developing in Python without any Python experience. The same for Golang. It didn’t take me long to get productive in each language. Partly this is because many imperative languages are very similar. Of course, knowing the libraries and ecosystems is good, but in my experience not strictly necessary. But many recruiters now told me that it was a hard requirement from the hiring companies to have at least two or three years’ experience in the given language.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Interview Process
&lt;/h2&gt;

&lt;p&gt;For a typical job there were four or five interviews: an initial interview with a recruiter, an interview with a hiring manager, one or two technical interviews (either live coding, or going through a take-home assignment). There could also be an interview with a product manager, and/or one with a CTO or founder. All in all, quite a time commitment.&lt;/p&gt;

&lt;p&gt;I was applying for both local and remote roles. For the remote ones, all interviews were naturally on Zoom/Meet/Teams. For local jobs in Stockholm, most companies wanted in-person interviews. This caused some problems, because it takes time to travel into the city. I could easily do many remote interviews in a day, but one in-person interview would take half a day with commuting. Mostly I managed to schedule in-person interviews on the same days, which helped. The advantage of the in-person interviews is that you get a better feel for the other person, and you can see what the office looks like.&lt;/p&gt;

&lt;h3&gt;
  
  
  Preparations
&lt;/h3&gt;

&lt;p&gt;I already had a Word document that I called &lt;em&gt;Interview Tips&lt;/em&gt;. In it I wrote down things to think about before an interview, in a format that is easy to review quickly. One section I added now was &lt;strong&gt;Behavioral Questions&lt;/strong&gt;. These are questions of the form &lt;em&gt;“Tell me about a time when you disagreed with a coworker. How did you resolve it?”&lt;/em&gt;. Typically, you should answer them using the STAR framework: Situation, Task, Action, Result, Reflection. In the past, I have failed interviews because of these questions – I hadn’t prepared, and couldn’t come up with good examples on the spot in the interviews.&lt;/p&gt;

&lt;p&gt;This time I went through a good list of such questions (&lt;a href="https://leetcode.com/explore/interview/card/leapai/"&gt;Rock the Behavioral Interview&lt;/a&gt;) from Leetcode, and thought about examples to use. Once I had good examples, I wrote the question and my answer down in the document. Before an interview, I would review what I had written down, so I would be able to come up with good examples. This worked well, I didn’t fail any interviews because of behavioral questions.&lt;/p&gt;

&lt;p&gt;In the document I also wrote down little snippets of code in both Python and Go. I tried to cover many common patterns and idioms. I did this so I could refresh my memory and quickly come up with the right syntax in a coding interview. I ran all the snippets first, to see that I hadn’t made any mistake, and included relevant output. Reviewing these snippets before an interview made me feel calmer and more prepared.&lt;/p&gt;

&lt;p&gt;I also watched a good video by Gergely Orosz, &lt;a href="https://www.youtube.com/watch?v=vFOw_m5zNCs"&gt;Confessions from a Big Tech Hiring Manager: Tips for Software Engineering Interviews&lt;/a&gt;, on technical interviews in general. Some takeaways: be curious and collaborative, and ask questions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Interviews
&lt;/h3&gt;

&lt;p&gt;In all my initial interviews, I was open with the fact that I had been let go from my previous job due to cut backs. It didn’t seem like disclosing this was to my disadvantage. I was never nervous talking to recruiters or managers – I always knew what to say, since I had done it many times, both in the past and for this round of interviews. It is easy for me to articulate what I am looking for in a job, and what my strengths are, because it is very clear in my mind. I was also not nervous before non-coding technical interviews, since I feel I know most technologies I have worked with quite well.&lt;/p&gt;

&lt;p&gt;However, I was nervous when I had coding interviews. I don’t exactly know why, but my brain seems to work at only 50% capacity every time I have to do live coding. So, it can be hard for me to come up with a solution, or remember some syntax, when trying to solve a problem. Luckily, all live coding interviews went well this time, but probably mostly because I had prepared a lot.&lt;/p&gt;

&lt;p&gt;Of all non-coding interviews, I failed only one (I failed several coding and take-home assignments though). For the one I failed, I was asked what timeout I would set on a database connection. I was thinking more about how long an individual user could be prepared to wait for a page to render, so I blurted out too high a number. This was enough to fail an interview that otherwise went pretty well.&lt;/p&gt;

&lt;h3&gt;
  
  
  Coding Tests
&lt;/h3&gt;

&lt;p&gt;Compared to when I last interviewed a few years ago, there were more take-home assignments this time. Take-home assignments are a lot less stressful than live coding, but they also take much more time. Regardless of what the companies claim, I would say each assignment took at least six hours. There is an unfortunate asymmetry here: it is easy for a company to demand that you do a take-home assignment early on in the process, with almost no cost in effort to them. For the job-seeker, it is at least six hours of work that may or may not pay off. Even so, I noticed that each time I got going with an assignment, I &lt;em&gt;loved&lt;/em&gt; the programming – being immersed in a task, structuring the code well, finding good names etc. It was extra obvious because I wasn’t programming as much as when I had a developer job.&lt;/p&gt;

&lt;p&gt;I did five take-home assignments, two in Java, two in Python, and one in Go. I failed one and a half. For the Go assignment, I wrote a working solution, but did not include tests. This wasn’t stated as a requirement, but I should have included some even so (when developing the solution, I used a more interactive approach, which meant running the code a lot as I developed). I also failed that assignment because I did not include caching to speed it up (though to me it was not clear that it would be run more than once).&lt;/p&gt;

&lt;p&gt;The other assignment, I failed because of hidden test cases and sloppy coding from me. The task was really good – I was given test cases, but no implementation. I implemented enough of the system to pass all the existing test cases. My instinct told me that I should add more test cases on my own (that’s what I would have done if this was on the job). However, I thought that I had already spent a lot of time on it, so I didn’t. When we went through my solution in the interview afterwards, they had run some extra (hidden) test cases on my solution, and discovered two errors in my code. Both had to do with cases with empty input. I felt really stupid for not being more careful implementing the solution. In the end, I still got an offer from that company, and that’s why I am counting it as only half a failure.&lt;/p&gt;

&lt;p&gt;For the take-home assignments, it was quite helpful to have access to ChatGPT. Getting a working framework up for what I wanted to do was a lot quicker that way.&lt;/p&gt;

&lt;p&gt;Of the live coding assignments, I passed three and failed two. In the first one I failed, I had to write a limited chess program that only supported two kinds of pieces. It needed a project structure, a data model, valid movements for the pieces, and tests. I started from nothing, and had to send in the solution within two hours. It was very tight and stressful. I got most of it working, but not all functionality. I also had a bug in the movement code. That combination made it a fail.&lt;/p&gt;

&lt;p&gt;The other live coding exercise was less well prepared on the company’s part. I downloaded a repo from Github with some initial code. But there were no working tests, and it took a while to set up an environment to work in. I also had to ask many questions on how the logic was supposed to work (with 50% of my brain capacity), and in the end I took too long. For all live coding tests, I used the IDE on my computer, and shared the screen over Zoom.&lt;/p&gt;

&lt;h2&gt;
  
  
  Salary
&lt;/h2&gt;

&lt;p&gt;A couple of times in the beginning, I gave too high of a salary number, resulting in cancelled interviews. So I changed my tactics to instead tell them what my salary had been each year for the past four years, to give them a sense of what I was ideally looking at (I had had quite good salaries). Often, they would say “well, we can’t pay that”. To this I would respond that since I currently don’t have a job, I don’t really have a minimum acceptable salary, it will depend on what they (and others) can offer. This would often mean that we could continue the process. Sometimes it also meant that I had a chance to show what I had to offer, which might later translate to a better offer than they initially intended.&lt;/p&gt;

&lt;p&gt;Once I had my first offer, it became easier. Surprisingly often, when I asked companies what the salary range for the role was, I would get an answer. This was really good, because I could say no to the ones that offered less than I was prepared to accept. In one case, I didn’t check what salary they were prepared to pay, and I only found out when I got an offer from them (after a lot of interviews and a take-home assignment). I had continued anyway because I really liked the company, but their offer was very low. In hindsight, I should have checked upfront, instead of wasting a lot of time and effort on them.&lt;/p&gt;

&lt;p&gt;Even before this job-hunting round, I had quite a good sense of what salaries companies in Stockholm pay developers. It got even better after talking to many companies. In Sweden, you think in terms of the monthly salary. You also have to consider whether pension contributions are included or not (in most cases they are, and they can amount to an additional 10 to 15 percent). A very average developer salary would be 55K – 65K SEK per month. Almost all companies are prepared to pay that. Getting a salary of 80K a month or over is more difficult, but definitely not impossible, even in a tight market. In addition to salary, many companies can offer options or equity, and/or a bonus.&lt;/p&gt;

&lt;p&gt;In the end, you also have to weigh other aspects of the job. How interesting is the product and company? What language will you work in? What will you learn? Who will your colleagues be?&lt;/p&gt;

&lt;h2&gt;
  
  
  Choosing
&lt;/h2&gt;

&lt;p&gt;One company I interviewed with was very positive, and ended up asking for two references, which I provided. But the next day, they announced that they would go with another candidate. I was quite surprised, since I thought I would get an offer from them. It’s ok if they find somebody else, but I was upset that they wasted the time of two of my references. The recruiter later apologized and said that she was convinced they wanted to give me an offer, but the manager had picked somebody else. I don’t know what to think, but it was quite a disappointment.&lt;/p&gt;

&lt;p&gt;Even if you fail to get offers at most companies you apply to, you only need to have one where you are successful. In a sense it is a numbers game – go through enough processes so that you get at least one offer.&lt;/p&gt;

&lt;p&gt;In the end, I had three and a half offers to choose from (I didn’t formally get an offer from one of the companies, mostly because I knew it would pay less than two of the offers I already had, so I am only counting it as a half). I managed to get them more or less at the same time. I tried to slow some companies down, and speed others up, by delaying interviews or scheduling them as soon as possible.&lt;/p&gt;

&lt;p&gt;I was lucky to have more than one offer, because it put me in a much better negotiating position. The top two offers were both very good, and I could see myself taking either. In the end I got everything I was looking for – a very interesting job, and a good salary.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons Learned
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Stressful.&lt;/strong&gt; Looking for a new job is a lot of work. It is hard to relax, even on weekends, because there is always some interview preparation you can do. It wasn’t until I had a firm job offer that I could enjoy my time off (I had a few weeks off before I started).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Takes time.&lt;/strong&gt; I thought it would be quicker to find a new job. But there are four or five interviews to schedule, and often a take-home assignment to do. You also want to read up on the company and product. Then the companies usually can’t schedule the interviews as fast as you would like. Add that you will fail interviews, and the process can easily take months.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A numbers game.&lt;/strong&gt; Even if you fail a lot of interviews, it only takes one company where you are successful for you to have a new job. So apply to many. I also realized that I am bad at judging if I did well or not. Almost every time I was rejected, I was surprised.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;People to discuss with.&lt;/strong&gt; I have two former colleagues, Patrik and Peter, with whom I discussed my job-hunting process. These discussions were really useful. There are so many aspects to consider, and having somebody to talk to who understands helps immensely!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Responsive recruiters.&lt;/strong&gt; Recruiters that consistently get back to you quickly are great! It is such a simple way to build confidence in the company they represent, yet so many recruiters really are bad at it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reach out.&lt;/strong&gt; If a colleague of yours is let go, reach out to them (through LinkedIn or other means) to say goodbye. It meant a lot to me, and I think most people would appreciate it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I am lucky. I love programming, and I have a lot of experience. Even in a tougher market it was comparatively easy for me. It still took a lot of work, but I ended up with a great new job as a senior developer at &lt;a href="https://swissblock.net/"&gt;Swissblock&lt;/a&gt;. I hope my job-hunting experience can be helpful to other developers looking for their next job.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>work</category>
      <category>career</category>
      <category>interview</category>
    </item>
    <item>
      <title>Tidy First?</title>
      <dc:creator>Henrik Warne</dc:creator>
      <pubDate>Wed, 10 Jan 2024 19:47:51 +0000</pubDate>
      <link>https://dev.to/henrikwarne/tidy-first-21hi</link>
      <guid>https://dev.to/henrikwarne/tidy-first-21hi</guid>
      <description>&lt;p&gt;&lt;em&gt;“Software design is preparation for change; change of behavior”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.goodreads.com/book/show/171691901-tidy-first"&gt;Tidy First?&lt;/a&gt; is a new book by Kent Beck. It is a short little book, only about 100 pages (and lots of white space on them), but it contains some deep insights about software development. The book has three parts, going from concrete to abstract. First there is a list of 15 &lt;em&gt;tidyings&lt;/em&gt;, which are small refactorings. The next part, &lt;em&gt;Managing&lt;/em&gt;, discusses how and when to perform the tidyings. The final part, &lt;em&gt;Theory&lt;/em&gt;, presents a great framework for how to think about software development, using the concepts of &lt;em&gt;time value of money&lt;/em&gt; and &lt;em&gt;optionality&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://henrikwarne1.files.wordpress.com/2024/01/tidyfirst.jpg"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tb_WrXhe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://henrikwarne1.files.wordpress.com/2024/01/tidyfirst.jpg%3Fw%3D1024" alt="" width="800" height="359"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The author, Kent Beck, is of course the creator of extreme programming (XP). As &lt;a href="https://dev.to/henrikwarne/20-5-years-of-xp-and-agile-3lpb"&gt;I have written before&lt;/a&gt;, his article in 1999, presenting XP, gave me the biggest productivity boost of my entire career as a software developer. Even though this is a very short book, it contains a lot of wisdom. It is worth reading slowly, to really digest the content.&lt;/p&gt;

&lt;p&gt;A key idea in the book is that before you implement a behavior change (B) in the code, it may be beneficial to first perform one or more structural changes (S). These changes do not alter the program behavior, and are almost trivially simple. These changes are called tidyings. The idea is that by doing these tidyings, the behavior change will be easier to implement.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tidyings
&lt;/h3&gt;

&lt;p&gt;There are 15 tidyings, and they are presented in very short, almost tweet-like, chapters.&lt;/p&gt;

&lt;h4&gt;
  
  
  Tidyings I Like the Most:
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Guard clause&lt;/strong&gt; – exit a function early if certain conditions are not met. This makes the rest of the function easier to write (no nested if-statements). &lt;strong&gt;Normalize symmetries&lt;/strong&gt; – the same logic should be expressed in the same way everywhere it appears, since it makes reading the code easier.&lt;/p&gt;
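&lt;p&gt;A minimal sketch of a guard clause (the function and its business rules are invented for illustration): the disqualifying cases exit early, so the happy path needs no nesting.&lt;/p&gt;

```python
def ship_order(order):
    """Guard clauses: handle the disqualifying cases first, then the happy path."""
    if order is None:
        return "no order"
    if not order.get("paid"):
        return "awaiting payment"
    if not order.get("items"):
        return "nothing to ship"
    # Happy path: no nested if-statements needed below this point.
    return "shipped %d items" % len(order["items"])
```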

&lt;p&gt;I have noticed that many developers are reluctant to introduce &lt;strong&gt;explaining variables/constants&lt;/strong&gt;. The idea here is to extract a subexpression into a variable named after the intention of the expression – typically done after reading the code and realizing what some part of it means. In the author’s words: &lt;em&gt;“In this tidying, you are taking your hard-won understanding and putting it back into the code”&lt;/em&gt;.&lt;/p&gt;
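&lt;p&gt;A small invented example of the tidying: a condition that would otherwise be one dense expression is broken into variables named after their intent, so the final line reads like the business rule it encodes.&lt;/p&gt;

```python
def is_eligible_for_discount(customer):
    """Explaining variables: extract subexpressions and name them after their intent."""
    is_long_time_customer = customer["years_active"] >= 5
    has_large_history = customer["total_spent"] >= 1000
    # The condition now states the rule directly, instead of raw comparisons.
    return is_long_time_customer and has_large_history
```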

&lt;p&gt;&lt;strong&gt;New interface, old implementation&lt;/strong&gt;. If the design was like this, it would be easier to make the change. So create that new interface, and in it delegate to the old interface (for now). I really like this way of thinking, and I am using it often: first I assume I have a function that does XXX, and using that makes the solution easier. Then I create the function that does XXX. In a way it is working backwards – first assuming you have something available, and later implementing it.&lt;/p&gt;
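&lt;p&gt;A sketch of the idea with hypothetical functions: the interface we wish we had is created first, and for now it simply delegates to the old one.&lt;/p&gt;

```python
# Hypothetical old interface: positional arguments that are easy to mix up.
def send_message_old(host, port, payload, retries, timeout):
    return "sent %d bytes to %s:%d" % (len(payload), host, port)

# The new interface we wish we had: it just delegates to the old one (for now).
def send_message(destination, payload, retries=3, timeout=5.0):
    host, port = destination
    return send_message_old(host, port, payload, retries, timeout)
```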

&lt;p&gt;The simplest tidying of them all is &lt;strong&gt;chunk statements&lt;/strong&gt;, which just means using blank lines to indicate which parts of the code are closely related, and which parts are separate. An underestimated practice, even though it can sometimes be hard to know how to chunk things. Too many blank lines also mean you fit less code on your screen, so it needs to be balanced. &lt;strong&gt;Extract helper&lt;/strong&gt; is another underused technique. Like an explaining variable, it lets you name a part of the logic. &lt;em&gt;“Interfaces become tools for thinking about problems”&lt;/em&gt;.&lt;/p&gt;
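&lt;p&gt;An invented example of extract helper: two chunks of logic are pulled out into named helpers, so the top-level function reads as a sequence of named steps.&lt;/p&gt;

```python
def report(orders):
    """Extract helper: name each chunk of logic so the caller reads at a higher level."""
    total = _total_revenue(orders)
    top = _best_seller(orders)
    return "revenue %d, best seller %s" % (total, top)

def _total_revenue(orders):
    return sum(o["price"] * o["quantity"] for o in orders)

def _best_seller(orders):
    return max(orders, key=lambda o: o["quantity"])["name"]
```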

&lt;p&gt;One tidying that I don’t think I have used before is &lt;strong&gt;one pile&lt;/strong&gt;. Normally, tidyings will divide the code up in parts, where each part can be understood in isolation. However, sometimes the way the code is divided can hinder understanding. In this case, bringing it all together in one place can be a way to understand it better. Then it can be subdivided (in a new, easier to understand, way).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Explaining comments/delete redundant comments&lt;/strong&gt;. When needed, &lt;a href="https://dev.to/henrikwarne/on-comments-in-code-4545"&gt;I am all for adding a comment&lt;/a&gt; with extra information that is not obvious from the code. Also, if the comment says exactly what the code does, delete the comment (there is a good example in the book of how this can happen when tidying).&lt;/p&gt;

&lt;h4&gt;
  
  
  Other Tidyings:
&lt;/h4&gt;

&lt;p&gt;Delete &lt;strong&gt;dead code&lt;/strong&gt; – this should be easy, but I often see either dead code, or commented out code, still left in code bases. &lt;strong&gt;Reading order&lt;/strong&gt; – put the code in the file in the order that makes the most sense when reading it. This advice is less important in the age of IDEs, where navigating in and out of functions is very easy. Still, keeping functions in a good order doesn’t hurt. It is also similar to &lt;strong&gt;cohesion order&lt;/strong&gt; – keeping elements that change together close to each other. There is a similar theme for &lt;strong&gt;move declaration and initialization together&lt;/strong&gt; – keep related things together.&lt;/p&gt;

&lt;p&gt;If the arguments to a function are in the form of a map/dict, then use &lt;strong&gt;explicit parameters&lt;/strong&gt; to make clear what the inputs are.&lt;/p&gt;
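&lt;p&gt;A made-up before/after sketch of this tidying: with a dict argument, the inputs are invisible at the call site; with explicit parameters, they are part of the signature.&lt;/p&gt;

```python
# Before (hypothetical): the caller passes a dict, so the inputs are hidden.
def resize_implicit(params):
    return (params["width"], params["height"], params.get("keep_aspect", True))

# After: explicit parameters document the inputs in the signature itself.
def resize(width, height, keep_aspect=True):
    return (width, height, keep_aspect)
```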

&lt;p&gt;For many of these, the best way may be to try them and see if the resulting code is better than before. If not, undo the change. Many times I have been too reluctant to make a change to see how it looks (somehow it feels like wasted effort). But I have come to realize that actually seeing the changed code (not just contemplating it) is the best way of evaluating the change.&lt;/p&gt;

&lt;h3&gt;
  
  
  Managing
&lt;/h3&gt;

&lt;p&gt;Each individual tidying is very simple. They only change the structure of the code, never the behavior. Even chaining several tidyings together will result in a change that is easy to understand, and easy to undo if necessary. Sometimes the behavior change will be easier if we tidy first, then implement the change. In other cases, it is better to make the structural changes later, or not at all. This is the reason there is a question mark in the book title. Regardless, structural and behavioral changes should be kept in separate PRs (or at least in separate commits).&lt;/p&gt;

&lt;p&gt;In many workplaces, there are high fixed costs (in time and effort) associated with PR reviews. The ideal solution, according to the author, is to not require PR reviews for changes that consist only of tidyings. If that is not feasible, then at least keep the tidyings in separate commits.&lt;/p&gt;

&lt;p&gt;A problem I often encounter is that once you start making behavior changes, you see structural changes that should be done. This results in a mix of B and S changes. Separating them out can be hard. There is a good discussion on how to handle this in the chapter &lt;em&gt;Getting Untangled&lt;/em&gt;. Either you ship it as it is (tangled), or you untangle the different changes (I have been doing this using git’s interactive rebase), or you discard all the changes and re-implement them. The last option sounds a bit crazy, but the author thinks it may lead to even better code in the end.&lt;/p&gt;

&lt;h3&gt;
  
  
  Theory
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Beneficially Relating Elements
&lt;/h4&gt;

&lt;p&gt;Software design is &lt;em&gt;beneficially relating elements&lt;/em&gt;. On one extreme there is a single gigantic soup of tiny subelements, for example assembly code with a global namespace. Even though such a program can work and produce the correct output, it would be virtually impossible to modify. The key then is to structure the program to make it understandable and changeable. This is done by creating and deleting elements, and creating and deleting relationships between the elements in a way that aids the overall understandability (this is the &lt;em&gt;beneficially&lt;/em&gt; part).&lt;/p&gt;

&lt;h4&gt;
  
  
  Time Value of Money, Optionality
&lt;/h4&gt;

&lt;p&gt;How do we balance keeping the program well-structured with the need to add behavior? Now we get to perhaps my favorite part of the book – relating software development to the concepts of &lt;em&gt;time value of money&lt;/em&gt; and &lt;em&gt;optionality&lt;/em&gt;. These are in tension with each other, and explain the question mark in the title.&lt;/p&gt;

&lt;p&gt;The time value of money simply means that a dollar today is worth more than a dollar tomorrow. Therefore, getting features out quickly, so you can start earning money earlier, is imperative. So don’t tidy first.&lt;/p&gt;

&lt;p&gt;However, software creates value in two ways: in what it does today, but also in what it &lt;em&gt;could&lt;/em&gt; do tomorrow. As noted in the book: &lt;em&gt;“The mere presence of a system behaving a certain way changes the desire for how the system should behave”&lt;/em&gt;. This explains why software is never done – using a system makes you continually see new uses for it. Just as options in finance have value even before they are exercised, so do options in software. The options in this case are the structures in the code that enable quick changes. Tidyings improve the structure, thus creating more, and more valuable, options. Therefore you should tidy first.&lt;/p&gt;

&lt;p&gt;Because of this tension between the cases, you have to find a balance for when, and how much, to tidy.&lt;/p&gt;

&lt;h4&gt;
  
  
  Coupling
&lt;/h4&gt;

&lt;p&gt;The key reason a program is expensive to change is that changing one element requires changing other elements (because the elements are coupled with respect to that change). Changing the other elements can in turn necessitate more changes, i.e. cascading changes. Therefore, reducing coupling will reduce the cost of change.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Constantine’s Equivalence&lt;/strong&gt; states that the cost of software is roughly equal to the cost of changing it. This cost of change is dominated by the cost of the big, cascading changes. Therefore, the cost of software is approximately equal to the coupling.&lt;/p&gt;

&lt;h3&gt;
  
  
  To Keep in Mind
&lt;/h3&gt;

&lt;p&gt;Here are the key lessons from the book that I want to keep in mind when developing software:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What structural change(s) (S) will make the next behavioral change (B) easier to implement?&lt;/li&gt;
&lt;li&gt;Keep S and B in separate commits (or even separate PRs).&lt;/li&gt;
&lt;li&gt;Create future behavior options by keeping a structure that supports change.&lt;/li&gt;
&lt;li&gt;Constantine’s Equivalence: cost(software) ~= coupling&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;There is a lot to like about this book. It has many concrete code tidyings you can put to use right away. It also has interesting discussions on how and when to perform them, as well as models to help you think about the tradeoffs present. Throughout the text there are numerous indications that the author has long practical experience, and has thought long and hard about software development.&lt;/p&gt;

&lt;p&gt;This book is focused on the individual developer, and is the first in a series of three books. The next book will be about teams of software developers, and the third book will be about the cooperation between developers and non-developers. I really enjoyed Tidy First?, and I am looking forward to reading the next books in the series.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>book</category>
      <category>bookreview</category>
    </item>
    <item>
      <title>What I Have Changed My Mind About in Software Development</title>
      <dc:creator>Henrik Warne</dc:creator>
      <pubDate>Sun, 10 Sep 2023 12:23:05 +0000</pubDate>
      <link>https://dev.to/henrikwarne/what-i-have-changed-my-mind-about-in-software-development-180o</link>
      <guid>https://dev.to/henrikwarne/what-i-have-changed-my-mind-about-in-software-development-180o</guid>
      <description>&lt;p&gt;I really like this quote from Jeff Bezos:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“Anybody who doesn’t change their mind a lot is dramatically underestimating the complexity of the world we live in.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Lately I have been thinking about what I have changed my mind about in software development. Here are the things I came up with:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://henrikwarne1.files.wordpress.com/2023/09/stenmur.jpg"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZEeMcrk8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://henrikwarne1.files.wordpress.com/2023/09/stenmur.jpg%3Fw%3D1024" alt="" width="800" height="322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Self-documenting code.&lt;/strong&gt; I used to think that the names of the classes, methods and variables should be enough to understand what the program does. No comments should be needed. Over the years I have realized that some comments are needed and useful. These days I add comments when there is something particularly tricky, either with the implementation, or in the domain. Every time I come back to code where I wrote a comment, I am happy that I took the time to do it. I have written more about this in &lt;a href="https://dev.to/henrikwarne/on-comments-in-code-4545"&gt;On Comments in Code&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unit testing private methods.&lt;/strong&gt; I wrote a blog post called &lt;a href="https://henrikwarne.com/2014/02/09/unit-testing-private-methods/"&gt;Unit Testing Private Methods&lt;/a&gt;, where I argued that you might as well make them package private, so you can easily write tests for them. However, several people commented and argued that you can test the private methods through the public interface. After a bit of thinking, I ended up agreeing with them, and changed my approach.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using an IDE.&lt;/strong&gt; Many years ago, I was using Emacs when writing code. I was quite happy with that, and didn’t particularly feel that anything was lacking. However, one day my colleague Johan showed me what IntelliJ IDEA could do. I was sold, and never looked back. The biggest difference is navigation – it is so much easier to move around in a code base with one. Nowadays, I can’t imagine not using an IDE. I have written more on this in &lt;a href="https://henrikwarne.com/2012/06/17/programmer-productivity-emacs-versus-intellij-idea/"&gt;Programmer Productivity: Emacs versus IntelliJ IDEA&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using a debugger.&lt;/strong&gt; I like &lt;a href="https://henrikwarne.com/2014/01/01/finding-bugs-debugger-versus-logging/"&gt;troubleshooting using log statements&lt;/a&gt; and &lt;em&gt;printf&lt;/em&gt;. It is simple and effective, and works in many situations. However, when I started writing Go code several years ago, my colleague Erik showed me how nice it is to use a debugger to explore the state of the program when a test case fails. I had used debuggers before, but he showed me a great use case for them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Working remotely.&lt;/strong&gt; Even during the pandemic, when I was working from home full time, I was &lt;a href="https://dev.to/henrikwarne/working-from-home-cons-and-pros-2boo"&gt;skeptical of working remotely&lt;/a&gt;. However, I have changed my mind, and I now think working from home is great. The downside is still that I miss the face-to-face interactions. But working remotely allows me to work for companies I previously could not work for. Not having to commute is another big plus. On balance, I think the advantages outweigh the disadvantages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using ChatGPT.&lt;/strong&gt; When ChatGPT came out, I was impressed with what it could do. However, I was a bit skeptical of exactly how it would work in software development. But my colleague Filip kept telling me of all the cases where he used ChatGPT to help with development. So I decided to put some more effort into seeing how I could use it. For me, the main use has been for minor stand-alone tasks. For example, to generate a first draft of a Python script, to write a &lt;em&gt;SQL INSERT/UPDATE&lt;/em&gt; trigger, or to get a &lt;em&gt;sed&lt;/em&gt; regular expression that removes the initial time stamp (when present) from log lines. In all these cases, it has been a great time saver.&lt;/p&gt;
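&lt;p&gt;As an illustration of the last task (shown with Python’s &lt;em&gt;re&lt;/em&gt; module rather than &lt;em&gt;sed&lt;/em&gt;, and with an assumed timestamp format, since the post does not give the exact one):&lt;/p&gt;

```python
import re

# Assumed log format: an optional "YYYY-MM-DD HH:MM:SS " prefix, e.g.
# "2023-09-10 12:23:05 Starting up". The pattern is illustrative, not
# the exact expression from the post.
TIMESTAMP = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} ")

def strip_timestamp(line):
    """Remove the leading timestamp when present, leave other lines as-is."""
    return TIMESTAMP.sub("", line)
```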

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Am I changing my mind about enough things? I don’t know. But it is definitely worthwhile to once in a while examine your beliefs about how to develop software. In many of the above cases, it took somebody else to show me, or convince me, of other ways of working. My conclusion is that collaboration and pair programming are important for spreading good ideas.&lt;/p&gt;

&lt;p&gt;What have you changed your mind about when it comes to software development? Let me know in the comments.&lt;/p&gt;

</description>
      <category>learning</category>
      <category>programming</category>
    </item>
    <item>
      <title>Well-maintained Software</title>
      <dc:creator>Henrik Warne</dc:creator>
      <pubDate>Sun, 23 Apr 2023 13:53:41 +0000</pubDate>
      <link>https://dev.to/henrikwarne/well-maintained-software-2lo4</link>
      <guid>https://dev.to/henrikwarne/well-maintained-software-2lo4</guid>
      <description>&lt;p&gt;Two months ago, I was a guest on the &lt;a href="https://www.maintainable.fm/episodes/henrik-warne-there-is-no-software-maintenance"&gt;Maintainable podcast&lt;/a&gt;. The first question the host &lt;a href="https://www.planetargon.com/about/robby-russell"&gt;Robby Russell&lt;/a&gt; asks is “What are a few characteristics of well-maintained software?”. This is such a great question, and I thought I would expand a bit on my answer from the show.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://henrikwarne1.files.wordpress.com/2023/04/maintainablehenrikwarne.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7Re7xQQU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://henrikwarne1.files.wordpress.com/2023/04/maintainablehenrikwarne.png%3Fw%3D1024" alt="" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That software is well-maintained only matters if you need to change it. If you never have to change it, it doesn’t matter how it is done, as long as it works. However, in pretty much all cases you &lt;em&gt;do&lt;/em&gt; need to change it. For me, well-maintained means that it is easy to change. And for the software to be easy to change, it must first be easy to understand. These are the characteristics that most help me understand how a program works:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Steps
&lt;/h3&gt;

&lt;p&gt;Being able to see the different steps done by the program is the key to understanding what it does. Methods with descriptive names help in explaining the flow. For example, processing a message may be done like this on the top level: &lt;em&gt;parseInput(), removeDuplicates(), processRequest(), sendResponse()&lt;/em&gt;. In &lt;em&gt;processRequest()&lt;/em&gt;, the same pattern is used recursively. The work that needs to be done is again separated into different methods, until we end up with reasonably-sized methods of only statements.&lt;/p&gt;
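&lt;p&gt;A toy sketch of this structure (the function bodies are placeholders; only the shape of the flow matters): the top level reads like a table of contents, and each step can be understood on its own.&lt;/p&gt;

```python
def handle_message(raw):
    """Top-level flow expressed as named steps."""
    requests = remove_duplicates(parse_input(raw))
    responses = [process_request(r) for r in requests]
    return send_response(responses)

def parse_input(raw):
    return raw.split(";")

def remove_duplicates(requests):
    return list(dict.fromkeys(requests))  # preserves order

def process_request(request):
    return request.upper()  # placeholder for the real work

def send_response(responses):
    return ";".join(responses)
```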

&lt;p&gt;I believe this is how a lot of software is initially written. However, as time goes by, code is added into the methods without adding new sub-methods as needed. The result is methods with hundreds of statements after each other, without structure. It still works, but it is now much harder to understand the overall flow. When you read this code for the first time, it is difficult to see the bigger picture, and thus harder to understand.&lt;/p&gt;

&lt;p&gt;There are many reasons this happens. Following the existing pattern (just adding into an existing method, instead of creating a new one), wanting to make the smallest possible change, time pressure, or not being familiar enough with the code. I’ve written more about this in &lt;a href="https://henrikwarne.com/2017/04/28/code-rot/"&gt;Code Rot&lt;/a&gt;. The solution is usually to add &lt;a href="https://henrikwarne.com/2013/08/31/7-ways-more-methods-can-improve-your-program/"&gt;more methods&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Tests
&lt;/h3&gt;

&lt;p&gt;When I worked at Nasdaq, I was taking over a middleware system when the last original developer quit. The system routed messages between the trading system components, and also handled failovers. It was written in Erlang, and was quite compact in size. It was enormously helpful that there were plenty of test cases for the system. To understand how it worked, I could both &lt;em&gt;read&lt;/em&gt; the code, and &lt;em&gt;run&lt;/em&gt; the code. There were unit tests, integration tests and end-to-end tests (running a complete system). The end-to-end tests were the most helpful in the beginning, because they allowed me to see how all the pieces worked together.&lt;/p&gt;

&lt;p&gt;Tests are useful for automatically checking that the code does what it is supposed to do. But a side-effect is that they make it possible to execute parts of the code without much effort. The work of setting up the state needed to run the code has already been done by the author of the test. So in addition to reading the code, I can also run the part of the code I am currently interested in. This greatly helps in understanding how it works.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Naming
&lt;/h3&gt;

&lt;p&gt;Descriptive names of variables, methods, classes, files, database tables and so on really help with understanding. Ideally, reading the name of a method should make it clear to you what it does. But even when the names are clear and descriptive, there can be problems. One common case is that there are several names for the same concept. Another is when the names in the program are not the names used by the non-programmers. Getting this right is harder than it sounds.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Logging
&lt;/h3&gt;

&lt;p&gt;Being able to run the program helps a lot to understand how it works. However, just getting the result back, without knowing how it arrived at it, is not enough. This is where logging comes in. It allows you to “see” the execution of the program. Good logging helps you understand the flow of the program. Too much logging doesn’t help, since that obscures rather than clarifies.&lt;/p&gt;

&lt;p&gt;If there are errors, logging can help you find what went wrong. Note that an error doesn’t have to mean that an exception was thrown. Arriving at the wrong result is also an error, even if the program executes without exceptions. I have written more about logging in &lt;a href="https://dev.to/henrikwarne/good-logging-16o5"&gt;Good Logging&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Comments
&lt;/h3&gt;

&lt;p&gt;I used to think that code should be self-explanatory, so that there would be no need for comments. But I have changed my mind. I think comments are often very helpful in understanding how a program works, and why it is written the way it is. It is most useful when explaining tricky or unusual cases. More about it in &lt;a href="https://dev.to/henrikwarne/on-comments-in-code-4545"&gt;On Comments in Code&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;One of the topics we covered in the podcast was &lt;a href="https://dev.to/henrikwarne/there-is-no-software-maintenance-m50"&gt;There is No Software Maintenance&lt;/a&gt;. My argument is that we should not think of software development as having the phases “development” and “maintenance” – it is all just software development. Is it a paradox then that I talk about well-maintained software? Not really, well-maintained software in my mind is the same as well-written software.&lt;/p&gt;

&lt;p&gt;So why is the combination of traits above important? Because they help make the existing program easier to understand. In my mind, this is the essence of well-written software! I am interested in hearing what other developers think characterizes well-maintained software. Let me know in the comments.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>software</category>
      <category>wellmaintained</category>
    </item>
    <item>
      <title>Algorithmic Trading: A Practitioner’s Guide</title>
      <dc:creator>Henrik Warne</dc:creator>
      <pubDate>Sun, 12 Feb 2023 17:39:47 +0000</pubDate>
      <link>https://dev.to/henrikwarne/algorithmic-trading-a-practitioners-guide-401</link>
      <guid>https://dev.to/henrikwarne/algorithmic-trading-a-practitioners-guide-401</guid>
      <description>&lt;p&gt;I really enjoyed reading &lt;a href="https://www.bacidore.com/algorithmic-trading-book" rel="noopener noreferrer"&gt;Algorithmic Trading: A Practitioner’s Guide&lt;/a&gt; by Jeffrey M. Bacidore. Before starting, I imagined it would cover various strategies for trading in the markets, along the lines of “buy on this condition, sell on this condition”. But that is not what this book covers. What trade to make is always a given, typically from a portfolio manager. Instead, the book is all about how to make it happen, almost always by portioning out the trade little by little, while trying to get the best price.&lt;/p&gt;

&lt;p&gt;It is fascinating how many factors come into play when implementing this seemingly simple task. The book covers all parts of this process in a clear and concise way, with lots of illuminating examples. The author has over 20 years of experience in the field of algorithmic trading, both from industry and academia. I particularly liked all the examples of implementation corner cases and gotchas that clearly come from experience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://henrikwarne1.files.wordpress.com/2023/01/algorithmic-trading.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcedrydrskx1bgcpli3dp.jpg" width="800" height="359"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Trading, Costs, Alpha
&lt;/h3&gt;

&lt;p&gt;The book starts by defining and explaining several concepts in trading. The most important concept is the &lt;a href="https://en.wikipedia.org/wiki/Order_book" rel="noopener noreferrer"&gt;&lt;strong&gt;order book&lt;/strong&gt;&lt;/a&gt;. It is a list of bids and asks/offers (buy and sell orders) ordered by price levels. The price is the limit set when placing the order. The order book also includes the aggregate size at each level. The gap between the highest buy order and the lowest sell order is the &lt;strong&gt;bid-ask spread&lt;/strong&gt;. A marketable order is one that can execute immediately, that is, it will cross the spread.&lt;/p&gt;

&lt;p&gt;The orders are sorted by price first, and within each price level by arrival time (first in first out). Orders have to be priced in specific increments, the minimum price variation, or &lt;strong&gt;tick size&lt;/strong&gt;. There is a good example in the book explaining why a tick size is needed. Say that the current bid in the market is $20. If you want to buy at that price, you will be placed last in the list of orders. But since the sorting order is first by price, then by arrival time, you could get first in line by putting in an order with a price only slightly better than $20 (say $20.00000001). The tick size limitation stops this behavior. If the tick size is $0.01, you would have to bid at least $20.01 in order to get priority by price.&lt;/p&gt;
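&lt;p&gt;As a rough sketch (my own, not from the book), price-time priority together with a tick-size check could look like this in Python:&lt;/p&gt;

```python
from decimal import Decimal

TICK = Decimal("0.01")  # minimum price variation

def validate_price(price):
    # Reject prices that are not a whole multiple of the tick size,
    # so nobody can jump the queue with a bid like $20.00000001.
    if price % TICK != 0:
        raise ValueError(f"price {price} violates tick size {TICK}")
    return price

# Bids as (price, arrival_order, size), sorted by price (highest first),
# then by arrival time (first in, first out).
bids = [
    (validate_price(Decimal("20.00")), 1, 500),
    (validate_price(Decimal("20.01")), 3, 200),  # arrived last, better price
    (validate_price(Decimal("20.00")), 2, 300),
]
bids.sort(key=lambda o: (-o[0], o[1]))
# The $20.01 bid is now first in line; the two $20.00 bids keep FIFO order.
```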

&lt;p&gt;Many markets use the &lt;strong&gt;maker-taker&lt;/strong&gt; fee structure. Traders that place orders that rest on the exchange earn a maker fee, and traders that “take” liquidity, that is execute orders against the existing resting orders, pay a taker fee. The taker fee is higher than the maker fee, and the exchange earns the difference between the two. This fee structure encourages traders to place resting orders on the exchange (providing liquidity), and this will in turn attract taker orders.&lt;/p&gt;

&lt;p&gt;The explicit &lt;strong&gt;cost&lt;/strong&gt; of trading is the fees paid. However, there are also several implicit costs. One such cost is the bid-ask spread. If we assume the current fair price is the midpoint of the spread, then the cost will be half of that spread. There can also be costs due to &lt;strong&gt;market impact&lt;/strong&gt;. If a large order is placed, there is a risk that the price moves unfavorably when the other market participants adjust their prices to take advantage of the demand. In many cases it is therefore better to hide the size of the order, for example by dividing it up into smaller parts over a period of time. There is also the problem of &lt;strong&gt;adverse selection&lt;/strong&gt;. This happens when one party first has to set a price. If you put an order out, an informed counterparty will only “select” to trade with you if the price is to their advantage. If not, they will not trade with you. Ways to avoid this are to trade with retail investors that typically trade for liquidity reasons (they have money to invest, or need invested money), and to get fast market data updates, so you don’t have prices based on old information.&lt;/p&gt;

&lt;p&gt;In investments, &lt;strong&gt;alpha&lt;/strong&gt; means return above some benchmark, typically the beta-adjusted market return of an asset. In this book, alpha means that over the trade horizon, the price moves in a specific direction. So for example, if you are buying, it could be that the price is expected to go up during your trade (positive alpha).&lt;/p&gt;

&lt;h3&gt;
  
  
  Fixed Schedule
&lt;/h3&gt;

&lt;h4&gt;
  
  
  TWAP
&lt;/h4&gt;

&lt;p&gt;A simple strategy is to spread out an order over a fixed time interval (for example one hour), and try to trade at a constant rate. This is the Time-Weighted Average Price ( &lt;strong&gt;TWAP&lt;/strong&gt; ) algorithm. Typically, there will be upper and lower limits deciding how far ahead of, or behind, the ideal path the execution is allowed to be. Furthermore, it costs more to send marketable orders that will cross, compared to putting out passive resting orders that may or may not fill. The algorithm designer needs to find a balance here.&lt;/p&gt;
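&lt;p&gt;A minimal sketch of the TWAP idea (the band logic and parameter values are my own assumptions, not the book’s):&lt;/p&gt;

```python
def twap_target(total_qty, start, end, now):
    """Ideal cumulative quantity at time `now` for a TWAP schedule."""
    elapsed = max(0.0, min(now, end) - start)
    return total_qty * elapsed / (end - start)

def next_action(filled, total_qty, start, end, now, band=0.05):
    # Let execution drift at most `band` (fraction of total size)
    # around the ideal path before reacting.
    drift = twap_target(total_qty, start, end, now) - filled
    if drift > band * total_qty:
        return "send marketable order"   # behind schedule: pay to catch up
    if -drift > band * total_qty:
        return "pause"                   # ahead of schedule
    return "rest passive order"          # within the band: work passively

# Halfway through a one-hour schedule for 1000 shares, the ideal is 500.
action = next_action(filled=400, total_qty=1000, start=0, end=3600, now=1800)
```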

&lt;h4&gt;
  
  
  VWAP
&lt;/h4&gt;

&lt;p&gt;A variation on the TWAP strategy is Volume-Weighted Average Price ( &lt;strong&gt;VWAP&lt;/strong&gt; ). Like in TWAP, the order is executed over a fixed time interval. But instead of using a constant rate over the interval, the volume traded is proportional to the historical volume of a typical day for the asset. US equities usually trade more in the first and last 30 minutes of the trading day, relative to the rest of the day. The idea here is to trade more when others typically trade more, and less when they typically trade less. The volume data is divided up into bins of, for example, 5 minutes, and within each bin the trading rate is constant.&lt;/p&gt;
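&lt;p&gt;The bin allocation can be sketched like this (the U-shaped profile below is made up for illustration):&lt;/p&gt;

```python
def vwap_schedule(total_qty, bin_volumes):
    """Split an order across time bins in proportion to a historical
    intraday volume profile (one entry per, say, 5-minute bin)."""
    total_volume = sum(bin_volumes)
    return [total_qty * v / total_volume for v in bin_volumes]

# Hypothetical U-shaped profile: heavy open and close, quiet midday.
profile = [30, 10, 5, 5, 10, 40]
schedule = vwap_schedule(6000, profile)
# The heaviest historical bins get the largest slices of the order.
```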

&lt;h4&gt;
  
  
  IS
&lt;/h4&gt;

&lt;p&gt;A third algorithm is the schedule-based Implementation Shortfall ( &lt;strong&gt;IS&lt;/strong&gt; ) algorithm, also known as &lt;strong&gt;Arrival Price&lt;/strong&gt;. The benchmark to compare against is a hypothetical trade for the full volume of the order done costlessly when the order starts (that is, at the “arrival price”). Buying above, or selling below, the arrival price represents an implementation cost. Three factors influence how well the algorithm will do. First there is the &lt;strong&gt;execution cost&lt;/strong&gt; : the more marketable orders that are used, the higher the cost, and the more passive resting orders, the lower the cost. This cost falls non-linearly with time. Second, if there is positive &lt;strong&gt;alpha&lt;/strong&gt; over the trade horizon, it means that the market moves in the direction of the trade. This means that as time passes, the price will be less and less favorable. On the other hand, if there is negative alpha, the execution prices will get better with time. Finally, there is &lt;strong&gt;risk aversion&lt;/strong&gt;. The longer it takes to complete the whole order, the greater the risk that the price moves unfavorably. Therefore, the less risk the trader is willing to take, the faster the trading should finish.&lt;/p&gt;

&lt;p&gt;If there is no alpha, and no risk aversion, the only factor to consider is trading cost. The longer the trading horizon, the lower the total cost of the trade. However, both positive alpha and a risk aversion penalty increase with time. With a model for how these three components develop over time, it is possible to determine the optimal trade horizon (that is, at what time will this cost function be at a minimum). This determines how long the schedule for the trade should be. It can be difficult to estimate alpha, and to put a numerical value on the risk aversion. In practice it is common to combine these two values into an &lt;strong&gt;urgency&lt;/strong&gt; parameter.&lt;/p&gt;
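&lt;p&gt;A stylized version of this trade-off (the functional forms and parameter values are my own assumptions): execution cost decays with time, while alpha drift and the risk penalty grow, so the total cost has a minimum at some horizon:&lt;/p&gt;

```python
import math

def total_cost(T, impact=10.0, alpha=0.5, risk_aversion=0.2, sigma=2.0):
    """Stylized IS cost (in bps) as a function of trade horizon T (minutes)."""
    execution = impact / math.sqrt(T)        # falls non-linearly with time
    drift = alpha * T                        # positive alpha: price runs away
    risk = risk_aversion * sigma * math.sqrt(T)  # penalty for staying exposed
    return execution + drift + risk

# The optimal horizon is where this cost function is at its minimum.
horizons = range(1, 121)
best_T = min(horizons, key=total_cost)
```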

&lt;h3&gt;
  
  
  Variable Schedule
&lt;/h3&gt;

&lt;h4&gt;
  
  
  POV
&lt;/h4&gt;

&lt;p&gt;In the Percent of Volume ( &lt;strong&gt;POV&lt;/strong&gt; ) algorithm, the aim is to participate at a certain rate, for example 10% of the realized volume traded. So when 900 shares have been traded, the POV algorithm submits an order for 100 shares. Those 100 shares will be 10% of the 900 + 100 shares. In the VWAP algorithm, the trading is proportional to the historic volume. In the POV algorithm, the idea is to participate at the given rate of the actual volume. This means that there is no fixed schedule. Instead, the end time depends on the traded volume. This strategy has intuitive appeal. It will trade more aggressively when risk increases, since risk and volume are positively correlated. It therefore reduces risk (reducing “inventory”) at times of increased risk.&lt;/p&gt;
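&lt;p&gt;The sizing rule follows directly from the definition: for our fills to be the target rate of total volume including ourselves, we must have traded rate/(1 - rate) times what everyone else has traded. A small sketch:&lt;/p&gt;

```python
def pov_child_size(participation_rate, others_volume, filled_so_far):
    """Size needed so our cumulative fills equal `participation_rate`
    of total volume *including* our own trades."""
    r = participation_rate
    target_total = r / (1.0 - r) * others_volume
    return max(0.0, target_total - filled_so_far)

# 900 shares traded by others, 10% rate, nothing filled yet: send 100,
# which is 10% of the resulting 1000 shares of total volume.
size = pov_child_size(0.10, 900, 0)
```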

&lt;p&gt;One problem with the POV algorithm is that it is often implemented to stay quite close to the participation rate, which means it uses more marketable orders and fewer passive orders, leading to higher costs. Another problem is that its reactive nature means that it often &lt;em&gt;follows&lt;/em&gt; volume rather than actually &lt;em&gt;participating&lt;/em&gt; in the realized volume. If there has been a large order in the market, the POV algorithm needs to send a larger order to maintain its rate. This can draw other participants in, further increasing the volume. The result can be trades at temporarily inflated prices.&lt;/p&gt;

&lt;p&gt;A way to combat this is to try to forecast the volume, and participate in it at the given rate. Then it will be easier to use passive limit orders to earn a spread. Another variation is to allow for a “must complete” option. Many portfolio managers prefer the orders to finish the same day. This can be accomplished by switching to a TWAP or VWAP schedule if the rate is not high enough for the order to complete by the end of trading.&lt;/p&gt;

&lt;h4&gt;
  
  
  “Hide and Take”
&lt;/h4&gt;

&lt;p&gt;Opportunistic algorithms aim to take advantage of specific conditions. The &lt;strong&gt;Hide and Take&lt;/strong&gt; algorithm will stay hidden, and only trade when favorable price or liquidity conditions occur. If the price moves favorably relative to some benchmark, it will send out marketable orders to exploit the opportunity. It does the same if liquidity improves, either through greater depth or a tighter spread.&lt;/p&gt;

&lt;h4&gt;
  
  
  Adaptive IS
&lt;/h4&gt;

&lt;p&gt;The &lt;strong&gt;Adaptive IS&lt;/strong&gt; algorithm conditions its trading on the direction and magnitude of price movements relative to the arrival price. For example, if the price moves in the trader’s favor (declining when buying, rising when selling), the algorithm will trade more to lock in the good price. If it moves in the opposite direction, the algorithm will trade less. Interestingly, some traders want an algorithm with the exact opposite behavior, that is trading more aggressively if the price moves unfavorably. The motivation in this case is to lock in a price before it gets any worse. There has been a lot of debate on whether these strategies are useful, or if they are only reacting to noise (and therefore only increasing cost without any benefit).&lt;/p&gt;

&lt;h3&gt;
  
  
  Multi-Order Algorithms
&lt;/h3&gt;

&lt;p&gt;In some cases there is a relation between two assets. For example, stock ABC may usually be valued at twice the price of stock XYZ. In &lt;strong&gt;pairs trading&lt;/strong&gt; , the aim is to exploit when this relation temporarily deviates from the historical or expected value. The algorithm is triggered when the deviation is large enough, for example 1%. It will then buy the stock that is &lt;em&gt;relatively&lt;/em&gt; undervalued, and sell the stock that is &lt;em&gt;relatively&lt;/em&gt; overvalued. When the relationship reverts back to its expected value, the trades are reversed.&lt;/p&gt;

&lt;p&gt;The algorithm is executed in steps, buying and selling in equal values up to the maximum position size. One of the assets is the &lt;em&gt;leader&lt;/em&gt;, the other the &lt;em&gt;follower&lt;/em&gt;. The leader is typically the asset that is most difficult or costly to trade. The algorithm will trade the leader passively using limit orders. It will wait to trade the follower until it has “legged into” the leader. Then the follower will be traded, often with a marketable order in order to minimize the time the legs are unbalanced (leg risk).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Portfolio algorithms&lt;/strong&gt; are different from pairs trading. They are multi-order extensions of single order IS algorithms. Like in the single order case, the idea is to balance the cost and the risk to find the optimal schedule for the individual trades. The trading cost for the portfolio is just the sum of the costs of trading the assets in the portfolio. However, if there is any correlation between different assets, the risk can be reduced by taking these correlations into account.&lt;/p&gt;

&lt;h3&gt;
  
  
  Child Order Pricing, Sizing and Routing
&lt;/h3&gt;

&lt;p&gt;When the overall algorithm has been decided (for example, using TWAP or POV), you still need to decide what limit price to set on each child order, how big the orders should be, and which venues it should be routed to (assuming there is more than one venue to choose from).&lt;/p&gt;

&lt;h4&gt;
  
  
  Pricing
&lt;/h4&gt;

&lt;p&gt;When considering how to set the price, it is useful to divide the price into two components: the &lt;strong&gt;fair value&lt;/strong&gt; , and the &lt;strong&gt;edge&lt;/strong&gt;. The fair value is the true economic value of the asset. For stocks, it could be calculated as the discounted value of all future cash flows. However, mostly the fair value is assumed to lie within the bid-offer spread. If it wasn’t, arbitrage trades would be possible. Often, the assumption is that it is at the midpoint of the spread. But other models are possible, for example a weighted average of the bid and offer sizes (or the logarithm of the sizes). However, these more complicated models have problems of their own – for example, size is usually more volatile than price – so simply using the midpoint is often a good choice.&lt;/p&gt;

&lt;p&gt;The edge is the discount received on a buy order, or the premium earned on a sell order, relative to the fair price. A positive edge is a gain, and a negative edge a loss for a given trade. The fair value is not affected by the order, so setting the limit price of the child order simply becomes deciding what the edge should be. The higher the edge, the lower the chances of the order being filled.&lt;/p&gt;

&lt;p&gt;The first decision to make is whether to send a marketable order (with a negative edge), or to send a passive order that will rest on the book and maybe get filled. This can be affected by whether the algorithm is ahead of or behind schedule, or whether it has a “must complete” instruction. To find how to set the edge, you calculate the expected gain for each value of the edge, and pick the edge with the highest expected gain. To calculate the expected gain, you need to be able to estimate the probability of a fill for a given edge, and you have to subtract the cost of a non-fill. The fill probability can be estimated from historical data.&lt;/p&gt;
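&lt;p&gt;As a sketch (the fill-probability curve below is invented for illustration), the single-period edge choice is just a maximization of expected gain:&lt;/p&gt;

```python
import math

def best_edge(edges, fill_prob, non_fill_cost):
    """Pick the edge maximizing expected gain:
    p_fill(edge) * edge - (1 - p_fill(edge)) * non_fill_cost."""
    def expected_gain(edge):
        p = fill_prob(edge)
        return p * edge - (1.0 - p) * non_fill_cost
    return max(edges, key=expected_gain)

# Hypothetical curve fitted from history: bigger edges fill less often.
def fill_prob(edge):
    return math.exp(-0.8 * edge)

edge = best_edge([0.5 * k for k in range(11)], fill_prob, non_fill_cost=1.0)
```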

&lt;p&gt;The closer you are to the end of the schedule, the more urgent it will be to get a fill. By dividing the remaining time into, for example, one-minute intervals, you can work backwards from the last interval (where you must get a fill if the order is unfilled at that point) in a dynamic programming-like way to find how to set the edge optimally in earlier periods.&lt;/p&gt;
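&lt;p&gt;A toy version of that backward induction (my own construction, again with an invented fill-probability curve): the value of waiting in the final interval is the forced crossing cost, and each earlier interval picks the edge that maximizes fill gain plus the value of waiting:&lt;/p&gt;

```python
import math

def fill_prob(edge):
    # Hypothetical: deeper (more passive) edges fill less often.
    return math.exp(-0.8 * edge)

def edge_schedule(periods, edges, cross_cost):
    """Backward induction over intervals. In the final interval the
    order must cross (value = -cross_cost); each earlier interval
    maximizes p * edge + (1 - p) * value_of_waiting."""
    value = -cross_cost
    plan = []
    for _ in range(periods):
        def gain(e, v=value):
            p = fill_prob(e)
            return p * e + (1 - p) * v
        best = max(edges, key=gain)
        value = gain(best)
        plan.append(best)
    plan.reverse()   # plan[0] is the edge for the first interval
    return plan

plan = edge_schedule(3, [0.5 * k for k in range(11)], cross_cost=1.0)
```

&lt;p&gt;In this toy example the optimal edge shrinks as the deadline approaches – early intervals can afford to rest deeper in the book.&lt;/p&gt;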

&lt;p&gt;One strategy to update the edge as the market fair price and spread change is to use &lt;strong&gt;pegging&lt;/strong&gt;. This means that the algorithm will set the price in relation to the current best bid or offer, either exactly, or with an offset from these values. As the market values change, the algorithm updates its order values in relation to those changes. There are however pitfalls with this strategy. Suppose the pegged limit order price is set to $20 to match the $20 current best bid. If all the other traders reduce their buy prices, the best bid would stay at $20 because of the pegged order. There is also a risk that short-lived (fleeting) orders will make the pegged order update its price. This can be countered by requiring that new prices must be present for at least X seconds before updating the price. But then the pegged order would be further back in the queue, reducing its fill probability.&lt;/p&gt;

&lt;p&gt;There is also a special order type called &lt;strong&gt;post-only&lt;/strong&gt;. It is designed to only supply liquidity, never take liquidity. If the market moves between the decision to send out an order, and the order reaching the exchange, the order will not cross. Instead, it will be hidden, or cancelled. This makes it easier for algorithm designers to get the behavior they intend (that is, resting orders will not accidentally be converted into crossing orders).&lt;/p&gt;

&lt;h4&gt;
  
  
  Sizing
&lt;/h4&gt;

&lt;p&gt;Schedule based algorithms can adjust the price depending on whether they are ahead of or behind schedule, setting a more aggressive price if they need to catch up. The same idea can be used regarding size – setting a larger size if they need to catch up, and a smaller size if they are ahead. There is also a technique to place multiple orders in the order book, where some are resting at more passive levels to be able to take advantage if an overly aggressive liquidity demander (prepared to pay a large premium) enters the market. This is called &lt;strong&gt;layering the book&lt;/strong&gt;. A disadvantage of this is that some information on the size of the demand is leaked.&lt;/p&gt;

&lt;p&gt;One way of hiding the size is to use a &lt;strong&gt;reserve order&lt;/strong&gt;. In this type of order, only a fraction of the size is displayed in the market, and as soon as it is filled, the order is refreshed with more quantity from the hidden part. This is also called an &lt;strong&gt;iceberg order&lt;/strong&gt; , since only the tip is visible. However, other market participants can infer the existence of a reserve order if they notice that the size keeps getting refreshed.&lt;/p&gt;

&lt;h4&gt;
  
  
  Routing
&lt;/h4&gt;

&lt;p&gt;A &lt;strong&gt;Smart Order Router (SOR)&lt;/strong&gt; is the component that decides where to send the order the algorithm has decided on. For a marketable order, the goal is to find the current best price. This can involve splitting the order up and sending the parts to different venues. Because the order books can change very rapidly, there is always a risk that the book has been updated when the order arrives. Furthermore, sending a regular limit order could mean that instead of crossing, the order would rest if the price has moved away. To handle this case, an &lt;strong&gt;Immediate or Cancel (IOC)&lt;/strong&gt; limit order is used. If it can’t execute right away, the order is cancelled back to the sender.&lt;/p&gt;

&lt;p&gt;For a non-marketable order, which can’t be immediately filled due to the limit price set on it, the goal for the SOR is to maximize the probability of a fill. Ideally the fill happens as fast as possible, so the market doesn’t move away while the order rests. The fill probability depends both on the venue’s queue length, and on its trading rate. Even if the queue is longer, it can still be the better choice, if the orders tend to get filled there faster. The fill probability can also depend on the size of the order. The SOR needs a model so it can estimate the fill probability in each case. The model can use past data on asset-, order-, and market-level statistics to get an estimate, for example by using a logistic regression.&lt;/p&gt;

&lt;h3&gt;
  
  
  Measuring Performance
&lt;/h3&gt;

&lt;p&gt;It’s tricky to measure how good an algorithm is, since there is no way of knowing what price it &lt;em&gt;could&lt;/em&gt; have received if it had followed some other strategy. The best we can do is to compare the price a trade executed at to a &lt;em&gt;benchmark&lt;/em&gt;. The &lt;strong&gt;realized performance&lt;/strong&gt; will then be the difference in the actual price relative to the benchmark. If a trader bought an asset at $20 when the benchmark was $19, the realized performance was -$1. In other words, the cost was $1.&lt;/p&gt;

&lt;p&gt;It is important to also consider &lt;strong&gt;unrealized performance&lt;/strong&gt;. The unrealized price is defined as the prevailing market price at the end of execution. The portion of the original order that was not executed is compared to this unrealized price, while also taking into account what the trading costs would have been (for example by using a trading cost model). Combining the realized and unrealized performance gives the total performance of the order. If we don’t consider unrealized performance, it can appear better not to trade at all, since then you don’t incur any trading costs.&lt;/p&gt;
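&lt;p&gt;Combining the two pieces might look like this (the sign conventions and parameter names are my own):&lt;/p&gt;

```python
def total_performance(side, filled_qty, avg_fill_price, order_qty,
                      benchmark, end_price, est_cost_per_share):
    """Total performance per order, in currency terms.
    side is +1 for a buy, -1 for a sell (a sign convention assumed here).
    Realized: filled quantity against the benchmark price.
    Unrealized: the unfilled remainder marked at the end-of-execution
    price, charged an estimated trading cost it would have incurred."""
    realized = side * (benchmark - avg_fill_price) * filled_qty
    unfilled = order_qty - filled_qty
    unrealized = (side * (benchmark - end_price) - est_cost_per_share) * unfilled
    return realized + unrealized

# Bought 100 of 100 at $20 against a $19 benchmark: performance is -$100.
perf = total_performance(1, 100, 20.0, 100, 19.0, 21.0, 0.05)
```

&lt;p&gt;With no fills at all, the same order would score even worse here if the price ran up to $21 – which is exactly why the unrealized part matters.&lt;/p&gt;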

&lt;p&gt;The most commonly used benchmark is the &lt;strong&gt;arrival price&lt;/strong&gt; , that is the price of the asset at the start of trading. The trading cost is the difference between the realized price and what would have been paid if the trade had been (costlessly) executed at the arrival price. There are other benchmarks, for example the volume-weighted average price (VWAP) over the life of the order. However, the advantage of using the arrival price is that it is not affected by the trading itself (since the trading may move the price), and it can’t be influenced by the traders themselves. The disadvantage is that there will be a component of randomness to it, since the market price can vary during the trade horizon, independently of the impact from the trades. Even though these movements may average to zero, the impact can be large for the performance measurement of a single order. Therefore, a large number of samples is needed to reliably judge the performance of a given algorithm.&lt;/p&gt;

&lt;p&gt;Performance is typically measured in percent of the price, to make the comparisons valid even if the sizes of the trades vary. And because trading costs tend to be small in terms of percentage, &lt;strong&gt;basis points&lt;/strong&gt; ( &lt;strong&gt;bps&lt;/strong&gt; , pronounced “bips”) are used. One basis point is 0.01%, so for example 50 bps equals 0.5%.&lt;/p&gt;

&lt;h4&gt;
  
  
  Absolute performance
&lt;/h4&gt;

&lt;p&gt;Sometimes you are interested in the absolute performance, for example to estimate how much it will cost to execute a trade idea (to make sure it is likely to make money). One way of estimating the cost is to compare to previous trades in a similar situation regarding asset, size, spread, volatility, time of day etc. However, there are so many parameters that can influence whether the situation is similar or not that this approach breaks down. Instead, you can use a &lt;strong&gt;trading cost model&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A simple but useful model is: &lt;em&gt;ExpectedTradingCost&lt;/em&gt; = &lt;em&gt;HalfSpread&lt;/em&gt; + &lt;em&gt;σ&lt;/em&gt; * &lt;em&gt;γ&lt;/em&gt; * sqrt(&lt;em&gt;Ordersize&lt;/em&gt;/&lt;em&gt;Volume&lt;/em&gt;)&lt;/p&gt;

&lt;p&gt;σ is the volatility and γ is a model parameter estimated empirically. For small order sizes, the cost will be roughly the half-spread. The trading cost increases with increasing order size (relative to the traded volume), but at a decreasing rate.&lt;/p&gt;
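&lt;p&gt;The model translates directly into code (the parameter values below are made up):&lt;/p&gt;

```python
import math

def expected_trading_cost_bps(half_spread_bps, sigma_bps, gamma,
                              order_size, daily_volume):
    """ExpectedTradingCost = HalfSpread + sigma * gamma * sqrt(OrderSize/Volume),
    with everything expressed in basis points."""
    return half_spread_bps + sigma_bps * gamma * math.sqrt(order_size / daily_volume)

# A tiny order costs roughly the half-spread; cost then grows with the
# square root of participation, i.e. at a decreasing rate.
small = expected_trading_cost_bps(2.0, 150.0, 0.3, 1_000, 10_000_000)
large = expected_trading_cost_bps(2.0, 150.0, 0.3, 1_000_000, 10_000_000)
```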

&lt;h4&gt;
  
  
  Relative performance
&lt;/h4&gt;

&lt;p&gt;To decide which trading strategy has better performance, a “horse race” can be used, where both strategies are used and you compare the results. However, the characteristics of the orders they are used for are most likely not identical. Therefore, you can use a cost model to try to account for differences in market conditions.&lt;/p&gt;

&lt;p&gt;When analyzing performance, it is important to look at the distribution of the values, not just the averages. For example, sample A has three trades of size $50,000 each. Sample B has three trades of size $1, $50,000 and $99,999. Both have the same average size, but the orders in sample B will have higher average cost. This is because the average cost increases with the order size.&lt;/p&gt;

&lt;p&gt;It is also important to watch out for &lt;em&gt;outliers&lt;/em&gt; and &lt;em&gt;influential&lt;/em&gt; orders. Samples often contain some orders that are hundreds or thousands of times larger than the other orders in the sample. These outlier orders have often been longer in the market (because of their size), so the variance is greater, which can lead to large performance numbers. To temper the effect of these orders, trimming or &lt;a href="https://en.wikipedia.org/wiki/Winsorizing" rel="noopener noreferrer"&gt;winsorizing&lt;/a&gt; can be used.&lt;/p&gt;
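&lt;p&gt;A minimal winsorizing sketch (a percentile-clamping illustration, not a reference implementation):&lt;/p&gt;

```python
def winsorize(values, lower_pct=0.05, upper_pct=0.95):
    """Clamp values outside the given percentiles to the percentile
    values, instead of dropping them (which would be trimming)."""
    s = sorted(values)
    n = len(s)
    lo = s[int(lower_pct * (n - 1))]
    hi = s[int(upper_pct * (n - 1))]
    return [min(max(v, lo), hi) for v in values]

# The $400 outlier is pulled back to the 75th-percentile value, 8.
costs = winsorize([5, 7, 6, 8, 400], 0.0, 0.75)
```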

&lt;p&gt;Influential orders may not cause extreme performance results, but should still be looked at. For example, if there are 1,000 orders for $2,000, and one order for $2 million, the large order may skew the result. One way to handle this is to run the analysis both with and without the influential order, and see if the result is robust, or if it changes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;I really like that the book is written in such a clear style. At only 222 pages, the information density is high. It also contains helpful tables and diagrams where appropriate. I also like that there are lots of examples of pitfalls, and how to avoid them. My only complaint is that I would have liked to have the chapter number and title written somewhere on the page – as it is now, it takes a bit of turning pages to find which chapter a given page is part of.&lt;/p&gt;

&lt;p&gt;I find the subject of algorithmic trading quite interesting. It reminds me a bit of &lt;a href="https://en.wikipedia.org/wiki/Core_War" rel="noopener noreferrer"&gt;Core War&lt;/a&gt; – algorithms battling each other, but here trying to make trades as profitable as possible. There is an arms race in trying to outwit other algorithms, and no wonder, since so much money is at stake. This book is most useful if you work in the trading space. But even if you don’t, it is still worth reading, since the problems described are interesting, and because markets are of such importance today.&lt;/p&gt;

</description>
      <category>learning</category>
      <category>algorithmictrading</category>
      <category>book</category>
      <category>bookreview</category>
    </item>
    <item>
      <title>There Is No Software Maintenance</title>
      <dc:creator>Henrik Warne</dc:creator>
      <pubDate>Sat, 07 Jan 2023 16:40:18 +0000</pubDate>
      <link>https://dev.to/henrikwarne/there-is-no-software-maintenance-m50</link>
      <guid>https://dev.to/henrikwarne/there-is-no-software-maintenance-m50</guid>
      <description>&lt;p&gt;Every time I hear about software maintenance as a distinct activity, I cringe. That’s because it is based on the outdated notion that first software is developed, then it is maintained. But that is not how software development works today. Software development does not have the two phases &lt;em&gt;development&lt;/em&gt; and &lt;em&gt;maintenance&lt;/em&gt; – it is a continuous process. Software maintenance is simply software development.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://henrikwarne1.files.wordpress.com/2023/01/img_20230106_124326.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdwavh3x4se2we17x2igf.jpg" width="800" height="359"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is fairly common to come across the concept of software maintenance. Recently I have seen it in posts on LinkedIn (how developers leave if they have to do maintenance), in books (“it is well known that the majority of the cost of software is not in its initial development, but in its ongoing maintenance”), and in surveys (do you develop new features, or do you maintain existing features). But this is based on the false premise of the software project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Project vs. Product
&lt;/h2&gt;

&lt;p&gt;In the &lt;strong&gt;project model&lt;/strong&gt; , you set out to develop a system. So you create a project, gather requirements, develop the software, and deliver the result. Any changes after this delivery are considered maintenance, be it changes to functionality or bug fixes. This is how I was taught software development works when I went to university a long time ago.&lt;/p&gt;

&lt;p&gt;There are two big problems with the project view of software development. The first is that it is almost impossible to decide how the system should work before you try it. As soon as you start using the system, you learn more about how it should work. This inevitably leads to changed requirements. Secondly, once the system works, you start to think of additional uses for it. In other words, the problem you are solving is open-ended (expanding uses), rather than clearly defined. In a sense, you are never finished, because what you want the system to do keeps expanding. This may seem counterintuitive, but for all systems I have worked on, I have been surprised at how we never ran out of features to add. The expansion is also fractal – you add new big features, but you also keep tweaking and expanding the behavior of existing features.&lt;/p&gt;

&lt;p&gt;So, the project model (build &lt;em&gt;the system&lt;/em&gt; once and for all, the rest is maintenance) does not match how software systems evolve.&lt;/p&gt;

&lt;p&gt;A better model for software development is the &lt;strong&gt;product model&lt;/strong&gt;. Here you consider the software system to be a product that is continually developed. There is a permanent team of developers working on the system, and you continuously add features to it. In the product model it doesn’t make sense to distinguish between development and maintenance, because you are constantly changing and developing the system. This includes fixing bugs. Over my career in software development, I have seen a shift from project to product. This makes sense, since the product model aligns much better with how systems are used.&lt;/p&gt;

&lt;p&gt;There are other advantages with the product model too. The developers working on the product stay with the same product. They see how it is used, and understand how it has evolved. In the project model it is more common to have people develop the initial system, then leave for the next project. They don’t have to live with the decisions they made, and they don’t get the benefit of learning how the customers are using the system.&lt;/p&gt;

&lt;p&gt;Happily, many (or most) companies have realized that the product model is better than the project model for software development. This means that it doesn’t make sense to talk about software maintenance. Changing and improving the systems &lt;em&gt;is&lt;/em&gt; software development.&lt;/p&gt;

&lt;h2&gt;
  
  
  What About “Pure” Maintenance?
&lt;/h2&gt;

&lt;p&gt;Maintenance in the traditional sense includes lubricating moving parts, changing filters, or mending broken pieces (like sewing on a loose button). In software, fixing bugs is the equivalent of repairing broken parts. What about preventing wear and tear? Well, in this sense software is the opposite of physical objects. The more you use it (if by that we also include bug fixing), the better it gets. I like this quote:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“Hardware eventually fails. Software eventually works.” – Michael Hartung&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;However, if you consider that the environment the program works in can change (library or OS upgrades for example), then you could compare this to handling wear and tear.&lt;/p&gt;

&lt;p&gt;There are systems that are maintained only in this sense: fixing bugs, and making sure they can keep running. But I would argue that this is a very small part of all software development work being done. Furthermore, when it comes to fixing bugs, there can be ambiguity. Is this really a bug, or is it in fact a request for new functionality? And why fix the bug at all, if it has worked up until now, and the only objective is to keep the system running? So, in some sense, this form of maintenance is also just ordinary software development.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In any form of software development, you always have to read and understand existing code, even when you mostly add new features. As the system grows, this becomes even more common. Newly written code becomes “legacy” very quickly. Furthermore, you always have to fix bugs. So, let’s stop talking about software maintenance as if it were a separate activity. It is not. It is just software development.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>softwaredevelopment</category>
      <category>softwaremaintenance</category>
    </item>
  </channel>
</rss>
