<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kristian R.</title>
    <description>The latest articles on DEV Community by Kristian R. (@kr428).</description>
    <link>https://dev.to/kr428</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F27731%2F55c768bf-ee3a-4ab2-a7c8-a413de1dcdce.jpeg</url>
      <title>DEV Community: Kristian R.</title>
      <link>https://dev.to/kr428</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kr428"/>
    <language>en</language>
    <item>
      <title>Years of Linux and FLOSS desktop?</title>
      <dc:creator>Kristian R.</dc:creator>
      <pubDate>Fri, 21 Sep 2018 05:58:12 +0000</pubDate>
      <link>https://dev.to/kr428/years-of-linux-and-floss-desktop-ejn</link>
      <guid>https://dev.to/kr428/years-of-linux-and-floss-desktop-ejn</guid>
      <description>&lt;p&gt;Recently, and once again, I have stumbled across someone (more or less) claiming the Year Of The Linux Desktop (wonder whether the acronym YOTLD already made it through) might be in 2019. And I wonder…&lt;/p&gt;

&lt;p&gt;Well, throughout the last two decades, The Year Of The Linux Desktop apparently became either the ultimate vaporware or the ultimate running gag among techies, always good for longer threads in places such as reddit or Twitter. Needless to say: So far, it didn’t happen. GNU/Linux and “open source” software continues to thrive in certain server environments and developer infrastructure but still seems to have its difficulties on the desktop, with developers ending up building FLOSS web-based applications on top of shiny MacOS laptops. And, in some ways, I think not much is likely to change about this anytime soon. Having been into desktop Linux as a user ever since the late 1990s and the early pre-releases of GNOME and KDE, I still see no YOTLD happening. Looking at my personal and professional environment, I see a couple of reasons for that…&lt;/p&gt;

&lt;h1&gt;
  
  
  Target Group Limits
&lt;/h1&gt;

&lt;p&gt;When I got into Software Libre back in the late 1990s by reading the GNU Manifesto, I was enticed by the idea of having professional-grade software developed in such a “collaborative”, open way, with business ideas placed all around to live off this collaborative effort without forcing people through restrictive EULAs. In 2018, however, things look a bit different: Though there indeed are projects living just like that, it seems the majority of FLOSS projects, desktop projects included, is driven by volunteer enthusiasts, either in their spare time or building tools to satisfy personal needs and requirements. A load of these people have some background in technology: they are programmers, systems engineers, CS students and the like. Chances are these tools work for them, and for people with a similar background.&lt;/p&gt;

&lt;p&gt;This is perfectly fine. However, not everyone out there is capable of writing computer programs, and people who, for example, are professionally into video or audio engineering might be neither skilled nor enthusiastic enough to even remotely consider writing their tools themselves. If they don’t manage to find a couple of developers ready and able to understand and accept their requirements, they will be left with some proprietary software as their only choice. The same goes for virtually any other field of specialization where dedicated software is required – for virtually everything beyond “standard” office, e-mail or web applications. We lack diversity here, and we are missing focus on non-technical professional end users. Personally, I have always felt both a bit amused and concerned seeing applications that provide a plug-in or scripting mechanism and a documented procedure for programmatic extension even before a finished user interface and (end-user) documentation. This works perfectly for a tech-savvy target group – and possibly for no one else…&lt;/p&gt;

&lt;h1&gt;
  
  
  Diversity in all the wrong places
&lt;/h1&gt;

&lt;p&gt;I wrote before that we lack diversity in the GNU/Linux or FLOSS desktop, which is a bit harsh. We do have diversity, but we have it in ways and places that are difficult to handle: There’s a wagonload of different GNU/Linux distributions and desktop variants of operating systems such as FreeBSD. There’s GNOME, KDE, XFCE and a load of other desktop environments or window managers for these desktops. There are bunches of different approaches to package management and software distribution, there are distributions with different variants of libraries, kernels, GUI environments (see XOrg vs. Wayland vs. Mir in the past), and much more. We’re left with a complex environment, and though this is good from a freedom-of-choice point of view, it’s difficult in other ways: For example, it makes moving applications to GNU/Linux more difficult for “proprietary” vendors.&lt;/p&gt;

&lt;p&gt;Like it or not: In my business environment, I have a lot of specialized business software for things such as technical calculations or accounting that could easily run on a GNU/Linux platform too. But those in charge of maintaining it don’t even consider doing so, because there are so many technical decisions to make that would eventually limit the application to one very specific GNU/Linux variant (distribution, desktop), while on MS Windows and .NET it’s way easier to have well-integrated development tools, commercial-grade support, API documentation and a runtime that allows your application to run on most supported versions of the system. One could argue whether running proprietary software on top of GNU/Linux is generally desirable, but the world isn’t black or white, and there’s quite a potential of small and mid-sized company desktops that could very well run GNU/Linux and FLOSS for most of their tasks if just the couple of business-critical applications they use were able to easily run on that platform too. It wouldn’t make a perfect, all-FLOSS world, but it would help increase FLOSS desktop adoption for sure.&lt;/p&gt;

&lt;h1&gt;
  
  
  Desktop? What desktop?
&lt;/h1&gt;

&lt;p&gt;All this, so far, still works out one way or the other. But the most critical question I see these days: Who is actually still using desktop computers? Who would be the target group for a real FLOSS or GNU/Linux desktop environment? Right now, more than ever, I see people using computing devices to communicate and “work” online. Yet far more of the non-technical people in my environment now use tablets, smartphones and the like than ever used laptop or even desktop computers.&lt;/p&gt;

&lt;p&gt;And it gets worse: Many of those who still do use those devices don’t actually seem to need much more software than Chrome or another up-to-date web browser, because everything from e-mail to calendaring to chat and communication in general, and even office work (looking at you, Google Docs), happens online in the browser. From that point of view, being a bit rude, it’s safe to say desktop GNU/Linux is easier than ever before in the late 2010s, also because, essentially, a load of use cases need little more than a rudimentary GUI and a decent web browser. Unfortunately, most browsers these days are built on top of Chrome’s codebase, which bears the risk of establishing a technological monoculture similar to the early-2000s predominance of Microsoft Internet Explorer. We should have learnt from that. And likewise, even while using Firefox, Chromium, Iridium, Brave or some other up-to-date browser that’s not directly Chrome, people end up interacting with web services that are difficult from a FLOSS as well as from a privacy point of view. In such a setup, having a Software Libre desktop environment is both nice and mostly pointless…&lt;/p&gt;

&lt;h1&gt;
  
  
  And now…?
&lt;/h1&gt;

&lt;p&gt;Nice doing some complaining, but how to deal with this mess? Generally, looking at my environment and the experiences of the last two decades, I see several options. The first one would be to actually focus on the idea of a FLOSS desktop and do whatever it takes to make it happen, no matter whether in a YOTLD or not. This, possibly, would require figuring out who the target group of such an approach is, what is needed to make those users happy, and how to achieve this in the best way possible. Another option might be to give up on the desktop, at least for now, and rather focus on the server and “cloud” side of things, to make sure we have decent FLOSS browsers and acceptable web services accessible to users who, for example, are able to use the Google services today. And yet another option might be to leave the desktop where it is now – a good, working environment for a relatively small group of users – and focus on FLOSS on smartphones and tablets (where things look considerably bleaker these days, just thinking about Software Libre on Android or iPhone). Maybe we should take a look at supporting LineageOS or /e/ to build libre software for common smartphones, and apps that are on par with (or even better than) the stock offerings pre-installed on most Android or iOS devices.&lt;/p&gt;

&lt;p&gt;But maybe, in some ways, it would “just” suffice to actually do a Year Of The Linux Desktop: as a community and, maybe similar to Ubuntu’s One Hundred Papercuts project earlier, make a considerable effort towards building an easily usable, easily available FLOSS desktop system for as many different types of users as somehow possible. I don’t think this is something that can’t be done, I just have absolutely no idea how to get it started…&lt;/p&gt;

&lt;p&gt;(Originally posted at &lt;a href="https://dm.zimmer428.net/2018/09/years-of-linux-and-floss-desktop/"&gt;https://dm.zimmer428.net/2018/09/years-of-linux-and-floss-desktop/&lt;/a&gt; ; gotta take on writing again as I also wanted to continue my musings on microservices but this sort of got into my way...)&lt;/p&gt;

</description>
      <category>linux</category>
      <category>floss</category>
      <category>desktop</category>
    </item>
    <item>
      <title>Small-scale microservices in the wild (1): Anachronistic monoliths</title>
      <dc:creator>Kristian R.</dc:creator>
      <pubDate>Mon, 18 Sep 2017 18:56:45 +0000</pubDate>
      <link>https://dev.to/kr428/small-scale-microservices-in-the-wild-1-anachronistic-monoliths</link>
      <guid>https://dev.to/kr428/small-scale-microservices-in-the-wild-1-anachronistic-monoliths</guid>
      <description>&lt;p&gt;Microservices are all over these days. So are frameworks to build different “microservices” from scratch, and so are infrastructure and runtime components to help getting real-life application environments built out of microservices. Using small components to build larger, more complex applications seems so incredibly much the de-facto standard of how to work these days that we tend to apply this pattern to each and every problem at hand. Depending upon your problem, this might or might not be a Good Thing(TM). We’ve been down the route of cutting more complex applications into smaller components for the last couple of years, and there are a couple of insights to be gained from this. I’d like to spend some time pondering these things and looking at where we are now.&lt;/p&gt;

&lt;p&gt;In order not to bore the few people who will read through this, and in order not to attempt too much in one batch and never get it done, I’ll try making this a short series of write-ups. I hope it will somehow work out.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solving problems
&lt;/h2&gt;

&lt;p&gt;In general, technical solutions should, well, be actual solutions to certain real-world problems, whether in a business domain or in a technical domain that supports real business needs. Microservices, most of the time, seem to fall into the latter category – a technical solution to certain technical problems one might encounter when building non-trivial applications. Not too much of a surprise, those solutions address some problems better than others, and there might be requirements or demands where a certain solution makes things considerably worse. Microservices are no exception here, as can be seen in two scenarios for optimizing an existing IT environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Speed up development
&lt;/h2&gt;

&lt;p&gt;In most cases, one main pain point is the notoriously limited amount of developer resources available, assuming most of us aren’t Google or Amazon and most of our teams are considerably smaller. Simple and rude: You might want a monolith. Period. This sounds pretty much anachronistic. However, splitting up your system into microservices these days usually means composing an application from several smaller services communicating by means of HTTP and JSON. This has interesting aspects from a development point of view:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You will spend a good deal of brain (as well as CPU) cycles on things such as (de)serialization and transfer of data. You are supposed to translate your business objects and data structures into some language-agnostic representation that can safely be sent across some sort of wire. In a monolithic application where all, or at least the vast majority, of communication happens locally, this is of little to no relevance. You live inside a shared common data model and a common application context, so there’s no “outer world” involved.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You will programmatically have to add reliability and resilience to your application by dealing with the fact that required systems could randomly respond too slowly or fail completely. Again, working in a single application context, this is of no relevance at all if the application tends to just work or fail as one.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You also need to consider scenarios for retransmitting certain batches of data as well as getting things such as distributed transaction handling “right” in at least a rudimentary way – whatever means “right” for your application. Handling transactions isn’t always easy across one single application (if it involves data storage and business processes that might have been started already); spanning a single transaction across multiple applications doesn’t make it much easier.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;As soon as any interface you exposed is used by any client, it becomes virtually “immutable”: assuming loosely coupled systems, it will be hard or impossible to track down all users of that interface, so removing “old” methods is always riskier than, for example, removing a deprecated API in a large Java project, which will fail to build or even deploy as soon as you remove code still in use by some specialized part of your system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You will have more boilerplate to deal with: building various distributable modules, the infrastructure they require, configurability for their various dependencies (such as other services), and so on. In a monolith, done right, you do these things once for the whole application.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
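&lt;p&gt;The resilience point above can be sketched in a few lines. A minimal retry-with-backoff helper in Python (names and defaults are illustrative, not taken from any concrete project):&lt;/p&gt;

```python
import time

def call_with_retries(request, attempts=3, base_delay=0.1):
    """Call a remote dependency, retrying transient failures with backoff."""
    last_error = None
    for attempt in range(attempts):
        try:
            return request()
        except ConnectionError as exc:
            # In a monolith a local call either works or the whole process
            # fails; over the wire we must handle partial failure ourselves.
            last_error = exc
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    raise last_error
```

&lt;p&gt;In a monolith, none of this code would need to exist – which is exactly the kind of overhead the list above describes.&lt;/p&gt;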

&lt;p&gt;There are surely more points than these, but I see them as the most critical ones when following such an approach, at least in our environment. It might be different in yours, though.&lt;/p&gt;

&lt;h2&gt;
  
  
  Speed up Ops, too?
&lt;/h2&gt;

&lt;p&gt;Development is all interesting and good but usually doesn’t earn us any money; in most cases, we need a running application for that. And, much like development, operating and running applications gets more difficult when working with a distributed system, compared to a single monolith or a single application in some application server:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Suddenly you will have to deal with various services that all need to be up and running for the whole system to work flawlessly. You will most likely have to take care of services (re)starting, too – maybe in the right order.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Most operational aspects, such as deployment, logging, monitoring, debugging or auditing get more complex because there are more components involved. You most likely will need considerable efforts, even conceptually, to maintain a bunch of interdepending modules where, given a monolith, you would just have to work with a single application and maybe its logfiles and monitoring hooks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What’s actually an advantage in some use cases comes as a disadvantage in straightforward operations: You will have to consider scalability and performance not just for one application but for many. If you encounter bottlenecks, you will potentially have a much harder time finding out which component is critical and how to scale it in order to work well.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Also from a system security point of view, things might become … interesting. This, obviously, includes HTTP/REST APIs exposed and available to clients – are they always appropriately secured, audited, encrypted? Or is SSL termination rather just something happening at an external gateway, while local network traffic doesn’t use HTTPS at all? Are services capable of detecting (and dealing with) man-in-the-middle attacks? Are applications in any way able to validate requests and responses to figure out where they are coming from and whether they are actually reliable? Or, even more simply, how can we avoid internal services talking to each other using standard credentials stored in configuration files (and, maybe worst of all, in some git repository available to the whole team)? Do we ensure that “production” systems can only talk to each other and are not reachable from, say, development or testing environments?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
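&lt;p&gt;To make the service-to-service authentication question a bit more concrete, here is a minimal sketch of HMAC-signed requests in Python – an assumption for illustration only, not a description of any particular setup:&lt;/p&gt;

```python
import hashlib
import hmac

# Assumption for illustration: each pair of services shares a secret that is
# provisioned at deploy time, not committed to a git repository.
SHARED_SECRET = b"example-secret"

def sign(body):
    """Attach an HMAC so the receiver can check origin and integrity."""
    return hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()

def verify(body, signature):
    """Constant-time comparison avoids leaking the signature via timing."""
    return hmac.compare_digest(sign(body), signature)
```

&lt;p&gt;This answers only the “who sent this, and was it altered?” part of the list above; transport encryption and network segregation still have to be handled separately.&lt;/p&gt;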

&lt;h2&gt;
  
  
  So why bother after all?
&lt;/h2&gt;

&lt;p&gt;Being confronted with all these points myself, defending microservices wouldn’t be all too hard: There are established solutions to most of these problems. There are orchestration and deployment infrastructures to handle vast loads of servers and services, ranging from more “traditional” tools such as Puppet or Chef to large-scale environments such as Kubernetes. There are tools such as Zabbix, Nagios, the Elastic Stack and a plethora of others, in most cases open source, freely available and just waiting for you to solve your problems with them. Plus, for each of these problems you might find at least one good reason to give up on monolithic structures in favor of a more modular microservices architecture.&lt;/p&gt;

&lt;p&gt;That’s actually okay. I don’t really see a monolith as a very desirable approach to application architecture anymore, either. But, as always: Your mileage might vary. If there’s one thing to take away here, then it should be: Be careful about why you cut your system small. You will end up with a more distributed system, an at least initially increased maintenance effort, and a load of &lt;a href="https://en.wikipedia.org/wiki/No_Silver_Bullet"&gt;accidental complexity&lt;/a&gt; added to a system that might just fail. So be critical about the drawbacks and downsides of such an approach, focus on business requirements and, most of all: &lt;/p&gt;

&lt;p&gt;Come up with good reasons why to build a modular, distributed system instead of a monolith. Come up, too, with a good idea of how “small” or “large” a good size for a service is in your environment. These reasons might vary depending upon your business domain and company size, so I’ll soon spend some time pondering our reasons to still follow this path. Stay tuned… And feel free to add feedback and insights of yours. Way more than reporting our ideas, I'd like to see experiences others have made in this field...&lt;/p&gt;

&lt;p&gt;(originally posted on &lt;a href="http://dm.zimmer428.net/2017/09/small-scale-microservices-in-the-wild-1-anachronistic-monoliths/"&gt;dm.zimmer428.net&lt;/a&gt;)&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>tech</category>
      <category>architecture</category>
      <category>lessonslearnt</category>
    </item>
    <item>
      <title>Keeping the beginners mind - lessons learnt from building and deploying software modules</title>
      <dc:creator>Kristian R.</dc:creator>
      <pubDate>Wed, 09 Aug 2017 19:11:13 +0000</pubDate>
      <link>https://dev.to/kr428/keeping-the-beginners-mind---lessons-learnt-from-building-and-deploying-software-modules</link>
      <guid>https://dev.to/kr428/keeping-the-beginners-mind---lessons-learnt-from-building-and-deploying-software-modules</guid>
      <description>&lt;p&gt;I've been into writing and running software for quite a while now, and as I see a bunch of articles and writings out here providing inspiring hints on what (not) to do to become a good or even better developer, thought I'd just chime in and share some of my insights on that subject. These are things I'd like to see in aspiring developers working in my team, too. Here we go:&lt;/p&gt;

&lt;h2&gt;
  
  
  Put problems first
&lt;/h2&gt;

&lt;p&gt;No matter how good you are at mastering framework X or language Y: Learn to focus on problems you want to solve rather than solutions at hand. Otherwise you're in for &lt;a href="https://en.wikipedia.org/wiki/Law_of_the_instrument"&gt;the Golden Hammer&lt;/a&gt;: If you put your solution ideas first, you sooner or later will end up looking at your problem with the solution in mind, and you will always tend to judge whether you can solve a problem by how well your "familiar" solution works here. So, here: Try to talk to "business" people (those who want their problems solved) often, try to listen, try to understand what they need. Eventually help them get their requirements straight. Don't throw in solution ideas too quickly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Qualified thinking outside the box
&lt;/h2&gt;

&lt;p&gt;A successor to "problems first" and the "Golden Hammer": Yes, it is important to have solid skills in a bunch of tools in order to get work done quickly and in high quality. But in just the same way, you should have at least a vague idea of a whole range of different architectural and technical approaches, frameworks, concepts and solution methods. &lt;/p&gt;

&lt;p&gt;I had an interesting experience with that a few years ago, working on performance issues in our environment with a database consultant. He looked across the list of open connections on the system and wondered why they all came from the same IP – until eventually concluding, "ah, OK, that's some sort of application server then." I both felt completely dumb and was greatly enlightened in the course of the conversation that followed, mostly because in my world back then, hiding the RDBMS and other means of persistence behind an application server facade was so incredibly common that the "obvious" thing, at least for a local use case (build a desktop application that connects to the database directly), was completely out of sight. It wouldn't have been absolutely "better" or "worse" – we didn't even think of evaluating it.&lt;/p&gt;

&lt;p&gt;I end up seeing a similar thing these days with services that use HTTP (and JSON) extensively for communication. Sometimes, in such applications, I see dedicated routes or query params for downloading binary files where necessary. The "obvious" and HTTP'ish solution (using a different content type and content negotiation) just rarely happens. &lt;/p&gt;
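&lt;p&gt;As a rough sketch of what the HTTP'ish approach could look like (the function and parameter names are made up for the example), content negotiation boils down to dispatching on the Accept header instead of adding an extra download route:&lt;/p&gt;

```python
import json

def negotiate(accept_header, record, pdf_bytes):
    """Serve one resource under several representations, chosen via the
    Accept header rather than a separate download route or query param."""
    if "application/pdf" in accept_header:
        return "application/pdf", pdf_bytes
    # Default representation: JSON.
    return "application/json", json.dumps(record).encode("utf-8")
```

&lt;p&gt;A real service would plug this dispatch into its routing layer and handle quality values in the Accept header, but the idea stays the same: one URL, several representations.&lt;/p&gt;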

&lt;p&gt;Some problems are likely to be solved easier with different solutions, different ways of thinking, different approaches to implementation. Try to search for good solutions from a conceptual point of view. Try to realize when you start putting your "comfortable" solution approach first, and try to question that in these situations.&lt;/p&gt;

&lt;p&gt;So, to cut this short: In your learning efforts, you should spend some time on browsing what's going on in the tech and development world, maybe listen to some podcasts and hack your way through five-minute-tutorials to get an idea of things even without dedication to completely master them or to &lt;em&gt;have&lt;/em&gt; to use them in your next project. Think of a toolbox: You should have a mostly familiar set of tools around, plus you should have an idea of what tools are available off-the-shelf and how they generally work, knowing you'd be able to quickly learn how to use them well if needed. &lt;/p&gt;

&lt;h2&gt;
  
  
  Keep solutions simple wherever possible
&lt;/h2&gt;

&lt;p&gt;There's complexity inherent to the problem you're trying to solve, and there's complexity arising from the tools you use to solve it. You won't change much about the first one – the problem is as complex as it is. But you possibly can change the second one by carefully choosing how many frameworks, how many runtime dependencies and how many mandatory configuration options you add to your components. Sure, they can ease your development and make it faster for you to get your solution up and running, but in the long run you will have to handle these things each and every time you touch that module. &lt;/p&gt;

&lt;h2&gt;
  
  
  Know your basics
&lt;/h2&gt;

&lt;p&gt;Even though straightforward coding is fun: Some fundamental understanding of the things behind the code makes life easier and your code better. Yes, it's nice to be able to read and write code in a bunch of languages, but it's even more important to know some basic concepts – a bunch of fundamental algorithms, some essentials about data structures, abstraction, things such as polymorphism or inheritance, recursion, not to forget some design patterns... . Recognizing a QuickSort implementation or a Singleton when you stumble across one will help you understand both written code and overall system behaviour faster and better. &lt;/p&gt;
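&lt;p&gt;To illustrate the kind of pattern knowledge meant here, a classic Singleton in Python (the class name is invented for the example):&lt;/p&gt;

```python
class ConnectionPool:
    """Classic Singleton: every call to instance() yields the same object."""

    _instance = None

    @classmethod
    def instance(cls):
        # Lazily create the one shared object on first access.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance
```

&lt;p&gt;Spotting this shape in unfamiliar code immediately tells you something about the system: there is exactly one shared pool, and its lifetime spans the whole process.&lt;/p&gt;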

&lt;p&gt;This, too, is a field where you might want to spend a bit more time learning, as this knowledge usually is more re-usable when moving between different stacks and languages (even though, for example, some design patterns make more sense in some languages than in others...). &lt;/p&gt;

&lt;h2&gt;
  
  
  Know your tools
&lt;/h2&gt;

&lt;p&gt;I'm pretty much rooted in a Java world where people are used to working with fully-fledged, powerful IDEs such as Eclipse or NetBeans. Personally I quite like this, as it indeed makes things easier. Be careful, however: these tools are made to ease repetitive, boring routine tasks for people who know what's going on and want to be faster by not having to do all these things manually over and over again. The very moment you use an IDE or a similar tool without understanding (or worse: without wanting to understand) what happens under the hood, you might end up in trouble faster than you like. The same goes for some batteries-included frameworks such as &lt;a href="https://jhipster.github.io/"&gt;jhipster&lt;/a&gt; which get a load of boilerplate code and setup hassle out of your way but are absolutely sure to make you stumble if you don't have an understanding of what's inside and how these things go together.&lt;/p&gt;

&lt;h2&gt;
  
  
  Be ready to ship
&lt;/h2&gt;

&lt;p&gt;I had quite some experiences, both in research projects and in corporate development, where prototypes and early versions of modules &lt;em&gt;of course&lt;/em&gt; used to run in an IDE on a local developer's notebook. In some of these cases, moving the application out of the IDE and onto a dedicated production server turned messy, because all of a sudden we had a wholly new level of complexity. Suddenly packaging of applications and dependencies was necessary. Suddenly we had to cope with the fact that an application required dependencies in "newer" versions than those provided by the application server, so we needed to question the runtime and deployment strategy for those. In some cases it was only after the first deployment to an "external" server that we discovered access to resources local to the development machine hard-coded somewhere in the application sources. Today, presenting applications on a dedicated testing system (not on a developer's laptop) is part of the team's Definition Of Done. So generally, making artifacts of your applications ready to ship early is a good idea. It helps gather fast feedback, it makes dependencies and runtime requirements obvious, and it helps avoid the dreaded "works on my machine" situation, which is a good chance to get into trouble after going live because the production environment differs from your development system in unknown ways. That's something you can easily prepare for, and so you should.&lt;/p&gt;
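&lt;p&gt;One concrete instance of the hard-coded-resources trap is path handling. A small Python sketch – APP_HOME is an invented variable name, just one possible convention:&lt;/p&gt;

```python
import os
from pathlib import Path

# Assumption for the example: the deployment sets APP_HOME; during local
# development it falls back to the current directory. Either way, no path
# is hard-coded to one developer's machine.
BASE_DIR = Path(os.environ.get("APP_HOME", ".")).resolve()

def resource_path(name):
    """Build an absolute path that travels with the deployed artifact."""
    return BASE_DIR / "resources" / name
```

&lt;p&gt;Shipping early tends to surface exactly this class of assumption, because the fallback stops working the moment the artifact leaves the development machine.&lt;/p&gt;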

&lt;h2&gt;
  
  
  Keep the "beginner's mind"
&lt;/h2&gt;

&lt;p&gt;Finally: Never fall for the misconception of mastering things, of knowing all or even just "pretty much". You never do. You'll always be a learner in every possible way. No matter how many development jobs and projects you went through, no matter how many tech stacks you have seen or how many business problems you have solved - there's always way more. There's absolutely no reason to think you don't have to learn anything else. You'll always be learning... in development, in technology, in life in general.&lt;/p&gt;

&lt;p&gt;From a developer's perspective, this is especially interesting as you grow older: You're likely to see more and younger people in your team while possibly, at the same time, moving a bit further into some sort of "senior engineer" role. You will see young colleagues come up with new and unfamiliar ideas. You will see new tools and frameworks arise (and fall, in some cases). You'll see new methods and processes come, stay and go. You'll see people trying things and possibly making the same mistakes you did. Challenge these guys, but be open to what they say. Let them inspire you. Grow along with each other a bit more.&lt;/p&gt;

&lt;p&gt;Comments and inspirations welcome. I'm but a learner too. ;)&lt;/p&gt;

&lt;h2&gt;
  
  
  Some readings
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="http://www.catb.org/esr/faqs/hacker-howto.html"&gt;esr's "How To Become A Hacker"&lt;/a&gt; -- something that shouldn't be taken all too seriously either, but you should still read it at least once; a few points in there are pretty valid.&lt;/li&gt;
&lt;li&gt;
&lt;a href="http://artlung.com/smorgasborg/C_R_Y_P_T_O_N_O_M_I_C_O_N.shtml"&gt;Neal Stephenson's "In The Beginning There Was The Command Line"&lt;/a&gt; -- more fiction than fact, this is an extremely interesting read on technology, technological progress, user interfaces, disputable metaphors and a bunch of other things.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://landing.google.com/sre/book/index.html"&gt;Google "Site Reliability Engineering" book online&lt;/a&gt; -- strictly speaking this is not aimed at developers, but there are some valid insights for developers to be found here, too.&lt;/li&gt;
&lt;li&gt;&lt;a href="http://simplicable.com/new/accidental-complexity-vs-essential-complexity"&gt;Essential vs. accidental complexity&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="http://www.dylanbeattie.net/2017/04/it-works-on-my-machine.html"&gt;It Works On My Machine&lt;/a&gt; -- entertaining article with some neat pointers on one of the fundamental "issues" between dev and ops&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>career</category>
      <category>opinion</category>
      <category>learning</category>
    </item>
    <item>
      <title>SAP MaxDB on Docker</title>
      <dc:creator>Kristian R.</dc:creator>
      <pubDate>Tue, 08 Aug 2017 19:16:30 +0000</pubDate>
      <link>https://dev.to/kr428/sap-maxdb-on-docker</link>
      <guid>https://dev.to/kr428/sap-maxdb-on-docker</guid>
      <description>&lt;p&gt;Using containers for development and testing environments is pretty much state of the art these days. This way, developers and operations guys alike are able to gain speed by reducing the time wasted on handling various infrastructural aspects just to be able to work at all. Consequently, a load of applications and runtime environments are already available to run inside containers – but not all. SAP MaxDB is on our list of tools in production use, and this is one of these technologies which is no first-class citizen on Docker yet. But it’s not too difficult to change this, either. Here’s our current approach…&lt;/p&gt;

&lt;h2&gt;What’s the point?&lt;/h2&gt;

&lt;p&gt;SAP MaxDB is a relational DBMS offered by SAP AG. It’s a proven, stable product &lt;a href="https://en.wikipedia.org/wiki/MaxDB#History"&gt;with a long “history”&lt;/a&gt; both in technological and in licensing terms. We got to know this tool around 2005, when MaxDB was provided under an open-source (GPL) license and even, at version 7.5, available pre-packaged in Debian main. Things have changed considerably since then: the application is now offered exclusively by SAP again, provided free of charge as a “community version” for use with non-SAP applications and without any official SAP support. We still enjoy using this platform, however, and meanwhile excellent enterprise-grade support is provided by &lt;a href="http://infolytics.com/"&gt;7P infolytics&lt;/a&gt; – for administrative questions and beyond. It’s safe to say they saved us more than once.&lt;/p&gt;

&lt;p&gt;So at the moment we're torn: on the one hand, preferring open source software in our stack, the current SAP MaxDB licensing is a difficult thing to deal with. On the other hand, changing relevant components in a grown software stack is always difficult, and even more so when talking about a core SQL database containing a significant amount of data and having a bunch of relevant applications tied to it. We'll still have to live with it for a while, and in the meantime it should be handled as well as possible. This is where Docker comes in.&lt;/p&gt;

&lt;h2&gt;Local installation&lt;/h2&gt;

&lt;p&gt;To build a MaxDB Docker image, start out by installing plain SAP MaxDB on a Linux machine or VM. In our case we have an archive of installer packages cached locally; anyone else can get a &lt;a href="https://www.sap.com/community/topic/maxdb.html"&gt;download&lt;/a&gt; here, which however requires an SAP SDN login.&lt;/p&gt;

&lt;p&gt;Straight ahead:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Download the Community Edition. Don’t let the “Trial Version” label on the web page scare you away. Unzip this somewhere on your drive.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use the SDBINST text-based installer to install the database to your machine. I don’t use the interactive installer but rather start SDBINST with all required parameters, just like this: &lt;code&gt;./SDBINST -global_prog /opt/maxdb/sdb/globalprograms -global_data /opt/maxdb/sdb/globaldata -o root -g root -i MaxDB -path /opt/maxdb/MaxDB -description "maxdb install" -network_port 7200&lt;/code&gt;&lt;br&gt;
There are two basic changes made, compared to a MaxDB default installation:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;All data installed by this process should go to /opt/maxdb/. This is not strictly required for building a Docker container but it eases things a bit.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Owner (usually sdb) and group (usually sdba) are forced to be root. This is not nice and maybe not necessary but helps getting started.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By then, you should have a local MaxDB installation able to run on your system. You don’t have any databases yet, though, and you also don’t have a running container.&lt;/p&gt;

&lt;h2&gt;Building and running the container&lt;/h2&gt;

&lt;p&gt;… should be next. One of the annoying things about doing so for applications that don’t come packaged for any operating system distribution is that they tend to spread files and data all across the local file system. By specifying the install folders as above, this is at least somewhat reduced to merely three file system resources you need. My build configuration &lt;a href="https://github.com/kr428/maxdb-docker"&gt;can be found at github&lt;/a&gt;; you could start out just there:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clone or download the project folder. Unlike in virtually any other situation, you should be root for all of the following steps.&lt;/li&gt;
&lt;li&gt;Copy and move the MaxDB installation data from your local file system into that folder, full paths included:

&lt;ul&gt;
&lt;li&gt;/opt/maxdb – contains the actual database engine and all the files belonging to it&lt;/li&gt;
&lt;li&gt;/etc/opt/sdb – contains the MaxDB installation registry required by the runtime tools to find its resources&lt;/li&gt;
&lt;li&gt;/var/lib/sdb – mostly used for the database server’s shared memory handling&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;docker build . -t local/maxdb&lt;/code&gt; in this folder.&lt;/li&gt;
&lt;/ul&gt;
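&lt;p&gt;The copy step above can be sketched as a small shell helper – the function name is made up for illustration, the three directories are the ones listed:&lt;/p&gt;

```shell
# Sketch: replicate a directory into the Docker build context, keeping the
# full path, so the Dockerfile can COPY it back to the same location.
# copy_tree() is a made-up helper name, not part of any MaxDB tooling.
copy_tree() {
    src="$1"   # directory to copy, e.g. /opt/maxdb
    ctx="$2"   # Docker build context, e.g. the cloned project folder
    mkdir -p "$ctx$(dirname "$src")"
    cp -a "$src" "$ctx$(dirname "$src")/"
}

# Usage for the three MaxDB resources (run as root, paths as listed above):
#   copy_tree /opt/maxdb   .
#   copy_tree /etc/opt/sdb .
#   copy_tree /var/lib/sdb .
```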

&lt;p&gt;At this point I quietly assume there is a running Docker installation available on your machine.&lt;/p&gt;

&lt;p&gt;You should then be able to run containers using your newly created MaxDB image like this: &lt;code&gt;docker run --name maxdb -d -p 7200:7200 -p 7210:7210 local/maxdb:latest&lt;/code&gt;. This will start a local container named “maxdb” based on the &lt;a href="https://hub.docker.com/_/ubuntu/"&gt;ubuntu:latest&lt;/a&gt; base image. The startup procedure will start the &lt;a href="https://help.sap.com/doc/saphelp_maxdb77/7.7/en-US/45/376baca05f6bf1e10000000a1553f6/frameset.htm"&gt;MaxDB x_server&lt;/a&gt;, create an empty database, bring it online and expose the ports required for external applications to talk to the database. If everything worked well, you should now be able to connect to this instance using, for example, a JDBC tool of your choice (I prefer and recommend &lt;a href="http://www.sql-workbench.net/"&gt;SQLWorkbench&lt;/a&gt;) with the JDBC URL &lt;code&gt;jdbc:sapdb://localhost/TESTDB&lt;/code&gt; and credentials SQLUSER/SQLUSER.&lt;/p&gt;
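&lt;p&gt;To give an idea of what such a build configuration does, the Dockerfile behind this could look roughly like the following – a hypothetical sketch only, the actual file lives in the github repository:&lt;/p&gt;

```dockerfile
# Hypothetical sketch -- the real build file is in the github repo.
FROM ubuntu:latest

# MaxDB installation copied from the local host, full paths preserved
COPY opt/maxdb /opt/maxdb
COPY etc/opt/sdb /etc/opt/sdb
COPY var/lib/sdb /var/lib/sdb

# startup script and database configuration (see below)
COPY db.sh db.ini /

# x_server and database ports
EXPOSE 7200 7210

CMD ["/db.sh"]
```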

&lt;h2&gt;Customizing the container&lt;/h2&gt;

&lt;p&gt;This way you get a MaxDB installation running inside a local container which your applications can talk to. If you want to change how the container works, have a look at &lt;code&gt;db.sh&lt;/code&gt; and &lt;code&gt;db.ini&lt;/code&gt;; these make use of the &lt;a href="https://help.sap.com/saphelp_nw70/helpdata/en/a9/ffa841c0dadd34e10000000a1550b0/frameset.htm?frameset=/en/a0/f8a841c0dadc34e10000000a1550b0/frameset.htm&amp;amp;current_toc=/en/44/4885bdc64c0a67e10000000a422035/plain.htm&amp;amp;node_id=4&amp;amp;show_children=false"&gt;MaxDB dbmcli utility&lt;/a&gt;, and the options you most likely want to change (name of the initial database created, credentials for database users, …) are in there. Generally that’s not rocket science, so if you have rudimentary MaxDB experience you’ll figure out what to tweak pretty quickly.&lt;/p&gt;
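&lt;p&gt;Just to give an idea of the kind of dbmcli calls you'll find in there (a rough pseudocode sketch, not the literal content of db.sh – database name and credentials are examples):&lt;/p&gt;

```
# create a database instance named TESTDB, with DBM operator user DBM/DBM
dbmcli db_create TESTDB DBM,DBM

# then, in a session against that instance, bring the database online
dbmcli -d TESTDB -u DBM,DBM db_online
```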

&lt;h2&gt;TODOs and limitations&lt;/h2&gt;

&lt;p&gt;There still are things to be improved about this, of course: First and foremost, the container should not run as root. This requires a bit more fiddling with users inside the container and preserving file permissions and SUID bits on some of the binaries; I still need to find a good solution for that. The second limitation is that, this way, you end up with a database that’s empty. For testing environments, this might not always be what you want. Aside from this, obviously everything possible with Docker can be done with MaxDB too, including mapping external data volumes and stores for keeping persistent databases – which I’ll be evaluating on some of our testing systems next. It’s just a matter of effort – and requirements, as usual. ;)&lt;/p&gt;
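&lt;p&gt;For the persistent-database idea, a docker-compose sketch could look like this – hypothetical only, and the in-container mount path is an assumption that would need to match the actual database volume layout:&lt;/p&gt;

```yaml
# Hypothetical sketch: keep database files in a named volume so they
# survive container re-creation. The mount path is an assumption.
version: "2"
services:
  maxdb:
    image: local/maxdb:latest
    ports:
      - "7200:7200"
      - "7210:7210"
    volumes:
      - maxdb-data:/var/opt/sdb
volumes:
  maxdb-data:
```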

</description>
      <category>linux</category>
      <category>docker</category>
      <category>tutorial</category>
      <category>image</category>
    </item>
    <item>
      <title>Shipping containers: Lessons learnt from adopting Docker</title>
      <dc:creator>Kristian R.</dc:creator>
      <pubDate>Thu, 03 Aug 2017 13:19:05 +0000</pubDate>
      <link>https://dev.to/kr428/shipping-containers-lessons-learnt-from-adopting-docker</link>
      <guid>https://dev.to/kr428/shipping-containers-lessons-learnt-from-adopting-docker</guid>
      <description>&lt;p&gt;We started dealing with Docker in Q3/2016 pursuing the goal of adopting it for an existing system infrastructure. By now we do have several production applications deployed and running as Docker containers. It’s a fairly good approach helping you getting standards into your environment and making life a little easier. It has a learning curve though, and a load of advanced features that might get you off track pretty quickly because you’re tempted to deal with them even while you don’t need them (yet). Don’t try to go full-blown container deployment pipeline in one step but rather try to get your environment transformed in small incremental steps. Make sure you feel safe with each increment, gain experience with all things involved in order to avoid losing control of your systems after a big change. Read on for more…&lt;/p&gt;

&lt;h2&gt;What about it?&lt;/h2&gt;

&lt;p&gt;Containers are everywhere, and so is Docker, which seems to be the essence and most widespread of these technologies to date. We started looking into this only a year ago, for various reasons. Things always change. Our team structure changed, especially regarding development and operations. The amount of technical components increased, and so did the set of different frameworks and stacks involved. Deployment, on the other hand, was supposed to be made easier, not more difficult. Strategic approaches changed, too, moving towards an industry-group-based IT infrastructure with central hosting and teams shipping dedicated services in a common way. All along with this, Docker appeared on our radar and remained there.&lt;/p&gt;

&lt;p&gt;By now, we do have a bunch of application modules (to avoid the term “microservices” for a moment) built, shipped and running in a Docker environment. It works well; it surely eased a couple of things while introducing new, interesting questions to solve. We’re still far from a fully-automated continuous delivery environment where each and every piece of code runs into production automatically. Every tooling and every technology-backed solution comes with a learning curve, and handling containers is no different. However, there already are a couple of insights we made that I consider worth noting…&lt;/p&gt;

&lt;h2&gt;Know your problems&lt;/h2&gt;

&lt;p&gt;Though this sounds blatantly obvious, you should very much focus on figuring out which problems you actually have in your environment and how containers, or more specifically Docker, could help solve them. The simple reason for stating this: there’s a load of vendors, solution providers and consultants trying to figure out what needs their customers could eventually have, and there’s a load of tooling providers trying to sell their idea of what a good container-based environment looks like. In the same way, there are wagonloads of case studies, whitepapers, “best practices” and the like outlining what you really need to do to be successful shipping software in containers.&lt;/p&gt;

&lt;p&gt;Most of them aren’t wrong at all. But in the end, you are the one to choose whether or not their solutions actually address your challenges. If your environment works perfectly well – fine. Don’t fall for a technology like Docker simply because it could make things “even better”. If it ain’t broke, don’t fix it.&lt;/p&gt;

&lt;p&gt;That said, our problem scope was rather clear: we want(ed) to straighten out deployment procedures a bit. All along with moving from a Java EE application server to multiple standalone JVM applications with embedded HTTP servers a while ago, at some point we exchanged the complexity of a full-blown, heavy server component for the complexity of maintaining and running multiple standalone applications. It got better, but of course it left a couple of old and new issues unsolved. Docker, at first and second glance, looked fairly well like an approach to solve at least some of these issues. This is why we started introducing Docker for deployment and operation first, explicitly focusing on keeping everything “as is” for developers. This is a sufficiently complex task on its own – early inclusion of developers would have made it blow up.&lt;/p&gt;

&lt;h2&gt;Set your boxes&lt;/h2&gt;

&lt;p&gt;One of the best things Docker can do for existing IT applications is forcing you to re-think applications and dependencies. In real-world environments, there’s always a chance of interactions and dependencies between components and systems that aren’t obvious or documented, simply because no one ever thought about them as dependencies. Think about simple things such as your application depending upon some Linux binary that is “just there in /usr/bin” and invoked from within your application: everything’s fine as long as your application runs on the host it was initially deployed to. Will this still work after moving to a different host with different packages in different versions installed in the base operating system distribution? Or your application uses a mounted network filesystem it expects in /media which, after moving to a new environment, is now mounted in /mnt? Maybe these things even are subject to configuration in your application – are you still sure this will be obvious in a relevant situation? Are you sure you will be able to gracefully handle additional dependencies being introduced at some point in time?&lt;/p&gt;

&lt;p&gt;Dealing with Docker has greatly helped raise awareness of these issues by providing that “container” idea. It forces you to think of your applications even more as closed boxes, and of interactions between applications as links between these boxes that need to be made explicit in order to work at all. Docker is not really needed for that, but it helps getting there, both by technologically enforcing this container approach and by providing ways to describe dependencies and communication between applications: the base image used for your container describes which operating system libs and binaries will be around. Volumes mounted into and ports exposed by your container describe some of its interface. Using something like docker-compose, you’ll be able to describe dependencies between containers in a stricter manner, too (in which order they have to be started to work, how they are linked, …).&lt;/p&gt;
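&lt;p&gt;As a sketch of what “describing dependencies” looks like in docker-compose terms (service and image names made up for illustration):&lt;/p&gt;

```yaml
# Hypothetical sketch: the web service declares its dependency on the
# database explicitly, so start order and linkage are part of the description.
version: "2"
services:
  db:
    image: postgres:9.6
  web:
    image: local/webapp:latest
    depends_on:
      - db          # start db first; reachable from web under hostname "db"
    ports:
      - "8080:8080"
```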

&lt;p&gt;Again: This is nothing Docker really is needed for, but it’s a point where it definitely helps getting done some kind of work IT organizations will have to do anyway.&lt;/p&gt;

&lt;h2&gt;Start safe and fast&lt;/h2&gt;

&lt;p&gt;Pretty often these days, people talk about “infrastructure as code”. If so (and I think this really is a good idea in some ways), we should also apply the rules we want to apply to software development, which should be agile in general: focus on providing visible value in short iterations, try to improve incrementally, try to start somewhere, learn from real-world use cases in production environments, improve. Translating this approach to Docker-based deployment, the recommendation would be: make sure you get from mere theory to a working environment quickly. Pick an application candidate, learn how to build an image from it, learn how to run this image in a container. Get this to run in production, and make sure everyone on your (Dev)Ops team has an idea of how to handle it even when you’re off on vacation.&lt;/p&gt;

&lt;p&gt;Why? Well, with a full-blown environment featuring, say, Docker, dedicated operating systems such as CoreOS, orchestration tools like Kubernetes, maybe configuration services such as etcd, or local or remote Docker registries, you very quickly end up with too many new things to introduce into your environment. These are loads of changes that are very likely to break existing workflows for relevant stakeholders in your IT organization. Your operations people need to learn to manage and get used to all this, and while they do so, your system stability is at risk.&lt;/p&gt;

&lt;p&gt;Don’t do that. Try to find a minimum meaningful increment, and get this out and running as fast as possible. In our case, this was all about extending our current deployment procedure based upon common tools such as…&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ssh/scp for copying artifacts between various servers,&lt;/li&gt;
&lt;li&gt;bash scripts for starting / stopping / updating binaries on testing and production environments,&lt;/li&gt;
&lt;li&gt;zip artifacts to ship binaries, start/stop scripts and minimum configuration files,&lt;/li&gt;
&lt;li&gt;stock Ubuntu/Debian servers for running the components, and&lt;/li&gt;
&lt;li&gt;a local gitlab server and gitlab-ci for building “regular” Java / maven artifacts and Docker images alike&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;to work with the Docker CLI and Docker images stored to files. Initially we’re even running without a local or central Docker registry and just use &lt;code&gt;docker save&lt;/code&gt; and &lt;code&gt;docker load&lt;/code&gt; for saving images to files and loading them again on the Docker servers. Docker containers themselves run on the same stock Ubuntu servers as before, side by side with existing applications. Docker containers are still stopped and restarted using bash scripts.&lt;/p&gt;
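&lt;p&gt;That registry-less shipping workflow boils down to something like this sketch – image and host names are made up; with DRY_RUN left at its default, the script only prints what it would do:&lt;/p&gt;

```shell
#!/bin/sh
# Sketch of shipping an image without a registry: save it to a file, copy it
# over ssh, load it on the target Docker host. Names are examples only.
set -e
IMAGE="local/webapp:latest"
TARGET="docker-host-1"
DRY_RUN="${DRY_RUN-1}"             # default: just print; set DRY_RUN= to execute
run() { ${DRY_RUN:+echo} "$@"; }

run docker save -o webapp.tar "$IMAGE"
run scp webapp.tar "$TARGET:/tmp/webapp.tar"
run ssh "$TARGET" docker load -i /tmp/webapp.tar
```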

&lt;p&gt;It can’t possibly get any more bare-boned than this, but it works. So far it doesn’t change anything for developers. It changes little for operations guys. But it already provides a few benefits, including being able to look into containers, log files, … using the web-based portainer UI, or to roll back to old versions of production code on live servers way more easily. That’s enough benefit for operations folks to justify dealing with this, and yet the change introduced this way is acceptable.&lt;/p&gt;

&lt;h2&gt;Scale well&lt;/h2&gt;

&lt;p&gt;If you made it here, the remainder will possibly be rather trivial – essentially: re-iterate. Find the thing bugging you the most about the status quo you just reached, figure out how to improve it, and actually improve in another small step. Maybe giving your IT teams some time to get acquainted with the new structure and collect feedback on what should be the next step is a good idea, even if you have a clear agenda of where you want to be heading. In our case, a few next steps that seem desirable and needed are…&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;… using or adopting an internal Docker registry for better image handling,&lt;/li&gt;
&lt;li&gt;… dealing with configuration and container start / stop scripts in a smarter manner,&lt;/li&gt;
&lt;li&gt;… figuring out a way for dealing with and delivering large binary data files in a better way than having to mount CIFS shares to all servers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We do have a couple of ideas for each of those, but neither priorities nor actual solutions are finalized yet. Let’s see where this all is heading; maybe I’ll recap this in the near future if there’s something worth noting. Stay tuned.&lt;/p&gt;

&lt;h2&gt;Post Scriptum&lt;/h2&gt;

&lt;p&gt;There’s possibly a load more to write about this, but at some point someone would be required to read all of it, so I’ll keep it at that and get into details if anyone’s interested. Conclusions so far, however: spending quite some office as well as spare time on Docker and containers has paid off, even though there’s a load of things that can be improved. Personally, looking at these things, I found great insights and entertainment in listening to podcasts and reading books and articles on DevOps and agile software development at the same time. I found that most developers (well, including myself at times) are likely to push for great, feature-laden new releases or new technologies or new frameworks and stacks with all the bells, whistles and kitchen sink possible these days. At the same time, I perfectly understand each and every operations guy who’s supposed to adhere to a way more conservative point of view, keeping any changes done to a production environment as small as somewhat possible (or, best of all, avoiding them), knowing that change is the enemy of stability. Agile development and DevOps culture seem a good way to resolve this “conflict” – even though not an easy way, for several reasons. But that’s a different thing…&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally posted &lt;a href="http://dm.zimmer428.net/2017/07/shipping-containers-lessons-learnt-from-adopting-docker/"&gt;on my blog&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>deployment</category>
      <category>linux</category>
    </item>
  </channel>
</rss>
