<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Andrei Dascalu</title>
    <description>The latest articles on DEV Community by Andrei Dascalu (@andreidascalu).</description>
    <link>https://dev.to/andreidascalu</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F110539%2F0e53c571-51ed-4cbc-8840-7399b45a4944.jpeg</url>
      <title>DEV Community: Andrei Dascalu</title>
      <link>https://dev.to/andreidascalu</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/andreidascalu"/>
    <language>en</language>
    <item>
      <title>CI vs CD - where the magic happens</title>
      <dc:creator>Andrei Dascalu</dc:creator>
      <pubDate>Thu, 30 Dec 2021 17:42:24 +0000</pubDate>
      <link>https://dev.to/andreidascalu/ci-vs-cd-where-the-magic-happens-1aoa</link>
      <guid>https://dev.to/andreidascalu/ci-vs-cd-where-the-magic-happens-1aoa</guid>
      <description>&lt;p&gt;Continuous Integration and Continuous Deployment are centerpieces of modern software development. They're well-known concepts. A bit too well known, I'd say, in the sense that they're so ubiquitously used that few spend much time thinking about what they mean.&lt;/p&gt;

&lt;p&gt;Here we'll try to formulate (and nuance a bit) the key concepts that sit behind each of them, what makes them different and what makes them work together.&lt;/p&gt;

&lt;p&gt;Continuous Integration is a product of extreme programming (a lot of the things we take for granted today come from the days of good ol' XP). The gist of CI is that &lt;strong&gt;developers shouldn't sit on the code they wrote, instead the code should be integrated with the project and other developers' code as soon as possible&lt;/strong&gt; so that the team can ensure everything still behaves as intended and bugs can be rooted out ASAP.&lt;/p&gt;

&lt;p&gt;For that purpose, Continuous Integration relies on the following (these are not hard rules, nor are they universally agreed upon; rather, they are a compilation of advice given over time by XP practitioners).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;a repository: it's easy to forget that version control hasn't always been around, and even when it came to be, there were things before Git (just watch Linus Torvalds' famous Google presentation on Git - many projects used plain tarball archives to integrate code). However, it's important to consider that the ability to branch is an enemy of CI. The goal is to integrate: according to Martin Fowler, developers should merge code often and branch only when really necessary.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;self-testing: each piece of code should be able to stand on its own. This means developers should test before integrating (whether automatically or manually doesn't matter).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;automated integration: since integrations must be done often, there must be a process in place to ensure code quality and integrity in an automated way, one that is able to run continuously as well as on demand. Unit tests, static code analysis, integration tests and so on must happen often.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;speed: since the process above must take place often, it must be fast. In a nutshell, it should scale as the project grows, as the quantity and complexity of the code grows.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;fixes must come with unit tests: a bug means that something escaped the system, therefore a fix should bring the required tests up to speed (whether that means fixing an existing test and/or adding new ones)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;transparency: tests and their results must be accessible and clear to everyone. This means that manual test plans must be shared and known across the team, the results of test runs must be made public - same goes for automated tests.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
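
&lt;p&gt;As an illustration only (the job name and make targets here are hypothetical, not from any particular project), the automation, speed and transparency points above could be sketched as a minimal pipeline definition, e.g. for GitHub Actions:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: ci
on: [push]

jobs:
  integrate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Static analysis
        run: make lint
      - name: Unit and integration tests
        run: make test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Running on every push keeps integration continuous, and the published run results give the transparency mentioned above.&lt;/p&gt;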

&lt;p&gt;Now, the initial proponents of CI included automated delivery in the process. Yes, some time ago CI also meant what we today call CD - Continuous Delivery, in that CD was a step in CI.&lt;/p&gt;

&lt;p&gt;Today, we think of CD as a domain of its own. This is natural given the complexity of the systems to deliver. We rarely deliver one piece of software but rather an ensemble made from frontend(s), backends, APIs, maybe content management tools, often working together. CD is governed by principles of its own:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;the item(s) to deliver must have gone through the CI process: that is, the quality of the item(s) to be delivered must be ensured before they come up for delivery.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the configuration comes from a secure place: when we develop, we generally keep our configuration together within the environment we work in. However, this is insecure and unfit for a production environment, where we must ensure that access to secret configuration items is as restricted as possible and only available to the automated delivery process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;data integrity: while individual services may be stateless, applications as a whole rarely are. In that case, any delivery must ensure that data integrity is not affected, that any schema changes happen securely in a way that won't affect the running application and that failures result in graceful rollbacks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;no downtime: today we have many ways to ensure that deliveries come with little to no downtime. This in itself poses challenges (particularly with respect to data integrity).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;monitoring: the process doesn't end with a deployment. A successful delivery isn't one that merely has no errors along the way, but one that ensures the application is running as expected. This means tracking metrics that showcase expected behaviour (eg: no increased error rate, no new errors, maybe lower latency if a performance update was done, etc) in each layer of the application (backend, frontend, data).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>programming</category>
      <category>devops</category>
      <category>testing</category>
      <category>beginners</category>
    </item>
    <item>
      <title>A little about Github Copilot</title>
      <dc:creator>Andrei Dascalu</dc:creator>
      <pubDate>Mon, 08 Nov 2021 10:24:18 +0000</pubDate>
      <link>https://dev.to/andreidascalu/a-little-about-github-copilot-33dh</link>
      <guid>https://dev.to/andreidascalu/a-little-about-github-copilot-33dh</guid>
      <description>&lt;p&gt;I've played with Github's Copilot for a few days, so I guess it's mandatory to write a few lines about it. &lt;/p&gt;

&lt;p&gt;I'll start with some general observations and then I'll add a few concerns I have with the approach.&lt;/p&gt;

&lt;h2&gt;Observations&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Copilot isn't perfect. It's trained on public code and as such it's only as good as the public code is. &lt;/li&gt;
&lt;li&gt;It's not copy/paste of public code but a sort of average take on very specific solutions. &lt;/li&gt;
&lt;li&gt;Quality depends A LOT on the language. I've tried it with Javascript, Go and PHP, and I could (and I might) write a piece on Copilot with each of them. Javascript is pretty good (but it's not my main language). Go is great - underlining a bit that the language goes a long way to prevent you from shooting yourself in the foot, and the majority of codebases tend to be idiomatic (this is reflected in what Copilot suggests). PHP, on the other hand, suffers from a lot of publicly available legacy codebases with old coding styles, and from the fact that Copilot doesn't quite infer from context the version of PHP that you're using. It may not be bad, depending on what you do with the suggestion received.&lt;/li&gt;
&lt;li&gt;It can be annoying because, while it trained on public code, it doesn't seem to pick up important things from local context. For example, I wrote a class, and when I wrote a function making use of some private member, the suggestion made use of it nicely. The implementation (though simple) was perfect. However, a moment later I wanted to wrap it in a try/catch. Instead of wrapping the existing code, Copilot suggested a whole new block of code: duplicating the existing solution, but inside a try/catch (eg: try {  } catch () {} ). Rather annoying when trying to fix up existing code.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Concerns&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;I can see how Copilot can be very very bad for beginners. Particularly as underlined for PHP, Copilot is as good as the codebases it learned from. With PHP, Copilot produces a lot of ancient-looking code despite surrounding code using PHP 8 features. It's &lt;strong&gt;VERY&lt;/strong&gt; tempting to simply accept the provided solutions, because they do work.&lt;/li&gt;
&lt;li&gt;I can see how Copilot can be good for beginners. To me, it makes sense to use Copilot in the same way you're using Stackoverflow code: as an example to build on. However, this presents another challenge: resist the urge to use code as-is and focus on adjusting the code according to your standards (aka: the project's). It's difficult because the code works and makes sense as-is, but can still be messy.&lt;/li&gt;
&lt;li&gt;Copilot can be bad in the sense that it puts you (a developer in need of help) into a box. It's a box formed by the code Copilot has learned from. A regular internet search can provide you with a variety of ideas to choose from, whereas Copilot chooses for you. It expedites the process but deprives you of a source of learning: simply confronting and weighing different solutions helps your evolution as a developer.&lt;/li&gt;
&lt;li&gt;You can get different solutions from Copilot &lt;strong&gt;but&lt;/strong&gt; at the moment there's no straightforward way to compare them directly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cheers!&lt;/p&gt;

</description>
      <category>programming</category>
      <category>watercooler</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Some Go(lang) tips</title>
      <dc:creator>Andrei Dascalu</dc:creator>
      <pubDate>Sat, 02 Oct 2021 09:43:18 +0000</pubDate>
      <link>https://dev.to/andreidascalu/some-go-lang-tips-30fj</link>
      <guid>https://dev.to/andreidascalu/some-go-lang-tips-30fj</guid>
      <description>&lt;p&gt;Go (styled golang for the purpose of SEO, otherwise finding anything would be impossible) is a pretty great language. It's a language that gets out of your way so that you can just write your applications ASAP. It's a "batteries included" and opinionated ecosystem that brings everything you need to get started.&lt;/p&gt;

&lt;p&gt;I'm writing this mostly as a reminder for myself, as a summary of some observations I made along the way. These are just tiny details (but not really gotchas or traps, just generic tips). I'm sure most of you know about these.&lt;/p&gt;

&lt;h3&gt;Don't use Logrus&lt;/h3&gt;

&lt;p&gt;Ok, this is related to a generic practice in Go. As a strongly and statically typed language, Go doesn't make it easy to wriggle your way around datatypes as you would in JS (Node) or PHP, for example. Lacking generics, writing general-purpose code as you'd need in a &lt;strong&gt;logger&lt;/strong&gt; or &lt;strong&gt;ORM&lt;/strong&gt; is quite difficult, and people resort to reflection.&lt;/p&gt;

&lt;p&gt;Logrus uses reflection heavily, which results in a heavy allocation count. While generally not a huge problem (depending on the code), performance is a big reason people choose Go in the first place, and while it may sound like micro-optimisation, avoiding reflection matters. If a library can accept any struct without regard for type, it's using reflection, and that has an impact on performance.&lt;/p&gt;

&lt;p&gt;For example, Logrus doesn't care about the type, though obviously Go needs to know (eventually). Logrus uses reflection to detect the type, which is overhead.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  log.WithFields(log.Fields{
    "animal": myWhatever,
  }).Info("A walrus appears")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What to use&lt;/strong&gt; I prefer &lt;a href="https://github.com/rs/zerolog"&gt;zerolog&lt;/a&gt;, but &lt;a href="https://github.com/uber-go/zap"&gt;zap&lt;/a&gt; isn't bad either. Both boast zero-allocation, which is what you want for a task that should have the smallest possible impact on your application.&lt;/p&gt;
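
&lt;p&gt;For comparison, here's roughly the same log line in zerolog (a sketch based on zerolog's documented API; the field values are made up):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
	"os"

	"github.com/rs/zerolog"
)

func main() {
	logger := zerolog.New(os.Stdout)

	// each field has a typed method (Str, Int, ...), so the
	// logger never has to reflect on an arbitrary value
	logger.Info().Str("animal", "walrus").Msg("A walrus appears")
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The typed field methods are exactly what lets zerolog avoid the reflection (and the allocations) that log.Fields requires.&lt;/p&gt;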

&lt;h3&gt;Don't use encoding/json&lt;/h3&gt;

&lt;p&gt;Lots of people recommend using the standard library before looking at anything else. I call the &lt;code&gt;encoding/json&lt;/code&gt; module an exception. Like the case above, &lt;code&gt;encoding/json&lt;/code&gt; uses reflection. This is not efficient, and it can take a toll when writing APIs that return json responses (or any kind of microservice where reading/writing json is important).&lt;/p&gt;

&lt;p&gt;Take a look &lt;a href="https://yalantis.com/blog/speed-up-json-encoding-decoding/#:~:text=The%20benchmark%20tests%20showed%20that,in%20comparison%20with%20other%20packages."&gt;here&lt;/a&gt; for some alternatives/benchmarks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What to use&lt;/strong&gt; &lt;a href="https://github.com/mailru/easyjson"&gt;Easyjson&lt;/a&gt; is at the top of the pack and it's straightforward. The downside of efficient tools like this is that they use code generation to create the code required to turn your structs into json with minimal allocations. That's a manual build step, which is annoying. Interestingly, &lt;a href="https://github.com/json-iterator/go"&gt;json-iterator&lt;/a&gt; also uses reflection but it's significantly faster. I suspect black magic.&lt;/p&gt;
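
&lt;p&gt;To make the reflection cost concrete: the stdlib call below works on any struct precisely because it inspects the struct's fields and tags at runtime (the Animal type here is just an illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
	"encoding/json"
	"fmt"
)

// Animal is an illustrative type; encoding/json discovers its
// fields and `json:"..."` tags at runtime, via reflection.
type Animal struct {
	Name string `json:"name"`
	Legs int    `json:"legs"`
}

func main() {
	b, err := json.Marshal(Animal{Name: "walrus", Legs: 4})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // {"name":"walrus","legs":4}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Code generators like easyjson instead emit a MarshalJSON method for Animal at build time, so this runtime inspection disappears.&lt;/p&gt;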

&lt;h3&gt;Do not use closures as goroutines&lt;/h3&gt;

&lt;p&gt;Here's a basic example code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for i := 0; i &amp;lt; 10; i++ {
  go func() {
     fmt.Println(i)
  }()
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Most people would expect this to print the numbers 0 to 9 in some random order, as one would when delegating the task to goroutines. &lt;/p&gt;

&lt;p&gt;Actual result: depending on the system, you will get one or two other numbers and a lot of 10's. &lt;/p&gt;

&lt;p&gt;Why? &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Closures have access to the parent scope, so variables can be used directly. You will not be asked to redeclare them, though up-to-date linters may warn you about "variable closure capture".&lt;/li&gt;
&lt;li&gt;The closure captures the variable &lt;code&gt;i&lt;/code&gt; itself, not its value at the moment the goroutine is launched. Launching a goroutine doesn't run it immediately; by the time the scheduler gets to most of them, the loop has already finished, so when they read &lt;code&gt;i&lt;/code&gt; from the parent scope it is 10. Exactly which values you see isn't guaranteed and changes between executions of this code (you will get some random values alongside all the 10's).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;What to use&lt;/strong&gt; I don't quite see a reason to use closures like this. It's much cleaner and more readable to just create a named function, which gets a scope of its own. If you do use a closure for whatever reason, &lt;em&gt;pass the variables&lt;/em&gt;! Treat the closure as you would any other function. &lt;/p&gt;
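
&lt;p&gt;A fixed version of the loop above, passing &lt;code&gt;i&lt;/code&gt; as an argument (the &lt;code&gt;sync.WaitGroup&lt;/code&gt; is only there so the program doesn't exit before the goroutines run):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	for i := 0; i &amp;lt; 10; i++ {
		wg.Add(1)
		// each goroutine receives its own copy of the current value of i
		go func(n int) {
			defer wg.Done()
			fmt.Println(n)
		}(i)
	}
	wg.Wait()
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This prints 0 through 9 (in arbitrary order), because each goroutine received the value of &lt;code&gt;i&lt;/code&gt; at launch time rather than a reference to the shared variable.&lt;/p&gt;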

&lt;p&gt;Cheers!&lt;/p&gt;

</description>
      <category>go</category>
      <category>development</category>
    </item>
    <item>
      <title>It's not the language - it's you</title>
      <dc:creator>Andrei Dascalu</dc:creator>
      <pubDate>Tue, 21 Sep 2021 18:59:34 +0000</pubDate>
      <link>https://dev.to/andreidascalu/it-s-not-the-language-it-s-you-17h8</link>
      <guid>https://dev.to/andreidascalu/it-s-not-the-language-it-s-you-17h8</guid>
      <description>&lt;p&gt;Note: this is a post created with respect to the discussion stemming from &lt;a href="https://dev.to/jorgecc/i-regret-using-php-4b5a"&gt;this post&lt;/a&gt;. I felt the need to write a reaction (it's not a rebuttal, sorry - it's not in support either) because the post makes some good arguments yet it was met with the sort of fanatical derision which is fairly common in the PHP world (there is absolutely nothing questionable about the pinnacle of computing technology that is PHP).&lt;/p&gt;

&lt;h3&gt;it's not the language, it's you&lt;/h3&gt;

&lt;p&gt;This is a reply in one of the comments. In an absolute way, it's definitely true. &lt;/p&gt;

&lt;p&gt;A language is a rather inert thing. Sure, it evolves as it's developed by its creators/maintainers, but it does so in line with its own ways and philosophies, which may not align with what you think is best.&lt;/p&gt;

&lt;p&gt;There's no guarantee that your way and the way of the language will converge and you, as a developer, need to be aware of that when making a choice.&lt;/p&gt;

&lt;p&gt;That's not to say a language is perfect. It's to say that it's an evolving tool which you use, but don't control. &lt;/p&gt;

&lt;p&gt;Of course, as a developer you will not become an expert before deciding whether it's usable to you or not and there's also some expectation that a language will be consistent with itself as well as intuitive in line with its philosophies.&lt;/p&gt;

&lt;h3&gt;a bad language&lt;/h3&gt;

&lt;p&gt;No language is perfect, not to everyone at least. People will complain that Go(lang) doesn't do more to adopt OOP concepts. People will complain about typing in Javascript. People will complain about the over restrictive OOP (every &lt;em&gt;bleeping&lt;/em&gt; thing is an object) in Java. And so on.&lt;/p&gt;

&lt;p&gt;Developers do need to go out of their base language and learn a bit about another language with a critical eye on their own main tool. It's good to have a main tool, just don't live with the illusion of perfection.&lt;/p&gt;

&lt;h3&gt;how to tell&lt;/h3&gt;

&lt;p&gt;There's a simple way. To compensate for bad design, people create tools to make up for shortcomings in the language. That's ok, people should try to make things better. But it's a mistake not to recognise that a top reason for tooling is that those things are commonly used yet not part of the language package.&lt;/p&gt;

&lt;p&gt;Sometimes they do become part of the language, as part of its evolution. It's a great way to recognise that the needs of developers evolve and that the language is supportive of that. The best languages are those that evolve and incorporate (and are not afraid to remove stuff as well).&lt;/p&gt;

&lt;p&gt;Some examples? In PHP, since we've started there.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the large list of built-in utility functions (mainly for arrays) which are shorthands for mapping functions. Of all the things that could live in a library rather than the core language, those are it.&lt;/li&gt;
&lt;li&gt;inconsistently parameterised functions (arrays and strings mainly): sometimes a function changes the subject itself (by reference, orly?!), sometimes the subject is the first parameter, other times it's the last. Annoying more than anything, but it speaks to consistency.&lt;/li&gt;
&lt;li&gt;lack of standard serialization (xml/json). PHP has its own serialization format (useless for data exchange with anything else, which is the very definition of serialization) but no proper standard json/xml support. It does have support in the sense that you can easily serialize arrays, but not objects. Say what? Yes, you can json_encode an instance of class A, but decoding will give you a stdClass.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Some non-examples? PHP has worked hard to offer optional strong(er) typing. Since PHP is an interpreted, weakly-typed language by philosophy, this is not a point I would hold against it. That's part of its reason to exist and its conception. But it's still a weakness - a recognised one.&lt;/p&gt;

&lt;p&gt;Why is it a weakness? Because people don't just fight hard to make up for it via tools and libs (see not just the number of static code analysis tools, but also what they check for) - those tools are also pretty much a default part of a project setup. What, you don't have PHPStan/EasyCodingStandard/PHPCS/PHPMD and so on...? Yes, they exist to make up for PHP's weaknesses. Which is good, but it would be a mistake not to recognise their limits.&lt;/p&gt;

&lt;p&gt;The next step would be to make them part of the standard PHP setup. Honestly that would be way more useful than more "utility" functions in the language.&lt;/p&gt;

&lt;h3&gt;takeaway&lt;/h3&gt;

&lt;p&gt;It's good to know how to critically evaluate a language. A language should have features and a philosophy, but it should also be consistent with that philosophy and willing to become better.&lt;/p&gt;

</description>
      <category>development</category>
      <category>php</category>
      <category>watercooler</category>
    </item>
    <item>
      <title>What do you use for CI/CD ?</title>
      <dc:creator>Andrei Dascalu</dc:creator>
      <pubDate>Mon, 20 Sep 2021 17:59:02 +0000</pubDate>
      <link>https://dev.to/andreidascalu/what-do-you-use-for-ci-cd-2ae1</link>
      <guid>https://dev.to/andreidascalu/what-do-you-use-for-ci-cd-2ae1</guid>
      <description>&lt;p&gt;Hi everyone!&lt;/p&gt;

&lt;p&gt;I'm really curious what you and/or your company uses for CI/CD (and why, if you happen to know why the current solution was chosen over something else), whether it's one tool or a mix to cover the full process.&lt;br&gt;
Are you using the tools provided by your VCS provider (Bitbucket / Github / Gitlab), Cloud provider (Azure Devops, Google Cloudbuild ...), the classics (Jenkins / Spinnaker), the Cloud Native bunch (Argo, Tekton, JenkinsX, Flux), plain old bash/Python scripts ... you name it (and please name it!)&lt;/p&gt;

</description>
      <category>devops</category>
      <category>development</category>
      <category>watercooler</category>
      <category>discuss</category>
    </item>
    <item>
      <title>On the importance of DevOps</title>
      <dc:creator>Andrei Dascalu</dc:creator>
      <pubDate>Fri, 17 Sep 2021 07:14:08 +0000</pubDate>
      <link>https://dev.to/andreidascalu/on-the-importance-of-devops-1fn6</link>
      <guid>https://dev.to/andreidascalu/on-the-importance-of-devops-1fn6</guid>
      <description>&lt;p&gt;Before we get down to the subject, I need to clarify that my take on "devops" is not one of DevOps engineers or DevOps as a role, but the "original" meaning of DevOps when it was coined: a practice that brings together ops and development for the purpose of enabling development teams to own the full lifecycle of an application, from inception to deployment.&lt;/p&gt;

&lt;h2&gt;The Problem&lt;/h2&gt;

&lt;p&gt;I will (once again) use PHP as an example, mostly because it poses some interesting challenges from this perspective. It's also related to a recent experience on a project I've been involved with.&lt;/p&gt;

&lt;p&gt;The challenge is fairly straightforward.&lt;/p&gt;

&lt;p&gt;You're working on an application. You write your code, unit tests and so on. Once your code passes code review and your coverage (unit tests, integration and so on) is insane enough, you press a button that takes the application, deploys it to a staging environment where it's subjected to a set of automated tests, and on success it's already in production!&lt;/p&gt;

&lt;p&gt;It's pretty great, and in line with about 50% of the projects I've worked on in the past few years.&lt;/p&gt;

&lt;p&gt;Then, say, you're developing a feature that requires interactions with RabbitMQ. You need the amqp extension made available in your development stack (which is managed by a team similar to the people managing all the production infrastructure) as well as in prod. You can't really just click to deploy anymore, since your code will break without the extension. Or perhaps you're fixing something that also requires changes to OpCache or memory allocation (or any PHP-related configuration).&lt;/p&gt;

&lt;p&gt;What do you do? Well, in a non-devops-ish practice, you'd ask a team (or two, in the case of my project) to do that for you and let you know when it's done, so you can proceed with merging. Of course, the configuration changes also need to be backward-ish compatible, in the sense that they won't break anything for people who might get the updated dev environment before your changes are merged in.&lt;/p&gt;

&lt;p&gt;Wouldn't it be great if there was a way to simply make things work as intended, in a way that no change pertaining to the application would depend on work done by a different team (which might block your delivery or have other repercussions)?&lt;/p&gt;

&lt;h2&gt;The Solution&lt;/h2&gt;

&lt;p&gt;DevOps. It means that your team should have sufficient knowledge and the ability to do this work on their own (either by having specialised people or by spreading the knowledge).&lt;/p&gt;

&lt;p&gt;Personally I prefer the latter, because the vast majority of changes are limited in scope, and it's still fine to call on more specialised knowledge for those rare cases of large-scale changes (or when a change touches the infrastructure itself).&lt;/p&gt;

&lt;p&gt;Basically, it boils down to redefining configuration that belongs to the platform and/or application as part of the application rather than part of the infrastructure.&lt;/p&gt;

&lt;p&gt;Docker is a pretty great tool for this. In the example above, you would have a Dockerfile defining the runtime of your application with its dependencies (as well as php ini files per various environments). When I need to add the amqp extension, I do it in the Dockerfile in the same merge request containing my other changes. It will reach deployment as well as the other devs at the same time when they pull the changes. My requirements and configuration travel together with the rest of my changes. &lt;strong&gt;I can click to deploy safely&lt;/strong&gt;.&lt;/p&gt;
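
&lt;p&gt;As a rough sketch (the base image tag and file paths here are illustrative, not from the project in question), the amqp change could look like this in the application's Dockerfile, using the tooling shipped with the official PHP images:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM php:8.0-fpm

# the extension ships with the application, not with the infrastructure
RUN apt-get update \
 &amp;amp;&amp;amp; apt-get install -y librabbitmq-dev \
 &amp;amp;&amp;amp; pecl install amqp \
 &amp;amp;&amp;amp; docker-php-ext-enable amqp

# environment-specific php.ini files travel with the code too
COPY docker/php.ini /usr/local/etc/php/conf.d/app.ini
COPY . /var/www/html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Reviewers then see the extension change in the same merge request as the code that needs it.&lt;/p&gt;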

&lt;p&gt;Of course, if you don't use this model (in whole or in part), there are changes needed to get there.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;As an organisation you must not be afraid of people learning. Learning is scary because when people do something even a bit different, it does affect their "productivity".&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You must not "silo" - don't restrict the knowledge. When I proposed the Docker change, I was told that "this is not who we are, we don't want to maintain this" - missing the point that knowledge is distributed to empower people to make the changes they need when they need them.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Development is a fast-paced domain and silos slow things down. Being even a bit polyvalent is a great advantage.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;yes, Docker Desktop is now a paid tool for large companies. But no, &lt;em&gt;docker daemon&lt;/em&gt; &amp;amp; &lt;em&gt;docker cli&lt;/em&gt; are still free, though a bit more difficult to install now (most people are used to installing everything through Docker Desktop)&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devops</category>
      <category>development</category>
      <category>agility</category>
    </item>
    <item>
      <title>Apple vs the world - the power of tinkering</title>
      <dc:creator>Andrei Dascalu</dc:creator>
      <pubDate>Sun, 12 Sep 2021 13:24:30 +0000</pubDate>
      <link>https://dev.to/andreidascalu/apple-vs-the-world-the-power-of-tinkering-1cpp</link>
      <guid>https://dev.to/andreidascalu/apple-vs-the-world-the-power-of-tinkering-1cpp</guid>
      <description>&lt;p&gt;I was shocked to discover quite recently that I've become an Apple user. For years I've been a Linux man (intermittently using a Mac, offered by my employer), while using an Android phone since the very first commercially available version. I've never appreciated Apple's universe and locked-in platform, though I did admire its seamless interconnectivity and how the ecosystem worked together.&lt;/p&gt;

&lt;p&gt;Another thing I've admired about Apple (design aside - matter of taste) is that it is a giant which competes with other giants on a number of fronts. It competes with Google in the mobile OS market (and smart home), against a number of competitors (led by Samsung) in the mobile device category, against Microsoft on computer OS and against a multitude of desktop/laptop makers.&lt;/p&gt;

&lt;p&gt;Its offer is quite interesting - an integrated ecosystem of devices that "just work" together, with simple operation that removes any entry overhead so that whether you're a creator (developer, musician, etc) or just consumer, you can get right down to what you're doing and forget anything else.&lt;/p&gt;

&lt;p&gt;There was a time when I would appreciate the ability to tinker with my system. The endless customisation options of Linux to create that perfect desktop system that blends beauty and functionality (hello &lt;code&gt;Enlightenment&lt;/code&gt;). The straightforward plugin development for &lt;code&gt;Sublime Text&lt;/code&gt;. The ability to create interoperability with other devices (rooting and customizing Android devices) and so on.&lt;/p&gt;

&lt;p&gt;Nowadays, my focus is on just doing the work. I tinker, but with the "outer" technologies that I learn on the job (Kubernetes, Golang, all sorts of frameworks and so on) and I like my local environment to be stable and just support that (preferably out of the box).&lt;/p&gt;

&lt;p&gt;My old self would have never seriously considered Apple. My current self has come to appreciate the offer. But it's an evolution, I think.&lt;/p&gt;

&lt;p&gt;Looking back, joining the "get to business" mindset would have deprived me of a lot of learning. Tinkering with Linux allowed me to learn (besides scripting) a lot about the inner workings of an OS. Creating plugins for Sublime allowed me to learn a lot about programming and what it means to parse code (as well as natural language and string processing). Tinkering with Android (and briefly creating my own flavour) allowed me to learn about ARM architecture and the composition of a mobile OS, drivers and so on.&lt;/p&gt;

&lt;p&gt;Tinkering is great and I consider it to be a hallmark of the inquisitive mind. It's a great learning process that comes out of passion for technology and it also allows the discovery of your true interests. It sparks creativity.&lt;/p&gt;

&lt;p&gt;But it's still just a phase (albeit an essential one). There's no shame in specialising, creating your recipes and algorithms and then applying them. That's what experience is ... as long as you don't allow this to become a closed box. Or, if you do, do try to peek outside every now and then.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Layered environments</title>
      <dc:creator>Andrei Dascalu</dc:creator>
      <pubDate>Thu, 19 Aug 2021 08:21:41 +0000</pubDate>
      <link>https://dev.to/andreidascalu/layered-environments-14im</link>
      <guid>https://dev.to/andreidascalu/layered-environments-14im</guid>
      <description>&lt;p&gt;... in progress&lt;/p&gt;

</description>
      <category>azure</category>
      <category>php</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Refactoring - Migrating to a cloud provider</title>
      <dc:creator>Andrei Dascalu</dc:creator>
      <pubDate>Thu, 19 Aug 2021 08:20:36 +0000</pubDate>
      <link>https://dev.to/andreidascalu/refactoring-migrating-to-a-cloud-provider-o01</link>
      <guid>https://dev.to/andreidascalu/refactoring-migrating-to-a-cloud-provider-o01</guid>
      <description>&lt;p&gt;Strangely this challenge proved to be the most straightforward bit.&lt;/p&gt;

&lt;p&gt;The customer imposed Azure as a cloud, so that was that.&lt;/p&gt;

&lt;p&gt;Our requirements for this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;shared (not dedicated) application infrastructure. To control costs, we don't want to dedicate infrastructure to customers. A full-pipeline production customer may have 5 base environments, each with some number of running instances. We don't want to automatically add a 32GB VM when they need another instance while there may be unused resources on an existing one. We also don't want to manually provision smaller ones or have a gazillion different VM pools.&lt;/li&gt;
&lt;li&gt;easy way for developers to cough up a new environment without micromanaging routes&lt;/li&gt;
&lt;li&gt;a customer will have the following environment levels: dev (automatically or manually deployed)/auto (for automated testing)/test (our acceptance and some manual testing)/accept (customer acceptance)/prod&lt;/li&gt;
&lt;li&gt;each given environment could be scaled up to a number of running instances, automatically or manually&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;On our side, we looked at traditional deployment pipelines that would take the code and script a delivery process on a VM which would be part of the Azure equivalent of AWS' autoscaling groups. That would mean, operationally, to maintain some routing lists on the external load balancer level.&lt;/p&gt;

&lt;p&gt;However, this would mean that the load balancer would have to route a given domain or path rule to a given VM (or to all VMs in a group), so we would have to provision and configure a different local proxy if we wanted to have multiple environments on a VM.&lt;/p&gt;

&lt;p&gt;For example, our load balancer would need to route, say, &lt;code&gt;*.customer1.com&lt;/code&gt;, &lt;code&gt;*.customer2.com&lt;/code&gt; and so on. But where? We don't know which VM a running instance may be on. We could label them, but then when scaling happens we need to make sure an instance only has the proper labels to service a given customer. Also, we don't have different load balancers per customer.&lt;/p&gt;

&lt;p&gt;The existing system was sort-of configured like this, except that the local proxy was a single Apache instance that also handled the PHP interpretation. Multi-tenant done properly (with shared infrastructure) would mean dedicated webservers which could be restarted individually, with the common routing done at proxy level.&lt;/p&gt;

&lt;p&gt;Too complicated to do manually ...&lt;/p&gt;

&lt;p&gt;But fortunately most of us were versed in the art of containers and we managed to cook up a Dockerized development environment in a couple of days. It was a no-brainer then to decide to use Kubernetes in Azure.&lt;/p&gt;

&lt;p&gt;The system went like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Azure AKS with nginx-ingress and a couple of static IPs (both outgoing and incoming)&lt;/li&gt;
&lt;li&gt;configmaps would hold the per-customer configuration&lt;/li&gt;
&lt;li&gt;a build would create and push a container to a registry&lt;/li&gt;
&lt;li&gt;a daemon inside the AKS cluster itself would poll the registry and deploy new builds automatically to QA environments&lt;/li&gt;
&lt;li&gt;HPAs would enable some basic autoscaling based on memory/CPU usage but later we would add more interesting rules.&lt;/li&gt;
&lt;/ul&gt;
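&lt;p&gt;With nginx-ingress in place, the routing problem from earlier dissolves into per-environment Ingress rules: the controller watches them and updates its own routing, so nobody maintains routing lists by hand. A minimal sketch of what one such rule could look like (hostnames and service names here are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: customer1-test
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: test.customer1.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: customer1-test-web
                port:
                  number: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;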

&lt;p&gt;Changes done to the application:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;make it stateless (this was very time consuming): since containers are disposable, the application must not write files in local paths (or even shared paths, if multiple instances are expected to run when scaled up) which are needed later (for example: file uploads).&lt;/li&gt;
&lt;li&gt;logging to stdout: AKS collects stdout/stderr from containers, so the application should not write logs to files, but directly to output. Fortunately, there's &lt;code&gt;Monolog&lt;/code&gt;!&lt;/li&gt;
&lt;li&gt;use Azure for customer uploads: there's a thing called &lt;code&gt;Flysystem&lt;/code&gt; which provides a filesystem abstraction that allows seamless access between local filesystem (like copy from local &lt;code&gt;tmp&lt;/code&gt;) and various cloud storage systems.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Developer experience:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a developer would need to copy/adjust a deployment/configmap/service and ultimately ingress, usually by editing out the relevant labels&lt;/li&gt;
&lt;li&gt;we ended up scripting with &lt;code&gt;yq&lt;/code&gt; (CLI yaml find/replace tool) and later on packaging with helm&lt;/li&gt;
&lt;li&gt;much later the configmaps were encrypted with &lt;code&gt;sops&lt;/code&gt; and Azure KM and kept in codebase.&lt;/li&gt;
&lt;/ul&gt;
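&lt;p&gt;The copy/adjust step boils down to stamping per-customer values into a manifest template. We scripted that with &lt;code&gt;yq&lt;/code&gt;; as a portable illustration of the same find/replace idea, here's the shape of it with &lt;code&gt;sed&lt;/code&gt; (the placeholder and label names are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# a manifest template kept in the repo (hypothetical snippet)
template='metadata:
  labels:
    customer: __CUSTOMER__
    env: __ENV__'

# stamp out a per-customer, per-environment manifest
echo "$template" | sed -e 's/__CUSTOMER__/customer1/' -e 's/__ENV__/test/'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;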

&lt;p&gt;Phew! This was by far the fastest bit. Two days to make enough changes for a local docker-compose setup, two more for the initial setup in Azure ... but quite some time to make the application stateless. Uploads were a fairly quick thing to do, but for some time afterwards we kept discovering unexpected places where the application relied on locally produced files. Of course, often-used features were quickly discovered and fixed, but more obscure ones came back with a vengeance (then again, obscure features were always a pain since they never found a place in test suites).&lt;/p&gt;

&lt;p&gt;Onwards, to glory!&lt;/p&gt;

</description>
      <category>php</category>
      <category>azure</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Refactoring Legacy - Intro</title>
      <dc:creator>Andrei Dascalu</dc:creator>
      <pubDate>Thu, 19 Aug 2021 07:58:05 +0000</pubDate>
      <link>https://dev.to/andreidascalu/refactoring-legacy-intro-1f1k</link>
      <guid>https://dev.to/andreidascalu/refactoring-legacy-intro-1f1k</guid>
      <description>&lt;p&gt;This post is a summary of a couple of years spent battling a legacy application. This is not (necessarily) a how-to because I have yet to declare a "mission accomplished" here and whether it's a success or not, it's too soon to call. &lt;/p&gt;

&lt;p&gt;While this represents a particular set of circumstances, I believe it can help you ask the right questions in the beginning, take into account a variety of paths and also give the benefit of the doubt to those whose mess you've inherited (they were doing their best at the time).&lt;/p&gt;

&lt;p&gt;The application:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;legacy PHP, started with early 5.x versions and patched just enough so that it runs on 7.2&lt;/li&gt;
&lt;li&gt;really old-school: html is formed by echo-ing stuff in random scripts that are "require'd" in the right order. Warnings are totally silenced, nobody cared about a missing array item when fetching a deeper array entry.&lt;/li&gt;
&lt;li&gt;evidence that someone tried to MVC the whole thing, but not by introducing, say, Symfony components but by creating scripts that act like views (just echo stuff, but without buffering it into a final string - echo wherever/whenever) and some classes - but missing a proper service layer&lt;/li&gt;
&lt;li&gt;database access done via a class that mysqli_connects, no caching of any kind &lt;/li&gt;
&lt;li&gt;some models carry state but are still chock-full of static methods that fetch fresh data, as if someone wanted to go more Doctrine-style but someone else later wanted a poor man's Eloquent Active Record&lt;/li&gt;
&lt;li&gt;sometimes the same query ended up being done against the DB 10s of times during a request (well, 29 was the top I've seen myself)&lt;/li&gt;
&lt;li&gt;running on manually maintained VMs, deploy manually via "git pull" then clear some local static files then restart Apache&lt;/li&gt;
&lt;li&gt;the whole thing was becoming multi-tenant&lt;/li&gt;
&lt;li&gt;the application was heavily dependent on data fetched from remote APIs and had to push data back to the same APIs (some were REST, some SOAP)&lt;/li&gt;
&lt;li&gt;non-existent error handling: warnings were silenced, catching was used to ignore errors (for the most part), and there was virtually no logging&lt;/li&gt;
&lt;li&gt;objects were used as data models only for the DB; for SOAP/REST (JSON), only arrays/stdClass were used to serialise/deserialise (always assuming that some item exists, based on the developer's knowledge of the structure of a given request)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Challenges:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;migrate to a cloud provider and add scalability (automatic, preferably)&lt;/li&gt;
&lt;li&gt;create a layered system of environments for each tenant with automatic deployment on a QA environment and manual promotion for local (our) acceptance then customer acceptance then prod.&lt;/li&gt;
&lt;li&gt;tackle a number of bugs in the code. Right off the bat it was clear most were linked to the hidden warnings, where the code would for example expect &lt;code&gt;$array['level1']['level2']&lt;/code&gt; but 'level2' may be missing due to some buggy change yet that fact was hidden from developers since warnings were silenced.&lt;/li&gt;
&lt;li&gt;create a system of aggregated, structured logging&lt;/li&gt;
&lt;li&gt;integrate more external APIs&lt;/li&gt;
&lt;li&gt;tackle an existing feature backlog&lt;/li&gt;
&lt;li&gt;improve performance&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let's explore them, as part of the series!&lt;/p&gt;

</description>
      <category>php</category>
      <category>refactoring</category>
      <category>legacy</category>
      <category>development</category>
    </item>
    <item>
      <title>Make containers small again</title>
      <dc:creator>Andrei Dascalu</dc:creator>
      <pubDate>Sat, 31 Jul 2021 14:40:34 +0000</pubDate>
      <link>https://dev.to/andreidascalu/make-containers-small-again-3lgg</link>
      <guid>https://dev.to/andreidascalu/make-containers-small-again-3lgg</guid>
      <description>&lt;p&gt;Making small containers is undoubtedly an art. It's an importat art in today's development (of any kind) where container are quite ubiquitous. But why?&lt;/p&gt;

&lt;h3&gt;
  
  
  Small containers
&lt;/h3&gt;

&lt;p&gt;Are small. Meaning they take up less space and space can be expensive in hosted registries. It's not a huge deal but it's a small thing that helps.&lt;/p&gt;

&lt;p&gt;But size doesn't just take up storage. Pulling a container (to deploy it) also takes bandwidth. Smaller containers are faster to download, which means faster to deploy in a production environment.&lt;/p&gt;

&lt;p&gt;Small containers are usually small because they contain less stuff. Less stuff means a smaller footprint and it means better security.&lt;/p&gt;

&lt;p&gt;To observe these two things about your containers, I recommend two important tools.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/wagoodman/dive"&gt;dive&lt;/a&gt; allows you explore your containers' layers (filesystem but also each layer and the command that produced it). Each layer is defined by the command that created it and its size.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/aquasecurity/trivy"&gt;trivy&lt;/a&gt; by AquaSecurity is a static scanning tool that explores the content of your images and lists any security advisory related to the contents. Although &lt;em&gt;docker scan&lt;/em&gt; has been available for the past minor version (using &lt;strong&gt;snyk&lt;/strong&gt;), two source for security evaluation can be helpful.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to
&lt;/h3&gt;

&lt;p&gt;Well, since we're talking size, the main ideas are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;use the smallest base image that you can&lt;/em&gt; - gone are the days where your base Ubuntu was 400MB. Now any respectable base distro comes under 100MB. The smallest full-featured base is by far &lt;strong&gt;alpine&lt;/strong&gt; at around 5MB, but we can't forget &lt;strong&gt;scratch&lt;/strong&gt;, which is a completely empty image (the kernel always comes from the host anyway). Scratch is useful when your application can be distributed as a single binary and you only need a filesystem to put it in, without any other amenities (package manager, curl, etc). If you can use &lt;strong&gt;scratch&lt;/strong&gt;, do it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;use the same base image across your builds, if possible&lt;/em&gt; - why? Because images come in layers. Docker caches and reuses layers, so if you have X builds but each adds only one layer on top of a shared base, you only have X+1 layers, not 2X layers. When you pull/deploy, the base will be reused. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;create your own base images&lt;/em&gt; - this doesn't mean create a distro but rather if you notice a number of repeated steps you take in your builds, it's better to create your own base (starting from those steps) for maximising layer reusability between your final images.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;use multi stage builds&lt;/em&gt; - multi stage builds discard intermediary containers, so you can install your build tools and perform the build in one stage, then have the next stage copy the build output from the previous one - thus you don't need to clean up build tools. This works great for frontend builds (step 1: get yarn/node/etc, fetch packages and build the static resources; step 2: from an nginx base, copy the static build from step 1 and add nginx configuration =&amp;gt; success!)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;merge Dockerfile commands if you can&lt;/em&gt; - mainly RUN and ENV commands can be merged together. Each individual command creates a layer, so it stands to reason to join multiple RUN commands (via &amp;amp;&amp;amp;) and minimise the layer count.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;don't install crap you don't need&lt;/em&gt; - in a different post here I made a comment on a recommended Docker build for Go applications. Most Go Dockerfiles you will find will have you make the build in step 1, copy the binary in step 2 and add &lt;em&gt;ca-certificates&lt;/em&gt; in an alpine base. You don't need alpine or ca-certificates (unless your application makes external HTTPS calls). You don't need apk (or generally, you shouldn't need a package manager in a production build).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
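&lt;p&gt;To make the multi stage idea concrete, here's a sketch of the frontend build described above (the base image tags, config and paths are illustrative, not prescriptive):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# stage 1: build the static resources with the full node toolchain
FROM node:16-alpine AS build
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
COPY . .
RUN yarn build

# stage 2: only the build output ships; the toolchain stage is discarded
FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=build /app/dist /usr/share/nginx/html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;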

&lt;p&gt;Building small containers is a must in a security-conscious enterprise. Performance (overall) is usually death by a thousand cuts: very rarely is it about one big issue, rather about a thousand small ones - and size matters!&lt;/p&gt;

</description>
      <category>docker</category>
      <category>go</category>
      <category>development</category>
    </item>
    <item>
      <title>Docker cleanup: images by name search</title>
      <dc:creator>Andrei Dascalu</dc:creator>
      <pubDate>Fri, 23 Jul 2021 07:49:17 +0000</pubDate>
      <link>https://dev.to/andreidascalu/docker-cleanup-images-by-name-search-bpk</link>
      <guid>https://dev.to/andreidascalu/docker-cleanup-images-by-name-search-bpk</guid>
      <description>&lt;p&gt;There's nothing more annoying than collecting docker images locally and suddenly stateful application containers (like databases, mySQL, RabbitMQ) suddenly exiting because they don't have space left on the (usually VM) where they are running.&lt;/p&gt;

&lt;p&gt;You can &lt;code&gt;docker system prune -a&lt;/code&gt; to clean as much as possible or you can &lt;code&gt;docker rmi&lt;/code&gt; everything, but what if you want to do a bit more targeted cleaning?&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker image ls&lt;/code&gt; will list all images with some info split in columns. First column is the image name, the second column contains the tag.&lt;/p&gt;

&lt;p&gt;Now, we know that images share layers so deleting one won't necessarily free up all the space since some layers may still be linked to other images but we can improve cleanup by untagging images. How to do that in a more targeted way? By listing them and then joining the name column and the tag column and then passing it all to &lt;code&gt;docker rmi&lt;/code&gt;. AWK comes to the rescue!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker image ls | grep "MY SEARCH" | awk '{print $1 ":" $2}' | xargs docker rmi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Of course, feel free to replace grep with your preferred search command as long as its output only filters &lt;code&gt;docker image ls&lt;/code&gt; output and doesn't extract information from it, as AWK expects the columnised output.&lt;/p&gt;

&lt;p&gt;Note: you should still do a&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker system prune
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;afterwards, since untagging only unlinks layers - they may remain stored locally. &lt;code&gt;docker system prune&lt;/code&gt; will remove the dangling ones (eg: those not tied to tagged images or to running containers).&lt;/p&gt;

</description>
      <category>docker</category>
      <category>development</category>
    </item>
  </channel>
</rss>
