<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Trikoder</title>
    <description>The latest articles on DEV Community by Trikoder (@trikoder).</description>
    <link>https://dev.to/trikoder</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F3732%2Fd382461d-87de-4b7b-aae9-b3d2fbf26c5b.png</url>
      <title>DEV Community: Trikoder</title>
      <link>https://dev.to/trikoder</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/trikoder"/>
    <language>en</language>
    <item>
      <title>Fun times with MySQL upgrade</title>
      <dc:creator>Eva Marija Banaj Gađa</dc:creator>
      <pubDate>Fri, 17 Dec 2021 09:25:00 +0000</pubDate>
      <link>https://dev.to/trikoder/fun-times-with-mysql-upgrade-1ei4</link>
      <guid>https://dev.to/trikoder/fun-times-with-mysql-upgrade-1ei4</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AhjC30NH5V2DHVq-POKlMpg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AhjC30NH5V2DHVq-POKlMpg.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A MySQL upgrade is like any other service upgrade: bump the service version, build the Docker image, try to “make up“ the project and hope for the best. I’ve decided to dedicate this blog post to the four things that turned “it is going to be an easy project” into “8 months in hell while upgrading MySQL from v5.6 to v8.0”.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Fantastic SQL modes and where to find them
&lt;/h3&gt;

&lt;p&gt;While looking at all the changes made to MySQL between v5.6 and v8.0, I came across something rather interesting: a bunch of previously optional SQL modes had been made part of strict mode.&lt;/p&gt;

&lt;p&gt;To my surprise, only &lt;em&gt;NO_ENGINE_SUBSTITUTION&lt;/em&gt; mode was enabled in our database. 😕?!?! What could possibly go wrong after years of using the database with strict mode off?&lt;/p&gt;

&lt;p&gt;A lot of things, apparently, so I made a list of modes to enable or check before finally enabling &lt;em&gt;STRICT_TRANS_TABLES&lt;/em&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;ERROR_FOR_DIVISION_BY_ZERO&lt;/em&gt; → &lt;a href="https://dev.mysql.com/doc/refman/5.7/en/sql-mode.html#sqlmode_error_for_division_by_zero" rel="noopener noreferrer"&gt;in later versions, no longer an option but part of strict mode&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;NO_ZERO_IN_DATE&lt;/em&gt; → &lt;a href="https://dev.mysql.com/doc/refman/5.7/en/sql-mode.html#sqlmode_no_zero_in_date" rel="noopener noreferrer"&gt;in later versions, no longer an option but part of strict mode&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;NO_AUTO_CREATE_USER&lt;/em&gt; → &lt;a href="https://dev.mysql.com/doc/refman/5.7/en/sql-mode.html#sqlmode_no_auto_create_user" rel="noopener noreferrer"&gt;removed in version 8.0; accounts must be created explicitly with CREATE USER&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;NO_ZERO_DATE&lt;/em&gt; → &lt;a href="https://dev.mysql.com/doc/refman/5.7/en/sql-mode.html#sqlmode_no_zero_date" rel="noopener noreferrer"&gt;in later versions, no longer an option but part of strict mode&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;ONLY_FULL_GROUP_BY&lt;/em&gt; → decided to leave it disabled&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;NO_ENGINE_SUBSTITUTION&lt;/em&gt; → was already enabled&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;STRICT_TRANS_TABLES&lt;/em&gt; → needs to be enabled&lt;/li&gt;
&lt;/ul&gt;
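
&lt;p&gt;For illustration, this is roughly how you can check which modes are active and enable strict mode (the exact mode list here is an example; adjust it to your own setup):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- check the currently active modes
SELECT @@GLOBAL.sql_mode;

-- enable strict mode on top of what is already there
SET GLOBAL sql_mode = 'STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Setting it in &lt;em&gt;my.cnf&lt;/em&gt; as well makes the change survive a server restart.&lt;/p&gt;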

&lt;p&gt;This little side quest of cleaning up migrations, Doctrine entities that were out of sync with their database tables, fixtures that used null for everything regardless of whether a field was nullable, missing primary keys on tables and so on cost us 193.4 hours (around 33 days) just to make the application run with some additional SQL modes enabled.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Sphinx client issues
&lt;/h3&gt;

&lt;p&gt;Some legacy parts of our application still used Sphinx instead of ElasticSearch. We faced a tough choice then and there, because MySQL v8.0 introduced a new default authentication plugin, &lt;em&gt;caching_sha2_password&lt;/em&gt;, while Sphinx v2.2.4, which we were still using, relies on the old authentication plugin &lt;em&gt;mysql_native_password&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Connecting to Sphinx with a user that was using &lt;em&gt;caching_sha2_password&lt;/em&gt; authentication plugin resulted in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bash-4.4$ mysql -hsphinx -usphinx 
ERROR 2003 (HY000): Can't connect to MySQL server on 'sphinx' (111)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, we tried to create a user that used the old authentication plugin. That still resulted in an error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bash-4.4$ mysql -hsphinx -P9306 
ERROR 2000 (HY000): Unknown MySQL error
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
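
&lt;p&gt;For reference, creating a user on the legacy plugin looks roughly like this (the user name, host and password are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE USER 'sphinx_user'@'%' IDENTIFIED WITH mysql_native_password BY 'secret';

-- or switch an existing user to the old plugin
ALTER USER 'sphinx_user'@'%' IDENTIFIED WITH mysql_native_password BY 'secret';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;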



&lt;p&gt;&lt;a href="http://sphinxsearch.com/docs/sphinx3.html#version-3.1.1-17-oct-2018" rel="noopener noreferrer"&gt;Issues with connecting to MySQL client 8.0+ were fixed in Sphinx v3.1.1&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This left us with two possible ways to go:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Upgrade Sphinx to v3.1.1 in the legacy parts of the application that we all want removed anyway — not too much effort&lt;/li&gt;
&lt;li&gt;Remove Sphinx from the project — a small sub-project&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We went with the second option: a 751-hour endeavor for 5 people to remove Sphinx from the project.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. The magnificent world of charsets, collations and row formats
&lt;/h3&gt;

&lt;h4&gt;
  
  
  About charsets and collations
&lt;/h4&gt;

&lt;p&gt;MySQL uses &lt;em&gt;UTF8&lt;/em&gt; as an alias for the now deprecated &lt;em&gt;UTF8MB3&lt;/em&gt;. It is expected, at some point in the future, that &lt;em&gt;UTF8&lt;/em&gt; will become an alias for the &lt;em&gt;UTF8MB4&lt;/em&gt; charset. In a future MySQL release, &lt;em&gt;UTF8MB3&lt;/em&gt; should be removed. You can read more about it &lt;a href="https://dev.mysql.com/doc/refman/8.0/en/charset-unicode-utf8mb3.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To follow the recommendation, we decided to change our charset to &lt;em&gt;UTF8MB4&lt;/em&gt;. To match our brand new charset, we had to change the collation to one compatible with the &lt;em&gt;UTF8MB4&lt;/em&gt; charset.&lt;/p&gt;
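
&lt;p&gt;Converting is one statement per table; as a sketch (the table and database names are placeholders, and &lt;em&gt;utf8mb4_unicode_ci&lt;/em&gt; is just one compatible collation):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- convert an existing table, including its columns
ALTER TABLE my_table CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

-- change the default for newly created tables
ALTER DATABASE my_db CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;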

&lt;h4&gt;
  
  
  About row formats
&lt;/h4&gt;

&lt;p&gt;There are four row formats: &lt;em&gt;REDUNDANT, COMPACT, DYNAMIC&lt;/em&gt; and &lt;em&gt;COMPRESSED&lt;/em&gt;. MySQL v5.6 uses &lt;em&gt;COMPACT&lt;/em&gt; by default, while v5.7 and later use &lt;em&gt;DYNAMIC&lt;/em&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://dev.mysql.com/doc/refman/8.0/en/innodb-row-format.html" rel="noopener noreferrer"&gt;&lt;em&gt;The DYNAMIC row format offers the same storage characteristics as the COMPACT row format but adds enhanced storage capabilities for long variable-length columns and supports large index key prefixes.&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In our code, we like to use these “cutting edge” things from an “era gone by”, meaning we used the row format &lt;em&gt;FIXED&lt;/em&gt;. It is so deprecated that if &lt;em&gt;innodb_strict_mode&lt;/em&gt; is disabled, InnoDB issues a warning and assumes row format &lt;em&gt;DYNAMIC&lt;/em&gt;, and if &lt;em&gt;innodb_strict_mode&lt;/em&gt; is enabled, InnoDB returns an error. We replaced the &lt;em&gt;FIXED&lt;/em&gt; and &lt;em&gt;COMPACT&lt;/em&gt; row formats with &lt;em&gt;DYNAMIC&lt;/em&gt;.&lt;/p&gt;
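
&lt;p&gt;As a sketch of how the offending tables can be found and fixed in MySQL 8.0 (the table name is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- list tables that are not yet on DYNAMIC
SELECT name, row_format
FROM information_schema.innodb_tables
WHERE row_format != 'Dynamic';

-- convert one of them
ALTER TABLE my_table ROW_FORMAT=DYNAMIC;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;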

&lt;p&gt;There was a bug. 🐛&lt;/p&gt;

&lt;p&gt;If you try to create an index on a field that exceeds 767 bytes, you will get an error that looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ERROR 1709 (HY000): Index column size too large. The maximum column size is 767 bytes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But it was still possible to create such an index if you were using row format &lt;em&gt;COMPRESSED&lt;/em&gt;/&lt;em&gt;REDUNDANT&lt;/em&gt;, or if you didn’t explicitly define row format &lt;em&gt;DYNAMIC&lt;/em&gt;. The result: after a server restart, the table was inaccessible and could not be recovered. Luckily, this &lt;a href="https://bugs.mysql.com/bug.php?id=99791" rel="noopener noreferrer"&gt;issue&lt;/a&gt; was fixed in MySQL v8.0.22.&lt;/p&gt;

&lt;p&gt;If you still want to create an index on a VARCHAR field, make sure the indexed length is at most 190 characters. This is because &lt;em&gt;UTF8&lt;/em&gt; uses up to 3 bytes per character (3 * 255 = 765 bytes for a VARCHAR(255)), while &lt;em&gt;UTF8MB4&lt;/em&gt; uses up to 4 bytes (4 * 255 = 1020 bytes), far over the 767-byte limit.&lt;/p&gt;
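
&lt;p&gt;For example, a safe way to index a long &lt;em&gt;UTF8MB4&lt;/em&gt; VARCHAR is to index only a prefix (the table and column names here are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- index only the first 190 characters: 190 * 4 = 760 bytes, safely under the 767-byte limit
CREATE INDEX idx_users_email ON users (email(190));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;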

&lt;h4&gt;
  
  
  The real issue
&lt;/h4&gt;

&lt;p&gt;OK, not too hard. So we change the charset, collation and row format. Big whoop, right?&lt;/p&gt;

&lt;p&gt;The really hard part was altering every single table in the production database. This is not a problem if you do not have any huge tables, but if you do, these alters can and will take hours.&lt;/p&gt;

&lt;p&gt;The trickiest thing of all is altering all these tables with a reasonable downtime. If you try to join two tables with a different collation and charset, the query will fail. If you try to alter everything on one slave and then replicate it, that will fail too.&lt;/p&gt;

&lt;p&gt;Your safest bet is to back up or delete any data you can spare to reduce the table size. Or create new empty tables to be used while the real ones are being altered, and sync the deltas once it is over.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. MySQL query cache is no more
&lt;/h3&gt;

&lt;p&gt;If you rely on MySQL query cache, you will have to replace it with something else. It was deprecated in MySQL v5.7 and completely removed in MySQL v8.0.&lt;/p&gt;

&lt;p&gt;There are some alternatives, like the ProxySQL query cache, but there are definitely some trade-offs.&lt;/p&gt;

&lt;p&gt;It is the simplest alternative: really easy to set up, and benchmarks show better throughput, meaning a performance boost. But…&lt;/p&gt;

&lt;p&gt;Unlike the MySQL query cache, which invalidated the cache every time there was a write, ProxySQL offers no way to invalidate the cache other than &lt;em&gt;cache_ttl&lt;/em&gt;. This can definitely be a limitation, because there is a chance you will serve some stale data.&lt;/p&gt;

&lt;p&gt;Other than that, it does not support caching prepared statements, and there is no way to manually purge the query cache. There is a parameter, &lt;em&gt;mysql-query_cache_size_MB&lt;/em&gt;, that defines how big your cache can get, but it is not a strict limit; it is only used to automatically trigger a query cache purge.&lt;/p&gt;
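
&lt;p&gt;Caching in ProxySQL is configured through query rules on the admin interface; a minimal sketch (the digest pattern and the TTL in milliseconds are example values):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INSERT INTO mysql_query_rules (rule_id, active, match_digest, cache_ttl, apply)
VALUES (1, 1, '^SELECT .* FROM products', 5000, 1);

LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;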

&lt;p&gt;In any case, it just depends on whether or not this is acceptable to you. You can find more about it &lt;a href="https://www.percona.com/blog/2018/02/07/proxysql-query-cache/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you are planning on upgrading MySQL, I hope you will find this helpful. The biggest problem for me was underestimating the time needed to deliver the project. I wrote this post, if for nothing else, to help you know what to look out for. :)&lt;/p&gt;




</description>
      <category>sql</category>
      <category>tips</category>
      <category>tipsandtricks</category>
      <category>mysql</category>
    </item>
    <item>
      <title>Being a Tech Lead</title>
      <dc:creator>Robert Basic</dc:creator>
      <pubDate>Wed, 08 Dec 2021 14:44:30 +0000</pubDate>
      <link>https://dev.to/trikoder/being-a-tech-lead-4l9</link>
      <guid>https://dev.to/trikoder/being-a-tech-lead-4l9</guid>
      <description>&lt;p&gt;I’ve been the tech lead of my team at Trikoder for just over a year now (380 days, but who’s counting?) I think this is a good time to look back at what this role means to me, the things I’ve learned, and mistakes I made.&lt;/p&gt;

&lt;h2&gt;
  
  
  My background
&lt;/h2&gt;

&lt;p&gt;Ever since I started programming back in 2005, I sort of have known that “writing code and solving problems with software” is the thing I’ll do. As I grew older and more experienced, I’ve slowly come to realize that, well, writing software is only one part of the equation and there’s a bit more to it. Turns out the “people stuff” is quite important and necessary, even when dealing with computers all day.&lt;/p&gt;

&lt;h2&gt;
  
  
  Joining Trikoder
&lt;/h2&gt;

&lt;p&gt;In the summer of 2018, I joined Trikoder as an external contributor on the Njuskalo.hr platform.&lt;br&gt;
As part of the Common Base Technology (CBT) team, I took part in the work that enabled us to internationalize the Njuskalo.hr platform and launch bolha.com on the same code, and undertook some bigger refactors and rewrites to lessen the burden of technical debt and legacy code on other teams. We still have a lot of work ahead of us, as 10-plus years of shipping fast tends to leave a lot of “baggage” behind.&lt;/p&gt;

&lt;h2&gt;
  
  
  What do I do as a tech lead?
&lt;/h2&gt;

&lt;p&gt;I’ve been the tech lead of this small team for the past year and, mostly through trial and error, I’ve been figuring out what this role expects of me. I have good support from my team, my team lead, and the company in general, so it’s been a great learning experience so far.&lt;br&gt;
A thing I learned over the years is that one of the reasons “legacy code” happens is a communication breakdown between the business people who need the software to solve a particular problem, and the software people who write it. This is why I believe the position of a tech lead is a unique one. We can help the business understand why delivering new features takes as long as it takes, or why it is necessary to do some seemingly unrelated code maintenance. But communication is a two-way street, so we also need to ensure that the developers understand the business side of things: how it’s not financially viable to halt producing new features for several months to rewrite that ugly piece of code someone else wrote, or how this project might not be the best place to try out the latest and shiniest new technology. I see my main role as a tech lead as being a bridge in the communication between business and development.&lt;br&gt;
Through regular communication with the other teams, I try to understand which parts of the platform we should focus on next when it comes to dealing with technical debt and legacy code. Then, together with my team lead, I try to come up with a strategy and goals that will get us buy-in from the business.&lt;br&gt;
Within the team itself, I do my best to guide the team towards good technical and technological choices. To make sure the code we write (and don’t write!) is the best it can be under the current circumstances, that it’s aligned with both the needs of the business as well as with the overall architecture.&lt;br&gt;
While I love nothing more than getting “into the zone” and delivering code, I’ve come to realize that that part of the job is gone. I’ve seen this mistake made by other tech leads, and then, sadly, made it myself. As a tech lead I can’t let myself focus too much on any single problem, because then I don’t see what else is going on in my team. I might miss out on an important decision being made, or someone might decide to not reach out to me for advice as they don’t want to disturb me.&lt;br&gt;
I see myself now as an enabler — my work is to enable the other programmers on my team to shine. Enable them to learn, to grow, to get into the zone, to make an impact. Even enable them to fail.&lt;br&gt;
And this is where I think I’ve come full circle as a programmer. When I was starting out, I always volunteered for the tasks that no one else wanted: the boring tasks, the unimportant-but-still-have-to-be-done tasks. I’ve started to pick up those tasks again, so that my team can focus on the important things.&lt;/p&gt;

&lt;h2&gt;
  
  
  Self-retrospectives are weird
&lt;/h2&gt;

&lt;p&gt;Am I doing it right? I think so. It feels right. I’ll probably make a few more mistakes along the way, but that’s how we learn. I’ve been fighting this direction of my career for a long time, as I didn’t want to bother with “management”. Now that I see and understand what the position of a tech lead brings to the table, I’m going all in.&lt;/p&gt;

&lt;p&gt;Until next time, take care my friend.&lt;/p&gt;

</description>
      <category>techlead</category>
      <category>programming</category>
      <category>technicallead</category>
    </item>
    <item>
      <title>Tips&amp;Tricks for project organization</title>
      <dc:creator>Eva Marija Banaj Gađa</dc:creator>
      <pubDate>Fri, 20 Aug 2021 07:12:46 +0000</pubDate>
      <link>https://dev.to/trikoder/tips-tricks-for-project-organization-3ooa</link>
      <guid>https://dev.to/trikoder/tips-tricks-for-project-organization-3ooa</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2A7GXfC3vz5O138u4t2YZuUQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2A7GXfC3vz5O138u4t2YZuUQ.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At some point in your career you might find yourself leading a project. This can be stressful and hard regardless of the project’s scope, especially the first time you do it.&lt;/p&gt;

&lt;p&gt;You might find it hard to be on top of everything all the time, and still be a productive developer. Sometimes it feels like things are going nowhere, deadlines are just too soon and everything will crumble around you if you don’t do everything by yourself.&lt;/p&gt;

&lt;p&gt;Here are three simple things that helped me stay sane, organized and manage my daily activities. I hope they help you too.&lt;/p&gt;

&lt;h3&gt;
  
  
  Make sure everybody has the same idea of what will be the end product of the project
&lt;/h3&gt;

&lt;p&gt;The most frustrating thing for me is having to plan something poorly specified or vaguely defined. The specification is your most prized possession while planning a project. Of course, there is no way to have everything set in stone. Requests change, things get complicated and a different approach has to be taken, but taking time to prepare and analyze everything requested in the specification gives you a different perspective and insight.&lt;/p&gt;

&lt;p&gt;To avoid this mess, take time to prepare for the project.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Take your time to go through the specification.&lt;/strong&gt;
See what is required, what is not specified well enough and what might bite you in the… behind.
Parts that are described well probably won’t cause problems, but things that are not need to be clarified right away. Be annoying if you have to, but find out every single detail you can possibly get your hands on.
And for those things that… khm… might bite, create analysis tasks. Take some additional time to better estimate how much work they might require.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create tasks.&lt;/strong&gt;
Doesn’t matter if you use Jira or post-it notes on the wall. Write down things from the specification that need to be done and describe them as well as you can. You will thank yourself later.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Estimate tasks.&lt;/strong&gt;
Take a day or two, sit down with all people involved in the development process and estimate tasks — best case/worst case. Be careful with worst case estimates. Those are not “mildly bad” cases, they are “dog ate my code and I have to start again“ cases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Milestones.&lt;/strong&gt;
Now that you know exactly what needs to be done and have step-by-step tasks, divide those tasks into smaller groups. This way you get reachable goals that make it easier to follow the project timeline and keep your team motivated, since it never feels like things are stalling. Milestones are just small victories that eventually lead to the finished product.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Set the deadlines.&lt;/strong&gt;
Each milestone should have an unofficial deadline, more of a “goal date“. For calculating milestone deadlines, your target number should be somewhere between the best and worst case estimates. Some tasks will be done sooner, some later, but in the end you will be in the ballpark.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Confirm with everybody involved that the project can start and what they asked for is really what they want, what they really really want.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Make sure everybody is well informed about your project status
&lt;/h3&gt;

&lt;p&gt;Other than poorly set goals, poor communication is another source of stress. Even though it may sometimes seem like you do not have time to finish anything on time, the 30 minutes you spend syncing with everybody involved will make a huge difference.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Report to your team lead.&lt;/strong&gt;
Do not go to your team lead only when s**t hits the fan. Make it a habit to report the project status regularly, regardless of whether things are going well or badly (or ugly). It is important that they know how things are going in this mini-team of yours. Everybody will be happy if things are going great and people are getting along and working well together. But if that is not the case, these reports give your team lead a chance to make some changes and help you sort things out before they get out of hand.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Keep all involved parties up to date.&lt;/strong&gt;
If your project depends on other people or other teams, let them know how things are progressing. Sync once in a while with everyone, just to let them know how the project is going, whether there will be some delays, whether you will be done before the estimate (yeah, right…). This way you will also know what they did, what else they have to do and whether they have any problems…&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Update the project specification.&lt;/strong&gt;
I can’t stress this enough. Yeah, it is tedious, and you “don’t have time“ and you “will remember all the details from that really important meeting after two weeks“, but I really urge you to make writing things down a habit.
Keep your project specification up to date. If some requirements change, make a note of it and notify the people involved that there have been changes. This way, everyone involved can easily check what is happening with the project, and it gives a reassuring feeling that you are on top of everything.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Make sure you know exactly what is going on and what still needs to be done
&lt;/h3&gt;

&lt;p&gt;At any point in time you need to know what is going on: who is doing what, what is late, what is early, whether someone is stuck with something, and last but not least, whether your team is O.K. mentally.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Be in sync with your development team.&lt;/strong&gt;
It is O.K. to offload work and let your teammates worry about the things you assigned them, but that does not mean you should just let go. It is really important to know what everybody is doing every day, how far they got, how much there is left to do and whether they are struggling with the given task.
Make time in your day just to hear what everybody is doing, answer any possible question they might have, debug and brainstorm ideas together.
It can get quite chaotic if everybody just does what they are assigned, without knowing what others are doing and how the whole project stands in general. There will always be pings and meetings that interrupt your daily activity (programming, writing documentation, whatever else), but if you dedicate time in your day to staying on top of things and helping everybody with their daily activities, you will be much more productive, with far fewer interruptions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trust your team.&lt;/strong&gt;
Don’t try to do everything yourself. You should be able to rely on your development team and trust that they will do the things you assigned to them (and do them well). Not being able to trust your development team is worse than doing all these things yourself, because of the constant worry that things won’t be done on time or won’t be done well.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There is one more bonus thing that I decided not to include in the list, but I use it a lot: make to-do lists. I’m not talking about daily tasks here. Make a to-do list of things that have to be accomplished and big meetings that have to happen (like a project kickoff meeting). For me, it was super satisfying to have fewer and fewer of those tasks left to cross off, and it helped me stay organized and not forget anything.&lt;/p&gt;

&lt;p&gt;With this, I conclude this post. I hope you found it useful, and that it helps you find your own unique way of managing stressful situations and organizing your projects.&lt;/p&gt;




</description>
      <category>tips</category>
      <category>projectmanagement</category>
      <category>projects</category>
      <category>organization</category>
    </item>
    <item>
      <title>How to get coding pleasure time</title>
      <dc:creator>Marko Vušak</dc:creator>
      <pubDate>Thu, 15 Jul 2021 13:03:19 +0000</pubDate>
      <link>https://dev.to/trikoder/how-to-get-coding-pleasure-time-56ll</link>
      <guid>https://dev.to/trikoder/how-to-get-coding-pleasure-time-56ll</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OTVlCiyk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A3sKJqI3xya5o6dAn-EirWg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OTVlCiyk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A3sKJqI3xya5o6dAn-EirWg.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you advance your career as a developer, your responsibilities change with time. You start as a junior who does what they are told: changing a label here and there, validating an input and other less complex things. With time you become a senior who designs the architecture of the entire system, educates colleagues, defines the business rules with the stakeholders, reviews the code, deploys and does many other things. As the list grows, you have more and more tasks on your shoulders. The time spent on context switching becomes more noticeable, and you get less and less time for active development.&lt;/p&gt;

&lt;p&gt;While that’s OK, because you are doing more “senior” stuff than simple coding, you can still optimize your time and reclaim some time for coding pleasure. Last year, right before the covid lockdowns, I became one of the technical leads on a huge project. My coding time went from “I code most of the day” to “I will maybe find some time to code after hours” in a matter of days. My responsibilities changed rapidly, and the covid lockdown didn’t help at all. If I wanted to help the team with delivery, I had to code (don’t get me wrong, I love coding), which meant I had to optimize my time management. This is what I’ve done to get some coding pleasure time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Some general stuff
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Group your meetings&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I found that if I put most of my meetings into 2 or 3 days of the week, I could free up 2 or 3 days for coding. While the days packed with meetings are exhausting, that is a good sacrifice for me in exchange for 2 or 3 days of peaceful coding.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. It’s ok to say: “can’t talk right now”&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We are experiencing the boom of remote work. While more experienced developers can compensate for the physical absence of colleagues to some degree, they still need to communicate in order to perform their tasks. Junior developers, however, cannot compensate for it and will need support to complete their tasks. All of that means you can expect a significant number of calls during the day. If you are in a coding session and the call is not about something urgent, it’s ok to say: “Can you please call me later, once I finish the task at hand?”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. You are not needed at every meeting that requires the input of your team&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Assuming you are not a one-man team, you can delegate meetings to your team members. Everyone on your team knows the project and the plan for it to some degree. Figure out what degree is needed at which meeting and delegate. That will help your colleagues grow, because they will practice communicating with the business and spread project knowledge around the team. Plus, you will have some time for coding.&lt;/p&gt;

&lt;h3&gt;
  
  
  And now some developer stuff that I found helpful
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TdcphAfr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/500/1%2AOlMfSybcMWDvVMeW_aMX6A.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TdcphAfr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/500/1%2AOlMfSybcMWDvVMeW_aMX6A.jpeg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Code review&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;There are a few tools that can make your life easier during code review. Implementing them will let you focus on the business and architectural aspects of the code you are reviewing, and possibly reduce the time you spend reviewing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Use linters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Linters will check, and possibly fix, the coding style of the work done, so you don’t have to spend time on that during the code review.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Use static analyzers&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Static analyzers perform static analysis and will alert the coder to suspicious code, like a potentially bad comparison or an invalid return statement. One less thing to worry about during the code review. Since I am a PHP developer, I found that PHPStan does a great job here.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Dependency checker&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Assuming you are using some kind of layered architecture you want to be sure the boundaries are not crossed illegally. Deptrac did a wonderful job for us.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Testing code&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before automated tests, code needed to be checked manually. While the manual part will never be obsolete, because you can never predict every single case a human can produce, automated tests will noticeably reduce the time you spend testing stories and bugfixes. Automated tests will catch unexpected side effects of your code faster than you can find them by clicking, and E2E testing tools can cover your regressions and confirm that you didn’t break something along the way.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Automated deployment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We found that deployment to a testing environment took between 15 and 30 minutes of a developer’s time, and that was when everything went right. A single mistake would increase that time significantly. Automating the deployment removed the possibility of human error and reduced the effort to one click to deploy, plus a page refresh 15 minutes later to check that everything is ok.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;As you grow through your career you will code less and less, but that doesn’t mean you can’t win back some time anyway. Some of the tooling that reduces context switching is not only fun and challenging to implement; it will also optimize your development process and help your colleagues make better use of their time as well. The way you spend your time dictates how you spend your energy and focus during the day. You have a limited amount of both, and if you try to cross that limit on a regular basis you are risking burnout, which is very nasty. Hopefully these tips will help you prevent it.&lt;/p&gt;




</description>
      <category>timemanagement</category>
      <category>coding</category>
      <category>development</category>
    </item>
    <item>
      <title>Our experience with upgrading ElasticSearch</title>
      <dc:creator>Eva Marija Banaj Gađa</dc:creator>
      <pubDate>Tue, 06 Jul 2021 08:25:57 +0000</pubDate>
      <link>https://dev.to/trikoder/our-experience-with-upgrading-elasticsearch-240p</link>
      <guid>https://dev.to/trikoder/our-experience-with-upgrading-elasticsearch-240p</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4A5x8oyX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AML0hIWv5-U6MNl1MpJo9-Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4A5x8oyX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AML0hIWv5-U6MNl1MpJo9-Q.png" alt="" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Why upgrading ElasticSearch was not an easy task
&lt;/h3&gt;

&lt;p&gt;ElasticSearch, they say, packs a “ton of goodness into each release“, and if you skip a few tons of goodness, it can lead to goodness overflow that we experienced while upgrading it.&lt;/p&gt;

&lt;p&gt;One might say we had a peculiar idea of good usage of ElasticSearch mapping types, so we just used them for everything — keys in arrays, table names, search etc.&lt;/p&gt;

&lt;p&gt;That was the primary reason why the upgrade waited so long: we were stuck on version 5.3.2 aiming to jump to 7.10.1, and the code depended heavily on the mapping types.&lt;/p&gt;

&lt;p&gt;Another problem entirely was the complete removal of custom plugins. One feature of ours had to be shut down completely because it needed a custom Elastic plugin to work. Luckily, it was never enabled in production, so it was no biggie, right?&lt;/p&gt;

&lt;h3&gt;
  
  
  No more mapping types, what now?
&lt;/h3&gt;

&lt;p&gt;To give you a better idea of what I’m talking about, here is a small sample of what our mappings looked like before upgrading ElasticSearch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mapping_type_1:
    active: {type: byte, index: 'true'}
    additional: {type: integer, index: 'true'}
    . . .
mapping_type_2:
    active: {type: byte, index: 'true'}
    additional: {type: integer, index: 'true'}
    . . .
. . .
mapping_type_36:
    active: {type: byte, index: 'true'}
    additional: {type: integer, index: 'true'}
    . . .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We had four indices in our 5.3.2 cluster, and three of them posed no problem to upgrade. We even managed to remove one index completely: it held only around 300 documents, so there was no reason that data could not be retrieved directly from the database.&lt;/p&gt;

&lt;p&gt;That one index that remained had 36 mapping types that were same-same but different. At this point, we did what anyone would have done — check the &lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/removal-of-types.html#_alternatives_to_mapping_types"&gt;ElasticSearch official documentation&lt;/a&gt; for the recommended procedure. And now we had two options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;creating 36 different indices, one for each mapping type&lt;/li&gt;
&lt;li&gt;combining all the fields into one ultimate mapping that would cover all 36 mapping types.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We went with the second option, combining all the fields into one mapping. By doing that, we got one index with a lot, and I mean a &lt;strong&gt;LOT&lt;/strong&gt;, of fields. But it was still better than the other option, creating 36 different indices with almost identical mappings. Another argument for the “one ultimate mapping“ option was that it let us search across what used to be all 36 types within a single index, without the performance cost of searching across 36 separate indices.&lt;/p&gt;

&lt;h3&gt;
  
  
  One mapping to rule them all
&lt;/h3&gt;

&lt;p&gt;Good. We have a course of action, what now?&lt;/p&gt;

&lt;p&gt;Let’s summarize the situation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;there is a file that contains the index mappings → let’s call it the &lt;em&gt;static mapping&lt;/em&gt; file, since those fields never change&lt;/li&gt;
&lt;li&gt;there are 1000+ files that contain additional fields for each mapping type → let’s call these &lt;em&gt;dynamic mapping&lt;/em&gt; files, because those fields change often&lt;/li&gt;
&lt;li&gt;there are 36 tables in the database and 36 corresponding mapping types in the &lt;em&gt;static mapping&lt;/em&gt; file&lt;/li&gt;
&lt;li&gt;there are 36 tables in the database that correspond to one or more &lt;em&gt;dynamic mapping&lt;/em&gt; files&lt;/li&gt;
&lt;li&gt;the code depends on the mapping types in the index to retrieve data, search etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We started the great cleanup / refactor / rewrite session to merge all those numerous &lt;em&gt;dynamic mapping&lt;/em&gt; files into one file, which would then be combined with the &lt;em&gt;static mappings&lt;/em&gt;. The mapping types were removed in this step, and the mapping type name was added as a new field to the &lt;em&gt;static mappings&lt;/em&gt;. That way we didn’t have to rewrite the entire application, and we could use ElasticSearch 7.10.1. The new &lt;em&gt;static mappings&lt;/em&gt; file ended up looking something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;_doc:
    class: {type: text, index: 'true'}
    active: {type: byte, index: 'true'}
    additional: {type: integer, index: 'true'}
    . . .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This “easy” part was followed by the removal of dependencies on mapping types across the entire code base. Hours turned to days, days to weeks, and a few weeks later we finally managed to refactor all the places that fetched mapping types from elastic and did magic with them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Better indexing procedure with zero downtime
&lt;/h3&gt;

&lt;p&gt;Indexing documents, creating and manipulating indices in any way was a whole procedure that required a hefty multi-step document. It seemed as good a time as any to refactor it.&lt;/p&gt;

&lt;p&gt;Instead of a three-page procedure we now had five console commands: &lt;em&gt;Create&lt;/em&gt;, &lt;em&gt;Delete&lt;/em&gt;, &lt;em&gt;Index&lt;/em&gt;, &lt;em&gt;Replay&lt;/em&gt; and &lt;em&gt;AddToQueue&lt;/em&gt;, all of which used &lt;a href="https://github.com/ruflin/Elastica"&gt;ruflin/elastica&lt;/a&gt; to communicate with the ElasticSearch cluster in the background.&lt;/p&gt;

&lt;h4&gt;
  
  
  Queue
&lt;/h4&gt;

&lt;p&gt;The update queue is just one table in the database where the ID of the changed document and the name of the index are stored. Once the queue is enabled, any changes that go to the ElasticSearch index with the write alias are also recorded to the queue.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;AddToQueue&lt;/em&gt; command is intended to be used to easily add one or more IDs to the update queue table. This could be useful if for some reason some documents aren’t in sync with the database.&lt;/p&gt;
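The queue mechanics can be sketched in a few lines. In the real setup this is a database table driven by PHP console commands; the Swift model below is purely illustrative and every name in it is made up:

```swift
// Illustrative stand-in for the update queue table (the real one lives in the DB).
struct QueueEntry {
    let documentId: Int
    let indexName: String
}

final class UpdateQueue {
    private(set) var entries: [QueueEntry] = []
    var isEnabled = false

    // Called for every change written through the index's write alias;
    // the change is recorded only while the queue is enabled.
    func record(documentId: Int, indexName: String) {
        guard isEnabled else { return }
        entries.append(QueueEntry(documentId: documentId, indexName: indexName))
    }

    // Sketch of the AddToQueue command: manually enqueue IDs of documents
    // suspected to be out of sync with the database.
    func addToQueue(ids: [Int], indexName: String) {
        for id in ids {
            entries.append(QueueEntry(documentId: id, indexName: indexName))
        }
    }
}
```

Enabling the queue before a bulk reindex starts is what later lets the Replay command catch the new index up without downtime.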

&lt;h4&gt;
  
  
  Replay
&lt;/h4&gt;

&lt;p&gt;The &lt;em&gt;Replay&lt;/em&gt; command then takes chunks of IDs from the update queue and bulk &lt;em&gt;upserts&lt;/em&gt; (inserts or updates) that data into the appropriate index, the one holding the write alias. Once the documents are updated or inserted, the records are simply deleted from the update queue table.&lt;/p&gt;
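As a rough sketch of that flow (illustrative Swift again, with in-memory dictionaries standing in for the index and the database; the real command is PHP built on ruflin/elastica):

```swift
// Illustrative sketch of the Replay command. The queue holds document IDs;
// documents are upserted in chunks into the index behind the write alias,
// and processed queue records are deleted. All names are made up.
final class Replayer {
    var queue: [Int] = []                  // stand-in for the queue table
    var index: [Int: String] = [:]         // stand-in for the ES index
    let database: [Int: String]            // stand-in for the source DB

    init(database: [Int: String]) {
        self.database = database
    }

    func replay(chunkSize: Int) {
        while queue.isEmpty == false {
            // Take the next chunk of IDs off the queue (deleting them from it).
            let take = min(chunkSize, queue.count)
            let chunk = Array(queue.prefix(take))
            queue.removeFirst(take)
            // Bulk upsert: insert missing documents, overwrite stale ones.
            for id in chunk {
                if let document = database[id] {
                    index[id] = document
                }
            }
        }
    }
}
```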

&lt;h4&gt;
  
  
  Index
&lt;/h4&gt;

&lt;p&gt;The &lt;em&gt;Index&lt;/em&gt; command creates a new index with a &lt;em&gt;write_new&lt;/em&gt; alias, enables syncing changes to the queue and bulk inserts data from the database to the index. After all documents are inserted, the &lt;em&gt;write&lt;/em&gt; alias is switched to the new index, the update queue is replayed via the Replay command, the &lt;em&gt;read&lt;/em&gt; alias is switched to the new index and the old one is deleted. And voila, indexing with zero downtime!&lt;/p&gt;
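The alias dance is the heart of the zero-downtime trick, so here is a compact sketch of the order of the steps. The real command is a PHP console command talking to the cluster via ruflin/elastica; this in-memory Swift model is only an illustration and every name in it is made up:

```swift
// Illustrative model of the Index command's alias switching. Indices are
// modeled as dictionaries of documents; aliases are name -> index pointers.
final class Cluster {
    var indices: [String: [Int: String]] = [:]
    var aliases: [String: String] = [:]

    func runIndexCommand(base: String, newIndexName: String, databaseRows: [Int: String]) {
        // 1. Create the new index under a write_new alias
        //    (queue syncing would be enabled at this point).
        indices[newIndexName] = [:]
        aliases[base + "_write_new"] = newIndexName
        // 2. Bulk insert all rows from the database into the new index.
        indices[newIndexName] = databaseRows
        // 3. Switch the write alias to the new index; the update queue
        //    would be replayed here to catch up on missed changes.
        let oldIndexName = aliases[base + "_read"]
        aliases[base + "_write"] = newIndexName
        // 4. Switch the read alias, then drop write_new and the old index.
        aliases[base + "_read"] = newIndexName
        aliases[base + "_write_new"] = nil
        if let old = oldIndexName, old != newIndexName {
            indices[old] = nil
        }
    }
}
```

Reads keep hitting the old index through the read alias until the very last switch, which is what makes the reindex invisible to users.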

&lt;h3&gt;
  
  
  Up and running
&lt;/h3&gt;

&lt;p&gt;How are we going to deploy this huge change in a way that everything works? Once again, to the &lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html"&gt;documentation&lt;/a&gt;! This left us with two possibilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/7.10/rolling-upgrades.html"&gt;upgrade from 5.3 to 5.6, then do a rolling upgrade from 5.6 to 6.8 and then from 6.8 to 7.10&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/7.10/reindex-upgrade-remote.html"&gt;reindex from a remote cluster&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Since we wanted to upgrade without downtime, we went with the second option → reindex from a remote cluster. For this to happen we had to have two parallel clusters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the old 5.3.2 cluster that is still used in production&lt;/li&gt;
&lt;li&gt;this cluster has 4 indices, and each index has both read and write alias pointing to it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--E6P7Au5I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2ALSoVOO9i8MC0Dx7XWSa5cQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--E6P7Au5I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2ALSoVOO9i8MC0Dx7XWSa5cQ.png" alt="" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;and a new empty 7.10.1 cluster&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dLjlsMcb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2APb6gj856WoJGz0PTuGNsuw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dLjlsMcb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2APb6gj856WoJGz0PTuGNsuw.png" alt="" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We deployed the code overnight, when we have the least amount of traffic on the site. To guide you through our deploy process, I will list the deploy actions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deploy actions&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We took one of the application servers out of the production pool, deployed new code on it and set it to connect to the new 7.10.1 cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--371zzXqn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AH5JoXrnxG9vPmrNP3Zsing.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--371zzXqn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AH5JoXrnxG9vPmrNP3Zsing.png" alt="" width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After that we created three new indices with the &lt;em&gt;Create&lt;/em&gt; command.
Each index had &lt;em&gt;read&lt;/em&gt; and &lt;em&gt;write&lt;/em&gt; alias pointing to it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1-dxBizp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/906/1%2AcZh7qyoN6MrlfSjnA6InFA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1-dxBizp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/906/1%2AcZh7qyoN6MrlfSjnA6InFA.png" alt="" width="800" height="546"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We enabled saving changes to the queue on the old cluster. These changes would later be replayed on the new cluster, ensuring everything was up to date and that users would not notice the cluster switch.&lt;/li&gt;
&lt;li&gt;Now that everything was ready, we ran the &lt;em&gt;Index&lt;/em&gt; command for each index in our cluster.
The &lt;em&gt;Index&lt;/em&gt; command first created a new index with the alias &lt;em&gt;write_new&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JeLbggOT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/772/1%2ArKhXOBahPmtAyspHMeB6wQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JeLbggOT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/772/1%2ArKhXOBahPmtAyspHMeB6wQ.png" alt="" width="772" height="558"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After the index creation, the command then bulk inserted data fetched from the database into the new index. Indexing the documents in all three indices took about three hours.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2jqUC7zR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2ALxu9r62HtzaIBoP5G8CSiw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2jqUC7zR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2ALxu9r62HtzaIBoP5G8CSiw.png" alt="" width="800" height="574"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After indexing all documents to a single index, the indexing command switched the &lt;em&gt;write&lt;/em&gt; and &lt;em&gt;read&lt;/em&gt; alias to the new index, and the &lt;em&gt;write_new&lt;/em&gt; alias and the old index were deleted.
This was done for all indices in 7.10.1 cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hrb_IYkc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A_JTPL4qOD8R27iojM8iRzA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hrb_IYkc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A_JTPL4qOD8R27iojM8iRzA.png" alt="" width="800" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Half of the application servers had now been taken out of the production pool and the new code had been deployed on them.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--njgKsWqC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2ANP9DlSFlnD9wriH-gQ_bZg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--njgKsWqC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2ANP9DlSFlnD9wriH-gQ_bZg.png" alt="" width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After the deploy had finished, these servers were returned to the production pool and the other half was taken out.&lt;/li&gt;
&lt;li&gt;We now ran the &lt;em&gt;Replay&lt;/em&gt; command that updates documents from the update queue, making sure users don’t see stale data for more than a few minutes.&lt;/li&gt;
&lt;li&gt;After replaying changes, we disabled syncing data to the update queue. Production now used the new cluster and all changes were saved directly into the new 7.10.1 cluster.&lt;/li&gt;
&lt;li&gt;The code was then deployed to the other half of the servers that were now out of the production pool.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lbQX-bvg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A-ywuw5K5klS7krkBGrB_Eg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lbQX-bvg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A-ywuw5K5klS7krkBGrB_Eg.png" alt="" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All servers were added back to the production pool and 7.10 cluster was up and running.&lt;/li&gt;
&lt;li&gt;No new data was saved to the old cluster at this point, and it could be shut down. We decided to leave it up for 24 hours as a backup in case something went wrong.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--x40zUEaT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/900/1%2AfFjfjEkf916NkYwp-hPKsA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--x40zUEaT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/900/1%2AfFjfjEkf916NkYwp-hPKsA.png" alt="" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Nothing went wrong. Mission accomplished!&lt;/p&gt;




</description>
      <category>elastic</category>
      <category>experience</category>
      <category>upgrade</category>
      <category>elasticsearch</category>
    </item>
    <item>
      <title>Dependency injection in Swift</title>
      <dc:creator>Dino</dc:creator>
      <pubDate>Fri, 28 May 2021 14:31:56 +0000</pubDate>
      <link>https://dev.to/trikoder/dependency-injection-in-swift-3939</link>
      <guid>https://dev.to/trikoder/dependency-injection-in-swift-3939</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--H1QbS7Gd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AV8dCfxxnzQSK8PGA0J0itw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--H1QbS7Gd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AV8dCfxxnzQSK8PGA0J0itw.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  How to write self-injecting code?
&lt;/h4&gt;

&lt;p&gt;Hey there, I hope you’re having a wonderful day :)&lt;/p&gt;

&lt;p&gt;In this article, we will demonstrate how to minimize the amount of work needed to set up class instance definitions and dependency injection inside your iOS project.&lt;/p&gt;

&lt;p&gt;I will assume you already know what dependency injection is and why it is an integral part of a testable architecture. If not, I highly recommend taking a look at various sources in order to understand it before continuing.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;#1 Define classes and dependencies&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In this example, we will create a simple shopping app where a user is able to select items from a list of products, add them to a shopping cart and then perform a purchase. This project will consist of 4 main components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A service for performing mocked API calls&lt;/li&gt;
&lt;li&gt;A repository for providing the data&lt;/li&gt;
&lt;li&gt;Use cases for fetching items and buying&lt;/li&gt;
&lt;li&gt;UI presenting the data to the user&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Notice how we started with the service first. This is because the service is at the top of our dependency graph. In other words, our service has no dependencies of its own, but components below it depend on the components above them.&lt;/p&gt;

&lt;p&gt;In the real world, however, a service would probably also have a dependency on some network provider for performing the API calls, but we will keep this example as simple as possible by having the service provide mocked data.&lt;/p&gt;

&lt;p&gt;So let’s define our &lt;strong&gt;Service&lt;/strong&gt; protocol and class implementation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import RxSwift

protocol ShopService {
    func getProducts() -&amp;gt; Single&amp;lt;[Product]&amp;gt;
    func purchaseProducts(from shoppingCart: ShoppingCart) -&amp;gt; Single&amp;lt;Bool&amp;gt;
}

final class ShopServiceImpl: ShopService {

    func getProducts() -&amp;gt; Single&amp;lt;[Product]&amp;gt; {
        return .just(Product.allCases)
    }

    func purchaseProducts(from shoppingCart: ShoppingCart) -&amp;gt; Single&amp;lt;Bool&amp;gt; {
        return .just(true)
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our service is pretty simple and straightforward. We will keep the implementation at a minimum, since that part is irrelevant for the topic of dependency injection.&lt;/p&gt;

&lt;p&gt;Next we will define the &lt;strong&gt;Repository&lt;/strong&gt; which has a dependency on our service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import RxSwift

protocol ShopRepository {
    func getProducts() -&amp;gt; Single&amp;lt;[Product]&amp;gt;
    func purchaseProducts(from shoppingCart: ShoppingCart) -&amp;gt; Single&amp;lt;Bool&amp;gt;
}

final class ShopRepositoryImpl: ShopRepository {

    private let service: ShopService

    init(service: ShopService) {
        self.service = service
    }

    func getProducts() -&amp;gt; Single&amp;lt;[Product]&amp;gt; {
        return service.getProducts()
    }

    func purchaseProducts(from shoppingCart: ShoppingCart) -&amp;gt; Single&amp;lt;Bool&amp;gt; {
        return service.purchaseProducts(from: shoppingCart)
    }
}   
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our repository almost seems redundant because it just calls the service methods. But that is fine, since it keeps our architecture consistent and clean. In a real-life scenario, a repository would probably also perform some data mapping from the API resource format to a domain format and/or save data to local storage. Anyway, back to business.&lt;/p&gt;

&lt;p&gt;Let’s define our use cases:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import RxSwift

protocol GetProductsUseCase {
    func execute() -&amp;gt; Single&amp;lt;[Product]&amp;gt;
}

final class GetProductsUseCaseImpl: GetProductsUseCase {

    private let repository: ShopRepository

    init(repository: ShopRepository) {
        self.repository = repository
    }

    func execute() -&amp;gt; Single&amp;lt;[Product]&amp;gt; {
        return repository.getProducts()
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import RxSwift

protocol PurchaseProductsUseCase {
    func execute(shoppingCart: ShoppingCart) -&amp;gt; Single&amp;lt;Bool&amp;gt;
}

final class PurchaseProductsUseCaseImpl: PurchaseProductsUseCase {

    private let repository: ShopRepository

    init(repository: ShopRepository) {
        self.repository = repository
    }

    func execute(shoppingCart: ShoppingCart) -&amp;gt; Single&amp;lt;Bool&amp;gt; {
        return self.repository.purchaseProducts(from: shoppingCart)
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And lastly, our view model to which we will bind our view:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import RxCocoa

class ShopVM {

    // Dependencies
    private let getProductsUseCase: GetProductsUseCase
    private let purchaseProductsUseCase: PurchaseProductsUseCase
    private let mapper: ShopViewMapper

    // Stored data
    private var selectedProducts: SelectedProducts = []

    init(getProductsUseCase: GetProductsUseCase,
         purchaseProductsUseCase: PurchaseProductsUseCase,
         mapper: ShopViewMapper) {

        self.getProductsUseCase = getProductsUseCase
        self.purchaseProductsUseCase = purchaseProductsUseCase
        self.mapper = mapper
    }
}

extension ShopVM: ViewModelType {

    typealias SelectedProducts = Set&amp;lt;Product&amp;gt;

    struct Input {
        let selectProduct: Driver&amp;lt;Product&amp;gt;
        let purchase: Driver&amp;lt;Void&amp;gt;
    }

    struct Output {
        let productList: Driver&amp;lt;[Product]&amp;gt;
        let selectedProducts: Driver&amp;lt;SelectedProducts&amp;gt;
        let totalPrice: Driver&amp;lt;String&amp;gt;
    }

    func transform(input: Input) -&amp;gt; Output {

        let productList = self.getProductsUseCase.execute()
            .asDriver(onErrorJustReturn: [])

        let selectedProducts = input.selectProduct
            .map { [unowned self] product -&amp;gt; SelectedProducts in

                // If product is already added to the shopping cart, tapping it again will remove it from the list
                if self.selectedProducts.contains(product) {
                    self.selectedProducts.remove(product)
                } else {
                    self.selectedProducts.insert(product)
                }

                return self.selectedProducts
            }

        let purchaseResult = input.purchase
            .withLatestFrom(selectedProducts)
            .asObservable()
            .map(mapper.mapShoppingCart)
            .flatMapLatest(purchaseProductsUseCase.execute)
            .map { [weak self] success -&amp;gt; SelectedProducts in
                guard let self = self else {
                    return []
                }
                if success {
                    self.selectedProducts.removeAll()
                }
                return self.selectedProducts
            }
            .asDriver(onErrorJustReturn: [])

        let selectedProductsMerge = Driver.merge(selectedProducts, purchaseResult)

        let totalPrice = selectedProductsMerge
            .map(mapper.mapTotalPrice)

        return .init(
            productList: productList,
            selectedProducts: selectedProductsMerge,
            totalPrice: totalPrice
        )
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import Foundation

protocol ShopViewMapper {
    func mapShoppingCart(from selectedProducts: ShopVM.SelectedProducts) -&amp;gt; ShoppingCart
    func mapTotalPrice(from selectedProducts: ShopVM.SelectedProducts) -&amp;gt; String
}

final class ShopViewMapperImpl: ShopViewMapper {

    func mapShoppingCart(from selectedProducts: ShopVM.SelectedProducts) -&amp;gt; ShoppingCart {

        return .init(
            id: UUID().uuidString,
            products: Array(selectedProducts)
        )
    }

    func mapTotalPrice(from selectedProducts: ShopVM.SelectedProducts) -&amp;gt; String {
        let price = selectedProducts.reduce(0) { $0 + $1.pricePerKg * $1.averageWeight }
        let formattedPrice = price.formatPrice ?? ""
        return "Total price: \(formattedPrice)"
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our final dependency graph looks like this:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WkBu9E5p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wmgx893n29avrtvprea3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WkBu9E5p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wmgx893n29avrtvprea3.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Components that are in the same row are part of the same architectural layer. Our &lt;strong&gt;ShopVC&lt;/strong&gt; is the view, but we don’t care about its implementation, so we have omitted it from the example. You can find the full implementation in the git repo link below. Notice how each of our implementation classes ends with an “Impl” suffix after the protocol name. This will be very important later on.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;#2: Resolving class instances&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Now that we have defined our classes with their corresponding dependencies, how do we instantiate them?&lt;/p&gt;

&lt;p&gt;First off, we want our &lt;strong&gt;ShopRepository&lt;/strong&gt; and &lt;strong&gt;ShopService&lt;/strong&gt; to be shared, centralized data providers (singletons). A common way to achieve this is by defining a shared instance like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;extension ShopServiceImpl {
    static let shared: ShopService = ShopServiceImpl()
}

extension ShopRepositoryImpl {
    static let shared: ShopRepository = ShopRepositoryImpl(service: ShopServiceImpl.shared)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can already see several problems with this approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A developer may not know which implementation to use as a dependency of some other class. They might create an instance manually without knowing there is a shared instance that should be used instead.&lt;/li&gt;
&lt;li&gt;A developer shouldn’t care how to instantiate some class. They should be provided an already created instance with its dependencies ready to be used, if possible.&lt;/li&gt;
&lt;li&gt;We are manually defining shared instances for each of our singletons, which is tedious work and creates boilerplate code. For big projects which have tons of singletons, this also affects maintainability.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ignoring these problems for now, let’s see how we would instantiate our &lt;strong&gt;ShopVM&lt;/strong&gt; view model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let shopRepository: ShopRepository = ShopRepositoryImpl.shared
let getProductsUseCase: GetProductsUseCase = GetProductsUseCaseImpl(repository: shopRepository)
let purchaseProductsUseCase: PurchaseProductsUseCase = PurchaseProductsUseCaseImpl(repository: shopRepository)
let shopMapper: ShopViewMapper = ShopViewMapperImpl()

let shopViewModel = ShopVM(getProductsUseCase: getProductsUseCase,
                           purchaseProductsUseCase: purchaseProductsUseCase,
                           mapper: shopMapper)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can see even more problems. Every time we need a new instance of the view model, we have to instantiate all of its non-singleton dependencies as well. Luckily, we can use a dependency injection framework to provide the instances without caring about their dependencies. One of the most popular DI frameworks for Swift is &lt;a href="https://github.com/Swinject/Swinject"&gt;&lt;strong&gt;Swinject&lt;/strong&gt;&lt;/a&gt;, because it is lightweight and easy to use. Using Swinject, we can register our protocol implementations and dependencies as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import Swinject

enum SingletonContainer {

    static let instance: Container = {
        let container = Container(defaultObjectScope: .container)

        container.register(ShopService.self) { _ in
            ShopServiceImpl()
        }

        container.register(ShopRepository.self) {
            ShopRepositoryImpl(service: $0.resolve(ShopService.self)!)
        }

        return container
    }()
}

enum InstanceContainer {

    static let instance: Container = {
        let container = Container(parent: SingletonContainer.instance, defaultObjectScope: .transient)

        container.register(GetProductsUseCase.self) {
            GetProductsUseCaseImpl(repository: $0.resolve(ShopRepository.self)!)
        }

        container.register(PurchaseProductsUseCase.self) {
            PurchaseProductsUseCaseImpl(repository: $0.resolve(ShopRepository.self)!)
        }

        container.register(ShopViewMapper.self) { _ in
            ShopViewMapperImpl()
        }

        container.register(ShopVM.self) {
            ShopVM(getProductsUseCase: $0.resolve(GetProductsUseCase.self)!,
                   purchaseProductsUseCase: $0.resolve(PurchaseProductsUseCase.self)!,
                   mapper: $0.resolve(ShopViewMapper.self)!)
        }

        return container
    }()
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is much better, because we can now retrieve a new instance of the &lt;strong&gt;ShopVM&lt;/strong&gt; without caring about its dependencies by just calling:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let shopViewModel = InstanceContainer.instance.resolve(ShopVM.self)!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And we no longer need the &lt;em&gt;shared&lt;/em&gt; instances for &lt;strong&gt;ShopRepository&lt;/strong&gt; and &lt;strong&gt;ShopService&lt;/strong&gt; singletons, since they are now registered in a shared singleton container.&lt;/p&gt;

&lt;p&gt;This is a huge improvement. However, we still have to write boilerplate code to register our instances in the containers. Also, if we add, remove, or change a dependency in a class’s initializer, we have to update the code in the container it was registered in. For big projects, the containers would grow huge and become hard to maintain. We should delegate writing this boilerplate code to a code generator like &lt;a href="https://github.com/krzysztofzablocki/Sourcery"&gt;&lt;strong&gt;Sourcery&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  #3 Writing self injecting code
&lt;/h3&gt;

&lt;p&gt;For starters, let’s extend &lt;strong&gt;Swinject&lt;/strong&gt;’s Resolver with a generic &lt;em&gt;resolve()&lt;/em&gt; method, so we don’t have to explicitly specify a type when resolving an instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import Swinject

extension Resolver {

    func resolve&amp;lt;T&amp;gt;(type: T.Type = T.self) -&amp;gt; T {
        guard let instance = self.resolve(T.self) else {
            fatalError("Implementation for type \(T.self) not registered to \(self).")
        }
        return instance
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can now resolve an instance by simply calling &lt;em&gt;.resolve()&lt;/em&gt; whenever the type can be inferred. This will make it slightly easier to write the code generation script.&lt;/p&gt;

&lt;p&gt;Next, integrate &lt;strong&gt;Sourcery&lt;/strong&gt; into the project. I will assume you already know what &lt;strong&gt;Sourcery&lt;/strong&gt; is and how to use it by following the &lt;a href="https://github.com/krzysztofzablocki/Sourcery"&gt;documentation&lt;/a&gt;. In short, it’s a code generator for Swift which saves you from the hassle of writing boilerplate or repetitive code by generating it for you. If you aren’t already using code generators in your projects, you should definitely start.&lt;/p&gt;

&lt;p&gt;With that out of the way, let’s write a Sourcery &lt;strong&gt;template&lt;/strong&gt; file that will generate containers for registering our types and performing dependency injections. We will then add a build phase run script that will execute Sourcery and generate our code whenever we build the project.&lt;/p&gt;
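&lt;p&gt;As a rough sketch, such a run script build phase could invoke the Sourcery CLI like this (the paths are placeholders for your own project layout):&lt;/p&gt;

```shell
# Hypothetical Xcode "Run Script" build phase; adjust the paths to your project.
# --sources: where Sourcery scans for our Injectable/Singleton types
# --templates: where our .stencil template file lives
# --output: where the generated container code is written
sourcery \
  --sources ./Sources \
  --templates ./Templates \
  --output ./Sources/Generated
```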

&lt;p&gt;But before it can do that, we need to tell Sourcery which types we want to be injectable and which types we want as singletons. So let’s define two blank protocols: &lt;em&gt;Injectable&lt;/em&gt; and &lt;em&gt;Singleton.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;protocol Injectable {}

protocol Singleton {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then make all our types conform to the &lt;em&gt;Injectable&lt;/em&gt; protocol. On top of that, make our &lt;strong&gt;ShopRepository&lt;/strong&gt; and &lt;strong&gt;ShopService&lt;/strong&gt; conform to &lt;em&gt;Singleton&lt;/em&gt; as well. Our &lt;strong&gt;ShopService&lt;/strong&gt; protocol now looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;protocol ShopService: Injectable, Singleton {
    func getProducts() -&amp;gt; Single&amp;lt;[Product]&amp;gt;
    func purchaseProducts(from shoppingCart: ShoppingCart) -&amp;gt; Single&amp;lt;Bool&amp;gt;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now the fun part. We will write a Sourcery template file which will scan our code for &lt;em&gt;Injectable&lt;/em&gt; and &lt;em&gt;Singleton&lt;/em&gt; types, register them in their respective containers and resolve their implementation class. So &lt;strong&gt;PurchaseProductsUseCase&lt;/strong&gt; will be resolved as &lt;strong&gt;PurchaseProductsUseCaseImpl&lt;/strong&gt;. The template will then scan the initializer parameters of the implementation class for any &lt;em&gt;Injectable&lt;/em&gt; type and perform constructor injection. If it finds a &lt;em&gt;non-injectable&lt;/em&gt; type among the parameters, it will emit a compiler error.&lt;/p&gt;

&lt;p&gt;In other words, we want Sourcery to scan this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;protocol GetProductsUseCase: Injectable {
    func execute() -&amp;gt; Single&amp;lt;[Product]&amp;gt;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;…to generate this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;container.register(GetProductsUseCase.self) {
      GetProductsUseCaseImpl(
          repository: $0.resolve()
      )
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Seems easy enough. So let’s start by writing a macro that will generate constructor injection code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{% macro injectType type %}
    {% if type.initializers.count == 0 %}
            {{ type.name }}()
    {% else %}
        {% for initializer in type.initializers %}
            {{ type.name }}(
                {% for parameter in initializer.parameters %}
                    {% if parameter.type.based.Injectable %}
                {{ parameter.name }}: resolver.resolve(){% if not forloop.last%}, {% endif %}
                    {% else %}
                #error("Cannot inject non-injectable dependency '{{ parameter.name }}' of type '{{ parameter.unwrappedTypeName }}'")
                    {% endif %}
                {% endfor %}
            )
        {% endfor %}
    {% endif %}
{% endmacro %}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is a lot going on here, so let’s go step by step:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We give the macro a name so we can reuse it from several places&lt;/li&gt;
&lt;li&gt;We check whether the type defines any initializers; if not, we simply call its default init&lt;/li&gt;
&lt;li&gt;If it does, we iterate through each initializer’s parameters and check whether each parameter’s type conforms to &lt;em&gt;Injectable&lt;/em&gt;; if one doesn’t, we place a compiler error in its spot&lt;/li&gt;
&lt;li&gt;For each injectable parameter, we call the .resolve() method to resolve and inject its instance&lt;/li&gt;
&lt;li&gt;We append a comma after every parameter except the last one, to keep the generated syntax valid&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If we call this macro for &lt;strong&gt;ShopVM&lt;/strong&gt;, for example, we get this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ShopVM(
    getProductsUseCase: resolver.resolve(), 
    purchaseProductsUseCase: resolver.resolve(), 
    mapper: resolver.resolve()
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Works like magic! That’s why it’s called &lt;strong&gt;Sourcery&lt;/strong&gt;  :)&lt;/p&gt;

&lt;p&gt;This code won’t compile, of course, because we are missing a reference to the &lt;em&gt;resolver&lt;/em&gt;. So let’s continue by adding stencil code for registering our &lt;strong&gt;ShopVM&lt;/strong&gt; class to the container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{% macro registerClass type %}
        // MARK: {{ type.name }}
        container.register({{ type.name }}.self) { resolver in
    {% call injectType type %}
        }
{% endmacro %}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running our template script now will give us this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// MARK: ShopVM
container.register(ShopVM.self) { resolver in
     ShopVM(
          getProductsUseCase: resolver.resolve(), 
          purchaseProductsUseCase: resolver.resolve(), 
          mapper: resolver.resolve()
     )
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is one last piece of the puzzle missing: the container itself.&lt;/p&gt;

&lt;p&gt;So let’s expand our template and add the stencil script for generating our Instance and Singleton containers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/// Provides singletons
enum SingletonContainer {

    static let instance: Container = {
        let container = Container(defaultObjectScope: .container)

{% for type in types.protocols where type.based.Injectable and type.based.Singleton %}
    {% call registerProtocol type %}

{% endfor %}
        return container
    }()
}

/// Provides new instances
enum InstanceContainer {

    static let instance: Container = {
        let container = Container(parent: SingletonContainer.instance, defaultObjectScope: .transient)

{% for type in types.protocols where type.based.Injectable and not type.based.Singleton %}
        {% call registerProtocol type %}

{% endfor %}

{% for type in types.classes where type.based.Injectable and not type.implements.Singleton %}
    {% for inheritedType in type.inheritedTypes %}
        {% if inheritedType == "Injectable" %}
            {% call registerClass type %}
        {% endif %}
    {% endfor %}
{% endfor %}

        return container
    }()
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that when registering classes we need to make sure only classes that directly conform to the &lt;em&gt;Injectable&lt;/em&gt; protocol are included, because classes like &lt;strong&gt;GetProductsUseCaseImpl&lt;/strong&gt; are already registered via the &lt;em&gt;registerProtocol&lt;/em&gt; macro. We are still missing that macro, so let’s add it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{% macro registerProtocol type %}
        // MARK: {{ type.name }}
        container.register({{ type.name }}.self) { resolver in
    {% for impl in types.implementing[type.name] where impl.name|contains:"Impl" %}
        {% call injectType impl %}
    {% endfor %}
        }
{% endmacro %}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, this works by iterating through all types that implement the current protocol and contain “&lt;em&gt;Impl&lt;/em&gt;” in their name. This ensures we can have as many classes conforming to the same protocol as we want in our project, but only one will be registered in and provided by the container.&lt;/p&gt;

&lt;p&gt;Our final stencil template for generating dependency injection code looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import Swinject

{% macro injectType type %}
    {% if type.initializers.count == 0 %}
            {{ type.name }}()
    {% else %}
        {% for initializer in type.initializers %}
            {{ type.name }}(
                {% for parameter in initializer.parameters %}
                    {% if parameter.type.based.Injectable %}
                {{ parameter.name }}: resolver.resolve(){% if not forloop.last%}, {% endif %}
                    {% else %}
                #error("Cannot inject non-injectable dependency '{{ parameter.name }}' of type '{{ parameter.unwrappedTypeName }}'")
                    {% endif %}
                {% endfor %}
            )
        {% endfor %}
    {% endif %}
{% endmacro %}

{% macro registerProtocol type %}
        // MARK: {{ type.name }}
        container.register({{ type.name }}.self) { resolver in
    {% for impl in types.implementing[type.name] where impl.name|contains:"Impl" %}
        {% call injectType impl %}
    {% endfor %}
        }
{% endmacro %}

{% macro registerClass type %}
        // MARK: {{ type.name }}
        container.register({{ type.name }}.self) { resolver in
    {% call injectType type %}
        }
{% endmacro %}

/// Provides singletons
enum SingletonContainer {

    static let instance: Container = {
        let container = Container(defaultObjectScope: .container)

{% for type in types.protocols where type.based.Injectable and type.based.Singleton %}
    {% call registerProtocol type %}

{% endfor %}
        return container
    }()
}

/// Provides new instances
enum InstanceContainer {

    static let instance: Container = {
        let container = Container(parent: SingletonContainer.instance, defaultObjectScope: .transient)

{% for type in types.protocols where type.based.Injectable and not type.based.Singleton %}
        {% call registerProtocol type %}

{% endfor %}

{% for type in types.classes where type.based.Injectable and not type.implements.Singleton %}
    {% for inheritedType in type.inheritedTypes %}
        {% if inheritedType == "Injectable" %}
            {% call registerClass type %}
        {% endif %}
    {% endfor %}
{% endfor %}

        return container
    }()
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running the script will generate the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Generated using Sourcery 1.3.4 — https://github.com/krzysztofzablocki/Sourcery
// DO NOT EDIT
import Swinject




/// Provides singletons
enum SingletonContainer {

    static let instance: Container = {
        let container = Container(defaultObjectScope: .container)

        // MARK: ShopRepository
        container.register(ShopRepository.self) { resolver in
            ShopRepositoryImpl(
                service: resolver.resolve()
            )
        }

        // MARK: ShopService
        container.register(ShopService.self) { resolver in
            ShopServiceImpl()
        }

        return container
    }()
}

/// Provides new instances
enum InstanceContainer {

    static let instance: Container = {
        let container = Container(parent: SingletonContainer.instance, defaultObjectScope: .transient)

        // MARK: GetProductsUseCase
        container.register(GetProductsUseCase.self) { resolver in
            GetProductsUseCaseImpl(
                repository: resolver.resolve()
            )
        }

        // MARK: PurchaseProductsUseCase
        container.register(PurchaseProductsUseCase.self) { resolver in
            PurchaseProductsUseCaseImpl(
                repository: resolver.resolve()
            )
        }

        // MARK: ShopViewMapper
        container.register(ShopViewMapper.self) { resolver in
            ShopViewMapperImpl()
        }


        // MARK: ShopVM
        container.register(ShopVM.self) { resolver in
            ShopVM(
                getProductsUseCase: resolver.resolve(), 
                purchaseProductsUseCase: resolver.resolve(), 
                mapper: resolver.resolve()
            )
        }

        return container
    }()
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that’s it. You no longer need to worry about managing your instances, dependencies and singletons. You just add a dependency to the initializer of an &lt;em&gt;Injectable&lt;/em&gt; class and let &lt;strong&gt;Sourcery&lt;/strong&gt; do the rest. Not only did we add a template that lets Sourcery generate dependency injection code for us, we also learned how to write our own stencil templates.&lt;/p&gt;

&lt;p&gt;Hope this was helpful! You can find the full demo project in the GitHub repo below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/dinocata/dependency-injection-demo"&gt;dinocata/dependency-injection-demo&lt;/a&gt;&lt;/p&gt;




</description>
      <category>dependencyinjection</category>
      <category>swift</category>
      <category>ios</category>
    </item>
    <item>
      <title>Naming things</title>
      <dc:creator>Robert Basic</dc:creator>
      <pubDate>Tue, 18 May 2021 08:38:56 +0000</pubDate>
      <link>https://dev.to/trikoder/naming-things-54c1</link>
      <guid>https://dev.to/trikoder/naming-things-54c1</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AksHZffNufLujRSuWpQu7Cw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AksHZffNufLujRSuWpQu7Cw.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When we develop software, we name things. Things like variables, functions, methods, classes, interfaces, exceptions. Also database, database tables, columns in those tables. We name files our software uses or creates: configuration files, log files, lock files, temporary files… The list goes on.&lt;/p&gt;

&lt;p&gt;And yet, how much thought do we put into naming these “things”? Why should we care?&lt;/p&gt;

&lt;h3&gt;
  
  
  Why is naming important?
&lt;/h3&gt;

&lt;p&gt;When we develop software, we make approximations of problems from the real world. We take those problems and we model them in software. These models help us solve the problems, but they are never perfect. They can’t be, because we lose information in the process of “translating” the real world problem into code. That’s why it’s important for us to preserve, as much as we can, the names of the concepts we are translating into code.&lt;/p&gt;

&lt;p&gt;Good naming is important for the future programmer who will read the code. That future programmer can be anybody, with experiences ranging from none to over 20 years. It can as well be us, the authors of the original code.&lt;/p&gt;

&lt;h3&gt;
  
  
  Context matters
&lt;/h3&gt;

&lt;p&gt;A big problem with naming things is that, when we are naming, we have all the context around that name built up. At that moment, we know why we are choosing that specific name. We’re also writing other code around that name, which gives us additional information and, well, justification, for why we think it is a good name.&lt;/p&gt;

&lt;p&gt;But if we give that code to another programmer, or even if we ourselves revisit it after some time, most of the context that we had when we were writing that code is gone. The name might not be as good anymore like when we were coming up with it.&lt;/p&gt;

&lt;p&gt;For that reason we have to consider what information will be available when reading the code, how the lack of the context we take for granted when writing, will affect the meaning of the name we chose.&lt;/p&gt;

&lt;p&gt;When we’re coming up with names for things in our code base, it’s helpful to “switch” our mindset from writing code to reading code. Take a look at the names with this “reader” mindset and consider is the name giving answers to the whys, whats, and hows, or is it just creating an even longer list of questions?&lt;/p&gt;

&lt;h3&gt;
  
  
  Where will we use it?
&lt;/h3&gt;

&lt;p&gt;It is also important to consider where in our code base will we use the thing we are naming?&lt;/p&gt;

&lt;p&gt;Imagine we’re writing a repository to find a list of products from the database. We create an interface like this for it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/** [@return](http://twitter.com/return) Product[] */ 
ProductRepository::find($filter): array
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Looks okay, and makes sense at the moment of writing this code. Later on we, or someone else, write some other code that uses our repository of products:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$products = $this-&amp;gt;repository-&amp;gt;find($filter);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When we write this code, we still know what it does. But let’s switch our mindset to “reading” code. There are at least three different questions that stand out: what repository are we working with, what are we finding, and by what criteria? The &lt;code&gt;$products&lt;/code&gt; variable can give us a hint, a suggestion, but we need to double check to be sure.&lt;/p&gt;

&lt;p&gt;A better line would be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$activeProductsInTimePeriod = $this-&amp;gt;productRepository-&amp;gt;find($filterActiveProductsInTimePeriod);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we don’t have to guess or look at other code to know that we’re finding active products in a given time period. Someone will argue that the names are too long, or that InTimePeriod appears twice in one line. Yes, but it appears twice only in this one line; we don’t know where else the $filterActiveProductsInTimePeriod or the $activeProductsInTimePeriod variables will be used. In every other line where they appear, these “long” names will carry enough context and information that the reader will have no, or very few, questions about our code.&lt;/p&gt;

&lt;h3&gt;
  
  
  Make the names searchable
&lt;/h3&gt;

&lt;p&gt;When naming things, we also have to consider that at some point we will want to search for that name across the code base. How unique is the name, how easy it is to find it among other similarly named things? Going back to our product repository example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$products = $this-&amp;gt;repository-&amp;gt;find($filter);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All four names are hard to search for: products, repository, find, filter. They are not unique in any way.&lt;/p&gt;

&lt;p&gt;If we look at the example with the improved namings:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$activeProductsInTimePeriod = $this-&amp;gt;productRepository-&amp;gt;find($filterActiveProductsInTimePeriod);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here only the find method sticks out as not unique enough, so we should perhaps look for a name that is easier to search for.&lt;/p&gt;

&lt;p&gt;There’s much more to naming things, and to naming them well. To wrap up, I want to leave you with a good presentation on naming things by Peter Hilton: &lt;a href="https://www.youtube.com/watch?v=SctS56YQ6fg" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=SctS56YQ6fg&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What’s your biggest challenge in naming things?&lt;/p&gt;




</description>
      <category>naming</category>
    </item>
    <item>
      <title>Test doubles</title>
      <dc:creator>Robert Basic</dc:creator>
      <pubDate>Thu, 18 Mar 2021 07:46:20 +0000</pubDate>
      <link>https://dev.to/trikoder/test-doubles-3dhm</link>
      <guid>https://dev.to/trikoder/test-doubles-3dhm</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7Zck29C---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AoYaZwbfanWTKXMWAy8qBPQ.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7Zck29C---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AoYaZwbfanWTKXMWAy8qBPQ.jpeg" alt=""&gt;&lt;/a&gt;Photo by &lt;a href="https://unsplash.com/@antoinepeltier?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Antoine Peltier&lt;/a&gt; on &lt;a href="/?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is easy to be a proponent of classical TDD or mockist TDD when we are starting to develop a new application, but what should we do when we inherit an application where the code is already written and the tests are nowhere to be found? My career as a software consultant gave me the opportunity to work on applications where design patterns, best practices, dependency injection, and everything else that makes a code base easy to work with are wishful thinking. The “easy” answer in those cases would be, of course: let’s rewrite the entire application from scratch. But if an application enables a business to earn a living and has users using it daily, then a complete rewrite is rarely the correct answer, easy or not.&lt;/p&gt;

&lt;p&gt;After a thorough investigation of the existing code base, one of the first things we should do is try to cover the existing functionality we need to work on with tests. Integration tests, unit tests, end-to-end tests: the more types of tests we can write, the better. For integration and end-to-end tests we should mostly focus on having a test database and a good set of fixture files, while for unit tests we’ll probably need to create test doubles for the dependencies. These tests should support us while we are trying to learn, understand, and improve the existing code base, and they should change over time along with the code we are improving.&lt;/p&gt;

&lt;h3&gt;
  
  
  Test Doubles
&lt;/h3&gt;

&lt;p&gt;But what are these test doubles? They are objects that, in testing, can replace the real objects that would otherwise be created and used during the real execution of the application. Using test doubles in our tests, we can isolate the unit under test from its dependencies, mimic classes that we haven’t even written yet, discover the APIs of our classes by exploring how they would interact without worrying about their implementation details, and keep the test suite fast, since calls to databases or HTTP endpoints are replaced with these doubles.&lt;/p&gt;

&lt;p&gt;The process of creating a test double is called “mocking”, and the term “mock object”, or “mock”, is sometimes used instead of “test double”, even though a mock object is only one type of test double. The types of test doubles we can create are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dummies&lt;/li&gt;
&lt;li&gt;Fakes&lt;/li&gt;
&lt;li&gt;Stubs&lt;/li&gt;
&lt;li&gt;Mocks&lt;/li&gt;
&lt;li&gt;Spies&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They all have their place in the world of unit testing, regardless if we are working on a green-field project applying the classical or the mockist TDD process, or if we are working on a legacy application that is difficult to test.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dummy&lt;/strong&gt; is a type of test double that is created only to be passed around, but its methods are never actually called by any of the code that we are testing. It can be created manually or with a mocking framework. It’s most often used to fulfill the argument list for a method call.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fake&lt;/strong&gt; is a test double that is always created manually and is a simplified implementation of the same API as the real “thing”. An example of a fake would be an in-memory database.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stub&lt;/strong&gt; is a test double that is used in the place of a real object, when we only need the test double to return a predefined result so that the code under test can be brought into a working state. It can be created manually, but a mocking framework should be used to help speed up the process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mock&lt;/strong&gt; is the “big brother” of the stub. Besides setting a predefined return value, we can also set up expectations about how the methods on the mock object should be called, with what arguments, and in what order. Mocks are most often used in the mockist style of TDD, and whenever we are interested in how the unit we are testing interacts with its (mocked out) dependencies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Spy&lt;/strong&gt; is a type of test double that records the interactions between it and the unit that we are testing and allows us to verify the method calls we’re interested in at the end. This approach makes it possible for the unit test to follow more closely the Arrange-Act-Assert style of writing unit tests.&lt;/p&gt;
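&lt;p&gt;To make the last two types concrete, here is a minimal, hypothetical sketch of a hand-written stub and spy (shown in Swift rather than PHP, purely to keep the example self-contained; all the names are made up):&lt;/p&gt;

```swift
// A protocol our unit under test depends on.
protocol PriceService {
    func price(productId: String) -> Double
}

// Stub: returns a canned value so the unit under test can do its work.
struct PriceServiceStub: PriceService {
    let cannedPrice: Double
    func price(productId: String) -> Double { cannedPrice }
}

// Spy: records every interaction so the test can verify the calls afterwards,
// following the Arrange-Act-Assert style.
final class PriceServiceSpy: PriceService {
    private(set) var requestedProductIds: [String] = []
    func price(productId: String) -> Double {
        requestedProductIds.append(productId)
        return 0
    }
}

// The unit under test, depending only on the protocol.
struct CartTotaler {
    let priceService: PriceService
    func total(productIds: [String]) -> Double {
        productIds.reduce(0) { $0 + priceService.price(productId: $1) }
    }
}

// Stub usage: we only care about the resulting state.
let stubbedTotal = CartTotaler(priceService: PriceServiceStub(cannedPrice: 10))
    .total(productIds: ["a", "b"])
print(stubbedTotal) // 20.0

// Spy usage: we care about how the dependency was called.
let spy = PriceServiceSpy()
_ = CartTotaler(priceService: spy).total(productIds: ["a", "b"])
print(spy.requestedProductIds) // ["a", "b"]
```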

&lt;h3&gt;
  
  
  Libraries for test doubles in PHP
&lt;/h3&gt;

&lt;p&gt;In PHP we have several libraries to help us create test doubles.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://phpunit.de/"&gt;PHPUnit&lt;/a&gt;, the most popular and most used testing framework in PHP, has its own built in support for &lt;a href="https://phpunit.readthedocs.io/en/9.5/test-doubles.html"&gt;test doubles&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/phpspec/prophecy"&gt;Prophecy&lt;/a&gt; is a framework for creating test doubles that was initially built for the requirements of phpspec, but it can be used with any other PHP testing framework. Since PHPUnit 4.5 it bundles Prophecy within PHPUnit itself, but &lt;a href="https://github.com/sebastianbergmann/phpunit/issues/4141"&gt;as of PHPUnit 9.x this bundling is deprecated&lt;/a&gt; and set to be removed in PHPUnit 10.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/mockery/mockery"&gt;Mockery&lt;/a&gt; is another framework for creating test doubles. It can be used with PHPUnit, phpspec, Behat, or any other testing framework. I find it especially powerful when working with legacy code, due to its support for &lt;a href="http://docs.mockery.io/en/latest/reference/partial_mocks.html"&gt;creating partial mocks&lt;/a&gt; or &lt;a href="http://docs.mockery.io/en/latest/cookbook/mocking_hard_dependencies.html"&gt;mocking hard dependencies&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It is also possible to &lt;a href="https://blog.frankdejonge.nl/testing-without-mocking-frameworks/"&gt;test without mocking frameworks&lt;/a&gt;, while still using some types of test doubles.&lt;/p&gt;
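&lt;p&gt;For example, a stub can simply be a small hand-written class created just for the test, with no framework involved; the &lt;code&gt;UserRepository&lt;/code&gt; and &lt;code&gt;GreetingService&lt;/code&gt; names here are invented for illustration:&lt;/p&gt;

```php
interface UserRepository
{
    public function findName(int $id): ?string;
}

// A hand-written stub: a tiny in-memory implementation returning fixed data.
class InMemoryUserRepository implements UserRepository
{
    public function __construct(private array $names) {}

    public function findName(int $id): ?string
    {
        return $this->names[$id] ?? null;
    }
}

// Unit under test.
class GreetingService
{
    public function __construct(private UserRepository $users) {}

    public function greet(int $id): string
    {
        $name = $this->users->findName($id);
        return $name === null ? 'Hello, stranger!' : "Hello, {$name}!";
    }
}

$service = new GreetingService(new InMemoryUserRepository([7 => 'Ada']));
assert($service->greet(7) === 'Hello, Ada!');
assert($service->greet(8) === 'Hello, stranger!'); // the "missing record" scenario
```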

&lt;p&gt;Even though test doubles are helpful when writing unit tests, we need to use them sparingly. Test doubles require additional maintenance, and if we overuse them the quality of our tests can decrease.&lt;/p&gt;

&lt;p&gt;What’s your take on test doubles? Love ’em or hate ’em? Let me know in the comments.&lt;/p&gt;




</description>
      <category>testdrivendevelopmen</category>
      <category>testdoubles</category>
      <category>tdd</category>
    </item>
    <item>
      <title>Test driven development</title>
      <dc:creator>Robert Basic</dc:creator>
      <pubDate>Thu, 18 Feb 2021 08:14:11 +0000</pubDate>
      <link>https://dev.to/trikoder/test-driven-development-26k</link>
      <guid>https://dev.to/trikoder/test-driven-development-26k</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---ZSFgdQL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2APHnrUzWPd32hsnPfr3U8IQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---ZSFgdQL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2APHnrUzWPd32hsnPfr3U8IQ.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As software developers, we should aim to write applications that deliver great business value to our clients, solve real problems for the users, and are of high quality and free of defects. In short, we should aim to write bug-free software.&lt;/p&gt;

&lt;p&gt;Due to errors in communication with our clients or within our teams, unforeseen circumstances in which end users can and will use our applications, negligence, or simply a lack of knowledge and skills, hardly any software we write is 100% free of issues.&lt;/p&gt;

&lt;p&gt;An industry standard for increasing the quality of the software we write is unit testing, a method of testing that focuses on a single unit of our application and verifies that the unit works correctly under different circumstances. These checks can range from verifying that the outputs of these units are correct for different inputs, to making sure that the units can handle different scenarios during actual production use, such as missing database records or unreachable 3rd party HTTP APIs.&lt;/p&gt;
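&lt;p&gt;As a quick illustration, here is a tiny unit checked against several inputs using plain &lt;code&gt;assert()&lt;/code&gt; calls; the &lt;code&gt;slugify()&lt;/code&gt; function is just an example unit, and a real suite would use a testing framework such as PHPUnit:&lt;/p&gt;

```php
// Example unit: turn a title into a URL-friendly slug.
function slugify(string $title): string
{
    $slug = strtolower(trim($title));
    $slug = preg_replace('/[^a-z0-9]+/', '-', $slug);
    return trim($slug, '-');
}

// Same unit, different inputs; the expected output is verified each time.
assert(slugify('Hello World') === 'hello-world');
assert(slugify('  PHP 8.0!  ') === 'php-8-0');
assert(slugify('') === '');
```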

&lt;p&gt;A number of studies have been conducted and published on the positive effects of automated unit testing on software quality:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://link.springer.com/article/10.1007/s10664-008-9062-z"&gt;https://link.springer.com/article/10.1007/s10664-008-9062-z&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://link.springer.com/chapter/10.1007/978-3-642-01853-4_4"&gt;https://link.springer.com/chapter/10.1007/978-3-642-01853-4_4&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://link.springer.com/chapter/10.1007/978-3-319-03602-1_10"&gt;https://link.springer.com/chapter/10.1007/978-3-319-03602-1_10&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://ieeexplore.ieee.org/abstract/document/5362086/"&gt;http://ieeexplore.ieee.org/abstract/document/5362086/&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These unit tests, when automated, are the basis of a software development process called Test Driven Development (TDD). With TDD we first write the tests for our software, then we run the complete test suite to make sure that the new tests fail, followed by writing just enough code to make those tests pass. We then repeat this entire process until the feature we’re implementing is complete. This approach to software development shortens the development cycle, and the bugs that come from misunderstanding requirements or from programmer errors are caught early. TDD also tends to drive the programmer towards cleaner code and the usage of design patterns, because code that is made easy to test is also easier for other programmers to understand and maintain in the future.&lt;/p&gt;
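&lt;p&gt;The cycle above can be sketched with a classic toy example; &lt;code&gt;fizzbuzz()&lt;/code&gt; is purely illustrative, driven into existence one failing test at a time:&lt;/p&gt;

```php
// Step 1 (red): write a test first; fizzbuzz() does not exist yet, so it fails.
// Step 2 (green): write just enough code to make that test pass.
// Step 3: refactor, keeping the tests green, then repeat with the next case.

function fizzbuzz(int $n): string
{
    if ($n % 15 === 0) {
        return 'FizzBuzz';
    }
    if ($n % 3 === 0) {
        return 'Fizz';
    }
    if ($n % 5 === 0) {
        return 'Buzz';
    }
    return (string) $n;
}

// The tests that drove the implementation above, one small step at a time.
assert(fizzbuzz(1) === '1');
assert(fizzbuzz(3) === 'Fizz');
assert(fizzbuzz(5) === 'Buzz');
assert(fizzbuzz(15) === 'FizzBuzz');
```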

&lt;p&gt;What the actual units of these tests are is usually up for debate, but in the case of Object Oriented Programming (OOP) the unit is most often a single class in our code base. Other units could be methods in our classes, or even a group of classes that form a single module. For the sake of this article, we will assume that when we are talking about a “unit”, we mean a single class.&lt;/p&gt;

&lt;h3&gt;
  
  
  Schools of TDD
&lt;/h3&gt;

&lt;p&gt;When developing applications using the TDD process, there are two schools, or approaches, we can take when writing our code. The first, older school is classical TDD, or Chicago style TDD. The other, newer one is mockist TDD, or London style TDD. Both of these schools have their advantages and disadvantages, and which one is used is pretty much up to the developer or the team to decide. They can also be mixed; it is not unheard of to use one style in developing one part of the application and the other style in other parts. We should always use the right tool for the right job, after all.&lt;/p&gt;

&lt;p&gt;When going with the classical TDD approach, the code is usually developed from the inside out. These tests are good when we know in advance what the classes and their methods are and how they integrate with each other. This allows us to focus on one thing at a time: the actual unit being developed. The tests verify the state of the unit after the test has run, not the communication between the different objects used within the unit. The usage of mock objects in the classical TDD approach is usually frowned upon, and when the unit being tested requires a dependency, the actual implementation of that dependency is used in the test. This requires us to write the innermost dependencies first and then branch out from there, hence the “inside out” approach.&lt;/p&gt;
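&lt;p&gt;A classical-style test might look like this sketch (&lt;code&gt;TaxCalculator&lt;/code&gt; and &lt;code&gt;Invoice&lt;/code&gt; are invented examples): the real dependency is written first and wired into the test, and the assertion checks the resulting state rather than any interactions:&lt;/p&gt;

```php
// The innermost dependency is written first, and its real implementation
// is used in the test; no test doubles involved.
class TaxCalculator
{
    public function taxFor(int $amountInCents): int
    {
        return intdiv($amountInCents * 25, 100); // flat 25% rate, for illustration
    }
}

// Unit under test, developed after its dependency.
class Invoice
{
    private int $totalInCents = 0;

    public function __construct(private TaxCalculator $calculator) {}

    public function addLine(int $amountInCents): void
    {
        $this->totalInCents += $amountInCents + $this->calculator->taxFor($amountInCents);
    }

    public function total(): int
    {
        return $this->totalInCents;
    }
}

// The test wires in the real TaxCalculator and verifies the resulting state.
$invoice = new Invoice(new TaxCalculator());
$invoice->addLine(1000);
assert($invoice->total() === 1250);
```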

&lt;p&gt;The mockist TDD approach allows us to develop our code from the outside in. These tests are good when we want to take a “discovery” path down our code base. The dependencies of the unit being tested are “mocked out”: mock objects are created that mimic the behavior of the real dependency. The actual implementations of the dependencies we are mocking can be written later. This leaves us the opportunity to start developing from the outermost layer and work our way in, discovering the APIs of our dependencies along the way, hence the “outside in” approach.&lt;/p&gt;
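&lt;p&gt;A mockist-style sketch of the same idea (&lt;code&gt;RateProvider&lt;/code&gt; and &lt;code&gt;PriceConverter&lt;/code&gt; are invented names): the dependency exists only as an interface we discovered while writing the outer unit, and a hand-rolled mock stands in for the implementation we will write later:&lt;/p&gt;

```php
// Outside-in: we start from the outer unit and describe the API of a
// dependency that does not exist yet as an interface.
interface RateProvider
{
    public function rateFor(string $currency): float;
}

// Unit under test, written before any real RateProvider exists.
class PriceConverter
{
    public function __construct(private RateProvider $rates) {}

    public function convert(float $amount, string $currency): float
    {
        return $amount * $this->rates->rateFor($currency);
    }
}

// The mock mimics the real provider, which can be implemented later;
// it also records the calls so the test can verify the interaction.
class RateProviderMock implements RateProvider
{
    public array $requested = [];

    public function rateFor(string $currency): float
    {
        $this->requested[] = $currency;
        return 2.0; // canned rate
    }
}

$mock = new RateProviderMock();
$converter = new PriceConverter($mock);
assert($converter->convert(10.0, 'EUR') === 20.0);
assert($mock->requested === ['EUR']); // the interaction we expected
```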

&lt;p&gt;Both classical TDD and mockist TDD have their place in the software development process, and their strengths and weaknesses must be considered when we choose which approach to take when working on a particular piece of the application.&lt;/p&gt;




</description>
      <category>testdrivendevelopmen</category>
      <category>testdoubles</category>
      <category>tdd</category>
    </item>
  </channel>
</rss>
