<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Denis Anikin</title>
    <description>The latest articles on DEV Community by Denis Anikin (@xfenix).</description>
    <link>https://dev.to/xfenix</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F713839%2Fc4582327-6ed5-4086-8820-684a925bb601.png</url>
      <title>DEV Community: Denis Anikin</title>
      <link>https://dev.to/xfenix</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/xfenix"/>
    <language>en</language>
    <item>
      <title>Poetry &amp; setuptools trouble fix</title>
      <dc:creator>Denis Anikin</dc:creator>
      <pubDate>Tue, 20 Sep 2022 19:49:22 +0000</pubDate>
      <link>https://dev.to/xfenix/poetry-setuptools-trouble-fix-16f4</link>
      <guid>https://dev.to/xfenix/poetry-setuptools-trouble-fix-16f4</guid>
      <description>&lt;p&gt;If you encounter a problem similar to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;from pkg_resources import iter_entry_points
ModuleNotFoundError: No module named &lt;span class="s1"&gt;'pkg_resources'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is a chance that you are using Poetry and your lock file has a setuptools dependency, yet when you install your dependencies, setuptools still goes missing somewhere. So how do you fix this?&lt;br&gt;
It is very simple, but a bit counterintuitive: just change your build-system as follows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;...
[build-system]
requires = ["setuptools", "poetry_core&amp;gt;=1.0"]
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
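
&lt;p&gt;For context, a complete &lt;code&gt;[build-system]&lt;/code&gt; table usually also names the build backend. A minimal sketch (the version pin is only an example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;[build-system]
# setuptools listed explicitly so pkg_resources is always available
requires = ["setuptools", "poetry_core&amp;gt;=1.0"]
build-backend = "poetry.core.masonry.api"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;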



</description>
      <category>poetry</category>
      <category>python</category>
      <category>pyproject</category>
    </item>
    <item>
      <title>Pooling in aioredis/redis-py may be dangerous</title>
      <dc:creator>Denis Anikin</dc:creator>
      <pubDate>Mon, 06 Jun 2022 00:21:04 +0000</pubDate>
      <link>https://dev.to/xfenix/pooling-in-aioredis-may-be-dangerous-36pk</link>
      <guid>https://dev.to/xfenix/pooling-in-aioredis-may-be-dangerous-36pk</guid>
      <description>&lt;p&gt;This story happened to me a couple of days ago. It taught me a few things, and I'm ready to share them: you will recognize these lessons by the bold highlights throughout the text. The story may look terrifying or entertaining to you, depending on your point of view.&lt;/p&gt;

&lt;p&gt;It was a regular Friday. I was prepared to spend the day as usual: meetings, since I am a certified Zoom expert (team lead), plus some coding, because, you know, I'm still a programmer too.&lt;/p&gt;

&lt;p&gt;Suddenly, I heard a loud boom from production: our beautiful microservice, distributed, event-driven, almost reactive architecture said «whoops, a couple of services are dead, bye, this was fun». What followed were 14 head-crushing hours of Zoom mob debugging, which led us to an impressive conclusion: the &lt;a href="https://docs.keydb.dev/"&gt;keydb&lt;/a&gt; (a performance-oriented Redis fork) cluster was dead and couldn't be restarted properly, because every time we started a new master node, it received tens of thousands of connections from our services in the blink of an eye. Then came connection limit overflow and… a dead master node. And it couldn't be fixed with a reboot.&lt;/p&gt;

&lt;p&gt;And here we must start the list of things I learned that day. The &lt;strong&gt;first&lt;/strong&gt; point of my story: keydb has really strange mechanics for handling connection overflow. It can crash the database. I don't know how to properly handle this kind of situation other than increasing the maximum connection count on the server side and limiting the pool size on the client side. But perhaps it will lead you to your own conclusions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Second&lt;/strong&gt;: you should monitor all connections in your cluster/services, regardless of their type, especially when any kind of database is involved. We didn't, and it cost us a lot of stress.&lt;/p&gt;
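
&lt;p&gt;As a starting point, here is a minimal monitoring sketch (connection details are assumptions); keydb is Redis-compatible, so the &lt;code&gt;INFO&lt;/code&gt; command works the same way, and &lt;code&gt;connected_clients&lt;/code&gt; is the counter you want to graph and alert on:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import redis

# Poll the server's client count; in production you would export
# this to your metrics system instead of printing it.
client = redis.Redis(host="localhost", port=6379)
print(client.info("clients")["connected_clients"])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;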

&lt;p&gt;So, back to our tragic story. Why did we get so many connections on each restart? It looked insane: our project does not handle many requests per second, about 40-50 at peak times. I must admit, the numbers get multiplied by the «hops» between microservices, but that can't produce tens of thousands of simultaneous connections no matter what. So, how and why?&lt;/p&gt;

&lt;p&gt;Well, it was hard to determine, but we found a «superposition of mistakes», as I call such things.&lt;br&gt;
First, there was the &lt;a href="https://github.com/aio-libs/aioredis-py"&gt;aioredis&lt;/a&gt; library. We use a sentinel-based client because it gives us easy failover. Aioredis spawns a pool of connections that transparently reconnects (and here is the &lt;strong&gt;third&lt;/strong&gt; lesson: FOREVER, hello DDoS) to our sentinel nodes, and then to the master node. It is supposed to do so. We also found that if you don't limit the maximum connection count, the library will do it for you and set it to 2 ** 31 (&lt;a href="https://github.com/aio-libs/aioredis-py/blob/master/aioredis/connection.py#L1310"&gt;here you can see it&lt;/a&gt;), which is the &lt;strong&gt;fourth&lt;/strong&gt; lesson. Furthermore, the pool in our version (2.0.1) does not close automatically, which makes the problem worse.&lt;/p&gt;

&lt;p&gt;Upd. If you are concerned about pool auto-closing/cleanup, check out &lt;a href="https://github.com/redis/redis-py"&gt;redis-py&lt;/a&gt;: it has now absorbed aioredis, and this issue is being solved there. It is a complete drop-in replacement for aioredis itself:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;redis&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;aioredis&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, back to our code. As if all this was not enough for us, another piece of the puzzle lay in the simple fact that we try to adopt defensive programming techniques. Because of them, we used the connection pool in a very specific way: as a context manager, creating a new connection pool each time in each coroutine. On top of that, we added the &lt;a href="https://github.com/litl/backoff"&gt;backoff&lt;/a&gt; library everywhere we work with keydb. In other words, the DDoS power unleashed on our database was just epic, thanks to our good intentions, of course. And poor keydb couldn't handle it in any way, I guess.&lt;/p&gt;
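
&lt;p&gt;To make the anti-pattern concrete, here is a simplified sketch (not our production code; the names, the URL and the pool limit are assumptions, and it relies on a recent redis-py):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from redis import asyncio as aioredis

# Anti-pattern (roughly what we did): every coroutine opens its own
# client and pool, so every call and every backoff retry creates
# brand-new connections to the server.
async def fetch_bad(key):
    async with aioredis.from_url("redis://localhost") as client:
        return await client.get(key)

# Safer shape: one shared client with a bounded pool that all
# coroutines reuse.
SHARED = aioredis.from_url("redis://localhost", max_connections=50)

async def fetch_good(key):
    return await SHARED.get(key)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;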

&lt;p&gt;A couple more words about aioredis/redis-py pooling. Let's look at code extracted from the library:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="bp"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;connection_class&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Type&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;Connection&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Connection&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;max_connections&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Optional&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;connection_kwargs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;max_connections&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;max_connections&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="mi"&gt;31&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="nb"&gt;isinstance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;max_connections&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="n"&gt;max_connections&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="nb"&gt;ValueError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'"max_connections" must be a positive integer'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Be prepared: there is not a word in the documentation about the &lt;code&gt;max_connections&lt;/code&gt; option. But you should pass it if you don't want the kind of trouble we got:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;sentinel&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;aioredis&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sentinel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Sentinel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;[(&lt;/span&gt;&lt;span class="s"&gt;"localhost"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;26379&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"sentinel2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;26379&lt;/span&gt;&lt;span class="p"&gt;)],&lt;/span&gt;
    &lt;span class="n"&gt;max_connections&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
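
&lt;p&gt;For completeness, the client is then obtained per service. A hedged sketch: the service name &lt;code&gt;mymaster&lt;/code&gt; below is an assumption, use whatever your sentinel configuration declares:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# master_for returns a client backed by a sentinel-managed pool;
# the kwargs above, including max_connections, apply to it.
master = sentinel.master_for("mymaster")
await master.set("key", "value")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;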



&lt;p&gt;Moreover, while we were trying to stop the fire from spreading, the crashed consumer that held our keydb-related code was unable to handle (it's hard to handle anything when you are dead) around 100k messages from Kafka. This amplified the DDoS even further and blocked us from fixing the service.&lt;/p&gt;

&lt;p&gt;You may ask: how did we fix this horrible mess? We wrote another consumer that handled those messages from Kafka (it just threw them away, as they were unimportant), patched the library, and revived our shiny master. It sounds boring, but it was hard as hell.&lt;/p&gt;

&lt;p&gt;I hope the lessons from this article make your life easier if you ever come across similar architectures and circumstances. Peace!&lt;/p&gt;

&lt;p&gt;Upd. A colleague of mine opened an issue in redis-py (aioredis has been merged into this project recently): &lt;a href="https://github.com/redis/redis-py/issues/2220"&gt;https://github.com/redis/redis-py/issues/2220&lt;/a&gt;. Please vote for this issue; it is a serious problem.&lt;/p&gt;

</description>
      <category>redis</category>
      <category>asyncpython</category>
      <category>microservices</category>
      <category>story</category>
    </item>
    <item>
      <title>Simple and stupid way to reload k8s pods regularly</title>
      <dc:creator>Denis Anikin</dc:creator>
      <pubDate>Mon, 16 May 2022 23:43:22 +0000</pubDate>
      <link>https://dev.to/xfenix/how-to-reload-k8s-pods-on-regular-basis-134n</link>
      <guid>https://dev.to/xfenix/how-to-reload-k8s-pods-on-regular-basis-134n</guid>
      <description>&lt;p&gt;If you are not a principal Kubernetes expert, someday you may find yourself struggling with the question «how can I restart my pods regularly?».&lt;br&gt;
This question may look very strange. And partially it is.&lt;br&gt;
But before you close this article, let me try to justify this need. &lt;/p&gt;

&lt;p&gt;Very often in our backend developer careers we write not-so-good code. It may look good, be validated via static analysis, covered with tests, mutation tests, and fuzz tests; your team may have thousands of QA engineers and many more safety nets around. And all of these things fail miserably against one little bug that cannot be reproduced in any environment, no matter how hard you try. It can be even worse: the bug may only reproduce after a certain period of time. This is the nightmare of every software engineer.&lt;br&gt;
But production services won't wait for your detective efforts. They simply can't wait an unpredictable period of time.&lt;br&gt;
And here we are, staring at the weapon of last resort. Yep, it is a shameful choice: restart your service on a regular basis. And yes, this «solution» will easily suppress that impudent irreproducible bug.&lt;/p&gt;

&lt;p&gt;If you search for possible ways of doing this in a k8s cluster, you will find about five of them. I personally chose only one. It may also look strange, but it's the easiest way: just use scheduled CI jobs (if you are using GitLab, for example) and run &lt;code&gt;kubectl rollout restart&lt;/code&gt; in that scheduled job.&lt;/p&gt;

&lt;p&gt;This is the easiest, fastest, and most bug-free path; a sketch of such a job follows.&lt;/p&gt;
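
&lt;p&gt;A minimal sketch of a scheduled GitLab CI job (the job name, stage, image and deployment name are assumptions, adjust them for your setup):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;restart-pods:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl rollout restart deployment/my-service
  rules:
    # run only from a pipeline schedule (set one up in CI/CD settings)
    - if: $CI_PIPELINE_SOURCE == "schedule"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;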

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
      <category>simpleandstupid</category>
    </item>
    <item>
      <title>Why isn't the coverage badge in GitLab updating?</title>
      <dc:creator>Denis Anikin</dc:creator>
      <pubDate>Fri, 19 Nov 2021 18:29:32 +0000</pubDate>
      <link>https://dev.to/xfenix/why-coverage-badge-in-gitlab-is-not-showing-49bh</link>
      <guid>https://dev.to/xfenix/why-coverage-badge-in-gitlab-is-not-showing-49bh</guid>
      <description>&lt;p&gt;Sometimes the coverage badge does not update at all and remains stuck at a value from the past.&lt;br&gt;
Earlier, an old bug in GitLab led to this situation. But those days are gone and the bug was fixed, yet articles all over the internet mostly repeat this now-irrelevant information. So I wrote this article in hopes of adding a piece of valuable information.&lt;/p&gt;

&lt;p&gt;Search your GitLab setup for a «blocked» pipeline: exactly this type of pipeline is the main reason for a stale coverage score.&lt;br&gt;
In earlier versions of GitLab, it was easy to get stuck in this situation. A single job with the following configuration in your pipeline, and your coverage gets pinned at one value:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;my-job&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;manual&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To fix this you need to add only one line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;my-job&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;manual&lt;/span&gt;
    &lt;span class="na"&gt;allow_failure&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And poof, you are in luck: your coverage badge is freed from the evil witch's curse and updates as it should.&lt;/p&gt;

&lt;p&gt;Recently the GitLab folks fixed that behavior. How exactly? They simply made &lt;code&gt;allow_failure: true&lt;/code&gt; the default for manual jobs. This is a very good decision, because otherwise a single manual step can ruin the whole CI/CD prosperity of your project.&lt;br&gt;
But if you think that you are now completely free of this trouble, I may change your opinion.&lt;br&gt;
Just write anything like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;my-job&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;manual&lt;/span&gt;
    &lt;span class="na"&gt;allow_failure&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and poof once again: you are stuck with a never-updating coverage badge. The fix is obvious: set allow_failure to «true» or strip this line. But I have no answer to a simple question: what should we do if we need manual jobs with failure disallowed and an updating coverage badge? Probably, this question should be addressed directly to GitLab.&lt;/p&gt;

&lt;p&gt;This is not easy to glean from the official documentation, so I decided to write this article. I hope it helped.&lt;/p&gt;

</description>
      <category>gitlab</category>
      <category>devops</category>
      <category>lifehack</category>
    </item>
    <item>
      <title>Fresh CRA + React.lazy + code splitting: why isn't it working as it should?</title>
      <dc:creator>Denis Anikin</dc:creator>
      <pubDate>Fri, 19 Nov 2021 17:12:45 +0000</pubDate>
      <link>https://dev.to/xfenix/fresh-cra-reactlazy-code-splitting-why-is-not-working-as-it-should-j5f</link>
      <guid>https://dev.to/xfenix/fresh-cra-reactlazy-code-splitting-why-is-not-working-as-it-should-j5f</guid>
      <description>&lt;p&gt;If you ever come across this topic and find yourself stuck on «why isn't my bundle splitting into chunks, even though I'm doing everything as told» after reading the documentation, please &lt;strong&gt;check&lt;/strong&gt; that the component used inside the classic construction&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;React.lazy(() =&amp;gt; import('./MyFancyApp'))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;is not imported anywhere else&lt;/strong&gt; in your application!&lt;br&gt;
Yeah, it's that easy; but not so easy when you start debugging right now and searching for information online, because the whole internet is flooded with identical articles and standard recipes.&lt;/p&gt;

&lt;p&gt;In our team's case, the culprit was index.tsx barrel files with re-exports, like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;// index.tsx
export * from "./MyFancyApp"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Remove these re-exports, or replace the eager imports with the lazy implementation, and splitting starts working as it should. A minimal working sketch follows.&lt;/p&gt;
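
&lt;p&gt;A hedged sketch of the fixed setup (file layout and names are assumptions; the point is that &lt;code&gt;MyFancyApp&lt;/code&gt; is only ever imported through &lt;code&gt;React.lazy&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;// App.tsx
import React, { Suspense } from "react";

// MyFancyApp must be the default export of ./MyFancyApp, and no
// other, eager import of it may exist anywhere else, otherwise the
// chunk gets pulled into the main bundle again.
const MyFancyApp = React.lazy(() =&amp;gt; import("./MyFancyApp"));

export function App() {
  return (
    &amp;lt;Suspense fallback={&amp;lt;div&amp;gt;Loading…&amp;lt;/div&amp;gt;}&amp;gt;
      &amp;lt;MyFancyApp /&amp;gt;
    &amp;lt;/Suspense&amp;gt;
  );
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;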

</description>
      <category>react</category>
      <category>codesplitting</category>
      <category>cra</category>
      <category>typescript</category>
    </item>
  </channel>
</rss>
