<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ismayil Mirzali</title>
    <description>The latest articles on DEV Community by Ismayil Mirzali (@xs).</description>
    <link>https://dev.to/xs</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F387543%2F87c27db2-221d-4c37-bbdb-d28815ed5baf.jpeg</url>
      <title>DEV Community: Ismayil Mirzali</title>
      <link>https://dev.to/xs</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/xs"/>
    <language>en</language>
    <item>
      <title>Setting up Rust on macOS in a clean way</title>
      <dc:creator>Ismayil Mirzali</dc:creator>
      <pubDate>Sun, 10 Jul 2022 13:11:19 +0000</pubDate>
      <link>https://dev.to/xs/setting-up-rust-on-macos-in-a-clean-way-13d1</link>
      <guid>https://dev.to/xs/setting-up-rust-on-macos-in-a-clean-way-13d1</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;When I first wanted to get started with Rust, I was a bit confused about the recommended way to install its toolchain, at least on a Mac. If you're not familiar, there are three common ways people install Rust on macOS:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;A lot of people simply run the command suggested by the Rust website: &lt;code&gt;curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh&lt;/code&gt;&lt;br&gt;&lt;br&gt;
My concern with this method is that it's not immediately clear how to do a clean uninstall later. You'd also need to read through the shell script to understand what exactly it's doing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Another option is to install the &lt;a href="https://formulae.brew.sh/formula/rust#default"&gt;rust&lt;/a&gt; formula with &lt;code&gt;brew&lt;/code&gt;.&lt;br&gt;&lt;br&gt;
This is better; however, you're essentially installing a single toolchain for your native hardware. Since &lt;code&gt;rustup&lt;/code&gt; is not part of this formula, you cannot easily add other toolchains for cross-compilation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The last option is to install the &lt;a href="https://formulae.brew.sh/formula/rustup-init#default"&gt;rustup-init formula&lt;/a&gt; instead, which I consider the best option. It does need some extra manual work, and I'll walk you through it.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Setting up your environment variables for rustup
&lt;/h2&gt;

&lt;p&gt;I like to keep my &lt;code&gt;$HOME&lt;/code&gt; directory clean, so I try to conform to the &lt;a href="https://wiki.archlinux.org/title/XDG_Base_Directory"&gt;XDG Base Directory Specification&lt;/a&gt; as much as possible. Here are the XDG variables I've set in my &lt;code&gt;~/.zshenv&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;XDG_CONFIG_HOME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$HOME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/.config
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;XDG_DATA_HOME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$HOME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/.local/share
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;XDG_CACHE_HOME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$HOME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/.cache
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
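&lt;p&gt;Note: macOS doesn't set these variables out of the box, so exporting them yourself is required. If you'd prefer not to clobber values that might already be set elsewhere, an equivalent sketch applies the spec's defaults only as fallbacks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Keep an existing value if one is set, otherwise fall back to the XDG spec defaults
export XDG_CONFIG_HOME="${XDG_CONFIG_HOME:-$HOME/.config}"
export XDG_DATA_HOME="${XDG_DATA_HOME:-$HOME/.local/share}"
export XDG_CACHE_HOME="${XDG_CACHE_HOME:-$HOME/.cache}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;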



&lt;p&gt;Rust also allows us to configure the location of its toolchain with environment variables. Here I've set the two most important ones like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;CARGO_HOME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$XDG_DATA_HOME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/cargo
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;RUSTUP_HOME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$XDG_DATA_HOME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/rustup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;

&lt;p&gt;We can now install &lt;code&gt;rustup-init&lt;/code&gt; and run it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;rustup-init
rustup-init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will be greeted with the Rust installation process:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Welcome to Rust!

This will download and install the official compiler for the Rust
programming language, and its package manager, Cargo.

Rustup metadata and toolchains will be installed into the Rustup
home directory, located at:

  /Users/xs/.local/share/rustup

This can be modified with the RUSTUP_HOME environment variable.

The Cargo home directory located at:

  /Users/xs/.local/share/cargo

This can be modified with the CARGO_HOME environment variable.

The cargo, rustc, rustup and other commands will be added to
Cargo's bin directory, located at:

  /Users/xs/.local/share/cargo/bin

This path will then be added to your PATH environment variable by
modifying the profile files located at:

  /Users/xs/.profile
  /Users/xs/.zshenv

You can uninstall at any time with rustup self uninstall and
these changes will be reverted.

Current installation options:


   default host triple: aarch64-apple-darwin
     default toolchain: stable (default)
               profile: default
  modify PATH variable: yes

1) Proceed with installation (default)
2) Customize installation
3) Cancel installation
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Confirm that the suggested directories line up with the directories you set via the environment variables. I personally disable the PATH modification since I prefer to manage it manually. Here's my current &lt;code&gt;PATH&lt;/code&gt; in my &lt;code&gt;.zshrc&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;path&lt;/span&gt;&lt;span class="o"&gt;=(&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$HOME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/go/bin
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$XDG_DATA_HOME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/cargo/bin
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$HOME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/.krew/bin
    /opt/homebrew/opt/make/libexec/gnubin
    /opt/homebrew/opt/man-db/libexec/bin
    /opt/homebrew/opt/grep/libexec/gnubin
    /opt/homebrew/opt/gnu-tar/libexec/gnubin
    /opt/homebrew/opt/gnu-sed/libexec/gnubin
    /opt/homebrew/opt/findutils/libexec/gnubin
    /opt/homebrew/opt/coreutils/libexec/gnubin
    &lt;span class="nv"&gt;$path&lt;/span&gt;
&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Setting up shell completion
&lt;/h3&gt;

&lt;p&gt;By default, most formulae install their shell completions into &lt;code&gt;$(brew --prefix)/share/zsh/site-functions&lt;/code&gt;. I'd prefer not to make manual changes inside the Homebrew directory, so we can set up another directory under our home directory instead. I like to set the &lt;code&gt;$fpath&lt;/code&gt; variable in my &lt;code&gt;.zshrc&lt;/code&gt; like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;fpath&lt;/span&gt;&lt;span class="o"&gt;=(&lt;/span&gt;
    /opt/homebrew/share/zsh/site-functions
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$XDG_DATA_HOME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/zsh/site-functions
    &lt;span class="nv"&gt;$fpath&lt;/span&gt;
&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So I simply create the directory &lt;code&gt;"$XDG_DATA_HOME"/zsh/site-functions&lt;/code&gt; and write the completion files there like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;rustup completion zsh cargo &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;$XDG_DATA_HOME&lt;/span&gt;/zsh/site-functions/_cargo
rustup completion zsh rustup &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$XDG_DATA_HOME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/zsh/site-functions/_rustup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
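&lt;p&gt;For these files to actually be loaded, zsh's completion system has to be initialized after &lt;code&gt;$fpath&lt;/code&gt; is set. If nothing in your setup already does this (frameworks like oh-my-zsh handle it for you), add this near the end of your &lt;code&gt;.zshrc&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Initialize zsh's completion system; it scans the directories in $fpath
autoload -Uz compinit
compinit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;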



&lt;p&gt;Note: &lt;em&gt;If you use &lt;code&gt;bash&lt;/code&gt; or &lt;code&gt;fish&lt;/code&gt;, you can read about the equivalent setup with &lt;code&gt;rustup completions --help&lt;/code&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Relaunch your shell and completion should work as intended. You can now manage all your toolchains easily with the &lt;code&gt;rustup toolchain&lt;/code&gt; command. I consider this the "cleanest" way to install and manage Rust on a Mac, and I hope this was helpful.&lt;/p&gt;
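&lt;p&gt;And if you ever want to undo all of this, the uninstall should be just as clean: as the installer output above notes, &lt;code&gt;rustup self uninstall&lt;/code&gt; reverts its changes (including the &lt;code&gt;RUSTUP_HOME&lt;/code&gt; and &lt;code&gt;CARGO_HOME&lt;/code&gt; directories we configured), and &lt;code&gt;brew&lt;/code&gt; removes the installer itself:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Remove the toolchains and revert rustup's changes
rustup self uninstall
# Remove the installer formula
brew uninstall rustup-init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;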

</description>
      <category>rust</category>
      <category>macos</category>
    </item>
    <item>
      <title>Kubernetes: Minikube with QEMU/KVM on Arch</title>
      <dc:creator>Ismayil Mirzali</dc:creator>
      <pubDate>Mon, 24 Aug 2020 23:28:32 +0000</pubDate>
      <link>https://dev.to/xs/kubernetes-minikube-with-qemu-kvm-on-arch-312a</link>
      <guid>https://dev.to/xs/kubernetes-minikube-with-qemu-kvm-on-arch-312a</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vSamqMJE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6glk0y4gowgeg9gpp3cn.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vSamqMJE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6glk0y4gowgeg9gpp3cn.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://minikube.sigs.k8s.io/docs/"&gt;Minikube&lt;/a&gt; is a tool for easily creating Kubernetes clusters locally.  &lt;/p&gt;

&lt;p&gt;It boasts features like support for multiple container runtimes and even load balancers, so you can easily test your deployments and services locally.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Minikube&lt;/code&gt; lets you deploy your node as a VM, a container, or even on bare metal. The default driver is &lt;code&gt;VirtualBox&lt;/code&gt;; however, &lt;code&gt;KVM/QEMU&lt;/code&gt; usually performs better on Linux machines.&lt;/p&gt;

&lt;p&gt;I assume that you'll be using &lt;code&gt;Docker&lt;/code&gt; as your container runtime. On &lt;em&gt;Arch Linux&lt;/em&gt;, you'll need to get the following packages using Pacman:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;🐺 ~ ⚡ ➜ sudo pacman -S minikube libvirt qemu dnsmasq ebtables dmidecode
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You then need to add your user to the &lt;code&gt;libvirt&lt;/code&gt; group and start the service (note that the group change only takes effect after you log out and back in):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;🐺 ~ ⚡ ➜ sudo usermod -aG libvirt $(whoami)
🐺 ~ ⚡ ➜ sudo systemctl start libvirtd.service
🐺 ~ ⚡ ➜ sudo systemctl enable libvirtd.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;libvirtd&lt;/code&gt; service may fail to start if certain binaries are missing; be sure to check its status and resolve any issues you may have.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;🐺 ~ ⚡ ➜ sudo systemctl status libvirtd.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can then run the validation tool:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;🐺 ~ ⚡ ➜ virt-host-validate
  QEMU: Checking for hardware virtualization                                 : PASS
  QEMU: Checking if device /dev/kvm exists                                   : PASS
  QEMU: Checking if device /dev/kvm is accessible                            : PASS
  QEMU: Checking if device /dev/vhost-net exists                             : PASS
  QEMU: Checking if device /dev/net/tun exists                               : PASS
  QEMU: Checking for cgroup 'cpu' controller support                         : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support                     : PASS
  QEMU: Checking for cgroup 'cpuset' controller support                      : PASS
  QEMU: Checking for cgroup 'memory' controller support                      : PASS
  QEMU: Checking for cgroup 'devices' controller support                     : PASS
  QEMU: Checking for cgroup 'blkio' controller support                       : PASS
  QEMU: Checking for device assignment IOMMU support                         : PASS
  QEMU: Checking if IOMMU is enabled by kernel                               : PASS
  QEMU: Checking for secure guest support                                    : WARN (AMD Secure Encrypted Virtualization appears to be disabled in kernel. Add kvm_amd.sev=1 to the kernel cmdline arguments)
  LXC: Checking for Linux &amp;gt;= 2.6.26                                          : PASS
  LXC: Checking for namespace ipc                                            : PASS
  LXC: Checking for namespace mnt                                            : PASS
  LXC: Checking for namespace pid                                            : PASS
  LXC: Checking for namespace uts                                            : PASS
  LXC: Checking for namespace net                                            : PASS
  LXC: Checking for namespace user                                           : PASS
  LXC: Checking for cgroup 'cpu' controller  support                         : PASS
  LXC: Checking for cgroup 'cpuacct'  controller support                     : PASS
  LXC: Checking for cgroup 'cpuset'  controller support                      : PASS
  LXC: Checking for cgroup 'memory'  controller support                      : PASS
  LXC: Checking for cgroup 'devices'  controller support                     : PASS
  LXC: Checking for cgroup 'freezer'  controller support                     : PASS
  LXC: Checking for cgroup 'blkio'  controller support                       : PASS
  LXC: Checking if device  /sys/fs/fuse/connections exists                   : PASS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Change the Minikube driver to kvm2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;🐺 ~ ⚡ ➜ minikube config set driver kvm2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's nice to create a separate kubeconfig file for &lt;code&gt;Minikube&lt;/code&gt; to use for the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;🐺 ~ ⚡ ➜ touch config &amp;amp;&amp;amp; export KUBECONFIG=$(pwd)/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, run minikube:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;🐺 ~/repos ➜ minikube start
😄  minikube v1.12.2 on Arch
    ▪ KUBECONFIG=/home/lemagicien/repos/config
✨  Using the kvm2 driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.12 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your &lt;code&gt;kubectl&lt;/code&gt; should now be talking to the minikube cluster! If you run into any issues, you can check &lt;a href="https://minikube.sigs.k8s.io/docs/"&gt;Minikube Docs&lt;/a&gt; and &lt;a href="https://wiki.archlinux.org/index.php/Main_page"&gt;ArchWiki&lt;/a&gt;.&lt;/p&gt;
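&lt;p&gt;As a quick sanity check (the exact output will vary with your versions), you can confirm the VM is running and the node has registered:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Check the state of the minikube VM and cluster components
minikube status
# Confirm kubectl can reach the cluster and the node is Ready
kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;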

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>archlinux</category>
      <category>virtualization</category>
    </item>
  </channel>
</rss>
