<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Bojana Dejanović</title>
    <description>The latest articles on DEV Community by Bojana Dejanović (@bojana_dev).</description>
    <link>https://dev.to/bojana_dev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F21775%2F64ae6004-460d-4792-8066-92ed17b8b2d5.jpg</url>
      <title>DEV Community: Bojana Dejanović</title>
      <link>https://dev.to/bojana_dev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bojana_dev"/>
    <language>en</language>
    <item>
      <title>Docker networks</title>
      <dc:creator>Bojana Dejanović</dc:creator>
      <pubDate>Wed, 15 Sep 2021 11:43:41 +0000</pubDate>
      <link>https://dev.to/bojana_dev/docker-networks-36gf</link>
      <guid>https://dev.to/bojana_dev/docker-networks-36gf</guid>
      <description>&lt;p&gt;During the installation, Docker creates three different networking options.&lt;br&gt;
You can list them with docker network ls:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bojana@linux:~$ docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
0c872e6d6453   bridge    bridge    local
10826dd62a8b   host      host      local
cab99af2344e   none      null      local
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By default, bridge mode is selected, and containers reside on a private, namespaced network within the host.&lt;br&gt;
In the previous post, where we explored basic Docker commands, we used &lt;code&gt;docker run -p&lt;/code&gt; to map a port from the host. This makes Docker create &lt;strong&gt;iptables&lt;/strong&gt; rules that route traffic from the host to the container.&lt;/p&gt;
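&lt;p&gt;You can peek at those rules yourself. A quick sketch (the host port 8080 here is an arbitrary example, and the exact rule text varies by Docker version):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# publish container port 80 on host port 8080
bojana@linux:~$ docker run -d -p 8080:80 nginxdemos/hello
# list the NAT rules Docker maintains; look for a DNAT rule for dpt:8080
bojana@linux:~$ sudo iptables -t nat -L DOCKER -n
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;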

&lt;p&gt;“None” entry in the above list indicates that no configuration should be performed by Docker whatsoever. It is intended for custom networking requirements.&lt;br&gt;
If we want explicitly to select container’s network, we pass &lt;strong&gt;--net&lt;/strong&gt; to docker run.&lt;/p&gt;
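&lt;p&gt;For example, to run a container directly on the host’s network stack, or with no networking at all (a sketch; both drivers appear in the list above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# share the host's network stack: no isolation, no port mapping needed
bojana@linux:~$ docker run -d --net host nginxdemos/hello
# no network configuration at all, only a loopback interface
bojana@linux:~$ docker run -d --net none nginxdemos/hello
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;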

&lt;p&gt;A bridge is a Linux kernel feature that connects two network segments.&lt;br&gt;
When you installed Docker, it quietly created a bridge called docker0 on the host.&lt;br&gt;
You can verify that by issuing the command &lt;strong&gt;ip addr show&lt;/strong&gt;. Here is the output on my machine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bojana@linux:~$ ip addr show
1: lo: &amp;lt;LOOPBACK,UP,LOWER_UP&amp;gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 96:00:00:b1:5e:b3 brd ff:ff:ff:ff:ff:ff
    inet 188.34.194.63/32 scope global dynamic eth0
       valid_lft 74220sec preferred_lft 74220sec
    inet6 2a01:4f8:1c1c:a675::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::9400:ff:feb1:5eb3/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:b6:9c:25:22 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:b6ff:fe9c:2522/64 scope link
       valid_lft forever preferred_lft forever
5: veth51c3665@if4: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 96:4d:57:fa:c0:99 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::944d:57ff:fefa:c099/64 scope link
       valid_lft forever preferred_lft forever
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;docker0&lt;/strong&gt; is the virtual Ethernet bridge that uses the 172.17.0.0/16 range.&lt;br&gt;
&lt;strong&gt;veth51c3665&lt;/strong&gt; is the host side of the virtual interface pair that connects the container to the bridged network.&lt;/p&gt;

&lt;p&gt;As we said in the previous article, when you launch a container there are no published ports by default, so the container is not visible from outside the Docker host. We can still access it from the Docker host itself.&lt;/p&gt;

&lt;p&gt;We will use the &lt;strong&gt;nginxdemos/hello&lt;/strong&gt; image in this example, because it contains a simple web server that prints out, among other things, the IP address of the container: a view from the inside.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;bojana@linux:~$ docker run -d nginxdemos/hello&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Let’s connect to this running container and see what IP address it got assigned.&lt;br&gt;
To access a shell of a running container we use:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;bojana@linux:~$ docker exec -it jovial_wu /bin/ash&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: jovial_wu is a random container name assigned at creation because we didn’t specify one; check yours with the docker ps command.&lt;/em&gt;&lt;/p&gt;
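&lt;p&gt;To avoid random names, you can name the container yourself when you create it; the name my-hello below is just an example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bojana@linux:~$ docker run -d --name my-hello nginxdemos/hello
bojana@linux:~$ docker exec -it my-hello /bin/ash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;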

&lt;p&gt;ip addr show on the container command line reveals:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/ # ip addr show
1: lo: &amp;lt;LOOPBACK,UP,LOWER_UP&amp;gt; mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
8: eth0@if9: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN&amp;gt; mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We see that the address is from the above-mentioned docker0 range.&lt;br&gt;
If we paste this address into the browser, we should get the nginx demo page:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tKUu-Vqt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://bojana.dev/wp-content/uploads/2021/09/ngnix1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tKUu-Vqt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://bojana.dev/wp-content/uploads/2021/09/ngnix1.png" alt="ngnix/demo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It shows us what the container sees from the inside, such as its IP on the bridge, plus the server name, which is the Linux hostname of the container (the container ID).&lt;br&gt;
How can containers communicate with each other on the bridge?&lt;/p&gt;

&lt;p&gt;Let’s create a second instance of the container:&lt;br&gt;
&lt;code&gt;docker run -d nginxdemos/hello&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;And connect to it:&lt;br&gt;
&lt;code&gt;docker exec -it inspiring_mcnulty2 /bin/ash&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If we check the IP we see again that it’s from the bridge range:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/ # ip addr show
1: lo: &amp;lt;LOOPBACK,UP,LOWER_UP&amp;gt; mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
11: eth0@if12: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN&amp;gt; mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:11:00:04 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.4/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can also ping the previously created container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/ # ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.204 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.189 ms
64 bytes from 172.17.0.3: seq=2 ttl=64 time=0.181 ms
64 bytes from 172.17.0.3: seq=3 ttl=64 time=0.192 ms
^C
--- 172.17.0.3 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.181/0.191/0.204 ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This means we do have IP connectivity between two containers on the same bridge.&lt;br&gt;
If we issue an &lt;code&gt;ip route&lt;/code&gt; command, we can see there is a default route via the Docker host’s IP&lt;br&gt;
on the bridge, which acts as a gateway.&lt;br&gt;
We can also see that we have internet access from within the container and that name resolution works:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/ # ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=111 time=18.209 ms
64 bytes from 8.8.8.8: seq=1 ttl=111 time=18.524 ms
64 bytes from 8.8.8.8: seq=2 ttl=111 time=18.116 ms
64 bytes from 8.8.8.8: seq=3 ttl=111 time=18.414 ms
^C
--- 8.8.8.8 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 18.116/18.315/18.524 ms
/ # ping google.com
PING google.com (142.250.185.110): 56 data bytes
64 bytes from 142.250.185.110: seq=0 ttl=112 time=28.885 ms
64 bytes from 142.250.185.110: seq=1 ttl=112 time=28.785 ms
64 bytes from 142.250.185.110: seq=2 ttl=112 time=29.383 ms
64 bytes from 142.250.185.110: seq=3 ttl=112 time=28.670 ms
^C
--- google.com ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 28.670/28.930/29.383 ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
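&lt;p&gt;The default route mentioned above can be checked from inside the container. On a typical default-bridge setup the output looks roughly like this (addresses will match your own setup):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/ # ip route
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 scope link  src 172.17.0.4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;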



&lt;p&gt;In the default bridge configuration, all containers can communicate with one another because they are all on the same virtual network. However, you can create additional network namespaces to isolate containers from one another.&lt;/p&gt;

&lt;p&gt;We said that the most common way of making a Docker container visible from the outside is port mapping.&lt;/p&gt;

&lt;p&gt;Let’s create another container, this time specifying a port mapping:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;bojana@linux:~$ docker run -d -p 81:80 nginxdemos/hello&lt;/code&gt;&lt;/p&gt;
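&lt;p&gt;With the port published, the demo page should now be reachable through port 81 on the Docker host itself (a quick check, assuming curl is installed):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bojana@linux:~$ curl -s http://localhost:81 | head -n 3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;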

&lt;p&gt;One thing we should point out, though, is that name resolution between Docker containers doesn’t work here.&lt;br&gt;
If we attach to the container created above and try to ping the first or the second container by its hostname, we won’t be able to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/ # hostname
0888f0c68423
/ # ping 167bcc074170
ping: bad address '167bcc074170'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  User-defined bridges
&lt;/h3&gt;

&lt;p&gt;In principle, hostname resolution is possible between containers, just not on the default bridge,&lt;br&gt;
which means we have to create our own.&lt;br&gt;
Use the &lt;code&gt;docker network create&lt;/code&gt; command to create a user-defined bridge network:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bojana@linux:~$ docker network create --driver=bridge --subnet=172.172.0.0/24 --ip-range=172.172.0.128/25 --gateway=172.172.0.1 my-br0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
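&lt;p&gt;You can verify the settings took effect with docker network inspect (the -f filter below just trims the output down to the IPAM configuration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bojana@linux:~$ docker network inspect -f '{{ json .IPAM.Config }}' my-br0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;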



&lt;p&gt;You can go ahead and stop/remove all the containers we created so far, so we have a clean slate.&lt;br&gt;
Now, let’s create a new container that uses our newly created bridge:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bojana@linux:~$ docker run -d --name my-nginx --network my-br0 --hostname my-nginx nginxdemos/hello
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And another instance with a different name and hostname:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bojana@linux:~$ docker run -d --name my-nginx2 --network my-br0 --hostname my-nginx2 nginxdemos/hello
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let’s use docker inspect to check the network settings of the newly created container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bojana@linux:~$ docker inspect -f '{{ json .NetworkSettings.Networks }}' my-nginx
{"my-br0":{"IPAMConfig":null,"Links":null,"Aliases":["b91bde6aa527","my-nginx"],"NetworkID":"f9e797b2ab8826342ea8343b8bebe8e1459eea114aa0b015f56b01f90fe39ff8","EndpointID":"a1af70992c90967ebca917eeebaa67b4e7bc23bdfb2f352f283faaee223d4fc0","Gateway":"172.172.0.1","IPAddress":"172.172.0.128","IPPrefixLen":24,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:ac:00:80","DriverOpts":null}}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We see that our network settings were applied and the containers have been assigned to, or if you will, have joined our defined network. Now, if we try to ping one container from the other by its hostname, it will work:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/ # ping my-nginx2
PING my-nginx2 (172.172.0.129): 56 data bytes
64 bytes from 172.172.0.129: seq=0 ttl=64 time=0.166 ms
64 bytes from 172.172.0.129: seq=1 ttl=64 time=0.149 ms
64 bytes from 172.172.0.129: seq=2 ttl=64 time=0.211 ms
64 bytes from 172.172.0.129: seq=3 ttl=64 time=0.134 ms
^C
--- my-nginx2 ping statistics ---
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And the same works in the other direction, pinging the first container by its hostname:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/ # ping my-nginx
PING my-nginx (172.172.0.128): 56 data bytes
64 bytes from 172.172.0.128: seq=0 ttl=64 time=0.342 ms
64 bytes from 172.172.0.128: seq=1 ttl=64 time=0.134 ms
64 bytes from 172.172.0.128: seq=2 ttl=64 time=0.191 ms
64 bytes from 172.172.0.128: seq=3 ttl=64 time=0.128 ms
^C
--- my-nginx ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.128/0.198/0.342 ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The conclusion is that Docker does provide name resolution between containers over a user-defined bridge.&lt;/p&gt;
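&lt;p&gt;Under the hood, containers attached to a user-defined network use Docker’s embedded DNS server, which listens on 127.0.0.11 inside the container. You can see it in the container’s resolver configuration (output trimmed):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/ # cat /etc/resolv.conf
nameserver 127.0.0.11
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;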

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Credits&lt;/strong&gt;: A lot of this content is borrowed from this &lt;a href="https://www.youtube.com/watch?v=OmZdItNjWNY&amp;amp;ab_channel=OneMarcFifty"&gt;excellent video&lt;/a&gt; by &lt;a href="https://www.youtube.com/watch?v=OmZdItNjWNY&amp;amp;ab_channel=OneMarcFifty"&gt;OneMarcFifty&lt;/a&gt; on YouTube. Go check out the channel; it has some really interesting content, presented in a nice and clean way. For this post, I tried to use just Docker commands, without Portainer, and added some additional notes.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Post originally published on: &lt;a href="http://bojana.dev"&gt;&lt;b&gt;bojana.dev&lt;/b&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>network</category>
      <category>containers</category>
    </item>
    <item>
      <title>Containerization</title>
      <dc:creator>Bojana Dejanović</dc:creator>
      <pubDate>Mon, 14 Jun 2021 11:25:57 +0000</pubDate>
      <link>https://dev.to/bojana_dev/containerization-3h0</link>
      <guid>https://dev.to/bojana_dev/containerization-3h0</guid>
      <description>&lt;p&gt;In the last article, we talked about what &lt;a href="https://bojana.dev/virtualization/"&gt;virtualization&lt;/a&gt; is and how important the concept and technology is in the whole cloud computing paradigm.&lt;/p&gt;

&lt;p&gt;Now I want us to talk about &lt;em&gt;containerization&lt;/em&gt; - a different approach to isolation that does not use a hypervisor, but instead relies on specific kernel features that isolate processes from the rest of the system.&lt;/p&gt;

&lt;p&gt;So in short &lt;strong&gt;containerization&lt;/strong&gt; is a form of operating system virtualization, where we have applications running in isolated user spaces called &lt;strong&gt;containers.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each process "container" or "jail" has a &lt;strong&gt;private root file system&lt;/strong&gt; and &lt;strong&gt;process namespace&lt;/strong&gt;. &lt;br&gt;While sharing the kernel and other services of the underlying OS, they cannot access files or resources outside of their container.&lt;br&gt;In essence, containers are fully packaged and portable computing environments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--odO3Ai0w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://bojana.dev/wp-content/uploads/2021/06/image-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--odO3Ai0w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://bojana.dev/wp-content/uploads/2021/06/image-1.png" alt=""&gt;&lt;/a&gt;Source: &lt;a href="https://www.amazon.com/UNIX-Linux-System-Administration-Handbook/dp/0134277554"&gt;Unix And Linux System Administration Handbook&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We said that this type of virtualization does not require a hypervisor, and because there is no need to virtualize the hardware, the resource overhead for this type of virtualization is low.&lt;/p&gt;

&lt;p&gt;This means container startup time is pretty short. Creating a container has roughly the overhead of creating a Linux process, which is on the order of milliseconds, while creating a VM can take seconds.&lt;/p&gt;

&lt;p&gt;The containerized application can be run on various types of infrastructure—on bare metal, within VMs, and in the cloud—without needing to refactor it for each environment.&lt;/p&gt;

&lt;h2&gt;How do containers relate to VMs?&lt;/h2&gt;

&lt;p&gt;Both are portable, isolated execution environments, and both look and act like full operating systems.&lt;br&gt;Unlike a VM, which has an OS kernel, drivers to interact with hardware, etc., a container merely mimics an operating system. The container itself is abstracted away from the host OS, with only limited access to underlying resources - we can say it is a lightweight VM.&lt;/p&gt;

&lt;p&gt;The containers-on-VMs architecture is standard for containerized applications that need to run on public cloud instances.&lt;/p&gt;

&lt;h2&gt;How does containerization actually work?&lt;/h2&gt;

&lt;p&gt;We said that containerization relies on specific kernel features, but which features are those? &lt;br&gt;Containerization as we know it evolved from &lt;strong&gt;cgroups&lt;/strong&gt;, a Linux kernel feature for isolating and controlling resource usage (e.g., how much CPU and RAM and how many threads a given process can access).&lt;br&gt;cgroups were originally developed by Paul Menage and Rohit Seth of Google, and their first features were merged into Linux 2.6.24. cgroups later served as a basis for Linux containers (&lt;a href="https://linuxcontainers.org/lxc/introduction/"&gt;LXC&lt;/a&gt;), which added more advanced features for namespace isolation of components such as routing tables and file systems.&lt;/p&gt;
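&lt;p&gt;As a rough illustration of what cgroups do, here is a cgroup v2 sketch (it assumes the unified hierarchy is mounted at /sys/fs/cgroup and requires root; the group name demo is arbitrary):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# create a cgroup and cap its memory at 100 MiB
bojana@linux:~$ sudo mkdir /sys/fs/cgroup/demo
bojana@linux:~$ echo 100M | sudo tee /sys/fs/cgroup/demo/memory.max
# move the current shell into the cgroup; its children inherit the limit
bojana@linux:~$ echo $$ | sudo tee /sys/fs/cgroup/demo/cgroup.procs&lt;/code&gt;&lt;/pre&gt;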

&lt;p&gt;&lt;strong&gt;Namespaces&lt;/strong&gt; are a kernel mechanism for limiting the visibility that a group of processes has of the rest of the system. For example, you can limit visibility to certain process trees, network interfaces, user IDs or filesystem mounts. Namespaces were originally developed by Eric Biederman, and the final major namespace was merged into Linux 3.8.&lt;/p&gt;

&lt;p&gt;Since kernel version 5.6, there are 8 kinds of namespaces. Namespace functionality is the same across all kinds: each process is associated with a namespace and can only see or use the resources associated with that namespace, and descendant namespaces where applicable. This way each process (or process group) can have a unique view of the resources.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;bojana@linux:~$ sudo lsns -p 270
        NS TYPE   NPROCS PID USER COMMAND
4026531835 cgroup    130   1 root /sbin/init
4026531836 pid       126   1 root /sbin/init
4026531837 user      130   1 root /sbin/init
4026531838 uts       123   1 root /sbin/init
4026531839 ipc       126   1 root /sbin/init
4026531840 mnt       114   1 root /sbin/init
4026531992 net       126   1 root /sbin/init&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In Linux, &lt;code&gt;lsns&lt;/code&gt; lists information about all the currently accessible namespaces. The eight kinds are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cgroup&lt;/li&gt;
&lt;li&gt;mnt (mount points, filesystems)&lt;/li&gt;
&lt;li&gt;pid (processes)&lt;/li&gt;
&lt;li&gt;net (network stack)&lt;/li&gt;
&lt;li&gt;ipc (System V IPC)&lt;/li&gt;
&lt;li&gt;uts (hostname)&lt;/li&gt;
&lt;li&gt;user (UIDs)&lt;/li&gt;
&lt;li&gt;time (system clocks)&lt;/li&gt;
&lt;/ul&gt;
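&lt;p&gt;You can experiment with namespaces directly using the &lt;code&gt;unshare&lt;/code&gt; tool. A small sketch (requires root) that gives a shell its own uts namespace, so changing the hostname doesn’t affect the host:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;bojana@linux:~$ sudo unshare --uts /bin/bash
root@linux:/# hostname demo
root@linux:/# hostname
demo
root@linux:/# exit
bojana@linux:~$ hostname
linux&lt;/code&gt;&lt;/pre&gt;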

&lt;p&gt;Modern containers evolved from these two kernel features, and LXC served as a basis for &lt;strong&gt;&lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt;&lt;/strong&gt;, launched in 2013. In its early years Docker was based on LXC, but it later developed its own library instead.&lt;/p&gt;

&lt;h2&gt;Docker&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SxHMH575--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://bojana.dev/wp-content/uploads/2021/06/PinClipart.com_shipping-container-clip-art_3317152.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SxHMH575--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://bojana.dev/wp-content/uploads/2021/06/PinClipart.com_shipping-container-clip-art_3317152.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For most people nowadays, "container" immediately brings Docker to mind, but as we have seen, containerization technology is not that new (cgroups date back to 2008). Docker's expansion can be attributed to the set of tools it introduced that take advantage of already existing containerization technology.&lt;/p&gt;

&lt;p&gt;Docker tools evolved rapidly, and new versions were sometimes incompatible with existing deployments. To counter this, Docker Inc. became one of the founding members of the &lt;a href="https://opencontainers.org/"&gt;Open Container Initiative&lt;/a&gt;, &lt;em&gt;a consortium whose mission is to guide the growth of container technology in a healthily competitive direction that fosters standards and collaboration.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We will be talking about Docker architecture and Docker as a container engine in one of the following articles.&lt;/p&gt;

&lt;p&gt;Post originally published on: &lt;a href="http://bojana.dev"&gt;&lt;b&gt;bojana.dev&lt;/b&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>containers</category>
      <category>docker</category>
      <category>lxc</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Virtualization ... or what enables the cloud</title>
      <dc:creator>Bojana Dejanović</dc:creator>
      <pubDate>Sun, 18 Apr 2021 12:00:04 +0000</pubDate>
      <link>https://dev.to/bojana_dev/virtualization-400k</link>
      <guid>https://dev.to/bojana_dev/virtualization-400k</guid>
      <description>&lt;ul&gt;
&lt;li&gt;Hypervisor&lt;/li&gt;
&lt;li&gt;Full virtualization&lt;/li&gt;
&lt;li&gt;
Paravir&lt;a href="http://paravirtualization"&gt;t&lt;/a&gt;&lt;a href="https://bojana.dev/wp-admin/post.php?post=44&amp;amp;action=edit#paravirtualization"&gt;ualization&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Hardware assisted virtualization&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://bojana.dev/wp-admin/post.php?post=44&amp;amp;action=edit#type1vstype2"&gt;Type &lt;/a&gt;&lt;a href="http://type1vstype2"&gt;1&lt;/a&gt; vs Type 2
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The term "virtualization" is an overloaded term that can describe many things. This is due to the way the technology evolved (competing vendors worked independently, without the benefit of standards).&lt;/p&gt;

&lt;p&gt;But what is virtualization really, and where did it come from?&lt;/p&gt;

&lt;p&gt;In simple words, it's the technology that makes it possible to run multiple operating systems (concurrently) on the same physical hardware. Virtualization software parcels out CPU, memory, and I/O resources, dynamically allocating their use among several “guest” operating systems and resolving resource conflicts.&lt;/p&gt;

&lt;p&gt;The technology itself can be traced back to the 1960s, with the development of hypervisors&lt;sup&gt;1&lt;/sup&gt; (the supervisor of the supervisor). However, virtualization didn't take off until the 1990s, when most enterprises had physical servers and a "single vendor" IT stack, meaning legacy apps weren't able to run on a different vendor's hardware.&lt;/p&gt;

&lt;p&gt;Virtualization was a natural solution to two problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Companies could partition their servers - reducing equipment costs, along with the labor and energy costs associated with maintenance&lt;/li&gt;
&lt;li&gt;Run legacy apps on multiple operating system types and versions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One of the reasons for renewed interest in virtualization for modern systems was the ever-growing size of server farms (datacentres).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/VMware"&gt;VMware&lt;/a&gt; was one of the first providers that successfully virtualized x86 architecture. Server farms eventually led to the rise of on-demand, Internet-connected virtual servers, the infrastructure we now know as &lt;strong&gt;cloud computing&lt;/strong&gt;.&lt;/p&gt;

&lt;h3 id="hypervisor"&gt;Hypervisor&lt;/h3&gt;

&lt;p&gt;As stated at the beginning of the article, virtualization is an overloaded term used in many contexts, and there are many types of virtualization and many concepts, phrases and acronyms. We will try to tackle a few of them in this article.&lt;/p&gt;

&lt;p&gt;A hypervisor (also known as a virtual machine monitor) is a software layer that sits between virtual machines (VMs) and the underlying hardware on which they run. This software is responsible for sharing the resources (such as memory and processing) among the guest operating systems, which run independently and don't have to be of the same kind.&lt;/p&gt;

&lt;h3 id="fullvirtualization"&gt;Full virtualization&lt;/h3&gt;

&lt;p&gt;Full virtualization was introduced by IBM in 1966. Hypervisors, in the beginning, fully emulated the underlying hardware, providing virtual replacements for all the basic resources such as hard disks, network devices, interrupts, motherboard hardware, etc. In this mode guests run without modification: nothing is changed in the binary of the guest operating system itself, but because of the constant translation by the hypervisor between virtual and actual hardware, full virtualization incurs a performance penalty.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aKGHrbvq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://bojana.dev/wp-content/uploads/2021/01/image-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aKGHrbvq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://bojana.dev/wp-content/uploads/2021/01/image-1.png" alt="Protection rings: user applications in Ring 3, guest OS in Ring 1, VM manager performing binary translation in Ring 0, on top of system hardware"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hypervisors utilize what is called a "trap and emulate" strategy. Essentially, each guest operating system "thinks" it is running on "bare metal" hardware, and therefore it does exactly what it would have done on a bare-metal processor, meaning it tries to execute certain privileged instructions thinking it has the right privilege (which it doesn't, since it runs as a user-level process on top of the hypervisor). When this happens, the attempt results in a trap into the hypervisor, and the hypervisor then &lt;em&gt;emulates&lt;/em&gt; the intended functionality for the guest OS.&lt;/p&gt;

&lt;h3 id="paravirtualization"&gt;Paravirtualization&lt;/h3&gt;

&lt;p&gt;We said in the previous paragraph, when explaining full virtualization, that guest operating systems run &lt;em&gt;unmodified&lt;/em&gt; on top of the hypervisor.&lt;/p&gt;

&lt;p&gt;The paravirtualization approach modifies guest OSes to include optimizations and avoid problematic instructions (e.g., the guest OS is able to see the real hardware resources). Basically, guest operating systems can detect their virtual state and actively cooperate with the hypervisor to access hardware. This improves performance. The downside is that guest operating systems need substantial updates to run this way, and how they need to be modified depends greatly on the specific hypervisor in use. &lt;a href="https://xenproject.org/"&gt;Xen&lt;/a&gt; introduced this type of virtualization.&lt;/p&gt;

&lt;h3 id="hvm"&gt;Hardware assisted virtulization&lt;/h3&gt;

&lt;p&gt;This approach enables full virtualization with help from the hardware, primarily from the host processors.&lt;/p&gt;

&lt;p&gt;In this setup the CPU has virtualization capabilities built into it. For instance, the CPU is able to "pretend" that it is two, three, or four independent, separate computer systems to the operating systems running on it.&lt;/p&gt;

&lt;p&gt;The benefit of hardware-assisted virtualization, as opposed to paravirtualization, is that the mentioned changes to the guest operating system are not needed; instead, hypervisors use extensions in the CPU itself to run some (or all) instructions directly on the hardware, without software emulation.&lt;/p&gt;

&lt;p&gt;Hardware-assisted virtualization was added to &lt;a href="https://en.wikipedia.org/wiki/X86"&gt;x86&lt;/a&gt; processors (&lt;a href="https://en.wikipedia.org/wiki/Intel_VT-x"&gt;Intel VT-x&lt;/a&gt; or &lt;a href="https://en.wikipedia.org/wiki/AMD-V"&gt;AMD-V&lt;/a&gt;) in 2005 and 2006.&lt;/p&gt;

&lt;p&gt;Nowadays most CPUs have these capabilities.&lt;/p&gt;
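&lt;p&gt;On Linux you can check whether your CPU exposes these extensions by looking for the vmx (Intel VT-x) or svm (AMD-V) flags; a non-zero count means hardware-assisted virtualization is available:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;bojana@linux:~$ grep -cE 'vmx|svm' /proc/cpuinfo&lt;/code&gt;&lt;/pre&gt;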

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--f_mQlb7k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://bojana.dev/wp-content/uploads/2021/01/image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--f_mQlb7k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://bojana.dev/wp-content/uploads/2021/01/image.png" alt="Task Manager CPU panel for an Intel Core i7-8665U showing Virtualisation: Enabled"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3 id="type1vstype2"&gt;Type 1 vs. Type 2 hypervisor&lt;/h3&gt;

&lt;p&gt;Many references draw a distinction between two main types of hypervisors: Type 1 and Type 2.&lt;/p&gt;

&lt;p&gt;The former ("Type 1", often referred to as "bare metal") runs directly on the hardware of the host; it doesn't need a supporting operating system and in fact acts as a lightweight operating system itself. The physical machine where a Type 1 hypervisor runs serves for virtualization purposes only.&lt;/p&gt;

&lt;p&gt;Because there is no overhead of a host operating system, Type 1 hypervisors are considered highly secure, and also very performant and stable. They are usually used in enterprise environments.&lt;/p&gt;

&lt;p&gt;Typical vendors of Type 1 hypervisors are: VMware vSphere with ESX/ESXi, KVM (Kernel-based Virtual Machine), Microsoft Hyper-V, Oracle VM, Citrix Hypervisor (Xen Server), etc.&lt;/p&gt;
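&lt;p&gt;As an aside, on a Linux host you can quickly check whether KVM, one of the Type 1 hypervisors listed above, is actually usable: the /dev/kvm device node must exist. A purely illustrative sketch:&lt;/p&gt;

```shell
# Sketch: KVM exposes the /dev/kvm device when the CPU extensions are
# available and the kvm kernel modules are loaded.
if [ -e /dev/kvm ]; then
  echo "KVM is available on this host"
else
  echo "KVM is not available on this host"
fi
```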

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--myBdE5ZP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://bojana.dev/wp-content/uploads/2021/01/image.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--myBdE5ZP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://bojana.dev/wp-content/uploads/2021/01/image.jpeg" alt="Types of Hypervisor &amp;lt;br&amp;gt;
App &amp;lt;br&amp;gt;
os &amp;lt;br&amp;gt;
App &amp;lt;br&amp;gt;
os &amp;lt;br&amp;gt;
App &amp;lt;br&amp;gt;
os &amp;lt;br&amp;gt;
APP &amp;lt;br&amp;gt;
os &amp;lt;br&amp;gt;
APP &amp;lt;br&amp;gt;
os &amp;lt;br&amp;gt;
APP &amp;lt;br&amp;gt;
os &amp;lt;br&amp;gt;
Hypervisor &amp;lt;br&amp;gt;
Hardware &amp;lt;br&amp;gt;
Typel Hypervisor &amp;lt;br&amp;gt;
Hypervisor &amp;lt;br&amp;gt;
Operating System &amp;lt;br&amp;gt;
Hardware &amp;lt;br&amp;gt;
Type2 Hypervisor &amp;lt;br&amp;gt;
www.mycloudwikicom "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In contrast to Type 1, Type 2 hypervisors are user-space applications running inside an operating system; they are also called "hosted" hypervisors. They too manage calls for CPU, memory, disk, network, etc., but they do it through the operating system of the host. They are convenient because they are installed on the OS like any other application.&lt;/p&gt;

&lt;p&gt;The downside of this type of hypervisor is that, if resources are not carefully allocated, it can overwhelm the system and cause a crash. This is something bare-metal hypervisors handle dynamically, depending on the needs of each VM. However, hosted hypervisors are really nice for testing and research projects.&lt;/p&gt;

&lt;p&gt;Typical vendors of Type 2 hypervisors are: Oracle VM VirtualBox, VMware Workstation, Microsoft Hyper-V, Oracle VM, Parallels Desktop, etc.&lt;/p&gt;

&lt;h3 id="containerization"&gt;Coming up next...&lt;/h3&gt;

&lt;p&gt;In one of the next articles we will talk about containers and containerization, a major trend and a companion of virtualization.&lt;/p&gt;








&lt;p&gt;&lt;sup&gt;1 - &lt;/sup&gt;The term &lt;a href="https://en.wikipedia.org/wiki/Hypervisor"&gt;hypervisor&lt;/a&gt; is a variant of supervisor, a traditional term for the kernel of an operating system: the hypervisor is the supervisor of the supervisors, with hyper- used as a stronger variant of super-. The term dates to circa 1970; in the earlier CP/CMS (1967) system, the term Control Program was used instead.&lt;/p&gt;

&lt;p&gt;&lt;br&gt;&lt;br&gt;
&lt;i&gt;Article originally published on: &lt;a href="https://bojana.dev"&gt;bojana.dev&lt;/a&gt;&lt;/i&gt;&lt;/p&gt;

</description>
      <category>virtualization</category>
      <category>cloud</category>
      <category>hypervisors</category>
    </item>
    <item>
      <title>How to deploy database to Azure using Azure DevOps</title>
      <dc:creator>Bojana Dejanović</dc:creator>
      <pubDate>Sun, 02 Aug 2020 08:47:55 +0000</pubDate>
      <link>https://dev.to/bojana_dev/how-to-deploy-database-to-azure-using-azure-devops-2ekh</link>
      <guid>https://dev.to/bojana_dev/how-to-deploy-database-to-azure-using-azure-devops-2ekh</guid>
<description>&lt;p&gt;Originally published on: &lt;a href="https://bojana.dev"&gt;bojana.dev&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Before starting off, I must say there are a lot of ways to do this, depending on the situation and the particular context you are in, but here I will be talking about how to deploy a database which you have ready as an SQL project within a solution in Visual Studio. So the journey here is from an SQL project in Visual Studio to the actual deployed database running in Azure.&lt;/p&gt;

&lt;p&gt;Kind of a typical DevOps task you might say.&lt;/p&gt;

&lt;p&gt;In my particular case, I also had a couple of C# projects (targeting .NET Core) in the solution, which make up an API that I wanted to include in my pipeline as well, along with the above-mentioned SQL project.&lt;/p&gt;

&lt;p&gt;So my idea of how it would look like is something like this:&lt;/p&gt;

&lt;p&gt;Restore/Build cs projects --&amp;gt; Run tests --&amp;gt; Build and deploy db project&lt;/p&gt;

&lt;p&gt;Very simple. Now, having an API in .NET Core, I wanted to use a Microsoft-hosted Linux agent. The good thing about these agents is that each time you run a pipeline, a new VM is spun up, and after the pipeline finishes executing it is discarded. You don't have to worry about configuring it; you just choose it and run it.&lt;/p&gt;

&lt;p&gt;However, as I found out, Linux agents currently cannot build/deploy SQL projects. So in order to reach my goal, I had to use two pipelines. The first pipeline (Linux agent) builds the .NET Core projects in the solution, runs tests, etc.&lt;/p&gt;

&lt;p&gt;The other pipeline, triggered upon successful completion of the first one, runs on a Windows agent; it uses MSBuild as a first step to build the sqlproj.&lt;/p&gt;

&lt;p&gt;It then copies the dacpac file, which is a product of the database build and describes the database schema so it can be updated/deployed. After that, the Azure SQL Dacpac task is run to actually deploy the db to an Azure SQL instance, configured from the file created in the first step and copied in the second.&lt;/p&gt;

&lt;p&gt;Before jumping to Azure portal and configuring pipelines, let's first make sure we have all the preconditions set.&lt;/p&gt;

&lt;h2&gt;Adjust sql database project settings&lt;/h2&gt;

&lt;p&gt;First of all, in order to deploy our db to Azure, we have to make sure to choose the right target platform for it.&lt;/p&gt;

&lt;p&gt;To do so, go to the properties of your SQL project in Visual Studio, and in the "Project Settings" tab choose "Microsoft Azure SQL Database".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--L0ZyZQ1p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://bojana.dev/wp-content/uploads/2020/07/image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--L0ZyZQ1p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://bojana.dev/wp-content/uploads/2020/07/image.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I mentioned above that the SQL project, when successfully built, produces a dacpac file.&lt;/p&gt;

&lt;p&gt;What is a DAC or dacpac in the first place?&lt;/p&gt;

&lt;p&gt;According to Microsoft documentation:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;A data-tier application (DAC) is a logical database management entity that defines all SQL Server objects - such as tables, views, and instance objects - associated with a user's database. It is a self-contained unit of SQL Server database deployment that enables data-tier developers and DBAs to package SQL Server objects into a portable artifact called a DAC package, or .dacpac file.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;A DACPAC is a Windows file with a .dacpac extension. The file supports an open format consisting of multiple XML sections representing details of the DACPAC origin, the objects in the database, and other characteristics. An advanced user can unpack the file using the DacUnpack.exe utility that ships with the product to inspect each section more closely.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You can find more information about DAC &lt;a href="https://docs.microsoft.com/en-us/sql/relational-databases/data-tier-applications/data-tier-applications?view=sqlallproducts-allversions"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In short, this file holds your database schema definition, and with the appropriate tools you can recreate your database from it on another SQL instance. The file can be generated in multiple ways: you can extract it within SSMS or within Visual Studio, but since we want to include it in the CI/CD pipeline, we will generate it when the SQL project is built.&lt;/p&gt;
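&lt;p&gt;Outside the pipeline, the same deployment can also be done by hand with Microsoft's SqlPackage command-line tool. A hedged sketch; the server, database and credential values below are placeholders, not real ones:&lt;/p&gt;

```shell
# Sketch: publish a dacpac to an Azure SQL database with SqlPackage.
# Server name, database name and user are placeholders for illustration.
if ! command -v sqlpackage >/dev/null 2>&1; then
  echo "sqlpackage is not installed on this machine"
  exit 0
fi
sqlpackage /Action:Publish \
  /SourceFile:MyDatabase.dacpac \
  /TargetServerName:myserver.database.windows.net \
  /TargetDatabaseName:MyDatabase \
  /TargetUser:sqladmin /TargetPassword:"$SQL_PASSWORD"
```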

&lt;p&gt;To do so, go again to the Properties of the db project, then the Build tab, and enter your database name (or whatever you like) into the "Build output file name" field as below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LM_0WsWO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://bojana.dev/wp-content/uploads/2020/07/image-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LM_0WsWO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://bojana.dev/wp-content/uploads/2020/07/image-1.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This name will be used for the file with the dacpac extension created during the build. Make a note of it, as we will use it in the pipeline.&lt;/p&gt;

&lt;p&gt;When we change these properties, you can see in the sqlproj file that these two entries were added:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt; &amp;lt;DSP&amp;gt;Microsoft.Data.Tools.Schema.Sql.SqlAzureV12DatabaseSchemaProvider&amp;lt;/DSP&amp;gt;

 &amp;lt;SqlTargetName&amp;gt;MyDatabase&amp;lt;/SqlTargetName&amp;gt;&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Creating build pipeline in Azure DevOps&lt;/h2&gt;

&lt;p&gt;Now we can go on and create a pipeline for building the SQL project and deploying the database. I mentioned having two pipelines above, since I wanted it all connected, but I will only include the steps for the second one, as for building a .NET Core app/API there is already a predefined &lt;a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/ecosystems/dotnet-core?view=azure-devops"&gt;template in Azure&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jHLWGsHg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://bojana.dev/wp-content/uploads/2020/07/image-3-1024x377.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jHLWGsHg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://bojana.dev/wp-content/uploads/2020/07/image-3-1024x377.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see in the screenshot above, the first step is to call MSBuild to build up our sql project.&lt;/p&gt;

&lt;p&gt;The important field is "Project", where we put &lt;strong&gt;**/*.sqlproj&lt;/strong&gt; so it only looks for SQL projects and builds those.&lt;/p&gt;

&lt;p&gt;Everything else can be left as default, such as MSBuild version and architecture.&lt;/p&gt;

&lt;p&gt;The next step, copying files, takes the .dacpac file generated after MSBuild has built the SQL project and copies it to Build.ArtifactStagingDirectory, which is a predefined Azure variable typically used to publish build artifacts, such as our dacpac file here. You can find more about this and other variables &lt;a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/build/variables?view=azure-devops&amp;amp;tabs=yaml"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QkuPRDNE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://bojana.dev/wp-content/uploads/2020/07/image-2-1024x587.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QkuPRDNE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://bojana.dev/wp-content/uploads/2020/07/image-2-1024x587.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The final step is actual database deployment using Azure SQL DacpacTask.&lt;/p&gt;

&lt;p&gt;This step requires that you already have an Azure SQL server provisioned and the corresponding database created.&lt;/p&gt;

&lt;p&gt;I am using SQL Server Authentication here, but you can use ConnectionString, Active Directory, etc.&lt;/p&gt;

&lt;p&gt;It is obvious that for Deploy type we should use "SQL DACPAC file", and in the DACPAC file field we should enter the path to our dacpac. Since we named our dacpac file MyDatabase, the path **/MyDatabase.dacpac instructs Azure, or rather the Azure agent, to search through all subfolders and find our file.&lt;/p&gt;
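&lt;p&gt;To see what that pattern does, you can mimic the agent's recursive search locally; the folder layout in this small illustration is made up:&lt;/p&gt;

```shell
# Illustration: the **/MyDatabase.dacpac pattern means "search every
# subfolder for MyDatabase.dacpac". Locally, find does the same job.
mkdir -p /tmp/dacpac-demo/bin/Release
touch /tmp/dacpac-demo/bin/Release/MyDatabase.dacpac
find /tmp/dacpac-demo -name 'MyDatabase.dacpac'
# prints: /tmp/dacpac-demo/bin/Release/MyDatabase.dacpac
```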

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xTGpEqzW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://bojana.dev/wp-content/uploads/2020/07/image-4-1024x666.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xTGpEqzW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://bojana.dev/wp-content/uploads/2020/07/image-4-1024x666.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that's it!&lt;br&gt;You should now have a fully operational pipeline that builds your database project and deploys it to Azure.&lt;/p&gt;



</description>
      <category>azure</category>
      <category>azuredevops</category>
      <category>database</category>
    </item>
    <item>
      <title>Private Github with gogs and raspberry pi</title>
      <dc:creator>Bojana Dejanović</dc:creator>
      <pubDate>Sat, 18 Aug 2018 13:05:38 +0000</pubDate>
      <link>https://dev.to/bojana_dev/private-github-with-gogs-and-raspberry-pi-46m3</link>
      <guid>https://dev.to/bojana_dev/private-github-with-gogs-and-raspberry-pi-46m3</guid>
<description>&lt;p&gt;If you are by any means involved in any part of the software development process, chances are you have heard of or used (or both) git, and for sure - github.  &lt;/p&gt;

&lt;p&gt;Github is great: you can create a free account in no time and be ready to push changes to your repos. There is just one catch - the repositories you create on github are public.   &lt;/p&gt;

&lt;p&gt;Which is fine for most use cases, especially managing and maintaining open-source projects.   &lt;/p&gt;

&lt;p&gt;A lot of big companies have their repos publicly available on github. Companies like Google, Amazon and Microsoft, which recently acquired the entire service and is now recognized as the &lt;a href="https://medium.freecodecamp.org/the-top-contributors-to-github-2017-be98ab854e87" rel="noopener noreferrer"&gt;biggest contributor&lt;/a&gt; on the whole github platform.  &lt;/p&gt;

&lt;p&gt;Github has an option for private repositories, of course, but it is a paid service, and depending on the size of the team and the included features, &lt;a href="https://github.com/pricing" rel="noopener noreferrer"&gt;prices vary&lt;/a&gt;.   &lt;/p&gt;

&lt;p&gt;$7/month is not something super pricey, especially if you are using git as an irreplaceable everyday tool, whether you are a lone developer or working in a team, and you don't want to mess around with configuring and maintaining a service - you want something that works right "out of the box".   &lt;/p&gt;

&lt;p&gt;With that said, it is far more interesting (at least for me) to install and configure a self-hosted git service yourself.  &lt;/p&gt;

&lt;p&gt;Why? Simply, because you can. :)  &lt;/p&gt;

&lt;p&gt;All you need is a raspberry pi, and a dozen minutes to spend reading this how-to. ;) So let's dive in.  &lt;/p&gt;

&lt;h2&gt;
  
  
  Installing Raspbian Lite on RaspberryPi
&lt;/h2&gt;

&lt;p&gt;If you are in a possession of any model of raspberry pi, and it is sitting in the drawer doing nothing (like it was a case with mine), you can put it to good and practical use.  &lt;/p&gt;

&lt;p&gt;I bought my piece of raspberry pi almost two years ago; it is a Raspberry Pi 2 Model B+. But any other variant will do, as the things we are going to install and configure will work fine on any.  &lt;/p&gt;

&lt;p&gt;I have equipped mine with a 32GB SD card but a 16GB will suffice as well.  &lt;/p&gt;

&lt;p&gt;For the image to be flashed to the SD card I've chosen Raspbian Lite: it's smaller in size, saving space on our SD card, and we don't need a GUI for our purposes, since most of the configuration will be performed remotely through the CLI.  &lt;/p&gt;

&lt;p&gt;Raspbian is an OS officially supported by the Raspberry Pi Foundation, so you can easily &lt;a href="https://www.raspberrypi.org/downloads/raspbian/" rel="noopener noreferrer"&gt;download&lt;/a&gt; the image or .zip and flash it to the SD card with a tool like &lt;a href="https://etcher.io/" rel="noopener noreferrer"&gt;Etcher&lt;/a&gt;, as recommended on the &lt;a href="https://www.raspberrypi.org/documentation/installation/installing-images/README.md" rel="noopener noreferrer"&gt;docs page&lt;/a&gt; of the project.  &lt;/p&gt;

&lt;h2&gt;
  
  
  Installing and configuring Gogs
&lt;/h2&gt;

&lt;p&gt;Gogs is a cross-platform self-hosted git service written in Go.  &lt;/p&gt;

&lt;p&gt;Before we download it we need to set up a few things which are prerequisites for Gogs; as listed in their &lt;a href="https://gogs.io/docs/installation" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;, those are: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;MySQL database (MSSQL and PostgreSQL are also supported, but I've chosen MySQL) &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Git (bash) version &amp;gt;= 1.7.1 for both server and client sides &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;a functioning SSH server  &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
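&lt;p&gt;To verify the second prerequisite, you can ask git for its version and compare it against 1.7.1; a minimal sketch:&lt;/p&gt;

```shell
# Sketch: check that the installed git satisfies the >= 1.7.1 requirement.
ver=$(git --version | awk '{print $3}')
echo "installed git version: $ver"
# sort -V picks the lowest of the two; if that is 1.7.1, our git is new enough
lowest=$(printf '1.7.1\n%s\n' "$ver" | sort -V | head -n1)
if [ "$lowest" = "1.7.1" ]; then
  echo "git is new enough for Gogs"
fi
```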

&lt;p&gt;Before performing any installs, be sure your system is up-to-date: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get upgrade 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;1) After this, we can install and configure MySQL server: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get install mysql-server 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you were not prompted to enter a password for the root user, type:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mysql_secure_installation
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can answer the questions as suits your needs, as long as you end up with root access to the MySQL server.&lt;br&gt;
In case you want some other user (other than root) to be used for accessing the gogs database, you have to grant that user permissions on the created database, or all privileges.&lt;br&gt;
After accessing the MySQL prompt with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mysql -u root -p
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;and entering root's password, perform:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GRANT ALL PRIVILEGES ON *.* TO 'raspberryuser'@'localhost' IDENTIFIED BY 'password'; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, while we are at the MySQL prompt, we can create a gogs database with appropriate collation:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE DATABASE IF NOT EXISTS gogs COLLATE utf8_general_ci ; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;2) Now, make sure you have git installed on your pi, by simply running:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get install git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;3) As the last prerequisite, the gogs documentation mentions having a functional SSH server. When you run the gogs service it will run its own SSH server on the default port 22. To avoid a collision with the system SSH server, the easiest solution is to change the port of the system SSH daemon.&lt;br&gt;
You can do that by editing the following file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nano /etc/ssh/sshd_config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Uncomment the line:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Port 22
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;and change the port number to something else (e.g. 2244).&lt;br&gt;
You will then need to restart the ssh service:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;service ssh restart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Additionally, to allow gogs to bind to a privileged port, perform:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo setcap CAP_NET_BIND_SERVICE=+eip /path/to/gogs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Finally, we can download gogs; simply perform: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget https://dl.gogs.io/0.11.53/gogs_0.11.53_raspi2_armv6.zip 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;in the command line. This should download the binary in your current folder. &lt;/p&gt;

&lt;p&gt;Extract the contents of the file, and then: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd extracted_folder 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Execute: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./gogs web  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It should launch the install page of the gogs service, which you can access externally from a web browser by entering: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://ip-of-your-raspberrypi:3000 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In my case that was: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://192.168.0.14:3000 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And you should be prompted with the installation page, that looks like this: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbojanab.tech%2Fimg%2Fgogs-install-firsttimerum1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbojanab.tech%2Fimg%2Fgogs-install-firsttimerum1.png" alt="Gogs install page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fill out the form to match your user and database settings, and the rest of the configuration involving the application port, URL and log path, as shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbojanab.tech%2Fimg%2Finstall-gogs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbojanab.tech%2Fimg%2Finstall-gogs.png" alt="Gogs install page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;and hit 'Install Gogs'. If everything went well, you will probably be redirected to the user login page. However, "localhost" will be used as the hostname, so replace it with your pi's IP address so you can create an account on your new installation of gogs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbojanab.tech%2Fimg%2Flocalhostshouldbeipaddress.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbojanab.tech%2Fimg%2Flocalhostshouldbeipaddress.png" alt="Replace localhost with ip"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbojanab.tech%2Fimg%2Fsigninggogs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbojanab.tech%2Fimg%2Fsigninggogs.png" alt="Gogs sign in page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you can click "Sign up now" to create your new account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbojanab.tech%2Fimg%2Fgogssignup.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbojanab.tech%2Fimg%2Fgogssignup.png" alt="Gogs sign up page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you can login with your newly created account, and start creating repos!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbojanab.tech%2Fimg%2Fgogsdashboard.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fbojanab.tech%2Fimg%2Fgogsdashboard.png" alt="Gogs dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, we don't want to launch gogs with ./gogs web every time we lose the SSH connection with our pi; it would be good to run gogs as a daemon, so it's running in the background and always on.&lt;/p&gt;

&lt;p&gt;Copy the init.d script from the extracted gogs folder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; sudo cp /home/malina/gogs/scripts/init/debian/gogs /etc/init.d/gogs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;and modify &lt;strong&gt;WORKINGDIR&lt;/strong&gt; and &lt;strong&gt;USER&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# PATH should only include /usr/* if it runs after the mountnfs.sh script
PATH=/sbin:/usr/sbin:/bin:/usr/bin
DESC="Gogs"
NAME=gogs
SERVICEVERBOSE=yes
PIDFILE=/var/run/$NAME.pid
SCRIPTNAME=/etc/init.d/$NAME
WORKINGDIR=/home/malina/gogs
DAEMON=$WORKINGDIR/$NAME
DAEMON_ARGS="web"
USER=malina
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now make the init script executable:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo chmod ug+x /etc/init.d/gogs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And to make it start automatically on boot, after the database server:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo update-rc.d gogs defaults 98
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We can now start gogs like any other service:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo service gogs start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If for some reason the service failed to start, perform a reboot and then try again.&lt;/p&gt;

&lt;p&gt;Additionally, you can configure port forwarding on your home router, so you can access your private github even when you are not at home. &lt;/p&gt;

&lt;p&gt;And that's it, now you have your own private github!&lt;/p&gt;

&lt;p&gt;Go push some code! ;)&lt;/p&gt;




&lt;p&gt;Originally published at &lt;a href="http://bojana.dev" rel="noopener noreferrer"&gt;http://bojana.dev&lt;/a&gt;&lt;/p&gt;

</description>
      <category>git</category>
      <category>github</category>
      <category>gogs</category>
      <category>raspberrypi</category>
    </item>
  </channel>
</rss>
