<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ahmet Turkmen</title>
    <description>The latest articles on DEV Community by Ahmet Turkmen (@mrturkmen).</description>
    <link>https://dev.to/mrturkmen</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F92602%2F8ede2662-605c-4fa7-8d31-b9a3cb878408.jpg</url>
      <title>DEV Community: Ahmet Turkmen</title>
      <link>https://dev.to/mrturkmen</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mrturkmen"/>
    <language>en</language>
    <item>
      <title>from interview question to enlightenment: metaclasses in python</title>
      <dc:creator>Ahmet Turkmen</dc:creator>
      <pubDate>Sun, 18 Jun 2023 10:20:00 +0000</pubDate>
      <link>https://dev.to/mrturkmen/from-interview-question-to-enlightenment-metaclasses-in-python-2p9g</link>
      <guid>https://dev.to/mrturkmen/from-interview-question-to-enlightenment-metaclasses-in-python-2p9g</guid>
      <description>&lt;p&gt;metaclasses in python&lt;/p&gt;

&lt;p&gt;&lt;a href="https://mrturkmen.com/posts/metaclasses-python/"&gt;https://mrturkmen.com/posts/metaclasses-python/&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>automate: run github ci/cd through slack slash command</title>
      <dc:creator>Ahmet Turkmen</dc:creator>
      <pubDate>Sun, 15 Jan 2023 01:20:00 +0000</pubDate>
      <link>https://dev.to/mrturkmen/automate-run-github-cicd-through-slack-slash-command-5ehm</link>
      <guid>https://dev.to/mrturkmen/automate-run-github-cicd-through-slack-slash-command-5ehm</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F565stq50d6hofp1sczpf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F565stq50d6hofp1sczpf.png" alt="Workflow from Slack command to Github actions through AWS Gateway" width="689" height="161"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Easy integration for running Github workflow files through Slack slash command&lt;/p&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
      &lt;div class="c-embed__cover"&gt;
        &lt;a href="https://mrturkmen.com/posts/automate-ci-cd-with-slack-command/" class="c-link s:max-w-50 align-middle" rel="noopener noreferrer"&gt;
          &lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmrturkmen.com%2Fimages%2Fworkflow.png" height="400" class="m-0" width="800"&gt;
        &lt;/a&gt;
      &lt;/div&gt;
    &lt;div class="c-embed__body"&gt;
      &lt;h2 class="fs-xl lh-tight"&gt;
        &lt;a href="https://mrturkmen.com/posts/automate-ci-cd-with-slack-command/" rel="noopener noreferrer" class="c-link"&gt;
          automate: run github ci/cd through slack slash command | mrturkmen
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;p class="truncate-at-3"&gt;
          Easy integration for running Github workflow files through Slack slash command
        &lt;/p&gt;
      &lt;div class="color-secondary fs-s flex items-center"&gt;
          &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmrturkmen.com%2F%253Clink%2520%2F%2520abs%2520url%253E" width="800" height="400"&gt;
        mrturkmen.com
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


</description>
      <category>devops</category>
      <category>slack</category>
      <category>tutorial</category>
      <category>productivity</category>
    </item>
    <item>
      <title>packer: build custom images on cloud and local</title>
      <dc:creator>Ahmet Turkmen</dc:creator>
      <pubDate>Sat, 17 Apr 2021 09:00:00 +0000</pubDate>
      <link>https://dev.to/mrturkmen/packer-build-custom-images-on-cloud-and-local-53ap</link>
      <guid>https://dev.to/mrturkmen/packer-build-custom-images-on-cloud-and-local-53ap</guid>
      <description>&lt;ul&gt;
&lt;li&gt;
Build Custom Ubuntu 20.04 LTS on Local

&lt;ul&gt;
&lt;li&gt;Anatomy of Packer Configuration File&lt;/li&gt;
&lt;li&gt;Builders&lt;/li&gt;
&lt;li&gt;Provisioner&lt;/li&gt;
&lt;li&gt;Post Processors&lt;/li&gt;
&lt;li&gt;Communicator&lt;/li&gt;
&lt;li&gt;How to run locally&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

Build Custom Ubuntu 20.04 LTS on Cloud

&lt;ul&gt;
&lt;li&gt;Builders on Cloud&lt;/li&gt;
&lt;li&gt;Customize settings on Cloud&lt;/li&gt;
&lt;li&gt;How to run&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;THE REPOSITORY: &lt;a href="https://github.com/mrtrkmnhub/ubuntu-packer" rel="noopener noreferrer"&gt;https://github.com/mrtrkmnhub/ubuntu-packer&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this blog post, provisioning and customizing images with Packer is demonstrated using a template repository.&lt;/p&gt;

&lt;p&gt;If you are wondering what Packer is, the official definition reads:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Packer is a free and open source tool for creating golden images for multiple platforms from a single source configuration. (From Official Website).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This post covers provisioning an Ubuntu image both on AWS and locally.&lt;/p&gt;

&lt;h1&gt;
  
  
  Build Custom Ubuntu 20.04 LTS on Local
&lt;/h1&gt;

&lt;p&gt;An ideal Packer template repository has a skeleton that includes &lt;code&gt;uploads&lt;/code&gt;, &lt;code&gt;http&lt;/code&gt; and &lt;code&gt;scripts&lt;/code&gt; folders alongside the Packer configuration file and a readme. Overall, the folder structure might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
    ├── http
    │ └── preseed.cfg # required to change default values of ubuntu image
    ├── readme.md # readme file to have instructions about what to do
    ├── scripts # scripts/ dir, includes scripts to run on custom image
    │ ├── cleanup.sh # cleans up /tmp 
    │ ├── install_tools.sh # installs custom tools
    │ └── setup.sh # setting up config in system wise
    ├── ubuntu-20.04.json # packer config for ubuntu 20.04
    └── uploads # directory to upload files to custom image 
        └── .gitkeep    

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this setup, &lt;code&gt;http/preseed.cfg&lt;/code&gt; defines answers to the questions that may be asked during installation of the Ubuntu operating system. More information about the &lt;em&gt;preseed.cfg&lt;/em&gt; file can be found on &lt;a href="https://wiki.debian.org/DebianInstaller/Preseed" rel="noopener noreferrer"&gt;its wiki&lt;/a&gt;.&lt;/p&gt;
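
&lt;p&gt;For illustration, a minimal &lt;em&gt;preseed.cfg&lt;/em&gt; fragment might preseed the locale and the default user as below; the values shown are examples, not the repository's actual file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;d-i debian-installer/locale string en_US
d-i passwd/username string ubuntu
d-i passwd/user-password password ubuntu
d-i passwd/user-password-again password ubuntu
d-i user-setup/allow-password-weak boolean true

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;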


&lt;p&gt;The &lt;code&gt;scripts&lt;/code&gt; folder is composed of bash scripts, Chef, Ansible, or other installer configuration files and scripts that install custom tools and define the settings of the Ubuntu image.&lt;/p&gt;


&lt;p&gt;The &lt;code&gt;uploads&lt;/code&gt; folder contains files, such as deb packages, that will be copied into the customized image.&lt;/p&gt;

&lt;h2&gt;
  
  
  Anatomy of Packer Configuration File
&lt;/h2&gt;

&lt;p&gt;Any Packer file is composed of three main components:&lt;/p&gt;

&lt;h3&gt;
  
  
  Builders
&lt;/h3&gt;

&lt;p&gt;Builders define the target platform and its configuration, including API key information and the desired source images. An example snippet from the Packer file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
    "builders": [
    {
      "boot_command": [
        "&amp;lt;esc&amp;gt;&amp;lt;wait&amp;gt;",
        "&amp;lt;esc&amp;gt;&amp;lt;wait&amp;gt;",
        "&amp;lt;enter&amp;gt;&amp;lt;wait&amp;gt;",
        "/install/vmlinuz&amp;lt;wait&amp;gt;",
        " auto&amp;lt;wait&amp;gt;",
        " console-setup/ask_detect=false&amp;lt;wait&amp;gt;",
        " console-setup/layoutcode=us&amp;lt;wait&amp;gt;",
        " console-setup/modelcode=pc105&amp;lt;wait&amp;gt;",
        " debconf/frontend=noninteractive&amp;lt;wait&amp;gt;",
        " debian-installer=en_US&amp;lt;wait&amp;gt;",
        " fb=false&amp;lt;wait&amp;gt;",
        " initrd=/install/initrd.gz&amp;lt;wait&amp;gt;",
        " kbd-chooser/method=us&amp;lt;wait&amp;gt;",
        " keyboard-configuration/layout=USA&amp;lt;wait&amp;gt;",
        " keyboard-configuration/variant=USA&amp;lt;wait&amp;gt;",
        " locale=en_US&amp;lt;wait&amp;gt;",
        " netcfg/get_domain=vm&amp;lt;wait&amp;gt;",
        " netcfg/get_hostname=ubuntu&amp;lt;wait&amp;gt;",
        " grub-installer/bootdev=/dev/sda&amp;lt;wait&amp;gt;",
        " noapic&amp;lt;wait&amp;gt;",
        " preseed/url=http://:/preseed.cfg&amp;lt;wait&amp;gt;",
        " -- &amp;lt;wait&amp;gt;",
        "&amp;lt;enter&amp;gt;&amp;lt;wait&amp;gt;"
      ],
      "boot_wait": "10s",
      "format": "ova",
      "disk_size": 25240,
      "guest_additions_path": "VBoxGuestAdditions_.iso",
      "guest_os_type": "Ubuntu_64",
      "headless": true,
      "http_directory": "http",
      "iso_checksum": "sha256:f11bda2f2caed8f420802b59f382c25160b114ccc665dbac9c5046e7fceaced2",
      "iso_urls": [
        "iso/ubuntu-20.04.1-legacy-server-amd64.iso",
        "https://cdimage.ubuntu.com/ubuntu-legacy-server/releases/20.04/release/ubuntu-20.04.1-legacy-server-amd64.iso"
      ],
      "shutdown_command": "echo 'ubuntu'|sudo -S shutdown -P now",
      "ssh_password": "ubuntu",
      "ssh_port": 22,
      "ssh_timeout": "10000s",
      "ssh_username": "ubuntu",
      "type": "virtualbox-iso",
      "vboxmanage": [
        [
          "modifyvm",
          "",
          "--memory",
          "2048"
        ],
        [
          "modifyvm",
          "",
          "--cpus",
          "1"
        ]
      ],
      "virtualbox_version_file": ".vbox_version",
      "vm_name": "ubuntu_vm_ubuntu_20_"
    }
  ]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The builders config defines a set of keys in the JSON file, most of which are self-explanatory; here we are building the image locally. All keys in the given builders config matter, but the one most likely to need updating over time is &lt;code&gt;iso_urls&lt;/code&gt;, which lists the locations from which Packer downloads the image it then customizes with your scripts. Another crucial key is &lt;code&gt;headless&lt;/code&gt; set to &lt;code&gt;true&lt;/code&gt;, which means no GUI is shown when the packer command executes the Packer JSON file.&lt;/p&gt;
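
&lt;p&gt;Before building, the template can be checked with Packer's built-in validate subcommand, which verifies syntax and configuration without starting a build:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ packer validate ubuntu-20.04.json

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;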

&lt;h3&gt;
  
  
  Provisioner
&lt;/h3&gt;

&lt;p&gt;Defines how to configure the image, most likely using existing configuration management tools such as Ansible, Chef, or Puppet, or plain bash scripts.&lt;/p&gt;

&lt;p&gt;In our example, bash scripts install tools and update the configuration of the Ubuntu image to customize it. The provisioner section of a Packer JSON file looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
 "provisioners": [
    {
      "type": "file",
      "source":"uploads",
      "destination": "/home/ubuntu"
    },
    {
      "execute_command": "echo 'ubuntu' | sudo -S -E bash ''",
      "script": "scripts/install_tools.sh",
      "type": "shell"
    },
    {
      "execute_command": "echo 'ubuntu' | sudo -S -E bash ''",
      "script": "scripts/setup.sh",
      "type": "shell"
    },
    {
      "execute_command": "echo 'ubuntu' | sudo -S -E bash ''",
      "script": "scripts/cleanup.sh",
      "type": "shell"
    }
  ]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we list existing bash scripts to execute while customizing the Ubuntu image. The steps under provisioners are straightforward.&lt;/p&gt;


&lt;p&gt;The content of the &lt;code&gt;uploads&lt;/code&gt; folder is uploaded to the home directory &lt;code&gt;/home/ubuntu&lt;/code&gt;.&lt;/p&gt;


&lt;p&gt;In the second step, &lt;code&gt;install_tools.sh&lt;/code&gt; is executed, and the remaining steps follow in order.&lt;/p&gt;
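
&lt;p&gt;As a sketch, &lt;code&gt;install_tools.sh&lt;/code&gt; could look like the snippet below; the package list is a hypothetical example, and the repository's actual script may differ:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/usr/bin/env bash
set -euo pipefail

# Keep apt non-interactive so the build never waits for input
export DEBIAN_FRONTEND=noninteractive

apt-get update
apt-get install -y curl git vim

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;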

&lt;h3&gt;
  
  
  Post Processors
&lt;/h3&gt;

&lt;p&gt;Related to the builder, post-processors run after the image is built and are generally used to generate or process artifacts. They are not required in this example; more information can be found here: &lt;a href="https://www.packer.io/docs/post-processors" rel="noopener noreferrer"&gt;post processors&lt;/a&gt;&lt;/p&gt;
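
&lt;p&gt;As an illustration (not part of this repository's template), a manifest post-processor records which artifacts a build produced:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"post-processors": [
    {
      "type": "manifest",
      "output": "manifest.json"
    }
  ]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;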

&lt;h3&gt;
  
  
  Communicator
&lt;/h3&gt;

&lt;p&gt;The communicator defines how Packer talks to the machine image during creation. By default it uses SSH and does not need to be defined explicitly. More information can be found here: &lt;a href="https://www.packer.io/docs/communicators" rel="noopener noreferrer"&gt;communicator&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The overall Packer file looks as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "builders": [
        {
        "boot_command": [
            "&amp;lt;esc&amp;gt;&amp;lt;wait&amp;gt;",
            "&amp;lt;esc&amp;gt;&amp;lt;wait&amp;gt;",
            "&amp;lt;enter&amp;gt;&amp;lt;wait&amp;gt;",
            "/install/vmlinuz&amp;lt;wait&amp;gt;",
            " auto&amp;lt;wait&amp;gt;",
            " console-setup/ask_detect=false&amp;lt;wait&amp;gt;",
            " console-setup/layoutcode=us&amp;lt;wait&amp;gt;",
            " console-setup/modelcode=pc105&amp;lt;wait&amp;gt;",
            " debconf/frontend=noninteractive&amp;lt;wait&amp;gt;",
            " debian-installer=en_US&amp;lt;wait&amp;gt;",
            " fb=false&amp;lt;wait&amp;gt;",
            " initrd=/install/initrd.gz&amp;lt;wait&amp;gt;",
            " kbd-chooser/method=us&amp;lt;wait&amp;gt;",
            " keyboard-configuration/layout=USA&amp;lt;wait&amp;gt;",
            " keyboard-configuration/variant=USA&amp;lt;wait&amp;gt;",
            " locale=en_US&amp;lt;wait&amp;gt;",
            " netcfg/get_domain=vm&amp;lt;wait&amp;gt;",
            " netcfg/get_hostname=ubuntu&amp;lt;wait&amp;gt;",
            " grub-installer/bootdev=/dev/sda&amp;lt;wait&amp;gt;",
            " noapic&amp;lt;wait&amp;gt;",
            " preseed/url=http://:/preseed.cfg&amp;lt;wait&amp;gt;",
            " -- &amp;lt;wait&amp;gt;",
            "&amp;lt;enter&amp;gt;&amp;lt;wait&amp;gt;"
        ],
        "boot_wait": "10s",
        "format": "ova",
        "disk_size": 25240,
        "guest_additions_path": "VBoxGuestAdditions_.iso",
        "guest_os_type": "Ubuntu_64",
        "headless": true,
        "http_directory": "http",
        "iso_checksum": "sha256:f11bda2f2caed8f420802b59f382c25160b114ccc665dbac9c5046e7fceaced2",
        "iso_urls": [
            "iso/ubuntu-20.04.1-legacy-server-amd64.iso",
            "https://cdimage.ubuntu.com/ubuntu-legacy-server/releases/20.04/release/ubuntu-20.04.1-legacy-server-amd64.iso"
        ],
        "shutdown_command": "echo 'ubuntu'|sudo -S shutdown -P now",
        "ssh_password": "ubuntu",
        "ssh_port": 22,
        "ssh_timeout": "10000s",
        "ssh_username": "ubuntu",
        "type": "virtualbox-iso",
        "vboxmanage": [
            [
            "modifyvm",
            "",
            "--memory",
            "2048"
            ],
            [
            "modifyvm",
            "",
            "--cpus",
            "1"
            ]
        ],
        "virtualbox_version_file": ".vbox_version",
        "vm_name": "ubuntu_vm_ubuntu_20_"
        }
    ],
    "provisioners": [
        {
        "type": "file",
        "source":"uploads",
        "destination": "/home/ubuntu"
        },
        {
        "execute_command": "echo 'ubuntu' | sudo -S -E bash ''",
        "script": "scripts/install_tools.sh",
        "type": "shell"
        },
        {
        "execute_command": "echo 'ubuntu' | sudo -S -E bash ''",
        "script": "scripts/setup.sh",
        "type": "shell"
        },
        {
        "execute_command": "echo 'ubuntu' | sudo -S -E bash ''",
        "script": "scripts/cleanup.sh",
        "type": "shell"
        }
    ],
    "variables": {
        "version": "0.1"
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How to run locally
&lt;/h2&gt;

&lt;p&gt;This file can be run from the directory where the &lt;a href="https://github.com/mrtrkmnhub/ubuntu-packer/blob/master/on-local/ubuntu-20.04.json" rel="noopener noreferrer"&gt;ubuntu-20.04.json&lt;/a&gt; file is located.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ packer build ubuntu-20.04.json

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmrturkmen.com%2Fassets%2Fimages%2Fpacker%2Fpacker_build_on_local.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmrturkmen.com%2Fassets%2Fimages%2Fpacker%2Fpacker_build_on_local.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It starts building the custom image by installing the tools defined under scripts, and configures the username and password according to the preseed.cfg and setup.sh files.&lt;/p&gt;
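
&lt;p&gt;When the build finishes, the produced OVA can be imported into VirtualBox with VBoxManage; the path below is a placeholder for wherever the build wrote its output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ VBoxManage import path/to/your-image.ova

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;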

&lt;h1&gt;
  
  
  Build Custom Ubuntu 20.04 LTS on Cloud
&lt;/h1&gt;

&lt;p&gt;Building on the cloud is more practical and often preferable if you already have a cloud account to use. This Packer configuration creates the custom image directly on the cloud and saves it as an AMI in your AWS account.&lt;/p&gt;

&lt;p&gt;The anatomy of the Packer file is similar; compared to the local one, only the builders section needs to change. It defines all required AWS variables and the AMIs to customize.&lt;/p&gt;

&lt;p&gt;AWS will be used as the cloud example to create the custom image.&lt;/p&gt;

&lt;h2&gt;
  
  
  Builders on Cloud
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
"builders": [
        {
            "type":"amazon-ebs", 
            "region": "", 
            "access_key": "",
            "secret_key": "", 
            "subnet_id": "", 
            "security_group_id": "", 
            "source_ami_filter": {
                "filters": {
                    "virtualization-type": "hvm", 
                    "name": "ubuntu/images/*ubuntu-focal-20.04-amd64-server-*",
                    "root-device-type": "ebs"
                },
                "owners": ["099720109477"],
                "most_recent": true
            },
            "instance_type": "",
            "ssh_username":"ubuntu", 
            "ami_name": "ubuntu-ami-custom_"
        }

    ]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this configuration, all keys matter, but some are crucial and required for the build to run. More information about the keys can be found here: &lt;a href="https://www.packer.io/docs/builders/amazon" rel="noopener noreferrer"&gt;Amazon AMI Builder&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since we want to create a custom Ubuntu 20.04 image on the cloud and save it as an AMI to run later, we search for its name pattern among the available AMIs in the AWS Management Console, or find it through this website: &lt;a href="https://cloud-images.ubuntu.com/locator/ec2/" rel="noopener noreferrer"&gt;https://cloud-images.ubuntu.com/locator/ec2/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have decided which AMI to customize, it must be specified under &lt;code&gt;source_ami_filter&lt;/code&gt; with wildcards and owners. Setting &lt;code&gt;most_recent&lt;/code&gt; to &lt;code&gt;true&lt;/code&gt; means that when this Packer JSON file is executed, it fetches and customizes the most recently updated matching AMI.&lt;/p&gt;
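
&lt;p&gt;To preview which AMI the filter resolves to, the same filters can be queried with the AWS CLI, assuming credentials and a region are already configured:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ aws ec2 describe-images \
    --owners 099720109477 \
    --filters "Name=name,Values=ubuntu/images/*ubuntu-focal-20.04-amd64-server-*" \
              "Name=virtualization-type,Values=hvm" \
    --query 'sort_by(Images, &amp;amp;CreationDate)[-1].ImageId'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;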

&lt;p&gt;&lt;strong&gt;Access Key, Secret Key&lt;/strong&gt; are required and must never be exposed publicly; if they are exposed, rotate them immediately. They are used to communicate with AWS and fire up the instances that create the custom image according to the settings defined in builders and provisioners.&lt;/p&gt;

&lt;p&gt;The values of these keys are defined in &lt;strong&gt;variables&lt;/strong&gt; and parsed from there.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
"variables": {
        "aws_access_key": "",
        "aws_secret_key": "",
        "aws_region": "",
        "aws_vpc": "",
        "aws_subnet": "",
        "ami_name": "",
        "ami_description": "",
        "builder_name": "",
        "username":"ubuntu",
        "instance_type":"t2.medium",
        "tarball": ""
    }, 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the variables section, the &lt;strong&gt;username, instance_type, aws_access_key, aws_secret_key&lt;/strong&gt; variables must be set correctly to create the image on the cloud. The other variables are optional, and the section can be extended further.&lt;/p&gt;
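
&lt;p&gt;Values from the variables section are referenced in the builders section through Packer's &lt;code&gt;user&lt;/code&gt; function, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"access_key": "{{user `aws_access_key`}}",
"secret_key": "{{user `aws_secret_key`}}",
"region": "{{user `aws_region`}}"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;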

&lt;h2&gt;
  
  
  Customize settings on Cloud
&lt;/h2&gt;

&lt;p&gt;On cloud builds, a cloud-init configuration file should be used instead of &lt;code&gt;preseed.cfg&lt;/code&gt; to customize settings. The &lt;strong&gt;defaults.cfg&lt;/strong&gt; file contains custom settings such as the default username and password, changes to the sudoers file, and more. An example &lt;strong&gt;defaults.cfg&lt;/strong&gt; can look as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#cloud-config
system_info:
  default_user:
    name: ubuntu
    sudo: ["ALL=(ALL) NOPASSWD:ALL"]
    lock_passwd: false
    plain_text_passwd: 'ubuntu'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;More information about the &lt;strong&gt;defaults.cfg&lt;/strong&gt; file and further customization options can be found here: &lt;a href="https://cloudinit.readthedocs.io/en/latest/topics/examples.html" rel="noopener noreferrer"&gt;https://cloudinit.readthedocs.io/en/latest/topics/examples.html&lt;/a&gt;&lt;/p&gt;
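
&lt;p&gt;The cloud-init file is handed to the instance through the amazon-ebs builder's &lt;code&gt;user_data_file&lt;/code&gt; key; the path below assumes the file sits next to the template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"user_data_file": "defaults.cfg"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;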

&lt;h2&gt;
  
  
  How to run
&lt;/h2&gt;

&lt;p&gt;Once the variables are set, it can be run the same way as the local build.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
$ packer build aws_packer.json

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmrturkmen.com%2Fassets%2Fimages%2Fpacker%2Fpacker_build_on_aws.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmrturkmen.com%2Fassets%2Fimages%2Fpacker%2Fpacker_build_on_aws.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Complete packer JSON file : &lt;a href="https://github.com/mrtrkmnhub/ubuntu-packer/blob/master/on-aws/aws-packer.json" rel="noopener noreferrer"&gt;aws_packer.json&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In summary, Packer is a really useful tool for automating the creation of custom images, and it can build Docker images as well. The local example in this post produces an OVA file to import; the cloud one generates a custom AMI under your AWS account.&lt;/p&gt;
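
&lt;p&gt;For Docker, the &lt;code&gt;docker&lt;/code&gt; builder takes the place of virtualbox-iso or amazon-ebs; the snippet below is only a sketch, not part of this repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"builders": [
    {
      "type": "docker",
      "image": "ubuntu:20.04",
      "commit": true
    }
  ]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;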

&lt;p&gt;All scripts and config files can be found in this repository: &lt;a href="https://github.com/mrtrkmnhub/ubuntu-packer" rel="noopener noreferrer"&gt;https://github.com/mrtrkmnhub/ubuntu-packer&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>productivity</category>
      <category>aws</category>
      <category>learning</category>
    </item>
    <item>
      <title>fail2ban: block ssh bruteforce attacks 🇬🇧</title>
      <dc:creator>Ahmet Turkmen</dc:creator>
      <pubDate>Wed, 24 Feb 2021 12:00:00 +0000</pubDate>
      <link>https://dev.to/mrturkmen/fail2ban-block-ssh-bruteforce-attacks-40np</link>
      <guid>https://dev.to/mrturkmen/fail2ban-block-ssh-bruteforce-attacks-40np</guid>
      <description>&lt;h1&gt;
  
  
  fail2ban
&lt;/h1&gt;

&lt;p&gt;A while ago, I was checking servers’ logs for suspicious activity coming from outside. I noticed that both the staging/testing and production servers were receiving many SSH brute-force attacks from a variety of countries, shown in the table below.&lt;/p&gt;




&lt;h2&gt;
  
  
  List of IP Addresses (Performing SSH Brute Forcing)
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;IP Address&lt;/th&gt;
&lt;th&gt;Country Code&lt;/th&gt;
&lt;th&gt;Location&lt;/th&gt;
&lt;th&gt;Network&lt;/th&gt;
&lt;th&gt;Postal Code&lt;/th&gt;
&lt;th&gt;Approximate Coordinates*&lt;/th&gt;
&lt;th&gt;Accuracy Radius (km)&lt;/th&gt;
&lt;th&gt;ISP&lt;/th&gt;
&lt;th&gt;Organization&lt;/th&gt;
&lt;th&gt;Domain&lt;/th&gt;
&lt;th&gt;Metro Code&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;171.239.254.84&lt;/td&gt;
&lt;td&gt;VN&lt;/td&gt;
&lt;td&gt;Ho Chi Minh City,  Ho Chi Minh, Vietnam, Asia&lt;/td&gt;
&lt;td&gt;171.239.254.0/23&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;10.8104,106.6444&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Viettel Group&lt;/td&gt;
&lt;td&gt;Viettel Group&lt;/td&gt;
&lt;td&gt;viettel.vn&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;North Holland, Netherlands, Europe&lt;/td&gt;
&lt;td&gt;159.65.192.0/20&lt;/td&gt;
&lt;td&gt;1098&lt;/td&gt;
&lt;td&gt;52.352, 4.9392&lt;/td&gt;
&lt;td&gt;1000&lt;/td&gt;
&lt;td&gt;Digital Ocean&lt;/td&gt;
&lt;td&gt;Digital Ocean&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;117.217.35.114&lt;/td&gt;
&lt;td&gt;IN&lt;/td&gt;
&lt;td&gt;Bhopal,Madhya Pradesh, India, Asia&lt;/td&gt;
&lt;td&gt;117.217.35.0/24&lt;/td&gt;
&lt;td&gt;462030&lt;/td&gt;
&lt;td&gt;23.2487,77.4066&lt;/td&gt;
&lt;td&gt;50&lt;/td&gt;
&lt;td&gt;BSNL&lt;/td&gt;
&lt;td&gt;BSNL&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Asia&lt;/td&gt;
&lt;td&gt;113.164.79.0/24&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;9.7774, 105.4592&lt;/td&gt;
&lt;td&gt;50&lt;/td&gt;
&lt;td&gt;VNPT&lt;/td&gt;
&lt;td&gt;VNPT&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;61.14.228.170&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Da Nang, Vietnam, Asia&lt;/td&gt;
&lt;td&gt;116.110.30.0/23&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;16.0685,&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;108.2215&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Viettel Group&lt;/td&gt;
&lt;td&gt;Viettel Group&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;43.239.80.181&lt;/td&gt;
&lt;td&gt;IN&lt;/td&gt;
&lt;td&gt;Kolkata, West Bengal, India, Asia&lt;/td&gt;
&lt;td&gt;43.239.80.0/24&lt;/td&gt;
&lt;td&gt;700006&lt;/td&gt;
&lt;td&gt;22.5602, 88.3698&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;Meghbela Broadband&lt;/td&gt;
&lt;td&gt;Meghbela Broadband&lt;/td&gt;
&lt;td&gt;PMPL-Broadband.net&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tinh Thai Binh, Vietnam, Asia&lt;/td&gt;
&lt;td&gt;14.255.136.0/23&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;20.4487,&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;106.3343&lt;/td&gt;
&lt;td&gt;100&lt;/td&gt;
&lt;td&gt;VNPT&lt;/td&gt;
&lt;td&gt;VNPT&lt;/td&gt;
&lt;td&gt;vnpt.vn&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;184.22.195.230&lt;/td&gt;
&lt;td&gt;TH&lt;/td&gt;
&lt;td&gt;Bangkok, Bangkok, Thailand, Asia&lt;/td&gt;
&lt;td&gt;184.22.195.0/24&lt;/td&gt;
&lt;td&gt;10310&lt;/td&gt;
&lt;td&gt;13.7749, 100.5197&lt;/td&gt;
&lt;td&gt;20&lt;/td&gt;
&lt;td&gt;AIS Fibre&lt;/td&gt;
&lt;td&gt;AIS Fibre&lt;/td&gt;
&lt;td&gt;myaisfibre.com&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;116.110.109.90&lt;/td&gt;
&lt;td&gt;VN&lt;/td&gt;
&lt;td&gt;Da Nang, Da Nang, Vietnam, Asia&lt;/td&gt;
&lt;td&gt;116.110.109.0/24&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;16.0685, 108.2215&lt;/td&gt;
&lt;td&gt;20&lt;/td&gt;
&lt;td&gt;Viettel Group&lt;/td&gt;
&lt;td&gt;Viettel Group&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ho Chi Minh, Vietnam, Asia&lt;/td&gt;
&lt;td&gt;115.76.168.0/23&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;10.8104,&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;106.6444&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Viettel Group&lt;/td&gt;
&lt;td&gt;Viettel Group&lt;/td&gt;
&lt;td&gt;viettel.vn&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;* Information in the table was gathered from &lt;a href="https://www.maxmind.com/en/geoip-demo"&gt;https://www.maxmind.com/en/geoip-demo&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Ban failed attempts
&lt;/h2&gt;

&lt;p&gt;Although the servers have password login disabled, attackers kept brute forcing the SSH port. fail2ban was an obvious solution to block those IP addresses permanently or temporarily. I preferred to block them all permanently until I unblock them manually.&lt;/p&gt;

&lt;p&gt;Installing fail2ban is straightforward: &lt;code&gt;apt-get update &amp;amp;&amp;amp; apt-get install fail2ban&lt;/code&gt;. Once installation completes, the configuration is the more important part.&lt;/p&gt;

&lt;p&gt;The following steps will guide you through blocking any IP address that is brute forcing SSH.&lt;/p&gt;




&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Copy template file&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   $ cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Set Ban time&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is possible to make the ban time permanent or temporary. I preferred a permanent ban, so I changed &lt;code&gt;bantime = -1&lt;/code&gt;. Save and exit the file when you are done.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ vim /etc/fail2ban/jail.conf

# Permanent ban 
bantime = -1 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Create custom rules for SSH&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
 $ vim /etc/fail2ban/jail.d/sshd.local

   [sshd]
   enabled = true
   port = ssh
   filter = sshd
   logpath = /var/log/auth.log # place of ssh logs 
   maxretry = 4 # maximum number of attempts that user can do 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(* The &lt;code&gt;maxretry&lt;/code&gt; value and the log file path can be adjusted to your setup.)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Make the rules persistent&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Persistent rules survive a restart of the fail2ban service or of the server itself: the blocked IPs are not lost. Achieving this requires a small tweak to the iptables actions fail2ban uses. Add the following &lt;code&gt;cat&lt;/code&gt; and &lt;code&gt;echo&lt;/code&gt; commands at the end of &lt;code&gt;actionstart&lt;/code&gt; and &lt;code&gt;actionban&lt;/code&gt; respectively.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ vim /etc/fail2ban/action.d/iptables-multiport.conf 

                        .
                        .
                        .

actionstart = iptables -N fail2ban-&amp;lt;name&amp;gt;
              iptables -A fail2ban-&amp;lt;name&amp;gt; -j RETURN
              iptables -I &amp;lt;chain&amp;gt; -p &amp;lt;protocol&amp;gt; -m multiport --dports &amp;lt;port&amp;gt; -j fail2ban-&amp;lt;name&amp;gt;
          cat /etc/fail2ban/persistent.bans | awk '/^fail2ban-&amp;lt;name&amp;gt;/ {print $2}' \
          | while read IP; do iptables -I fail2ban-&amp;lt;name&amp;gt; 1 -s $IP -j &amp;lt;blocktype&amp;gt;; done

                       .
                       .
                       .

actionban = iptables -I fail2ban-&amp;lt;name&amp;gt; 1 -s &amp;lt;ip&amp;gt; -j &amp;lt;blocktype&amp;gt;
        echo "fail2ban-&amp;lt;name&amp;gt; &amp;lt;ip&amp;gt;" &amp;gt;&amp;gt; /etc/fail2ban/persistent.bans

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
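&lt;p&gt;To sanity-check the restore loop from &lt;code&gt;actionstart&lt;/code&gt; without touching a live firewall, the same &lt;code&gt;awk&lt;/code&gt;/&lt;code&gt;while&lt;/code&gt; pipeline can be dry-run against a throwaway bans file. This sketch uses a concrete jail name (&lt;code&gt;sshd&lt;/code&gt;) and example addresses, and only prints the iptables commands instead of executing them:&lt;/p&gt;

```shell
# Dry run of the actionstart restore loop: read "fail2ban-JAIL IP" lines
# from a throwaway bans file and print the iptables insert that fail2ban
# would execute for each recorded ban on service start.
bans_file=$(mktemp)
printf 'fail2ban-sshd 203.0.113.10\nfail2ban-sshd 198.51.100.7\n' > "$bans_file"

restore_cmds=$(awk '/^fail2ban-sshd/ {print $2}' "$bans_file" | while read -r IP; do
    echo "iptables -I fail2ban-sshd 1 -s $IP -j REJECT"
done)
echo "$restore_cmds"
rm -f "$bans_file"
```

&lt;p&gt;Pointing the same pipeline at &lt;code&gt;/etc/fail2ban/persistent.bans&lt;/code&gt; and actually executing the printed commands is exactly what the &lt;code&gt;actionstart&lt;/code&gt; snippet above does.&lt;/p&gt;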



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Save and restart service&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ systemctl restart fail2ban

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These are the most basic steps for blocking IP addresses that are actively brute forcing the server. After a while, I can list the banned addresses with the following command :)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
$ sudo fail2ban-client status sshd

Status for the jail: sshd
|- Filter
| |- Currently failed:  12
| |- Total failed:  107
| `- File list: /var/log/auth.log
`- Actions
   |- Currently banned: 16
   |- Total banned: 16
   `- Banned IP list:   171.239.254.84 184.102.70.222 180.251.85.85 103.249.240.208 159.65.194.150 117.217.35.114 113.164.79.129 61.14.228.170 116.110.30.245 43.239.80.181 77.222.130.223 14.255.137.219 184.22.195.230 125.25.82.12 116.110.109.90 115.76.168.231

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
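&lt;p&gt;If you want just the addresses (for example, to feed them into other tooling), the &lt;code&gt;Banned IP list&lt;/code&gt; line can be extracted from the status output with &lt;code&gt;sed&lt;/code&gt;. A small sketch, run here against a trimmed copy of the output above rather than a live jail:&lt;/p&gt;

```shell
# Extract the space-separated IP list from fail2ban-client status output.
# "status" stands in for the output of: fail2ban-client status sshd
status='Status for the jail: sshd
   |- Currently banned: 2
   `- Banned IP list:   171.239.254.84 184.102.70.222'

banned=$(printf '%s\n' "$status" | sed -n 's/.*Banned IP list:[[:space:]]*//p')
echo "$banned"
```

&lt;p&gt;On a live server the same filter would be &lt;code&gt;fail2ban-client status sshd | sed -n 's/.*Banned IP list:[[:space:]]*//p'&lt;/code&gt;.&lt;/p&gt;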



&lt;p&gt;The list keeps growing over time, but at least the attackers cannot brute force the server from the same IP address twice. There are plenty of other ways to make the SSH port more secure, but I think an up-to-date ssh daemon/client, passwordless login and fail2ban are enough in most cases. Although there are plenty of guides out there, I wanted to note down how I did it so I can come back and check if something happens.&lt;/p&gt;

&lt;p&gt;Take care !&lt;/p&gt;

</description>
      <category>security</category>
      <category>tutorial</category>
      <category>linux</category>
      <category>development</category>
    </item>
    <item>
      <title>Deploy with Ansible on CI/CD 🇬🇧</title>
      <dc:creator>Ahmet Turkmen</dc:creator>
      <pubDate>Sat, 12 Dec 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/mrturkmen/deploy-with-ansible-on-ci-cd-3o02</link>
      <guid>https://dev.to/mrturkmen/deploy-with-ansible-on-ci-cd-3o02</guid>
<description>&lt;p&gt;In this post, the deployment process of an application with Ansible will be explained. Traditionally, applications can be deployed in different ways; the approach closest to Ansible is executing a bash script full of ssh commands. For example, Travis CI has a feature where a bash script can be defined to deploy an application, and through the instructions within that script the application is deployed.&lt;/p&gt;

&lt;p&gt;Details on deployment via Travis bash scripting can be found &lt;a href="https://docs.travis-ci.com/user/deployment/script/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Travis Script Deployment
&lt;/h2&gt;

&lt;p&gt;I would like to give a real-world example from one of the projects I work on. We used Travis script deployment for a while and it worked pretty well. The bash script we used in our deployment process is given below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/usr/bin/env bash
f=dist/hknd_linux_amd64/hknd
amigo=./svcs/amigo
user=ntpd
hostname=sec02.lab.es.aau.dk
keyfile=./travis_deploy_key
deploy_path=/data/home/ntpd/daemon/hknd
amigo_path=/data/home/ntpd/daemon/svcs/amigo

if [ -f "$f" ]; then
    echo "Deploying '$f' to '$hostname'"
    chmod 600 $keyfile
    ssh -i $keyfile -o StrictHostKeyChecking=no $user@$hostname sudo /bin/systemctl stop hknd.service
    scp -i $keyfile -o StrictHostKeyChecking=no $f $user@$hostname:$deploy_path
    scp -i $keyfile -r -o StrictHostKeyChecking=no $amigo $user@$hostname:$amigo_path
    ssh -i $keyfile -o StrictHostKeyChecking=no $user@$hostname sudo /bin/systemctl start hknd.service
else
    echo "Error: $f does not exist"
    exit 1
fi

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see from the bash script, every deployment step is a plain ssh/scp command. There is no harm in that as long as there are only a few steps. However, as time passes and more configurations and applications need to be deployed, updated, modified and checked, it can turn into a headache. Well-structured deployment steps with Ansible put us on the safe side.&lt;/p&gt;

&lt;p&gt;Before jumping into deployment with Ansible, I would like to point out some drawbacks of sticking with plain bash deployment scripts.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is not a common, standardized way of utilizing resources.&lt;/li&gt;
&lt;li&gt;Loosely structured deployment scripts have a high potential of breaking.&lt;/li&gt;
&lt;li&gt;Plain ssh commands increase the likelihood of issues with settings, deployments and more.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are more drawbacks of using pure bash scripts in a deployment process, though not all of them apply to every project.&lt;/p&gt;

&lt;p&gt;For our case, I would like to convert the bash script given above to Ansible, which has a more elegant structure and is easier to manage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Move to Ansible
&lt;/h2&gt;

&lt;p&gt;Since the bash script does not contain complex instructions, it is easy to convert into an Ansible playbook. Before starting the conversion, the necessary ssh connections should be set up correctly for the development and production environments (- and a test environment as well, if required -).&lt;/p&gt;

&lt;p&gt;Setting up the ssh connection between the server and the ansible user is pretty straightforward; it contains the following steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Generate SSH Key pair&lt;/li&gt;
&lt;li&gt;Copy public key to &lt;code&gt;authorized_keys&lt;/code&gt; on server side&lt;/li&gt;
&lt;li&gt;Encrypt private key&lt;/li&gt;
&lt;li&gt;Have decrypt script to use private key on CI without compromising it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Overall simplified flow for deployment is given below :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmrturkmen.com%2Fassets%2Fimages%2Fansible_deployment.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmrturkmen.com%2Fassets%2Fimages%2Fansible_deployment.jpeg" alt="Overall simplified flow"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As the picture above shows, we need to provide the encrypted ssh key together with a decryption script, in order to obtain the plain private key used to access the server.&lt;/p&gt;

&lt;p&gt;In this setup, the Github CI runner will be the control node, which has access to the server where we would like to deploy the application.&lt;/p&gt;

&lt;p&gt;Let’s start completing the steps.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Generate SSH Key pair&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  $ ssh-keygen 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can keep all defaults or answer the prompts as you run it. Once the command finishes, there will be a public and a private key; append the public key to the user’s &lt;code&gt;authorized_keys&lt;/code&gt; file on the server. After that the connection should be established; you may want to test it with a plain ssh command.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Encrypt Private Key&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In order to use the ssh key generated before, we need to encrypt it. I preferred the &lt;code&gt;gpg&lt;/code&gt; tool; there are many examples of its usage on the internet if you want to dig deeper.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  $ gpg --symmetric --cipher-algo AES256 &amp;lt;private-key-file&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The command will prompt you for a passphrase, used to encrypt the private key now and to decrypt it when required. Choose a strong and long passphrase. Once it is done, include the encrypted file in git (- which means commit it as well -).&lt;/p&gt;
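&lt;p&gt;The CI side later only needs to invert this command. As a sketch of the whole round trip (file names and the passphrase here are illustrative, and GnuPG 2.x with loopback pinentry is assumed), encrypting and decrypting look like this:&lt;/p&gt;

```shell
# Symmetric encrypt/decrypt round trip with gpg, mirroring what is done
# locally (encrypt) and on CI (decrypt). Uses a dummy key file in a temp dir.
workdir=$(mktemp -d)
echo "dummy-private-key" > "$workdir/privkey"

# Local step: encrypt before committing privkey.gpg to the repository.
gpg --batch --yes --pinentry-mode loopback --passphrase "s3cret" \
    --symmetric --cipher-algo AES256 -o "$workdir/privkey.gpg" "$workdir/privkey"

# CI step: what a decrypt script would run, with the passphrase coming
# from the SECRET_PASSPHRASE repository secret instead of a literal.
gpg --quiet --batch --yes --pinentry-mode loopback --passphrase "s3cret" \
    -o "$workdir/privkey.out" --decrypt "$workdir/privkey.gpg"

cat "$workdir/privkey.out"
```

&lt;p&gt;The decrypt half is essentially what the decrypt script in the repository has to do before Ansible can use the key.&lt;/p&gt;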

&lt;p&gt;Once these steps are completed, the rest is structuring an Ansible playbook to deploy the file to the server.&lt;/p&gt;

&lt;h2&gt;
  
  
  Example Repo
&lt;/h2&gt;

&lt;p&gt;I am going to create a repository on Github to demonstrate what I have described earlier in action.&lt;/p&gt;

&lt;p&gt;For demo purposes, I will upload a service file to the server and start it, a simplified version of the bash commands given above.&lt;/p&gt;

&lt;p&gt;Ansible playbook will contain following;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stopping already running service&lt;/li&gt;
&lt;li&gt;Changing binary file of the service&lt;/li&gt;
&lt;li&gt;Starting it again&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The tasks can be extended according to your needs; however, to keep it short and show how Ansible can be used in continuous integration, I will stick with a minimal playbook.&lt;/p&gt;

&lt;p&gt;Link to example repository: &lt;a href="https://github.com/mrturkmenhub/ansible-deploy" rel="noopener noreferrer"&gt;https://github.com/mrturkmenhub/ansible-deploy&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The structure of the repository as following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmrturkmen.com%2Fassets%2Fimages%2Fdeploy_with_ansible.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmrturkmen.com%2Fassets%2Fimages%2Fdeploy_with_ansible.png" alt="Deploy with Ansible"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As can be observed from the figure above, I have only three tasks, which are combined under &lt;a href="https://github.com/mrturkmenhub/ansible-deploy/blob/master/main.yml" rel="noopener noreferrer"&gt;main.yml&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Configuration regarding Ansible itself, such as the private key and inventory file locations, is stored in &lt;a href="https://github.com/mrturkmenhub/ansible-deploy/blob/master/ansible.cfg" rel="noopener noreferrer"&gt;ansible.cfg&lt;/a&gt;, along with the ssh connection configuration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/mrturkmenhub/ansible-deploy/blob/master/inventory" rel="noopener noreferrer"&gt;Inventory&lt;/a&gt; file contains server(s) to deploy the application.&lt;/p&gt;

&lt;p&gt;This post is not about how to write Ansible playbooks, so I will skip explaining them. If you would like to understand them better, check the following repositories for examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/mrturkmenhub/DevOps-Learning-Journey" rel="noopener noreferrer"&gt;DevOps Learning Journey&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://mrturkmen.com/assets/notes/20201205-introduction-to-ansible.pdf" rel="noopener noreferrer"&gt;Handwritten notes about Ansible&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://github.com/mrturkmenhub/ansible-deploy/blob/master/.github/scripts/decrypt.sh" rel="noopener noreferrer"&gt;Decrypt script&lt;/a&gt; is crucial file which is decrypting encrypted private key to access the server.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DO NOT FORGET TO SET YOUR SECRET_PASSPHRASE TO SECRETS OF THE REPOSITORY&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmrturkmen.com%2Fassets%2Fimages%2Fsecrets.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmrturkmen.com%2Fassets%2Fimages%2Fsecrets.png" alt="Secrets"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The workflow file
&lt;/h2&gt;

&lt;p&gt;The workflow file for this repository is pretty straightforward as well: Ansible needs to be installed into the environment, and then running the ansible-playbook command after decrypting the encrypted private key completes the tasks.&lt;/p&gt;

&lt;p&gt;The generated workflow is for demonstration. In a normal production case the pipeline should &lt;strong&gt;NOT&lt;/strong&gt; be broken; each step, from testing to production deployment, should be automated as much as possible.&lt;/p&gt;

&lt;p&gt;The completed workflow file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# This is a basic workflow to help you get started with Actions

name: CI

# Controls when the action will run. 
on:
  # Triggers the workflow on tagged commits  
  push:
    tags:
      - '*.*.*'

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest

    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2

      - name: Install Ansible
        run: |
          sudo apt update -y
          sudo apt install software-properties-common -y
          sudo apt-add-repository --yes --update ppa:ansible/ansible
          sudo apt install ansible -y

      - name: Set Execute command to bash script
        run: chmod +x ./.github/scripts/decrypt.sh

      # Runs a single command using the runners shell
      - name: Decrypt large secret
        run: ./.github/scripts/decrypt.sh
        env:
          SECRET_PASSPHRASE: ${{ secrets.SECRET_PASSPHRASE }}

      - name: Escalate Private Key Permissions
        run: chmod 400 ~/.privkey

      - name: Run ansible command
        run: |
          ansible-playbook -i ./inventory main.yml
        env:
          ANSIBLE_CONFIG: ./ansible.cfg

      - name: Clean Key
        run: rm -rf ~/.privkey

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The final result from Github actions:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmrturkmen.com%2Fassets%2Fimages%2Fansible_logs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmrturkmen.com%2Fassets%2Fimages%2Fansible_logs.png" alt="Secrets"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Keep in mind that this is just a minor portion of a long pipeline which has all unit tests, checks, linting and integration tests. Without proper pipeline in place, having Ansible might not be logical or required. Consider your cases when you would like to move to deployment with Ansible.&lt;/p&gt;

&lt;p&gt;Cheers !&lt;/p&gt;

</description>
      <category>blog</category>
      <category>ansible</category>
      <category>devops</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Download Youtube Playlists and Release through Github Actions [ CI/CD ] 🇬🇧</title>
      <dc:creator>Ahmet Turkmen</dc:creator>
      <pubDate>Tue, 24 Nov 2020 10:55:00 +0000</pubDate>
      <link>https://dev.to/mrturkmen/download-youtube-playlists-and-release-through-github-actions-ci-cd-5adn</link>
      <guid>https://dev.to/mrturkmen/download-youtube-playlists-and-release-through-github-actions-ci-cd-5adn</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmrturkmen.com%2Fassets%2Fimages%2Fyoutube_playlist_releases.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmrturkmen.com%2Fassets%2Fimages%2Fyoutube_playlist_releases.png" alt="Playlist Releaser"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sometimes the Youtube algorithm works perfectly, and sometimes it surfaces a ten-year-old video out of nowhere. For the moments when it suggests good videos or playlists, we might want to save the playlist and watch it some other time: on a plane, a train, a bus, wherever you plan to spend some time. However, pasting the URL of a playlist into your cute note-taking program might not be enough; there is a high chance it will be forgotten or missed. Therefore, I thought it would be nice to have an automated way of saving playlists somewhere and downloading them when I need them (- in particular, when there is no or limited internet connection -).&lt;/p&gt;

&lt;p&gt;In this blog post, I will go through a simple project which downloads all the videos in a playlist and generates separate &lt;code&gt;tar.gz&lt;/code&gt; files for each playlist, releasing them on Github using Github actions.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Requirements&lt;/li&gt;
&lt;li&gt;Available Tools&lt;/li&gt;
&lt;li&gt;Code Structure&lt;/li&gt;
&lt;li&gt;Workflow File&lt;/li&gt;
&lt;li&gt;Github Limitations&lt;/li&gt;
&lt;li&gt;Repository&lt;/li&gt;
&lt;li&gt;Demo&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;p&gt;Whenever starting a project, it is always nice to apply the &lt;a href="https://en.wikipedia.org/wiki/Divide-and-conquer_algorithm" rel="noopener noreferrer"&gt;divide and conquer&lt;/a&gt; approach if you know what you would like to achieve; it will hugely assist you during development and planning, no matter the size of the project. As a first step, let’s define what we need to accomplish such a thing.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A library/program which downloads Youtube videos from given URL.&lt;/li&gt;
&lt;li&gt;A library/program which compress the downloaded videos to minimize the size.&lt;/li&gt;
&lt;li&gt;A workflow on Github Actions to trigger releases.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With those components clarified, only one step is left: a main component which combines the requirements given above. For this purpose, I will use the Go programming language.&lt;/p&gt;

&lt;h2&gt;
  
  
  Available Tools
&lt;/h2&gt;

&lt;p&gt;Checking existing libraries, tools and open source projects for the first requirement turns up several on Github, namely:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/iawia002/annie" rel="noopener noreferrer"&gt;annie&lt;/a&gt;: 👾 Fast, simple and clean video downloader&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/rg3/youtube-dl" rel="noopener noreferrer"&gt;youtube-dl&lt;/a&gt;: Command-line program to download videos from YouTube.com and other video sites&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/soimort/you-get" rel="noopener noreferrer"&gt;you-get&lt;/a&gt;: Dumb downloader that scrapes the web&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/rylio/ytdl" rel="noopener noreferrer"&gt;ytdl&lt;/a&gt;: YouTube download library and CLI written in Go&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are tools which enable users to download Youtube videos by providing the URL or the ID (- some of them support other video platforms too, e.g. Vimeo -).&lt;/p&gt;

&lt;p&gt;To keep things simple, I chose &lt;a href="https://github.com/rg3/youtube-dl" rel="noopener noreferrer"&gt;youtube-dl&lt;/a&gt;, since it is more mature than the others and formats its output nicely according to a user-supplied output template.&lt;/p&gt;

&lt;p&gt;The next point is to clarify which tool/library to use for compressing the downloaded videos. With a little bit of googling, I found that &lt;code&gt;pigz&lt;/code&gt; is quite a nice tool which compresses a given folder/file in parallel, using all available cores of the machine. That clears the second requirement as well; now it is time to combine both of them and add a Github workflow on top.&lt;/p&gt;

&lt;p&gt;I will cover the Github workflow file after the structure of the program which automates the process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Code Structure
&lt;/h2&gt;

&lt;p&gt;To make things faster (- in terms of development time -), I decided to shell out to pre-existing binaries, which basically means having the tools (youtube-dl &amp;amp; pigz) pre-installed on the system before using this application.&lt;/p&gt;

&lt;p&gt;If the readme file of &lt;a href="https://github.com/ytdl-org/youtube-dl" rel="noopener noreferrer"&gt;youtube-dl&lt;/a&gt; is checked, youtube-dl can be installed as a command line tool into your environment, which means we can call it from our application whenever we need. There are other ways to accomplish this, such as implementing the functions inside our application instead of using a pre-existing binary. However, the main idea of this post is NOT how to create or use such a library; it is to show how easy it is to automate retrieving Youtube playlist videos and saving them to Github Releases. The compression requirement is handled the same way (- using a pre-existing command from the system -).&lt;/p&gt;

&lt;p&gt;To make things simple and extendable (- meaning that integrating more tools should not require changing many lines of code -), I will create a main &lt;code&gt;Client&lt;/code&gt; struct which has an &lt;code&gt;exec&lt;/code&gt; function that can be &lt;a href="https://en.wikipedia.org/wiki/Method_overriding" rel="noopener noreferrer"&gt;overridden&lt;/a&gt; according to the command we pass.&lt;/p&gt;

&lt;p&gt;The main client struct :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type Client struct {

    //youtube-dl client 
    YoutubeDL *YoutubeDL

     // Tar client 
    Tar *Tar

    // Used to enable root command
    sudo bool

    // flags to service
    flags []string

    // enable debug or not
    debug bool

    // Implementation of ExecFunc.
    execFunc ExecFunc

    // Implementation of PipeFunc.
    pipeFunc PipeFunc
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Client struct has fields which enable us to override behaviour whenever we want; it exposes the &lt;code&gt;exec(cmd string, args ...string) ([]byte, error)&lt;/code&gt;, &lt;code&gt;shellPipe(stdin io.Reader, cmd string, args ...string) ([]byte, error)&lt;/code&gt;, and &lt;code&gt;shellExec(cmd string, args ...string) ([]byte, error)&lt;/code&gt; functions, and can be extended according to future requirements. The functions are documented with comments in the source code.&lt;/p&gt;

&lt;p&gt;For the youtube-dl client, I implemented only one function (- the client is really easy to extend -), which downloads all videos in a given playlist using the pre-existing youtube-dl command line tool.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
package client

type YoutubeDL struct {
    c *Client
}

// exec executes an ExecFunc using 'youtube-dl'.
func (ytdl *YoutubeDL) exec(args ...string) ([]byte, error) {
    return ytdl.c.exec("youtube-dl", args...)
}

// DownloadWithOutputName generates Folder named with Playlist name
// downloads videos under given playlist url to Folder
func (ytdl *YoutubeDL) DownloadWithOutputName(folderName, url string) error {
    cmds := []string{"-o", folderName + "/%(playlist_index)s - %(title)s.%(ext)s", url}
    _, err := ytdl.exec(cmds...)
    return err
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Adding any other tool is just as practical; for &lt;code&gt;tar&lt;/code&gt; I implemented the following for a specific purpose (- compressing the downloaded videos in parallel -).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
package client

type Tar struct {
    c *Client
}

// exec executes an ExecFunc using 'tar' command.
func (tr *Tar) exec(args ...string) ([]byte, error) {
    return tr.c.exec("tar", args...)
}

// CompressWithPIGZ using tar with pigz compress program to compress given data
func (tr *Tar) CompressWithPIGZ(fileName, folderToCompress string) error {
    cmds := []string{"--use-compress-program=pigz", "-cf", fileName, folderToCompress}
    _, err := tr.exec(cmds...)
    if err != nil {
        return err
    }
    return nil
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now both statements in the requirements section are done. However, I wanted to keep track of what I have downloaded and released, so I created two csv files, &lt;code&gt;playlist-list.csv&lt;/code&gt; and &lt;code&gt;old-playlist-list.csv&lt;/code&gt;, under the resources/ directory in the repository. &lt;code&gt;playlist-list.csv&lt;/code&gt; holds the list of playlist URLs with the preferred folder name for each download. Furthermore, as you can guess, &lt;code&gt;old-playlist-list.csv&lt;/code&gt; holds all the playlists which have already been downloaded and released. Once a playlist is downloaded and released via Github actions, &lt;code&gt;playlist-list.csv&lt;/code&gt; is wiped and its content appended to &lt;code&gt;old-playlist-list.csv&lt;/code&gt;.&lt;/p&gt;
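&lt;p&gt;The wipe-and-append bookkeeping boils down to two commands; a sketch run here against throwaway files with an illustrative entry (the repository does the equivalent in Go):&lt;/p&gt;

```shell
# Append the just-released playlists to the history file, then empty
# the active list. File contents here are illustrative.
resources=$(mktemp -d)
printf 'GoTalks,https://www.youtube.com/playlist?list=EXAMPLE\n' > "$resources/playlist-list.csv"
touch "$resources/old-playlist-list.csv"

cat "$resources/playlist-list.csv" >> "$resources/old-playlist-list.csv"
truncate -s 0 "$resources/playlist-list.csv"
cat "$resources/old-playlist-list.csv"
```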

&lt;p&gt;This gives an easy way of checking what has been downloaded and released.&lt;/p&gt;

&lt;p&gt;The code for reading and writing the csv files is pretty simple and can be checked in &lt;a href="https://github.com/mrturkmencom/youtubeto/blob/master/main.go" rel="noopener noreferrer"&gt;main.go&lt;/a&gt; in the repository.&lt;/p&gt;

&lt;h2&gt;
  
  
  Workflow File
&lt;/h2&gt;

&lt;p&gt;The workflow file includes the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://github.com/mrturkmencom/youtubeto/blob/004330baeee663dd12a05c4b1aaa99bba5bb4f14/.github/workflows/releaseplaylists.yml#L24" rel="noopener noreferrer"&gt;Install pigz&lt;/a&gt; : required to compress data in parallel.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/mrturkmencom/youtubeto/blob/004330baeee663dd12a05c4b1aaa99bba5bb4f14/.github/workflows/releaseplaylists.yml#L29" rel="noopener noreferrer"&gt;Install youtube-dl&lt;/a&gt; : required to download playlist from given URL.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/mrturkmencom/youtubeto/blob/004330baeee663dd12a05c4b1aaa99bba5bb4f14/.github/workflows/releaseplaylists.yml#L34" rel="noopener noreferrer"&gt;Build Binary&lt;/a&gt; : required to have combined binary which handles both download and compress using pre-existing tools on the system.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/mrturkmencom/youtubeto/blob/004330baeee663dd12a05c4b1aaa99bba5bb4f14/.github/workflows/releaseplaylists.yml#L38" rel="noopener noreferrer"&gt;Create Release&lt;/a&gt; : the step which initializes releases.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/mrturkmencom/youtubeto/blob/004330baeee663dd12a05c4b1aaa99bba5bb4f14/.github/workflows/releaseplaylists.yml#L51" rel="noopener noreferrer"&gt;Run Binary&lt;/a&gt; : executes the program&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/mrturkmencom/youtubeto/blob/004330baeee663dd12a05c4b1aaa99bba5bb4f14/.github/workflows/releaseplaylists.yml#L55" rel="noopener noreferrer"&gt;Upload videos to Github releases&lt;/a&gt; : uploads downloaded content to releases.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/mrturkmencom/youtubeto/blob/004330baeee663dd12a05c4b1aaa99bba5bb4f14/.github/workflows/releaseplaylists.yml#L61" rel="noopener noreferrer"&gt;Remove playlist and append downloaded playlists to old list&lt;/a&gt; : updates the list inside the playlist file and commits on master branch.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each step above links to the corresponding place in the workflow file in the repository.&lt;/p&gt;

&lt;p&gt;This small project was created for a specific purpose and is very suitable for extending. However, it has some gaps, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;it does NOT check whether the given playlists have already been released.&lt;/li&gt;
&lt;li&gt;it does NOT split the created tar.gz files into 2 GB chunks (each file on Github releases must be under 2 GB, but there is NO limit on the overall size of a release).&lt;/li&gt;
&lt;li&gt;it does NOT have a proper error handling mechanism, and more.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are the points that appear when the project is checked at first glance; there are more missing pieces that could be addressed. However, the main aim was to give an idea of how to accomplish an automated way of downloading Youtube videos and releasing them with Github actions.&lt;/p&gt;

&lt;p&gt;I use it for my personal needs: whenever I find a useful playlist, I add it to the &lt;code&gt;playlist-list.csv&lt;/code&gt; file and push the change, tagging the commit in semantic versioning format.&lt;/p&gt;

&lt;p&gt;There are tons of other services which could be integrated, such as Slack, Discord, mail or any other notification system; however, to keep the post short and not bore you, this is enough for now.&lt;/p&gt;

&lt;p&gt;The trigger for the workflow can easily be changed: instead of running on tagged commits, it can run on a schedule by changing only the run condition, as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Download &amp;amp; Release Youtube Playlists

on:
  schedule:
    - cron: '0 0 * * *' # run the workflow every day at midnight

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you would like more features or fixes, or have suggestions, you are more than welcome to open issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  Github Limitations
&lt;/h2&gt;

&lt;p&gt;Since we are using Github Actions, its usage comes with some limitations.&lt;/p&gt;

&lt;p&gt;The limitations regarding file sizes in releases are described in the Github documentation here: &lt;a href="https://docs.github.com/en/free-pro-team@latest/github/managing-large-files/distributing-large-binaries" rel="noopener noreferrer"&gt;Distributing large binaries&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Basically, a file uploaded to a release must &lt;strong&gt;NOT&lt;/strong&gt; exceed 2 GB. However, keep in mind that this limit is per file; there is &lt;strong&gt;NO&lt;/strong&gt; limit on the overall size of the release :). The repository will eventually be updated to split files into chunks when they exceed 2 GB, so in the case of a 15 GB playlist, it would be uploaded to the release in 2 GB chunks. (- a feature that does NOT exist in &lt;strong&gt;youtubeto&lt;/strong&gt; yet -)&lt;/p&gt;

&lt;p&gt;There are some more limitations:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Job execution time&lt;/strong&gt; - Each job in a workflow can run for up to 6 hours of execution time. If a job reaches this limit, the job is terminated and fails to complete.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Workflow run time&lt;/strong&gt; - Each workflow run is limited to 72 hours. If a workflow run reaches this limit, the workflow run is cancelled.&lt;/p&gt;

&lt;p&gt;More details about the limitations of Github Actions: &lt;a href="https://docs.github.com/en/free-pro-team@latest/actions/reference/usage-limits-billing-and-administration" rel="noopener noreferrer"&gt;Usage Limits&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is good to keep the limitations above in mind.&lt;/p&gt;

&lt;p&gt;The job execution time and workflow run time limits are easy to get around if you have your own server: if you run Github Actions on your own server, those limits do not apply.&lt;/p&gt;

&lt;p&gt;Check out how to set up Github Actions on your own server here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.github.com/en/free-pro-team@latest/actions/hosting-your-own-runners/about-self-hosted-runners" rel="noopener noreferrer"&gt;Setup self hosted runners&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Repository
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/mrturkmencom/youtubeto" rel="noopener noreferrer"&gt;youtubeto&lt;/a&gt;: Automated Youtube PlayList Releaser&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=53ax_T7Q2p4" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimg.youtube.com%2Fvi%2F53ax_T7Q2p4%2F0.jpg" alt="youtubeto-demo"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>technology</category>
      <category>youtube</category>
      <category>github</category>
      <category>learning</category>
    </item>
    <item>
      <title>Latex with Github Actions🇬🇧</title>
      <dc:creator>Ahmet Turkmen</dc:creator>
      <pubDate>Sun, 25 Oct 2020 16:25:00 +0000</pubDate>
      <link>https://dev.to/mrturkmen/latex-with-github-actions-4580</link>
      <guid>https://dev.to/mrturkmen/latex-with-github-actions-4580</guid>
      <description>&lt;p&gt;In this post, I will be describing to setup a workflow to build and release your Latex files through Github actions. First of all, keep in mind that this post is not about what is Latex and how to use it.&lt;/p&gt;

&lt;p&gt;It is extremely nice to integrate daily development tools such as CI/CD into paper writing without any hassle, because it lets you track what has changed in a paper over time. When several people are responsible for different parts of a paper, they sometimes block each other, so such a workflow increases productivity for everyone in the group. Whenever a pull request is created against the main branch, it becomes easy for others to check for typos, logic errors and missing points.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Latex preparation&lt;/li&gt;
&lt;li&gt;Setup Github Actions&lt;/li&gt;
&lt;li&gt;Proof of Concept&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Latex preparation
&lt;/h2&gt;

&lt;p&gt;I am assuming that you have agreed on a Latex template for the paper. In that case, there is only a small step left: create a Github repository (-it must be on Github, since Github Actions will be used-) and push all files of your Latex template (-in general, in the following structure-).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    |-sections 
        | introduction.tex
        | related_works.tex
        | problem.tex
        | solution.tex
        | conclusion.tex
    |- main.tex
    |- references.bib

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The example structure above can be changed to your liking; the important part is having &lt;code&gt;main.tex&lt;/code&gt; in the root directory of the repository.&lt;/p&gt;
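
&lt;p&gt;A minimal sketch of such a &lt;code&gt;main.tex&lt;/code&gt; (assuming the section file names from the structure above) simply pulls the sections in with &lt;code&gt;\input&lt;/code&gt;:&lt;/p&gt;

```latex
\documentclass{article}
\begin{document}

% Pull in each section from the sections/ directory
\input{sections/introduction}
\input{sections/related_works}
\input{sections/problem}
\input{sections/solution}
\input{sections/conclusion}

% References from references.bib
\bibliographystyle{plain}
\bibliography{references}

\end{document}
```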

&lt;p&gt;Once that is set, only one step remains: setting up the Github Actions workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup Github Actions
&lt;/h2&gt;

&lt;p&gt;There are a few Github Actions on the marketplace for compiling a Latex document to PDF. The most popular one is &lt;a href="https://github.com/xu-cheng/latex-action"&gt;https://github.com/xu-cheng/latex-action&lt;/a&gt; and it is quite easy to integrate and use.&lt;/p&gt;

&lt;p&gt;It generates a PDF file from the provided Latex file and can be configured in a workflow file as shown below: (- Note that this workflow runs on tagged commits whose tag matches the &lt;code&gt;*.*.*&lt;/code&gt; pattern -)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Build LaTeX document
on:
  push:
    tags:
      - '*.*.*' # semantic versioning
jobs:
  build_latex:
    runs-on: ubuntu-latest
    steps:
      - name: Set up Git repository
        uses: actions/checkout@v2
      - name: Compile LaTeX document
        uses: xu-cheng/latex-action@v2
        with:
          root_file: main.tex

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, this job alone is not sufficient for a complete workflow; we need two more steps, &lt;strong&gt;Create Release&lt;/strong&gt; and &lt;strong&gt;Upload Release&lt;/strong&gt;. As their names suggest, the first one creates the release and the second one uploads the provided file to the releases page. They can be set up as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
name: Release Compiled PDF 
on:
  push:
    tags:
      - '*.*.*'

jobs:
  build_latex:
    runs-on: ubuntu-latest
    steps:
      - name: Set up Git repository
        uses: actions/checkout@v2
      - name: Compile LaTeX document
        uses: xu-cheng/latex-action@v2
        with:
          root_file: main.tex

      - name: Create Release
        id: create_release
        uses: actions/create-release@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          tag_name: ${{ github.ref }}
          release_name: Release ${{ github.ref }}
          draft: false
          prerelease: false

      - name: Upload Release Asset
        id: upload-release-asset 
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ steps.create_release.outputs.upload_url }}
          asset_path: ./main.pdf
          asset_name: main.pdf
          asset_content_type: application/pdf

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The workflow above is the completed version of what you should end up with. In summary, it builds the PDF from the provided Latex file, creates a release and uploads the file to it. For more details, check each action's page.&lt;/p&gt;

&lt;h2&gt;
  
  
  Proof of Concept
&lt;/h2&gt;

&lt;p&gt;Here is an example repository with the completed version.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/mrturkmencom/latex-on-ci-cd"&gt;https://github.com/mrturkmencom/latex-on-ci-cd&lt;/a&gt;&lt;/p&gt;

</description>
      <category>github</category>
      <category>latex</category>
      <category>career</category>
      <category>cicd</category>
    </item>
    <item>
      <title>cool-kubernetes 🇬🇧</title>
      <dc:creator>Ahmet Turkmen</dc:creator>
      <pubDate>Wed, 19 Aug 2020 22:37:00 +0000</pubDate>
      <link>https://dev.to/mrturkmen/cool-kubernetes-4jh8</link>
      <guid>https://dev.to/mrturkmen/cool-kubernetes-4jh8</guid>
      <description>&lt;h1&gt;
  
  
  Cool Kubernetes 🤓
&lt;/h1&gt;

&lt;p&gt;It is a place that contains valuable resources about Kubernetes. It is somewhat similar to the awesome-list projects on Github; the logic is the same, but the approach is different: instead of a plain readme.md file like &lt;a href="https://github.com/irazasyed/awesome-cloudflare#readme"&gt;here&lt;/a&gt;, I generate and publish everything with Github Pages. Another important difference is that this project does not only link to external resources; it also collects valuable information about Kubernetes from a variety of sources. For instance, if someone taking a Kubernetes-related course finds a figure, graph or piece of information valuable, it can be added to this resource hub. Many of you might say “Well, you are duplicating existing information out there”. That is correct; however, having most of the valuable information in one familiar place saves a lot of time for anyone who needs help.&lt;/p&gt;

&lt;p&gt;For the time being, the site does not contain many resources, but over time it will grow to include troubleshooting techniques and approaches for Kubernetes.&lt;/p&gt;

&lt;p&gt;The project is completely open source, like any other awesome-list project on Github, which means it is open to contributions as well. If you would like to add or improve something, or raise an issue with this resource hub: perfect, make your changes and create a pull request, or drop an issue 😉&lt;/p&gt;

&lt;p&gt;I do not know how persistent it will be, but I will try to update it occasionally.&lt;/p&gt;

&lt;p&gt;It is a completely open source project residing on the &lt;a href="https://mrturkmen.com/kubernetes"&gt;cool-kubernetes&lt;/a&gt; website; any contribution is valuable.&lt;/p&gt;

&lt;p&gt;Cheers !&lt;/p&gt;

</description>
      <category>learning</category>
      <category>kubernetes</category>
      <category>docker</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Universal gRPC client demonstration [Evans] 🇬🇧</title>
      <dc:creator>Ahmet Turkmen</dc:creator>
      <pubDate>Sat, 08 Aug 2020 09:19:00 +0000</pubDate>
      <link>https://dev.to/mrturkmen/universal-grpc-client-demonstration-evans-bij</link>
      <guid>https://dev.to/mrturkmen/universal-grpc-client-demonstration-evans-bij</guid>
      <description>&lt;p&gt;In this post, I am going to write demo for a tool which I have just met, it is called &lt;strong&gt;Evans&lt;/strong&gt;. It is basically universal gRPC client. What it means ? Basically when you have gRPC server and would like to test gRPC calls without creating client, you can test server side calls with &lt;strong&gt;Evans&lt;/strong&gt;. It is known that gRPC is very common communication method between microservices, it can be used for internal and external communication. I do not have intention to explain what gRPC is in this post since it is not the purpose. If required &lt;a href="https://grpc.io/docs/what-is-grpc/introduction/"&gt;documentation of gRPC&lt;/a&gt; can be investigated.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
Create a simple gRPC server

&lt;ul&gt;
&lt;li&gt;Sample repository&lt;/li&gt;
&lt;li&gt;Defining calls&lt;/li&gt;
&lt;li&gt;Compile Proto file&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Run gRPC server&lt;/li&gt;
&lt;li&gt;Demonstration of EVANS&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Create a simple gRPC server
&lt;/h1&gt;

&lt;p&gt;To demonstrate how &lt;strong&gt;evans&lt;/strong&gt; works, a running gRPC server must exist, so I am going to provide a sample one. For this purpose, I will use the Go programming language, although gRPC supports many more &lt;a href="https://grpc.io/docs/languages/"&gt;programming languages&lt;/a&gt; with which you may be more familiar than Go.&lt;/p&gt;

&lt;p&gt;A gRPC server is basically an API endpoint where clients can make requests; since it is an API, the first step is to define which methods the service will expose.&lt;/p&gt;

&lt;p&gt;For simplicity and for the purpose of this post, I have created a basic microservice with four calls: Add, Delete, List and Find. Since the goal is to understand how &lt;strong&gt;evans&lt;/strong&gt; works, the gRPC server does not need to be complex or include lots of calls.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sample repository
&lt;/h2&gt;

&lt;p&gt;A sample microservice was created for demonstration purposes and all the code is available here: &lt;a href="https://github.com/mrturkmencom/BookShelf"&gt;https://github.com/mrturkmencom/BookShelf&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you already have a running gRPC server, you can skip directly to the demonstration of evans; if not, you can clone the &lt;a href="https://github.com/mrturkmencom/BookShelf"&gt;https://github.com/mrturkmencom/BookShelf&lt;/a&gt; repository and try &lt;strong&gt;evans&lt;/strong&gt; out.&lt;/p&gt;

&lt;h2&gt;
  
  
  Defining calls
&lt;/h2&gt;

&lt;p&gt;In my opinion, it is always nice to prepare the &lt;em&gt;proto&lt;/em&gt; file beforehand, because it is the contract that defines your service. Imagine you would like to add, delete, list and find the books you have read or wish to read. For that, the microservice needs at least four calls: Add, Delete, List and Find. There is no limit on adding more calls, but to stay on topic I am keeping it small.&lt;/p&gt;

&lt;p&gt;The following proto file is enough for the BookShelf service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

// it is important to declare the syntax version
syntax = "proto3";

service BookShelf {
    rpc AddBook (AddBookRequest) returns (AddBookResponse) {}
    rpc ListBook (ListBooksRequest) returns (ListBooksResponse) {}
    rpc DelBook (DelBookRequest) returns (DelBookResponse) {}
    rpc FindBook (FindBookRequest) returns (FindBookResponse) {}
}

message AddBookRequest {
    BookInfo book = 1;
    message BookInfo {
        string isbn = 1;
        string name = 2;
        string author = 3;
        string addedBy = 4;
    }
}

message AddBookResponse {
    string message = 1;
}

message ListBooksRequest {
    // no fields needed for now
    // could be extended to list books by category ...
}

message ListBooksResponse {
    repeated BookInfo books = 1;
    message BookInfo {
        string isbn = 1;
        string name = 2;
        string author = 3;
        string addedBy = 4;
    }
}

message DelBookRequest {
    string isbn = 1;
}

message DelBookResponse {
    string message = 1;
}

message FindBookRequest {
    string isbn = 1;
}

message FindBookResponse {
    Book book = 1;
    message Book {
        string isbn = 1;
        string name = 2;
        string author = 3;
        string addedBy = 4;
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the &lt;strong&gt;proto&lt;/strong&gt; file is declared, it becomes much easier to continue. For demonstration purposes, I will store book information in memory; as you know, that is NOT acceptable for any production-level application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Compile Proto file
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;proto&lt;/strong&gt; files are great because, once you have defined what you need, you can directly generate code in any of the gRPC &lt;a href="https://grpc.io/docs/languages/"&gt;supported languages&lt;/a&gt;. Generating code in your desired language is pretty straightforward; I am going to generate the code for Go.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ protoc -I proto/ proto/bs.proto --go_out=plugins=grpc:proto 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It generates ready-to-use Go source code for the rpc calls defined in the proto file.&lt;/p&gt;

&lt;p&gt;Afterwards, the remaining pieces of code should be implemented: the in-memory store and the book struct. For the purposes of this post, I assume you have implemented the rest of the code as given in the example gRPC server (-BookShelf-).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: Generating source code with &lt;code&gt;protoc&lt;/code&gt; requires the &lt;code&gt;protoc&lt;/code&gt; tool to be installed beforehand. Installation instructions are &lt;a href="https://developers.google.com/protocol-buffers/docs/downloads"&gt;over here&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Run gRPC server
&lt;/h1&gt;

&lt;p&gt;This post does not cover all aspects of gRPC, protocol buffers or the Go language, as those are not its intention. I am therefore assuming that you have a gRPC server and would like to verify that your proto contract works correctly without writing client-side code first. Once your gRPC calls are confirmed to run without any unseen problems, writing the client-side code becomes much easier.&lt;/p&gt;

&lt;p&gt;You can start gRPC server with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; $ go run server/main.go
 BookShelf gRPC server is running ....

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the gRPC server is up and running, you can use the &lt;strong&gt;evans&lt;/strong&gt; tool to inspect its available calls.&lt;/p&gt;

&lt;h1&gt;
  
  
  Demonstration of EVANS
&lt;/h1&gt;

&lt;p&gt;Evans is an open source project available &lt;a href="https://github.com/ktr0731/evans"&gt;at Github&lt;/a&gt;. I found it pretty useful, in particular for people who have no idea what kind of calls are available in a proto file; as its description states, it is a universal gRPC client. Installation instructions and more information are given in its &lt;a href="https://github.com/ktr0731/evans/blob/master/README.md"&gt;readme file&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It has plenty of handy features for automating and testing things on top of an existing or new gRPC server.&lt;/p&gt;

&lt;p&gt;Let’s make a demo. When using &lt;strong&gt;evans&lt;/strong&gt;, your gRPC server must be up and running so that the universal gRPC client - &lt;strong&gt;evans&lt;/strong&gt; - can communicate with it. I assume you have followed the &lt;a href="https://github.com/ktr0731/evans/blob/master/README.md"&gt;readme file&lt;/a&gt; of &lt;strong&gt;evans&lt;/strong&gt; and installed it correctly.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note that port 9000 is given because it is the port of the gRPC server.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ evans -r -p 9000

  ______
 | ____ |
 | | ____  ____ _ _ _____
 | __| \ \ / / / _. | | '_ \ /__ |
 | |____\ V / | (_| | | | | | \__ \
 | ______| \_/ \__ ,_| |_| |_| |___/

 more expressive universal gRPC client

BookShelf@127.0.0.1:9000&amp;gt; show services 

+-----------+----------+------------------+-------------------+
|  SERVICE  |   RPC    |   REQUEST TYPE   |   RESPONSE TYPE   |
+-----------+----------+------------------+-------------------+
| BookShelf | AddBook  | AddBookRequest   | AddBookResponse   |
| BookShelf | ListBook | ListBooksRequest | ListBooksResponse |
| BookShelf | DelBook  | DelBookRequest   | DelBookResponse   |
| BookShelf | FindBook | FindBookRequest  | FindBookResponse  |
+-----------+----------+------------------+-------------------+

BookShelf@127.0.0.1:9000&amp;gt; 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can observe, all calls defined in the proto file can be used through &lt;strong&gt;evans&lt;/strong&gt;, and its usage is pretty straightforward.&lt;/p&gt;

&lt;p&gt;You can check demonstration video below.&lt;/p&gt;

&lt;p&gt;⚠️ &lt;strong&gt;&lt;em&gt;You may wish to change the video quality to 1080p60&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://youtu.be/GnAUkPUXYCs"&gt;https://youtu.be/GnAUkPUXYCs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have any questions, fixes or anything else, do not hesitate to contact me.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>grpc</category>
      <category>go</category>
      <category>beginners</category>
    </item>
    <item>
      <title>NGINX Ingress Controller with HAProxy for k8s cluster 🇬🇧</title>
      <dc:creator>Ahmet Turkmen</dc:creator>
      <pubDate>Fri, 10 Jul 2020 16:35:00 +0000</pubDate>
      <link>https://dev.to/mrturkmen/nginx-ingress-controller-with-haproxy-for-k8s-cluster-52e1</link>
      <guid>https://dev.to/mrturkmen/nginx-ingress-controller-with-haproxy-for-k8s-cluster-52e1</guid>
      <description>&lt;p&gt;In recent post, which is &lt;a href="https://mrturkmen.com/install-ha-kubernetes-cluster/"&gt;Setup Highly Available Kubernetes Cluster with HAProxy 🇬🇧&lt;/a&gt;, a highly available Kubernetes cluster is created. However, once I started to dig in and deploy some stuff to cluster, I realized that I am not able to connect any deployed application or services. For instance, when an web application is deployed using HAProxy load balancer (endpoint), and check from &lt;code&gt;kubectl&lt;/code&gt; (on client side), its status is running. However, that application could not be reached from outside world although I re-patch an external IP address by following command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; $ kubectl patch svc &amp;lt;application-name&amp;gt; -n &amp;lt;name-of-namespace&amp;gt; -p '{"spec": {"type": "LoadBalancer", "externalIPs":["&amp;lt;haproxy-ip-address&amp;gt;"]}}' 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After some searching and reading, I realized that the worker nodes require their own ingress controllers in order to forward traffic between them under load. I will give more details about how I fixed the issue, but first let’s cover some basic terms and general information about ingress controllers.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What is ingress controller ?&lt;/li&gt;
&lt;li&gt;
Updates to cluster

&lt;ul&gt;
&lt;li&gt;Setup NGINX Ingress Controller&lt;/li&gt;
&lt;li&gt;Steps to create NGINX Ingress controller&lt;/li&gt;
&lt;li&gt;Deploy Example Application&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  What is ingress controller ?
&lt;/h1&gt;

&lt;p&gt;The best and simple explanation to this question is coming from Kubernetes official documentation over &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress/"&gt;here&lt;/a&gt;, as they are expressing that ;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.&lt;/p&gt;

&lt;p&gt;An &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers"&gt;Ingress controller&lt;/a&gt; is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontend to help handle the traffic.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Whenever you have services running inside a cluster and would like to access them, you need to set up an ingress controller for that cluster. The missing part was that there was no ingress controller on the worker nodes in my k8s cluster. Everything was working, yet nothing was reachable from the outside world; that is why an ingress controller needs a place in the cluster architecture.&lt;/p&gt;

&lt;p&gt;In this post, I will go with the &lt;a href="https://kubernetes.github.io/ingress-nginx/"&gt;NGINX ingress controller&lt;/a&gt; in its default setup, although there are plenty of other &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/#additional-controllers"&gt;ingress controllers&lt;/a&gt; you could choose. I might switch from NGINX to Traefik in the future depending on requirements, but for now NGINX it is: it is super easy to set up, rich with features, covered by the official Kubernetes documentation and fulfills what I currently expect.&lt;/p&gt;

&lt;h1&gt;
  
  
  Updates to cluster
&lt;/h1&gt;

&lt;p&gt;Let’s briefly recap what I explained in the previous post:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create VMs&lt;/li&gt;
&lt;li&gt;Setup SSH connection&lt;/li&gt;
&lt;li&gt;Use KubeSpray to deploy cluster&lt;/li&gt;
&lt;li&gt;Create HAProxy and establish SSH connection with all nodes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I noticed that when deploying the cluster, some add-ons must be enabled in order to use an ingress controller with the external HAProxy load balancer. Since the cluster deployment was done with Ansible playbooks, there is no need to set everything up from scratch: the modified configuration can be re-deployed without affecting any existing resources in the cluster. In other words, I can enable the required parts in the configuration file and re-deploy the cluster as I did in the previous post.&lt;/p&gt;


&lt;p&gt;&lt;strong&gt;Enable ingress controller from &lt;a href="https://github.com/kubernetes-sigs/kubespray/blob/master/inventory/sample/group_vars/k8s-cluster/addons.yml"&gt;inventory&lt;/a&gt; file inside KubeSpray&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; $ vim inventory/mycluster/group_vars/k8s-cluster/addons.yml
 # Nginx ingress controller deployment
 ingress_nginx_enabled: false -&amp;gt; true

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once this part of the existing KubeSpray configuration is updated, the k8s cluster should be redeployed with the same command as in the &lt;a href="https://mrturkmen.com/install-ha-kubernetes-cluster/"&gt;previous post&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Assumption&lt;/em&gt;: the previously configured KubeSpray settings are used.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; $ ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It will take a while and update all the necessary parts.&lt;/p&gt;


&lt;p&gt;&lt;strong&gt;Include Ingress API object to route traffic from external HAProxy server to internal services&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To include the Ingress API object, the HAProxy configuration file should be modified; the following lines should be added to the &lt;code&gt;/etc/haproxy/haproxy.cfg&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  $ vim /etc/haproxy/haproxy.cfg

      frontend kubernetes-ingress-http
          bind *:80
          default_backend kubernetes-worker-nodes-http

      backend kubernetes-worker-nodes-http
          balance leastconn
          option tcp-check
          server worker1 10.0.128.81:80 check fall 3 rise 2
          server worker2 10.0.128.137:80 check fall 3 rise 2
          server worker3 10.0.128.156:80 check fall 3 rise 2

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the given configuration the balancing algorithm is &lt;code&gt;leastconn&lt;/code&gt;, which can be changed to any load balancing algorithm supported by HAProxy; &lt;code&gt;leastconn&lt;/code&gt; simply fits best what I would like to achieve. Note that this configuration is added on top of the part added in the &lt;a href="https://mrturkmen.com/install-ha-kubernetes-cluster/"&gt;previous post&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once the HAProxy configuration is updated, it can be validated with &lt;code&gt;haproxy -c -f /etc/haproxy/haproxy.cfg&lt;/code&gt; and HAProxy restarted with &lt;code&gt;systemctl restart haproxy&lt;/code&gt;. That is all for HAProxy; now let’s dive into setting up the NGINX Ingress Controller.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup NGINX Ingress Controller
&lt;/h2&gt;

&lt;p&gt;It is super simple to deploy and set up the NGINX ingress controller, since it is well documented and the required parts are explained in detail. To set it up, I will follow the official guideline at &lt;a href="https://docs.nginx.com/nginx-ingress-controller/installation/"&gt;NGINX Ingress Controller Installation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AkWKl4Au--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://mrturkmen.com/assets/images/kubernetes/nginx-ingress-controller.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AkWKl4Au--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://mrturkmen.com/assets/images/kubernetes/nginx-ingress-controller.png" alt="NGINX INGRESS CONTROLLER" width="880" height="460"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Image is taken from (&lt;a href="https://www.nginx.com/products/nginx/kubernetes-ingress-controller/#resources"&gt;https://www.nginx.com/products/nginx/kubernetes-ingress-controller/#resources&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In the usual case, the situation is as in the figure above. However, since the existing k8s cluster uses HAProxy to communicate with clients, I need the NGINX ingress controller inside the worker nodes, where it will manage the running applications/services by communicating with HAProxy; eventually, the services become accessible from the outside world.&lt;/p&gt;

&lt;p&gt;The figure below summarizes how the overview diagram looks in my case.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gEH6s2PW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://mrturkmen.com/assets/images/kubernetes/overview-ingress-controller.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gEH6s2PW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://mrturkmen.com/assets/images/kubernetes/overview-ingress-controller.png" alt="INGRESS CONTOLLERS OVERVIEW" width="880" height="622"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As can be observed in the given k8s cluster overview, HAProxy sits in front: it communicates with clients and then forwards each request based on the rules defined in the HAProxy configuration. Each worker node runs an NGINX ingress controller, which means that whenever a request reaches the cluster, the worker nodes coordinate among themselves and respond to the user without any problem, since the NGINX ingress controller is also capable of load balancing across the worker nodes.&lt;/p&gt;

&lt;p&gt;There is also an Ingress Resource Rules part inside the cluster: it holds the routing rules that forward a request to a given service based on its path; an example is given below.&lt;/p&gt;
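&lt;p&gt;&lt;em&gt;As a rough, non-Kubernetes illustration (using hypothetical service names that also appear later in this post), path-based routing behaves like this shell sketch: the more specific path wins, and &lt;code&gt;/&lt;/code&gt; acts as the fallback.&lt;/em&gt;&lt;/p&gt;

```shell
# Toy sketch only: mimics how Ingress resource rules map a request path
# to a backend service. Service names are placeholders from this post.
route() {
  case "$1" in
    /apache*) echo "apache-deployment" ;;
    /*)       echo "nginx-deployment" ;;  # the catch-all rule for path /
  esac
}

route /apache/index.html   # prints apache-deployment
route /some/other/page     # prints nginx-deployment
```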

&lt;h3&gt;
  
  
  Steps to create NGINX Ingress controller
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;All steps shown below for the installation of the NGINX Ingress Controller are taken from &lt;a href="https://docs.nginx.com/nginx-ingress-controller/installation/"&gt;https://docs.nginx.com/nginx-ingress-controller/installation/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Make sure that you are a client with administrator privileges; all steps related to the NGINX ingress controller should be done through &lt;code&gt;kubectl&lt;/code&gt; (on a client computer/server).&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Clone Ingress Controller Repo&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git clone https://github.com/nginxinc/kubernetes-ingress/
$ cd kubernetes-ingress

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Create a namespace and a service account for the Ingress controller&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f common/ns-and-sa.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Create a cluster role and cluster role binding for the service account&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f rbac/rbac.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Create a secret with a TLS certificate and a key for the default server in NGINX&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f common/default-server-secret.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Create a config map for customizing NGINX configuration:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f common/nginx-config.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Afterwards, there are two different ways to run the NGINX ingress controller: as a DaemonSet or as a Deployment. The main difference between them is summarized on the official &lt;a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/"&gt;installation page&lt;/a&gt; as:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Use a Deployment. When you run the Ingress Controller by using a Deployment, by default, Kubernetes will create one Ingress controller pod.&lt;/p&gt;

&lt;p&gt;Use a DaemonSet: When you run the Ingress Controller by using a DaemonSet, Kubernetes will create an Ingress controller pod on every node of the cluster.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I will go with the DaemonSet approach: HAProxy must be able to reach an ingress controller on every worker node, and a DaemonSet guarantees that exactly one controller pod runs on each node.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f daemon-set/nginx-ingress.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once it is applied as a DaemonSet, the result can be checked with the following command; the output will look similar to the one below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get all 
NAME READY STATUS RESTARTS AGE
pod/nginx-ingress-47z8r 1/1 Running 0 24h
pod/nginx-ingress-cmkfq 1/1 Running 0 24h
pod/nginx-ingress-ft5pv 1/1 Running 0 24h
pod/nginx-ingress-q554l 1/1 Running 0 24h
pod/nginx-ingress-ssdrj 1/1 Running 0 24h
pod/nginx-ingress-t9jml 1/1 Running 0 24h

NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/nginx-ingress 6 6 6 6 6 &amp;lt;none&amp;gt; 24h

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploy Example Application
&lt;/h2&gt;

&lt;p&gt;To test how an application is exposed externally from the k8s cluster, an example application can be deployed as shown below. Note that this is the simplest example for this context, so keep in mind that more complex applications may require more configuration and a more detailed approach than described here.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Create a sample NGINX Web Server&lt;/strong&gt; (Using provided example)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;nginx-deploy-main.yml&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Taken from &lt;a href="https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/"&gt;https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In the YAML deployment file above, two replicas of &lt;code&gt;nginx:1.14.2&lt;/code&gt; will be deployed to the cluster under the name &lt;code&gt;nginx-deployment&lt;/code&gt;. The YAML is largely self-explanatory.&lt;/p&gt;

&lt;p&gt;It can be deployed either directly from the official link or from a local file, depending on your preference.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f https://k8s.io/examples/application/deployment.yaml

## or you can do same thing with local file as given below
$ kubectl apply -f nginx-deploy-main.yml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
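&lt;p&gt;&lt;em&gt;Optionally, the rollout can be verified before exposing it (these commands run against your cluster, so the output will differ):&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get deployment nginx-deployment
$ kubectl get pods -l app=nginx

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;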



&lt;p&gt;Expose deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl expose deploy nginx-deployment --port 80

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
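&lt;p&gt;&lt;em&gt;For reference, the &lt;code&gt;kubectl expose&lt;/code&gt; command above creates a Service roughly equivalent to this manifest (a sketch: it reuses the deployment's &lt;code&gt;app: nginx&lt;/code&gt; selector and defaults to type ClusterIP):&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;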



&lt;p&gt;Once it is deployed to the cluster and exposed, one step is left for this simple example: creating an ingress rule (resource) in a YAML file by specifying the kind as &lt;code&gt;Ingress&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: &amp;lt;dns-record&amp;gt; # a domain like test.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-deployment
          servicePort: 80

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The crucial parts are &lt;code&gt;serviceName&lt;/code&gt; and &lt;code&gt;servicePort&lt;/code&gt;, which point to the specification of the service within the cluster. The YAML can be expanded as shown below: assuming you have a wildcard record on your DNS server and multiple services running on the same port in the cluster, the file can be redefined as follows.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;nginx-ingress-resource.yml&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-controller
spec:
  rules:
  - host: &amp;lt;dns-record&amp;gt; # a domain like test.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-deployment
          servicePort: 80
      - path: /apache
        backend:
          serviceName: apache-deployment
          servicePort: 80
      - path: /native-web-server
        backend:
          serviceName: native-web-server-deployment
          servicePort: 80

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Keep in mind that all the given services should be deployed beforehand; otherwise, a request made to a path whose service is not deployed may return either a 404 or a 500. There are plenty of different options for defining and updating the components in a k8s cluster, so all YAML files should be adjusted according to your requirements.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Create ingress controller rules from provided yaml file&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create -f nginx-ingress-resource.yml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now the NGINX web server deployment is reachable on the DNS record given in the YAML file, and depending on the request path, different services running inside the Kubernetes cluster can be called.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--At7h-wZ---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://mrturkmen.com/assets/images/kubernetes/nginx-web-server.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--At7h-wZ---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://mrturkmen.com/assets/images/kubernetes/nginx-web-server.png" alt="NGINX WEB SERVER DEPLOYMENT RESULT" width="880" height="316"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note that the provided YAML files are just a simple example of deploying an NGINX web server without any certificates; when certificates (HTTPS) are enabled, or for any other type of deployment, different configurations must be applied.&lt;/p&gt;
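&lt;p&gt;&lt;em&gt;For instance, serving the same rule over HTTPS would add a &lt;code&gt;tls&lt;/code&gt; section along these lines (a sketch: the host is a placeholder and &lt;code&gt;mydomain-tls&lt;/code&gt; is an assumed, pre-created Secret of type &lt;code&gt;kubernetes.io/tls&lt;/code&gt;):&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-tls
spec:
  tls:
  - hosts:
    - test.mydomain.com
    secretName: mydomain-tls # created beforehand from the certificate and key
  rules:
  - host: test.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-deployment
          servicePort: 80

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;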

&lt;p&gt;When everything goes without a problem, you will have a cluster that uses the NGINX ingress controller for internal cluster routing and HAProxy as the communication endpoint for clients. Keep in mind that whenever a new service or deployment takes place, the required configuration must be enabled in the HAProxy configuration, as was done for the port-80 applications above. Different services have different requirements, so it is important to grasp the main logic of the setup. That is all for this post.&lt;/p&gt;

&lt;p&gt;Cheers !&lt;/p&gt;

</description>
      <category>blog</category>
      <category>kubernetes</category>
      <category>docker</category>
      <category>pod</category>
    </item>
    <item>
      <title>Setup Highly Available Kubernetes Cluster with HAProxy 🇬🇧</title>
      <dc:creator>Ahmet Turkmen</dc:creator>
      <pubDate>Sun, 05 Jul 2020 17:43:00 +0000</pubDate>
      <link>https://dev.to/mrturkmen/setup-highly-available-kubernetes-cluster-with-haproxy-2dm8</link>
      <guid>https://dev.to/mrturkmen/setup-highly-available-kubernetes-cluster-with-haproxy-2dm8</guid>
      <description>&lt;p&gt;The main purpose of this blog post a simple walkthrough of setting up Kubernetes cluster with external &lt;a href="http://www.haproxy.org/" rel="noopener noreferrer"&gt;HAProxy&lt;/a&gt; which will be the endpoint where our &lt;code&gt;kubectl&lt;/code&gt; client communicates over. Node specifications for this setup is given as shown in the table below. Keep in mind that all of them has access to each other with password and without password. The environment which Kubernetes cluster will stay is running on OpenStack. It means that once a configuration (ssh keys, hosts, and etc) is done for example master 1 then all other nodes could be initialized through snapshot of master 1. To be able to setup such a Kubernetes cluster easily, I will be using &lt;a href="https://github.com/kubernetes-sigs/kubespray" rel="noopener noreferrer"&gt;KubeSpray&lt;/a&gt; which is a repository where it has all required configuration and playbooks for setting up necessary cluster.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Node Specification&lt;/li&gt;
&lt;li&gt;Prerequisites&lt;/li&gt;
&lt;li&gt;General Overview&lt;/li&gt;
&lt;li&gt;KubeSpray Configuration&lt;/li&gt;
&lt;li&gt;External Load Balancer Setup (HAProxy)&lt;/li&gt;
&lt;li&gt;Setup KubeSpray Configuration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The intention of this walkthrough is setting up your own Kubernetes cluster on your own servers; this post is not very useful for people who are already using cloud-provider solutions (Kubernetes cluster as a service). You can check out a few of the resources listed below:&lt;/p&gt;

&lt;p&gt;Cloud Providers Solutions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://azure.microsoft.com/en-us/services/kubernetes-service/" rel="noopener noreferrer"&gt;Azure Kubernetes Service - AKS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cloud.google.com/kubernetes-engine" rel="noopener noreferrer"&gt;Google Kubernetes Engine - GKE&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.digitalocean.com/products/kubernetes/" rel="noopener noreferrer"&gt;Managed Kubernetes on DigitalOcean&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/kubernetes/" rel="noopener noreferrer"&gt;Kubernetes on AWS&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Node Specification
&lt;/h1&gt;

&lt;p&gt;The Kubernetes cluster will be set up on the nodes in the table below; note that HAProxy runs on a separate node, and all Ansible playbooks and the Kubernetes cluster setup will be managed through it. Keep in mind that all nodes plus HAProxy are on the same internal subnet, which means we need only one external IP address, used by HAProxy and reached by &lt;code&gt;kubectl&lt;/code&gt; clients. All instances run Ubuntu 18.04, so the instructions and steps may not work on another system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fvb74by6zk4ewl0m1n4t1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fvb74by6zk4ewl0m1n4t1.png" alt="Alt Text" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Prerequisites
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Nodes&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/kubernetes-sigs/kubespray/blob/master/requirements.txt" rel="noopener noreferrer"&gt;Requirements&lt;/a&gt; of KubeSpray&lt;/li&gt;
&lt;li&gt;Setting up SSH Key Across Nodes&lt;/li&gt;
&lt;li&gt;Getting a snapshot (optional)&lt;/li&gt;
&lt;li&gt;Setting up login with password&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  General Overview
&lt;/h1&gt;

&lt;p&gt;The following sketch is a general overview of what the Kubernetes cluster will look like at the end of this walkthrough; the figure is a highly simplified version of the cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmrturkmencom.github.io%2Fassets%2Fimages%2Fkubernetes%2Foverview-kube-cluster.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmrturkmencom.github.io%2Fassets%2Fimages%2Fkubernetes%2Foverview-kube-cluster.png" alt="General Overview Cluster" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the figure above, the nodes do not have any external IP address; all of them, including HAProxy, are on the same subnet, and only HAProxy has an external IP address, which is reachable by &lt;code&gt;kubectl&lt;/code&gt; clients.&lt;/p&gt;

&lt;p&gt;Before moving to the installation step of the Kubernetes cluster, we need to set up a sample master node (instance) with a predefined configuration. Since we will have only one server open to the outside world, we need to make sure that there is a connection between HAProxy and the sample master node. I call it a sample master node because preliminary configurations such as password authentication, a disabled swap area, and SSH keys will all be configured on it. This sample master node should be started and accessible through HAProxy, which means that in order to access it, I should do the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SSH to HAProxy using SSH key ( &lt;strong&gt;Password Login disabled&lt;/strong&gt; ) like &lt;code&gt;ssh -i ~/.ssh/id_rsa &amp;lt;username&amp;gt;@&amp;lt;ha-proxy-external-ip&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Copy the SSH key that lets you into the sample master node to HAProxy&lt;/li&gt;
&lt;li&gt;Then SSH to the sample master node with the same approach (&lt;code&gt;ssh -i ~/.ssh/masternode.pem &amp;lt;username&amp;gt;@&amp;lt;master-node-ip&amp;gt;&lt;/code&gt;)
&lt;/li&gt;
&lt;/ul&gt;
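&lt;p&gt;&lt;em&gt;The two SSH hops above can optionally be collapsed into a single command with an entry in &lt;code&gt;~/.ssh/config&lt;/code&gt; (host aliases and key paths here are placeholders; &lt;code&gt;ProxyJump&lt;/code&gt; requires OpenSSH 7.3+):&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Host haproxy
    HostName &amp;lt;ha-proxy-external-ip&amp;gt;
    User &amp;lt;username&amp;gt;
    IdentityFile ~/.ssh/id_rsa

Host sample-master
    HostName &amp;lt;master-node-internal-ip&amp;gt;
    User &amp;lt;username&amp;gt;
    IdentityFile ~/.ssh/masternode.pem
    ProxyJump haproxy

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Afterwards, &lt;code&gt;ssh sample-master&lt;/code&gt; connects through HAProxy in one step.&lt;/em&gt;&lt;/p&gt;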

&lt;p&gt;After you are inside the sample master node, some configuration and settings should be done. Afterwards, we can initialize the other five nodes from a snapshot of the configured sample master node.&lt;/p&gt;

&lt;p&gt;Steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Enable Password Login if not enabled already.&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ echo "PermitRootLogin yes" &amp;gt;&amp;gt; /etc/ssh/sshd_config
$ sed -i -E 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Specify Password for ROOT&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo su 
$ passwd 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The given commands will ask for a new UNIX password for the root user. Define the password and do not forget or lose it. Since we are going to use a snapshot of this configured machine, all settings will be the same; I did it this way to shortcut the process.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Disable swap area (RUN ALL COMMANDS AS ROOT)&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ swapoff -a

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Afterwards, exit from the sample master node and create a snapshot of it (this is called a volume snapshot in OpenStack). Once you have successfully created the snapshot, the other five nodes should be initialized from it. This way, there is no need to repeat the steps described above.&lt;/p&gt;

&lt;p&gt;In case you do not have the possibility to create a snapshot, follow the steps below (if and only if you could NOT create a snapshot and initialize the other five nodes from it):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create all nodes (workers and masters)&lt;/li&gt;
&lt;li&gt;Enable SSH connections to all nodes from the HAProxy server&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;From the HAProxy server, once you are sure that you have root-privileged SSH access to all nodes, execute the following steps.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Install parallel-ssh (to run commands on the nodes in parallel; run with root privileges)&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ apt-get update &amp;amp;&amp;amp; apt-get install -y pssh

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Install HAProxy (with root privileges)&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ apt-get install -y haproxy 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Modify &lt;code&gt;/etc/hosts&lt;/code&gt; (for easy communication with the nodes)&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Append worker and master node IPs to &lt;code&gt;/etc/hosts&lt;/code&gt; file&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ vim /etc/hosts 
10.0.128.156 worker3
10.0.128.137 worker2
10.0.128.81 worker1
10.0.128.184 master3
10.0.128.171 master2
10.0.128.149 master1

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Create a &lt;code&gt;nodes&lt;/code&gt; text file in the home directory&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat nodes 
worker3
worker2
worker1
master3
master2
master1

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Since their IP addresses are defined in the &lt;code&gt;/etc/hosts&lt;/code&gt; file, the system can now resolve and connect to them just by name.&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Generate and Copy SSH Key to all nodes&lt;/strong&gt; (Required for easy communication)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If there is already an SSH key (like &lt;code&gt;~/.ssh/id_rsa&lt;/code&gt;), you can use it as well. If not, do the following step:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ssh-keygen # will prompt passphrase, you can leave empty , NOTE THAT IF YOU DO NOT HAVE SSH KEY, GENERATE IT.

$ for i in $(cat nodes); ssh-copy-id $i; done  

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The for loop in the second command will copy the SSH key to all nodes; afterwards, accessing any node without a password is seamless, as in the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ssh master1 # in defualt uses same username with terminal session

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Disable swap area on all nodes (Note that if you are using snapshot method, no need to do this step)&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ parallel-ssh -h nodes -i "swapoff -a"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The parallel-ssh tool is handy for completing tasks on multiple hosts in parallel.&lt;/p&gt;

&lt;h1&gt;
  
  
  KubeSpray Configuration
&lt;/h1&gt;

&lt;p&gt;KubeSpray is a repository for setting up Kubernetes clusters with predefined configuration settings using Ansible playbooks. Its usage is pretty straightforward. By default, KubeSpray uses an internal load balancer on each worker node, which means that when you set up a Kubernetes cluster with KubeSpray's default values, you get the following architecture overview.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmrturkmencom.github.io%2Fassets%2Fimages%2Fkubernetes%2Foverview-kube-cluster-default.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmrturkmencom.github.io%2Fassets%2Fimages%2Fkubernetes%2Foverview-kube-cluster-default.png" alt="Default Kubernetes Arch with KubeSpray Setup" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, in this guide the external load balancer approach will be used to set up the cluster; if you wish to leave everything at the KubeSpray defaults, you can skip the External Load Balancer Setup part.&lt;/p&gt;

&lt;h1&gt;
  
  
  External Load Balancer Setup (HAProxy)
&lt;/h1&gt;

&lt;p&gt;Modify the HAProxy configuration file to enable the external load balancer: copy the following configuration and append it to the end of &lt;code&gt;/etc/haproxy/haproxy.cfg&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;listen kubernetes-apiserver-https
  bind &amp;lt;your-haproxy-internal-ip&amp;gt;:8383
  mode tcp
  option log-health-checks
  timeout client 3h
  timeout server 3h
  server master1 &amp;lt;your-master1-ip&amp;gt;:6443 check check-ssl verify none inter 10000
  server master2 &amp;lt;your-master2-ip&amp;gt;:6443 check check-ssl verify none inter 10000
  server master3 &amp;lt;your-master3-ip&amp;gt;:6443 check check-ssl verify none inter 10000
  balance roundrobin

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The balancing algorithm is &lt;code&gt;roundrobin&lt;/code&gt;; however, you can change it to any of the available &lt;a href="https://cbonte.github.io/haproxy-dconv/configuration-1.4.html#4.2-balance" rel="noopener noreferrer"&gt;balance algorithms&lt;/a&gt; provided by HAProxy.&lt;/p&gt;
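&lt;p&gt;&lt;em&gt;Before restarting, the edited file can be syntax-checked with HAProxy's check mode (assuming &lt;code&gt;haproxy&lt;/code&gt; is on the PATH):&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ haproxy -c -f /etc/haproxy/haproxy.cfg

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;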

&lt;p&gt;Once it is done, save and restart HAProxy service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ systemctl restart haproxy

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Setup KubeSpray Configuration
&lt;/h1&gt;

&lt;p&gt;Since an external load balancer will be used, a few default values in KubeSpray need to be changed. The following steps are done on the HAProxy node.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Clone the project and prepare environment&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
$ git clone https://github.com/kubernetes-sigs/kubespray
$ apt-get install -y python3-pip # install pip3 if not installed
$ cd kubespray

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Follow the guide on KubeSpray README.md file&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The following instructions are taken from the &lt;a href="https://github.com/kubernetes-sigs/kubespray" rel="noopener noreferrer"&gt;KubeSpray README.md&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Install dependencies from ``requirements.txt``
sudo pip3 install -r requirements.txt
# Copy ``inventory/sample`` as ``inventory/mycluster``
cp -rfp inventory/sample inventory/mycluster
# Update Ansible inventory file with inventory builder
declare -a IPS=(10.0.128.149 10.0.128.171 10.0.128.184 10.0.128.81 10.0.128.137 10.0.128.156)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Modify the generated hosts YAML file&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When you check the &lt;code&gt;inventory/mycluster/hosts.yaml&lt;/code&gt; file, you will notice that it created only two master nodes, whereas we require three; add the missing one to the list properly, as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;all:
  hosts:
    master1:
      ansible_host: 10.0.128.149
      ip: 10.0.128.149
      access_ip: 10.0.128.149
    master2:
      ansible_host: 10.0.128.171
      ip: 10.0.128.171
      access_ip: 10.0.128.171
    master3:
      ansible_host: 10.0.128.184
      ip: 10.0.128.184
      access_ip: 10.0.128.184
    worker1:
      ansible_host: 10.0.128.81
      ip: 10.0.128.81
      access_ip: 10.0.128.81
    worker2:
      ansible_host: 10.0.128.137
      ip: 10.0.128.137
      access_ip: 10.0.128.137
    worker3:
      ansible_host: 10.0.128.156
      ip: 10.0.128.156
      access_ip: 10.0.128.156
  children:
    kube-master:
      hosts:
        master1:
        master2:
        master3:
    kube-node:
      hosts:
        master1:
        master2:
        master3:
        worker1:
        worker2:
        worker3:
    etcd:
      hosts:
        master1:
        master2:
        master3:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once that is done, the other thing that should be modified in order to use the external HAProxy load balancer is the &lt;code&gt;all.yml&lt;/code&gt; file located under &lt;code&gt;inventory/mycluster/group_vars/all/&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;all.yml&lt;/code&gt; is the general configuration file specifying the main settings of your cluster. Unless told otherwise, it uses an Nginx load balancer by default, which means each worker node has its own local nginx load balancer, as in the second figure above.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Disable default load balancer&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ vim inventory/mycluster/group_vars/all/all.yml
loadbalancer_apiserver_localhost: false

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Add external load balancer HAProxy.&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ vim inventory/mycluster/group_vars/all/all.yml
## External LB example config
apiserver_loadbalancer_domain_name: "&amp;lt;domain-name-of-lb&amp;gt;"
loadbalancer_apiserver:
  address: 10.0.128.193
  port: 8383

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Initialize cluster deployment&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# under kubespray/ directoy 
$ ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It will take around 10-15 minutes, depending on your cluster; if everything goes well, you will not face any problem at the end of the Ansible deployment. You can then test it by SSHing to a &lt;code&gt;master&lt;/code&gt; node and trying &lt;code&gt;kubectl cluster-info&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl cluster-info
Kubernetes master is running at .....

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This means that a Kubernetes cluster with three master and three worker nodes is available to use.&lt;/p&gt;

&lt;p&gt;Note that the default configuration of the cluster can be customized further; however, before attempting to change the defaults, make sure you do proper research on what you are changing in the KubeSpray settings. Otherwise, there might be problems caused by the customized configuration.&lt;/p&gt;

&lt;p&gt;For more information, stay updated and watch &lt;a href="https://github.com/kubernetes-sigs/kubespray" rel="noopener noreferrer"&gt;KubeSpray&lt;/a&gt; for issues, pitfalls, and more.&lt;/p&gt;

&lt;p&gt;The last step for this post is creating the &lt;code&gt;kubectl&lt;/code&gt; configuration on your personal/work computer so it can access the cluster. &lt;a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/" rel="noopener noreferrer"&gt;Install kubectl&lt;/a&gt; in your environment, then copy the configuration from a master node to your &lt;code&gt;~/.kube/&lt;/code&gt; as &lt;code&gt;config&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Since we have only one endpoint, the configuration file should be copied to the HAProxy server first and then to your computer, via &lt;code&gt;rsync&lt;/code&gt; or &lt;code&gt;scp&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;On HAProxy Server&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ scp root@master1:/etc/kubernetes/admin.conf config # will copy admin.conf as config 
$ cp config /home/ubuntu/ # copy to a user home dir
$ chown ubuntu:ubuntu /home/ubuntu/config # change owner of the file 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;On your personal/work computer&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ scp -i ~/.ssh/haproxy.pem ubuntu@&amp;lt;ha-proxy-ip&amp;gt;:/home/ubuntu/config ~/.kube/

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you should be able to get and dump your cluster information just as on the master nodes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl cluster-info
Kubernetes master is running at .....

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are lots of configurations and settings involved in a Kubernetes cluster environment, and using a cloud provider's solution is generally less painful, or painless. However, it is sometimes less costly to set up your own environment, and having full access to everything can be better for learning how things work under the hood or for creating highly customized environments. It really depends on your situation, so it is up to you whether to set up your own Kubernetes cluster or consume it as a service from a cloud provider.&lt;/p&gt;

&lt;p&gt;By the way, thanks for taking the time to check out the post 😉&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>kubernetes</category>
      <category>docker</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Setting up our own personal VPN 🇹🇷</title>
      <dc:creator>Ahmet Turkmen</dc:creator>
      <pubDate>Wed, 01 Jul 2020 10:00:00 +0000</pubDate>
      <link>https://dev.to/mrturkmen/kendimize-ozel-vpn-kurulumu-9b3</link>
      <guid>https://dev.to/mrturkmen/kendimize-ozel-vpn-kurulumu-9b3</guid>
      <description>&lt;ul&gt;
&lt;li&gt;
Let's Set Up a VPN

&lt;ul&gt;
&lt;li&gt;Why should we set up our own VPN?&lt;/li&gt;
&lt;li&gt;Server Setup&lt;/li&gt;
&lt;li&gt;Client Setup&lt;/li&gt;
&lt;li&gt;Firewall Settings&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Let's Set Up a VPN
&lt;/h1&gt;

&lt;p&gt;Today I want to show you how to set up your own VPN. I &lt;a href="https://mrturkmen.com/setup-free-vpn/"&gt;published&lt;/a&gt; this in English before, but I thought a Turkish resource could be useful as well. The steps described here were tested on servers from the Ubuntu family (16.04, 18.04).&lt;/p&gt;

&lt;p&gt;First, you rent a server from a cloud provider, which could be DigitalOcean, Google Cloud, Microsoft Azure, or Amazon; I would say the cheapest reasonable option is the &lt;a href="https://www.digitalocean.com/pricing/"&gt;5 dollars per month&lt;/a&gt; server offered by DigitalOcean. Once the server is rented and you have an SSH connection, we can move on to the VPN setup.&lt;/p&gt;

&lt;p&gt;For those not fully familiar with VPNs, it can be summarized like this: think of it as a virtual connection point created just for you. Once you are connected to the VPN, the network traffic leaving and entering your computer is handled in encrypted form, protecting you from third-party software and attacks such as MITM.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why should we set up our own VPN?
&lt;/h2&gt;

&lt;p&gt;Because all of the VPN services that exist today, even the ones offered for free, record your information so it can be sold, archived, and handed over to the relevant authorities when required. What harm can this cause? Let's list a few examples together:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Being targeted by phishing attacks built from information only you should know.&lt;/li&gt;
&lt;li&gt;Being bombarded with ads by the sites you visit.&lt;/li&gt;
&lt;li&gt;Your personal information being sold to advertising agencies. Many people do not fully grasp this point: person A, who shops online, believes their own information is worthless to whoever would sell it, and keeps using the internet without any privacy. Even if this never ends up harming person A, it can harm the friends they talk to, meet with, or work with.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We cannot even call the items above the tip of the iceberg; today's data-processing techniques and approaches have advanced so far that the processing is finished before you are even aware that something of yours exists :).&lt;/p&gt;

&lt;p&gt;For these and many more reasons, I would say using a VPN is a must. So how do we do it? From this point on, I assume you have rented a server from a cloud provider and established an SSH connection.&lt;/p&gt;

&lt;p&gt;This post uses the WireGuard VPN application. WireGuard is open source and, thanks to what it offers, is much faster and more reliable than other VPN applications (OpenVPN and others).&lt;/p&gt;

&lt;h2&gt;
  
  
  Server Setup
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Let's install the VPN application on the server we rented.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt-get update &amp;amp;&amp;amp; sudo apt-get upgrade -y 
$ sudo add-apt-repository ppa:wireguard/wireguard
$ sudo apt-get update 
$ sudo apt-get install wireguard

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Let's enter the command required to load the WireGuard kernel module, which is kept up to date along with kernel updates.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo modprobe wireguard

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;The expected output when the command below is run.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ lsmod | grep wireguard

wireguard 217088 0
ip6_udp_tunnel 16384 1 wireguard
udp_tunnel 16384 1 wireguard

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Let's generate the keys&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cd /etc/wireguard
$ umask 077
$ wg genkey | sudo tee privatekey | wg pubkey | sudo tee publickey

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Let's set up the VPN configuration file at &lt;code&gt;/etc/wireguard/wg0.conf&lt;/code&gt;.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
[Interface]
PrivateKey = &amp;lt;private-key-generated-earlier&amp;gt;
Address = 10.120.120.1/24
Address = fd86:ea04:1111::1/64
SaveConfig = true
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o ens3 -j MASQUERADE; ip6tables -A FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -A POSTROUTING -o ens3 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o ens3 -j MASQUERADE; ip6tables -D FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -D POSTROUTING -o ens3 -j MASQUERADE
ListenPort = 51820

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;An important point here is &lt;code&gt;ens3&lt;/code&gt;: the interface name used in the iptables commands can differ from server to server, so you must enter whatever your server's network interface is actually called. You can find it with the &lt;code&gt;ifconfig&lt;/code&gt; command.&lt;/p&gt;
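&lt;p&gt;As a small illustration, here is one hedged way to pull the interface name out of a default-route line (the sample line below is made up; on your own server, run &lt;code&gt;ip -o -4 route show to default&lt;/code&gt; and use the device name it reports in place of &lt;code&gt;ens3&lt;/code&gt;):&lt;/p&gt;

```shell
# Illustrative route line; a real server prints its own.
sample='default via 203.0.113.1 dev ens3 proto dhcp src 203.0.113.10'

# The interface name is the field right after "dev".
nic=$(printf '%s\n' "$sample" | awk '{for (i = 1; i < NF; i++) if ($i == "dev") print $(i + 1)}')
echo "$nic"   # ens3
```

&lt;p&gt;Whatever name this yields on your server is the one the &lt;code&gt;PostUp&lt;/code&gt;/&lt;code&gt;PostDown&lt;/code&gt; rules should use.&lt;/p&gt;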

&lt;p&gt;Another important point is that the contents of the &lt;code&gt;privatekey&lt;/code&gt; file generated earlier in the key-generation step must be entered in the &lt;code&gt;PrivateKey&lt;/code&gt; field.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Forwarding network traffic&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Enter the lines below into the &lt;code&gt;/etc/sysctl.conf&lt;/code&gt; file and save it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;After saving the file, the following commands should be entered in order.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sysctl -p
$ wg-quick up wg0

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;If the commands complete without any problems, entering the &lt;code&gt;wg&lt;/code&gt; command in the terminal will show output similar to the one below.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ wg
interface: wg0
  public key: loZviZQpT5Sy4gFKEbk6Vc/rcJ3bH84L7TUj4qMB918=
  private key: (hidden)
  listening port: 51820

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you have not hit any problems up to this step, the server side is done for now; all that remains is connecting our own computer, phone, and so on to the VPN server.&lt;/p&gt;

&lt;h2&gt;
  
  
  Client Setup
&lt;/h2&gt;

&lt;p&gt;You can download the applications that users can run on their own computers, phones, tablets, or other servers from &lt;a href="https://www.wireguard.com/install/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After downloading the application to your own environment, all you need to do is connect to the VPN we configured on the server side; the only requirement is entering the configuration correctly.&lt;/p&gt;

&lt;p&gt;On the client side, you need to enter a configuration similar to the one below in the application (the private key and IP address will differ according to the VPN you set up yourself).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Interface]
Address = 10.120.120.2/32
Address = fd86:ea04:1111::2/128
# note that privatekey value is just a place holder 
PrivateKey = KIaLGPDJo6C1g891+swzfy4LkwQofR2q82pFR6BW9VM=
DNS = 1.1.1.1

[Peer]
PublicKey = &amp;lt;your-server-public-key&amp;gt;
Endpoint = &amp;lt;your-server-public-ip&amp;gt;:51820
AllowedIPs = 0.0.0.0/0, ::/0

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the client side is set up as well, what remains is granting this client permission to connect on the server side, which you can do with the command below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ wg set wg0 peer &amp;lt;client-public-key&amp;gt; allowed-ips 10.120.120.2/32,fd86:ea04:1111::2/128

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can confirm from the server side that the client has established the VPN connection with the command below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ wg

interface: wg0
  public key: loZviZQpT5Sy4gFKEbk6Vc/rcJ3bH84L7TUj4qMB918=
  private key: (hidden)
  listening port: 51820

peer: Ta9esbl7yvQJA/rMt5NqS25I/oeuTKbFHJu7oV5dbA4=
  allowed ips: 10.120.120.2/32, fd86:ea04:1111::2/128

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Afterwards, let's activate the network interface created by WireGuard.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ wg-quick up wg0

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Firewall Settings
&lt;/h2&gt;

&lt;p&gt;Sometimes there are firewall settings you need to make on the server side; these are critical for establishing the VPN connection successfully.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ufw enable

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;We open the port that will let us connect to the VPN application.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
$ ufw allow 51820/udp

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;We make some adjustments for port 51820 with iptables.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ iptables -A INPUT -p udp -m udp --dport 51820 -j ACCEPT
$ iptables -A OUTPUT -p udp -m udp --sport 51820 -j ACCEPT

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One important point here is that all of the commands must be run as root, that is, with administrator privileges; otherwise they will fail.&lt;/p&gt;

&lt;p&gt;From this point on, thanks to the WireGuard application you installed on your computer, tablet, or phone, you can use the internet smoothly and securely.&lt;/p&gt;

</description>
      <category>blog</category>
      <category>security</category>
      <category>linux</category>
      <category>macosx</category>
    </item>
  </channel>
</rss>
