<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Dev</title>
    <description>The latest articles on DEV Community by Dev (@thundersparkf).</description>
    <link>https://dev.to/thundersparkf</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F446675%2F6b3e30d0-1c2a-41b8-a9ab-65163fd5fb06.jpeg</url>
      <title>DEV Community: Dev</title>
      <link>https://dev.to/thundersparkf</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/thundersparkf"/>
    <language>en</language>
    <item>
      <title>samwise-CLI: The Open Source Terraform module dependency tracker</title>
      <dc:creator>Dev</dc:creator>
      <pubDate>Mon, 12 Aug 2024 10:06:30 +0000</pubDate>
      <link>https://dev.to/thundersparkf/samwise-cli-the-open-source-terraform-module-dependency-tracker-3b1e</link>
      <guid>https://dev.to/thundersparkf/samwise-cli-the-open-source-terraform-module-dependency-tracker-3b1e</guid>
      <description>&lt;h3&gt;
  
  
  Terraform
&lt;/h3&gt;

&lt;p&gt;Terraform is an Infrastructure-as-Code (IaC) tool whose configurations are written in HashiCorp Configuration Language (HCL). This article assumes that the reader has worked with Terraform and understands how modules work.&lt;/p&gt;

&lt;p&gt;Everyone coding in Terraform has either created their own modules, or at least used someone else's.&lt;/p&gt;

&lt;h3&gt;
  
  
  Terraform modules
&lt;/h3&gt;

&lt;p&gt;Modules are containers for multiple resources that are used together. A module consists of a collection of .tf and/or .tf.json files kept together in a directory.&lt;/p&gt;

&lt;p&gt;Modules are the main way to package and reuse resource configurations with Terraform.&lt;/p&gt;

&lt;p&gt;Modules can be published to registries like the Terraform Registry or GitLab. Modules can also be served from Git repositories by providing their HTTPS or SSH URLs, and these can be versioned as well. For example,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "consul" {
  source = "github.com/hashicorp/example"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Versioned:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "consul" {
  source = "github.com/hashicorp/example?ref=1.2.6"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
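
&lt;p&gt;For comparison, modules pulled from the Terraform Registry take a separate &lt;code&gt;version&lt;/code&gt; argument rather than a &lt;code&gt;?ref&lt;/code&gt; query string. A sketch using the public AWS VPC module (any registry module works the same way):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~&amp;gt; 5.0"   # accept any 5.x release
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;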

&lt;h3&gt;
  
  
  Challenge
&lt;/h3&gt;

&lt;p&gt;As your repositories grow and you reference your modules from other repositories, you would reasonably pin module versions to ensure that upstream changes in the source don't break your infrastructure. However, it is difficult to keep track of all the new releases for the modules in use, and even harder to do so regularly. Left unaddressed, this builds up over time as tech debt, until one day you discover that a core module is now three major versions ahead of the one you reference.&lt;/p&gt;
&lt;h3&gt;
  
  
  Solution
&lt;/h3&gt;

&lt;p&gt;Presenting &lt;code&gt;samwise-cli&lt;/code&gt;, a &lt;a href="https://github.com/thundersparkf/samwise-cli" rel="noopener noreferrer"&gt;tool&lt;/a&gt; to help track your repository's Terraform/OpenTofu dependencies upstream. It searches your repository for module usages and generates a report of the modules that have updates available, along with every version newer than the one currently in use.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffnzjs79la353liyf88ui.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffnzjs79la353liyf88ui.png" alt="samwise-cli guide"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At the moment, there's only one command, but hopefully there'll be more soon as the tool develops.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/Darth-Tech" rel="noopener noreferrer"&gt;
        Darth-Tech
      &lt;/a&gt; / &lt;a href="https://github.com/Darth-Tech/samwise-cli" rel="noopener noreferrer"&gt;
        samwise-cli
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A CLI application to accompany on your terraform module journey and sharing your burden of module dependency updates, just as one brave Hobbit helped Frodo carry his :)
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;samwise&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;A CLI application to accompany on your terraform module journey and sharing your burden of module dependency updates, just as one brave Hobbit helped Frodo carry his :)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/thundersparkf/samwise-cli/actions/workflows/go-test-workflow.yml" rel="noopener noreferrer"&gt;&lt;img src="https://github.com/thundersparkf/samwise-cli/actions/workflows/go-test-workflow.yml/badge.svg" alt="Go Test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;
&lt;pre class="notranslate"&gt;&lt;code&gt;                       \ : /
                    '-: __ :-'
                    -:  )(_ :--
                    -' |r-_i'-
            ,sSSSSs,   (2-,7
            sS';:'`Ss   )-j
           ;K e (e s7  /  (
            S, ''  SJ (  ;/
            sL_~~_;(S_)  _7
|,          'J)_.-' /&amp;gt;'-' `Z
j J         /-;-A'-'|'--'-j\
 L L        )  |/   :    /  \
  \ \       | | |    '._.'|  L
   \ \      | | |       | \  J
    \ \    _/ | |       |  ',|
     \ L.,' | | |       |   |/
    _;-r-&amp;lt;_.| \=\    __.;  _/
      {_}"  L-'  '--'   / /|
            |   ,      |  \|
            |   |      |   ")
            L   ;|     |   /|
           /|    ;     |  / |
          | |    ;     |  ) |
         |  |    ;|    | /  |
         | ;|    ||    | |  |
         L-'|____||    )/   |
             % %/ '-,-&lt;/code&gt;&lt;/pre&gt;…&lt;/div&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/Darth-Tech/samwise-cli" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;



&lt;h3&gt;
  
  
  checkForUpdates
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftr4dh097vs38defsda5d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftr4dh097vs38defsda5d.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this to run where modules use private GitHub repositories, a &lt;code&gt;.samwise.yaml&lt;/code&gt; config file needs to be passed as an argument or be present in the user's home directory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;.samwise.yaml format:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git_key:
git_username:
git_ssh_key_path:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or as &lt;strong&gt;environment variables:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SAMWISE_CLI_GIT_KEY
SAMWISE_CLI_GIT_USERNAME
SAMWISE_CLI_GIT_SSH_KEY_PATH
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Result
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;CSV Format&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9zift4f0acqgi8fd70ws.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9zift4f0acqgi8fd70ws.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Limitations (or, better described, features to be added)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;del&gt;SSH authentication for retrieving module sources&lt;/del&gt; (support added)&lt;/li&gt;
&lt;li&gt;Tracking modules from HashiCorp's registry (there is an API to list versions &lt;a href="https://developer.hashicorp.com/terraform/registry/api-docs#list-available-versions-for-a-specific-module" rel="noopener noreferrer"&gt;here&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Auto-creation of PRs with the updates (added experimentally: committing only, no push or PR yet)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Context for the name
&lt;/h3&gt;

&lt;p&gt;I love Lord of the Rings :)&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>devops</category>
      <category>go</category>
      <category>cli</category>
    </item>
    <item>
      <title>Journey Through DevOps - Part 3: Higher</title>
      <dc:creator>Dev</dc:creator>
      <pubDate>Sat, 21 Aug 2021 10:51:45 +0000</pubDate>
      <link>https://dev.to/thundersparkf/journey-through-devops-part-3-higher-3n0k</link>
      <guid>https://dev.to/thundersparkf/journey-through-devops-part-3-higher-3n0k</guid>
      <description>&lt;p&gt;&lt;em&gt;This is 3 part series documenting my journey through Software development. For Journey Through DevOps - Part 2: The Awakening, click &lt;a href="https://dev.to/thundersparkf/journey-through-devops-part-2-the-awakening-4pmk"&gt;here&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This post is an insight into the automation we use in our projects. It is not a tutorial but a demonstration of how automation can change the software process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Background
&lt;/h2&gt;

&lt;p&gt;We manage the deployment of two applications - &lt;a href="https://www.chatwoot.com" rel="noopener noreferrer"&gt;Chatwoot&lt;/a&gt; and &lt;a href="https://rasa.com" rel="noopener noreferrer"&gt;Rasa-X&lt;/a&gt;. We aim to deliver content through Rasa's Natural Language Processing framework and use Chatwoot to track and, if necessary, intervene in the chat.&lt;/p&gt;

&lt;h2&gt;
  
  
  Technology stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Cloud: AWS&lt;/li&gt;
&lt;li&gt;Deployment: Helm, Kubernetes (1.21, Elastic Kubernetes Service)&lt;/li&gt;
&lt;li&gt;Datastore: Postgres, Redis (AWS managed)&lt;/li&gt;
&lt;li&gt;Applications: Rasa-X (Python), Chatwoot (Ruby on Rails)&lt;/li&gt;
&lt;li&gt;VCS: GitLab&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;h4&gt;
  
  
  Kubernetes
&lt;/h4&gt;

&lt;p&gt;We use AWS EKS to deploy our applications. The pods run on both Fargate and managed node groups to get the best of both worlds: since Chatwoot's microservices are stateless, we run them on Fargate, whereas the Rasa-X deployment uses node groups for certain pods that need fewer resources to run and also need to be stateful. &lt;/p&gt;

&lt;h4&gt;
  
  
  Infrastructure
&lt;/h4&gt;

&lt;p&gt;For AWS infrastructure, we use Terraform. The repo is connected to Terraform Cloud as a VCS-driven workspace, allowing us to focus more on the infrastructure itself instead of having to maintain CI pipelines.&lt;/p&gt;

&lt;h4&gt;
  
  
  Helm
&lt;/h4&gt;

&lt;p&gt;We use Helm charts for both &lt;a href="https://github.com/RasaHQ/rasa-x-helm" rel="noopener noreferrer"&gt;Rasa-X&lt;/a&gt; and &lt;a href="https://github.com/chatwoot/charts" rel="noopener noreferrer"&gt;Chatwoot&lt;/a&gt;, the latter of which was built by us. Helm gives us a standard way to deploy an application instead of a pile of YAML files that need modification every time there's an update. &lt;/p&gt;

&lt;h1&gt;
  
  
  The Automation
&lt;/h1&gt;

&lt;p&gt;Automation, by its nature, is intuitive. The general rule of thumb we follow is that if a particular task is done more than three times, it is best to automate it.&lt;/p&gt;

&lt;p&gt;Let's start with the lightweight components.&lt;/p&gt;

&lt;h3&gt;
  
  
  Chatwoot
&lt;/h3&gt;

&lt;p&gt;Since Chatwoot is an open-source project and we don't add much custom code to it, there isn't any need for CI/CD pipelines for Chatwoot. For every update, it's easier to run&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;helm upgrade &amp;lt;release-name&amp;gt; chatwoot/chatwoot&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Infrastructure
&lt;/h3&gt;

&lt;p&gt;Since we're using Terraform Cloud, this is also very simple. Although we did consider running Terraform locally in pipelines, that was an overhead given how small our team was. Note that you can use Terraform Cloud either to execute Terraform or just to store the Terraform state.&lt;/p&gt;
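
&lt;p&gt;As a minimal sketch, wiring a configuration to Terraform Cloud is one &lt;code&gt;remote&lt;/code&gt; backend block (the organisation and workspace names below are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "remote" {
    organization = "example-org"   # placeholder organisation

    workspaces {
      name = "chatbot-infra"       # placeholder workspace
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;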

&lt;h3&gt;
  
  
  Rasa-X
&lt;/h3&gt;

&lt;p&gt;This is the component that involves the maximum number of repetitions, and therefore is fully automated.&lt;/p&gt;

&lt;p&gt;Rasa-X has several components, but we'll focus on the important ones for now: Rasa Open Source and the Rasa custom actions server. The former is the NLP part of Rasa, and the latter is a Python server that lets you run custom actions in your chatbot, such as fetching data from a database. Since chatbot development is iterative, it was imperative that we automate this process before moving ahead. &lt;/p&gt;

&lt;h4&gt;
  
  
  CI/CD
&lt;/h4&gt;

&lt;p&gt;This part requires you to repeatedly train NLP models and test them just as often. Hence, we use GitLab CI pipelines to train a model, subsequently test it, and upload the results as artefacts.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
build-actions:
  image: docker:20.10.7
  stage: build
  services:
    - docker:20.10.7-dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - chmod +x ./ci.sh
    - ./ci.sh

train-job:       
  stage: train
  script:
    - rasa train --fixed-model-name $CI_COMMIT_SHORT_SHA
  artifacts:
    paths:
      - models/
    expire_in: 1 day

data-validate-job:  
  stage: validate    
  script:
    - rasa data validate

core-test-job:   
  stage: test    
  script:
    - rasa test core --model models/ --stories test/ --out results

nlu-test-job:   
  stage: test    
  script:
    - rasa test nlu --nlu data/nlu.yml --cross-validation

upload-job:
  stage: report
  script:
    - echo "Upload results"
  artifacts:
    paths:
      - results/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This can be broken into two parts: the custom actions server and the Rasa bot. &lt;br&gt;
The custom actions server is a Python server, so we dockerise it to allow for easier development and standardisation. &lt;code&gt;./ci.sh&lt;/code&gt; is a shell script that reads the branch name from an environment variable, sets the Docker image tag accordingly, and builds and pushes the image to the registry.&lt;/p&gt;

&lt;p&gt;The next parts of this pipeline train a model, validate it for any conflicts and errors, and finally run tests on it. All test results are uploaded as artefacts in the pipeline, which allows us to log every model and iteration. The &lt;code&gt;--fixed-model-name&lt;/code&gt; flag ensures that models have a predictable name which can be used later in the pipeline; Rasa defaults to a timestamp, which is unpredictable.&lt;/p&gt;
&lt;h5&gt;
  
  
  Branches:
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;main:
This is the branch that represents production, so every piece of code that resides here is battle tested and verified. Hence it is safe to assume that with every push into this branch, we can update the NLP model in the server.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Docker tag for custom actions: &lt;strong&gt;stable&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;deploy-job: 
  stage: deploy 
  #before_script: []
  #image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest  # see the note below
  script:
    - apt-get update
    #- apt install git-all -y
    - curl -k -F "model=@models/$CI_COMMIT_SHORT_SHA.tar.gz" "http://rasax-url.com/api/projects/default/models?api_token=$RASAXTOKEN"
    - echo "Application successfully deployed."
    - aws eks update-kubeconfig --name ${EKS_CLUSTER_NAME}
    - curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
    - helm plugin install https://github.com/rimusz/helm-tiller --kubeconfig=$HOME/.kube/kubeconfig
    - helm repo add rasa-x https://rasahq.github.io/rasa-x-helm  
    - helm upgrade rasa rasa-x/rasa-x -n rasa --set app.name=$CI_REGISTRY/weunlearn/wulu2.0/rasa_actions --set "app.tag=stable" --reuse-values   # Redeploys the kubernetes deployment with a new image name while reusing already existing values      

  rules:
    - if: '$CI_PIPELINE_SOURCE == "push" &amp;amp;&amp;amp; $CI_COMMIT_BRANCH == "main"'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This stage of the GitLab pipeline is executed only when the event is a push and the branch committed to is main. This prevents unnecessary triggering of model training for minute changes on various other branches.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;develop:
Since this is a non-production environment, there is no deployment from this branch. The deploy stage from the previous section is not executed. However, since this branch serves as a reference point before updating production code in the main branch, it has a static name. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The test results are uploaded as before. This allows us to keep track of every model iteration. &lt;/p&gt;

&lt;p&gt;Docker tag for custom actions: &lt;strong&gt;develop&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Workflow
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Pushes into non-main branches
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6d7ppo5clu5iz0xrnfso.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6d7ppo5clu5iz0xrnfso.png" alt="Pipeline of non-main branches"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Pushes into main
&lt;/h3&gt;

&lt;p&gt;The pipeline is the same as before except it has a final stage&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9022wh38mguuf0c5jp4c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9022wh38mguuf0c5jp4c.png" alt="Pipeline last step of main branch"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This setup enables a tech team of two people to manage multiple applications in our production environment, while also ensuring a standardised way to perform quality control. Now the goal of the tech team is to solve problems and build the product, as opposed to dedicating a significant share of time to figuring out how to manage it.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>terraform</category>
      <category>software</category>
    </item>
    <item>
      <title>Journey Through DevOps - Part 2: The Awakening</title>
      <dc:creator>Dev</dc:creator>
      <pubDate>Sun, 20 Jun 2021 16:36:54 +0000</pubDate>
      <link>https://dev.to/thundersparkf/journey-through-devops-part-2-the-awakening-4pmk</link>
      <guid>https://dev.to/thundersparkf/journey-through-devops-part-2-the-awakening-4pmk</guid>
      <description>&lt;p&gt;&lt;em&gt;This is 3 part series documenting my journey through Software development. For Journey Through DevOps - Part 1: World without DevOps, click &lt;a href="https://dev.to/thundersparkf/journey-through-devops-part-1-world-without-devops-3g92"&gt;here&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;After a turbulent initiation into software development, one thing was certain: I was a total novice and had to up my game. It was at this juncture that I took my first cloud computing course, &lt;a href="https://www.coursera.org/learn/aws-fundamentals-going-cloud-native/home/welcome"&gt;AWS Fundamentals: Going Cloud-Native&lt;/a&gt;. This course taught me multiple concepts such as failover, Availability Zones and load balancers. &lt;/p&gt;

&lt;p&gt;But there were a few issues. As the complexity of the infrastructure grew, with it grew:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Necessity to manage the infrastructure.&lt;/li&gt;
&lt;li&gt;Need for developing a software development methodology that would allow changes in applications to be easily integrated with existing infrastructure.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The former issue showed up when I lost track of an AWS Global Accelerator (a networking service that improves the performance of your users’ traffic), which went on to cost us credits for a few months. I had to manually search for the resource across all regions, which is less than ideal. The latter issue showed up when there were multiple Natural Language Processing models that had to be manually updated across multiple instances.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enter DevOps
&lt;/h2&gt;

&lt;p&gt;My dad is a software engineer, so I'd always hear about words and technologies that he and his team were using in their workflow. Upon looking up said words, I'd find out they solved a problem I was facing in my own workflow. Explaining DevOps is beyond the scope of both me and a blog post; instead, these are the tools on my journey that have enabled me to solve problems and implement DevOps practices.&lt;/p&gt;

&lt;h2&gt;
  
  
  CI/CD(Problem 2)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Jenkins
&lt;/h3&gt;

&lt;p&gt;Jenkins is a versatile, server-based tool for both Continuous Integration and Continuous Delivery (CI/CD) that can define processes as Pipeline as Code. That means there's extra overhead in installing a Jenkins master node and agent nodes, so for a small-scale project like ours, its versatility was its bane. &lt;/p&gt;

&lt;p&gt;We needed something lightweight.&lt;/p&gt;

&lt;h3&gt;
  
  
  GitHub Actions
&lt;/h3&gt;

&lt;p&gt;GitHub Actions is a relatively new feature of GitHub where we can declare build and test processes as YAML files. These are then executed either on GitHub-hosted runners (basically virtual machines hosted by GitHub) or on self-hosted runners. They are a game changer because they allow many simple processes to be executed simply by pushing code or creating a pull request.&lt;/p&gt;
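
&lt;p&gt;A minimal workflow, as a sketch (the file lives under &lt;code&gt;.github/workflows/&lt;/code&gt;; the install and test commands are placeholders for whatever your project uses):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: pip install -r requirements.txt   # placeholder build step
      - run: pytest                            # placeholder test step
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;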

&lt;p&gt;However, they only solve half the puzzle. GitHub Actions is extremely easy to incorporate into CI processes; the CD part, however, remains a problem. Apart from basic support for Kubernetes clusters (cloud-provided only), there are very few CD options.&lt;/p&gt;

&lt;p&gt;Still, until our second release, GitHub Actions sufficed, so we used it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hashicorp(Problem 1)
&lt;/h2&gt;

&lt;p&gt;After our first project, it was apparent we were scaling up infrastructure-wise. Enter Terraform.&lt;/p&gt;

&lt;h3&gt;
  
  
  Terraform
&lt;/h3&gt;

&lt;p&gt;My love affair with HashiCorp began with Terraform. In all honesty, there may not have been that desperate a need for it; I used it because it seemed cool, although it is now impossible to separate Terraform from my current role.&lt;/p&gt;

&lt;p&gt;To check out more about Terraform and its uses checkout my post on &lt;a href="https://medium.com/dsc-sastra-deemed-to-be-university/terraforming-the-software-landscape-db3afaf624ff"&gt;medium&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;My misconception about Terraform was that it was for deploying infrastructure the way we would deploy an application: I would write code for the infrastructure needed, deploy it on day zero, and then install my application on the relevant resources. However, that turned out not to be feasible. Terraform is a tool that is part of your daily workflow. Incremental changes to infrastructure are logged and created by Terraform, allowing you to track and visualise the infrastructure you are using and how it fits together, like a jigsaw puzzle.&lt;/p&gt;

&lt;p&gt;Although there are tools such as Pulumi which offer IaC (Infrastructure as Code) in programming languages like Python or Node, I prefer Terraform because of its simplicity and ease of use.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Granted, these were major leaps from the experience described in the previous post, but the point of writing these posts is to let people know that what you build at first will be horrible and you will cringe at it when you look back. However, as one of my favourite movies said about cooking, we can safely bet: "Not everyone can be a great developer, but a great developer can come from anywhere."&lt;/p&gt;

</description>
      <category>devops</category>
      <category>terraform</category>
      <category>docker</category>
      <category>github</category>
    </item>
    <item>
      <title>Kubernetes on Raspberry Pi</title>
      <dc:creator>Dev</dc:creator>
      <pubDate>Tue, 01 Jun 2021 19:33:38 +0000</pubDate>
      <link>https://dev.to/thundersparkf/kubernetes-on-raspberry-pi-ke1</link>
      <guid>https://dev.to/thundersparkf/kubernetes-on-raspberry-pi-ke1</guid>
      <description>&lt;p&gt;My friend had developed a proxy server in PHP as a part of his semester project. We both discussed the project and realised that this simple project gave us an opportunity to learn and apply real world concepts as opposed to letting a wonderful proxy server decay as just a proxy for grades. &lt;/p&gt;

&lt;p&gt;Enter, Kubernetes.&lt;/p&gt;

&lt;p&gt;I had personally been reading up on Kubernetes for a while, hoping to understand and experiment with it, but without an application to deploy, my theory was ineffective. So we decided to deploy my friend's proxy server application on his Raspberry Pi. This let us simulate the experience of accessing an actual cluster, as opposed to Minikube, which would depend on our machines being up all the time. &lt;/p&gt;

&lt;p&gt;It wasn't complicated at all. We dockerised and tested the application on the Raspberry Pi with Docker, after which the Kubernetes deployment was a walk in the park. &lt;/p&gt;
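
&lt;p&gt;For a sense of scale, such a deployment boils down to a short manifest; a minimal sketch (the image name and port are placeholders for the dockerised proxy, and the image would need to be built for ARM to run on a Raspberry Pi):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: proxy-server
  template:
    metadata:
      labels:
        app: proxy-server
    spec:
      containers:
        - name: proxy
          image: example/php-proxy:latest   # placeholder image
          ports:
            - containerPort: 8080           # placeholder port
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;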

&lt;p&gt;Read more about the application and deployment &lt;a href="https://www.hackster.io/yeshvanth_muniraj/kubernetes-on-raspberry-pi-99d14c"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thank you!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>raspberrypi</category>
    </item>
    <item>
      <title>Journey through DevOps - Part 1: World without DevOps</title>
      <dc:creator>Dev</dc:creator>
      <pubDate>Tue, 01 Jun 2021 19:20:26 +0000</pubDate>
      <link>https://dev.to/thundersparkf/journey-through-devops-part-1-world-without-devops-3g92</link>
      <guid>https://dev.to/thundersparkf/journey-through-devops-part-1-world-without-devops-3g92</guid>
      <description>&lt;p&gt;April 2020. I'd just begun coding regularly in Python. Built a toxic tweet classifier using LSTM models. Model wasn't spectacular by any standard, but during the whole process, I was introduced to something that would change the way I look at software development completely. It was, the &lt;strong&gt;Cloud&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I needed to train the model faster, so I used a virtual machine on Google Cloud Platform to run the training on a higher configuration. And that was about it.&lt;/p&gt;

&lt;p&gt;Fast forward a few months to August 2020. I was experimenting with random NLP tools and tested out a chatbot using &lt;a href="https://rasa.com"&gt;Rasa&lt;/a&gt;. Because of this minor experience, a friend connected me to an NGO, &lt;a href="https://weunlearn.org"&gt;Weunlearn&lt;/a&gt;, which had proposed building a chatbot to provide gender, sex-ed and mental-health content. Needless to say, I signed up to help, because to me a chatbot simply meant a bunch of NLU training data and conversation stories. &lt;/p&gt;

&lt;p&gt;After basic onboarding, it quickly became apparent that I was woefully under-prepared and under-equipped: we would soon have a bot, but where would it go?&lt;/p&gt;

&lt;p&gt;Soon, the NGO had obtained credits in AWS through the Activate Program. Now it was my responsibility to put them to good use. &lt;/p&gt;

&lt;p&gt;And I did. Just in a clumsy, haphazard and unprofessional manner.&lt;/p&gt;

&lt;p&gt;In the first iteration of the chatbot, we faced a lot of technical complications that I feel require documentation. I believe we should share our stories not just as simple tech tutorials or guides but also as journeys, because that is part of what separates us from the machines we so passionately work on. What follows are the mistakes, glitches and poor planning decisions that only a software noob would make.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. GitHub Chaos
&lt;/h2&gt;

&lt;p&gt;Let's go through the hiccups in reverse chronological order. Git is a version control system (VCS). GitHub is a cloud-based service that hosts your Git repositories; GitHub (or any VCS host) allows for easy collaboration and code management across a team. The tech team at the NGO was two people, including me, and we chose GitHub to store the repos.&lt;/p&gt;

&lt;h3&gt;
  
  
  1.1 First mistake - Binaries in GitHub
&lt;/h3&gt;

&lt;p&gt;GitHub is a place to store code files, not binaries. Without understanding this, we had stored the Rasa NLP models in the GitHub repo, which was fine until the day of deployment: a few last-minute additions to the NLP training data had pushed the model above GitHub's file size limit. Two hours to deployment and the first message.&lt;/p&gt;

&lt;p&gt;Needing to find alternate methods to store the binary and deploy, we turned to...&lt;/p&gt;

&lt;h3&gt;
  
  
  1.2 Second - Git-LFS
&lt;/h3&gt;

&lt;p&gt;Needless to say, as a complete beginner I was unaware of how Git LFS works and chose to use it in the heat of the moment. Git LFS works by replacing the file contents with a pointer; if a system using the repo does not have Git LFS installed, the whole thing breaks down. And it did. &lt;/p&gt;
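
&lt;p&gt;To make the pointer mechanism concrete: what actually lands in the repo in place of a tracked file is a tiny text stub like the one below, per the Git LFS pointer spec (the hash and size here are made up):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version https://git-lfs.github.com/spec/v1
oid sha256:4d7a214614ab2935c943f9e0ff69d22eadbb8f32b1258daaa5e2ca24d17e2393
size 12345
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;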

&lt;p&gt;I was left with multiple Dockerfiles containing no Docker-related lines, just a pointer to a Git LFS object we could not reach. What happened after that is a hazy mess at best, but we did manage to SCP the code from a local machine to the VM.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Lessons&lt;/em&gt;&lt;/strong&gt;: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Always learn how to use your VCS. Binaries do not go into GitHub; use storage services like S3 (free tier) or Google Cloud Storage (which provides $300 in starting credit for new sign-ups). If you do want to use a tool, understand the consequences before going ahead.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Don't experiment on the day of deployment&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  2. Deployment Unstrategy
&lt;/h2&gt;

&lt;p&gt;I had abundant AWS credits and a fair knowledge of Docker, so we decided to use a simple EC2 instance to host the application.&lt;/p&gt;

&lt;p&gt;Rasa provides minimum system requirements for running its bots, and we adhered to those. However, we were unsure what to do in case of overload. We tested with only a handful of users, and inexperience got the best of us.&lt;/p&gt;

&lt;p&gt;Failover strategy: download the code onto another VM (kept in the Stopped state), wait for the application to fail, then manually start the backup instance.&lt;/p&gt;

&lt;p&gt;However, I had not set up a monitoring system to alert anyone to failures, which essentially meant we had no way of knowing if the bot went down. It also meant any change in training data required manually updating multiple instances and verifying them.&lt;/p&gt;
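&lt;p&gt;Monitoring doesn't have to be elaborate to be better than nothing. A minimal sketch of what would have saved us, assuming the bot exposes some health endpoint (the URL and the notify function here are hypothetical; run something like this from cron every minute):&lt;/p&gt;

```python
import urllib.request


def check_health(url, timeout=5):
    """Return True if the endpoint answers with HTTP 200, else False."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        return False


def alert_if_down(url, notify, checker=check_health):
    """Run one health check and call notify(message) on failure.

    notify is whatever alerting channel you have (email, chat webhook, ...);
    checker is injectable so the logic can be tested without a live server.
    """
    if not checker(url):
        notify(f"chatbot health check failed for {url}")
        return False
    return True
```

&lt;p&gt;Even this much would have told us the bot was down before our users did.&lt;/p&gt;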

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Lessons&lt;/em&gt;&lt;/strong&gt;: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Load testing is an important tool in your arsenal. &lt;a href="https://k6.io"&gt;k6&lt;/a&gt; (JavaScript) and &lt;a href="https://locust.io"&gt;Locust&lt;/a&gt; (Python) are wonderful tools.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Always try to keep your operations overhead low (more on this in the next topic). Use cloud-based infrastructure to monitor your application as much as possible.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
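&lt;p&gt;If pulling in a full framework feels heavy at first, even a minimal concurrent smoke test in plain Python gives you rough latency numbers. A sketch, with a stub standing in for the real HTTP call to the bot (swap the sleep for an actual request to your webhook, e.g. Rasa's REST channel):&lt;/p&gt;

```python
import time
from concurrent.futures import ThreadPoolExecutor


def send_message(text):
    """Stand-in for a real HTTP POST to the bot's webhook (hypothetical stub)."""
    time.sleep(0.01)  # pretend the bot takes ~10 ms to answer
    return {"recipient_id": "load-test", "text": "echo: " + text}


def run_load_test(n_users, messages_per_user):
    """Fire concurrent simulated users and report latency percentiles."""
    latencies = []  # list.append is atomic in CPython, safe across threads

    def one_user(uid):
        for i in range(messages_per_user):
            start = time.perf_counter()
            send_message(f"hello {uid}-{i}")
            latencies.append(time.perf_counter() - start)

    with ThreadPoolExecutor(max_workers=n_users) as pool:
        list(pool.map(one_user, range(n_users)))  # wait for all users to finish

    latencies.sort()
    return {
        "requests": len(latencies),
        "p50_ms": latencies[len(latencies) // 2] * 1000,
        "p95_ms": latencies[min(int(len(latencies) * 0.95), len(latencies) - 1)] * 1000,
    }
```

&lt;p&gt;Run it with increasing user counts and watch where p95 starts to climb; that is roughly where your single EC2 instance gives out.&lt;/p&gt;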

&lt;h2&gt;
  
  
  3. CI/CD
&lt;/h2&gt;

&lt;p&gt;Continuous Integration/Continuous Delivery is one of the most important practices a developer needs as they move from just coding to developing applications. Application development doesn't always mean the cloud, and even if you don't have access to cloud resources, CI/CD should be part of your day-to-day development.&lt;/p&gt;

&lt;p&gt;Continuous Integration means that whatever changes you make to your code are automatically built into the source and its binaries. For example, if your application ships as a binary, then every time you update the code, the latest binary should be built on its own. This lets you focus on the code rather than on building versions of binaries. Git uses branches to version code: an update to the master branch may build a stable binary, whereas in a development branch the binary may be tagged "latest", allowing for an easier understanding of changes and versions of the code.&lt;/p&gt;

&lt;p&gt;Continuous Delivery is taking the updated artefacts (binaries, Docker images) and updating the application servers without manual intervention. Let's say your application uses Docker images. Every push to the master branch should trigger a CI workflow, which builds a Docker image and stores it securely. A CD workflow would then replace the outdated Docker image in the application and restart it with the new one.&lt;/p&gt;

&lt;p&gt;All done automatically.&lt;/p&gt;
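&lt;p&gt;As a sketch of what "all done automatically" can look like: a minimal GitHub Actions workflow that builds and pushes a Docker image on every push to master (the image name and secret names are placeholders for your own):&lt;/p&gt;

```yaml
name: build-and-push
on:
  push:
    branches: [master]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Log in to the registry; credentials live in repo secrets, never in code
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      # Build the Dockerfile at the repo root and push the resulting image
      - uses: docker/build-push-action@v5
        with:
          push: true
          tags: yourorg/chatbot:latest
```

&lt;p&gt;The CD half can be as simple as the server pulling the new tag and restarting the container, or a tool like Watchtower doing it for you.&lt;/p&gt;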

&lt;p&gt;None of this was part of our first iteration of the bot. Models were built manually, testing was unheard of, and any change had to be manually applied on each server.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Lessons&lt;/em&gt;&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Even the smallest of applications benefits from CI/CD practices.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;It eases your workload by automating most of it.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;As is evident from all this, our bot was shabby and clunky technology-wise. It made our job a lot harder because we had to deal with problems that arose not from the product, but from the practices we were following.&lt;/p&gt;

&lt;p&gt;The story, however, does have a happy ending. Watch out for Journey through DevOps - Part 2: The Awakening.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>docker</category>
      <category>cicd</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
