<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Hassan Murtaza</title>
    <description>The latest articles on DEV Community by Hassan Murtaza (@mrhassanmurtaza).</description>
    <link>https://dev.to/mrhassanmurtaza</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F172435%2Fbcbaadb6-c0e7-4526-91fb-a20900f23db2.png</url>
      <title>DEV Community: Hassan Murtaza</title>
      <link>https://dev.to/mrhassanmurtaza</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mrhassanmurtaza"/>
    <language>en</language>
    <item>
      <title>Ingest Millions of Messages in the Queue using Serverless</title>
      <dc:creator>Hassan Murtaza</dc:creator>
      <pubDate>Sun, 06 Dec 2020 20:54:38 +0000</pubDate>
      <link>https://dev.to/mrhassanmurtaza/ingest-millions-of-messages-in-the-queue-using-serverless-32la</link>
      <guid>https://dev.to/mrhassanmurtaza/ingest-millions-of-messages-in-the-queue-using-serverless-32la</guid>
      <description>&lt;p&gt;While working on one of the Serverless projects, We came across a problem where I needed to ingest millions of messages from an external API to Azure Storage Queue. We made a series of changes in my approach to reach the solution which worked for us and I believe it can be helpful to anyone who’s struggling with the same problem.&lt;/p&gt;

&lt;p&gt;We started with an approach where we executed a serverless function through an HTTP endpoint. We call it the Ingestion Function. The ingestion function would hit the external API to fetch the messages and try to ingest them into the queue.&lt;/p&gt;

&lt;p&gt;Here is how our initial architecture looked:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--T7yFPiGx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/mv1s1ed8ompqkp2v4bwz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--T7yFPiGx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/mv1s1ed8ompqkp2v4bwz.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As the architecture diagram shows, the single ingestion function is a bottleneck: it can only scale up to specific limits. We were also not using the real strength of serverless, which is many small parallel invocations. We therefore decided on an approach where we could scale the ingestion out to multiple invocations and get as much scalability as needed.&lt;/p&gt;

&lt;p&gt;The idea was to divide the total number of messages into multiple chunks (of 5,000 in our case) and then pass those chunks to the ingestion function so that it could finally ingest the messages into the queue.&lt;/p&gt;

&lt;p&gt;We created another serverless function, which we call the Chunking Function, to divide the messages into chunks using this helper function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def chunking_of_list(data):
"""return the list in 5000 chunks"""
  return [data[x:x+5000] for x in range(0, len(data), 5000)]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We then uploaded each chunk as a separate file to Azure Blob Storage using this code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def upload_chunks(idLists):
  st = storage.Storage(container_name=constant.CHUNK_CONTAINER)

  for index, singleList in enumerate(idLists):
     logging.info('Uploading file{0} to the {1} blob storage'  .format(index, constant.CHUNK_CONTAINER))
     st.upload_chunk('file' + str(index), singleList)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we set up the ingestion function to listen to Azure Blob Storage file-upload events. As soon as a file is uploaded, the ingestion function downloads it, reads it, and ingests the messages into the queue. As desired, we now have multiple ingestion-function invocations working in parallel, which gives us the scalability we needed.&lt;/p&gt;
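&lt;p&gt;The event-triggered side can be sketched as below. This is a minimal, hypothetical sketch, not the exact code from our project: the chunk-file format (a JSON list) and the injected &lt;code&gt;send_message&lt;/code&gt; callable (standing in for a queue client call such as &lt;code&gt;azure.storage.queue.QueueClient.send_message&lt;/code&gt;) are assumptions made so the logic is self-contained.&lt;/p&gt;

```python
import json

def ingest_chunk(blob_bytes, send_message):
    """Push every message in one uploaded chunk file onto the queue.

    blob_bytes:   raw content of the uploaded chunk file, assumed to
                  be a JSON list of messages.
    send_message: queue-client send callable, injected here so the
                  logic can run without Azure credentials.
    """
    messages = json.loads(blob_bytes)
    for msg in messages:
        send_message(json.dumps(msg))
    return len(messages)
```

&lt;p&gt;In the real function, the blob-trigger binding supplies the file content and &lt;code&gt;send_message&lt;/code&gt; would be the storage queue client's send call.&lt;/p&gt;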

&lt;p&gt;Here’s how the final architecture looked:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---7-U6b2B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/lqxcabfaaltgsrmikkoc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---7-U6b2B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/lqxcabfaaltgsrmikkoc.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We essentially followed a fan-out architecture, spreading our workload across many serverless invocations instead of one.&lt;/p&gt;

&lt;p&gt;Peace ✌&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>azure</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Overcoming AWS Lambda Limitations with Fargate</title>
      <dc:creator>Hassan Murtaza</dc:creator>
      <pubDate>Mon, 08 Jun 2020 18:14:51 +0000</pubDate>
      <link>https://dev.to/mrhassanmurtaza/overcoming-aws-lambda-limitations-with-fargate-363d</link>
      <guid>https://dev.to/mrhassanmurtaza/overcoming-aws-lambda-limitations-with-fargate-363d</guid>
      <description>&lt;p&gt;Using Lambda with Fargate helps us to overcome the limitations of AWS Lambda. If you need to run a background process for more than 15 minutes or if you need more storage than 500 MBs, here's how you can do it:&lt;/p&gt;

&lt;p&gt;➡️ Lambda function will get triggered based on your desired event source.&lt;/p&gt;

&lt;p&gt;➡️ It will ingest the event into an SQS queue and trigger Fargate to consume the event from the queue.&lt;/p&gt;

&lt;p&gt;➡️ Since we'll be doing all the processing on Fargate, we'll overcome some of the hard limits of AWS Lambda.&lt;/p&gt;
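&lt;p&gt;The hand-off in the first two steps can be sketched as a small Lambda handler. This is a hypothetical sketch: the queue URL is a placeholder, and the SQS client (normally &lt;code&gt;boto3.client('sqs')&lt;/code&gt;) is passed in as a parameter so the logic can be exercised without AWS credentials; &lt;code&gt;send_message(QueueUrl=..., MessageBody=...)&lt;/code&gt; is the standard boto3 SQS call.&lt;/p&gt;

```python
import json

def forward_to_queue(event, sqs_client, queue_url):
    """Serialize the triggering event and hand it to SQS for Fargate.

    sqs_client is a boto3-style SQS client, injected so this can be
    tested offline; in Lambda you would create it at module scope.
    """
    response = sqs_client.send_message(
        QueueUrl=queue_url,
        MessageBody=json.dumps(event),
    )
    return response["MessageId"]
```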

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;&lt;br&gt;
Suppose you need to download a file of up to 1 GB, process it, and upload the result to S3. AWS Fargate is your go-to choice, since this is practically impossible within AWS Lambda's hard limits.&lt;/p&gt;




&lt;p&gt;If you have technical questions or suggestions, let's discuss them here, on &lt;a href="https://github.com/MrHassanMurtaza"&gt;GitHub&lt;/a&gt;, or say hi to me on &lt;a href="https://www.linkedin.com/in/mrhassanmurtaza/"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>awscloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>Deploy Azure Functions with Jenkins</title>
      <dc:creator>Hassan Murtaza</dc:creator>
      <pubDate>Mon, 01 Jun 2020 18:07:08 +0000</pubDate>
      <link>https://dev.to/mrhassanmurtaza/deploy-azure-functions-with-jenkinsfile-f5d</link>
      <guid>https://dev.to/mrhassanmurtaza/deploy-azure-functions-with-jenkinsfile-f5d</guid>
      <description>&lt;p&gt;There are multiple ways to continuously deploy the latest code to Azure Functions. &lt;/p&gt;

&lt;p&gt;Since at &lt;a href="https://www.openanalytics.eu/" rel="noopener noreferrer"&gt;OpenAnalytics&lt;/a&gt; we mostly use Jenkins for CI/CD, my preference was to stick with Jenkins. I tried multiple options, and the one that worked reliably for Azure Functions on the Linux platform was a zip deploy through the REST API.&lt;/p&gt;

&lt;p&gt;The idea is to first create a zip of the source code and then deploy it:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;curl -X POST https://&amp;lt;user&amp;gt;:&amp;lt;password&amp;gt;@&amp;lt;function-app&amp;gt;.scm.azurewebsites.net/api/zipdeploy -T &amp;lt;zipfile&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Here's how you can do it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to your function app in the Azure portal and open Properties from the sidebar. You should find the &lt;strong&gt;Deployment Trigger URL&lt;/strong&gt; there, and that's what you need. It should look something like this:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://$&amp;lt;your-functionapp&amp;gt;:&amp;lt;your-functionapp-password&amp;gt;&amp;lt;your-functionapp&amp;gt;@.scm.azurewebsites.net/deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Go to Jenkins &amp;gt; Credentials &amp;gt; Global Credentials and create global credentials.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4qumzp0xqpg03l1yz5t4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4qumzp0xqpg03l1yz5t4.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: we need to escape the dollar ($) sign in the username, which is why we use a backslash.&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Finally, this is how your Jenkinsfile should look:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeline {

        agent any

        options {
                buildDiscarder(logRotator(numToKeepStr: '3'))
        }

        environment {
            USER_CREDENTIALS = credentials('&amp;lt;your_azure_function_credentials&amp;gt;')
            FUNCTION_URL     = "&amp;lt;function_name&amp;gt;.scm.azurewebsites.net/api/zipdeploy"
            ARTIFACT         = "functionapp.zip"
        }

        stages {

            stage('Build') {
                steps {
                        sh "zip -r $ARTIFACT *"
                        script {  
                            if (fileExists(env.ARTIFACT)) {
                                sh "echo Artifact: $ARTIFACT created successfully"
                            } else {
                                error('Failed to create artifact: ' + env.ARTIFACT)
                            }
                        }
                }
            }

            stage('Deploy') {
                steps {
                    script {
                        int status = sh(script: "curl -sL -w '%{http_code}' -X POST https://$USER_CREDENTIALS_USR:$USER_CREDENTIALS_PSW@$FUNCTION_URL -T $ARTIFACT -o /dev/null", returnStdout: true).trim() as int

                        if (status != 200 &amp;amp;&amp;amp; status != 201) {
                            error("Returned status code = $status when calling $FUNCTION_URL")
                        }
                    }
                }
            }
        }

        post {
            success {
                sh 'echo Function deployed successfully.'
            }
        }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: zip deploy assumes by default that your code is ready to run as-is. To enable a build step during deployment (installing packages, etc.), add this to the application settings:&lt;br&gt;
&lt;code&gt;SCM_DO_BUILD_DURING_DEPLOYMENT=true&lt;/code&gt;&lt;/p&gt;
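&lt;p&gt;One way to set that flag is via the Azure CLI; a sketch is below, where the app and resource-group names are placeholders for your own:&lt;/p&gt;

```shell
# Hypothetical names; replace with your function app and resource group.
APP_NAME="my-functionapp"
RESOURCE_GROUP="my-resource-group"

# Enable the remote build (package installation, etc.) during zip deploy.
az functionapp config appsettings set \
  --name "$APP_NAME" \
  --resource-group "$RESOURCE_GROUP" \
  --settings SCM_DO_BUILD_DURING_DEPLOYMENT=true
```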

&lt;p&gt;Thank you for your time!&lt;/p&gt;




&lt;p&gt;If you have technical questions or suggestions, let's discuss them here, on &lt;a href="https://github.com/MrHassanMurtaza" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, or say hi to me on &lt;a href="https://www.linkedin.com/in/mrhassanmurtaza/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>azure</category>
      <category>functions</category>
      <category>jenkins</category>
    </item>
    <item>
      <title>Wrapper over AWS Session Manager to SSH using Public/Private IP</title>
      <dc:creator>Hassan Murtaza</dc:creator>
      <pubDate>Tue, 17 Mar 2020 11:50:06 +0000</pubDate>
      <link>https://dev.to/mrhassanmurtaza/wrapper-over-aws-session-manager-to-ssh-using-public-private-ip-4ape</link>
      <guid>https://dev.to/mrhassanmurtaza/wrapper-over-aws-session-manager-to-ssh-using-public-private-ip-4ape</guid>
      <description>&lt;p&gt;Previously on AWS, we had two problems with establishing SSH sessions to EC2 instances.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It was hard to manage SSH keys, especially when we had a lot of servers.&lt;/li&gt;
&lt;li&gt;It was hard to open sessions to private instances, since you needed a bastion host to do so.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;AWS Session Manager solved both of these issues for us. Moreover, it is an AWS-centric solution, so we preferred it.&lt;/p&gt;

&lt;p&gt;However, we still needed a way to establish SSH connections using a public or private IP, since we had been SSHing by IP before. The AWS CLI doesn't support this out of the box (it only accepts an instance-id), so I wrote a wrapper script over the AWS CLI to achieve it.&lt;/p&gt;
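&lt;p&gt;The core of such a wrapper is resolving an IP to an instance-id before calling &lt;code&gt;aws ssm start-session --target &amp;lt;instance-id&amp;gt;&lt;/code&gt;. Below is a minimal sketch of that lookup, not the script itself: a boto3-style EC2 client is passed in so it can be exercised without AWS credentials, and &lt;code&gt;private-ip-address&lt;/code&gt; / &lt;code&gt;ip-address&lt;/code&gt; are the standard &lt;code&gt;describe-instances&lt;/code&gt; filters for private and public IPs.&lt;/p&gt;

```python
def find_instance_id(ec2_client, ip):
    """Resolve an EC2 instance-id from a private or public IP.

    Session Manager only accepts instance-ids, so a wrapper has to do
    this lookup first. ec2_client is a boto3-style EC2 client
    (normally boto3.client('ec2')), injected for testability.
    """
    # Try the private-IP filter first, then fall back to the public one.
    for ip_filter in ("private-ip-address", "ip-address"):
        resp = ec2_client.describe_instances(
            Filters=[{"Name": ip_filter, "Values": [ip]}]
        )
        for reservation in resp["Reservations"]:
            for instance in reservation["Instances"]:
                return instance["InstanceId"]
    return None
```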

&lt;p&gt;Repository link: &lt;a href="https://github.com/MrHassanMurtaza/ec2-ssh/"&gt;https://github.com/MrHassanMurtaza/ec2-ssh/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ec2</category>
      <category>aws</category>
      <category>ssm</category>
      <category>ssh</category>
    </item>
  </channel>
</rss>
