<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Murtaza 🐳</title>
    <description>The latest articles on DEV Community by Murtaza 🐳 (@kazimurtaza).</description>
    <link>https://dev.to/kazimurtaza</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F737736%2F13c0367a-3e9d-49db-bf0e-e3fad25e2a47.jpg</url>
      <title>DEV Community: Murtaza 🐳</title>
      <link>https://dev.to/kazimurtaza</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kazimurtaza"/>
    <language>en</language>
    <item>
      <title>AWS+SAP: AWS Cognito and JWT w/ Exposed RFC in REST</title>
      <dc:creator>Murtaza 🐳</dc:creator>
      <pubDate>Sat, 11 Feb 2023 06:29:07 +0000</pubDate>
      <link>https://dev.to/aws-builders/awssap-aws-cognito-and-jwt-w-exposed-rfc-in-rest-2n33</link>
      <guid>https://dev.to/aws-builders/awssap-aws-cognito-and-jwt-w-exposed-rfc-in-rest-2n33</guid>
      <description>&lt;p&gt;_Note; This is technically part 2 focusing more on how to configure AWS Cognito to work with SAP API Management; if you are interested in SAP API Management and policies, go &lt;a href="https://dev.to/kazimurtaza/how-to-jwt-with-sap-api-management-3j0k"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of our customers who runs SAP workloads on Amazon Web Services (AWS) was interested in reviewing how to securely integrate their user pool with a backend-exposed RFC without putting anything at risk. To help shed light on the matter, we dove into the topic and put together this blog post, which discusses best practices for securely integrating a user pool with an exposed RFC for those running SAP workloads on AWS. It is suitable for seasoned SAP professionals and beginners alike.&lt;/p&gt;

&lt;p&gt;Securely integrating a user pool with an exposed RFC is crucial for organizations running SAP workloads on AWS. AWS offers various services that can be leveraged to secure the integration process, but it can take time to determine the best approach. In this blog post, we’ll focus on using AWS Cognito and JSON Web Tokens (JWT) to secure the integration between a user pool and an exposed RFC in REST.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Cognito:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS Cognito is a user management and authentication service that provides sign-up and sign-in functionality for web and mobile applications. It integrates with other AWS services, including AWS AppSync, AWS Lambda, and AWS Identity and Access Management (IAM). Cognito provides a secure and scalable solution for user authentication, which is essential for ensuring your integration’s security.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;JSON Web Tokens (JWT):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;JWT is a compact, URL-safe means of representing claims to be transferred between two parties. JWT tokens contain encoded information, including user identity, which can be used to authenticate the user and authorize access to the application or API. JWT tokens are signed, ensuring the token’s authenticity and integrity. Using JWT tokens, you can securely authenticate a user and grant access to the exposed RFC without risking the user’s credentials.&lt;/p&gt;
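&lt;p&gt;To make that structure concrete, here is a minimal Python sketch of how a JWT is put together and taken apart: three base64url-encoded segments (header, payload, signature) joined by dots. The header and payload values below are made up for illustration, and decoding alone proves nothing — the signature must still be verified, which in our setup is exactly what SAP API Management does before letting a request through.&lt;/p&gt;

```python
import base64
import json

def b64url_encode(data: dict) -> str:
    """Base64url-encode a dict the way JWT segments are encoded (no padding)."""
    raw = json.dumps(data, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).decode().rstrip("=")

def b64url_decode(segment: str) -> dict:
    """Decode one base64url JWT segment back into a dict, restoring padding."""
    padding = "=" * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(segment + padding))

# Build a toy token: header.payload.signature (values are illustrative only)
header = {"alg": "RS256", "kid": "demo-key-id"}
payload = {"sub": "demo-user", "token_use": "access", "exp": 1700000000}
token = ".".join([b64url_encode(header), b64url_encode(payload), "fake-signature"])

# Anyone can decode the claims; never trust them before verifying the signature
h, p, _ = token.split(".")
print(b64url_decode(h)["alg"])   # RS256
print(b64url_decode(p)["sub"])   # demo-user
```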

&lt;p&gt;&lt;strong&gt;Integration Steps:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a user pool in AWS Cognito and configure the necessary settings.&lt;/li&gt;
&lt;li&gt;Create a REST API endpoint in SAP API Management to expose the RFC, wrapped using SAP Cloud Integration for request and response transformation.&lt;/li&gt;
&lt;li&gt;Secure the REST API endpoint in SAP API Management using the AWS Cognito certificate and configure the necessary authorization policies.&lt;/li&gt;
&lt;li&gt;Pass the JWT token in the request header when accessing the exposed RFC.&lt;/li&gt;
&lt;li&gt;Validate the JWT token in SAP API Management Policies.&lt;/li&gt;
&lt;li&gt;Grant or deny access to the exposed RFC based on the validation result.&lt;/li&gt;
&lt;/ol&gt;
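&lt;p&gt;Step 4 of the list above can be sketched as follows; the endpoint URL is a placeholder for whatever your SAP API Management proxy actually exposes:&lt;/p&gt;

```python
# Hypothetical proxy URL; substitute the endpoint exposed by SAP API Management.
API_URL = "https://your-apim-host/rfc-demo/v1/details"

def build_request_headers(jwt_token: str) -> dict:
    """The JWT travels in the Authorization header as a Bearer token."""
    return {
        "Authorization": "Bearer " + jwt_token,
        "Accept": "application/json",
    }

headers = build_request_headers("eyJraWQ...truncated")
print(headers["Authorization"].startswith("Bearer "))  # True
```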

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--b9O3nFSt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ATxnPT2trwwIP029ZTn5Kmw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--b9O3nFSt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ATxnPT2trwwIP029ZTn5Kmw.png" alt="" width="880" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let us go through the flow shown in the diagram above:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The user sends a request to the Commerce Storefront with a valid username and password.&lt;/li&gt;
&lt;li&gt;The server authenticates the user and creates a unique JWT token with help from Cognito.&lt;/li&gt;
&lt;li&gt;The server sends the JWT token back to the user.&lt;/li&gt;
&lt;li&gt;The user then stores the JWT token in their local storage.&lt;/li&gt;
&lt;/ol&gt;

&lt;ol start="5"&gt;
&lt;li&gt;To retrieve details from the exposed RFC, the user sends a request to SAP API Management with the JWT token.&lt;/li&gt;
&lt;li&gt;The server verifies the JWT token and grants access to the requested resource.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;How to configure AWS Cognito:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a user pool&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qJKQWe1V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Aw3pFdHcTLYZaQmfCzGfx2w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qJKQWe1V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Aw3pFdHcTLYZaQmfCzGfx2w.png" alt="" width="880" height="605"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Configure the password policy.
AWS offers enhanced security settings for passwords, as well as the option to enforce MFA or not;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yF227aG5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/900/1%2Ayrbf-hO8Y9MrAHV4z-X3-A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yF227aG5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/900/1%2Ayrbf-hO8Y9MrAHV4z-X3-A.png" alt="" width="880" height="593"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KbMDUqUF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/807/1%2AaMF7UZGYn8n0HikVGb036A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KbMDUqUF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/807/1%2AaMF7UZGYn8n0HikVGb036A.png" alt="" width="807" height="695"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also choose which account-recovery options to set. Next, configure the sign-in experience to best suit your environment; since this is a POC and we are developers, we will mostly play with the API(s) and won’t be too worried about that for now.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Name your user pool&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nxmmbOE7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/824/1%2Ay6W7vxjWNtrViXBiryWxiA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nxmmbOE7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/824/1%2Ay6W7vxjWNtrViXBiryWxiA.png" alt="" width="824" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;An important step, the app client configuration:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--merGG5SJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/799/1%2A_lp46QFFiaPFxQrpAtm0iA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--merGG5SJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/799/1%2A_lp46QFFiaPFxQrpAtm0iA.png" alt="" width="799" height="854"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We do not require the Hosted UI for login and sign-up pages, since everything will be handled by our Commerce Platform using API(s) to communicate with AWS Cognito. &lt;strong&gt;Our Storefront will not store any secrets, since that is how security incidents occur. Choose Public client&lt;/strong&gt;: browser-based Cognito API requests are made from user systems that cannot be trusted with client secrets.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fMvj6Wkv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/728/1%2AZB0KUdxz0P6cpM4nT6RVUw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fMvj6Wkv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/728/1%2AZB0KUdxz0P6cpM4nT6RVUw.png" alt="" width="728" height="281"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enable the user flows shown above to allow API calls from our storefront.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qBbf2r7k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2At845Vjje7kxqEzmeDoP9QA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qBbf2r7k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2At845Vjje7kxqEzmeDoP9QA.png" alt="" width="880" height="116"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The user pool was successfully created.&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;Extract the signing certificate:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7ypze7jI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AQtRA_ifXAmu-SYYNIfboxQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7ypze7jI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AQtRA_ifXAmu-SYYNIfboxQ.png" alt="" width="880" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mv8cg1K5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/535/1%2AEBmXB5Qjvwgq3e5wMpAPyw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mv8cg1K5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/535/1%2AEBmXB5Qjvwgq3e5wMpAPyw.png" alt="" width="535" height="502"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Copy and download this certificate because SAP APIM will use this to validate the JWT tokens sent in requests.&lt;/em&gt;&lt;/p&gt;
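&lt;p&gt;As an aside, Cognito also publishes the public signing keys for a user pool as a JSON Web Key Set (JWKS) at a well-known URL derived from the region and user pool ID, which is handy if you prefer fetching keys over copying certificates. The region and pool ID below are placeholders:&lt;/p&gt;

```python
# Placeholder values; substitute your own region and user pool ID.
AWS_REGION = "us-east-1"
USER_POOL_ID = "us-east-1_demo"

# Cognito serves the user pool's public signing keys at this well-known path:
jwks_url = (
    "https://cognito-idp." + AWS_REGION + ".amazonaws.com/"
    + USER_POOL_ID + "/.well-known/jwks.json"
)
print(jwks_url)
```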

&lt;ol start="6"&gt;
&lt;li&gt;Populate the user pool with a demo user; we will use this user shortly to verify that the user pool is configured correctly.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5XpaU8aJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/776/1%2AxJn8h5r5RfQNlXIFBEFZyg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5XpaU8aJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/776/1%2AxJn8h5r5RfQNlXIFBEFZyg.png" alt="" width="776" height="729"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When we create the user, we will be asked to force a password change to confirm the user; since we do not have a Hosted UI or anything similar, this can prove tricky for the uninitiated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_GvC96AW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A3SZvi1LI0YNTjDsDa3qDdQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_GvC96AW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A3SZvi1LI0YNTjDsDa3qDdQ.png" alt="" width="880" height="131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="7"&gt;
&lt;li&gt;Call the API to retrieve the token:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl --location --request POST 'https://cognito-idp.{{AWS_REGION}}.amazonaws.com/{{USER_POOL_ID}}' \
--header 'X-Amz-Target: AWSCognitoIdentityProviderService.InitiateAuth' \
--header 'Content-Type: application/x-amz-json-1.1' \
--header 'Accept: application/json' \
--data-raw '{ 
    "AuthParameters": 
    { 
        "USERNAME": "{{cognitoUserName}}",
        "PASSWORD": "{{cognitoUserPassword}}"
    }, 
    "AuthFlow": "USER_PASSWORD_AUTH",
    "ClientId": "{{cognitoClientId}}" 
}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;To confirm: when you call the API to request the JWT token, you will be greeted with this response body;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "ChallengeName": "NEW_PASSWORD_REQUIRED",
    "ChallengeParameters": {
        "USER_ID_FOR_SRP": "1a691d76-90f0-3632-b43f-432bbc3c9521",
        "requiredAttributes": "[]",
        "userAttributes": "{\"email\":\"qazi.murtaza@faircg.com\"}"
    },
    "Session": "AYABeP7KqCHXG4jBT-qEaysPJVsAHQABAAdTZXJ2aWNlABBDb2duaXRvVXNlclBvb2xzAAEAB2F3cy1rbXMAS2Fybjphd3M6a21zOnVzLWVhc3QtMTo3NDU2MjM0Njc1NTU6a2V5L2IxNTVhZmNhLWJmMjktNGVlZC1hZmQ4LWE5ZTA5MzY1M2RiZQC4AQIBAHiG0oCCDoro3IaeecGyxCZJOVZkUqttbPnF4J7Ar-5byAHO9OhZAb2T4rq0VCQMI_JOAAAAfjB8BgkqhkiG9w0BBwagbzBtAgEAMGgGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQM9uEKK3DcQe221qK4AgEQgDvVSvNMuXn8bkUQZm4g43xJN-o3LgAEUkUIqIrdhsggrTD4z9EvydNHKGTDAvgByNcDioFr-Dfkd9A0iAIAAAAADAAAEAAAAAAAAAAAAAAAAABx02tlgAn--Caj8aCy95pA _____ wAAAAEAAAAAAAAAAAAAAAEAAADVbgpubnsc-3rD1_9E6S-ahI0xjKa43e4KkLI_V2NsmAPMyHF5D73_rdUrl0XwhtKKN1qmz7WgYWwwRiaN3DQcAWB9Oa_ubLnuif2nRw_PZZx705iVCfgEs79B1rIxxEl9KC8wE6fawCMm-WgqUUqSG_5QP3jHDSQspY_Hb4hsI1qv9pPL3Ju81OfcbPwf0mnR5nk4DMXzwNzm9c5_nSHuzJejqunMEh7xAPI7q0kdL6z17BPmgPGhgTc29nheuBNt9pMgDPQ94kLV34cFfH7F78ZkMkObdjsKXm_wdrqqZCYtxx5BIg"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Don’t fret; call the request below to reset the password and you will be golden. Note the headers; they are important.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl --location --request POST 'https://cognito-idp.{{AWS_REGION}}.amazonaws.com/{{USER_POOL_ID}}' \
--header 'X-Amz-Target: AWSCognitoIdentityProviderService.RespondToAuthChallenge' \
--header 'Content-Type: application/x-amz-json-1.1' \
--header 'Accept: application/json' \
--data-raw '{
    "ChallengeName": "NEW_PASSWORD_REQUIRED",
    "ChallengeResponses": {
        "USERNAME": "{{cognitoUserName}}",
        "NEW_PASSWORD": "{{cognitoUserPassword}}"
    },
    "ClientId": "{{cognitoClientId}}",
    "Session": "AY....Insert the session received in the preceding call"
}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Calling the first API again will yield the JWT token with a 200 OK response.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "AuthenticationResult": {
        "AccessToken": "eyJraWQ...",
        "ExpiresIn": 3600,
        "IdToken": "eyJraWQiO...",
        "RefreshToken": "eyJjdHk....",
        "TokenType": "Bearer"
    },
    "ChallengeParameters": {}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
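&lt;p&gt;Picking the token out of that response is straightforward; here is a small Python sketch (token values truncated as above). Note that it is typically the access token, not the ID token, that the API layer expects for authorization:&lt;/p&gt;

```python
import json

# The successful InitiateAuth response shown above (token values truncated)
response_body = json.loads("""
{
    "AuthenticationResult": {
        "AccessToken": "eyJraWQ...",
        "ExpiresIn": 3600,
        "IdToken": "eyJraWQiO...",
        "RefreshToken": "eyJjdHk...",
        "TokenType": "Bearer"
    },
    "ChallengeParameters": {}
}
""")

result = response_body["AuthenticationResult"]
# Combine TokenType and AccessToken into the Authorization header value
bearer_header = result["TokenType"] + " " + result["AccessToken"]
print(bearer_header)  # Bearer eyJraWQ...
```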



&lt;p&gt;Verify the token at &lt;a href="http://jwt.io/"&gt;JWT.IO&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ly3899Na--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/629/1%2AU_RZvcJo3_ClilmQrO6dsg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ly3899Na--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/629/1%2AU_RZvcJo3_ClilmQrO6dsg.png" alt="" width="629" height="641"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to configure SAP API Management for JWT&lt;/strong&gt; ; read &lt;a href="https://kazimurtaza.medium.com/how-to-jwt-with-sap-api-management-dba9d712615f"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In conclusion, using AWS Cognito and JWT tokens is a secure and effective way to integrate a user pool with an exposed RFC for organizations running SAP workloads on AWS. By following the steps outlined in this post, you can help ensure that your integration is secure. Whether you’re a seasoned SAP professional or just starting out, we hope this post has provided valuable insights into best practices for securing your integration.&lt;/p&gt;




</description>
      <category>authentication</category>
      <category>awscognito</category>
      <category>saprest</category>
      <category>sap</category>
    </item>
    <item>
      <title>how to JWT with SAP API Management</title>
      <dc:creator>Murtaza 🐳</dc:creator>
      <pubDate>Sat, 11 Feb 2023 06:16:20 +0000</pubDate>
      <link>https://dev.to/kazimurtaza/how-to-jwt-with-sap-api-management-3j0k</link>
      <guid>https://dev.to/kazimurtaza/how-to-jwt-with-sap-api-management-3j0k</guid>
      <description>&lt;p&gt;&lt;strong&gt;SAP API Management&lt;/strong&gt; is a cloud-based, API-first platform for developing and managing APIs. It enables organizations to securely expose data, systems, and services from SAP and other sources. With SAP API Management, companies can leverage their existing investments in SAP and non-SAP systems while providing a unified, modern API layer to build, scale, and manage their APIs. This comprehensive approach to API management empowers organizations to accelerate their digital transformation and create new business opportunities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is JWT?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;JSON Web Token (JWT) is an open standard for securely transmitting information (e.g., authentication claims) between two parties. It is a compact and self-contained way of representing data, usually in the form of a JSON object. JWT is often used in web applications and API authentication, allowing users to transfer data using tokens securely. JWT tokens are signed with a secret key, ensuring that the data is not tampered with during transport. JWT is becoming increasingly popular due to its simplicity and flexibility, as it can be used in various scenarios where secure information exchange is needed.&lt;/p&gt;

&lt;p&gt;The diagram below illustrates the architectural implementation:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F71gs0fvi1sw477may9jl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F71gs0fvi1sw477may9jl.png" width="500" height="506"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;SAP API Management using JWT&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Our next question would be: what purpose does an identity provider serve?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
An identity provider (IdP) is a service or system that gives users secure access to applications and other services in a single sign-on environment. It is responsible for authenticating and authorizing user access and for securely managing user identities. Additionally, it can provide single sign-on (SSO) access to multiple applications or websites, allowing users to log in once and securely access multiple services without needing multiple logins.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let us go through the flow shown in the diagram above:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The user sends a request to the IdP with a valid username and password.&lt;/li&gt;
&lt;li&gt;The server authenticates the user and creates a unique JWT token.&lt;/li&gt;
&lt;li&gt;The server sends the JWT token back to the user.&lt;/li&gt;
&lt;li&gt;The user then stores the JWT token in their local storage.&lt;/li&gt;
&lt;li&gt;The user requests the SAP API Management with the JWT token.&lt;/li&gt;
&lt;li&gt;The server verifies the JWT token and grants access to the requested resource.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Some examples of IdPs are:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Amazon Cognito&lt;/li&gt;
&lt;li&gt;SAP Customer Data Cloud&lt;/li&gt;
&lt;li&gt;Auth0&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;What happens inside SAP API Management?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Multiple policies are used, all of which are available out of the box in SAP API Management:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftu14cq90i3po2lubz7oc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftu14cq90i3po2lubz7oc.png" width="681" height="333"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;SAP API Management Proxy Pre-Flow&lt;/em&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Extract JWT&lt;/strong&gt; policy is used to retrieve the JWT token and store it in a variable for later use.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;!-- Extract content from the request or response messages, including headers, URI paths, JSON/XML payloads, form parameters, and query parameters --&amp;gt;
&amp;lt;ExtractVariables async="true" continueOnError="false" enabled="true" xmlns='http://www.sap.com/apimgmt'&amp;gt;
 &amp;lt;!-- the source variable which should be parsed --&amp;gt;
 &amp;lt;Source clearPayload="false"&amp;gt;request&amp;lt;/Source&amp;gt;
 &amp;lt;!-- Specifies the XML-formatted message from which the value of the variable will be extracted --&amp;gt;
    &amp;lt;Header name="Authorization"&amp;gt;
        &amp;lt;Pattern ignoreCase="true"&amp;gt;Bearer {jwt}&amp;lt;/Pattern&amp;gt;
    &amp;lt;/Header&amp;gt;
 &amp;lt;VariablePrefix&amp;gt;inbound&amp;lt;/VariablePrefix&amp;gt;
&amp;lt;/ExtractVariables&amp;gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
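&lt;p&gt;The &lt;code&gt;Bearer {jwt}&lt;/code&gt; pattern above strips the scheme prefix and captures the token into the &lt;code&gt;inbound.jwt&lt;/code&gt; variable. In Python terms, it behaves roughly like this sketch:&lt;/p&gt;

```python
import re

def extract_jwt(authorization_header):
    """Rough equivalent of the Extract JWT policy: a case-insensitive
    'Bearer {jwt}' match against the Authorization header value."""
    match = re.match(r"bearer\s+(\S+)\s*$", authorization_header, re.IGNORECASE)
    return match.group(1) if match else None

print(extract_jwt("Bearer abc.def.ghi"))   # abc.def.ghi
print(extract_jwt("Basic dXNlcjpwYXNz"))   # None
```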



&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Verify JWT&lt;/strong&gt; policy verifies the token against the certificate saved in the policy, which we retrieved earlier when implementing the API proxy. The certificate could also be fetched from a key-value map, but for this implementation we did not overcomplicate things.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;!-- Verify JWT TOken --&amp;gt;
&amp;lt;VerifyJWT async="false" continueOnError="false" enabled="true" xmlns="http://www.sap.com/apimgmt"&amp;gt;
&amp;lt;Algorithm&amp;gt;RS256&amp;lt;/Algorithm&amp;gt;
&amp;lt;Source&amp;gt;inbound.jwt&amp;lt;/Source&amp;gt;
&amp;lt;PublicKey&amp;gt;
  &amp;lt;Value&amp;gt;
    -----BEGIN CERTIFICATE-----
                -----END CERTIFICATE-----
        &amp;lt;/Value&amp;gt;
&amp;lt;/PublicKey&amp;gt;
&amp;lt;!--&amp;lt;Subject&amp;gt;subject-subject&amp;lt;/Subject&amp;gt;--&amp;gt;
&amp;lt;Issuer&amp;gt;https://dev-xxxx.au.auth0.com/&amp;lt;/Issuer&amp;gt;
&amp;lt;Audience&amp;gt;https://dev-xxxx.au.auth0.com/api/v2/&amp;lt;/Audience&amp;gt;
&amp;lt;!--&amp;lt;AdditionalClaims&amp;gt;--&amp;gt;
&amp;lt;!-- &amp;lt;Claim name="additional-claim-name" type="string"&amp;gt;additional-claim-value-goes-here&amp;lt;/Claim&amp;gt;--&amp;gt;
&amp;lt;!--&amp;lt;/AdditionalClaims&amp;gt;--&amp;gt;
&amp;lt;/VerifyJWT&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;Decode JWT&lt;/strong&gt; policy decodes the token into JSON so that individual claim values and scopes can be accessed.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;!-- Decode JWT TOken --&amp;gt;
&amp;lt;DecodeJWT async="false" continueOnError="false" enabled="true" xmlns="http://www.sap.com/apimgmt"&amp;gt;
&amp;lt;Source&amp;gt;inbound.jwt&amp;lt;/Source&amp;gt;
&amp;lt;/DecodeJWT&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we could send the encoded JWT to the microservice for further validation; however, this is not required, because the request has already been authenticated.&lt;/p&gt;

&lt;p&gt;In conclusion, SAP API Management is an incredible solution for businesses of all sizes. With an intuitive user interface and comprehensive toolset, SAP API Management makes it easier to manage security, control, and monetize APIs.&lt;/p&gt;

&lt;p&gt;The key to success in today’s digital world is securely and efficiently exposing data, services and applications to customers, partners and employees. SAP API Management provides the perfect solution to this challenge, with a comprehensive suite of features designed to make it easier to build, secure and manage APIs.&lt;/p&gt;

&lt;p&gt;From the moment you open the SAP API Management dashboard, you’ll appreciate its ease of use. All the tools and features you need are clearly laid out, with a simple drag-and-drop interface for creating new APIs. The intuitive user interface allows you to quickly and easily configure your APIs and access control settings. You can easily integrate with other systems, such as Salesforce or Microsoft Dynamics, or use the built-in analytics and reporting tools to get real-time insights into your API usage.&lt;/p&gt;

&lt;p&gt;SAP API Management also provides out-of-the-box security features. It includes a variety of authentication methods, such as OAuth 2.0, JWT and OpenID Connect, that ensure your APIs remain secure. Additionally, it provides an easy-to-use visual editor for creating custom authorization policies, so you can ensure only the users you want have access to your APIs.&lt;/p&gt;

&lt;p&gt;Finally, SAP API Management makes it easy to monetize your APIs. It provides tools for setting up subscription plans and charging for usage, allowing you to unlock additional revenue streams.&lt;/p&gt;

&lt;p&gt;In short, SAP API Management offers an all-in-one solution to manage, control and monetize APIs. Its intuitive user interface and comprehensive toolset make it the perfect solution for businesses of any size.&lt;/p&gt;




</description>
      <category>sapsecurity</category>
      <category>apigateway</category>
      <category>authentication</category>
      <category>saptraining</category>
    </item>
    <item>
      <title>Story of Idiomatic Programmer</title>
      <dc:creator>Murtaza 🐳</dc:creator>
      <pubDate>Wed, 18 Jan 2023 10:33:58 +0000</pubDate>
      <link>https://dev.to/kazimurtaza/story-of-idiomatic-programmer-50b2</link>
      <guid>https://dev.to/kazimurtaza/story-of-idiomatic-programmer-50b2</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0l4dwu02w9o3wjs1yo5x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0l4dwu02w9o3wjs1yo5x.png" width="800" height="344"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;TIOBE Index for 2023&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;C is not the first programming language I learned; that was GW-Basic. But C is the language I fell in love with: it was the first wholesome programming language I saw, after working my way up from GW-Basic and JavaScript.&lt;/p&gt;

&lt;p&gt;The C programming language is known for its low-level access to memory and its powerful features for building complex systems. The intricacy of structs in C, which allow for creating custom data types, can be quite powerful but also challenging to work with. The ability to define structs with different types of variables, and to create arrays of structs, allows for efficient data manipulation and organization.&lt;/p&gt;

&lt;p&gt;The consistent convenience of arrays in C is also a powerful feature: it lets programmers easily store and manipulate large amounts of data in a structured way. And the ability to take the address of a variable using the “&amp;amp;” operator, which allows direct access to memory addresses, is a powerful tool that gives the programmer low-level control over the system.&lt;/p&gt;

&lt;p&gt;However, the power of C comes at a cost: because it allows direct memory manipulation and low-level access, it also makes it easy to create bugs and security vulnerabilities if not used properly. The phrase “shooting yourself in the foot” is often used to describe the mistakes that can occur when working with C, referring to how easily one can cause unintended behaviour or crashes by misusing pointers or other low-level features.&lt;/p&gt;

&lt;p&gt;Overall, C is a powerful and challenging language that can be quite complex to work with, but it also provides the programmer with a great deal of control and flexibility. It can be considered the wild west of programming languages: it gives the programmer a lot of freedom and power, but also requires a great deal of caution and expertise.&lt;/p&gt;

&lt;p&gt;As fate would have it, my first job involved working with C++. The opportunity to work with C++ right out of the gate was a great way to get started in my career. Not only did it allow me to learn the language in a professional setting, but it also exposed me to real-world problems and solutions that I could apply to future projects. C++ is a complex language and having the chance to work with it from the start allowed me to develop a strong foundation in programming concepts and best practices that I could build upon as I progressed in my career.&lt;/p&gt;

&lt;p&gt;Inevitably I got side-tracked and took on some odd jobs that led me to writing PHP and Java, and to teaching assembly language. There was a silver lining in that experience.&lt;/p&gt;

&lt;p&gt;Working with different languages and technologies allowed me to expand my skill set and become a more versatile programmer. I learned new programming paradigms, different approaches to problem-solving, and gained an understanding of the strengths and weaknesses of different languages. The experience of working with different languages helped me to understand the trade-offs and how to choose the right tool for the job.&lt;/p&gt;

&lt;p&gt;Teaching assembly language was particularly beneficial, as it gave me a deeper understanding of how computers work at a low level and how code written in high-level languages is translated to machine code. This knowledge applies to many other languages and frameworks, making me a more proficient and efficient programmer.&lt;/p&gt;

&lt;p&gt;Additionally, working with PHP and Java opened me up to the world of web development and introduced me to new technologies and frameworks. This helped me to understand the different types of systems and applications that can be built and the challenges that come with developing for the web.&lt;/p&gt;

&lt;p&gt;In conclusion, while getting side-tracked into working with different languages and technologies may have seemed like a detour at the time, it ultimately helped me become a more versatile and skilled programmer. The experience was invaluable, even if, being early in my career, I was not aware of this wisdom at the time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I was at a crossroads, and switching jobs was becoming hard because everyone wanted an expert in one language.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While I was able to write adequate code and reason about algorithmic complexity, I was far from an expert and could not match someone with the same number of years spent in one particular language. I did a bit of research and realised I was not the only one stuck in this predicament; it is a common experience for programmers who have worked with multiple languages.&lt;/p&gt;

&lt;p&gt;With some stroke of luck, I was hired at a brand-new start-up as a software engineer. The pay was not up to standard, but beggars cannot be choosers. Initially, I was asked to build backend API(s) in PHP, but since I was also responsible for deploying my own application to a production-ready environment, my company and I identified that I had a knack for this. The next iteration of the project was in Java, and because we had already gone live once, switching traffic from one environment to the next required a blue-green deployment. After pulling that off successfully, I was promoted to DevOps Engineer (read &lt;a href="https://medium.com/@kazimurtaza/there-is-no-such-thing-as-a-devops-engineer-a426209c3ab5" rel="noopener noreferrer"&gt;here&lt;/a&gt; what I think of the title DevOps Engineer).&lt;/p&gt;

&lt;p&gt;Evaluating my proficiency in a programming language was challenging, because I never belonged to one paradigm and many factors contribute to a skill set. Another way to evaluate my skills was to consider the types of projects I had worked on and the level of complexity I was able to handle: for example, whether I had experience working on large-scale systems or had tackled complex problems.&lt;/p&gt;

&lt;p&gt;Ultimately, I realised I was an “&lt;a href="https://idiomaticprogrammers.com/" rel="noopener noreferrer"&gt;Idiomatic Programmer&lt;/a&gt;”, which to me means someone who understands multiple languages and can write code that is well-written and in line with the conventions and best practices of each of them. This type of programmer has a deep understanding of the similarities and differences between languages and can apply knowledge of one language to another. They can switch between different languages and frameworks and write idiomatic code in any language they are familiar with.&lt;/p&gt;

&lt;p&gt;A programmer who is idiomatic in multiple languages can quickly adapt to new projects and technologies. They can understand the problem domain and design a solution that is optimal for the given requirements, regardless of the language they are using. They are also able to collaborate effectively with other developers, regardless of the language they are using, and can understand and read other people’s code more easily.&lt;/p&gt;

&lt;p&gt;Additionally, this type of programmer can be very valuable in a cross-functional team, as they are able to bridge the gap between different languages and technologies. They can act as a translator, helping team members who are not fluent in a particular language understand the requirements and design of a system.&lt;/p&gt;

&lt;p&gt;This mindset has worked very well for me at my current company. I not only worked as a DevOps Engineer and gained the two AWS certifications (Solutions Architect and Developer Associate) for which I was hired, but I was also exposed to Java again as an SAP Commerce Developer, and worked as a cloud expert (AWS Solutions Architect), an Integration Specialist on SAP Integration Suite, an SAP CPQ Consultant and a Performance Test Engineer. Quite recently I also dabbled in SAP BTP, with microservices written in JavaScript, and earned three SAP certifications (SAP FSM, CPQ and CDC).&lt;/p&gt;

&lt;p&gt;Now I am well aware that the industry is constantly evolving and new languages and technologies are emerging. Therefore, I have decided to focus on developing my skills as an Idiomatic Programmer: someone who can quickly adapt to new languages and technologies and write idiomatic code in any language they are familiar with. This strategy has served me well in the past, as I have been able to take on a wide range of projects and work with different teams and clients.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In conclusion&lt;/strong&gt;, my journey as a programmer has been an exciting one, and I am grateful for the opportunities I have had to work with different languages and technologies. I have learned a great deal and have been able to develop my skills and become a more versatile and proficient programmer. I believe that being an Idiomatic Programmer is the key to success in this constantly evolving industry.&lt;/p&gt;

</description>
      <category>programminglanguages</category>
      <category>versatileprogrammer</category>
      <category>careerdevelopment</category>
      <category>programmingparadigms</category>
    </item>
    <item>
      <title>AWS CLOUD DEVELOPMENT KIT — A TURING COMPLETE SOLUTION FOR INFRASTRUCTURE</title>
      <dc:creator>Murtaza 🐳</dc:creator>
      <pubDate>Sun, 28 Aug 2022 05:38:59 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-cloud-development-kit-a-turing-complete-solution-for-infrastructure-io3</link>
      <guid>https://dev.to/aws-builders/aws-cloud-development-kit-a-turing-complete-solution-for-infrastructure-io3</guid>
      <description>&lt;h3&gt;
  
  
  AWS cloud development kit — A Turing complete solution for infrastructure
&lt;/h3&gt;

&lt;p&gt;AWS Cloud Development Kit (CDK) was released in July 2019. AWS described it as a code-first approach to defining cloud application infrastructure. Back then, I was oblivious to how ground-breaking this would be. In fact, I doubt many realised the potential of CDK. It was only recently, when I attended a webinar from AWS, that I saw its potential. I was completely awestruck by how it could redefine the way we work with cloud resources.&lt;/p&gt;

&lt;p&gt;This blog discusses how infrastructure in the cloud has evolved, why AWS CDK is a Turing Complete solution and how it can impact the infrastructure we design in the cloud.&lt;/p&gt;

&lt;p&gt;First, we will explore some definitions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;What is a Turing Complete solution?&lt;/li&gt;
&lt;li&gt;What is AWS CDK?&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  What is a Turing Complete Solution?
&lt;/h3&gt;

&lt;p&gt;A general definition found on &lt;a href="https://stackoverflow.com/questions/7284/what-is-turing-complete#:~:text=A%20Turing%20Complete%20system%20means,to%20solve%20any%20computation%20problem."&gt;StackOverflow&lt;/a&gt; would be:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;“A Turing Complete system means a system in which a program can be written to find an answer (although with no guarantees regarding runtime or memory). So, if somebody says, “my new thing is Turing Complete,” that means in principle (although often not in practice), it could solve any computation problem.”&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A more adequate, simpler &lt;a href="https://softwareengineering.stackexchange.com/questions/132385/what-makes-a-language-turing-complete"&gt;definition&lt;/a&gt; in my opinion would be:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;“For example, an imperative language is Turing-complete if it has conditional branching (e.g., “if” and “goto” statements, or a “branch if zero” instruction; see one-instruction set computer) and the ability to change an arbitrary amount of memory (e.g., the ability to maintain an arbitrary number of data items).”&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So you don’t need a loop. You just need a conditional jump &lt;em&gt;(because with a conditional jump you can simulate loops)&lt;/em&gt;. That’s ultimately how a compiler translates loops into assembly.&lt;/p&gt;
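&lt;p&gt;As a toy illustration (my own sketch, not from the quoted definitions), here is a tiny machine in Python whose only control-flow instruction is “jump if not zero”. The program itself contains no loop construct, yet it sums 5+4+3+2+1 exactly as a while-loop would:&lt;/p&gt;

```python
# A toy machine whose only control flow is a conditional jump ("jnz").
# The program below has no loop construct, yet it computes 5+4+3+2+1.

def run(program):
    acc, counter, pc = 0, 5, 0
    while pc != len(program):      # the host simply steps the machine
        op, arg = program[pc]
        if op == "add_counter":    # acc += counter
            acc += counter
            pc += 1
        elif op == "dec_counter":  # counter -= 1
            counter -= 1
            pc += 1
        elif op == "jnz":          # jump to arg while counter is not zero
            pc = arg if counter != 0 else pc + 1
    return acc

# 0: acc += counter, 1: counter -= 1, 2: jump back to 0 while counter != 0
program = [("add_counter", None), ("dec_counter", None), ("jnz", 0)]
print(run(program))  # 15
```

&lt;p&gt;The conditional jump at instruction 2 is all it takes to re-run instructions 0 and 1, which is precisely how compilers lower loops to branches.&lt;/p&gt;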

&lt;h3&gt;
  
  
  What is AWS CDK?
&lt;/h3&gt;

&lt;p&gt;AWS CDK is a tool used to define your cloud infrastructure as code in one of five supported programming languages: TypeScript, JavaScript, Python, Java, or C#.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: Go language is supported in a developer preview.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites before starting with CDK
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Experience with popular AWS services,&lt;/li&gt;
&lt;li&gt;Experience working with AWS resources programmatically via the AWS SDK or the AWS CLI,&lt;/li&gt;
&lt;li&gt;Familiarity with &lt;a href="https://aws.amazon.com/cloudformation/"&gt;AWS CloudFormation&lt;/a&gt;, and&lt;/li&gt;
&lt;li&gt;Proficiency in the programming language you intend to use with the AWS CDK.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Concept
&lt;/h3&gt;

&lt;p&gt;AWS CDK uses “Constructs” in its framework:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A level 1 construct is a 1:1 CloudFormation definition of a resource in AWS; for example, &lt;a href="https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-s3.CfnBucket.html"&gt;CfnBucket&lt;/a&gt; represents the CloudFormation AWS::S3::Bucket.&lt;/li&gt;
&lt;li&gt;Level 2 is an abstraction of level 1, for example the &lt;a href="https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-s3.Bucket.html"&gt;s3.Bucket&lt;/a&gt; class.&lt;/li&gt;
&lt;li&gt;Finally, level 3 combines level 2 constructs into further abstractions, which results in best-practice patterns. These constructs are available in the AWS CDK core &lt;a href="https://docs.aws.amazon.com/cdk/api/latest/docs/aws-construct-library.html"&gt;AWS Construct Library&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--stNrT2iH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/963/0%2Agte7gFVGr_jiSqaK.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--stNrT2iH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/963/0%2Agte7gFVGr_jiSqaK.png" alt="" width="880" height="550"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this article, I’m using Python due to its easy-to-read syntax. I will also assume you have a bash or shell terminal at hand (i.e. WSL2 or a Linux OS), as that will make this tutorial easier to follow.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install Python 3+ (see how to &lt;a href="https://docs.aws.amazon.com/cdk/latest/guide/work-with-cdk-python.html"&gt;configure the Python prerequisites&lt;/a&gt;),&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/cdk/latest/guide/work-with.html#work-with-prerequisites"&gt;Configure AWS CLI&lt;/a&gt; → you would need an AWS account (access key and secret),&lt;/li&gt;
&lt;li&gt;Install Node.js and NPM. &lt;em&gt;Hint: use NVM for ease; the version used here is v14.17.1&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Install AWS CDK using NPM&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let us start by creating a workspace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir cdk-helloworld
cd cdk-helloworld
cdk init app --language python
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above commands will set up your environment and multiple files will show up.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ztvWxe22--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/219/0%2Ae3jNs_3ZIeIAFuyk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ztvWxe22--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/219/0%2Ae3jNs_3ZIeIAFuyk.png" alt="" width="219" height="245"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, execute the following pip command to gather the dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python -m pip install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we shall write some code. The below will be available to you after running the init command above. We are going to keep the level of difficulty at &lt;strong&gt;HelloWorld&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EsCLqWj0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/956/0%2A8akK-RUMuh2Df_61.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EsCLqWj0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/956/0%2A8akK-RUMuh2Df_61.png" alt="" width="880" height="646"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We need to import a level 2 curated construct now.&lt;/p&gt;

&lt;p&gt;First, run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python -m pip install aws-cdk.aws_ec2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;and then add at the top:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from aws_cdk import aws_ec2 as ec2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we will create a class and define a VPC and pass parameters inside:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--buayUvVn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/901/0%2A37TkQieTB7YsVi8I.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--buayUvVn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/901/0%2A37TkQieTB7YsVi8I.png" alt="" width="880" height="318"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When defining a VPC with a private subnet, we often need to define NAT gateways as well. To define a NAT gateway, all we had to do was set the parameter “nat_gateways=1”, which will pick the first available public subnet and automatically define a NAT gateway there. We also need to define an IP block for the VPC using the “cidr” parameter and two availability zones using “max_azs”. At the end, we have defined two subnet groups: private and public.&lt;/p&gt;

&lt;p&gt;Now behind the scenes, AWS CDK is being a bit intuitive, based on best practices. CDK knows we want a VPC with one NAT gateway, a /16 CIDR block and two availability zones. As mentioned before, we also require both a public and a private subnet group. CDK will automatically define two public and two private subnets across those two availability zones.&lt;/p&gt;

&lt;p&gt;The subnets will be carved out of the VPC range &lt;strong&gt;10.0.0.0/16&lt;/strong&gt;. With one private and one public subnet definition, that gives &lt;strong&gt;2 subnet definitions X 2 availability zones = 4 subnets&lt;/strong&gt;. If we had written &lt;strong&gt;“max_azs=3”&lt;/strong&gt;, it would have resulted in &lt;strong&gt;2 subnet definitions X 3 availability zones = 6 subnets&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;You could also define the CIDR block inside the subnet configuration for each subnet, or let CDK automatically divide the IP block equally among all four subnets.&lt;/p&gt;
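&lt;p&gt;As a rough illustration of that equal division (using Python’s standard ipaddress module rather than CDK itself; CDK’s actual default subnet sizing may differ), splitting 10.0.0.0/16 into four equal parts yields four /18 blocks:&lt;/p&gt;

```python
import ipaddress

# Growing the prefix by 2 bits (/16 to /18) splits the block into 2**2 = 4 subnets.
vpc_block = ipaddress.ip_network("10.0.0.0/16")
subnets = [str(net) for net in vpc_block.subnets(prefixlen_diff=2)]

print(subnets)
# ['10.0.0.0/18', '10.0.64.0/18', '10.0.128.0/18', '10.0.192.0/18']
```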

&lt;p&gt;Now we have enough to deploy something in the cloud using CDK successfully. If you execute “&lt;strong&gt;cdk synth&lt;/strong&gt;”, it will build out a CloudFormation template that you can review and then execute with “&lt;strong&gt;cdk deploy&lt;/strong&gt;”. This step will fail if you have not configured the AWS CLI locally. If the prerequisites are met and there are no syntax errors, you can review the progress of the deployment or stack in the AWS console.&lt;/p&gt;

&lt;p&gt;Next, let us define an instance:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jeDRmVZV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/559/0%2A5LcAf2IgdDpNkzSY.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jeDRmVZV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/559/0%2A5LcAf2IgdDpNkzSY.png" alt="" width="559" height="315"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, we define a machine image, because it will be required when defining the instance. ec2.&lt;a href="https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-ec2.MachineImage.html"&gt;MachineImage&lt;/a&gt; is a good example of a level 2 construct. The second part defines the details for the Instance level 2 construct: the instance_type, the machine image we created above, and the VPC in which the instance will be deployed. “&lt;em&gt;vpc = vpc&lt;/em&gt;” tells the ec2 Instance object to deploy this resource into the VPC we defined earlier.&lt;/p&gt;

&lt;p&gt;If we deploy the stack now, the instance will be deployed successfully and by default, route tables and security groups will be created. But where will the instance be deployed?&lt;/p&gt;

&lt;p&gt;There is no documented precedence for this; CDK chooses a subnet automatically. Upon repeating the deployment multiple times, it seemed that the subnet defined first took precedence, and relying on that will not work. Therefore, we need to select the subnet explicitly.&lt;/p&gt;

&lt;p&gt;Selecting the subnet was not as simple as expected. Since the subnets are created by the VPC construct, we first need to get hold of their objects.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jgUZlskL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/391/0%2AC-C7iF2PG65v5bss.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jgUZlskL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/391/0%2AC-C7iF2PG65v5bss.png" alt="" width="391" height="176"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This bit of code gave me the ability to filter, and now I have separate objects for the public and private subnet.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Bsdh8WGv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/688/0%2AR5sDGk7gOM5EEMrK.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Bsdh8WGv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/688/0%2AR5sDGk7gOM5EEMrK.png" alt="" width="688" height="167"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we define which subnet this instance will be deployed to; we could also filter the subnets further by ID or name.&lt;/p&gt;

&lt;p&gt;You can now deploy the stack by executing “&lt;strong&gt;cdk synth&lt;/strong&gt;” and “&lt;strong&gt;cdk deploy&lt;/strong&gt;”, but you will realise that the naming of the resources seems a bit off and not really appealing. In some other tool, e.g. Terraform or CloudFormation, we would be defining every resource one by one. Let us try something different: the TRUE power of using a pure programming language.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9UCVwkg0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/653/0%2AbA1GDciQyIY4Cub2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9UCVwkg0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/653/0%2AbA1GDciQyIY4Cub2.png" alt="" width="653" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Fairly simple, yes? CDK Tags add or update resource tags. The interesting part is the conditional jump, which makes infrastructure as code &lt;strong&gt;&lt;em&gt;“Turing Complete”&lt;/em&gt;&lt;/strong&gt;. Now we can define complex computational solutions and workflows with far more complex logic than was ever possible. AWS CDK gives developers the ability to manipulate infrastructure through code, providing more freedom and functionality than its predecessors. We can now use the same programming language to deploy our infrastructure as we use for runtime code, and we can reference our runtime code assets. We do not have to reinvent the wheel; we leverage the full power of an already established programming language and can apply software engineering principles to infrastructure. Using familiar programming languages, developers can accelerate the development process. I would go as far as saying, “&lt;strong&gt;Now this is truly DevOps&lt;/strong&gt;”.&lt;/p&gt;

&lt;p&gt;Using CDK, we can create higher levels of abstraction, compose types and create interfaces. For example, if you were to deploy a Lambda function, you could create it with default monitoring, alarms and an API Gateway instance.&lt;/p&gt;

&lt;p&gt;I would also like to highlight that the more complex the constructs we use, the more control we lose. How? Well, level 3 constructs are standardised libraries or interfaces, which means less say in how the infrastructure is created. Rest assured, level 3 constructs address specific use cases with best practices, but they will most likely have &lt;a href="https://github.com/aws/aws-cdk/pull/12391"&gt;issues&lt;/a&gt;. Integration can fail; e.g. the CloudFormation behind a level 3 CDK library construct breaks more often than level 1 constructs do, as Amazon follows the premise “release early, release often”. If you would like to forge your own path, you can always fork() the libraries and define what you need, or go down to level 1 or 2 and define the details required for the project.&lt;/p&gt;

&lt;p&gt;Basically, CDK adds a computational meta-layer on top of a purely declarative infrastructure spec. I suppose this also allows you to make conditional decisions on the infrastructure setup based on variables or other factors, which could be a time-saver when defining infrastructure for different environments (Dev, QA, PRD).&lt;/p&gt;

&lt;p&gt;I would like to close the blog with the thought that adding a Turing Complete language into the mix suddenly opens up opportunities for real “ &lt;strong&gt;Infrastructure as Code&lt;/strong&gt; ”.&lt;/p&gt;

&lt;p&gt;THANK YOU FOR READING 🙇&lt;/p&gt;

&lt;p&gt;Find the full code below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/usr/bin/env python3
from aws_cdk import core as cdk
from aws_cdk import core
from aws_cdk import aws_ec2 as ec2
from cdk-helloworld.cdk-helloworld-stack import CdkHelloworldStack
class CdkHelloworldStack(cdk.Stack):
    def __init__ (self, scope: cdk.Construct, id: str, **kwargs) -&amp;gt; None:
        super(). __init__ (scope, id, **kwargs)
       # VPC
        vpc = ec2.Vpc(self, "my-cdk-vpc",
            nat_gateways=1,
            cidr='10.0.0.0/16',
            max_azs=2,
            subnet_configuration=[
                ec2.SubnetConfiguration(name="private",subnet_type=ec2.SubnetType.PRIVATE),
               ec2.SubnetConfiguration(name="public",subnet_type=ec2.SubnetType.PUBLIC)
            ]
        )
        # Filter out subnets
        private_subnets = vpc.select_subnets(
            subnet_type=ec2.SubnetType.PRIVATE
        )
        public_subnets = vpc.select_subnets(
            subnet_type=ec2.SubnetType.PUBLIC
        )
        # AMI
        amzn_linux = ec2.MachineImage.latest_amazon_linux(
            generation=ec2.AmazonLinuxGeneration.AMAZON_LINUX_2,
            edition=ec2.AmazonLinuxEdition.STANDARD,
            virtualization=ec2.AmazonLinuxVirt.HVM,
            storage=ec2.AmazonLinuxStorage.GENERAL_PURPOSE
            )
        # Instance
        instance = ec2.Instance(self, "Instance",
            instance_type=ec2.InstanceType("t3.nano"),
            machine_image=amzn_linux,
            vpc = vpc,
            vpc_subnets = ec2.SubnetSelection(subnets=public_subnets.subnets)
        )
        # Tagging
        cdk.Tags.of(vpc).add("Name", "my-cdk-vpc")
       index = 1
        for subnet in public_subnets.subnets:
            cdk.Tags.of(subnet).add("Name", "public subnet "+str(index))
            index=index+1
        index = 1
        for subnet in private_subnets.subnets:
            cdk.Tags.of(subnet).add("Name", "private subnet "+str(index))
            index=index+1
app = cdk.App()
CdkHelloworldStack(app, "cdk-helloworld")
app.synth()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






</description>
      <category>infrastructure</category>
      <category>softwaredevelopment</category>
      <category>architecture</category>
      <category>devopstool</category>
    </item>
    <item>
      <title>THE PERPETUAL STATE OF ANXIETY WHEN WORKING IN THE CLOUD</title>
      <dc:creator>Murtaza 🐳</dc:creator>
      <pubDate>Thu, 11 Aug 2022 00:27:31 +0000</pubDate>
      <link>https://dev.to/aws-builders/the-perpetual-state-of-anxiety-when-working-in-the-cloud-4j5h</link>
      <guid>https://dev.to/aws-builders/the-perpetual-state-of-anxiety-when-working-in-the-cloud-4j5h</guid>
      <description>&lt;p&gt;Anyone who works in the cloud would know how it feels to constantly lose sleep over fears of exceeding budget. It would not be too far-fetched to assume that the sheer power of the cloud alone must have burned many startups. The question is, were they ready for the cloud?&lt;/p&gt;

&lt;p&gt;Before we move forward, you should know two things about me: I am an avid Reddit explorer, and the cloud is my bread and butter. More often than not, I run into posts expressing dismay over the alarmingly high costs of working in the cloud.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F682%2F0%2A5RndW45pbgbaWobp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F682%2F0%2A5RndW45pbgbaWobp.png"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;A screenshot of some Reddit posts about cloud billing.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;One of the incidents shared in a post on &lt;a href="https://www.reddit.com/r/googlecloud/comments/kaa5ew/we_burnt_72k_testing_firebase_cloud_run_and/" rel="noopener noreferrer"&gt;Reddit&lt;/a&gt; and then on &lt;a href="https://medium.com/milkie-way/we-burnt-72k-testing-firebase-cloud-run-and-almost-went-bankrupt-part-1-703bb3052fae" rel="noopener noreferrer"&gt;Medium&lt;/a&gt; by the developers of &lt;a href="https://tomilkieway.com/" rel="noopener noreferrer"&gt;Milkie Way&lt;/a&gt;, entitled “ &lt;strong&gt;We Burnt $72K testing Firebase + Cloud Run and almost went Bankrupt&lt;/strong&gt; ”, is very insightful. All the budget alerts did not work. The cloud is so heavily integrated that the scope of the incident was catastrophic. But what went wrong?&lt;/p&gt;

&lt;p&gt;As the load increased after deployment and sanity/smoke tests, Firebase quickly shifted from the free to the paid tier, and GCP budget alerts were sent out to the team within minutes. The bill rose exponentially, and they did not have time to react fast enough.&lt;/p&gt;

&lt;h3&gt;
  
  
  Being Proactive Instead of Reactive
&lt;/h3&gt;

&lt;p&gt;Cloud features have now evolved and become multi-dimensional and even more complex — something businesses should be careful about. To explain this more simply, imagine an incident where we have a Lambda function triggered by S3 whenever there is a change in the bucket. That same Lambda function saves logs to the same bucket. A recursive loop?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F710%2F0%2AAM3dnu_VhSNJlAaD.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F710%2F0%2AAM3dnu_VhSNJlAaD.png"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;A Demonstration of a Lambda Function that Saves Logs to the Same Bucket that Triggers it.&lt;/em&gt;&lt;/p&gt;
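&lt;p&gt;One simple safeguard for the S3-to-Lambda loop above is to write logs under a dedicated key prefix and have the function ignore events for that prefix. The sketch below is hypothetical (the “logs/” prefix and the event shape are illustrative), with the guard kept as a pure function so it is easy to test:&lt;/p&gt;

```python
LOG_PREFIX = "logs/"  # hypothetical prefix the function writes its own logs under

def should_process(key):
    # Guard: skip any object the function itself wrote,
    # breaking the S3 -> Lambda -> S3 recursion before it starts.
    return not key.startswith(LOG_PREFIX)

def handler(event, context):
    processed = []
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]
        if should_process(key):
            # ... do the real work here, then write any log object
            # back to the bucket under LOG_PREFIX ...
            processed.append(key)
    return {"processed": processed}
```

&lt;p&gt;A stricter option is to scope the S3 event notification itself to a prefix that excludes the log prefix, so the recursive event is never delivered at all.&lt;/p&gt;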

&lt;p&gt;We all learn from our mistakes and experimentation is never wrong. However, without proper research and load/capacity tests, nightmares like the ones documented in “Chapter 11” mentioned in the &lt;a href="https://medium.com/milkie-way/we-burnt-72k-testing-firebase-cloud-run-and-almost-went-bankrupt-part-1-703bb3052fae" rel="noopener noreferrer"&gt;post&lt;/a&gt; shared above, become a reality. Anything that is done to salvage the damage is a reactive approach.&lt;/p&gt;

&lt;p&gt;Besides, not everyone will be as lucky as most Reddit users who shared their stories. So how can we tackle this effectively?&lt;/p&gt;

&lt;p&gt;Imagine the case below, where we can throttle requests at the API gateway. Also, we can only allow a certain number of concurrent Lambda executions. If we rely only on these limited controls, what happens to the requests that the API gateway blocks? Do they get logged somewhere, or are they discarded? What if those requests are being used to place orders? Are we potentially losing revenue because our infrastructure cannot handle the load?&lt;/p&gt;

&lt;p&gt;We have to start thinking differently. We have to think of infrastructure as an application, because infrastructure now behaves like code, which we can control completely.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F967%2F0%2ACxJTpwsXxg0r6bzo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F967%2F0%2ACxJTpwsXxg0r6bzo.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Mechanisms to Control Scalability
&lt;/h3&gt;

&lt;p&gt;We need a kill switch! Or, to use a more appropriate phrase, we need to control scalability.&lt;/p&gt;

&lt;p&gt;There is, however, the matter of how it would look to the end-user. We can always design a solution for the end-user experience. For example, we can create asynchronous queues, receive requests and let end-users know their request is in the pipeline, or bring up a maintenance page. There are more ways to handle end-users than the ones mentioned.&lt;/p&gt;

&lt;p&gt;Let us discuss the kill switch now: a proactive approach that anticipates malicious cloud actors or high spikes. In brief, a kill switch throttles requests or disables the service that generated the billing spikes. It is best described in &lt;a href="https://dev.to/napicella/poor-s-man-kill-switch-for-your-demo-applications-4bo2"&gt;‘Poor man’s kill switch’&lt;/a&gt; by &lt;a href="https://dev.to/napicella"&gt;Nicola Apicella&lt;/a&gt;. He talks about the &lt;a href="https://en.wikipedia.org/wiki/Token_bucket" rel="noopener noreferrer"&gt;token bucket&lt;/a&gt; algorithm, which throttles requests when the bucket is out of tokens. We can run a capacity test to finalise the number of tokens in the bucket, i.e. the number of requests we can entertain at a time.&lt;/p&gt;

&lt;p&gt;We can also use the token bucket algorithm to drive auto-scaling. For example, we create CloudWatch alarms to monitor the load; once it reaches a threshold, the infrastructure starts scaling up, after which we increase the number of coins in the bucket.&lt;/p&gt;

&lt;p&gt;The simple implementation below depicts the fine-grained control we would have in this design, i.e., controlled scalability. We do not throttle any requests at the API gateway because we want to handle every one of them. Instead, we keep a bucket full of tokens: each request takes a token from the bucket and recycles it when it completes. If we want to increase the number of concurrent executions, we increase the number of tokens in the bucket. Other services auto-scale, assuming auto-scaling is enabled.&lt;/p&gt;
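&lt;p&gt;The token-recycling loop described above can be sketched in a few lines of Python (a hypothetical illustration of the design, not production code): a request runs only if it can take a token, the token is returned when the request finishes, and scaling up simply means adding tokens.&lt;/p&gt;

```python
import threading

class TokenBucket:
    """Minimal fixed-size token bucket: a request may proceed only if it
    can take a token; the token is recycled when the request finishes."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.tokens = capacity
        self.lock = threading.Lock()

    def acquire(self) -> bool:
        # Take a token if one is available; otherwise reject (throttle).
        with self.lock:
            if self.tokens > 0:
                self.tokens -= 1
                return True
            return False

    def release(self) -> None:
        # Recycle the token once the request has been handled.
        with self.lock:
            self.tokens = min(self.capacity, self.tokens + 1)

    def resize(self, capacity: int) -> None:
        # "Scaling up": grant more concurrent executions by adding tokens.
        with self.lock:
            self.tokens += capacity - self.capacity
            self.capacity = capacity

bucket = TokenBucket(capacity=2)
print(bucket.acquire())  # True  (first request gets a token)
print(bucket.acquire())  # True  (second request gets a token)
print(bucket.acquire())  # False (bucket empty: throttle or queue)
bucket.release()
print(bucket.acquire())  # True  (token was recycled)
```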

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2AqmDe_rJdcLaYL_Vp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2AqmDe_rJdcLaYL_Vp.png"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;A Token Bucket Algorithm Gives Us Control over the Number of Requests We Can Entertain at a Given Time.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This solution has another use case as well. For example, if we run multiple environments, we can keep the lower ones within budget by maintaining a token bucket. &lt;a href="https://dev.to/leonti"&gt;Leonti Bielski&lt;/a&gt; uses something &lt;a href="https://dev.to/leonti/aws-budget-killswitch-disable-aws-services-when-budget-is-exceeded-36oc"&gt;similar&lt;/a&gt; to the article shared above, except that he bluntly shuts down or terminates the services.&lt;/p&gt;

&lt;p&gt;Then again, any solution is better than no solution. Nobody wants to wake up to a huge bill waiting for them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;What we have discussed in this blog is a concept. We need to realise that the cloud is no longer a static environment where you configure infrastructure by filling in parameters. With serverless and microservice architectures, the cloud is dynamic, and concepts from the on-premises era no longer hold sway. Even a beginner will realise, after using the cloud for a while, that bills can rarely be estimated precisely. This problem hits low-budget startups hard and can be a potential sinkhole for enterprises. Untested and poorly provisioned infrastructure is a nightmare waiting to happen.&lt;/p&gt;




</description>
      <category>devops</category>
      <category>applicationarchitect</category>
      <category>cloudbilling</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>There Is No Such Thing As A DevOps Engineer</title>
      <dc:creator>Murtaza 🐳</dc:creator>
      <pubDate>Tue, 08 Dec 2020 03:13:34 +0000</pubDate>
      <link>https://dev.to/kazimurtaza/there-is-no-such-thing-as-a-devops-engineer-2b1b</link>
      <guid>https://dev.to/kazimurtaza/there-is-no-such-thing-as-a-devops-engineer-2b1b</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dKLQd0Zd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Ap89Z4TVwAMlHQuuTmx2Y0Q.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dKLQd0Zd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Ap89Z4TVwAMlHQuuTmx2Y0Q.jpeg" alt="" width="880" height="495"&gt;&lt;/a&gt;Senior DevOps Engineer&lt;/p&gt;

&lt;p&gt;This jumble of words is not going to define what DevOps is; if you need that information, I suggest reading &lt;strong&gt;The Phoenix Project&lt;/strong&gt;, a book that truly contributes to the subject. The term was coined back in 2009, and after the Phoenix Project, DevOps started trending: everyone jumped on the bandwagon without really knowing what it is or what it means; they just knew some tools with which you could automate ops work.&lt;/p&gt;

&lt;p&gt;Cloud providers had come into the picture a couple of years earlier, which encouraged automation, and thus the role of &lt;strong&gt;DevOps Engineer&lt;/strong&gt; came into existence. It skyrocketed from there, and everyone who was anyone wanted to migrate to a DevOps role. It became so important that, for a growing organisation, this person was a must.&lt;/p&gt;

&lt;p&gt;Now every company that had a DevOps Engineer thought they were at the bleeding edge. 🤣 I was also on that bandwagon; I might still be 😁&lt;/p&gt;

&lt;p&gt;I am not saying we should not do DevOps; we are definitely in the right place. Just as agile is important, so is DevOps, but it is a culture or practice, not a ROLE…&lt;/p&gt;

&lt;p&gt;So now you have three departments: Developers, IT, and DevOps. Now SIR, you went totally wrong: DevOps practice suggests breaking the silos between developers and IT, but you went ahead and created a third silo. 👏&lt;/p&gt;

&lt;p&gt;In reality, it was the job of the Technical Architect, Product Owner, Team Lead, Cloud Architect, or Site Reliability Engineer to take up the mantle of &lt;strong&gt;DevOps practitioner&lt;/strong&gt;, because only in those positions are you truly aware of the best practices. DevOps is either bred inside the organisation or brought in via a contractor at an early stage. The adoption of DevOps was always going to be the responsibility of a Developer or an Ops person, but neither of them can call themselves &lt;strong&gt;a DevOps Engineer — because there is no such thing.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The proper term for a Dev or an Ops person who has taken up DevOps practices is &lt;strong&gt;Site Reliability Engineer (SRE)&lt;/strong&gt;. &lt;em&gt;The Wikipedia definition: SRE is a discipline that incorporates aspects of software engineering and applies them to infrastructure and operations problems. The main goals are to create scalable and highly reliable software systems.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;“&lt;em&gt;Every infrastructure guy should live a developer’s life as well, supporting and automating those tools the same way as developers automate business processes. Even though DevOps is a rapidly growing field and “DevOps engineers” are in hot demand, it’s important to keep in mind that DevOps is not a skill of one person. Everybody in your dev team must know Linux, Docker, Docker Compose, Kubernetes, and Ansible, at least on a user level, as well understand networking and deployment architecture&lt;/em&gt;” — &lt;a href="https://www.infoworld.com/author/Stepan-Pushkarev/"&gt;Stepan Pushkarev&lt;/a&gt;, Contributor, &lt;a href="https://www.infoworld.com/article/3263812/do-not-hire-a-devops-engineer.html"&gt;InfoWorld&lt;/a&gt; &lt;strong&gt;March 23rd 2018.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Stepan’s opinion above is not quite what I am suggesting; one person cannot cover everything. I am simply asking you to break out of the boxes you are in and not specialise in just one thing. A developer who only knows how to write the best Java code will not suffice anymore. Take two candidates: one is a master of Java and writes code like a symphony; the other writes adequate Java code but also knows distributed systems, deployment practices, and cloud computing technologies. Who would you hire?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;“DevOps is also characterized by operations staff making use of many of the same techniques as developers for their systems work.” &lt;/strong&gt; — &lt;a href="https://theagileadmin.com/what-is-devops/"&gt;Ernest Mueller&lt;/a&gt;, Aug 2, 2010, Last Revised Jan 12, 2019&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;“DevOps is the practice of operations and development engineers participating together in the entire service lifecycle, from design through the development process to production support” &lt;/strong&gt; — Ernest Mueller&lt;br&gt;&lt;br&gt;
&lt;strong&gt;“I believe the fundamental DevOps values are effectively captured in the Agile Manifesto” — Ernest Mueller&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rSmOj4f1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/320/1%2AH74evsOv5TFIq9vfATi3Tw.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rSmOj4f1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/320/1%2AH74evsOv5TFIq9vfATi3Tw.gif" alt="" width="320" height="228"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;“It is not “they’re taking our jobs!” Some folks think that DevOps means that developers are taking over operations and doing it themselves. Part of that is true and part of it isn’t.” &lt;/strong&gt; — Ernest Mueller&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;“DevOps is a culture, not a role! The whole company needs to be doing DevOps for it to work”&lt;/em&gt;&lt;/strong&gt; &lt;em&gt; — &lt;/em&gt;&lt;a href="https://medium.com/@neonrocket?source=post_page---------------------------"&gt;&lt;em&gt;Irma Kornilova&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.reddit.com/user/arghcisco/"&gt;&lt;strong&gt;Arghcisco&lt;/strong&gt;&lt;/a&gt; &lt;strong&gt;had a funny definition for DevOps&lt;/strong&gt; : “&lt;em&gt;you have two cows. One thinks you’re an SRE and the other one thinks you maintain the developers’ Jenkins pipelines for a living. Somehow you’re paid $300k/year for closing random issues assigned to you despite no one being able to explain what your job is. To avoid slowly going insane, you’ve been putting your CS degree to work by training a TensorFlow model to distinguish hot dogs from not hot dogs.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In conclusion,&lt;/strong&gt; DevOps is a practice, just as Agile is. Now that we have clarity, SRE is the role everyone interested in this paradigm needs to upskill towards. How it differs from a Cloud Engineer is a story for another day, but I believe Cloud Engineer is a subset of SRE, as is Build Engineer. An SRE is someone comfortable with software engineering principles who develops for Ops processes, just as a developer automates business processes.&lt;/p&gt;

&lt;p&gt;This blog is an opinion I share with many others around the world; it is entirely subjective.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;References:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://hackernoon.com/devops-team-roles-and-responsibilities-6571cfb56843"&gt;DevOps Team Roles And Responsibilities&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.prodops.io/blog/there-is-no-such-thing-as-a-devops-engineer"&gt;There Is No Such Thing As A DevOps Engineer&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.infoworld.com/article/3263812/do-not-hire-a-devops-engineer.html"&gt;Do not hire a DevOps engineer&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://medium.com/@neonrocket/devops-is-a-culture-not-a-role-be1bed149b0"&gt;DevOps is a culture, not a role!&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://theagileadmin.com/what-is-devops/"&gt;What Is DevOps?&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.astroarch.com/tvp_strategy/devops-engineer-25120/"&gt;No, You Are Not a DevOps Engineer&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://continuousdelivery.com/2012/10/theres-no-such-thing-as-a-devops-team/"&gt;There’s No Such Thing as a “DevOps Team”&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://garywoodfine.com/not-devops-engineer/"&gt;You are not a DevOps Engineer&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.toptal.com/devops/what-the-hell-is-devops"&gt;What The Hell Is DevOps?&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>sitereliabilityengin</category>
      <category>development</category>
      <category>sre</category>
      <category>devops</category>
    </item>
    <item>
      <title>Artisanal hand-crafted build machines — a recipe for disaster</title>
      <dc:creator>Murtaza 🐳</dc:creator>
      <pubDate>Thu, 11 Jul 2019 06:26:53 +0000</pubDate>
      <link>https://dev.to/kazimurtaza/artisanal-hand-crafted-build-machines-a-recipe-for-disaster-10ji</link>
      <guid>https://dev.to/kazimurtaza/artisanal-hand-crafted-build-machines-a-recipe-for-disaster-10ji</guid>
      <description>&lt;h3&gt;
  
  
  Artisanal hand-crafted build machines — a recipe for disaster
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TmRpOk-o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1000/1%2AJ22O9kTwPOIh_bVZOOxVjw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TmRpOk-o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1000/1%2AJ22O9kTwPOIh_bVZOOxVjw.png" alt="" width="880" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Today I will discuss how we used to do CI/CD and how we evolved. We build applications in our CI/CD environment; for example, if an application required Maven, we would simply install Maven on the build machine and execute it. What could be wrong with that? Well, &lt;em&gt;you could not be any more wrong. You could try, but you will not succeed.&lt;/em&gt; It was a nightmare. First, we had to manage all the different versions of SDKs and build tools; as someone said, it was “a &lt;em&gt;random assortment of dependencies and tools&lt;/em&gt;”. We were asked repeatedly to install or update some tool or SDK, and imagine doing that across a whole CI/CD cluster. You had to configure instances; even if you used a snapshot, you would still need to configure one or two, and afterwards the CI/CD nodes required cleanups as well.&lt;/p&gt;

&lt;p&gt;We tried using an SDK manager in some cases, but it still required us to SSH into the machine. We hoped we could find some way to stop doing this repetitive task again and again…&lt;/p&gt;

&lt;p&gt;Then the Internet came to our rescue: I found a different practice going around, building inside Docker, using containers in pipelines to perform all actions. We had found the holy grail for our problem. It would not only make life easier for developers but also eliminate the need for us to intervene. Now developers can control all the dependencies and tools their code requires without worrying whether the build machine has a particular version or tool.&lt;/p&gt;

&lt;p&gt;All we had to do was convert the existing jobs to the new pattern and show everyone how to change theirs. Easier said than done: even when your organisation believes change is the only constant, it is still sometimes hard for individuals. It is sometimes perceived as more work, even though it makes life easier, and new concepts with a steep learning curve can be tricky to understand, so the apprehension is understandable.&lt;/p&gt;

&lt;p&gt;So we waited for an opportunity to present itself. We were asked to install Newman on the CI/CD server because our QA team needed to run some tests internally. The prerequisite was that the CI/CD server had Docker installed, which was true in our case.&lt;br&gt;&lt;br&gt;
Previously, to run Newman we just had to pull the git repo with the Postman collection and execute:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;newman run "application.json.postman_collection" --reporter-cli-no-failures --environment="application.json.postman_environment" --reporters="json,cli" --reporter-json-export="newman-results.json" --disable-unicode
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;The change was very subtle: we just had to replace the Newman executable at the front of the command with a docker invocation that provides the Newman CLI.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --rm -v $(pwd):/etc/newman postman/newman:alpine run "application.json.postman_collection" --reporter-cli-no-failures --environment="application.json.postman_environment" --reporters="json,cli" --reporter-json-export="newman-results.json" --disable-unicode
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;The difference is&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --rm -v $(pwd):/etc/newman postman/newman:alpine
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;instead of&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;newman run
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;The link to an official image of Newman can be found &lt;a href="https://github.com/postmanlabs/newman/tree/develop/docker/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Some more examples could be:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -it --rm -v "$PWD":/usr/src/code -v "$PWD/target:/usr/src/code/target" -w /usr/src/code maven mvn clean package

docker run --rm -v "$PWD":/home/gradle/project -w /home/gradle/project gradle gradle

docker run -it --rm --name my-running-script -v "$PWD":/usr/src/app -w /usr/src/app node:8 node your-daemon-or-script.js
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Let us go a little further and explore a feature Docker released called multi-stage builds. It was inspired by the &lt;a href="https://dzone.com/articles/design-patterns-for-beginners-with-java-examples?edition=483200&amp;amp;utm_source=Weekly%20Digest&amp;amp;utm_medium=email&amp;amp;utm_campaign=Weekly%20Digest%202019-05-15"&gt;Builder design pattern&lt;/a&gt; from object-oriented programming: a container is executed to create a complex object, and in this case the object is a microservice container image. Docker modified the pattern a bit, and it is much easier to use now. Previously, we either containerised the whole build process or, if we wanted smaller images, copied the artifacts out by hand; neither way was ideal. Now we have one Dockerfile divided into two parts, a build stage and a runtime stage. In the build stage we compile our application; once finished, we copy only the artifact into the runtime stage.&lt;/p&gt;

&lt;p&gt;Still with me, right? During the build stage we can bring in the heavy guns: the JDK, all the SDKs, and the build tools we require, and execute the build process. When it succeeds, we move to the next stage with another FROM, this time pulling only the essentials; in our case that was openjdk:8-jre-slim, a runtime environment, and a slimmed-down version at that. You have to keep the size small these days, since you are not running just a few services, you are planning to run hundreds. After the second FROM, the next line can COPY the artifacts from the build container. By the end you are left with the smallest possible runtime image, and the CI/CD server environment stays clean and nifty.&lt;/p&gt;

&lt;p&gt;Below is a sample multi-stage build file. (Note: the runtime ENTRYPOINT uses the shell form so that $JAVA_OPTS is actually expanded; the JSON exec form would pass the literal string through.)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# build stage
FROM maven:3-jdk-8-alpine as target
ENV APP_HOME=/root/dev/application/
RUN mkdir -p $APP_HOME
WORKDIR $APP_HOME
COPY pom.xml $APP_HOME
RUN mvn dependency:go-offline
RUN mkdir -p $APP_HOME/src
COPY src/ $APP_HOME/src
RUN mvn package

# runtime stage
FROM openjdk:8-jre-slim
ENV JAVA_OPTS=""
WORKDIR /root/
COPY --from=target /root/dev/application/target/application.jar app.jar
EXPOSE 8084
ENTRYPOINT java $JAVA_OPTS -jar /root/app.jar
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Although the benefits of using Docker in the build process and of multi-stage builds are evident, let us jot them down for posterity’s sake. This implementation provides compatibility and maintainability, and eliminates the “it works on my machine” and tool/SDK version-mismatch issues once and for all.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--U8YgH1PQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/393/1%2AOzErae9a0Huwj5LV39wBBQ.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--U8YgH1PQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/393/1%2AOzErae9a0Huwj5LV39wBBQ.jpeg" alt="" width="393" height="330"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There is no need to install Node.js libraries or Newman at the system level, or to keep reinstalling them again and again. Your builds and tests are isolated from other environment variables and from other processes currently executing; the application is standardised end to end, so if it works on your system it will work on the server; and the CI/CD server stays clean, allowing faster configuration. That means no more waiting for another team to configure your environment, faster deployment of smaller container images, and a higher cadence of continuous deployment and testing. And if I have mentioned this before, it deserves another mention: &lt;strong&gt;Isolation&lt;/strong&gt; and &lt;strong&gt;Security&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This blog was originally posted on &lt;a href="https://www.faircg.com/blogs/buildmachines-recipe-for-disaster/"&gt;FAIRCG&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>cicd</category>
      <category>jenkins</category>
      <category>multistage</category>
      <category>buildmachines</category>
    </item>
    <item>
      <title>Private DNS Server on RPI3+ and pushing it to the limit with Performance testing using Jmeter</title>
      <dc:creator>Murtaza 🐳</dc:creator>
      <pubDate>Sat, 15 Jun 2019 15:35:03 +0000</pubDate>
      <link>https://dev.to/kazimurtaza/private-dns-server-on-rpi3-and-pushing-it-to-the-limit-with-performance-testing-using-jmeter-kek</link>
      <guid>https://dev.to/kazimurtaza/private-dns-server-on-rpi3-and-pushing-it-to-the-limit-with-performance-testing-using-jmeter-kek</guid>
      <description>&lt;p&gt;I plan on configuring DNS server on my rpi3+, why you ask, well my reason for the time being are simple, just because I can, in reality, I have too many services running at home, and remembering IP’s for all them has become somewhat hectic, and does not seem very nice.&lt;/p&gt;

&lt;p&gt;This idea led me to an interesting discussion already going on on the internet about why everyone is opting to do the same: some for security, some for privacy, some for caching. I wanted to discover what all the fuss was about and prove to myself that the RPi 3 packs a punch.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key concepts to ensure best practice (&lt;a href="https://security.stackexchange.com/questions/39504/pros-cons-of-using-a-private-dns-vs-a-public-dns"&gt;link&lt;/a&gt;)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;DNS only answers to IPs on the internal domain.&lt;/li&gt;
&lt;li&gt;Configure the DNS server to use only root hints and no forwarders (this largely mitigates MITM attacks).&lt;/li&gt;
&lt;li&gt;Have a local caching nameserver for faster query times and to prevent NXDOMAIN hijacking.&lt;/li&gt;
&lt;li&gt;Recursion is allowed on a private DNS server as long as you take the first point into account.&lt;/li&gt;
&lt;/ul&gt;
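&lt;p&gt;In BIND terms, the first two points might look like the named.conf.options sketch below (the internal subnet is illustrative; adjust it to your network):&lt;/p&gt;

```plaintext
// /etc/bind/named.conf.options (sketch, not a complete config)
acl internal {
    127.0.0.0/8;
    192.168.100.0/24;   // your LAN
};

options {
    directory "/var/cache/bind";

    recursion yes;                 // allowed, but only for...
    allow-query { internal; };     // ...internal clients
    allow-recursion { internal; };

    // No "forwarders" block: resolve from root hints instead,
    // which mitigates MITM via a compromised upstream resolver.
};
```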

&lt;h3&gt;
  
  
  Advantages that led me to run my own DNS server
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;A DNS server can resolve domains within the network. You could, for instance, resolve “hello.world.com” to your torrent server.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;DNS can be used as a poor man’s blacklist&lt;/em&gt;: you can block unwanted domains, redirecting requests to a more suitable website.&lt;/li&gt;
&lt;li&gt;Users on my network use a trusted DNS server, not some hijacked one.&lt;/li&gt;
&lt;li&gt;We can block most ads by blocking the domains that serve them.&lt;/li&gt;
&lt;/ul&gt;
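&lt;p&gt;The poor man’s blacklist above is typically done by declaring your server authoritative for the unwanted domain and answering with a harmless address (the domain and file names below are illustrative):&lt;/p&gt;

```plaintext
// /etc/bind/named.conf.local: claim authority over the unwanted domain
zone "ads.example-tracker.com" {
    type master;
    file "/etc/bind/db.blocked";
};

; /etc/bind/db.blocked: answer every name in that zone with 0.0.0.0
$TTL 86400
@ IN SOA ns1.subdomain.example.com. hostmaster.subdomain.example.com. (
        1 ; Serial
        604800 ; Refresh
        86400 ; Retry
        2419200 ; Expire
        86400 ) ; Negative cache TTL
@ IN NS ns1.subdomain.example.com.
@ IN A 0.0.0.0
* IN A 0.0.0.0
```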

&lt;h3&gt;
  
  
  Why should we keep our amateurly-configured DNS server private?
&lt;/h3&gt;

&lt;p&gt;Attacks on the DNS protocol are pretty common amongst hackers. Some basic forms are listed below; a detailed description of them can be found &lt;a href="https://www.sans.org/reading-room/whitepapers/dns/security-issues-dns-1069"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DNS spoofing&lt;/li&gt;
&lt;li&gt;DNS ID hacking&lt;/li&gt;
&lt;li&gt;DNS cache poisoning&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Why choose raspberry pi?
&lt;/h3&gt;

&lt;p&gt;With ever-moving innovation in ARM-based platforms, I have always been impressed by what these tiny machines with a tiny footprint can do. Let us prove whether they provide value for money.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisite for implementing DNS Server on Raspberry PI&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Raspberry PI,&lt;/li&gt;
&lt;li&gt;Raspian Lite Image installed,&lt;/li&gt;
&lt;li&gt;Internet connection, Internal static IP,&lt;/li&gt;
&lt;li&gt;SSH enabled.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;A comparison of DNS server software can be found&lt;/em&gt; &lt;a href="https://en.wikipedia.org/wiki/Comparison_of_DNS_server_software"&gt;&lt;em&gt;here&lt;/em&gt;&lt;/a&gt;&lt;em&gt;. I chose BIND because of how widely accepted it is, and because it is open source, which means it has good community support.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Following the tutorial &lt;a href="https://www.ionos.com/digitalguide/server/configuration/how-to-make-your-raspberry-pi-into-a-dns-server/"&gt;here&lt;/a&gt;, I was able to install Bind9 on the server, but to achieve the goals listed above I had to modify the conf files from the link.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;**cat /etc/bind/db.subdomain.example.com**

$TTL 14400
@ IN SOA ns1.subdomain.example.com. hostmaster.subdomain.example.com. (

201006601 ; Serial
7200 ; Refresh
120 ; Retry
2419200 ; Expire
604800) ; Default TTL;

subdomain.example.com. IN NS ns1.subdomain.example.com.
subdomain.example.com. IN NS ns2.subdomain.example.com.
subdomain.example.com. IN MX 10 mail.subdomain.example.com.

ns1 IN A 192.168.100.100
ns2 IN A 192.168.100.102

www IN CNAME subdomain.example.com.

mail IN A 192.168.100.103

subdomain.example.com. IN A 192.168.100.1
raspberrypi3 IN A 192.168.100.100
raspberrypi2 IN A 192.168.100.101
router IN A 192.168.100.200
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Test that your DNS server works perfectly before you go and update every PC; I found &lt;a href="https://domoticproject.com/configuring-dns-server-raspberry-pi/"&gt;this&lt;/a&gt; article to be a great help.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo named-checkconf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;When executed on the DNS server, this command returns an error if the conf files are not properly configured.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Next, as the article suggests, keep an eye on the logs, and use the &lt;strong&gt;dig&lt;/strong&gt; command to ensure each domain resolves correctly.&lt;/p&gt;

&lt;p&gt;Now you can go ahead and update the servers and PCs on your internal network to use the new DNS server.&lt;/p&gt;

&lt;p&gt;Now comes the part we have all been waiting for: &lt;strong&gt;performance testing the DNS server on the RPi 3+&lt;/strong&gt;, pushing it to the limit with concurrent load. For that I followed the article &lt;a href="https://jmeter-plugins.org/wiki/dns_test_using_jmeter/"&gt;here&lt;/a&gt;, though I made some changes; I am not sure they are all correct, so let us see.&lt;/p&gt;

&lt;p&gt;I used two different UDP requests, both pointing at the DNS server, used the DNS Java decoder class, and broke the load test into three scenarios: &lt;strong&gt;10&lt;/strong&gt;, &lt;strong&gt;20&lt;/strong&gt; and &lt;strong&gt;50&lt;/strong&gt; threads. This means updating the number of threads in the thread group while keeping the ramp-up period at 10 seconds.&lt;/p&gt;
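&lt;p&gt;Under the hood, each of those UDP samplers is just firing a small binary DNS packet at port 53. As a sanity check of what one query looks like on the wire (a standard-library sketch, independent of JMeter), the header and question section can be built like this:&lt;/p&gt;

```python
import struct

def build_query(name: str, txid: int = 0x1234) -> bytes:
    """Build a minimal DNS query packet (A record, IN class, RD set)."""
    header = struct.pack(
        ">HHHHHH",
        txid,    # transaction ID, echoed back by the server
        0x0100,  # flags: standard query, recursion desired
        1,       # QDCOUNT: one question
        0, 0, 0  # no answer/authority/additional records
    )
    # QNAME: each label is prefixed by its length, terminated by the root label.
    question = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    question += struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

packet = build_query("raspberrypi3.subdomain.example.com")
# To load-test, you would send many of these over UDP port 53, e.g.:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.sendto(packet, ("192.168.100.100", 53))
print(len(packet))
```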

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WcB0trKj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AhW6RFzhtSVoDyDoQsBie7A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WcB0trKj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AhW6RFzhtSVoDyDoQsBie7A.png" alt="" width="880" height="205"&gt;&lt;/a&gt;Apache JMeter&lt;/p&gt;

&lt;p&gt;To follow up on the PerfMon metrics collector, you can find the details &lt;a href="https://jmeter-plugins.org/wiki/PerfMon/"&gt;here&lt;/a&gt; and &lt;a href="https://github.com/undera/perfmon-agent"&gt;here&lt;/a&gt;; download the latest release and start the agent on your DNS server.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; to run the PerfMon metrics agent on the Raspberry Pi, we need a compatible ARM-based .so library, which can be found at the link below. Thanks to &lt;a href="https://twitter.com/RHQ_Project"&gt;RHQ&lt;/a&gt; for all their hard work. &lt;a href="https://sourceforge.net/projects/rhq/files/rhq/misc/"&gt;&lt;em&gt;https://sourceforge.net/projects/rhq/files/rhq/misc/&lt;/em&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget [https://master.dl.sourceforge.net/project/rhq/rhq/misc/libsigar-arm-linux.so](https://master.dl.sourceforge.net/project/rhq/rhq/misc/libsigar-arm-linux.so)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;50 threads, 5-minute load test (over Wi-Fi):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--X2SwZxUW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ApU78YO_RR2lOWAbQY2liXw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--X2SwZxUW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ApU78YO_RR2lOWAbQY2liXw.png" alt="" width="880" height="512"&gt;&lt;/a&gt;majority of the responses were under 200ms, with occasional spikes reaching around 1000ms&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PuCwhXKu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AgY7qGq4qj3k8vljgVM_QwA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PuCwhXKu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AgY7qGq4qj3k8vljgVM_QwA.png" alt="" width="880" height="511"&gt;&lt;/a&gt;response time distribution also concludes the same.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Initially, I wanted to see how well the RPi would do at 50 threads and whether we could go higher and scale up our test cases. Sadly, my router kept restarting, and then I realised my Wi-Fi&lt;/em&gt; 😵 &lt;em&gt;was the bottleneck. In real life I doubt my setup would ever create that much load. I have switched to Ethernet now, but the results over Wi-Fi were interesting as well.&lt;br&gt;&lt;br&gt;
The RPi DNS server serves at two points, one on Ethernet and one on Wi-Fi; only my test machine now uses Ethernet, where it was using Wi-Fi before.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;10 threads over 5 minutes, 3000 requests/sec:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3qo9lXnQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AEBwsa0nQ2dpPyEvLii8goQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3qo9lXnQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AEBwsa0nQ2dpPyEvLii8goQ.png" alt="" width="880" height="429"&gt;&lt;/a&gt;request per second&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lsCtBFfP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AzhWJuNFGq35_ehZvFmb-EA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lsCtBFfP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AzhWJuNFGq35_ehZvFmb-EA.png" alt="" width="880" height="427"&gt;&lt;/a&gt;response time (ms)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WXlH3dPm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Aj_v3T131FOW7QEkPh5zA3g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WXlH3dPm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Aj_v3T131FOW7QEkPh5zA3g.png" alt="" width="880" height="293"&gt;&lt;/a&gt;Memory(blue) is constant, CPU(red)&lt;/p&gt;

&lt;p&gt;3000 requests per second with an error rate of 0.00% 😅 across a total of 1,677,107 samples. Memory was barely affected, query times stayed quick, and the wired connection has clearly made a difference; it seems the RPI still has a lot of room left. Let’s double the thread count and increase the request rate slightly.&lt;/p&gt;
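&lt;p&gt;&lt;em&gt;As a rough sketch of what each JMeter sample amounts to, the stdlib-only Python snippet below builds a minimal DNS A-record query by hand and times one round trip over UDP. This is an illustration, not my actual test plan; the server address and hostname passed in are placeholders.&lt;/em&gt;&lt;/p&gt;

```python
import socket
import struct
import time

def build_query(hostname, qid=0x1234):
    """Build a minimal DNS A-record query in wire format (RFC 1035)."""
    # Header: id, flags (RD=1), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # Question: QNAME as length-prefixed labels, then QTYPE=A (1), QCLASS=IN (1)
    qname = b"".join(bytes([len(p)]) + p.encode() for p in hostname.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)

def time_query(server, hostname, timeout=2.0):
    """Send one UDP DNS query to `server` and return the round-trip time in ms."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        start = time.perf_counter()
        s.sendto(build_query(hostname), (server, 53))
        s.recvfrom(512)  # any answer counts; we only measure latency here
        return (time.perf_counter() - start) * 1000
```

&lt;p&gt;&lt;em&gt;For example, time_query("192.168.1.2", "example.com") would return the latency in milliseconds of a single lookup against a BIND9 box at that (hypothetical) address.&lt;/em&gt;&lt;/p&gt;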

&lt;p&gt;&lt;strong&gt;20 threads over 5 minutes, 4500 requests/sec:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AokTqwiF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ALLS5WbdxRgad_Dd1Ld1SoA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AokTqwiF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ALLS5WbdxRgad_Dd1Ld1SoA.png" alt="" width="880" height="451"&gt;&lt;/a&gt;requests per second&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oEtzMkFs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AY5jt4QYLzzGeL-w63mvy1g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oEtzMkFs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AY5jt4QYLzzGeL-w63mvy1g.png" alt="" width="880" height="451"&gt;&lt;/a&gt;response time (ms)&lt;/p&gt;

&lt;p&gt;No more spikes in requests per second further down the line, and the error rate is still 0.00% across a total of 2,308,031 samples.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--t3jvuHAQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A9XksxDUU4RMdRo1wj8QItg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--t3jvuHAQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A9XksxDUU4RMdRo1wj8QItg.png" alt="" width="880" height="342"&gt;&lt;/a&gt;CPU(red) has increased&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NA7amwVJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AIq00HNJLNKbA29P_YqImOA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NA7amwVJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AIq00HNJLNKbA29P_YqImOA.png" alt="" width="880" height="487"&gt;&lt;/a&gt;failures per request&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;50 threads over 5 minutes, 6500–7000 requests/sec:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--d7YWpzJK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ATpjGIUOCdp8-XEpe-p80QA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--d7YWpzJK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ATpjGIUOCdp8-XEpe-p80QA.png" alt="" width="880" height="450"&gt;&lt;/a&gt;requests per second&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZMH1dlKs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ABpNIEm3bX64bnhvRO9aAeg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZMH1dlKs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ABpNIEm3bX64bnhvRO9aAeg.png" alt="" width="880" height="452"&gt;&lt;/a&gt;response time (ms)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--di5LTwFL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AUqrzqkRel7NyqMz1dLmTew.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--di5LTwFL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AUqrzqkRel7NyqMz1dLmTew.png" alt="" width="880" height="344"&gt;&lt;/a&gt;CPU(red) has increased&lt;/p&gt;

&lt;p&gt;The RPI is consuming 7000 requests per second without even burping, and the error rate is rock steady at 0.00% across a total of 2,939,579 samples. It feels like the more load we put on this ARM-based mean machine, the more stable and faster it gets. Does putting more load on the RPI3 send it into overdrive mode? 🤩💥&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IXrLdOCu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AEghEi1dPi4SMD3MO4zlWiw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IXrLdOCu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AEghEi1dPi4SMD3MO4zlWiw.png" alt="" width="880" height="127"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As an aside, it is interesting how all 4 cores of the RPI are used to maximize utilization; hats off to the ARM community.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: memory usage is still roughly the same at around 18 MB, as recorded by PerfMon; the image above, taken from htop, includes system process usage.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Would you blame me if I tried putting on more load to see if it breaks? We are far beyond normal usage; now we are just trying to break it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JrvuEqHv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/480/1%2AkpHy0BeSNnqsY_atr4PAbg.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JrvuEqHv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/480/1%2AkpHy0BeSNnqsY_atr4PAbg.gif" alt="" width="480" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;OK, so we have some interesting results. Threads were increased to 100 and requests to 10,000 per second, for a total of 3,343,290 samples. The error rate is still showing 0.00%; given the request volume, it must sit several decimal places further down. CPU usage was about to hit 70%, while memory still did not move from 18 MB.&lt;/p&gt;
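&lt;p&gt;&lt;em&gt;For intuition on how these numbers come about: a JMeter thread group boils down to N workers, each pacing its own sends so that the pool approximates a target aggregate rate, while tallying samples and errors. The toy Python sketch below is not JMeter's actual implementation (its pacing ignores per-request latency, for one), just an illustration of how thread count and target rate combine.&lt;/em&gt;&lt;/p&gt;

```python
import threading
import time

def run_load(send_one, threads=10, target_rps=3000, duration_s=300):
    """Drive send_one() from a pool of workers, pacing each worker so the
    aggregate request rate approximates target_rps; returns (samples, errors)."""
    interval = threads / target_rps   # per-worker gap between sends, in seconds
    lock = threading.Lock()
    stats = {"samples": 0, "errors": 0}

    def worker():
        deadline = time.monotonic() + duration_s
        while time.monotonic() < deadline:
            ok = True
            try:
                send_one()            # one DNS query, HTTP call, etc.
            except Exception:
                ok = False
            with lock:
                stats["samples"] += 1
                stats["errors"] += 0 if ok else 1
            time.sleep(interval)      # naive pacing: ignores send_one's latency

    pool = [threading.Thread(target=worker) for _ in range(threads)]
    for t in pool:
        t.start()
    for t in pool:
        t.join()
    return stats["samples"], stats["errors"]
```

&lt;p&gt;&lt;em&gt;With threads=100 and target_rps=10000, each worker would fire roughly every 10 ms; the error rate is then just errors divided by samples, which is why a handful of failures out of millions of samples still rounds to 0.00%.&lt;/em&gt;&lt;/p&gt;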

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AhqcAa-E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AQ3jE5P7nxynsAuKk0owEfQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AhqcAa-E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AQ3jE5P7nxynsAuKk0owEfQ.png" alt="" width="880" height="223"&gt;&lt;/a&gt;CPU(red) increasing&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0IxcPGIK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AeoF_hI1xx6WQYx0Q8rD4BQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0IxcPGIK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AeoF_hI1xx6WQYx0Q8rD4BQ.png" alt="" width="880" height="154"&gt;&lt;/a&gt;in HTOP, all cores are perfectly utilized, perfmon showing average CPU usage&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ON3j_sTr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Awr4ctMIk0XRmwJ4XCuJu7g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ON3j_sTr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Awr4ctMIk0XRmwJ4XCuJu7g.png" alt="" width="880" height="363"&gt;&lt;/a&gt;negligible failed requests per second&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ETIZlWtF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AoSzYrhhl4vnKohT8S2hOmA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ETIZlWtF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AoSzYrhhl4vnKohT8S2hOmA.png" alt="" width="880" height="330"&gt;&lt;/a&gt;request per second&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0zXwJ5VI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AlnGwpnfti1U1HMFZ3XVYng.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0zXwJ5VI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AlnGwpnfti1U1HMFZ3XVYng.png" alt="" width="880" height="329"&gt;&lt;/a&gt;active threads&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ViGaYMZ1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AUp-Es8w1oSvDHeuYxD6OJQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ViGaYMZ1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AUp-Es8w1oSvDHeuYxD6OJQ.png" alt="" width="880" height="331"&gt;&lt;/a&gt;response time (ms)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YwmDJCmU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/500/1%2AClU_EyG6tknbf3eHGH_0Dg.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YwmDJCmU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/500/1%2AClU_EyG6tknbf3eHGH_0Dg.gif" alt="" width="500" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By the end of the test it is clear that the RPI was struggling, but that just goes to show how stable and powerful the RPI platform is. ~10,000 requests per second; that is freaking fantastic. That an ARM platform on a 5V power supply can return this much value for money is staggering 😱. Can you imagine what we could achieve with a cluster of them?&lt;/p&gt;

&lt;p&gt;I use Raspberry Pis for all kinds of servers at home; I have an RPI2 and an RPI1 as well, running different kinds of servers in Python, Java, and Mono. If only you knew how much punishment my RPI2 has to endure, you would feel deeply sorry for it. One more thing: if you have been following the latest news, Docker is also supported 👏. The future of ARM-based platforms looks quite bright to me.&lt;/p&gt;

</description>
      <category>dns</category>
      <category>bind9</category>
      <category>raspberrypi</category>
      <category>jmeter</category>
    </item>
  </channel>
</rss>
