<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Turja Narayan Chaudhuri</title>
    <description>The latest articles on DEV Community by Turja Narayan Chaudhuri (@turjachaudhuri).</description>
    <link>https://dev.to/turjachaudhuri</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F466007%2F8ed46db0-4168-4218-b9d7-3b6b21d474ce.PNG</url>
      <title>DEV Community: Turja Narayan Chaudhuri</title>
      <link>https://dev.to/turjachaudhuri</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/turjachaudhuri"/>
    <language>en</language>
    <item>
      <title>Why you need a service catalog to scale your microservice adoption across an enterprise.</title>
      <dc:creator>Turja Narayan Chaudhuri</dc:creator>
      <pubDate>Sun, 03 Apr 2022 09:30:23 +0000</pubDate>
      <link>https://dev.to/turjachaudhuri/why-you-need-a-service-catalog-to-scale-your-microservice-adoption-across-an-enterprise-1n4a</link>
      <guid>https://dev.to/turjachaudhuri/why-you-need-a-service-catalog-to-scale-your-microservice-adoption-across-an-enterprise-1n4a</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;A Brief Context&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The good thing about working with enterprises is the problems you encounter when you are doing something &lt;strong&gt;at scale&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;All my life, I have worked with big enterprises that spanned hundreds of teams, thousands of projects/products, and millions of lines of code.&lt;/p&gt;

&lt;p&gt;While I always envied the passion and velocity that my friends at startups experienced, the different category of problems that crops up at scale in an enterprise made up for it.&lt;/p&gt;

&lt;p&gt;One such problem, which looks trivial when you start your journey but &lt;strong&gt;can assume enormous proportions when you start to scale&lt;/strong&gt;, is how to &lt;strong&gt;identify the ownership of services across your organization&lt;/strong&gt;, especially when you need details about a service or have to report an issue with it.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;Service management at scale&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In today's world, it is widely accepted that microservices are a better way to design systems than monoliths (there are a few exceptional circumstances, but we will not go over them in this article).&lt;/p&gt;

&lt;p&gt;Especially for cloud-native workloads, which are supposed to take full advantage of cloud services, it makes sense to go with a distributed, loosely coupled architecture composed of individual microservices talking to one another.&lt;/p&gt;

&lt;p&gt;And it is indeed a valid argument that microservices let an engineering organization grow and scale by providing better constructs for isolation and independence to engineering teams.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;All of this looks great when you have 5-10 services in a single product&lt;/strong&gt;. Every developer knows who is responsible for each service, and if something goes wrong, they can contact the developer/owner of that service and get the issue resolved.&lt;/p&gt;

&lt;p&gt;Fast forward to a time when there are hundreds or thousands of services across a multitude of teams in your enterprise. Every day, new services are being created and added to that list.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The chaos that comes with microservices starts appearing as cracks in the system, and typically becomes noticeable once you have 30+ microservices.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Over time, this becomes &lt;strong&gt;unmanageable&lt;/strong&gt;, and when something breaks, your operations team &lt;strong&gt;has no idea whom to reach&lt;/strong&gt; to handle and fix that issue.&lt;/p&gt;

&lt;p&gt;The image below is a snapshot depicting the tremendous scale of microservices at Uber in mid-2018, as observed by the distributed tracing tool Jaeger -&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--R3frJGVr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/noh0cy43zyj1navzro38.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--R3frJGVr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/noh0cy43zyj1navzro38.png" alt="Image description" width="512" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And, here's Netflix -&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---p3H36sw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lhg5txso1xe0x0vdpwy8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---p3H36sw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lhg5txso1xe0x0vdpwy8.jpg" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Even if your company isn't the next Uber/Netflix, there is a high probability that, over time, your service landscape will also start to look like the above.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;Everything breaks at scale&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;These issues start exploding at scale&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Once your enterprise as a whole starts adopting the microservices architectural style, it &lt;strong&gt;won't take long for each team/product/project to ship hundreds of services.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Over time, &lt;strong&gt;this effect will get compounded and you will get thousands of services across your enterprise, with no governance or oversight.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;Problems with managing services at scale&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;There is no clear ownership; nobody knows whom to contact if a particular service breaks.&lt;/li&gt;
&lt;li&gt;There is no easy way to search for a service in order to reuse or consume it across the enterprise.&lt;/li&gt;
&lt;li&gt;There is no way to easily access documentation and related information about a service, or how to consume it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AcAFDhKM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7atv4zwleseqj4c87wdq.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AcAFDhKM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7atv4zwleseqj4c87wdq.PNG" alt="Image description" width="880" height="489"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As John Laban points out in his blog - &lt;a href="https://www.opslevel.com/2020/04/21/why-you-need-a-microservice-catalog/"&gt;https://www.opslevel.com/2020/04/21/why-you-need-a-microservice-catalog/&lt;/a&gt; -&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cQAbE2In--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h3uvybolos16srymytj5.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cQAbE2In--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h3uvybolos16srymytj5.PNG" alt="Image description" width="796" height="184"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;A catalog to the rescue&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A &lt;strong&gt;&lt;u&gt;microservice catalog&lt;/u&gt;&lt;/strong&gt; is a &lt;strong&gt;record/list of all the microservices that an enterprise has in its ecosystem&lt;/strong&gt;. It tracks all the services that an enterprise is running in production, and describes information about those services - &lt;strong&gt;&lt;u&gt;what each service does, who owns it, and how to operate it.&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--H3RkIuEC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lw0pzzsvay44lh6aweat.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--H3RkIuEC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lw0pzzsvay44lh6aweat.PNG" alt="Image description" width="877" height="225"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It has specific details on each service, like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Owner of the service.&lt;/li&gt;
&lt;li&gt;Distribution list (DL)/ServiceNow (SNOW) queue of the team managing the service.&lt;/li&gt;
&lt;li&gt;Language/framework of the service.&lt;/li&gt;
&lt;li&gt;Supported consumption types.&lt;/li&gt;
&lt;li&gt;Links to approved documentation for that microservice.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using a microservice catalog, any user in the enterprise can quickly &lt;strong&gt;find a service, its usage, its owners&lt;/strong&gt;, and so on.&lt;/p&gt;

&lt;p&gt;It gives an enterprise a &lt;strong&gt;&lt;u&gt;sense of governance and control at scale,&lt;/u&gt;&lt;/strong&gt; as you have a &lt;strong&gt;single source of truth&lt;/strong&gt; that you can start to refer to, while trying to answer other questions.&lt;/p&gt;

&lt;p&gt;Modern-day microservice catalogs go a step further and even show you dependencies between services, consumer details, usage metrics, and so on.&lt;/p&gt;

&lt;p&gt;So, you can build quite a lot of analytics on top of the data once you have centralized it to some extent.&lt;/p&gt;
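&lt;p&gt;As a rough illustration of that idea, here is a minimal Python sketch of the kind of analytics a centralized catalog makes possible. The entry fields (owner, dependencies) are hypothetical, and real tools like OpsLevel or Cortex expose far richer models:&lt;/p&gt;

```python
# A minimal sketch of analytics over centralized catalog data.
# Entry fields (owner, dependencies) are hypothetical, for illustration only.
from collections import Counter

catalog = [
    {"name": "payments-api", "owner": "payments-team", "dependencies": ["ledger-svc", "auth-svc"]},
    {"name": "ledger-svc",   "owner": "payments-team", "dependencies": ["auth-svc"]},
    {"name": "auth-svc",     "owner": "identity-team", "dependencies": []},
]

def services_per_owner(entries):
    """Count how many services each team owns."""
    return Counter(e["owner"] for e in entries)

def fan_in(entries):
    """Count how many services depend on each service (a rough criticality signal)."""
    return Counter(dep for e in entries for dep in e["dependencies"])

print(services_per_owner(catalog))
print(fan_in(catalog))
```

&lt;p&gt;Even this toy version answers real governance questions - which teams own the most services, and which services the whole landscape quietly depends on.&lt;/p&gt;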

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qipsUfq4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/swzboru9k4qwdwswgorl.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qipsUfq4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/swzboru9k4qwdwswgorl.PNG" alt="Image description" width="880" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see above, services will publish their information to the catalog, while consumers will leverage the same catalog to find relevant information about what they want to consume.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;How to start&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;An enterprise can &lt;strong&gt;start small,&lt;/strong&gt; for instance by documenting services in a shared spreadsheet, capturing information like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Name of service&lt;/li&gt;
&lt;li&gt;Owner Team&lt;/li&gt;
&lt;li&gt;Director/manager of the service&lt;/li&gt;
&lt;li&gt;Lifecycle of the service ( GA / Beta / Deprecated )&lt;/li&gt;
&lt;li&gt;Slack channel link&lt;/li&gt;
&lt;li&gt;JIRA board link&lt;/li&gt;
&lt;li&gt;Health URL/endpoint to check service status&lt;/li&gt;
&lt;li&gt;Dashboard/centralized location to search for logs&lt;/li&gt;
&lt;/ul&gt;
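&lt;p&gt;To make those spreadsheet columns concrete, here is a small sketch that models them as a typed record with a name-based lookup. The field and service names are purely illustrative:&lt;/p&gt;

```python
# A sketch of the spreadsheet columns above as a typed record, so the data
# can later be migrated into a real catalog tool. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class ServiceEntry:
    name: str
    owner_team: str
    manager: str
    lifecycle: str          # "GA", "Beta", or "Deprecated"
    slack_channel: str
    jira_board: str
    health_url: str
    logs_dashboard: str

def find_service(entries, name):
    """Look up a service by name -- the basic 'who owns this?' question."""
    return next((e for e in entries if e.name == name), None)

entries = [
    ServiceEntry("billing-svc", "billing-team", "jane.doe", "GA",
                 "#billing", "BILL", "https://billing/health", "https://logs/billing"),
]
hit = find_service(entries, "billing-svc")
print(hit.owner_team)  # billing-team
```

&lt;p&gt;Keeping the schema explicit from day one is what makes the eventual jump from spreadsheet to catalog tool painless.&lt;/p&gt;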

&lt;p&gt;However, as the number of microservices starts to grow, say past the 50-200 mark, the manual spreadsheet approach to documenting and cataloging microservices does not scale, and this leads the enterprise to search for &lt;strong&gt;&lt;u&gt;better, more enterprise-grade options.&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;From there, it might start using an enterprise-grade service catalog tool like &lt;strong&gt;OpsLevel&lt;/strong&gt; or &lt;strong&gt;Cortex&lt;/strong&gt; to manage the myriad services at scale.&lt;/p&gt;

&lt;p&gt;Many companies might even invest a lot of money, effort, and time in building an in-house microservice catalog from scratch as part of their engineering effort.&lt;/p&gt;

&lt;p&gt;It depends on the organization and the &lt;strong&gt;build-vs-buy pattern&lt;/strong&gt; they follow, but generally speaking, it might be a better decision to go for a third-party tool rather than building the same thing in-house, especially if the third-party tool ticks all the boxes you need.&lt;/p&gt;

&lt;p&gt;When you start using an enterprise-grade microservice catalog tool, it &lt;strong&gt;abstracts away the required information behind an intuitive UI, and provides the ability for the consumer to query the catalog for specific questions on a particular service or a set of services.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can tag services with rich metadata like environment, service tier, SLA, etc., which makes it easier for consumers to filter the data, and also to decide whether they should call the service at all.&lt;/p&gt;
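&lt;p&gt;A sketch of what such tag-based filtering could look like - the tag names (environment, tier, SLA) and services are illustrative, not any particular tool's schema:&lt;/p&gt;

```python
# A sketch of metadata-based filtering over tagged catalog entries.
# Tag names (environment, tier, sla) and services are illustrative.
services = [
    {"name": "auth-svc",         "tags": {"environment": "prod",    "tier": "1", "sla": "99.9"}},
    {"name": "report-svc",       "tags": {"environment": "prod",    "tier": "3", "sla": "99.0"}},
    {"name": "auth-svc-staging", "tags": {"environment": "staging", "tier": "1", "sla": "none"}},
]

def filter_by_tags(entries, **wanted):
    """Return entries whose tags match every key=value given."""
    return [e for e in entries
            if all(e["tags"].get(k) == v for k, v in wanted.items())]

prod_tier1 = filter_by_tags(services, environment="prod", tier="1")
print([e["name"] for e in prod_tier1])  # ['auth-svc']
```

&lt;p&gt;A consumer deciding whether to take a dependency on a service can then filter by tier and SLA before ever opening a conversation with the owning team.&lt;/p&gt;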

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--l8iAitIF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m41mf8tlt0jtdymmqbor.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--l8iAitIF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m41mf8tlt0jtdymmqbor.PNG" alt="Image description" width="880" height="561"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;Conclusion&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Scaling microservices, and the challenges associated with it, might appear trivial at first, when you only have a few services; but once you cross the magic mark of 30-50 services, &lt;strong&gt;&lt;u&gt;it actually starts making sense to invest in a solution to this problem&lt;/u&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Fortunately, there are quite a few tools and approaches available in the market that focus on how an engineering team can start solving this issue at scale.&lt;/p&gt;

&lt;p&gt;You can start small, but &lt;strong&gt;&lt;u&gt;ensure this is a part of your overall engineering strategy and approach&lt;/u&gt;&lt;/strong&gt;, so that you at least have all the information and metadata handy when you do decide to go for an enterprise-grade service catalog offering.&lt;/p&gt;

&lt;h2&gt;
  
  
  References -
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.opslevel.com/2020/04/21/why-you-need-a-microservice-catalog/"&gt;https://www.opslevel.com/2020/04/21/why-you-need-a-microservice-catalog/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.cortex.io/post/why-you-need-a-microservices-catalog-tool#:%7E:text=A%20microservices%20catalog%20is%20a,track%20of%20several%20disparate%20microservices"&gt;https://www.cortex.io/post/why-you-need-a-microservices-catalog-tool#:~:text=A%20microservices%20catalog%20is%20a,track%20of%20several%20disparate%20microservices&lt;/a&gt;.&lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=mwsfNio2Dho"&gt;https://www.youtube.com/watch?v=mwsfNio2Dho&lt;/a&gt;&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>cloud</category>
      <category>service</category>
    </item>
    <item>
      <title>Using ARMO Kubescape to scale kubernetes security adoption across an enterprise</title>
      <dc:creator>Turja Narayan Chaudhuri</dc:creator>
      <pubDate>Mon, 28 Mar 2022 11:01:33 +0000</pubDate>
      <link>https://dev.to/turjachaudhuri/using-armo-kubescape-to-scale-kubernetes-security-adoption-across-an-enterprise-5gio</link>
      <guid>https://dev.to/turjachaudhuri/using-armo-kubescape-to-scale-kubernetes-security-adoption-across-an-enterprise-5gio</guid>
      <description>&lt;p&gt;Note - This is not an introduction to Kubernetes.&lt;br&gt;
It is expected that the reader is already aware of what Kubernetes is and how it works.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;Agenda / Topics of discussion&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes is everywhere!&lt;/li&gt;
&lt;li&gt;Challenges of Kubernetes security adoption at scale across enterprises.&lt;/li&gt;
&lt;li&gt;Strategies to solve container security adoption challenges.&lt;/li&gt;
&lt;li&gt;Conclusion.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;Kubernetes is everywhere&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Enterprise visionaries and thought leaders do not agree on a lot of things, but even they have kind of conceded that &lt;strong&gt;&lt;u&gt;Kubernetes is quickly becoming the de-facto standard for application delivery across the IT landscape&lt;/u&gt;&lt;/strong&gt;, from startups to mid-sized companies to big enterprises.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;2021 Cloud Native Survey&lt;/strong&gt; (&lt;a href="https://www.cncf.io/reports/cncf-annual-survey-2021/"&gt;https://www.cncf.io/reports/cncf-annual-survey-2021/&lt;/a&gt;), organized by &lt;strong&gt;CNCF&lt;/strong&gt;, shows that the usage of Kubernetes is continuing to grow, and isn't likely to stop.&lt;br&gt;
Some data points of interest are -&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;96% of IT organizations are either evaluating or already using Kubernetes.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;5.6 million developers worldwide actively use Kubernetes.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--00CNXW6e--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3n9z6em0gkyeplsf48ws.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--00CNXW6e--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3n9z6em0gkyeplsf48ws.jpg" alt="Image description" width="880" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, at the same time, it is a well-known fact that &lt;strong&gt;Kubernetes is extremely hard to implement, or get right.&lt;/strong&gt;&lt;br&gt;
And, quite frankly, &lt;strong&gt;the hardest part of Kubernetes is getting its security right.&lt;/strong&gt;&lt;br&gt;
As a result, Kubernetes security has become a hot topic, and rightly so.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;State of Kubernetes Security report&lt;/strong&gt;, published by &lt;strong&gt;Red Hat&lt;/strong&gt; (&lt;a href="https://www.redhat.com/en/resources/state-kubernetes-security-report"&gt;https://www.redhat.com/en/resources/state-kubernetes-security-report&lt;/a&gt;), highlights a lot of challenges that are evident in the current Kubernetes ecosystem.&lt;br&gt;
Some interesting data points covered in the report are -&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;55% of enterprise IT companies confirmed that Kubernetes security concerns have delayed or slowed down production deployments.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;94% of respondents to the survey confirmed that they have faced at least one security incident in their Kubernetes environment in the last 12 months.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;59% of companies mentioned that their main concern about adopting container strategies is how to secure them and maintain a strong security posture.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--adp_VOu3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l8mc3jm35x4sgdqvyfiq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--adp_VOu3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l8mc3jm35x4sgdqvyfiq.jpg" alt="Image description" width="592" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The reason I am trying to highlight these points is to enforce the opinion that &lt;strong&gt;Kubernetes is hard, and implementing Kubernetes security is harder.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It is very important that an enterprise has a &lt;strong&gt;clear, and well-understood strategy&lt;/strong&gt; on how to handle the &lt;strong&gt;myriad challenges&lt;/strong&gt; that are evident in &lt;strong&gt;managing security of container deployments on an orchestration platform like Kubernetes.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;However, the good thing is - &lt;br&gt;
&lt;strong&gt;&lt;u&gt;in Enterprise IT, most problems, if not all, have solutions.&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7PwZVa6A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z5vo48n7pl2fq75do363.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7PwZVa6A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z5vo48n7pl2fq75do363.jpg" alt="Image description" width="225" height="225"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the recent past, there has been a &lt;strong&gt;strong focus&lt;/strong&gt; by vendors, cloud service providers, and others on &lt;strong&gt;tools, practices, and processes&lt;/strong&gt; that can &lt;strong&gt;seamlessly mitigate the challenges associated with implementing a comprehensive container security strategy&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;ARMO&lt;/u&gt;&lt;/strong&gt; (&lt;a href="https://github.com/armosec/kubescape"&gt;https://github.com/armosec/kubescape&lt;/a&gt;) is one such company doing impressive work in this sector.&lt;/p&gt;

&lt;p&gt;Their flagship product, &lt;strong&gt;&lt;u&gt;kubescape&lt;/u&gt;&lt;/strong&gt;, is one of the most &lt;strong&gt;comprehensive&lt;/strong&gt; and &lt;strong&gt;easy to use&lt;/strong&gt; container security solutions available in the market today.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nLPXZewS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jok4dbvu9mnggyw7f017.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nLPXZewS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jok4dbvu9mnggyw7f017.png" alt="Image description" width="880" height="805"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;What is kubescape?&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Kubescape is an &lt;strong&gt;open-source&lt;/strong&gt; Kubernetes tool providing a multi-cloud K8s single pane of glass, including risk analysis, security compliance, an RBAC visualizer, and image vulnerability scanning.&lt;/p&gt;

&lt;p&gt;Kubescape scans &lt;strong&gt;&lt;u&gt;K8s clusters, YAML files, and Helm charts&lt;/u&gt;&lt;/strong&gt;, detecting misconfigurations according to multiple frameworks (such as &lt;strong&gt;&lt;u&gt;NSA-CISA and MITRE ATT&amp;amp;CK&lt;/u&gt;&lt;/strong&gt;®), finding software vulnerabilities, and showing &lt;strong&gt;RBAC (role-based access control) violations&lt;/strong&gt; at early stages of the &lt;strong&gt;CI/CD&lt;/strong&gt; pipeline. It &lt;strong&gt;calculates risk scores instantly&lt;/strong&gt; and &lt;strong&gt;shows risk trends over time.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can find more details about kubescape at &lt;a href="https://github.com/armosec/kubescape"&gt;https://github.com/armosec/kubescape&lt;/a&gt; and &lt;a href="https://www.armosec.io/blog/kubescape-the-first-tool-for-running-nsa-and-cisa-kubernetes-hardening-tests/"&gt;https://www.armosec.io/blog/kubescape-the-first-tool-for-running-nsa-and-cisa-kubernetes-hardening-tests/&lt;/a&gt;.&lt;/p&gt;
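&lt;p&gt;To give a flavor of how this fits into a pipeline: kubescape can emit machine-readable scan output (for example, kubescape scan --format json), which you could post-process in CI. The sketch below assumes a simplified, hypothetical result shape - check kubescape's documentation for the real schema:&lt;/p&gt;

```python
# A sketch of post-processing a kubescape scan in CI. The result shape below
# is simplified and hypothetical -- consult kubescape's docs for the real one.
sample_results = {
    "controls": [
        {"id": "C-0016", "name": "Allow privilege escalation",      "status": "failed"},
        {"id": "C-0017", "name": "Immutable container filesystem",  "status": "passed"},
        {"id": "C-0055", "name": "Linux hardening",                 "status": "failed"},
    ]
}

def failed_controls(results):
    """Collect the IDs of controls that did not pass."""
    return [c["id"] for c in results["controls"] if c["status"] == "failed"]

def gate(results, max_failures=0):
    """Fail the pipeline when too many controls fail -- a simple CI gate."""
    return len(failed_controls(results)) <= max_failures

print(failed_controls(sample_results))  # ['C-0016', 'C-0055']
print(gate(sample_results))             # False
```

&lt;p&gt;Wiring a gate like this into CI/CD is one way the "early stages of the pipeline" promise becomes enforceable rather than advisory.&lt;/p&gt;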

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Why should I choose kubescape over other tools?&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A user-friendly UI for streamlined scans and test management.&lt;/li&gt;
&lt;li&gt;An instantly calculated risk score based on the current scan.&lt;/li&gt;
&lt;li&gt;Easy access to a history of past scans.&lt;/li&gt;
&lt;li&gt;Exceptions management, allowing Kubernetes admins to mark acceptable risk levels.&lt;/li&gt;
&lt;li&gt;The ability to build and create customized compliance frameworks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I have used it personally, and it is pretty easy to get onboarded with kubescape, &lt;strong&gt;&lt;u&gt;literally in 5 minutes&lt;/u&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Their UI is extremely &lt;strong&gt;elegant&lt;/strong&gt; and &lt;strong&gt;intuitive&lt;/strong&gt;, and the range of controls they have implemented as a default offering is &lt;strong&gt;comprehensive&lt;/strong&gt;, including some of the most common &lt;strong&gt;security controls&lt;/strong&gt; available in the market today, like &lt;strong&gt;MITRE&lt;/strong&gt;, &lt;strong&gt;NSA&lt;/strong&gt;, and so on.&lt;/p&gt;

&lt;p&gt;Please find below a snapshot of the output of a scan I ran on my personal cluster -&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JL7gnb-Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/axlg3q1z9of3s3lfzmps.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JL7gnb-Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/axlg3q1z9of3s3lfzmps.PNG" alt="Image description" width="880" height="432"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;Challenges of Kubernetes security adoption at scale across enterprises&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In this blog, I am not going to walk through a tutorial of kubescape - there are already wonderful blog posts and YouTube videos that can guide you through the entire process step by step, and quite frankly, the tool is designed so that it is extremely easy to get started.&lt;/p&gt;

&lt;p&gt;What I want to focus on in this blog is the &lt;strong&gt;&lt;u&gt;challenge of how to use kubescape to scale the adoption of a standard security tool and enforce a consistent Kubernetes security posture across an enterprise&lt;/u&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;To understand the solution, we must first be cognizant of the problem that exists.&lt;/p&gt;

&lt;p&gt;First, you need to understand that &lt;strong&gt;enterprises are not small entities&lt;/strong&gt; comprising 2-3 teams or 50 people. So, how do we define an enterprise -&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It has multiple departments and locations.&lt;/li&gt;
&lt;li&gt;It has hundreds of teams spread across the company, possibly globally distributed.&lt;/li&gt;
&lt;li&gt;Everyone has clear responsibilities and hierarchies.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In a big enterprise, an initiative like implementing security is much more &lt;strong&gt;difficult&lt;/strong&gt; and &lt;strong&gt;prolonged&lt;/strong&gt; than enforcing the same controls in a small or medium-sized company.&lt;/p&gt;

&lt;p&gt;Mostly, what happens in these cases is that, due to a &lt;strong&gt;lack of central governance, controls, and policies&lt;/strong&gt;, each team implements security controls in their Kubernetes clusters in their own way, &lt;strong&gt;resulting in divergence and chaos&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I call this the &lt;strong&gt;&lt;u&gt;Kubernetes Security Divide&lt;/u&gt;&lt;/strong&gt; -&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EqxGOljb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2b2fhqm7d41slke8y5xh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EqxGOljb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2b2fhqm7d41slke8y5xh.png" alt="Image description" width="880" height="504"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see from the above image -&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each team implements their own security policies.&lt;/li&gt;
&lt;li&gt;There is &lt;strong&gt;no standardization&lt;/strong&gt;, or cohesive approach across the enterprise.&lt;/li&gt;
&lt;li&gt;Every team is &lt;strong&gt;working in silos&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;However, the target state of the enterprise is a situation where all teams follow a standard set of guidelines, practices, and patterns.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;But, it is really very hard to move from left-to-right, across the divide.&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So, in a nutshell, some of the &lt;strong&gt;challenges of trying to scale a Kubernetes security initiative across hundreds of teams in an enterprise&lt;/strong&gt; could be -&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1VEXsjzu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uqno6zlhnatuh1kg69j2.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1VEXsjzu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uqno6zlhnatuh1kg69j2.PNG" alt="Image description" width="880" height="374"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;Strategies to solve container security adoption challenges&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;u&gt;How can kubescape help?&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;Enterprise security architects can leverage &lt;strong&gt;kubescape&lt;/strong&gt; as a tool to &lt;strong&gt;&lt;u&gt;consolidate security practices across an enterprise, and ensure that all teams are adhering to a standard set of security  guidelines, and policies.&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let's try to evaluate the &lt;strong&gt;&lt;u&gt;different strategies&lt;/u&gt;&lt;/strong&gt; that can be leveraged to solve these challenges - &lt;/p&gt;

&lt;p&gt;In a nutshell, we will discuss the strategies below for increasing the adoption of a consistent security posture for containerized workloads across an enterprise, leveraging kubescape by ARMO -&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RQ9qWB_l--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vpwe4baow7lgxgdsx9r1.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RQ9qWB_l--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vpwe4baow7lgxgdsx9r1.PNG" alt="Image description" width="880" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Strategy 1: Having an enterprise-wide container/Kubernetes security framework in place.&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every enterprise has its own requirements. No two companies are alike. The applications within the enterprise might vary, but &lt;strong&gt;they must still follow a standard set of policies mandated across the enterprise.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Kubescape makes it very easy for an enterprise to create a &lt;strong&gt;custom security framework&lt;/strong&gt; that the enterprise can push to all service lines and teams as a must-have.&lt;/p&gt;

&lt;p&gt;Kubescape offers &lt;strong&gt;4 security frameworks out-of-the-box&lt;/strong&gt;, as shown below - &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VqQ0adVy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0r5o15rxkwcqzk9z415m.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VqQ0adVy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0r5o15rxkwcqzk9z415m.PNG" alt="Image description" width="880" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At the same time, it also offers &lt;strong&gt;70 pre-built security controls&lt;/strong&gt;, shared across the above 4 frameworks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ijCLRlRc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9nmyz3aq4ngxunf9h0rd.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ijCLRlRc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9nmyz3aq4ngxunf9h0rd.PNG" alt="Image description" width="880" height="516"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;However, an enterprise might need a &lt;strong&gt;custom framework&lt;/strong&gt; of its own, selecting its own set of controls from the 70 pre-built controls that are provided out-of-the-box. &lt;/p&gt;

&lt;p&gt;This could be due to a recommendation from the enterprise InfoSec team, or due to external regulators that have mandated those controls for the enterprise.&lt;/p&gt;

&lt;p&gt;The point is - there can be instances where a &lt;strong&gt;custom combination&lt;/strong&gt; of the provided security controls is needed to &lt;strong&gt;align with the security objectives of an enterprise&lt;/strong&gt;, as shown below -&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--j48dhCPt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yo9cmsggyyo13kpbnfrx.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--j48dhCPt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yo9cmsggyyo13kpbnfrx.PNG" alt="Image description" width="829" height="833"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And the best thing is that this is &lt;strong&gt;pretty easy to do in Kubescape.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As you can see in the below image, I have created a custom framework for my enterprise. &lt;br&gt;
&lt;strong&gt;In this framework, I only included the critical controls available.&lt;/strong&gt; So the idea could be that any cluster deployment in my enterprise must pass these critical controls, included as part of my enterprise framework.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rArtzLbA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pslgyy6pxegwye80q6sm.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rArtzLbA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pslgyy6pxegwye80q6sm.PNG" alt="Image description" width="880" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Without that, I might &lt;strong&gt;not allow&lt;/strong&gt; the deployments to be pushed to PROD, or something along those lines.&lt;/p&gt;

&lt;p&gt;The best thing about kubescape is that, if I know which controls I need to include in my custom framework, it &lt;strong&gt;takes only 5 minutes&lt;/strong&gt; to create one, and we are good to go.&lt;/p&gt;
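&lt;p&gt;As a sketch, once the custom framework has been saved in the ARMO portal, a scan against it could look like the command below; note that the framework name and account ID are illustrative placeholders, and the exact syntax depends on your kubescape version, so please verify against the current CLI help - &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# "enterprise-critical" and the account ID below are illustrative placeholders
kubescape scan framework enterprise-critical --account &lt;my-armo-account-id&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;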

&lt;p&gt;As of today, we can select any of the available 70 controls to be included in our custom framework, but going forward there could &lt;strong&gt;potentially be more controls included&lt;/strong&gt; as part of the default offering.&lt;/p&gt;

&lt;p&gt;Also, since the controls are categorized as &lt;strong&gt;Critical&lt;/strong&gt;, &lt;strong&gt;High&lt;/strong&gt;, &lt;strong&gt;Medium&lt;/strong&gt;, or &lt;strong&gt;Low&lt;/strong&gt;, it is easy for a security engineer with minimal knowledge of kubernetes to decide which ones should be included in the custom framework.&lt;/p&gt;

&lt;p&gt;So, now that we have a custom framework for our enterprise in place, what next?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Strategy 2 : Provide a shift-left security platform, with a focus on enhancing the developer experience.&lt;br&gt;
&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most enterprises make the wrong assumption that their security posture depends on their security teams. While that is true at some level, &lt;strong&gt;&lt;u&gt;mostly it is up to the developers to determine how security will be implemented at the enterprise level&lt;/u&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Mostly, &lt;u&gt;&lt;strong&gt;developers do not care much for security&lt;/strong&gt;&lt;/u&gt;. &lt;/p&gt;

&lt;p&gt;Indeed, the situation has improved a lot nowadays, with individual developers being more aware of security controls than ever before, but at a high level you can consider that &lt;strong&gt;&lt;u&gt;developers are not as concerned about security as they are about developing and pushing their features into production.&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So, say a developer does all his work of creating and writing the kubernetes application source code, the YAML files for deploying the application, the Dockerfile for building the image, and so on. At this point, the &lt;strong&gt;developer has no clue whether his implementation is compliant with the custom kubernetes security framework that is enforced by the InfoSec team at the enterprise level&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;So, the developer will make the changes in his local branch, push them to dev, and raise a PR. Maybe, &lt;strong&gt;when the CI tests run, many issues concerning insecure kubernetes implementations will get flagged&lt;/strong&gt;, which means the developer will again have to spend considerable time fixing them, pushing code again, waiting for the CI tests to finish, and so on.&lt;/p&gt;

&lt;p&gt;This is not at all aligned with the agile software delivery lifecycle that we want enterprises to follow. &lt;/p&gt;

&lt;p&gt;With that in mind, it would be best if we &lt;strong&gt;&lt;u&gt;could push the analysis/scanning against the custom security framework to the left - that is, the developer gets feedback on at least the easy-to-understand static issues directly while he/she is coding in the IDE.&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is also called &lt;strong&gt;&lt;u&gt;Shift-Left security&lt;/u&gt;&lt;/strong&gt;, and you can use the &lt;strong&gt;&lt;u&gt;VSCode extension for kubescape&lt;/u&gt;&lt;/strong&gt; to achieve it. With the VSCode extension, the dev can directly scan his kubernetes YAML files during the development phase, utilizing the full power of kubescape without having to leave his IDE - this can result in a tremendous productivity increase, with no more waiting for costly CI tests to run just to get feedback.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dSWwVfQH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1na1a9t8pna9s2u06gee.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dSWwVfQH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1na1a9t8pna9s2u06gee.PNG" alt="Image description" width="880" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Kubescape team has published a wonderful blog and video on how to get started with &lt;strong&gt;&lt;u&gt;shift-left security using the kubescape VSCode extension&lt;/u&gt;&lt;/strong&gt;; you can find the details at - &lt;a href="https://www.armosec.io/blog/find-kubernetes-security-issues-while-coding/"&gt;https://www.armosec.io/blog/find-kubernetes-security-issues-while-coding/&lt;/a&gt;&lt;/p&gt;
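&lt;p&gt;For developers who prefer the terminal, the same shift-left feedback is available from the kubescape CLI; the file names below are placeholders for your own manifests - &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Scan local kubernetes manifests before raising a PR -
# the same check the CI pipeline will run later
kubescape scan framework nsa deployment.yaml service.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;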

&lt;p&gt;This is something that is very important to understand - &lt;strong&gt;&lt;u&gt;the security tools that you are trying to enforce at scale across your enterprise must be aligned with the developer. &lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can see many tools in the market with hundreds of custom controls, dashboards, predictive analysis, etc., but most of these tools are directed at the security teams, and not at the developers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Developer buy-in is a must to ensure that your at-scale adoption exercise is successful.&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you provide your developers with an easy-to-integrate tool and a clean, functional user interface, they will surely use your tool and fix security vulnerabilities at the source.&lt;/p&gt;

&lt;p&gt;This is where, from my perspective, &lt;strong&gt;&lt;u&gt;kubescape shines&lt;/u&gt;&lt;/strong&gt; - it has a very &lt;strong&gt;intuitive&lt;/strong&gt; and &lt;strong&gt;easy-to-use interface&lt;/strong&gt;, and at the same time it has an &lt;strong&gt;&lt;u&gt;equal focus on both local development teams and central security teams&lt;/u&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Strategy 3 : Standardization of security controls and tools across an enterprise&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One thing that is very important for any at-scale adoption of security initiatives across an enterprise to succeed is enforcing &lt;strong&gt;&lt;u&gt;standardization across the board&lt;/u&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In most mature enterprises, &lt;strong&gt;k8s cluster deployment, management, and operation&lt;/strong&gt; is &lt;strong&gt;not performed directly via a CSP (Cloud Service Provider) console or via manual scripts&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Most enterprises have a &lt;strong&gt;&lt;u&gt;self-service catalog&lt;/u&gt;&lt;/strong&gt;, or some sort of an &lt;strong&gt;&lt;u&gt;automation pipeline&lt;/u&gt;&lt;/strong&gt;, using which an end-user can request to provision a &lt;strong&gt;&lt;u&gt;fully functional cluster&lt;/u&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A typical &lt;strong&gt;self-service portal&lt;/strong&gt; might look like below -&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_m0zYx5j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pvxoine0mezh1s4lfv6f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_m0zYx5j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pvxoine0mezh1s4lfv6f.png" alt="Image description" width="880" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Such an enterprise-wide offering of standard kubernetes components like clusters, namespaces, etc ensures that there is a &lt;strong&gt;&lt;u&gt;level of standardization that is pre-enforced, across all deployments, regardless of the teams/business units involved.&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Also, such offerings typically handle all &lt;strong&gt;&lt;u&gt;cross-cutting concerns&lt;/u&gt;&lt;/strong&gt; that are common across all teams in the enterprise, for example -&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--naMVgE8P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qyomg7us9vpblkvvq9x3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--naMVgE8P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qyomg7us9vpblkvvq9x3.png" alt="Image description" width="880" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, one of the primary cross-cutting concerns is &lt;strong&gt;&lt;u&gt;Security&lt;/u&gt;&lt;/strong&gt;, and this is where &lt;strong&gt;kubescape&lt;/strong&gt; comes into the picture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;All kubernetes installations/cluster setups across the enterprise must include ARMO kubescape pre-installed, as a default.&lt;br&gt;
&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This will ensure that project/dev teams do not have to do this as an additional step.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;&lt;u&gt;kubescape pre-installed k8s cluster&lt;/u&gt;&lt;/strong&gt; will go a long way in &lt;strong&gt;&lt;u&gt;establishing kubescape as the standard of choice for enforcing container security in the enterprise&lt;/u&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This can be done easily using the &lt;strong&gt;&lt;u&gt;kubescape helm chart&lt;/u&gt;&lt;/strong&gt;, which deploys kubescape in a separate namespace within the cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wEe7nlNK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oa330luspq2vzckjjww3.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wEe7nlNK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oa330luspq2vzckjjww3.PNG" alt="Image description" width="880" height="661"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, basically, if the enterprise uses a custom script or process to provision new k8s clusters for end-users on demand, they just need to &lt;strong&gt;&lt;u&gt;add a new section to install the kubescape helm chart&lt;/u&gt;&lt;/strong&gt;, which can be done pretty easily.&lt;/p&gt;

&lt;p&gt;Once this is done, &lt;strong&gt;&lt;u&gt;kubescape will become one of the built-in tools available as part of the enterprise kubernetes offering.&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As you can see, &lt;strong&gt;in-cluster deployment of kubescape is pretty simple&lt;/strong&gt; and easy to get started with -&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JBayfBCO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5c4zilix75392x9unk4s.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JBayfBCO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5c4zilix75392x9unk4s.PNG" alt="Image description" width="880" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Strategy 4 : Develop/invest in a central excellence team dedicated to kubernetes security. &lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This team will be in charge of the &lt;strong&gt;&lt;u&gt;overall security strategy, policy enforcement, and security posture management across the entire enterprise.&lt;br&gt;
&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
The members of this team should dictate the &lt;strong&gt;kubescape controls&lt;/strong&gt; that will be enforced as part of the custom enterprise framework, ensure that all the security standards are enforced correctly, &lt;strong&gt;&lt;u&gt;provide training and guidance on kubescape usage to the different development teams, and evangelize kubescape adoption&lt;/u&gt;&lt;/strong&gt; as a single source of truth for kubernetes security.&lt;/p&gt;

&lt;p&gt;This team will be &lt;strong&gt;&lt;u&gt;placed horizontally&lt;/u&gt;&lt;/strong&gt;, and will interact with the different product teams belonging to the different service lines (SLs).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;So, all kubernetes security topics will be managed by this team, centrally, and then pushed to the different product/project teams within the enterprise&lt;/u&gt;&lt;/strong&gt;, as shown below -&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7rM0VIXu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rc8nmpsudum1341tuv78.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7rM0VIXu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rc8nmpsudum1341tuv78.PNG" alt="Image description" width="880" height="378"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;However, this team &lt;strong&gt;&lt;u&gt;should include representation from different enterprise teams, to ensure that all parts of the enterprise, and key stakeholders, are on board.&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At the same time, this team should have a &lt;strong&gt;&lt;u&gt;strong collaboration with the vendor team - in this case, the kubescape customer success/pre-sales/product team.&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Only then can the at-scale security initiative leveraging kubescape be successful.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;&lt;u&gt;sample segregation/structuring of this team&lt;/u&gt;&lt;/strong&gt;, including different interactions, could be as shown below -&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kIOYMbSe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cll24bilefw2m0i35p1a.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kIOYMbSe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cll24bilefw2m0i35p1a.PNG" alt="Image description" width="880" height="690"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Strategy 5 : Inject security initiatives into automation/CICD practices&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;No security adoption initiative can scale, or be successful, if it is not automated.&lt;br&gt;
With kubescape, you have &lt;strong&gt;pre-built integrations available&lt;/strong&gt;, so you can directly inject kubescape into CI/CD platforms like Jenkins and Azure DevOps.&lt;/p&gt;

&lt;p&gt;The best thing about kubescape is that, as always, the integration with other ecosystem providers and tools is extremely seamless and elegant.&lt;/p&gt;

&lt;p&gt;For example, you can easily integrate kubescape with &lt;strong&gt;Jenkins CI/CD, CircleCI, GitLab, GitHub Actions, and Azure DevOps&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Let me show you a small example using Azure DevOps (more details can be found at - &lt;a href="https://hub.armo.cloud/docs/azure-devops-pipeline"&gt;https://hub.armo.cloud/docs/azure-devops-pipeline&lt;/a&gt;)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;trigger:
- master

pool:
  vmImage: 'ubuntu-18.04'

container: jmferrer/azure-devops-agent:latest

steps:
- script:  |
    mkdir -p $HOME/.local/bin
    export PATH=$PATH:$HOME/.local/bin
    curl -s https://raw.githubusercontent.com/armosec/kubescape/master/install.sh | /bin/bash
    kubescape scan framework nsa *.yaml  
  displayName: 'Run Kubescape'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see above, just adding the appropriate &lt;strong&gt;script task&lt;/strong&gt; to the pipeline ensures that the k8s objects in the YAML files are scanned as part of the pipeline.&lt;/p&gt;
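&lt;p&gt;To make the scan actually gate the pipeline, kubescape exposes a failure threshold; the flag below reflects the CLI at the time of writing, so double-check it against the current help output - &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Fail the step (non-zero exit code) if any control fails,
# i.e. tolerate 0% failed resources
kubescape scan framework nsa *.yaml --fail-threshold 0
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;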

&lt;p&gt;Once the pipeline runs, you can see the results in the Azure DevOps console as logs -&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8FJ6M8mP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b2e4dhx20uqz9lt2z0r3.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8FJ6M8mP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b2e4dhx20uqz9lt2z0r3.JPG" alt="Image description" width="880" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In most enterprises, there is a &lt;strong&gt;separate DevOps practice, or CoE&lt;/strong&gt;, that manages all the DevOps pipelines and CI/CD processes and practices across the enterprise.&lt;/p&gt;

&lt;p&gt;Sometimes, these teams &lt;strong&gt;&lt;u&gt;use standard pipeline templates&lt;/u&gt;&lt;/strong&gt; to get started with a project, rather than starting from scratch. For example, there could be a standard DevOps/CICD pipeline that already includes the different security components - SAST tools like Checkmarx/Veracode, code quality tools like SonarQube, and so on.&lt;/p&gt;

&lt;p&gt;In such cases, &lt;strong&gt;&lt;u&gt;the kubescape task should also be added as part of the pre-built integrations&lt;/u&gt;&lt;/strong&gt;, so that whenever any team creates a new CICD pipeline for cloud-native or containerized applications, the kubescape plugin gets activated by default, and all k8s YAML files/helm packages are scanned as part of the pipeline.&lt;/p&gt;

&lt;p&gt;Again, the message that I want to push here is that &lt;strong&gt;&lt;u&gt;k8s security practices should be pushed as part of the enterprise-wide standards, so that all teams, irrespective of where they are in their cloud-native journey&lt;/u&gt;&lt;/strong&gt;, can leverage those security standards - in this case, &lt;strong&gt;&lt;u&gt;ARMO kubescape&lt;/u&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;Conclusion&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;With a simple Google search, anyone can find hundreds, or even thousands, of articles, whitepapers, blogs, and tutorials on kubernetes security, and approaches for implementing it.&lt;/p&gt;

&lt;p&gt;However, those articles &lt;strong&gt;do not focus&lt;/strong&gt; on why and how an enterprise &lt;strong&gt;needs to harmonize its choice of a kubernetes security tool with a strategy, or a set of approaches, that ensures the tool is leveraged at scale, across the enterprise, in the intended way.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I hope the readers can focus on the below &lt;strong&gt;&lt;u&gt;2 key takeaways&lt;/u&gt;&lt;/strong&gt; from this article -&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;u&gt;Choosing the right tool for the job is pretty important&lt;/u&gt;&lt;/strong&gt; - 
In this blog, I demonstrate how you can take an awesome open-source project called &lt;strong&gt;kubescape by ARMO and leverage it to implement and enforce a set of security standards, practices, patterns, and principles across the entire enterprise.&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--h5L49jJy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/24ijndw406gbqwruoass.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--h5L49jJy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/24ijndw406gbqwruoass.jpg" alt="Image description" width="259" height="194"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;However, &lt;strong&gt;just having a tool is counter-productive&lt;/strong&gt;, if you do not have a &lt;strong&gt;&lt;u&gt;consistent vision and streamlined strategy to support your initiatives.&lt;/u&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--K-Wry9KC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lb4vntxdhguo3jaf67mt.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--K-Wry9KC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lb4vntxdhguo3jaf67mt.jpg" alt="Image description" width="267" height="189"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Always remember one thing - &lt;strong&gt;&lt;u&gt;Kubernetes is hard, Kubernetes security is harder, but scaling a kubernetes security initiative across an enterprise is the hardest.&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zYbJTNKC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i1dpg2js190ow5ootud3.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zYbJTNKC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i1dpg2js190ow5ootud3.jpg" alt="Image description" width="227" height="222"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But I hope that some of the &lt;strong&gt;approaches&lt;/strong&gt; I have laid out in this article can &lt;strong&gt;&lt;u&gt;help in solving some of these challenges&lt;/u&gt;&lt;/strong&gt;, and ensure that the reader does not face the same issues/problems that I had to go through when implementing a similar exercise at one of my past companies.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wrlbjN8j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rukz7dby2uj0rd9ydr8f.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wrlbjN8j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rukz7dby2uj0rd9ydr8f.jpg" alt="Image description" width="550" height="550"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;References&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Special thanks to &lt;strong&gt;Kunal Kushwaha&lt;/strong&gt;, whose video on kubescape introduced me to the kubernetes security week challenge. You can find more details here - &lt;a href="https://www.youtube.com/watch?v=SDpacCd5518"&gt;https://www.youtube.com/watch?v=SDpacCd5518&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Awesome blogs and tutorials on the ARMO blog by &lt;strong&gt;Ben Hirschberg&lt;/strong&gt;, &lt;strong&gt;Jonathan Kaftzan&lt;/strong&gt;, and &lt;strong&gt;Leonid Sandler&lt;/strong&gt; at &lt;a href="https://www.armosec.io/blog/"&gt;https://www.armosec.io/blog/&lt;/a&gt;, which helped me get started with, and implement, kubescape.
The specific article I referenced: &lt;a href="https://www.armosec.io/blog/find-kubernetes-security-issues-while-coding/"&gt;https://www.armosec.io/blog/find-kubernetes-security-issues-while-coding/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;The Kubescape Git repo, which has an abundance of relevant and critical information related to the project, and kubernetes security in general; you can find more details at - &lt;a href="https://github.com/armosec/kubescape"&gt;https://github.com/armosec/kubescape&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;All diagrams are drawn by me, using the awesome excalidraw tool (&lt;a href="https://excalidraw.com/"&gt;https://excalidraw.com/&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>security</category>
      <category>kubescape</category>
      <category>compliance</category>
    </item>
    <item>
      <title>Azure Trial Hackathon - CloudVoter - Vote for your favorite Public Cloud provider</title>
      <dc:creator>Turja Narayan Chaudhuri</dc:creator>
      <pubDate>Sun, 06 Mar 2022 11:27:05 +0000</pubDate>
      <link>https://dev.to/turjachaudhuri/azure-trial-hackathon-submission-favorite-public-cloud-provider-voting-app-ilj</link>
      <guid>https://dev.to/turjachaudhuri/azure-trial-hackathon-submission-favorite-public-cloud-provider-voting-app-ilj</guid>
      <description>&lt;h3&gt;
  
  
  Overview of My Submission
&lt;/h3&gt;

&lt;h3&gt;
  
  
  Submission Category:
&lt;/h3&gt;

&lt;p&gt;The category chosen by me is - Computing Captains&lt;br&gt;
I have hosted my application on AKS, and used ACR to host the container images&lt;/p&gt;
&lt;h3&gt;
  
  
  Link to Code on GitHub
&lt;/h3&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--566lAguM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/turjachaudhuri"&gt;
        turjachaudhuri
      &lt;/a&gt; / &lt;a href="https://github.com/turjachaudhuri/microsoft-azure-trial-hackathon-dev-to"&gt;
        microsoft-azure-trial-hackathon-dev-to
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      https://dev.to/devteam/hack-the-microsoft-azure-trial-on-dev-2ne5
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;h1&gt;
microsoft-azure-trial-hackathon-dev-to&lt;/h1&gt;
&lt;p&gt;This application repository has been created to host the application code for the application developed in response to the Microsoft Azure Trial Hackathon on Dev
You can find more details about the hackathon at - &lt;a href="https://dev.to/devteam/hack-the-microsoft-azure-trial-on-dev-2ne5" rel="nofollow"&gt;https://dev.to/devteam/hack-the-microsoft-azure-trial-on-dev-2ne5&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;There are many categories to select for in which you can deploy/choose your application
They are -&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;AI Aces: Use Azure Artificial Intelligence &amp;amp; Machine Learning services (ex: Azure Bot Service, Cognitive Search, Computer Vision, Custom Vision, LUIS, ML, etc) to build a new application.&lt;/li&gt;
&lt;li&gt;Computing Captains: Use Azure Compute Services (ex: Azure Functions, App Service, AKS, etc) to build a new application.&lt;/li&gt;
&lt;li&gt;Low-Code Legends: Use Azure low code/no code Fusion development services (with the Power Apps trial add-on) to build a new application.&lt;/li&gt;
&lt;li&gt;Java Jackpot: Use Azure's Java services to build a new Java app.&lt;/li&gt;
&lt;li&gt;Wacky Wildcards: Create a silly, weird, and/or totally random application using Microsoft Azure's services that doesn’t…&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/turjachaudhuri/microsoft-azure-trial-hackathon-dev-to"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h3&gt;
  
  
  Additional Resources / Info
&lt;/h3&gt;


&lt;p&gt;This screenshot shows the actual app page that is running on AKS.&lt;br&gt;
It is a simple voting app, backed by a Redis cache (as the backend).&lt;br&gt;
The app is accessible at - &lt;a href="http://20.62.221.90/"&gt;http://20.62.221.90/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cb8WB_V0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/crfoewazjtf1nesdmgwp.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cb8WB_V0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/crfoewazjtf1nesdmgwp.PNG" alt="Image description" width="880" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, Azure is clearly the winner.&lt;/p&gt;

</description>
      <category>azuretrialhack</category>
    </item>
    <item>
      <title>A CI/CD Pipeline for AWS Serverless Applications using Azure DevOps</title>
      <dc:creator>Turja Narayan Chaudhuri</dc:creator>
      <pubDate>Mon, 24 Jan 2022 10:49:19 +0000</pubDate>
      <link>https://dev.to/turjachaudhuri/a-cicd-pipeline-for-aws-serverless-applications-using-azure-devops-39ha</link>
      <guid>https://dev.to/turjachaudhuri/a-cicd-pipeline-for-aws-serverless-applications-using-azure-devops-39ha</guid>
      <description>&lt;p&gt;Deploying AWS applications using Azure sounds insane, right? However, it is pretty cool and interesting to do, and quite easy as well. So, read on.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;What am I trying to do?&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A few days back, I created a serverless app on AWS powered by AWS SAM. It was a pretty simple app, consisting of a single Lambda function that validates whether a given AccessKeyID is valid or not. You can find all the details about the app and the corresponding source code here: &lt;a href="https://dev.to/turjachaudhuri/a-serverless-api-to-validate-aws-access-keys-based-on-aws-sam-2l3d"&gt;https://dev.to/turjachaudhuri/a-serverless-api-to-validate-aws-access-keys-based-on-aws-sam-2l3d&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Using a SAM template to control all the components of a serverless setup was pretty awesome, but I wanted to go a step further and add a CI/CD pipeline, so that whenever I pushed changes to my source code on Git, the pipeline would automatically build and deploy the whole serverless application to AWS in a controlled manner. I managed to achieve that quite easily using Travis CI as my continuous integration pipeline. Details on that can be found here: &lt;a href="https://dev.to/turjachaudhuri/a-cicd-pipeline-using-git-and-travis-ci-for-a-serverless-app-based-on-sam-and-c-35n"&gt;https://dev.to/turjachaudhuri/a-cicd-pipeline-using-git-and-travis-ci-for-a-serverless-app-based-on-sam-and-c-35n&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Even though Travis CI worked even better than I expected it to, I still wanted to continue my DevOps journey and experiment with other tools and solutions. And what better way to get your hands dirty than Azure DevOps?&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;What is Azure DevOps?&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Essentially, Azure DevOps is a whole lot of things. Microsoft rebranded Visual Studio Team Services (VSTS), the DevOps offering that accompanied Visual Studio for years, as the cloud-hosted Azure DevOps.&lt;/p&gt;

&lt;p&gt;You can create exhaustive build and release pipelines for a wide range of supported frameworks and runtimes using Azure DevOps. Microsoft also provides, off the shelf, a lot of handy build and release tasks that you can use to push/deploy changes in a controlled but automated fashion to a wide range of supported platforms, AWS among them.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;What is the basic premise of all this?&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;At the basic level, what we are trying to achieve is this: whenever someone pushes to our code repo in Git, we build the project (written in .NET Core 2.0), publish it (to obtain the publish artifacts), push the published code zip to an S3 bucket, transform the input SAM template file, template.json, into a YAML file referencing the S3 code zip, and then deploy the whole setup to AWS using AWS CloudFormation.&lt;/p&gt;

&lt;p&gt;The basic assumption here is that anyone reading this blog knows how to create and configure AWS serverless apps based on AWS SAM. If not, please read my blog for detailed steps:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/turjachaudhuri/my-first-application-in-aws-serverless-application-repository-1ahc"&gt;https://dev.to/turjachaudhuri/my-first-application-in-aws-serverless-application-repository-1ahc&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;How does Azure DevOps help?&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In any traditional CI/CD setup, one common pattern can be discerned. There is usually a source control repo that triggers everything. Then there is a build pipeline, which builds the project, runs the unit tests, and publishes artifacts. Then there are one or more release pipelines that use those artifacts to finally deploy the solution into, say, a cloud environment, a Docker image, and so on.&lt;/p&gt;

&lt;p&gt;Azure DevOps provides a seamless, step-by-step approach to doing exactly that. Here we will use Azure DevOps to do the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We have a Git repo holding the source code for the project, including the SAM template file, template.json.&lt;/li&gt;
&lt;li&gt;We create a build pipeline that uses the .NET Core Lambda deployment task to package our code into a zip, upload it to S3, and transform template.json into a serverless-output.yaml file.&lt;/li&gt;
&lt;li&gt;We publish the created artifacts into a staging directory so that they can be referenced by subsequent release pipelines.&lt;/li&gt;
&lt;li&gt;We create a release pipeline that uses the AWS CloudFormation Create/Update Stack task to deploy our serverless app using the serverless-output.yaml file and the code zip from S3.&lt;/li&gt;
&lt;/ol&gt;
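&lt;p&gt;This post configures everything through the classic UI, but the same build could be expressed as an Azure Pipelines YAML definition. The sketch below is a hedged approximation that uses plain script steps with the AWS CLI (aws cloudformation package performs the same zip-upload-and-transform as the Lambda task); the bucket name and agent image are placeholders, not values from this post.&lt;/p&gt;

```yaml
# Hedged sketch only: the classic-UI build pipeline from this post,
# approximated as an Azure Pipelines YAML build with plain script steps.
trigger:
  - master

pool:
  vmImage: 'ubuntu-latest'   # placeholder agent image

steps:
  # Build and publish the .NET Core Lambda project
  - script: dotnet publish -c Release -o out
    displayName: 'Build and publish'

  # Zip the code, upload it to S3, and rewrite template.json into
  # serverless-output.yaml with the S3 code location filled in
  - script: aws cloudformation package --template-file template.json --s3-bucket my-sam-artifacts --output-template-file serverless-output.yaml
    displayName: 'Package SAM template'

  # Publish the transformed template for the release pipeline
  - task: PublishBuildArtifacts@1
    inputs:
      PathtoPublish: 'serverless-output.yaml'
      ArtifactName: 'SAMDeliverables'
```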

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;Sounds good, let's start!&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Step 1:&lt;/u&gt;&lt;/strong&gt; To connect from Azure DevOps to AWS, we need AWS credentials. However, it is not good practice to hardcode credentials like the access key and secret key in scripts. So, we will create a Service Endpoint in Azure DevOps. This is nothing but a connection to AWS that we can then reference in our build/release pipelines by its name, rather than scripting the credentials into the pipelines ourselves.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--e3d84hRX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n7nxrwxt79bizn5efzlc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--e3d84hRX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n7nxrwxt79bizn5efzlc.png" alt="Image description" width="880" height="499"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Step 2:&lt;/u&gt;&lt;/strong&gt; Set up a Git repo to host the source code. This is pretty simple and easy to do.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HaVtW6T3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s6ytgzbpa3r5yjbhnpdc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HaVtW6T3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s6ytgzbpa3r5yjbhnpdc.png" alt="Image description" width="880" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Step 3 :&lt;/u&gt;&lt;/strong&gt; Configure a Build Pipeline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;3.1:&lt;/u&gt;&lt;/strong&gt; I have created a build pipeline with the agent pool set to Hosted Linux Preview. Think of the agent pool as the VM where your code is downloaded and the build process runs as per your specifications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--auNy7O6o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8wcxus0cetuvejw5o6jl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--auNy7O6o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8wcxus0cetuvejw5o6jl.png" alt="Image description" width="880" height="292"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;3.2:&lt;/u&gt;&lt;/strong&gt; The next step is to hook the build process up to a source control repo. The code in the repo will be the source input for the build process. Select your project, repo, and branch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uORPB0Nz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q1d7k1k4c2aywe1mr2i3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uORPB0Nz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q1d7k1k4c2aywe1mr2i3.png" alt="Image description" width="880" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;3.3:&lt;/u&gt;&lt;/strong&gt; Next, add the first task: &lt;em&gt;AWS Lambda .NET Core Deployment&lt;/em&gt;. This task can be configured to both build and deploy to AWS in a single shot. However, we want separate build and release pipelines, so in the build pipeline we will use only a limited subset of its functionality: creating the packaged code zip and generating the CloudFormation output YAML file. A detailed description of the task and its associated attributes can be found here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/vsts/latest/userguide/lambda-netcore-deploy.html"&gt;https://docs.aws.amazon.com/vsts/latest/userguide/lambda-netcore-deploy.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sp3GR7mQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q8isv3n52i7gf6xz0bc5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sp3GR7mQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q8isv3n52i7gf6xz0bc5.png" alt="Image description" width="880" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;The important settings to note here are:&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Credentials: choose the credentials you configured in Step 1.&lt;/li&gt;
&lt;li&gt;Region: select the region where you want to deploy from the dropdown.&lt;/li&gt;
&lt;li&gt;Deployment type: Function (if you want to deploy a single Lambda function and nothing else) or Serverless Application (if you want to deploy a complete serverless setup like the one we have here).&lt;/li&gt;
&lt;li&gt;Package-only output file: where the output YAML file produced by this step will be stored.&lt;/li&gt;
&lt;li&gt;Path to Lambda project: the serverless project/solution that we want to deploy.&lt;/li&gt;
&lt;li&gt;Create deployment package only: tick this as YES, since we only want to package the serverless app, not deploy it. We want separate build and deploy pipelines; the build pipeline simply publishes the artifacts.&lt;/li&gt;
&lt;li&gt;Stack name: not needed, as we are not doing any deployment here.&lt;/li&gt;
&lt;li&gt;S3 bucket: the name of the bucket where the published solution will be stored as a zip.&lt;/li&gt;
&lt;li&gt;S3 prefix: a prefix to prepend to the zip file name, if any.&lt;/li&gt;
&lt;li&gt;Additional Lambda Tools command line arguments: by default, the template file in this type of project is named serverless.template. Since my file is named template.json, I specify the additional argument --template template.json.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Step 3.4:&lt;/u&gt;&lt;/strong&gt; Add the Publish Build Artifacts step to the build process. In Step 3.3, we generated the serverless-output.yaml file. However, we need to publish it as an artifact saved in Azure DevOps, so that we can later reference it in the release pipeline:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oytCwXdw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iztsssiam6he2thsfja5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oytCwXdw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iztsssiam6he2thsfja5.png" alt="Image description" width="880" height="293"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Important things to keep in mind here:&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Path to publish: the file/folder output of the previous step, which we want to publish as an artifact for the downstream release pipelines.&lt;/li&gt;
&lt;li&gt;Artifact name: the name by which the published artifacts will be referenced, both in downstream pipelines and when downloading them manually from the Azure DevOps console.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Step 4:&lt;/u&gt;&lt;/strong&gt; At this stage, you can manually trigger a build to confirm that everything works as expected. If everything is okay, you should see a screen like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HUqtFbKH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ddwpb82l3y0nd7cuf4ei.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HUqtFbKH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ddwpb82l3y0nd7cuf4ei.png" alt="Image description" width="880" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can even check the logs or download the artifacts if you want to verify or investigate further.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Step 5:&lt;/u&gt;&lt;/strong&gt; Now we need to create the release pipeline. The input of a release pipeline is typically the output of the build pipeline. So, we first hook the release pipeline up to the artifacts published by the build pipeline, which is just a matter of selecting the project, source (build pipeline), and so on from the dropdowns.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0qZEB2BN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/54qs1cfjegg27zxmjzzt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0qZEB2BN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/54qs1cfjegg27zxmjzzt.png" alt="Image description" width="880" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we need to add a stage. For now, I have added a single stage called AWS-SAM-Deploy. Typically you might have one stage per environment, say DEV, QA, and Prod, and variations thereof.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DBAECidU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/504o04z49pfdir188msn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DBAECidU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/504o04z49pfdir188msn.png" alt="Image description" width="880" height="194"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Within every stage, you can add tasks to configure and control your release. These tasks execute the crucial deployment and infrastructure-provisioning steps whenever the release pipeline runs. In our case, we have already packaged our code into S3 and generated the serverless-output.yaml file in the build pipeline, so we simply need to deploy our app via a CloudFormation stack based on that file. So, we add a single task: AWS CloudFormation Create/Update Stack.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CycDKtUE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8hjchn17ge4x8whhzo9g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CycDKtUE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8hjchn17ge4x8whhzo9g.png" alt="Image description" width="880" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The important things to keep in mind here are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stack name: the name of the CloudFormation stack within which the serverless resources will be deployed.&lt;/li&gt;
&lt;li&gt;Template source: Local file (we input the file name and path rather than an S3 URI to access the template file).&lt;/li&gt;
&lt;li&gt;Template file:
$(System.DefaultWorkingDirectory)/Azure-DecOps-AWS-SAM-ASP.NET Core-CI/SAMDeliverables/a/serverless-output.yaml
Here we reference the SAMDeliverables artifact that we published as an artifact output in the build pipeline.&lt;/li&gt;
&lt;li&gt;Create/update the stack using a change set: this must be enabled for serverless app deployment using CloudFormation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Step 6:&lt;/u&gt;&lt;/strong&gt; Now that all configuration is done, you can manually trigger the release pipeline. If all goes well, you should see a screen like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GijalZYp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9c7ch794m9wowxvb5yk5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GijalZYp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9c7ch794m9wowxvb5yk5.png" alt="Image description" width="880" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also, if you open the CloudFormation console in your AWS account, you should be able to see the created stack and all its associated resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7ERL-CI---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ovoxw1v7gi0p5q4xvdrv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7ERL-CI---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ovoxw1v7gi0p5q4xvdrv.png" alt="Image description" width="880" height="364"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;All done, closing remarks&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Azure DevOps is actually pretty cool, and I like it more than Travis CI. However, the first task that I used to build and package the code currently only supports .NET Core, so it is not possible to use it for Node.js- or Java-based Lambda functions and so on. Also, Azure DevOps is not open source. But all said and done, it is pretty cool, and I can't wait to do more interesting stuff with it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Inspiration and help from:&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/developer/net-core-lambda-deployment-task-enhancements-in-the-aws-tools-for-vsts/"&gt;https://aws.amazon.com/blogs/developer/net-core-lambda-deployment-task-enhancements-in-the-aws-tools-for-vsts/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>serverless</category>
      <category>azure</category>
    </item>
    <item>
      <title>My first application in AWS Serverless Application Repository</title>
      <dc:creator>Turja Narayan Chaudhuri</dc:creator>
      <pubDate>Mon, 24 Jan 2022 09:22:04 +0000</pubDate>
      <link>https://dev.to/turjachaudhuri/my-first-application-in-aws-serverless-application-repository-1ahc</link>
      <guid>https://dev.to/turjachaudhuri/my-first-application-in-aws-serverless-application-repository-1ahc</guid>
      <description>&lt;p&gt;&lt;strong&gt;&lt;u&gt;Note&lt;/u&gt;&lt;/strong&gt;: All code related to this blog post can be found at&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/turjachaudhuri/AWS-Serverless/tree/SAMRepositoryApp1"&gt;https://github.com/turjachaudhuri/AWS-Serverless/tree/SAMRepositoryApp1&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Feel free to fork and let me know what wonderful things you have done with my crappy code.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;What is AWS Serverless Application Repository?&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The AWS Serverless Application Repository makes it easy for developers and enterprises to quickly find, deploy, and publish serverless applications in the AWS Cloud. For more details, check out &lt;a href="https://docs.aws.amazon.com/serverlessrepo/latest/devguide/what-is-serverlessrepo.html"&gt;https://docs.aws.amazon.com/serverlessrepo/latest/devguide/what-is-serverlessrepo.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Basically, this repo is a collection of serverless applications contributed by AWS teams and developers across the world. These applications can be deployed directly into your own AWS account with little more than the click of a button and the setting of some parameters (more on this later).&lt;/p&gt;

&lt;p&gt;Any application that you want to submit to this repo needs to follow a few rules, one of which is that your app must have a valid AWS Serverless Application Model (AWS SAM) template file that defines the AWS resources used by the app.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;What is AWS SAM?&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;For those of you who have some experience working on serverless applications, you must be aware that, after a while, manually maintaining a serverless app becomes quite difficult. The reason is the sheer number of different components in an app, especially once it starts to grow complicated and new components are added over time. It is quite easy for an app to have around 20+ Lambdas, their associated events, the IAM roles and policies associated with those Lambdas, DynamoDB tables, S3 buckets, and so on. It is extremely difficult to maintain such solutions without some sort of framework. That is where AWS SAM comes in.&lt;/p&gt;

&lt;p&gt;AWS SAM has a template file (in either JSON or YAML) that describes everything your app needs, starting from the Lambdas and where their code is located, to DynamoDB tables and the associated IAM roles and policies. It is extremely similar to a CloudFormation template; internally, the SAM engine actually converts the SAM template into a CloudFormation (CF) template and then deploys it onto your AWS setup using AWS CF.&lt;/p&gt;

&lt;p&gt;SAM makes it very easy to get started creating serverless apps and to maintain them as they grow more and more complex and involved. For more information on SAM, check this out: &lt;a href="https://github.com/awslabs/serverless-application-model"&gt;https://github.com/awslabs/serverless-application-model&lt;/a&gt;&lt;/p&gt;
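&lt;p&gt;To make this concrete, here is a minimal SAM template in JSON (my preferred format, as discussed below). This is a hedged sketch, not the template from my repo; the resource names and handler string are invented for illustration:&lt;/p&gt;

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Transform": "AWS::Serverless-2016-10-31",
  "Resources": {
    "UploadBucket": {
      "Type": "AWS::S3::Bucket"
    },
    "FileAuditFunction": {
      "Type": "AWS::Serverless::Function",
      "Properties": {
        "Handler": "MyApp::MyApp.Function::FunctionHandler",
        "Runtime": "dotnetcore2.0",
        "CodeUri": "./artifacts/app.zip",
        "Events": {
          "FileUploaded": {
            "Type": "S3",
            "Properties": {
              "Bucket": { "Ref": "UploadBucket" },
              "Events": "s3:ObjectCreated:*"
            }
          }
        }
      }
    }
  }
}
```

&lt;p&gt;The Transform line is what tells CloudFormation to expand the serverless shorthand into plain CF resources during deployment.&lt;/p&gt;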

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;What did I hope to accomplish when I started?&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Basically, I wanted to create a pretty basic serverless app and publish it to the AWS Serverless Application Repository so that other users can deploy it. Also, I wanted to understand the whole flow, because I had not used SAM before.&lt;/p&gt;

&lt;p&gt;Previously, I mostly relied on the Serverless Framework for this type of work. If you want to check out the Serverless Framework and what it can do for you, please visit &lt;a href="https://serverless.com/"&gt;https://serverless.com/&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;What does my app do?&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;My app is pretty simple by any standard. It consists of a single Lambda function that is triggered every time an object is inserted into a configured S3 bucket, and then loads the filename and timestamp into a DynamoDB table. My objective was not to create a complex serverless app for a business scenario; I simply wanted to see how SAM works and how I can leverage it for easier maintenance of my serverless projects.&lt;/p&gt;
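&lt;p&gt;The actual app is written in C#, but the handler logic is small enough to sketch in a few lines. Here is a hedged Python rendering of the same shape (the event layout mirrors what S3 delivers; the DynamoDB write is represented by the returned items rather than a real SDK call):&lt;/p&gt;

```python
import datetime

def function_handler(event, context):
    """Collect the key and timestamp of each object added to the bucket.

    Mirrors the blog's app: an S3 ObjectCreated event comes in, and one
    row per file goes out. A real handler would put these items into a
    DynamoDB table via the AWS SDK instead of returning them.
    """
    items = []
    for record in event.get("Records", []):
        items.append({
            "FileName": record["s3"]["object"]["key"],
            "Timestamp": datetime.datetime.utcnow().isoformat(),
        })
    return items

# A hand-built event with the same nesting S3 actually uses
sample_event = {"Records": [{"s3": {"object": {"key": "report.csv"}}}]}
print(function_handler(sample_event, None)[0]["FileName"])  # report.csv
```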

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;Does using SAM mean I don't have to code?&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Not at all. SAM is simply a template file: a technical description of all the components of your app, the events flowing between those components, the IAM roles, and so on. It is not an alternative to the code that goes into your Lambda functions. You still have to write buildable code; SAM, however, links your Lambda functions with that code effortlessly.&lt;/p&gt;

&lt;p&gt;Also, the SAM engine helps you validate whether the SAM template is well formed, and can help you deploy the whole app effortlessly from the SAM CLI using a few basic commands.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;Enough! How do I get started?&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To install the SAM CLI, the easiest way to get started is via pip. The SAM CLI is written in Python, so Python needs to be installed on your system as a prerequisite.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install aws-sam-cli
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;u&gt;Gotcha 1:&lt;/u&gt;&lt;/strong&gt; I faced a lot of problems in this otherwise simple installation due to conflicting version and dependency errors between aws-cli, sam-cli, boto3, botocore, and so on. However, these are not fatal errors; if you simply have patience and take care of the dependencies one by one, they are a no-brainer to solve.&lt;/p&gt;

&lt;p&gt;Follow this link for detailed instructions - &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/sam-cli-requirements.html"&gt;https://docs.aws.amazon.com/lambda/latest/dg/sam-cli-requirements.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Gotcha 2:&lt;/u&gt;&lt;/strong&gt; No, you don't need Docker to install the SAM CLI. Docker is only needed if you want to do unit testing on your local machine using SAM Local, which will be a different blog post altogether (if I ever figure it out).&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;Setting up the project&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Go to your working directory and run this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sam init --runtime dotnetcore2.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can choose any of the supported runtimes, as per your comfort level. I am more comfortable with C#, so I chose dotnetcore. The above command will create a HelloWorld project for you, with a sample template.yaml file and some sample C# code.&lt;/p&gt;

&lt;p&gt;Now you can open the solution in your favorite IDE , and code as much as you want.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Gotcha 3:&lt;/u&gt;&lt;/strong&gt; The template file does not have to be YAML; JSON is also supported. I prefer JSON to YAML as of now, so I simply used template.json instead of template.yaml. However, in that case, in all subsequent SAM CLI commands, you need to explicitly mention that your template file name is template.json.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Gotcha 4:&lt;/u&gt;&lt;/strong&gt; Most of the examples of SAM templates on the internet are YAML ones. Being able to convert them to JSON quickly and without error is a must if you go this route. I used the online tool &lt;a href="https://codebeautify.org/yaml-to-json-xml-csv"&gt;https://codebeautify.org/yaml-to-json-xml-csv&lt;/a&gt; a lot during my development for this.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;What to do with the template.json file?&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Getting started with SAM is a tad difficult if you are not familiar with CloudFormation or any IaC (Infrastructure as Code) scripting tool. But once you get the hang of it, it is quite intuitive and easy to use. You can describe resources like S3 buckets and DynamoDB tables, use variables/parameters in your script to make it more dynamic, and use some clever CF transformations to help you along the way. Also, there are a lot of examples on the web to help you out in case you get stuck.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Gotcha 5:&lt;/u&gt;&lt;/strong&gt; People who are familiar with CF templates know that in CF we can define roles and policies, associate policies with roles, and then attach those roles to other resources like Lambda functions or Kinesis streams. All the same can be done in SAM, but the Serverless Application Repository will not accept all SAM policies. As of now, AWS supports only a fixed list of policy templates in the Serverless Application Repository; you cannot create custom roles and use them. Even though such a SAM template will validate, and you can deploy it to your own account using the SAM deploy command, it will not be accepted by the Serverless Application Repository. More on this can be found here: &lt;a href="https://docs.aws.amazon.com/serverlessrepo/latest/devguide/using-aws-sam.html#serverlessrepo-policy-templates"&gt;https://docs.aws.amazon.com/serverlessrepo/latest/devguide/using-aws-sam.html#serverlessrepo-policy-templates&lt;/a&gt;&lt;/p&gt;
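&lt;p&gt;For illustration, this is what the repository-friendly approach looks like: instead of a custom role, the function references one of the supported policy templates. A hedged JSON fragment (the resource and parameter names are invented, not from my repo):&lt;/p&gt;

```json
{
  "FileAuditFunction": {
    "Type": "AWS::Serverless::Function",
    "Properties": {
      "Handler": "MyApp::MyApp.Function::FunctionHandler",
      "Runtime": "dotnetcore2.0",
      "Policies": [
        { "S3ReadPolicy": { "BucketName": { "Ref": "UploadBucket" } } },
        { "DynamoDBCrudPolicy": { "TableName": { "Ref": "AuditTable" } } }
      ]
    }
  }
}
```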

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;Coding is done; can I test locally?&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;AWS SAM Local can be used to test locally. I have not personally tinkered with it yet, though I hope to one day. From what I understand, it spins up a Docker container with your code, and you can play with it.&lt;/p&gt;

&lt;p&gt;However, I do unit tests via Visual Studio. At the end of the day, even though the code you are writing is for a Lambda function, it is also a function within a class, and as such it can be tested. The difficult part is mocking the S3 events or API request events. The AWS SDK for .NET has classes that you can use to create sample events and then call your Lambda function code with these test events and a test execution context.&lt;/p&gt;

&lt;p&gt;Check my code in the GitHub repo mentioned at the top of this blog. I have a separate unit test folder containing a single unit test, which mocks an S3 event and calls the Lambda function with the test event to validate that all the calls work properly.&lt;/p&gt;
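&lt;p&gt;My tests are C# and use the AWS SDK's event classes, but the pattern itself is language-neutral. A hedged Python sketch of the same idea, with a hand-built event and a minimal stand-in for the execution context (the handler here is an invented stand-in, not my actual code):&lt;/p&gt;

```python
import unittest

def function_handler(event, context):
    """Stand-in for the Lambda under test (the real handler is C#)."""
    return [r["s3"]["object"]["key"] for r in event.get("Records", [])]

class FakeContext:
    """Minimal mock of the Lambda execution context."""
    function_name = "file-audit-fn"
    aws_request_id = "test-request-1"

class S3EventTest(unittest.TestCase):
    def test_object_created_event(self):
        # Hand-built payload mimicking an s3:ObjectCreated:* notification
        event = {"Records": [{"s3": {"object": {"key": "uploads/a.txt"}}}]}
        result = function_handler(event, FakeContext())
        self.assertEqual(result, ["uploads/a.txt"])

if __name__ == "__main__":
    unittest.main(exit=False)
```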

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Gotcha 6:&lt;/u&gt;&lt;/strong&gt; When the actual Lambda function runs in the AWS ecosystem, it runs with the privileges of the role it is associated with. But what happens when you are unit testing? What privileges will your code have? Will it be able to make the AWS API calls, or will they fail?&lt;/p&gt;

&lt;p&gt;The way to proceed in this case is to use AWS profiles. More info on this here - &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-multiple-profiles.html"&gt;https://docs.aws.amazon.com/cli/latest/userguide/cli-multiple-profiles.html&lt;/a&gt;. You can create one or more profiles and choose which one to use when you are in Visual Studio. If you are in a Microsoft shop, download the AWS Toolkit for VS 2017, configure your profiles and choose the one you want. Profiles can also be set via code like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Amazon.Runtime.AWSCredentials credentials = new Amazon.Runtime.StoredProfileAWSCredentials("[PUT YOUR PROFILE NAME HERE]");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;More info on this can be found here : &lt;a href="https://aws.amazon.com/blogs/developer/referencing-credentials-using-profiles/"&gt;https://aws.amazon.com/blogs/developer/referencing-credentials-using-profiles/&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;Everything works fine , how do I deploy to my AWS setup?&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To deploy to AWS, you only need a few basic commands. Go to the working directory where the source code is and type the following:&lt;/p&gt;

&lt;p&gt;Command 1:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sam validate --template template.json --profile Hackathon
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here template.json can be any file (JSON/YAML) where your SAM template is defined, and Hackathon is the AWS profile that corresponds to the AWS account you want to use. This command tells you whether the SAM template is valid or not. Always run it before the actual deployment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Gotcha 7:&lt;/u&gt;&lt;/strong&gt; Don't forget to mention the profile. If no profile is mentioned, the Default profile is used, which might not be the one you want, or might even point to some other AWS account.&lt;/p&gt;

&lt;p&gt;Command 2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 mb s3://aws-sam-test-1 --region ap-south-1 --profile Hackathon
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This simply creates a bucket. This is the bucket where the whole app (code) will be zipped and stored. When, in a later step, we deploy the app to AWS, the CloudFormation engine will create all the required resources and populate the AWS Lambda functions from the zip in this bucket.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Gotcha 8:&lt;/u&gt;&lt;/strong&gt; Bucket names need to be unique across all of AWS (all regions, all users, everything).&lt;/p&gt;

&lt;p&gt;Command 3:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sam package --profile Hackathon --template-file template.json --output-template-file serverless-output.yaml --s3-bucket aws-sam-test-1 --force-upload
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command packages the whole app into a zip, uploads it to the S3 bucket mentioned, and also creates an output SAM template file, serverless-output.yaml, that references the code from the zip in the S3 bucket.&lt;/p&gt;

&lt;p&gt;Command 4:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sam deploy --profile Hackathon --template-file serverless-output.yaml --stack-name aws-sam-trial-1 --capabilities CAPABILITY_IAM --region ap-south-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This basically calls the CloudFormation deploy command to create a CF stack with the name specified above, along with all the resources mentioned in the SAM template file. While this command is running, you can navigate to the AWS console and check CloudFormation. You can watch your stack being created, with all its associated resources coming up one by one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Gotcha 9:&lt;/u&gt;&lt;/strong&gt; Your AWS SAM template will, most of the time, have some parameters defined to make the template dynamic and your infrastructure setup configurable. These parameters can usually be set by users when the stack is deployed. However, when the SAM deploy command runs, it does not ask for user input; it simply deploys the stack with the default values for the parameters mentioned in the SAM template file. So, if you have a parameter for the name of a bucket you want to create, be sure it is unique, otherwise the stack creation will fail and roll back.&lt;/p&gt;
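&lt;p&gt;One way around this gotcha is to never rely on a fixed default at all and generate a collision-resistant name instead. A hedged Python sketch of the idea follows; the prefix and the validation pattern are illustrative, not part of the SAM tooling itself.&lt;/p&gt;

```python
import re
import uuid

def unique_bucket_name(prefix):
    # Append a random suffix so the name stays globally unique.
    # S3 bucket names: 3-63 chars, lowercase letters, digits and hyphens.
    suffix = uuid.uuid4().hex[:8]
    name = (prefix + "-" + suffix).lower()
    if not re.fullmatch(r"[a-z0-9][a-z0-9-]{1,61}[a-z0-9]", name):
        raise ValueError(name)
    return name

name = unique_bucket_name("aws-sam-test")
```

&lt;p&gt;You could feed such a generated value to the deployment step instead of trusting the template's default, so a rerun never collides with an existing bucket.&lt;/p&gt;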

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Gotcha 10:&lt;/u&gt;&lt;/strong&gt; If stack creation fails for some reason, the stack transitions to the ROLLBACK_COMPLETE state. If, at that point, you make some changes to your template.json and run SAM deploy again, it will fail. In that case you need to delete your stack first and then proceed.&lt;/p&gt;

&lt;p&gt;If all of the above commands run without errors, it means all your AWS resources have been created and configured as per your template file, and the Lambda functions have been linked to the code zip mentioned earlier. At this point, you can go into the AWS console and check whether everything is okay.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Gotcha 11:&lt;/u&gt;&lt;/strong&gt; It is highly unlikely that you will get all your code right the first time. You will make multiple changes to your actual code (not template.json, but pure C# or Java code) and will want to push those changes into the AWS ecosystem. The SAM deploy command will fail in this case, because there are no infrastructure changes in the stack, so CF thinks there is nothing to deploy. In such cases it is better to use separate commands, or the AWS Toolkit for VS, to deploy. You can simply select your project and deploy it to AWS with a single click. This deploys your latest code changes to the Lambda function, and nothing else.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;My App rocks. How to publish it to the SAM repo now?&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Publishing applications to the SAM repo is extremely simple. Just follow the steps mentioned here - &lt;a href="https://docs.aws.amazon.com/serverlessrepo/latest/devguide/serverless-app-publishing-applications.html"&gt;https://docs.aws.amazon.com/serverlessrepo/latest/devguide/serverless-app-publishing-applications.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;However, there are a few things to keep in mind.&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;If you want to make your application public, it needs to have an open-source license, and the code needs to be pushed to a public repository such as GitHub, with the link referenced in the app details. This is needed only when the app is public, not in all cases.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When publishing the app, you are asked to select the SAM template file; do not select the template.json file that you wrote yourself. You need to select the serverless-output.yaml file created as the output of Command 3 in the last section. This is because template.json references your local code, while serverless-output.yaml references the actual code URI of the S3 bucket where the packaged code resides.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Be sure to add the bucket policy granting the SAM repository service access to your code. This is required so that when a user tries to deploy your app in their AWS account, the SAM repository service can set up the CF stack in their account by referencing the packaged code stored in your bucket. This is also mentioned here - &lt;a href="https://docs.aws.amazon.com/serverlessrepo/latest/devguide/serverless-app-publishing-applications.html"&gt;https://docs.aws.amazon.com/serverlessrepo/latest/devguide/serverless-app-publishing-applications.html&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;My app is published. Now what?&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Now go to &lt;a href="https://serverlessrepo.aws.amazon.com/applications"&gt;https://serverlessrepo.aws.amazon.com/applications&lt;/a&gt; and search for your app by app name or author name. Click on Deploy next to the app and deploy it in some other region to test it out. You simply need to configure the parameter values and let CloudFormation do its magic. CF will spin up the resources one by one, and you are ready to go.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;Where to go from here?&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The future is full of endless possibilities. AWS SAM templates add a whole new dimension to the world of serverless applications. Just as CF helped you maintain huge infrastructure setups in a single, version-controlled CF file, SAM templates let you do the same for serverless apps.&lt;/p&gt;

&lt;p&gt;Replicating the serverless components that make up your app in a different account or a separate region will not take a day; it becomes a matter of 1-5 minutes. From a disaster-recovery standpoint, this is a huge plus as well.&lt;/p&gt;

&lt;p&gt;I also believe that AWS will develop the Serverless repository into a marketplace of sorts, full of useful apps that other users can leverage as a starting point for their next big thing, or as a crucial component of their complex, multi-layered app.&lt;/p&gt;

&lt;p&gt;As of today, there are 335 apps in the serverless repository. Go ahead, make one. And don't forget to take a look at the simple app that I developed. Search "FileLogTracker" or "turja" and give it a go.&lt;/p&gt;

&lt;p&gt;Please feel free to reach out to me personally via email or drop me a message on LinkedIn. I would love to know what you are up to nowadays.&lt;/p&gt;

&lt;p&gt;Thanks.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>azure</category>
      <category>sam</category>
    </item>
    <item>
      <title>An AWSome open source project to create a S3 explorer for AWS</title>
      <dc:creator>Turja Narayan Chaudhuri</dc:creator>
      <pubDate>Mon, 24 Jan 2022 08:55:43 +0000</pubDate>
      <link>https://dev.to/turjachaudhuri/an-awsome-open-source-project-to-create-a-s3-explorer-for-aws-342k</link>
      <guid>https://dev.to/turjachaudhuri/an-awsome-open-source-project-to-create-a-s3-explorer-for-aws-342k</guid>
      <description>&lt;p&gt;&lt;strong&gt;&lt;u&gt;Notification 1 :&lt;/u&gt;&lt;/strong&gt; All code related to this blog post can be found at :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/awslabs/aws-js-s3-explorer/tree/v2-alpha"&gt;https://github.com/awslabs/aws-js-s3-explorer/tree/v2-alpha&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;How did I discover this gem?&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Recently, while working on a personal project, I had to come up with some sort of an S3 explorer UI. It was a small part of a bigger thing I was doing, and I wanted to let my users use my web console to navigate S3 buckets they have access to and perform read operations. While there are a number of standalone third-party S3 explorers available on the web, I wanted something that would seamlessly integrate with my webpage, preferably written in JS.&lt;/p&gt;

&lt;p&gt;The other option was writing a lot of code myself to build something that is not the selling point or core intention of my project anyway. In these cases, I always try to find open-source solutions readily available that can be integrated into my app. It saves the trouble of developing and testing an isolated section of the app.&lt;/p&gt;

&lt;p&gt;While searching I found this app at &lt;a href="https://github.com/awslabs/aws-js-s3-explorer"&gt;https://github.com/awslabs/aws-js-s3-explorer&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;So, what can we do with it?&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The readme of the git repo is wonderfully written by the author and will get you started in a few minutes.&lt;/p&gt;

&lt;p&gt;"&lt;strong&gt;AWS JavaScript S3 Explorer (v2 alpha) is a JavaScript application that uses AWS's JavaScript SDK and S3 APIs to make the contents of an S3 bucket easy to browse via a web browser. We've created this to enable easier sharing and management of objects and data in Amazon S3.&lt;/strong&gt;" - From the author's readme page&lt;/p&gt;

&lt;p&gt;However, there is one thing I would like to point out. There are two separate branches in the git repo for two different projects. They are quite similar, but the v2-alpha branch is the one I am talking about, as it suited my use-case better.&lt;/p&gt;

&lt;p&gt;The v2-alpha version adds support for private buckets and for uploading/deleting S3 objects.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;Ok, so where to go from here?&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The project simply consists of an HTML page, a JS file and a CSS file. It can be easily integrated into any existing project and customised as per the needs of the developer. For example, in my case I don't even want the user to see the upload/delete buttons, so I will simply make the necessary changes.&lt;/p&gt;

&lt;p&gt;It is released under an open-source license, so feel free to code away.&lt;/p&gt;

&lt;p&gt;Do let me know if you find a better alternative or tell me how exactly you customized the app for your use-cases.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>opensource</category>
      <category>s3</category>
    </item>
    <item>
      <title>A CI/CD Pipeline using git and Travis CI for a serverless app based on SAM and C#</title>
      <dc:creator>Turja Narayan Chaudhuri</dc:creator>
      <pubDate>Mon, 24 Jan 2022 08:52:45 +0000</pubDate>
      <link>https://dev.to/turjachaudhuri/a-cicd-pipeline-using-git-and-travis-ci-for-a-serverless-app-based-on-sam-and-c-35n</link>
      <guid>https://dev.to/turjachaudhuri/a-cicd-pipeline-using-git-and-travis-ci-for-a-serverless-app-based-on-sam-and-c-35n</guid>
      <description>&lt;p&gt;&lt;strong&gt;&lt;u&gt;Notification 1 :&lt;/u&gt;&lt;/strong&gt; All code related to this blog can be found at &lt;a href="https://github.com/turjachaudhuri/aws-sam"&gt;https://github.com/turjachaudhuri/aws-sam&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, I had really enjoyed creating a serverless app in AWS based on AWS SAM and deploying it to my AWS account.&lt;/p&gt;

&lt;p&gt;You can find all details at - &lt;a href="https://dev.to/turjachaudhuri/my-first-application-in-aws-serverless-application-repository-1ahc"&gt;https://dev.to/turjachaudhuri/my-first-application-in-aws-serverless-application-repository-1ahc&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, I wanted to take it a step further. I wanted to create a CI/CD pipeline so that every time I commit any changes to my serverless app on git, it is automatically deployed to AWS using a build/delivery pipeline.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;So, what should we do?&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;As usual, I first googled to see what other developers were using. I found that Travis CI was obviously very popular in the open-source world.&lt;/p&gt;

&lt;p&gt;"Travis CI is a hosted, distributed continuous integration service used to build and test software projects hosted at GitHub. Open source projects may be tested at no charge via travis-ci.org. Private projects may be tested at travis-ci.com on a fee basis."&lt;/p&gt;

&lt;p&gt;I found a few blog posts on how to set up a CI/CD pipeline using Git and Travis CI, but all of them were for NodeJS apps, and none of them suited my exact use case. So, I decided to work it out on my own.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;How to get started?&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The assumption here is that we have a fully tested serverless app based on a SAM template that can be deployed on its own using the SAM CLI, and we just want to hook it up to some sort of continuous integration pipeline.&lt;/p&gt;

&lt;p&gt;Ok, so let's get started.&lt;/p&gt;

&lt;p&gt;Sign in to TravisCI with your GitHub account. The service will automatically retrieve all your public repositories from GitHub and display them in a list like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TfrYYzuY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/50om6wbztzrjqwo1e9uo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TfrYYzuY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/50om6wbztzrjqwo1e9uo.png" alt="Image description" width="880" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enable the toggle next to the repo that you want to configure with TravisCI, and click on that repo to navigate to its TravisCI details page. This page shows the details of the builds, their status and so on. TravisCI is designed to seamlessly integrate with GitHub.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Gotcha 1:&lt;/u&gt;&lt;/strong&gt; Just after configuring TravisCI to work with your repo, you will find that the details page is empty. It might feel like you have misconfigured something, but actually, nothing will happen until the git repo has a .travis.yml file. As soon as the .travis.yml file is pushed to git, an automatic build will start.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;What is .travis.yml ?&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This is the YAML file that TravisCI uses to drive your builds. It is basically a series of steps that tells TravisCI what process to follow to build and finally deploy your artifacts. It needs to be present at the root of your project. You can essentially think of it as a script containing the same commands that you would otherwise run yourself to deploy the app from your personal machine.&lt;/p&gt;

&lt;p&gt;My .travis.yml file looks like this; I have added comments to make it easier to understand.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# this is the language of the solution . i am using csharp
language: csharp
mono: none
# here we tell the version of dotnet sdk that the app uses
dotnet: 2.0
# here we identify the solution that needs to be built
solution: S3ToDynamo.sln
# this tells that only updates in master branch will be considered for build
branches:
  only: master
# here we need to install all our dependencies to enable the future steps
install:
- pip install --user awscli
- pip install --user aws-sam-cli
# here the commands needed to build the solution are provided
script:
- dotnet restore
- dotnet publish
- sam validate --template template.json
- sam package --template-file template.json --s3-bucket aws-sam-test-1 --output-template-file serverless-output.yaml
# here the commands needed to deploy the solution are provided
deploy:
  provider: script
  script: sam deploy --template-file serverless-output.yaml --stack-name aws-sam-trial-1 --capabilities CAPABILITY_IAM
  skip_cleanup: true
  on:
    branch: master
notifications:
  email:
    on_failure: always
# here we provide the variables that are set globally for the build+deploy
env:
  global:
  - AWS_DEFAULT_REGION=ap-south-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;u&gt;Gotcha 2:&lt;/u&gt;&lt;/strong&gt; One important thing to keep in mind when getting any CI/CD pipeline running is to understand the premise of the setup you are working with. CI/CD pipelines are, simplistically speaking, nothing but glorified build-and-deploy servers. Whenever a build is triggered (say, via a source control push), the CI/CD framework simply downloads the source code from the source control repo onto a blank VM (say, running Linux). So, keep in mind that the server will not have many of the packages/dependencies that we otherwise take for granted. That is why in the .travis.yml file you need to specify everything that is needed, even the basic installations that you might take for granted.&lt;/p&gt;

&lt;p&gt;For example, in my .travis.yml file, we mentioned&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install --user awscli
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This step is not needed in your local build process, but it is needed for TravisCI, as the server where the build is running is simply a blank canvas.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Gotcha 3:&lt;/u&gt;&lt;/strong&gt; For AWS deployments, the crucial things we need are the AccessKeyID, the SecretAccessKey and the region we need to deploy the solution to. In the above .travis.yml you can see that AWS_DEFAULT_REGION has been set. However, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are not mentioned there. So how will TravisCI deploy to AWS without this vital information?&lt;/p&gt;

&lt;p&gt;This is because I had set up secure variables from the TravisCI console. This can be done in the settings section for the GitHub repo in TravisCI. I believe this is good practice, since the .travis.yml file will be part of your public git repo, and exposing secrets in source control is a very bad practice.&lt;/p&gt;

&lt;p&gt;Screenshot showing the Environment Variables section of the TravisCI page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MK8fPzhX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g8vc2rzdi2tx261ewmjj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MK8fPzhX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g8vc2rzdi2tx261ewmjj.png" alt="Image description" width="880" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;All done , now what?&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;As soon as the .travis.yml file is pushed to the git repo, a build will start in the TravisCI console within a minute or two. You can check the logs there to monitor what is happening in the build.&lt;/p&gt;

&lt;p&gt;If the .travis.yml file is well written , and the project builds properly , you will see a screen like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aXDyBc9A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fukaw7emtb2s7ciq8o2a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aXDyBc9A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fukaw7emtb2s7ciq8o2a.png" alt="Image description" width="880" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;u&gt;Where to go from here?&lt;/u&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Set up a CI/CD pipeline for your serverless project and let me know how it turned out. Check out &lt;a href="https://docs.travis-ci.com/"&gt;https://docs.travis-ci.com/&lt;/a&gt; for detailed instructions on how to get started and customize workflows.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>ci</category>
      <category>csharp</category>
    </item>
    <item>
      <title>A Serverless API to validate AWS Access Keys based on AWS SAM</title>
      <dc:creator>Turja Narayan Chaudhuri</dc:creator>
      <pubDate>Thu, 20 Jan 2022 13:02:00 +0000</pubDate>
      <link>https://dev.to/turjachaudhuri/a-serverless-api-to-validate-aws-access-keys-based-on-aws-sam-2l3d</link>
      <guid>https://dev.to/turjachaudhuri/a-serverless-api-to-validate-aws-access-keys-based-on-aws-sam-2l3d</guid>
      <description>&lt;p&gt;&lt;strong&gt;Notification 1: &lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;All code related to this blog post can be viewed at &lt;a href="https://github.com/turjachaudhuri/AWS-Serverless/tree/ValidateAccessKey" rel="noopener noreferrer"&gt;https://github.com/turjachaudhuri/AWS-Serverless/tree/ValidateAccessKey&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;u&gt;So, what is this about?&lt;/u&gt;
&lt;/h2&gt;

&lt;p&gt;While working on AWS projects with junior members of my team, I found that many of them were creating sample users on their personal DEV accounts and hardcoding the access keys and secret keys in their applications.&lt;br&gt;
So, I explained to them why that is a security vulnerability and should always be avoided. During the discussion, a junior asked me whether there is an easy way to check whether an access key is valid or not.&lt;/p&gt;

&lt;p&gt;This got me thinking. Obviously, one way was to go to the IAM console and check each user's security credentials one by one to see whether the access key we need to verify is in the list or not.&lt;/p&gt;

&lt;p&gt;However, I actually tried it and found that for a large number of users, this is quite time-consuming. Also, many times we don't have access to the IAM console for security reasons. So, obviously, this is not a good solution.&lt;/p&gt;

&lt;p&gt;That is when I thought about writing a serverless app exposing an API, powered by an AWS Lambda backend, that someone can call to verify whether a particular AccessKeyID is valid or not. That is what this blog is about.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;u&gt;Wait, can we not use the Credential Report provided by AWS?&lt;/u&gt;
&lt;/h2&gt;

&lt;p&gt;Ok, so AWS has a feature called the Credential Report, which is kind of like an audit report about your user management that can also be presented to external auditors. It contains a ton of information about users, when their passwords changed, and so on.&lt;/p&gt;

&lt;p&gt;However, for my particular use case this was not a good fit, for the reasons below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To download the credential report you have to navigate to the IAM console, which not everyone will have access to.&lt;/li&gt;
&lt;li&gt;The credential report has information about all the users in your account. I don't want my developers to have access to that kind of information; I just want to give them the ability to check whether the access key they have is valid or not.&lt;/li&gt;
&lt;li&gt;The report does not display the AccessKeyID anywhere, so it is not possible to validate a single AccessKeyID using that report data.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  &lt;u&gt;So, what to do now?&lt;/u&gt;
&lt;/h2&gt;

&lt;p&gt;Actually, the programming part of this is quite simple. AWS exposes APIs that you can consume to list users, list the AccessKeyID(s) for a user, and so on. So, my plan was to create a Lambda function that first lists all users in the system, then for each of them lists all the access keys they have, and builds them all into a list.&lt;br&gt;
Then the app parses the request body of the incoming API call, finds the AccessKeyID provided in the request, and validates it against the list of valid AccessKeyID(s) that we have built by then.&lt;/p&gt;
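&lt;p&gt;The plan itself is language-agnostic. Here is a minimal Python sketch of the same flow against a stubbed-out IAM client; the stub data and function names are invented for illustration, the real implementation in C# follows below.&lt;/p&gt;

```python
# Stub standing in for the IAM client: maps user name to access key IDs.
STUB_IAM = {
    "alice": ["AKIAAAAAAAAAEXAMPLE1"],
    "bob": ["AKIAAAAAAAAAEXAMPLE2", "AKIAAAAAAAAAEXAMPLE3"],
}

def list_users():
    return list(STUB_IAM)

def list_access_keys(user):
    return STUB_IAM[user]

def is_valid_access_key(access_key_id):
    # Build the full list of known key IDs, then check membership.
    known = [k for user in list_users() for k in list_access_keys(user)]
    return access_key_id in known

assert is_valid_access_key("AKIAAAAAAAAAEXAMPLE3")
assert not is_valid_access_key("AKIADOESNOTEXIST0000")
```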

&lt;p&gt;Obviously, rather than calling the REST APIs ourselves, we will use an AWS SDK for our favorite language, which facilitates the whole process by essentially acting as a wrapper for the lower-level API calls. I used the AWS SDK for .NET V3, but please feel free to follow along in your favorite programming language.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;u&gt;Ok, so show me the code!&lt;/u&gt;
&lt;/h2&gt;

&lt;p&gt;Essentially, all the APIs you need are in the AWSSDK.IdentityManagement NuGet package. Add it as a dependency to your project.&lt;/p&gt;

&lt;p&gt;Then, create an instance of the identity management client that you will use to invoke the APIs as and when needed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;iamClient = new AmazonIdentityManagementServiceClient();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, get a list of all the users:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var userRequest = new ListUsersRequest { MaxItems = 20 } ;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, iterate over them one by one, and process each user to retrieve their AccessKeyID(s) and build up the list.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;foreach (var user in allUsersListResponse.Users)
{
ListAccessKeys(user.UserName, 20, accessKeyMetadataList);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For each user, get the AccessKeyID and other metadata associated with them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ListAccessKeysResponse accessKeysResponse = new ListAccessKeysResponse();
var accessKeysRequest = new ListAccessKeysRequest
{
// Use the user created in the CreateAccessKey example
UserName = userName,
MaxItems = maxItems
};
do
{
accessKeysResponse = iamClient.ListAccessKeysAsync(accessKeysRequest).GetAwaiter().GetResult();
foreach (var accessKey in accessKeysResponse.AccessKeyMetadata)
{
Model.AccessKeyMetadata accesskeymetadata = new Model.AccessKeyMetadata();
accesskeymetadata.AccessKeyId = accessKey.AccessKeyId;
accesskeymetadata.CreateDate = accessKey.CreateDate.ToLongDateString();
accesskeymetadata.Status = accessKey.Status;
accesskeymetadata.UserName = accessKey.UserName;

GetAccessKeyLastUsedRequest request = new GetAccessKeyLastUsedRequest()
{ AccessKeyId = accessKey.AccessKeyId };

GetAccessKeyLastUsedResponse response =
iamClient.GetAccessKeyLastUsedAsync(request).GetAwaiter().GetResult();

accesskeymetadata.LastUsedDate = response.AccessKeyLastUsed.LastUsedDate.ToLongDateString();
AccessKeyMetadataList.Add(accesskeymetadata);
}
accessKeysRequest.Marker = accessKeysResponse.Marker;
} while (accessKeysResponse.IsTruncated);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, check whether the AccessKeyID provided in the input JSON is part of the previously built list. If yes, return the details of the AccessKeyID, like its status, creation date, last used date and so on.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Model.AccessKeyMetadata validAcessDetails =
AccessKeyMetadataList.Where(x =&amp;amp;gt; x.AccessKeyId == requestObj.AccessKeyID).FirstOrDefault();

response = new APIGatewayProxyResponse
{
Body = JsonConvert.SerializeObject(validAcessDetails),
StatusCode = (int)HttpStatusCode.OK,
Headers = new Dictionary { { "Content-Type", "application/json" } }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ok, so that was pretty straightforward.&lt;br&gt;
You can find the whole code, with unit tests and a Visual Studio solution, at &lt;a href="https://github.com/turjachaudhuri/AWS-Serverless/tree/ValidateAccessKey" rel="noopener noreferrer"&gt;https://github.com/turjachaudhuri/AWS-Serverless/tree/ValidateAccessKey&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;u&gt;&lt;strong&gt;Some things to watch out for!&lt;/strong&gt;&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;Most of the IAM APIs do not return all results at once. For example, the ListUsers call takes a parameter called MaxItems. If you don't pass it as an argument, the default is 100.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;requestUsers = new ListUsersRequest() { MaxItems = 10 };
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a common pattern in the many AWS APIs that support pagination. Not all data is returned on the first call: some data is returned, along with a marker that can be used in subsequent API calls to fetch the remaining data. As a result, some sort of looping is required to fetch all the data via these APIs.&lt;/p&gt;

&lt;p&gt;So be aware: your unit tests might pass because there are only a few users in your system, but the code will break once the number of users exceeds the page size. An easy way to get around this comes straight from the AWS documentation and is shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;do
{
allUsersListResponse = iamClient.ListUsersAsync(userRequest).GetAwaiter().GetResult();
ProcessUserDetails(allUsersListResponse, AccessKeyMetadataList);
userRequest.Marker = allUsersListResponse.Marker;
} while (allUsersListResponse.IsTruncated);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;When we initialize AWS SDK client classes, we need to provide AWS credentials of some sort so that our account, and only our account, is affected, using the roles and policies that we assign to the client class.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;However, when initializing AWS clients inside Lambda, this is not needed and we can use the default constructor for those classes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;iamClient = new AmazonIdentityManagementServiceClient();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is because when we deploy a Lambda function we associate a role with it, and it is that role that provides the AWS context (credentials) for all the Lambda code. The Lambda function assumes that role, and all the client classes created inside the function use the context of that role, so there is no need to configure anything separately.&lt;/p&gt;

&lt;p&gt;However, when we are doing unit testing via our IDE, we are not in an actual Lambda function, and there is no Lambda role for the unit-test framework to assume. In those cases, we need to initialize the AWS SDK client classes with AWS credentials of some sort. I simply use this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if (isLocalDebug) //I am debugging locally
{
var chain = new CredentialProfileStoreChain();
AWSCredentials awsCredentials;
if (chain.TryGetAWSCredentials(Constants.AWSProfileName, out awsCredentials))
{
// use awsCredentials
iamClient = new AmazonIdentityManagementServiceClient(
awsCredentials, Amazon.RegionEndpoint.APSouth1);
}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, Constants.AWSProfileName is the AWS profile that we want the code to assume, meaning the privileges possessed by that profile will be used by the code while running in Visual Studio.&lt;/p&gt;

&lt;p&gt;This is explained in detail here &lt;a href="https://docs.aws.amazon.com/sdk-for-net/v3/developer-guide/net-dg-config-creds.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/sdk-for-net/v3/developer-guide/net-dg-config-creds.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Always use AWS profiles, and never hard-code credentials in your source code. Never.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;u&gt;Permissions for the lambda function&lt;/u&gt;
&lt;/h2&gt;

&lt;p&gt;The Lambda function needs to be associated with a role that has privileges to call the IAM APIs mentioned above, so the basic Lambda execution policy will not suffice. We need the following actions in our policy so that the API calls can be made.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;iam:ListAccessKeys&lt;/li&gt;
&lt;li&gt;iam:GetAccessKeyLastUsed&lt;/li&gt;
&lt;li&gt;iam:ListUsers&lt;/li&gt;
&lt;/ol&gt;
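
&lt;p&gt;For reference, a minimal policy statement granting just these three read-only actions could look like the following (this is an illustrative sketch, not a copy of the app's template):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:ListUsers",
        "iam:ListAccessKeys",
        "iam:GetAccessKeyLastUsed"
      ],
      "Resource": "*"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;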

&lt;p&gt;The actual JSON template can be found in template.json in the git repo at &lt;a href="https://github.com/turjachaudhuri/AWS-Serverless/blob/ValidateAccessKey/template.json" rel="noopener noreferrer"&gt;https://github.com/turjachaudhuri/AWS-Serverless/blob/ValidateAccessKey/template.json&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One thing to note here is that there is no Create/Update permission, only read and list permissions. As a result, this Lambda function's role cannot create or alter anything in IAM; it can only fetch data.&lt;/p&gt;

&lt;p&gt;For a detailed treatment of this topic, please check &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/access_permissions-required.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/IAM/latest/UserGuide/access_permissions-required.html&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;u&gt;How to deploy the project?&lt;/u&gt;
&lt;/h2&gt;

&lt;p&gt;This app uses an AWS SAM template, so you can simply clone the code and get started with a few basic commands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a bucket to store the packaged code:
aws s3 mb s3://[put your bucket name here] --region [put your region here] --profile [put your profile here]&lt;/li&gt;
&lt;li&gt;Validate the template:
sam validate --template template.json --profile [put your profile here]&lt;/li&gt;
&lt;li&gt;Package the code and push it to the bucket:
sam package --profile [put your profile here] --template-file template.json --output-template-file serverless-output.yaml --s3-bucket [put your bucket name here] --force-upload&lt;/li&gt;
&lt;li&gt;Create a CloudFormation stack and deploy all resources to your AWS account:
sam deploy --profile [put your profile here] --template-file serverless-output.yaml --stack-name [put your CF stack name here] --capabilities CAPABILITY_IAM --region [put your region here]&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;u&gt;Enough talk! Show me the money&lt;/u&gt;
&lt;/h2&gt;

&lt;p&gt;Once the app is deployed as a CloudFormation stack, you will get an API endpoint that you can use to verify that everything works as promised.&lt;/p&gt;

&lt;p&gt;This is how the API behaves when the supplied AccessKeyID exists.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe9i8le6hrdzy8wsaynvq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe9i8le6hrdzy8wsaynvq.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
This is how it behaves for an access key that does not exist.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq9fr0z0024v3d0bekygz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq9fr0z0024v3d0bekygz.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
If you need to debug AWS API calls with Postman when AWS_IAM auth is enabled, check out the detailed set of steps here: &lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-use-postman-to-call-api.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-use-postman-to-call-api.html&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;u&gt;Wait a minute, can anybody call my API?&lt;/u&gt;
&lt;/h2&gt;

&lt;p&gt;Unless your API has some sort of authentication to control who can invoke it, everyone will be able to call it. That would mean a person without any IAM credentials or AWS privileges could, simply by calling my API, get information about my AWS resources. This is not acceptable.&lt;/p&gt;

&lt;p&gt;This can be fixed by changing the Authorization setting of your API from None to AWS_IAM in the API Gateway console and deploying the API again. Once the API is redeployed, you will need to authenticate every invocation by signing each request with your access key and secret key using the SigV4 protocol.&lt;/p&gt;

&lt;p&gt;However, changing the Authorization type from the AWS console is not an option for us, since we want to do it via the SAM template that defines the resources for this app. In SAM templates, we can define API resources either implicitly, as Lambda function event sources, or explicitly, as separate AWS::Serverless::Api resources.&lt;/p&gt;

&lt;p&gt;However, AWS_IAM authorization is not supported in the implicit API declaration, as detailed here: &lt;a href="https://github.com/awslabs/serverless-application-model/issues/25" rel="noopener noreferrer"&gt;https://github.com/awslabs/serverless-application-model/issues/25&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a result, I changed my API Gateway resource declaration from implicit to explicit and made the necessary modifications to get it working. Please check out my repo for the detailed template at &lt;a href="https://github.com/turjachaudhuri/AWS-Serverless/blob/ValidateAccessKey/template.json" rel="noopener noreferrer"&gt;https://github.com/turjachaudhuri/AWS-Serverless/blob/ValidateAccessKey/template.json&lt;/a&gt;&lt;/p&gt;
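
&lt;p&gt;For a rough idea of what the explicit declaration involves (the resource names and path below are illustrative, not the actual template, and the x-amazon-apigateway-integration block that wires the path to the Lambda function is omitted for brevity), an AWS::Serverless::Api resource can carry a Swagger DefinitionBody whose security section enforces SigV4 signing:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"ProxyApi": {
  "Type": "AWS::Serverless::Api",
  "Properties": {
    "StageName": "Prod",
    "DefinitionBody": {
      "swagger": "2.0",
      "securityDefinitions": {
        "sigv4": {
          "type": "apiKey",
          "name": "Authorization",
          "in": "header",
          "x-amazon-apigateway-authtype": "awsSigv4"
        }
      },
      "paths": {
        "/validate": {
          "post": {
            "security": [ { "sigv4": [] } ]
          }
        }
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;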

&lt;h2&gt;
  
  
  &lt;u&gt;So, who can call my API?&lt;/u&gt;
&lt;/h2&gt;

&lt;p&gt;Since your API calls are now AWS_IAM-authorized, not everyone can call your API. You need to sign each request with your access key and secret key to consume it. That still does not mean that anybody in AWS can call it: the calling user also needs permission to invoke this particular API. This is how you ensure that only the developers on your team who need this functionality can access the API. Everyone else is locked out, giving you the level of security that is needed.&lt;/p&gt;
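
&lt;p&gt;Concretely, granting a teammate access boils down to an IAM policy statement along these lines (the region, account id, and API id are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Effect": "Allow",
  "Action": "execute-api:Invoke",
  "Resource": "arn:aws:execute-api:[put your region here]:[put your account id here]:[put your API id here]/*/POST/*"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;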

&lt;p&gt;If a user who does not have explicit permission to consume this API tries to invoke it, they will see the following error:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkrnffr7e3h0glb587y9v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkrnffr7e3h0glb587y9v.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;u&gt;Done and dusted. Where to go from here?&lt;/u&gt;
&lt;/h2&gt;

&lt;p&gt;I want to submit this app to the AWS Serverless Application Repository, but currently can't, since the repository only supports a few predefined policy templates. You cannot create custom roles or policies and attach them to resources, and there is currently no IAM policy template in the AWS SAM approved list that I can use.&lt;/p&gt;

&lt;p&gt;So I am working on raising a pull request against the AWS SAM repo that might help me out in this case.&lt;/p&gt;

&lt;p&gt;But you can carry on: fork my repo or clone my code. Do whatever you need to, and be sure to let me know what magic you created from my crappy code.&lt;/p&gt;

&lt;p&gt;All feedback is appreciated. Cheers!!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>security</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Creating Lambda backed Custom Cloudformation resource in C#</title>
      <dc:creator>Turja Narayan Chaudhuri</dc:creator>
      <pubDate>Thu, 20 Jan 2022 12:16:17 +0000</pubDate>
      <link>https://dev.to/turjachaudhuri/creating-lambda-backed-custom-cloudformation-resource-in-c-4pl5</link>
      <guid>https://dev.to/turjachaudhuri/creating-lambda-backed-custom-cloudformation-resource-in-c-4pl5</guid>
      <description>&lt;p&gt;&lt;strong&gt;&lt;u&gt;Notification 1&lt;/u&gt;&lt;/strong&gt; : All code related to this blog post can be found here  &lt;a href="https://github.com/turjachaudhuri/CF-custom-resources"&gt;https://github.com/turjachaudhuri/CF-custom-resources&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;u&gt;So, what is this about?&lt;/u&gt;
&lt;/h2&gt;

&lt;p&gt;While doing a personal proof of concept, I had to set up a CloudFormation template for a DynamoDB table. The table also needed some initial values for config purposes: master data that I needed to load into it. Though there were only a few values, I still wanted to automate the process so that the master data would be set up as soon as the table was created.&lt;/p&gt;

&lt;p&gt;This helps a lot, mostly during migrations, as it removes one manual step from the process. It can also be extended into any number of other use cases, which will become evident by the end of this blog post.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;u&gt;Can we simply not use CloudFormation?&lt;/u&gt;
&lt;/h2&gt;

&lt;p&gt;CloudFormation (CF) is awesome for creating AWS resources in a maintainable, consistent way. However, CF is limited in the sense that it supports only a subset of all AWS resources, and only a few specific operations.&lt;/p&gt;

&lt;p&gt;Say you want to trigger a separate event or call an external API when a resource within CloudFormation gets created. Currently, CF does not support this. Or say you want to create an AWS resource that is not yet supported by CloudFormation. Or say you want to load some master/config data into a DynamoDB table as soon as the table gets created. CloudFormation does not support any of these operations out of the box, as of now.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;u&gt;Custom Resources to the rescue!!&lt;/u&gt;
&lt;/h2&gt;

&lt;p&gt;"Custom resources enable you to write custom provisioning logic in templates that AWS CloudFormation runs anytime you create, update (if you change the custom resource), or delete stacks"- AWS Official post&lt;/p&gt;

&lt;p&gt;Basically, in the case of a Lambda-backed custom resource, what you do is as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Specify a custom resource in the CloudFormation template.&lt;/li&gt;
&lt;li&gt;Link a Lambda function to the custom resource. When the CloudFormation stack gets created/updated/deleted, CF sends a request to this Lambda function.&lt;/li&gt;
&lt;li&gt;The Lambda function contains the actual code to do what is needed, which can be loading a DynamoDB table with master data, calling an API endpoint to create an AWS or non-AWS resource, and so on.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So custom resources are basically an extension of CF. However, there are a lot of things to keep in mind while designing them.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;u&gt;Let's get started&lt;/u&gt;
&lt;/h2&gt;

&lt;p&gt;A custom resource is defined in the CloudFormation template as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  "PopulateMasterData": {
    "Type": "Custom::PopulateMasterData",
    "Properties": {
      "ServiceToken": {
        "Fn::GetAtt": [
                  "CustomResourceFunction",
                  "Arn"
                      ]
                      },
      "TableName": {
        "Fn::Sub": "${DynamoDBTableName1}"
      }
    }
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The only required property is &lt;strong&gt;ServiceToken&lt;/strong&gt;, which refers to the ARN of the Lambda function that CloudFormation invokes while creating the custom resource.&lt;/p&gt;

&lt;p&gt;Within the Properties section of the above JSON, any property other than ServiceToken is passed to the Lambda function as part of the event by CloudFormation during the custom resource create/update/delete.&lt;/p&gt;

&lt;p&gt;Basically, in the Properties section you mention all the parameters that define the custom resource you want to use. For example, in my case the Lambda function needs to push some configuration data into a DynamoDB table created within the same stack.&lt;/p&gt;

&lt;p&gt;So I have passed the tableName as a property. In the Lambda function I reference this tableName property and use it to push values into that DynamoDB table. If you are creating an AWS resource that is not supported by CloudFormation, you might instead send the properties of the resource you want to create, then reference those properties in the Lambda function to create the actual resource via an API call, and so on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Things to remember&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There are a few things you must keep in mind while designing the Lambda function that CloudFormation calls during the creation/update/deletion of your custom resource. These are covered in detail at &lt;a href="https://aws.amazon.com/premiumsupport/knowledge-center/best-practices-custom-cf-lambda"&gt;https://aws.amazon.com/premiumsupport/knowledge-center/best-practices-custom-cf-lambda&lt;/a&gt;, and I will mention how I have tried to accommodate them in my design.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;1. Build your custom resources to report, log, and handle failure gracefully&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;"Exceptions can cause your function code to exit without sending a response. Because CloudFormation requires an HTTPS response to confirm whether the operation was a success or a failure, an unreported exception will cause CloudFormation to wait until the operation times out before starting a stack rollback. If the exception occurs again on rollback, CloudFormation will wait again for a timeout before ultimately ending in a rollback failure. During this time, your stack is unusable, and timeout issues can be time-consuming to troubleshoot.&lt;/p&gt;

&lt;p&gt;To avoid this, make sure that your function's code has logic to handle exceptions, the ability to log the failure to help you troubleshoot, and if needed, the ability to respond back to CloudFormation with an HTTPS response confirming that an operation failed."&lt;/p&gt;

&lt;p&gt;In my Lambda code, even when there is a runtime exception, the function still returns a response to CloudFormation, avoiding stack failures and the timeout errors that are very hard to debug.&lt;/p&gt;

&lt;p&gt;Please see below an excerpt of the catch block in my code, which shows that even in case of an error a response is returned to CloudFormation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;catch (Exception ex)
{
context.Logger.LogLine($"StartupProgram::LoadMasterData =&amp;amp;gt; {ex.Message}");
context.Logger.LogLine($"StartupProgram::LoadMasterData =&amp;amp;gt; {ex.StackTrace}");

//Error - log it into the cloudformation console
CloudFormationResponse objResponse =
new CloudFormationResponse(
Constants.CloudformationErrorCode,
ex.Message,
context.LogStreamName,
request.StackId,
request.RequestId,
request.LogicalResourceId,
null
);

return objResponse.CompleteCloudFormationResponse(request, context).GetAwaiter().GetResult();
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;u&gt;2. Set reasonable timeout periods, and report when they're about to be exceeded&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;"If an operation doesn't execute within its defined timeout period, the function raises an exception and no response is sent to CloudFormation.&lt;/p&gt;

&lt;p&gt;To avoid this, ensure that the timeout value for your Lambda functions is set high enough to handle variations in processing time and network conditions. Consider also setting a timer in your function to respond to CloudFormation with an error when a function is about to timeout; this can help prevent function timeouts from causing custom resource timeouts and delays."&lt;/p&gt;

&lt;p&gt;In this particular case, I have set the Lambda function timeout to 300 seconds (the maximum supported by Lambda at the time of writing) to ensure that the function does not time out. If the function does time out somehow, CloudFormation will not receive the response it expects and the whole stack will get stuck. More on this later.&lt;/p&gt;

&lt;p&gt;I have yet to figure out how to set a timer in a Lambda function written in C# that can react when the timeout is about to expire and at least return a response to CloudFormation, in turn preventing the stack from getting stuck. I will update this post if I have any leads.&lt;/p&gt;
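
&lt;p&gt;One possible direction (a rough, untested sketch; TimeoutGuard and every name in it are mine, not part of any SDK) is to race the actual work against a Task.Delay that fires a little before the deadline, where the remaining time would come from ILambdaContext.RemainingTime inside a real Lambda function:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using System;
using System.Threading;
using System.Threading.Tasks;

public static class TimeoutGuard
{
    // Race the real work against a delay that fires 'buffer' before the
    // deadline, so the caller still has time to send CloudFormation an
    // explicit failure response instead of going silent and stalling the stack.
    public static async Task&amp;lt;T&amp;gt; RunWithDeadline&amp;lt;T&amp;gt;(
        Func&amp;lt;CancellationToken, Task&amp;lt;T&amp;gt;&amp;gt; work,
        TimeSpan remaining,   // in Lambda: ILambdaContext.RemainingTime
        TimeSpan buffer,
        Func&amp;lt;T&amp;gt; onTimeout)   // builds the "about to time out" response
    {
        using (var cts = new CancellationTokenSource())
        {
            Task&amp;lt;T&amp;gt; workTask = work(cts.Token);
            Task delayTask = Task.Delay(remaining - buffer, cts.Token);
            Task first = await Task.WhenAny(workTask, delayTask);
            cts.Cancel(); // stop whichever task lost the race
            return first == workTask ? await workTask : onTimeout();
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Passing the custom resource work as the work delegate and a CloudFormationResponse-building callback as onTimeout would let the function answer CloudFormation even when the real work overruns.&lt;/p&gt;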

&lt;p&gt;&lt;strong&gt;&lt;u&gt;3. Understand and build around Create, Update, and Delete events&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
"Depending on the stack action, CloudFormation sends your function a Create, Update, or Delete event. Each event is handled distinctively, so you should ensure that there are no unintended behaviors when any of the three event types is received.&lt;/p&gt;

&lt;p&gt;For more information, see Custom Resource Request Types.&lt;/p&gt;

&lt;p&gt;For each of these request types, CloudFormation injects different properties into the request object of the Lambda event, and you need to handle them differently.&lt;/p&gt;

&lt;p&gt;Here is a sample class in C# that models the different properties present in the event object injected into the Lambda function associated with the custom resource.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class CloudFormationRequest
{
     public string StackId { get; set; }
     public string ResponseURL { get; set; }
     public string RequestType { get; set; }
     public string ResourceType { get; set; }
     public string RequestId { get; set; }
     public string LogicalResourceId { get; set; }
     public string PhysicalResourceId { get; set; } //valid for delete and update operations
     public object ResourceProperties { get; set; } //valid for delete and update operations
     public object OldResourceProperties { get; set; } //valid for update operations
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;u&gt;4. Understand how CloudFormation identifies and replaces resources&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;"When an update triggers replacement of a physical resource, CloudFormation compares the PhysicalResourceId returned by your Lambda function to the previous PhysicalResourceId; if the IDs differ, CloudFormation assumes the resource has been replaced with a new physical resource.&lt;/p&gt;

&lt;p&gt;However, the old resource is not implicitly removed to allow a rollback if necessary. When the stack update is completed successfully, a Delete event request is sent with the old ID as an identifier. If the stack update fails and a rollback occurs, the new physical ID is sent in the Delete event.&lt;/p&gt;

&lt;p&gt;With this in mind, returning a new PhysicalResourceId should be done with care, and delete events must consider the input PhysicalId to ensure that updates that require replacement are properly handled."&lt;/p&gt;

&lt;p&gt;In my particular case, since I was not actually creating a resource of any kind, but rather loading configuration data into a DynamoDB table, I did not have to consider the Update/Delete cases. I simply push the data into the table when the request type is Create.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if (string.Equals(request.RequestType, Constants.CloudFormationCreateRequestType))
{
dynamoDBHelper.putItemTable1(item1, request.ResourceProperties.TableName);

//Success - data inserted properly in the dynamoDB
CloudFormationResponse objResponse =
new CloudFormationResponse(
Constants.CloudformationSuccessCode,
"Custom Resource Creation Successful",
$"{request.StackId}-{request.LogicalResourceId}-DataLoad",
request.StackId,
request.RequestId,
request.LogicalResourceId,
item1
);

return objResponse.CompleteCloudFormationResponse(request, context).GetAwaiter().GetResult();
}
else
{
CloudFormationResponse objResponse =
new CloudFormationResponse(
Constants.CloudformationSuccessCode,
"Do nothing.Data will be pushed in only when stack event is Create",
context.LogStreamName
request.StackId,
request.RequestId,
request.LogicalResourceId,
null
);
return objResponse.CompleteCloudFormationResponse(request, context).GetAwaiter().GetResult();
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;u&gt;5. Make sure that your functions are designed with idempotency in mind&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;"An idempotent function can be repeated any number of times with the same inputs, and the result will be the same as if it had been done only once. Idempotency is valuable when working with CloudFormation to ensure that retries, updates, and rollbacks don't cause the creation of duplicate resources, errors on rollback or delete, or other unintended effects.&lt;/p&gt;

&lt;p&gt;For example, if CloudFormation invokes your function to create a resource, but doesn't receive a response that the resource was created successfully, CloudFormation might invoke the function again, resulting in the creation of a second resource; the first resource may become orphaned.&lt;/p&gt;

&lt;p&gt;How to address this can differ depending on the action your function is intended to perform, but a common technique is to use a uniqueness token that CloudFormation can use to check for pre-existing resources. For example, a hash of the StackId and LogicalResourceId could be stored in the resource's metadata or in a DynamoDB table."&lt;/p&gt;

&lt;p&gt;My code to make the function idempotent: the DynamoDB insert is performed only if the item does not already exist.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if (getItem(masterItem.UniqueID,TableName)== null) // this item does not exist
{
   Table table = Table.LoadTable(client, TableName);

   var clientItem = new Document();
   clientItem["UniqueID"] = masterItem.UniqueID;
   clientItem["EmployeeID"] = masterItem.EmployeeID;
   clientItem["Name"] = masterItem.Name;
   clientItem["Employee"] = masterItem.Designation;
   clientItem["Age"] = masterItem.Age;
   clientItem["Department"] = masterItem.Department;

   table.PutItemAsync(clientItem).GetAwaiter().GetResult();
   context.Logger.LogLine("DynamoDBHelper::PutItem() -- PutOperation succeeded");
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;u&gt;6. Rollbacks&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If a stack operation fails, CloudFormation attempts to roll back, reverting all resources to their prior state. This results in different behaviors depending on whether the update caused a resource replacement.&lt;/p&gt;

&lt;p&gt;Ensuring that replacements are properly handled and the old resources are not implicitly removed until a delete event is received will help ensure that rollbacks are executed smoothly.&lt;/p&gt;

&lt;p&gt;To help implement best practices when using custom resources, consider using the Custom Resource Helper provided by awslabs, which can assist with exception and timeout trapping, sending responses to CloudFormation, and logging.&lt;/p&gt;

&lt;p&gt;Also, the AWS .NET SDK does not support CloudFormation custom resources out of the box, so you will need to create the custom classes yourself. I got a lot of help and ideas from this deep-dive article: &lt;a href="https://medium.com/@sch.bar/a-deep-dive-on-aws-cloudformation-custom-resources-72416f2e9cef"&gt;https://medium.com/@sch.bar/a-deep-dive-on-aws-cloudformation-custom-resources-72416f2e9cef&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also check my code at &lt;a href="https://github.com/turjachaudhuri/CF-custom-resources"&gt;https://github.com/turjachaudhuri/CF-custom-resources&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Always be sure to return a response from the Lambda function, even if the code hits an error or is about to time out. Otherwise the CloudFormation stack might get stuck and you will have to wait until the event times out. You can find more discussion on that topic here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://forums.aws.amazon.com/thread.jspa?threadID=176003"&gt;https://forums.aws.amazon.com/thread.jspa?threadID=176003&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;u&gt;Where to go from here?&lt;/u&gt;
&lt;/h2&gt;

&lt;p&gt;This opens up endless opportunities and is a great way to extend CloudFormation with custom resources. I will extend my source project to achieve two things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set up an AWS RDS instance with a sample SQL script for master data setup.&lt;/li&gt;
&lt;li&gt;Load a DynamoDB table from an S3 file when the table is created.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As always, please give me feedback on how to improve, and let me know if there is anything else that you are working on in this regard.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudskills</category>
      <category>serverless</category>
      <category>csharp</category>
    </item>
  </channel>
</rss>
