<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: IOD Cloud Tech Research Ltd.</title>
    <description>The latest articles on DEV Community by IOD Cloud Tech Research Ltd. (@iod).</description>
    <link>https://dev.to/iod</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F4428%2Fa3d19f13-8e20-403e-bb61-e902b134bb49.png</url>
      <title>DEV Community: IOD Cloud Tech Research Ltd.</title>
      <link>https://dev.to/iod</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/iod"/>
    <language>en</language>
    <item>
      <title>A Guide for Enterprises – Migrating to the AWS Cloud: Part 1</title>
      <dc:creator>We are IOD</dc:creator>
      <pubDate>Wed, 01 Jun 2022 15:05:49 +0000</pubDate>
      <link>https://dev.to/iod/a-guide-for-enterprises-migrating-to-the-aws-cloud-part-1-53h2</link>
      <guid>https://dev.to/iod/a-guide-for-enterprises-migrating-to-the-aws-cloud-part-1-53h2</guid>
      <description>&lt;p&gt;Amazon S3 this year celebrated its 16th birthday. Launched on Pi Day (March 14) in 2006, the extremely popular cloud storage service was among AWS’ earliest offerings, along with Amazon Simple Queue Service and EC2. With the release of S3, Amazon revolutionized the world of computer storage and forever changed the way organizations look at IT infrastructure—compute, storage and network. &lt;/p&gt;

&lt;p&gt;Today, AWS is the most comprehensive and broadly adopted public cloud platform, with over 200 services and more than 80 availability zones across 25 geographic regions around the world. AWS enables anyone—from individuals to international Fortune 500 companies—to leverage enterprise-grade services with a cost-efficient pay-as-you-go pricing model.&lt;/p&gt;

&lt;p&gt;Over the past decade, and even more so since the global pandemic disruption, more and more companies have been shifting to public cloud platforms—not only to reduce their physical data-center footprints but also to innovate and adapt more quickly to changing demand. In the enterprise world, certain industries were slower than others to adopt the public cloud, but by now the shift has become ubiquitous. &lt;/p&gt;

&lt;p&gt;Large organizations, however, face inherent challenges regarding cloud adoption, such as procurement, legal, and financial aspects. But the biggest factor for the delayed start across many industries has been a lack of services capable of addressing some of their specific requirements related to geographic location, compliance, and specialized hardware, among others. Still, with the maturity and evolution of cloud services, there is hardly any reason left to prevent organizations from adopting the public cloud.&lt;/p&gt;

&lt;p&gt;This article is the first in a two-part series on moving your enterprise workloads to AWS. In this post, we will highlight some of the key points to consider when getting started. &lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud Migration Models and How to Utilize Them
&lt;/h2&gt;

&lt;p&gt;There are a few well-known strategies to migrate to the public cloud. The most popular approaches are known as rehosting (lift-and-shift), replatforming, and rebuilding.&lt;/p&gt;

&lt;p&gt;All public cloud vendors provide infrastructure-as-a-service functionalities that enable organizations to rehost their existing infrastructure (virtual machines, data storage, network, etc.) to the cloud. According to Gartner, AWS is the current leader in the infrastructure-as-a-service (IaaS) segment, and by being a common denominator across on-premises and public cloud providers, IaaS remains one of the most popular and easiest ways to get started with AWS. &lt;/p&gt;

&lt;p&gt;IaaS provides maximum control but, at the same time, requires the most management effort, such as configuring systems, monitoring and adjusting resources, and applying security patches. For a successful migration, your IT team will need to understand how AWS works at its core.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ehFX509a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/juvxkbew0q9wz16bwzrv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ehFX509a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/juvxkbew0q9wz16bwzrv.png" alt="Image description" width="880" height="884"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 1: Gartner’s 2020 Magic Quadrant for Cloud &lt;br&gt;
Infrastructure &amp;amp; Platform Services&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;While IaaS is a popular starting point for migration, it may not be the most effective way to use AWS services. Rather, to build modern, cloud-native, scalable, and cost-effective applications, there are other categories to consider, such as platform as a service (PaaS) and software as a service (SaaS). And within these, it is worth exploring the concepts of functions as a service (FaaS) and containers as a service (CaaS), which radically changed the computing paradigm for software engineers. &lt;/p&gt;

&lt;p&gt;These services share the same purpose: to abstract the underlying infrastructure pieces and provide developers with more freedom to focus on the application, rather than the infrastructure. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Platform as a Service&lt;/strong&gt;&lt;br&gt;
PaaS encapsulates platform configurations and OS-level tasks. For example, AWS Elastic Beanstalk automatically handles application deployment, capacity provisioning, load balancing, and autoscaling without additional manual effort. Another great example is Amazon RDS, the managed relational database service that comes with out-of-the-box support for automatic snapshots, global tables, and replication, among many other features.&lt;/p&gt;
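
&lt;p&gt;To make that concrete, here is a minimal sketch of a Beanstalk deployment using the EB CLI (the application name, platform string, and environment name below are illustrative placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Initialize an Elastic Beanstalk application in the current directory
eb init my-app -p python-3.8 --region us-east-1
# Create an environment; Beanstalk provisions capacity, load balancing, and autoscaling
eb create my-env
# Deploy the current version of the application
eb deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;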

&lt;p&gt;&lt;strong&gt;Software as a Service&lt;/strong&gt;&lt;br&gt;
SaaS encapsulates all internal details and provides an API-based interface to start using the service. One example is Amazon SES (Simple Email Service), which enables the programmatic sending and receiving of emails via an API. Another popular example is AWS Amplify, which enables developers to build and deploy a web or mobile application without any operational overhead. &lt;/p&gt;
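
&lt;p&gt;As a rough illustration, sending a message through SES is a single API call; a hedged sketch with the AWS CLI (the addresses are placeholders, and the sender identity must first be verified in SES):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Send an email programmatically via Amazon SES
aws ses send-email \
  --from sender@example.com \
  --destination ToAddresses=recipient@example.com \
  --message 'Subject={Data=Hello},Body={Text={Data=Greetings}}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;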

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qIkf10ty--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ybvzv3uhtl3zu20tp091.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qIkf10ty--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ybvzv3uhtl3zu20tp091.png" alt="Image description" width="880" height="562"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 2: Evolution of cloud services (Source: Red Hat)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;While most organizations typically use a single cloud provider such as AWS, large enterprises often opt for a multicloud strategy. This provides more flexibility in M&amp;amp;A operations and offers additional options such as access to exclusive geographical locations, making it easier for an organization to meet business requirements related to latency or government regulations. &lt;/p&gt;

&lt;h2&gt;
  
  
  Understand Why You Want to Move to AWS
&lt;/h2&gt;

&lt;p&gt;Every organization has different goals and priorities when beginning its cloud migration. Likewise, AWS has many services and features that can be utilized to accommodate different use cases, such as data backup, disaster recovery, high availability, low-cost storage, big-data processing, and more. &lt;/p&gt;

&lt;p&gt;The most important parameter for a successful migration is understanding the core reasoning behind the move. Enterprises should ask themselves: Why do I want to migrate to AWS? The answer will help all stakeholders get on the same page. It will also help IT teams choose the right set of AWS services (based on the different migration models discussed earlier). For example, AWS provides multiple storage services and different types of load balancers, and selecting the right one depends on your use case and business requirements.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security in the Cloud
&lt;/h2&gt;

&lt;p&gt;Security is a critical topic for any organization. Historically, security and compliance concerns have been among the reasons many organizations, especially large enterprises, have been reluctant to adopt the cloud. Over the years, however, AWS has focused on making sure its infrastructure meets the strictest security and compliance standards; it also seeks to offer the proper tools and services for organizations in sectors such as finance, healthcare, and government to be able to run their systems in the AWS Cloud. &lt;/p&gt;

&lt;p&gt;There is a common misconception that all cloud workloads must be internet-facing. Naturally, this is not true, and one can easily build a completely private and isolated workload environment. Yet, public-facing workloads such as e-commerce applications were among the first to benefit from cloud-native capabilities such as autoscaling and pay-as-you-go pricing. &lt;/p&gt;

&lt;p&gt;If you are looking to protect your internet-based applications from external threats like DDoS attacks or any of the vulnerabilities on OWASP’s list (injection, broken authentication, sensitive data exposure, etc.), AWS WAF and AWS Shield are great options. These built-in services leverage AWS’ own security expertise and make it easier for organizations to safely build globally distributed applications. &lt;/p&gt;
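
&lt;p&gt;To give a sense of how little setup is involved, here is a sketch of attaching AWS Shield Advanced protection to a resource (the ARN is a placeholder, and Shield Advanced requires an active subscription):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Attach Shield Advanced protection to a CloudFront distribution
aws shield create-protection \
  --name my-cloudfront-protection \
  --resource-arn arn:aws:cloudfront::123456789012:distribution/EXAMPLEID
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;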

&lt;p&gt;Here, I’ll take a closer look at how AWS manages security in the cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Shared Responsibility Model
&lt;/h2&gt;

&lt;p&gt;According to AWS’ shared responsibility model, security and compliance are the shared responsibility of AWS and its customers. While AWS manages “security of the cloud,” the customer manages “security in the cloud.” &lt;/p&gt;

&lt;p&gt;This means that AWS is responsible for protecting the infrastructure running all of its services, including the hardware, software, networking, and data-center facilities. However, customers are responsible for configuring and managing the AWS service(s) they decide to use. For instance, if you use Amazon EC2 instances to host your application, you—not AWS—will be responsible for the configurations and management of those instances. &lt;/p&gt;
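
&lt;p&gt;For instance, locking down network access to those instances falls squarely on the customer side of the model; a minimal sketch with the AWS CLI (the group ID and CIDR range are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Allow SSH only from the corporate network instead of 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 22 \
  --cidr 203.0.113.0/24
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;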

&lt;p&gt;The diagram below explains who protects which segments and how much control your IT team has over the public cloud infrastructure:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--o1twWpUF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z3d9id6atiwkjovio37g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--o1twWpUF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z3d9id6atiwkjovio37g.png" alt="Image description" width="880" height="482"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 3: AWS shared responsibility model for cloud services (Source: AWS)&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Organizational Structure
&lt;/h2&gt;

&lt;p&gt;In an on-premises environment, the IT team can organize and restrict different applications with the help of physical networks and boundaries. In an AWS environment, you can run all of your applications in the same account. However, this is not a recommended practice, as it may not be compliant with regulatory requirements (e.g., financial or healthcare applications that require process and data isolation for risk mitigation). &lt;/p&gt;

&lt;p&gt;AWS Organizations is an account-management service that allows your IT team to easily create and manage multiple AWS accounts with the required security controls and supervision. By keeping different environments in different AWS accounts, you can limit potential security threats while simultaneously maintaining overall governance. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DAQZwT6r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i6j6dwqw556rcqsc7zyv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DAQZwT6r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i6j6dwqw556rcqsc7zyv.png" alt="Image description" width="880" height="252"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 4: AWS Organizations can be used to create and manage group accounts (Source: AWS)&lt;/em&gt;&lt;/p&gt;
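
&lt;p&gt;As a sketch of what this looks like in practice, member accounts can be created and grouped programmatically (the email, names, and IDs below are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a new member account under the organization
aws organizations create-account \
  --email dev-team@example.com \
  --account-name "Development"
# Group accounts under an organizational unit (OU)
aws organizations create-organizational-unit \
  --parent-id r-examplerootid1 \
  --name "Workloads"
# Review all member accounts
aws organizations list-accounts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;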

&lt;p&gt;AWS Organizations also integrates with AWS Single Sign-On (SSO), allowing your workforce to access multiple AWS accounts using existing enterprise identity systems such as Microsoft Active Directory:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--87tLgal1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2thfare64d9of04sa6s5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--87tLgal1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2thfare64d9of04sa6s5.png" alt="Image description" width="880" height="508"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 5: AWS Single Sign-On (SSO) with enterprise identity systems like Microsoft Active Directory (Source: AWS)&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Managing Governance and Compliance in AWS
&lt;/h2&gt;

&lt;p&gt;Enterprise IT teams have to maintain an inventory of the resources in use. For security and compliance reasons, they also have to regularly update the infrastructure and keep track of changes. Below, I’ll review a few management and governance services that AWS provides. These are designed with simplicity, scale, and cost-effectiveness in mind, so they’re suitable for organizations of any size.&lt;/p&gt;

&lt;h2&gt;
  
  
  Management Services
&lt;/h2&gt;

&lt;p&gt;In a distributed, multi-account setup, you don’t want to depend entirely on a central IT team to manage and perform all tasks manually. This slows down the creation of new environments and burdens your team with unnecessary work. AWS has a number of management services that help IT teams carry out these tasks securely and reliably. &lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Control Tower
&lt;/h2&gt;

&lt;p&gt;AWS Control Tower helps set up a baseline environment in an automated and controlled way, following organizational policies. Control Tower enables the creation of rules, called guardrails, and provides recommendations for them. These help organizations enforce their policies via service control policies (SCPs) and can also detect policy violations so you stay compliant—functionalities you can automate for both new and existing accounts.&lt;/p&gt;
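
&lt;p&gt;To give a flavor of SCPs, here is a hedged sketch of registering and attaching one through AWS Organizations (the policy file, names, and IDs are hypothetical; Control Tower manages its own guardrails, so this only illustrates the underlying mechanism):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Register an SCP from a local JSON policy document (hypothetical file)
aws organizations create-policy \
  --name DenyUnapprovedRegions \
  --description "Restrict activity to approved regions" \
  --type SERVICE_CONTROL_POLICY \
  --content file://deny-regions.json
# Attach the policy to an OU so it applies to every account beneath it
aws organizations attach-policy \
  --policy-id p-examplepolicyid \
  --target-id ou-exampleouid
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;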

&lt;h2&gt;
  
  
  AWS Systems Manager
&lt;/h2&gt;

&lt;p&gt;AWS Systems Manager helps you centralize data from multiple AWS services and automate tasks across AWS resources. The service has some important features, including: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Session Manager: For logging into instances from a web browser (among other things)&lt;/li&gt;
&lt;li&gt;Parameter Store: For storing important configurations, like passwords and database connection details, in an encrypted format&lt;/li&gt;
&lt;li&gt;Inventory: For collecting the configuration and inventory of instances&lt;/li&gt;
&lt;li&gt;Patch Manager: For easily applying software patches to a group of instances&lt;/li&gt;
&lt;/ul&gt;
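
&lt;p&gt;A brief sketch of what a few of these features look like from the AWS CLI (the instance ID and parameter name are placeholders; Session Manager also requires the SSM agent and session plugin to be in place):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Session Manager: open an interactive shell with no SSH keys or open ports
aws ssm start-session --target i-0123456789abcdef0
# Parameter Store: keep a secret encrypted at rest
aws ssm put-parameter \
  --name /prod/db/password \
  --type SecureString \
  --value 'example-password'
# Inventory: list managed instances and their reported details
aws ssm describe-instance-information
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;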

&lt;h2&gt;
  
  
  Governance Services
&lt;/h2&gt;

&lt;p&gt;Organizations want to achieve business agility by moving to the cloud, but at the same time, they want to maintain the necessary governance control. There are a few key AWS services worth exploring that provide auditing and compliance capabilities so that you can securely govern your resources at any scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS CloudTrail
&lt;/h2&gt;

&lt;p&gt;AWS CloudTrail is the source-of-truth service for everything that happens in an AWS environment. Virtually all changes to your AWS environment are made via platform API calls, and CloudTrail keeps a record of every API call, who made it, and when it was placed. This helps track user and resource activity in your cloud environment. &lt;/p&gt;
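
&lt;p&gt;For example, recent API activity can be queried directly; a minimal sketch (the event name is just one possibility):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Who terminated EC2 instances recently, and when?
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=TerminateInstances \
  --max-results 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;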

&lt;h2&gt;
  
  
  AWS Config
&lt;/h2&gt;

&lt;p&gt;In a large environment, it can be difficult to keep track of or identify changes, as well as maintain a snapshot of the environment at a particular point in time. AWS Config provides the inventory, history, and change notifications of your cloud resources and their configuration to enable better governance and an improved security posture. &lt;/p&gt;
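
&lt;p&gt;As a quick sketch, the change history of a single resource can be pulled like so (the resource type and ID are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Show how a security group's configuration has changed over time
aws configservice get-resource-config-history \
  --resource-type AWS::EC2::SecurityGroup \
  --resource-id sg-0123456789abcdef0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;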

&lt;h2&gt;
  
  
  AWS Trusted Advisor and AWS Well-Architected Tool
&lt;/h2&gt;

&lt;p&gt;After working with thousands of enterprise customers over the years, AWS has distilled its knowledge of best practices and successful cloud operations into two services: AWS Trusted Advisor and the AWS Well-Architected Tool. &lt;/p&gt;

&lt;p&gt;AWS Trusted Advisor analyzes your environment, offering up recommendations for cost, performance, security, fault tolerance, and service limits per proven industry best practices. &lt;/p&gt;

&lt;p&gt;AWS Well-Architected Tool enables engineering teams to assess the state of their workloads and ways of working by comparing them to the latest AWS architecture best practices. This tool is designed to get feedback on different aspects of your application—operational excellence, performance efficiency, reliability, security, sustainability, and cost optimization—and then generates a risk scorecard for each of these pillars.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;There is a saying among large organizations that have successfully migrated to AWS Cloud: “Crawl, walk, run.” What does this mean for you? &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Crawl: Identify and set up a clear plan and resources to build a strong cloud foundation.&lt;/li&gt;
&lt;li&gt;Walk: Migrate and monitor your processes. This phase is all about learning and adopting the best cloud practices.&lt;/li&gt;
&lt;li&gt;Run: Iterate and modernize to reap the benefits of cloud computing. This is where you identify and innovate your business processes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In short, define your goals, find the right strategy and people to accomplish them, and continue on your cloud path. What you learn along the way will help you evolve and adapt. As you probably know by now, cloud computing is here to stay, so the time to move your business to the cloud is now!&lt;/p&gt;

&lt;p&gt;In the next post, we will cover areas such as operational monitoring, resource management, and cloud cost optimization, as well as discuss how to create an effective team culture for successful cloud adoption. &lt;/p&gt;

&lt;p&gt;This article was originally posted on the &lt;a href="https://iamondemand.com/blog/a-guide-for-enterprises-migrating-to-the-aws-cloud-part-1/"&gt;IOD Blog&lt;/a&gt; by Bruno Almeida.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Our Take on Kubernetes: 6 Top Articles to Get You up to Speed</title>
      <dc:creator>We are IOD</dc:creator>
      <pubDate>Mon, 16 May 2022 13:31:18 +0000</pubDate>
      <link>https://dev.to/iod/our-take-on-kubernetes-6-top-articles-to-get-you-up-to-speed-49n9</link>
      <guid>https://dev.to/iod/our-take-on-kubernetes-6-top-articles-to-get-you-up-to-speed-49n9</guid>
      <description>&lt;p&gt;In anticipation of the KubeCon + CloudNativeCon conference that will take place in Valencia, Spain, on May 16-20 (and virtually), we wanted to share with you some key takeaways from six recent Kubernetes articles that we found particularly interesting.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. The Top 5 Kubernetes Configuration Mistakes—and How to Avoid Them by Komodor
&lt;/h2&gt;

&lt;p&gt;This article describes how to avoid five common syntax, provisioning, and resource management misconfigurations that can cause cluster-wide performance, availability, and stability issues. For example, poorly configured operators for facilitating third-party integrations can end up wantonly consuming limited resources, causing runtime errors such as OOM (out of memory). Or using a single container to handle all ingress traffic can take down the cluster if there are traffic spikes.&lt;/p&gt;

&lt;p&gt;Our main takeaway is that these and other configuration mistakes must be taken into account during the design, development, and testing stages in order to avoid runtime performance issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. The Ultimate Kubectl Commands Cheat Sheet by Komodor
&lt;/h2&gt;

&lt;p&gt;This article is an invaluable resource on how to properly use the kubectl command line to interact optimally with Kubernetes clusters. The various kubectl options and filters are critical for getting or switching contexts, obtaining the names of containers in a running pod, creating or getting values from secrets, testing RBAC rules, and more.&lt;/p&gt;

&lt;p&gt;Our main takeaway is that complete mastery of the kubectl command is an essential Kubernetes development skill. In addition to this article, be sure to reference the official kubectl page.&lt;/p&gt;
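
&lt;p&gt;To illustrate the kinds of commands the cheat sheet covers, here are a few everyday examples (all names are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Get and switch contexts
kubectl config get-contexts
kubectl config use-context my-cluster
# List the containers in a running pod
kubectl get pod my-pod -o jsonpath='{.spec.containers[*].name}'
# Create a secret, then read a value back from it
kubectl create secret generic db-creds --from-literal=password=example
kubectl get secret db-creds -o jsonpath='{.data.password}' | base64 --decode
# Test RBAC rules
kubectl auth can-i delete pods --as system:serviceaccount:default:ci-bot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;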

&lt;h2&gt;
  
  
  3. Kubernetes Capacity Planning: How to Rightsize the Requests of Your Cluster by Sysdig
&lt;/h2&gt;

&lt;p&gt;Too much capacity is wasteful and needlessly costly. Too little capacity can cause performance bottlenecks. This article provides important insights on the art and science of rightsizing Kubernetes capacity. Our main takeaways are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Make sure to have Prometheus as an add-on for tracking cluster resource usage metrics.&lt;/li&gt;
&lt;li&gt;Use Kubernetes limits and requests whenever you can.&lt;/li&gt;
&lt;li&gt;Size your clusters based on the resources your pods are estimated to need and use.&lt;/li&gt;
&lt;li&gt;Utilize cloud-native autoscaling features if you’re deploying on public clouds.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Although not mentioned explicitly in the article, we would also add the importance of utilizing Kubernetes’ horizontal and vertical pod autoscaling features (HPA and VPA) to rightsize your clusters.&lt;/p&gt;
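
&lt;p&gt;A small sketch of putting two of these takeaways into practice (the deployment name and values are placeholders, and kubectl top requires the metrics server):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Observe actual usage before sizing
kubectl top pods -n production
# Set requests and limits on an existing deployment
kubectl set resources deployment my-app \
  --requests=cpu=100m,memory=256Mi \
  --limits=cpu=500m,memory=512Mi
# Add horizontal pod autoscaling between 2 and 10 replicas
kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=70
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;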

&lt;h2&gt;
  
  
  4. Kubernetes 1.24 – What’s New? by Sysdig
&lt;/h2&gt;

&lt;p&gt;Kubernetes 1.24 was released on May 3. This article summarizes the most notable new, evolving, and deprecated features across a number of key categories: APIs, apps, auth, network, nodes, scheduling, and storage.&lt;/p&gt;

&lt;p&gt;Our main takeaway is that, as a Kubernetes developer, it’s important that you stay on top of where the Kubernetes project is headed and what its timeline is moving forward. In addition to this article, two other helpful resources are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Official Kubernetes 1.24 release page&lt;/li&gt;
&lt;li&gt;Release plan and schedule for Kubernetes 1.25&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  5. Rancher vs. Kubernetes: It’s Not Either Or by Kubecost
&lt;/h2&gt;

&lt;p&gt;Kubernetes and Rancher are both important open-source container management projects, each with a large community of users and contributors. This article starts by summarizing the key features of each project:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aawu4T3P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bile9fdatmh7imn46fxe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aawu4T3P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bile9fdatmh7imn46fxe.png" alt="Image description" width="880" height="641"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The main takeaway is that the two are complementary. Kubernetes focuses on orchestrating resources within a single cluster, while Rancher eases Kubernetes cluster management at scale. So, for example, using Rancher to deploy Kubecost across a Rancher project provides end-to-end visibility into and more granular management of Kubernetes cluster costs, as well as cluster health and efficiency.&lt;/p&gt;

&lt;p&gt;We would also like to point out that Rancher is being embraced by cloud providers for managing cloud-native Kubernetes clusters. See AWS’ reference deployment Rancher for Amazon EKS.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Kubernetes kOps: Step-By-Step Example &amp;amp; Alternatives by Kubecost
&lt;/h2&gt;

&lt;p&gt;Kubernetes kOps is an open-source command line tool for automating:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configuration, maintenance, and management of Kubernetes clusters&lt;/li&gt;
&lt;li&gt;Provisioning of the cloud infrastructure to run them&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Although the article points out that there are alternatives to kOps (Kubespray, eksctl, and kubeadm), kOps is the only tool that is both provider-agnostic (or at least will be soon) and able to support infrastructure provisioning. It then goes on to provide a hands-on example of how to use kOps to set up a Kubernetes cluster in AWS.&lt;/p&gt;
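
&lt;p&gt;For reference, a heavily condensed sketch of that flow on AWS (the cluster name, state-store bucket, and zone are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# kOps stores cluster state in an S3 bucket (placeholder name)
export KOPS_STATE_STORE=s3://my-kops-state-store
# Generate the cluster configuration (gossip-based DNS via .k8s.local)
kops create cluster --name=demo.k8s.local --zones=us-east-1a --node-count=2
# Provision the AWS infrastructure and bring the cluster up
kops update cluster --name=demo.k8s.local --yes
# Wait for the cluster to become healthy
kops validate cluster --name=demo.k8s.local --wait 10m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;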

&lt;p&gt;Our main takeaway is that tools like kOps are an important part of an organization’s Kubernetes stack, making it easier to manage and orchestrate Kubernetes clusters at scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;The Kubernetes ecosystem is continuously evolving, and we here at IOD make it our business to keep on top of emerging innovations, trends, and tips. In this article, we shared with you our key takeaways on how to: avoid common misconfigurations, fully leverage the kubectl command, rightsize Kubernetes capacity, and incorporate both kOps and Rancher into your Kubernetes stack. We also looked at what’s new (and what’s gone) in the latest version released earlier this month.&lt;/p&gt;

&lt;p&gt;Tap into &lt;a href="https://iamondemand.com/content-types/"&gt;IOD’s extensive talent network&lt;/a&gt; of K8s, DevOps, cloud experts, and more to create content that speaks to devs. &lt;a href="https://iamondemand.com/contact-us/"&gt;Get started today&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>kubecon</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Targeting Developers with Tech Content: 4 Tips for B2D Marketers</title>
      <dc:creator>We are IOD</dc:creator>
      <pubDate>Thu, 12 May 2022 14:08:17 +0000</pubDate>
      <link>https://dev.to/iod/targeting-developers-with-tech-content-4-tips-for-b2d-marketers-2bj3</link>
      <guid>https://dev.to/iod/targeting-developers-with-tech-content-4-tips-for-b2d-marketers-2bj3</guid>
      <description>&lt;p&gt;Over the years, tech content marketers have frequently prioritized writing business content targeted at the C-suite. But, within the past decade, a bottom-up adoption method has gained more ground, inspiring marketers to incorporate increasingly more content for developers into their content strategies.&lt;/p&gt;

&lt;p&gt;Now, writing for developer practitioners has become essential for tech organizations to hit their KPIs and meet their goals. After all, developer team leads &lt;a href="https://www.devrelx.com/trends?lightbox=comp-kisqhm6d3__85a0f937-9ce5-419d-959a-80fd18ac461b_runtime_dataItem-kisqhm6e"&gt;influence technology decisions 67% of the time&lt;/a&gt;, playing a major role in deciding what tools are incorporated into workflows and processes. Creating more content for developers can play a critical role in the sales process, encouraging practitioners as they test free products or product trials.&lt;/p&gt;

&lt;p&gt;However, many brands struggle to create content that resonates with developers. Often, the knowledge gap between tech marketers and practitioners causes “business-to-developer content” (or B2D content) to fall short of a technical audience’s expectations.&lt;/p&gt;

&lt;p&gt;Practitioners have a different relationship to your product than other decision-makers. Since they’re using your product every day, developers need to see what’s in it for them before they choose to work with your brand. Plus, they’re looking for practical, precise content that solves their issues, running searches like “how to do x” or “bug in y.” Creating B2D content like this builds trust in your product, driving developers to recommend your product or service throughout their organization.&lt;/p&gt;

&lt;p&gt;Here are four ways your brand can master B2D content marketing and start creating tech content that makes developers want to work with you.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Focus on Applications vs. Philosophy
&lt;/h2&gt;

&lt;p&gt;Traditional, high-level marketing content doesn’t resonate with developers, showcasing the knowledge gap between marketers and practitioners. That’s because this content focuses too much on the philosophy behind your product rather than how it actually works.&lt;/p&gt;

&lt;p&gt;Developers want to go beyond the theory, seeing the practical ways they can use your product or service to solve their current challenges. But dry technical content is a dime a dozen; even though developers may be used to slogging through technical manuals, that doesn’t mean it’s the best use of their time, especially for a tool not currently in their toolkit. While successful tech content undeniably emphasizes application, step-by-step walkthroughs without context aren’t enough to maintain a developer’s interest, either.&lt;/p&gt;

&lt;p&gt;Instead, they need clear insight into two elements to see if your solution is the best tool to solve their challenges: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The philosophy behind your product—what your product is and how you solve a developer’s issues on a high level, as demonstrated through best practices and customer use cases with technical examples that include diagrams and code snippets.&lt;/li&gt;
&lt;li&gt;Real-world applications for your solution—like actionable how-to or walkthrough content that showcases specific capabilities, features, or workflows and examples of how they can reproduce the same results.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Creating content with these elements builds a developer’s trust in your solution and offers clear insight into how quickly and efficiently developers can benefit from adding your product to their workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Design Easily Scannable, Actionable Content
&lt;/h2&gt;

&lt;p&gt;Most developers won’t call your company’s help desk to explain their current challenge, learn how to use your product, and compare your product to other alternatives. Instead, they typically start by searching on Google, (hopefully) landing on one of your website pages, and exploring for themselves to see how your solution can support their existing needs and workflows. Additionally, when developers encounter a problem they don’t know how to solve, they often turn to &lt;a href="https://www.researchgate.net/publication/315975127_What_Do_Developers_Use_the_Crowd_For_A_Study_Using_Stack_Overflow"&gt;crowdsourcing question and answer websites &lt;/a&gt;like Quora, StackOverflow, or Reddit to source answers to specific questions from other practitioners.&lt;/p&gt;

&lt;p&gt;Yet, we still see marketers try to incorporate tech content into a more traditional, long-form blog post format with more story than necessary. This format doesn’t help developers get the quick, easy answer they need to solve their problems. &lt;a href="https://www.devrelx.com/post/content-that-developers-love"&gt;Experienced tech writer Raphael Mun&lt;/a&gt; recommends structuring tech content more like online recipes instead of traditional corporate blog content.&lt;/p&gt;

&lt;p&gt;Once a developer lands on one of your blog posts, they will quickly scroll through it to gauge its length, its subject, and how technical it is. Then, they have the option to scan your content, skip past the story, and find a solution more quickly.&lt;/p&gt;

&lt;p&gt;To create scannable and actionable content for developers, provide an introduction to the use case or problem your product addresses. Then, incorporate common questions developers ask on popular question and answer websites as section headers. Including these questions as headers makes it more likely that developers will discover and consult your blog post while searching for answers on Google. Plus, these headers make it easier for developers to scan your content and find the answer they’re looking for.&lt;/p&gt;

&lt;p&gt;Don’t forget to keep your website content current, too. Things change quickly in tech, so it’s important to have content that continually supports developers with accurate code snippets, up-to-date screenshots showing recent platform updates, effective walkthroughs, and popular integrations. Regularly review older content to see if it still aligns with current best practices and confirm that it reflects how your product works without any bugs.&lt;/p&gt;

&lt;p&gt;Timely content makes your brand look more credible to developers and makes them more likely to turn to your website when they’re searching for solutions.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Emphasize One Use Case At a Time
&lt;/h2&gt;

&lt;p&gt;For a marketer, it can be tempting to create high-level content like listicles that show all the great benefits your platform has to offer. However, developers need to see that your solution works with their existing tech stack and can solve even a single, specific challenge they’re currently experiencing. Maintaining a single scope in your content allows you to give developers the support they’re looking for right away.&lt;/p&gt;

&lt;p&gt;Not sure which use cases to focus on? &lt;a href="https://iamondemand.com/blog/marketing-it-love-hate-or-just-love/"&gt;Leverage internal experts&lt;/a&gt; to serve as a focus group to research and produce relevant, meaningful content that speaks to your target dev audiences. Your product managers should be able to offer insight into the requirements and questions clients have. Then, they should also walk you through the platform and show how your online service helps solve each specific requirement. &lt;/p&gt;

&lt;p&gt;You should also consult clients directly to learn about the problems they’re facing and how they’re solving them. Think of your tech content more like case studies than blog posts; give leading senior dev practitioners at other companies an opportunity to showcase the cool and innovative ways they’re using your product to solve their problems and accomplish their goals on your blog. &lt;a href="https://iamondemand.com/blog/5-key-considerations-for-building-an-authentic-content-plan/"&gt;Interviewing expert practitioners&lt;/a&gt; currently experimenting with your product can make your content even richer, offering insight into the real-world problems your product solves and the practical results your solution provides.&lt;/p&gt;

&lt;p&gt;While showing how other developers solved their challenges, explain the practitioner’s background along with what it took time- and resource-wise to generate those results. This gives developers a realistic view into how they can use your solution to create those results on their own. Then, incorporate testable, “try it yourself” examples for developers to experiment with. Detailed walkthroughs with screenshots and code snippets encourage them to try new things while using your platform.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Don’t Be Afraid to Dive Deep Into Bits and Bytes
&lt;/h2&gt;

&lt;p&gt;Trying to find the information they’re looking for amid business results and marketing claims will often drive a developer to bounce from your website and search for answers on StackOverflow instead. It’s not because developers don’t care about business results—they’re still interested in learning how your solution can decrease mean time to resolution or increase their team’s productivity. However, those results don’t support a developer’s immediate needs.&lt;/p&gt;

&lt;p&gt;Instead of focusing on the benefits of your solution, include links to other blog posts reporting on these business results, keeping content for developers focused on the technology itself. Dive deep into the details specific developers need to see if your solution helps solve their problems.&lt;/p&gt;

&lt;p&gt;Technical content can help your product sell itself if it’s easy to understand and clearly demonstrates the impacts your product has on solving IT problems. Leave the product-centric language and sales content out, focusing instead on the intricate details that support your use cases. Maintaining this focus helps to &lt;a href="https://iamondemand.com/blog/subject-matter-expert-sme-content-paradox-pulling-teeth/"&gt;keep the tone authentic&lt;/a&gt;, helpful, and knowledgeable for your developer audience.&lt;/p&gt;

&lt;p&gt;Plus, not every piece of content should be intended for every developer. Rather than focusing on making generalized content to support wider audiences, hone in on the needs of specific developer types—like front-end, back-end, DevOps, or fullstack—with varied experience levels in different dev pillars. For example, create content dealing with a specific aspect (e.g., security or scale) or a specific open-source tool (e.g., Kubernetes). This ensures that you’re speaking the developer’s language with content intended to suit their very specific needs.&lt;/p&gt;

&lt;p&gt;One way to capture the right tone is to ask internal SMEs or external developers who use your product to write content detailing a specific use case. Then, have your marketing team &lt;a href="https://iamondemand.com/blog/the-case-for-shifting-editorial-left-breaking-down-silos-between-marketing-editorial/"&gt;edit the content for clarity&lt;/a&gt;, voice, and flow (including planting the relevant CTAs). This helps marketers successfully target an experienced audience, contribute to the ongoing conversation around your product, and keep developers moving through the sales funnel even if the marketers don’t have the relevant expertise themselves.&lt;/p&gt;

&lt;h2&gt;
  
  
  5 Things You Should Keep in Mind When Creating B2D Content
&lt;/h2&gt;

&lt;p&gt;Creating compelling tech content doesn’t have to be difficult. Remember these five simple rules to start writing content developers will love:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Trust is most important.&lt;/li&gt;
&lt;li&gt;Focus on practicality.&lt;/li&gt;
&lt;li&gt;Keep content tight and to the point.&lt;/li&gt;
&lt;li&gt;Leverage experts and customers as a resource.&lt;/li&gt;
&lt;li&gt;Use specific examples.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And don’t forget: you can always ask for help when you need it! At IOD, we specialize in helping you create exceptional tech content that appeals to developers and keeps them coming back for more. &lt;/p&gt;

&lt;p&gt;Contact us to tap into our extensive network of experienced practitioners and &lt;a href="https://iamondemand.com/content-types/"&gt;start creating better tech content today&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>techmarketing</category>
    </item>
    <item>
      <title>Cloud Computing Acquisitions &amp; Trends – Infographic</title>
      <dc:creator>We are IOD</dc:creator>
      <pubDate>Mon, 02 May 2022 14:13:37 +0000</pubDate>
      <link>https://dev.to/iod/cloud-computing-acquisitions-trends-infographic-1p2f</link>
      <guid>https://dev.to/iod/cloud-computing-acquisitions-trends-infographic-1p2f</guid>
      <description>&lt;p&gt;Keeping our finger on the pulse: IOD’s new infographic reveals the top cloud acquisitions of the last 6 months, including one for $6.5B, highlighting the importance of the identity and authentication space, and a $900M purchase that has put the spotlight on the demand for edge solutions. We also cover 4 key trends that will impact your business in 2022. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kbBSN1YJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mwh0tntlkt7zzqdqufyd.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kbBSN1YJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mwh0tntlkt7zzqdqufyd.jpg" alt="Image description" width="800" height="2000"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The world of cloud is constantly changing, making the need for expert-based tech content greater than ever. From videos and tutorials to blogs and white papers, across DevOps, fintech, cybersecurity, AI, and beyond, IOD combines fresh ideas with deep tech and marketing expertise to make sure your message stands out.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>trend</category>
    </item>
    <item>
      <title>Jenkins and Spinnaker: Turbocharge Your CI/CD With Cloud Native</title>
      <dc:creator>We are IOD</dc:creator>
      <pubDate>Wed, 20 Apr 2022 09:59:44 +0000</pubDate>
      <link>https://dev.to/iod/jenkins-and-spinnaker-turbocharge-your-cicd-with-cloud-native-d1d</link>
      <guid>https://dev.to/iod/jenkins-and-spinnaker-turbocharge-your-cicd-with-cloud-native-d1d</guid>
      <description>&lt;p&gt;Is your organization taking advantage of cloud-native computing? Modern cloud computing is built on a diverse ecosystem of open-source projects and infrastructure. &lt;/p&gt;

&lt;p&gt;Small startups and large enterprises alike depend on open-source projects to build critical container orchestration, CI/CD, and monitoring infrastructure. But how can an open-source project thrive and adapt to be so powerful across a variety of use cases and platforms?&lt;/p&gt;

&lt;p&gt;The Cloud Native Computing Foundation (CNCF), an alliance of users, vendors, and developers, helps to expand the cloud-native community and ecosystem of projects. As stated in their charter statement:&lt;/p&gt;

&lt;p&gt;The Cloud Native Computing Foundation seeks to drive adoption of this paradigm by fostering and sustaining an ecosystem of open source, vendor-neutral projects. We democratize state-of-the-art patterns to make these innovations accessible for everyone.&lt;/p&gt;

&lt;p&gt;The CNCF stewards a wide array of cloud-native tools and software. Utilizing these tools, engineering organizations can turbocharge their existing infrastructure and workflows, extending and adding powerful capabilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud-Native CI/CD
&lt;/h2&gt;

&lt;p&gt;Anyone who has worked with continuous integration or continuous delivery/deployment in the last several years is likely familiar with Jenkins, open-source automation server software aimed at providing end-to-end CI/CD capabilities. Engineering teams have a variety of options when it comes to deploying Jenkins, including internal infrastructure, cloud, and managed service platforms. If you take a look inside the deployment infrastructure at a majority of companies today with significant software assets, you will most likely find a Jenkins install.&lt;/p&gt;

&lt;p&gt;In recent years, the CNCF has stewarded several CI/CD projects with a cloud-native focus. One of those projects is Spinnaker, a multi-cloud continuous delivery tool that initially came from the Netflix engineering team. Spinnaker provides application management and deployment, with the added bonus of native integration with Jenkins, enabling teams to extend their existing capabilities with CNCF tooling.&lt;/p&gt;

&lt;p&gt;This article will examine the three primary ways that teams can integrate both Jenkins and Spinnaker, utilizing the flexibility of Spinnaker to add multi-cloud delivery capabilities to existing CI platforms.&lt;/p&gt;

&lt;h2&gt;
  
  
  Jenkins as a Pipeline Trigger
&lt;/h2&gt;

&lt;p&gt;Using Jenkins as a continuous integration system, with Spinnaker acting as the continuous delivery side, is probably the most familiar and commonly used implementation pattern. Jenkins is a powerful tool for CI, but Version 1 was designed and released before the ubiquitous need for cloud-first deployment scenarios. The cloud-native focus of Spinnaker means that cloud deployments are first-class concerns in the tool, providing a batteries-included implementation pattern for software delivery across a variety of platforms.&lt;/p&gt;

&lt;p&gt;The first step to integrate Jenkins and Spinnaker is to connect them. This assumes you have a Jenkins master (version 1.x or 2.x) installed, which is required for any of the scenarios presented in this article. Once that’s complete, you only have to add a Jenkins trigger to a Spinnaker pipeline.&lt;/p&gt;
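
&lt;p&gt;With Halyard, Spinnaker’s configuration tool, the connection takes only a few commands; a sketch assuming a reachable Jenkins master (the URL and credentials are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Enable Jenkins as a CI provider in Spinnaker
hal config ci jenkins enable
# Register the Jenkins master with its address and API credentials
echo $JENKINS_API_TOKEN | hal config ci jenkins master add my-jenkins-master \
  --address http://jenkins.example.com:8080 \
  --username deploy-bot \
  --password
# Apply the updated configuration
hal deploy apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;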

&lt;h2&gt;
  
  
  Use Case
&lt;/h2&gt;

&lt;p&gt;A great example use case for this type of implementation is a hypothetical engineering organization with an existing, on-premises Jenkins deployment. As they make plans to migrate some of their workload to the cloud, there are various options to consider. They can utilize Jenkins to handle delivery and deployment, requiring additional development cycles to configure and integrate Jenkins with a cloud provider. Conversely, they can continue to have Jenkins handle CI and utilize one of the managed services, like AWS CodeDeploy. &lt;/p&gt;

&lt;p&gt;The issue with these two options is that both of them will leave the platform tightly coupled with a single vendor platform, potentially causing “lock-in.” What happens if, in the future, the team needs to expand their service to Google Cloud as well? By going with Spinnaker as their CD platform instead, they’re empowered to scale out to multiple cloud platforms as future needs arise.&lt;/p&gt;

&lt;h2&gt;
  
  
  Jenkins as a Spinnaker Pipeline Stage
&lt;/h2&gt;

&lt;p&gt;What about engineering teams that are further along in their journey to being cloud native? They may already be employing a hybrid or 100% cloud production system. They may have a cloud-first deployment system already in place, such as Spinnaker, but may still need to rely on special integration testing or automation that remains in their legacy Jenkins deployment.&lt;/p&gt;

&lt;p&gt;Fortunately, Spinnaker provides this exact functionality, allowing Jenkins to be defined as a specific pipeline stage. Like the previous integration, the first step is to connect Jenkins and Spinnaker.&lt;/p&gt;

&lt;p&gt;For teams that have an extensive collection of tests and post-build automation, this can be a great way to bridge Jenkins and Spinnaker functionalities during a migration, without consuming precious engineering resources to port and refactor automated testing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Jenkins as a Script Stage
&lt;/h2&gt;

&lt;p&gt;In some cases, deployments require more flexibility in automation and scripting. Scripting languages like Bash and Python are often employed to provide additional capabilities in DevOps workflows, and some CI/CD platforms are fairly limited in what types of custom automation can be defined.&lt;/p&gt;

&lt;p&gt;In the case of Spinnaker, it utilizes Jenkins as a sandbox environment, allowing the execution of any arbitrary Python, Bash, or Groovy script that might be needed. As before, Jenkins needs to be connected as a CI provider inside Spinnaker. There are some additional steps required to configure Jenkins as a script provider for a pipeline stage, detailed here.&lt;/p&gt;

&lt;p&gt;Consider the deployment workflow for an app with a UI component. Testing software with UI features has consistently been a thorn in the side of software engineers, who often have to depend on manual, interactive testing to validate that the software functions correctly. In a CI/CD workflow where many deploys might happen per day, that simply isn’t scalable. However, utilizing a Jenkins script stage, engineering teams can create automated UI testing functionality. Plus, a Jenkins script stage with shell scripting allows you to pull a Selenium Docker container into the pipeline environment, providing self-contained, automated UI testing.&lt;/p&gt;
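
&lt;p&gt;A minimal sketch of what such a script stage might run (the image tag and the test runner script are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Start a self-contained Selenium browser environment
docker run -d --name selenium -p 4444:4444 selenium/standalone-chrome
# Point the UI test suite at the local Selenium endpoint (hypothetical runner)
SELENIUM_URL=http://localhost:4444/wd/hub ./run-ui-tests.sh
# Clean up when the stage finishes
docker rm -f selenium
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;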

&lt;h2&gt;
  
  
  Take Advantage of the Rich CNCF Ecosystem
&lt;/h2&gt;

&lt;p&gt;Beyond just continuous integration and deployment, a variety of other cloud-native tools call the CNCF community home. By employing these tools, engineering teams can provide their businesses with an end-to-end, cloud-native infrastructure.&lt;/p&gt;

&lt;p&gt;For monitoring and observability, Prometheus has quickly grown to become one of the best choices for modern cloud environments. With its powerful data querying and visualization capabilities, easy integration, and broad language support, it’s easy to see why. In the context of Jenkins and Spinnaker, Prometheus is a perfect fit to monitor both the infrastructure the application lives on, as well as the infrastructure that Spinnaker itself occupies.&lt;/p&gt;

&lt;p&gt;A production-level deployment infrastructure will be generating a lot of event-based data as well. Unfortunately, event producers and consumers don’t always provide any consistent specification when it comes to the format of the event data itself. CNCF has the solution: The CloudEvents specification aims to define a common, easy-to-understand specification for all major event formats.&lt;/p&gt;
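
&lt;p&gt;To show how little ceremony the spec requires, here is a sketch of a CloudEvent delivered over HTTP in structured mode (the endpoint and event fields are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# POST a CloudEvents 1.0 event; specversion, type, source, and id are required
curl -X POST http://events.example.com/ \
  -H "Content-Type: application/cloudevents+json" \
  -d '{
        "specversion": "1.0",
        "type": "com.example.deployment.finished",
        "source": "/spinnaker/pipelines/prod",
        "id": "a1b2c3d4",
        "time": "2022-04-20T09:00:00Z",
        "data": {"status": "succeeded"}
      }'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;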

&lt;p&gt;Deploying container-based workloads to multiple cloud platforms additionally brings unique security challenges. Teams that make multiple deployments per day need to be able to integrate as much security automation as they can into their deployment pipelines, catching and preventing issues before they make it into production. &lt;/p&gt;

&lt;p&gt;Open Policy Agent provides a “unified toolset and framework for policy across the cloud native stack.” With OPA deployed, an engineering team can configure a specific policy against, say, Docker files. Developers that check in new commits to container-based applications will have their builds validated by the OPA API. Any build or configuration that fails will stop the CI workflow, alerting relevant engineers to a potential issue, and avoiding a possible deployment rollback.&lt;/p&gt;
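
&lt;p&gt;As a rough sketch of the idea, a Rego policy can be evaluated against a parsed Dockerfile with conftest, an OPA-based testing tool (the policy below is hypothetical, and conftest’s Dockerfile input format is assumed):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# policy/docker.rego (hypothetical) might contain:
#   package main
#   deny[msg] {
#     input[i].Cmd == "from"
#     endswith(input[i].Value[0], ":latest")
#     msg := "base image must not use the :latest tag"
#   }
# Evaluate the policy against a Dockerfile; a failure stops the CI job
conftest test --policy policy/ Dockerfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;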

&lt;p&gt;If there’s any downside to the CNCF ecosystem, it’s that there isn’t nearly enough space in a blog post to cover all the projects and tools that exist across the cloud-native landscape. To see all the projects in one place, visit the CNCF landscape page. As of this writing, there are 1,477 projects represented!&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud-Native Adds New Capabilities
&lt;/h2&gt;

&lt;p&gt;The strong ecosystem of cloud-native tools can enable organizations to extend their existing infrastructure, adding new, cloud-focused capabilities, such as multi-cloud deployments. &lt;/p&gt;

&lt;p&gt;One caveat: Teams should be empowered to suggest and ultimately engage in a ground-up rebuild if warranted. Not all existing infrastructure makes sense for the cloud, and sometimes it’s more cost-effective and will result in better performance to implement modern design patterns versus trying to graft a modern band-aid onto a legacy platform. Fortunately, the cloud-native ecosystem has a full spectrum of tools to enable this. &lt;/p&gt;

&lt;p&gt;By utilizing solutions such as Spinnaker, an organization gets a cloud-first deployment tool backed by a strong open-source community for support, along with broad compatibility and integration capabilities with a variety of platforms and vendors; plus, it’s platform-agnostic. Using cloud-native tools, teams can extend and improve their existing architecture while, at the same time, laying the foundation for their eventual path into the modern cloud.&lt;/p&gt;

&lt;p&gt;This article was originally posted on the &lt;a href="https://iamondemand.com/blog/jenkins-and-spinnaker-turbocharge-your-ci-cd-with-cloud-native/"&gt;IOD Blog&lt;/a&gt;.&lt;br&gt;
If you are a cloud expert and want to become part of a powerful community of tech professionals, &lt;a href="https://iamondemand.com/iod-talent-network/"&gt;join our talent network&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Automating ML Workflow with IBM’s Fabric for Deep Learning (FfDL)</title>
      <dc:creator>We are IOD</dc:creator>
      <pubDate>Wed, 13 Apr 2022 15:55:05 +0000</pubDate>
      <link>https://dev.to/iod/automating-ml-workflow-with-ibms-fabric-for-deep-learning-ffdl-56a</link>
      <guid>https://dev.to/iod/automating-ml-workflow-with-ibms-fabric-for-deep-learning-ffdl-56a</guid>
      <description>&lt;p&gt;Cloud environments provide a lot of benefits for advanced ML development and training including on-demand access to CPUs/GPUs, storage, memory, networking, and security. They also enable distributed training and scalable serving of ML models. However, training ML models in a cloud environment requires a highly customized system that links these different components and services together and allows for managing and consistently orchestrating ML pipelines. Managing a full ML workflow, from data preparation to deployment, is often really hard in a distributed and volatile environment like a cloud compute cluster.&lt;/p&gt;

&lt;p&gt;Another important challenge is the efficient and scalable deployment of ML models. In a distributed compute environment, this requires configuring model servers and creating REST APIs, load balancing remote cluster requests, enabling authentication and security, etc. Also, ML model serving needs to be scalable, highly available, and fault-tolerant. &lt;/p&gt;

&lt;p&gt;Kubernetes is one of the best solutions for managing distributed cloud clusters that addresses the above challenges. IBM’s Fabric for Deep Learning (FfDL) is a DL (Deep Learning) framework that marries advanced ML development and training with Kubernetes. It makes it easy to train and serve ML models based on different ML frameworks (e.g., TensorFlow, Caffe, PyTorch) on Kubernetes.&lt;/p&gt;

&lt;p&gt;In this article, I’ll discuss the architecture and key features of FfDL and show some practical examples of using the framework for training and deploying ML models on Kubernetes. I’ll also address the key limitations of FfDL compared to other ML frameworks for Kubernetes and point out some ways in which it could possibly improve. &lt;/p&gt;

&lt;h2&gt;
  
  
  Description of FfDL Features
&lt;/h2&gt;

&lt;p&gt;FfDL is an open-source DL platform for Kubernetes originally developed by the IBM Research and IBM Watson development teams. The main purpose behind the project was to bridge the gap between ML research and production-grade deployment of ML models in the distributed infrastructure of the cloud. FfDL is the core of many IBM ML products, including Watson Studio’s Deep Learning as a Service (DLaaS), which provides tools for the development of production-grade ML workflows in public cloud environments. &lt;/p&gt;

&lt;p&gt;It’s no surprise that the team behind FfDL chose Kubernetes to automate ML workflows. Kubernetes offers many benefits for the production deployment of ML models including automated lifecycle management (node scheduling, restarts on failure, health checks), a multi-server networking model, DNS and service discovery, security, advanced application update/upgrade patterns, autoscaling, and many more. &lt;/p&gt;

&lt;p&gt;More importantly, by design, Kubernetes is a highly extensible and pluggable platform where users can define their own custom controllers and custom resources integrated with K8s components and orchestration logic. This extensibility is leveraged by FfDL to allow ML workflows to run efficiently on Kubernetes, making use of available K8s orchestration services, APIs, and abstractions while adding the ML-specific logic needed by ML developers.&lt;/p&gt;

&lt;p&gt;This deep integration between FfDL and Kubernetes makes it possible to solve many of the challenges that ML developers face on a daily basis. For the issues listed in the opening section, FfDL offers the following features: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cloud-agnostic deployment of ML models, enabling them to run in any environment where containers and Kubernetes run &lt;/li&gt;
&lt;li&gt;Support for training models developed for several popular DL frameworks, including TensorFlow, PyTorch, Caffe, and Horovod &lt;/li&gt;
&lt;li&gt;Built-in support for training ML models with GPUs&lt;/li&gt;
&lt;li&gt;Fine-grained configuration of ML training jobs using Kubernetes native abstractions and FfDL custom resources&lt;/li&gt;
&lt;li&gt;ML-model lifecycle management using K8s native controllers, schedulers, and FfDL control loops&lt;/li&gt;
&lt;li&gt;Scalability, fault tolerance, and high availability for ML deployments &lt;/li&gt;
&lt;li&gt;Built-in log collection, monitoring, and model evaluation layers for ML training jobs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Plus, FfDL is an efficient way to serve ML models, since it uses the Seldon Core serving framework to convert trained models (TensorFlow, PyTorch, H2O, etc.) into gRPC/REST microservices served on Kubernetes.&lt;/p&gt;

&lt;h2&gt;
  
  
  FfDL Architecture
&lt;/h2&gt;

&lt;p&gt;FfDL is deployed as a set of interconnected microservices (pods), each responsible for a specific part of the ML workflow. FfDL relies on Kubernetes to restart these components when they fail and to control their lifecycle. After installing FfDL on your Kubernetes cluster, you should see pods similar to these:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl config set-context $(kubectl config current-context) –namespace=$NAMESPACE
kubectl get pods
# NAME                                 READY     STATUS    RESTARTS   AGE
# alertmanager-7cf6b988b9-h9q6q        1/1       Running   0          5h
# etcd0                                1/1       Running   0          5h
# ffdl-lcm-65bc97bcfd-qqkfc            1/1       Running   0          5h
# ffdl-restapi-8777444f6-7jfcf         1/1       Running   0          5h
# ffdl-trainer-768d7d6b9-4k8ql         1/1       Running   0          5h
# ffdl-trainingdata-866c8f48f5-ng27z   1/1       Running   0          5h
# ffdl-ui-5bf86cc7f5-zsqv5             1/1       Running   0          5h
# mongo-0                              1/1       Running   0          5h
# prometheus-5f85fd7695-6dpt8          2/2       Running   0          5h
# pushgateway-7dd8f7c86d-gzr2g         2/2       Running   0          5h
# storage-0                            1/1       Running   0          5h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In general, FfDL architecture is based on the following main components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;REST API&lt;/li&gt;
&lt;li&gt;Trainer&lt;/li&gt;
&lt;li&gt;Lifecycle Manager&lt;/li&gt;
&lt;li&gt;Training Job&lt;/li&gt;
&lt;li&gt;Training Data Service&lt;/li&gt;
&lt;li&gt;Web UI&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s briefly discuss what each of these does. &lt;/p&gt;

&lt;h2&gt;
  
  
  REST API
&lt;/h2&gt;

&lt;p&gt;The REST API microservice processes user HTTP requests and passes them to the gRPC Trainer service. It’s an entry point that allows FfDL users to interact with training jobs, configure training parameters, deploy models, and use other features provided by FfDL and Kubernetes. The REST API supports authentication and leverages K8s service registries to load balance client requests, which ensures scalability when serving an ML model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YDe848Tw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dbifponcyb85nhkzr60n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YDe848Tw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dbifponcyb85nhkzr60n.png" alt="Image description" width="880" height="443"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 1: FfDL architecture (Source: GitHub)&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Trainer
&lt;/h2&gt;

&lt;p&gt;The Trainer microservice processes training job requests received via the REST API and saves the training job configuration to the MongoDB database (see Figure 1 above). This microservice can initiate job deployment, serving, halting, or termination by passing respective commands to the Lifecycle Manager. &lt;/p&gt;
&lt;h2&gt;
  
  
  Lifecycle Manager
&lt;/h2&gt;

&lt;p&gt;The FfDL Lifecycle Manager is responsible for launching and managing (pausing, starting, terminating) the training jobs initiated by the Trainer by interacting with the K8s scheduler and cluster manager. The Lifecycle Manager operates in the following steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Retrieve the training job configuration defined in the YML manifest.&lt;/li&gt;
&lt;li&gt;Determine the learner pods, parameter servers, sidecar containers, and other components of the job.&lt;/li&gt;
&lt;li&gt;Call the Kubernetes REST API to deploy the job.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Training Job
&lt;/h2&gt;

&lt;p&gt;A training job is the FfDL abstraction that encompasses a group of learner pods and a number of sidecar containers for control logic and logging. FfDL allows for the launching of multiple learner pods for distributed training. A training job can also include parameter servers for asynchronous training with data parallelism. FfDL provides these distributed training features via Open MPI (Message Passing Interface), which enables network-agnostic interaction and communication between cluster nodes. The MPI protocol is widely used for enabling all-reduce style distributed ML training (see the MPI Operator by Kubeflow). &lt;/p&gt;
&lt;h2&gt;
  
  
  Training Data Service
&lt;/h2&gt;

&lt;p&gt;Each training job has a sidecar logging container (log collector) that collects training data, such as evaluation metrics, visuals, and other artifacts, and sends it to the FfDL Training Data Service (TDS). The FfDL log collectors understand the unique log syntax of each ML framework supported by FfDL. In turn, TDS dynamically emits this information to the users as the job is running. It also permanently stores log data in Elasticsearch for debugging and auditing purposes. &lt;/p&gt;
&lt;h2&gt;
  
  
  Web UI
&lt;/h2&gt;

&lt;p&gt;FfDL ships with a minimalistic Web UI that allows you to upload data and model code for training. Overall, the FfDL UI has limited features compared to alternatives such as FloydHub or Kubeflow Central Dashboard. &lt;/p&gt;
&lt;h2&gt;
  
  
  Training ML Models with FfDL
&lt;/h2&gt;

&lt;p&gt;Now that you understand the FfDL architecture, let’s discuss how you can train and deploy ML jobs using this framework. The process is quite straightforward: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a model code written in any supported framework (e.g., TensorFlow, PyTorch, Caffe).&lt;/li&gt;
&lt;li&gt;Containerize the model.&lt;/li&gt;
&lt;li&gt;Expose training data to the job using some object store (e.g., AWS S3).&lt;/li&gt;
&lt;li&gt;Create a manifest with a training job configuration using a FfDL K8s custom resource.&lt;/li&gt;
&lt;li&gt;Train your ML model via the FfDL CLI or FfDL UI.&lt;/li&gt;
&lt;li&gt;Serve the ML model using Seldon Core.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Assuming that you already have working ML model code and training datasets, you can jump right to the FfDL model manifest parameters. The FfDL custom resource lets users define resource requirements for a given job, including requests and limits for GPUs, CPUs, and memory; the number of learner pods to execute the training; paths to training data; etc. &lt;/p&gt;

&lt;p&gt;Below is an example of a FfDL training job manifest from the official documentation. It defines a TensorFlow job for training a simple convolutional neural network:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: tf_convolutional_network_tutorial
description: Convolutional network model using tensorflow
version: "1.0"
gpus: 0
cpus: 0.5
memory: 1Gb
learners: 1

# Object stores that allow the system to retrieve training data.
data_stores:
  - id: sl-internal-os
    type: mount_cos
    training_data:
      container: tf_training_data
    training_results:
      container: tf_trained_model
    connection:
      auth_url: http://s3.default.svc.cluster.local
      user_name: test
      password: test

framework:
  name: tensorflow
  version: "1.5.0-py3"
  command: &amp;gt;
    python3 convolutional_network.py --trainImagesFile ${DATA_DIR}/train-images-idx3-ubyte.gz
      --trainLabelsFile ${DATA_DIR}/train-labels-idx1-ubyte.gz --testImagesFile ${DATA_DIR}/t10k-images-idx3-ubyte.gz
      --testLabelsFile ${DATA_DIR}/t10k-labels-idx1-ubyte.gz --learningRate 0.001
      --trainingIters 2000

evaluation_metrics:
  type: tensorboard
  in: "$JOB_STATE_DIR/logs/tb"
  # (Eventual) Available event types: 'images', 'distributions', 'histograms',
  # 'audio', 'scalars', 'tensors', 'graph', 'meta_graph', 'run_metadata'
  #  event_types: [scalars]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;According to the manifest, the TF training job will run using half of the node’s CPU capacity and will be processed by one learner. FfDL supports distributed training, meaning there can be multiple learners for the same training job.&lt;/p&gt;

&lt;p&gt;In the data_stores part of the spec, you can specify how FfDL should access the training data and where it should store the training results. Training data can be provided to FfDL using any object storage, such as AWS S3 or Google Cloud Storage. After training, the trained model with its corresponding weights will be stored under the folder specified in the training_results setting. &lt;/p&gt;

&lt;p&gt;The framework section of the manifest defines framework-specific parameters used when starting the learner containers. There, you can specify the framework version, initialization values for hyperparameters (e.g., learning rate), the number of iterations, the evaluation metrics (e.g., accuracy), and the location of the test and labeled data. You can define pretty much anything your training script exposes. &lt;/p&gt;

&lt;p&gt;Finally, in the evaluation_metrics section, you can define the location of generated logs and artifacts and the way to access them. FfDL supports TensorBoard, so you can analyze your model’s logs and metrics there. &lt;/p&gt;

&lt;p&gt;After the manifest is written, you can train the model using either the FfDL CLI or FfDL UI. For detailed instructions on how to do this, please see the official docs. &lt;/p&gt;
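
&lt;p&gt;For illustration, here’s roughly what a CLI-driven training run looks like. This is a sketch: the environment variable names and subcommands follow the FfDL user guide as I remember it ($CLI_CMD stands for the platform-specific CLI binary), and the endpoint values are placeholders, so double-check against the repo before copying:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Point the CLI at your FfDL REST API endpoint (illustrative values).
export DLAAS_URL=http://203.0.113.10:31005
export DLAAS_USERNAME=test-user
export DLAAS_PASSWORD=test

# Submit a training job: the CLI takes the manifest plus the directory
# containing your model code.
$CLI_CMD train manifest.yml tf-model/

# Inspect jobs and stream logs for a running job (the job ID comes from
# the output of the train command).
$CLI_CMD list
$CLI_CMD logs training-abc123
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;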

&lt;h2&gt;
  
  
  Deploying FfDL Models
&lt;/h2&gt;

&lt;p&gt;As I’ve already mentioned, FfDL uses Seldon Core for deploying ML models as REST/gRPC microservices. Seldon is a very powerful serving platform for Kubernetes and using it with FfDL gives you a lot of useful features out of the box:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multi-framework support (TensorFlow, Keras, PyTorch)&lt;/li&gt;
&lt;li&gt;Containerization of ML models using pre-packaged inference servers&lt;/li&gt;
&lt;li&gt;API endpoints that can be tested with Swagger UI, cURL, or gRPCurl&lt;/li&gt;
&lt;li&gt;Metadata to ensure that each model can be traced back to its training platform, data, and metrics&lt;/li&gt;
&lt;li&gt;Metrics and integration with Prometheus and Grafana&lt;/li&gt;
&lt;li&gt;Auditability and logging integration with Elasticsearch&lt;/li&gt;
&lt;li&gt;Microservice distributed tracing through Jaeger.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Any FfDL model whose runtime inference can be packaged as a Docker container can be managed by Seldon.&lt;/p&gt;

&lt;p&gt;The process of deploying your ML model with FfDL is relatively straightforward. First, you need to deploy Seldon Core to your Kubernetes cluster, since it’s not part of the default FfDL installation. Next, you need to build a Seldon model image from your trained model. To do this, you can use S2I (OpenShift’s source-to-image tool) to build the image and then push it to Docker Hub.&lt;/p&gt;
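
&lt;p&gt;As a sketch, building and publishing a Seldon model image with S2I looks like the following. The builder image tag and the Docker Hub namespace are placeholders; pick the Seldon builder image that matches your Seldon Core version:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Build a Seldon-compatible model image from your model wrapper source
# using OpenShift's source-to-image (s2i) tool and a Seldon builder image.
s2i build ./model-src seldonio/seldon-core-s2i-python3:0.18 my-dockerhub-user/ffdl-fashion-mnist:0.1

# Push the image so the SeldonDeployment can pull it.
docker push my-dockerhub-user/ffdl-fashion-mnist:0.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;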

&lt;p&gt;After this, you need to define the Seldon REST API deployment using a deployment template similar to the one below. Here, I’m using the example from the FfDL Fashion MNIST repo on GitHub:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "apiVersion": "machinelearning.seldon.io/v1alpha2",
  "kind": "SeldonDeployment",
  "metadata": {
    "labels": {
      "app": "seldon"
    },
    "name": "ffdl-fashion-mnist"
  },
  "spec": {
    "annotations": {
      "project_name": "FfDL fashion-mnist",
      "deployment_version": "v1"
    },
    "name": "fashion-mnist",
    "oauth_key": "oauth-key",
    "oauth_secret": "oauth-secret",
    "predictors": [
      {
        "componentSpecs": [{
          "spec": {
            "containers": [
              {
                "image": "",
                "imagePullPolicy": "IfNotPresent",
                "name": "classifier",
                "resources": {
                  "requests": {
                    "memory": "1Mi"
                  }
                },
                "env": [
                  {
                    "name": "TRAINING_ID",
                    "value": ""
                  },
                  {
                    "name": "BUCKET_NAME",
                    "value": ""
                  },
                  {
                    "name": "BUCKET_ENDPOINT_URL",
                    "valueFrom": {
                      "secretKeyRef": {
                        "localObjectReference": {
                          "name": "bucket-credentials"
                        },
                        "key": "endpoint"
                      }
                    }
                  },
                  {
                    "name": "BUCKET_KEY",
                    "valueFrom": {
                      "secretKeyRef": {
                        "localObjectReference": {
                          "name": "bucket-credentials"
                        },
                        "key": "key"
                      }
                    }
                  },
                  {
                    "name": "BUCKET_SECRET",
                    "valueFrom": {
                      "secretKeyRef": {
                        "localObjectReference": {
                          "name": "bucket-credentials"
                        },
                        "key": "secret"
                      }
                    }
                  }
                ]
              }
            ],
            "terminationGracePeriodSeconds": 20
          }
        }],
        "graph": {
          "children": [],
          "name": "classifier",
          "endpoint": {
            "type": "REST"
          },
          "type": "MODEL"
        },
        "name": "single-model",
        "replicas": 1,
        "annotations": {
          "predictor_version": "v1"
        }
      }
    ]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The most important parts of this manifest are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;BUCKET_NAME:&lt;/strong&gt; The name of the bucket containing your trained model&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;image:&lt;/strong&gt; The Seldon model image you pushed to Docker Hub&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are also Seldon-specific configurations of the inference graph and predictors, which you can check out in the Seldon Core docs. &lt;/p&gt;
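
&lt;p&gt;Once the deployment is live, you can smoke-test the endpoint. The sketch below assumes the legacy Seldon 0.x API server that matches the v1alpha2 manifest above (it uses the oauth_key/oauth_secret pair from the deployment); newer Seldon Core versions expose different paths, and the host and port here are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Exchange the OAuth key/secret for an access token (Seldon 0.x API server).
TOKEN=$(curl -s -u oauth-key:oauth-secret \
  http://203.0.113.10:30032/oauth/token -d grant_type=client_credentials | jq -r .access_token)

# Send a prediction request; Seldon expects a {"data": {"ndarray": ...}} payload.
curl -s -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{"data":{"ndarray":[[0.1,0.2,0.3]]}}' \
  http://203.0.113.10:30032/api/v0.1/predictions
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;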

&lt;h2&gt;
  
  
  Conclusion: FfDL Limitations
&lt;/h2&gt;

&lt;p&gt;As I showed, the FfDL platform provides basic functionality for running ML models on Kubernetes, including training and serving models. However, compared to other available alternatives for Kubernetes such as Kubeflow, the FfDL functionality is somewhat limited. In particular, it lacks flexibility in configuring training jobs for specific ML frameworks. Kubeflow’s TensorFlow Operator, for example, allows you to define distributed training jobs based on all-reduce and asynchronous patterns using TF distribution strategies. The Kubeflow CRD for TensorFlow exposes many more parameters than FfDL, and the FfDL specification for its training custom resource is not as well-documented. &lt;/p&gt;

&lt;p&gt;Similarly, FfDL does not support many important ML workflow features for AutoML, including hyperparameter optimization, and has limited functionality for creating reproducible ML experiments and pipelines, like Kubeflow Pipelines does.&lt;/p&gt;

&lt;p&gt;Also, the process of deploying and managing training jobs on Kubernetes is somewhat dependent on FfDL custom scripts and tools and does not provide a lot of Kubernetes-native resources, which limits the pluggability of the framework. The FfDL documentation for many important aspects of these tools is also limited. For example, there is no detailed description of how to deploy FfDL on various cloud providers. &lt;/p&gt;

&lt;p&gt;Finally, the FfDL UI does not provide as many useful features as FloydHub and Kubeflow Central Dashboard. It just lets users upload their model code to Kubernetes. &lt;/p&gt;

&lt;p&gt;In sum, to be a tool for the comprehensive management of modern ML workflows, FfDL needs more features and better documentation. At this moment, it can be used as a simple way to train and deploy ML models on Kubernetes but not as a comprehensive platform for managing production-grade ML pipelines. &lt;/p&gt;

</description>
      <category>aws</category>
      <category>elb</category>
      <category>alb</category>
      <category>nlb</category>
    </item>
    <item>
      <title>How to Get the Most From AWS Cost Management Tools</title>
      <dc:creator>We are IOD</dc:creator>
      <pubDate>Thu, 07 Apr 2022 08:06:24 +0000</pubDate>
      <link>https://dev.to/iod/how-to-get-the-most-from-aws-cost-management-tools-2hkg</link>
      <guid>https://dev.to/iod/how-to-get-the-most-from-aws-cost-management-tools-2hkg</guid>
      <description>&lt;p&gt;With the adoption of public cloud services on the rise and technical resources such as servers far from sight, companies are forced to address the elephant in the room: How can they manage the cloud costs of day-to-day operations? Or, more specifically, how can they keep costs from spiraling out of control?&lt;/p&gt;

&lt;p&gt;From a business point of view, several benefits have been driving organizations to adopt the public cloud, such as enhanced capacity planning, massive economies of scale from companies like Amazon Web Services (AWS), the ability to trade upfront capital investments (CapEx) for monthly operating expenses (OpEx), and, above all, the ability to truly focus on their business rather than running and maintaining data centers.&lt;/p&gt;

&lt;p&gt;As a market leader in the public cloud space, AWS has paved the way for today’s digital transformation and offers multiple mechanisms for businesses to innovate while keeping costs under control. Yet, those tools and processes are still quite unclear, or even unknown, to many business leaders. &lt;/p&gt;

&lt;p&gt;To better understand cloud costs, let’s start by examining how AWS pricing actually works.&lt;/p&gt;

&lt;h2&gt;
  
  
  How AWS Pricing Works
&lt;/h2&gt;

&lt;p&gt;From the very beginning, AWS has been quite transparent about how their pricing works and how customers can take advantage of it to gain better cost efficiencies. Architects can design systems and optimize costs by picking cloud services that match their usage needs while still having the option to expand later.&lt;/p&gt;

&lt;p&gt;With AWS’ on-demand and pay-as-you-go pricing model, customers can get exactly what they need on a per-hour basis (or even per-second in some cases) while still having at their disposal a reservation-based payment model for long-term and predictable workloads.&lt;/p&gt;

&lt;p&gt;The AWS pricing model, as described in their own whitepaper, follows four key principles that help customers understand best practices regarding cloud costs and avoid pitfalls. We’ll take a look at each of these principles below. &lt;/p&gt;

&lt;h2&gt;
  
  
  Understand the Fundamentals of Pricing
&lt;/h2&gt;

&lt;p&gt;Every new cloud customer should first learn that there are three aspects that drive costs when using AWS: compute, storage, and outbound data transfer. The weight of each of these will vary according to your product and pricing model.&lt;/p&gt;

&lt;p&gt;Compute usage is typically charged per hour, while storage is usually charged per gigabyte of data stored. As for data transfer, with a few exceptions, customers are not charged for inbound data transfers or transfers between services within the same region. This means that you usually don’t pay for the data going into your AWS account and really only have to worry about data going out of it, e.g., internet traffic.&lt;/p&gt;
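
&lt;p&gt;To make this concrete, here’s a back-of-the-envelope monthly estimate for a small workload. The rates are illustrative placeholders, not current AWS prices; always check the pricing page for your region:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Compute:    1 instance x $0.10/hour x 730 hours   = $73.00
Storage:    100 GB     x $0.023/GB-month          = $ 2.30
Data out:   50 GB      x $0.09/GB                 = $ 4.50
Data in:    500 GB     x $0.00 (inbound is free)  = $ 0.00
                                    Monthly total = $79.80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;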

&lt;h2&gt;
  
  
  Start Early with Cost Optimization
&lt;/h2&gt;

&lt;p&gt;Don’t wait until your cloud workloads are in production to optimize costs. Customers that come from an on-premises environment often fall into this trap. Cloud adoption is not a mere technical exercise. It requires a cultural change that starts from the very beginning by looking at how cloud costs are planned and allocated. &lt;/p&gt;

&lt;p&gt;Decision makers need full visibility of running costs, and mechanisms to control these should be in place early on. This drives organizations to optimize their costs frequently and with less effort. Also, having such a cost-efficient strategy from the start will give your team peace of mind as your cloud environment grows and becomes more complex.&lt;/p&gt;

&lt;h2&gt;
  
  
  Maximize the Power of Flexibility
&lt;/h2&gt;

&lt;p&gt;You can do this by leveraging cloud-native capabilities, such as launching resources on-demand and turning them off when they’re not needed, instead of keeping services running 24/7. For predictable workloads that need to be constantly running, customers can still leverage a reservation model with a long-term commitment for extra savings. &lt;/p&gt;

&lt;p&gt;This cloud elasticity can save a tremendous amount of money while still giving you the capacity for near-unlimited growth. Also, by using and paying only for the resources you need, you can focus more resources on feature development and innovation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Choose the Right Pricing Model for the Job
&lt;/h2&gt;

&lt;p&gt;In AWS, the same product can have multiple pricing models, so it’s important to research the characteristics of each and choose the best fit for your workload. Pricing models vary from on-demand (pay-as-you-go without long-term commitment or upfront costs) and dedicated instances (for instances on dedicated hardware) to spot (a mechanism to bid on the price and get discounted hourly rates) and reservations (committing and paying for long-term capacity in exchange for a sizable discount).&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Costs Under Control: Tips &amp;amp; Tricks
&lt;/h2&gt;

&lt;p&gt;Once you understand AWS’ pricing principles and use them as a guideline, you can then learn how to make the best use of AWS’ built-in tools. There are a few interesting tricks here that business leaders can implement to help get their cloud costs under control. &lt;/p&gt;

&lt;h2&gt;
  
  
  Consolidated Billing and Reserved Resources
&lt;/h2&gt;

&lt;p&gt;The AWS pricing principles suggest you reserve capacity for predictable workloads and gain substantial discounts. But how does this work in practice? The mechanics are fairly simple, as you can commit to using a certain type of resource (e.g., a certain number of EC2 M5 instances in eu-west-1 region) for a certain period of time (minimum of one year) and receive a discount of up to 75%. The exact amount of the discount depends on various factors, such as the resource type, region, amount of upfront payment, and number of years. &lt;/p&gt;

&lt;p&gt;This does not mean that a specific resource has to always be running. Since the reservation is for a certain resource type, not a specific deployed resource, you are free to stop, terminate, or re-deploy that resource as much as you want as long as you keep using the same type. &lt;/p&gt;
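
&lt;p&gt;As a rough, illustrative example of how the reservation math plays out (placeholder rates, not actual AWS prices):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;On-demand:       $0.10/hour x 8,760 hours/year        = $876/year
1-year reserved  (effective rate $0.06/hour)          = $526/year  (~40% savings)
3-year reserved, all upfront (effective $0.03/hour)   = $263/year  (~70% savings)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;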

&lt;p&gt;When customers have multiple AWS accounts, one interesting trick is to enroll every account under the same “Organization” and enable consolidated billing. This makes the monthly operational management easier, plus it enables you to use the reserved resource type you purchased across any of your AWS accounts, meaning it becomes significantly more flexible.&lt;/p&gt;

&lt;p&gt;In addition, with the recent introduction of the Savings Plan feature across multiple AWS products, customers can now get insights on potential savings by switching to reserved resources based on their product usage. &lt;/p&gt;

&lt;h2&gt;
  
  
  Billing Alarms &amp;amp; Cost Explorer
&lt;/h2&gt;

&lt;p&gt;When it comes to cloud costs, the worst situation is when you receive an unexpected invoice at the end of the month for used resources that did not bring any business value. &lt;/p&gt;

&lt;p&gt;From an operational point of view, it’s important to not get caught by surprise. Therefore, customers must have ways to receive notifications and react swiftly when something unexpected happens.&lt;/p&gt;

&lt;p&gt;In AWS, customers can leverage a feature named Billing Alarms, which allows you to set up an alarm to notify you of custom-defined conditions. A common scenario is to configure the alarm to send an email notification in case the monthly costs are predicted to go above a certain threshold based on the current usage pattern. This enables you to quickly react and troubleshoot the cause of the sudden increase without waiting until the end of the month. &lt;/p&gt;
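
&lt;p&gt;For example, a billing alarm can be created straight from the CLI. The sketch below assumes you’ve enabled billing metrics for the account and already have an SNS topic for notifications; billing metrics only live in us-east-1, and the threshold and topic ARN are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws cloudwatch put-metric-alarm \
  --region us-east-1 \
  --alarm-name monthly-bill-over-200-usd \
  --namespace AWS/Billing \
  --metric-name EstimatedCharges \
  --dimensions Name=Currency,Value=USD \
  --statistic Maximum \
  --period 21600 \
  --evaluation-periods 1 \
  --threshold 200 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alerts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;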

&lt;p&gt;For troubleshooting both current and past expenses, AWS customers can use Cost Explorer, a built-in UI tool that provides a visualization and filtering of costs based on different factors, such as service, tagging, and time period. The most popular filtering method is tagging. This is made possible by having your development team tag AWS resources with custom key/value pairs such as use case, owner, department, or cost center. &lt;/p&gt;
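
&lt;p&gt;Tag-based filtering only works if resources actually carry the tags, so teams should apply them at creation time or retroactively. For example (instance ID and tag values are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Apply cost-allocation tags to an existing EC2 instance.
aws ec2 create-tags \
  --resources i-0123456789abcdef0 \
  --tags Key=CostCenter,Value=marketing Key=Owner,Value=data-team

# Note: tags must also be activated as cost allocation tags in the
# Billing console before they show up in Cost Explorer.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;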

&lt;p&gt;For increased awareness, customers can also display billing information using CloudWatch metrics and dashboards. This enables a customized visualization of cost usage and correlates with the system status (e.g., number of requests served).&lt;/p&gt;

&lt;p&gt;These tools make it incredibly easy for decision makers to track and understand how their cloud investment is being spent.&lt;/p&gt;

&lt;h2&gt;
  
  
  Engineering Teams in the Decision-Making Process
&lt;/h2&gt;

&lt;p&gt;It is often said that when using cloud computing, your system scales with a credit card. While not wrong, it is crucial to know when and why that scaling occurs. &lt;/p&gt;

&lt;p&gt;If customers are unaware of different product pricing and how volume affects them, costs can easily skyrocket. This can be due to the system responding to an increase in demand or a simple development mistake. &lt;/p&gt;

&lt;p&gt;Engineering teams are right at the center when it comes to optimizing costs and utilizing the right type of technical resources. However, one common pitfall is choosing resources based purely on their technical characteristics. The total cost of ownership (TCO) needs to be taken into account for each component while designing the system. The TCO includes the technical specifications, pricing model, and operational costs. &lt;/p&gt;

&lt;p&gt;AWS makes it easier for engineering teams to estimate the cost of their resource choices with its Pricing Calculator tool. This lets teams weigh the pros and cons of their choices and choose the AWS services that suit them best. &lt;/p&gt;

&lt;p&gt;One important consideration to keep in mind: while managed serverless services might seem more expensive than a DIY approach with EC2 virtual instances, the human cost of operating a DIY setup often far exceeds any potential savings.&lt;/p&gt;

&lt;p&gt;Software engineering teams working in DevOps should continuously be on the lookout for ways to improve their operations. When talking about specific workloads, this eagerness to improve and adopt best practices should extend to all stakeholders. Bringing everyone to the table and performing frequent assessments, such as AWS Well-Architected Reviews, can pave the way for greater cost-efficiency as well as an increase in innovation. &lt;/p&gt;

&lt;p&gt;Therefore, engineering teams should be an active part of the decision-making process with business leaders. Only by embracing business objectives as a common goal, and maximizing the potential for digital transformation that cloud technologies provide, can businesses truly thrive.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;As businesses move forward in their digital transformation and execute their technology strategy, using a public cloud provider such as AWS gives a tremendous amount of speed and flexibility to accomplish their business goals. &lt;/p&gt;

&lt;p&gt;For anyone using cloud services, it’s critical to understand and control how money is being spent—making sure that only needed resources are in use and that they are getting the most from each dollar spent.&lt;/p&gt;

&lt;p&gt;With near-unlimited resources just an API-request away, it is fairly easy to go overboard without the proper guidance and boundaries in place. Therefore, make sure to have the proper people and structure in place (e.g., architecture and cloud steering group) that can manage and optimize your cloud investment and usage.&lt;/p&gt;

&lt;p&gt;This article was originally posted on &lt;a href="https://iamondemand.com/blog/how-to-get-the-most-out-of-the-aws-cost-management-tools/"&gt;IOD Blog&lt;/a&gt;.&lt;br&gt;
If you want to write an article like this one and become part of a global talent network, &lt;a href="https://iamondemand.com/iod-talent-network/"&gt;join us&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>management</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Security Risks and Challenges in the Serverless World</title>
      <dc:creator>We are IOD</dc:creator>
      <pubDate>Sun, 27 Mar 2022 11:46:31 +0000</pubDate>
      <link>https://dev.to/iod/security-risks-and-challenges-in-the-serverless-world-2bb8</link>
      <guid>https://dev.to/iod/security-risks-and-challenges-in-the-serverless-world-2bb8</guid>
<description>&lt;p&gt;Adopting an architecture that gives you complete control over your application and infrastructure (servers, identity management, etc.) is good because of the flexibility it offers, but it’s only sustainable for a while. As your organization grows, things start to get complicated, and scaling and infrastructure management become a big challenge. Instead of delegating these responsibilities to developers, why not adopt serverless? This allows you to shift the responsibility of managing your application infrastructure to a cloud provider.&lt;/p&gt;

&lt;p&gt;Going serverless offers numerous benefits, such as greater scalability, faster time to market, lower operational overhead, and automated scaling—all at a reduced cost. But serverless also comes with some challenges. Like with any technology, serverless applications are susceptible to malicious attacks that can be difficult to protect against. According to an audit by PureSec, 1 in 5 serverless apps has a critical security flaw that attackers can leverage to perform various malicious actions.  &lt;/p&gt;

&lt;p&gt;I have built many serverless applications throughout my software engineering career. In this post, I’ll share some of the best practices I’ve found to be useful for mitigating security risks. &lt;/p&gt;

&lt;h2&gt;
  
  
  Serverless Attack Vectors
&lt;/h2&gt;

&lt;p&gt;Serverless applications are almost never built on functions as a service (FaaS) alone. Rather, they also rely on several third-party components and libraries, connected through networks and events. Every third-party component connected to a serverless app is a potential risk, and your application could be easily exploited or damaged if a component is compromised, malicious, or has insecure dependencies. &lt;/p&gt;

&lt;p&gt;Instead of securing serverless applications using firewalls, antivirus solutions, intrusion prevention/detection systems, or other similar tools, focus on securing your application functions hosted in the cloud. While the cloud provider provisions and maintains the servers that run your code and manages resource allocation dynamically, you still need to ensure that your app is free of the following vulnerabilities, which are unique to serverless: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data vulnerabilities:&lt;/strong&gt; Vulnerabilities that arise due to the movement of data between app functions and third-party services. These vulnerabilities are also introduced when you store app data in non-secure databases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Library vulnerabilities:&lt;/strong&gt; Security vulnerabilities that are introduced when a function uses vulnerable third-party dependencies or libraries.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Access and permission vulnerabilities:&lt;/strong&gt; Vulnerabilities that are introduced when you create policies that allow excessive access or permissions to sensitive functions or data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code vulnerabilities:&lt;/strong&gt; Vulnerabilities that are introduced when you write bad code or vulnerable serverless functions.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Security Risks and Challenges in the Serverless World
&lt;/h2&gt;

&lt;p&gt;As more enterprises adopt and build applications using serverless architectures, it’s really important to keep serverless deployments and services secure. Unfortunately, many enterprises aren’t aware of the security risks in serverless applications, not to mention crafting strategies for mitigating those risks. In this section, I’ll discuss some critical security risks to consider when running serverless applications. &lt;/p&gt;

&lt;h2&gt;
  
  
  Inadequate Monitoring and Logging of Serverless Functions
&lt;/h2&gt;

&lt;p&gt;Serverless apps operate amid a complex web of connections and use different services from various cloud providers across multiple regions. In a serverless application, insufficient function logs lead to missed error reports. Because serverless functions communicate across a network, it’s very easy to lose track of the audit trail or event flow that you need in order to detect and identify what’s happening within the app. &lt;/p&gt;

&lt;p&gt;What’s more, without proper monitoring and logging of serverless functions and events, you won’t be able to identify critical errors, malicious attacks, or insecure flows on time. Eventually, the delay will lead to app downtime that could affect your customers or brand reputation.  &lt;/p&gt;

&lt;h2&gt;
  
  
  Sensitive Data Exposure Due To a Large Attack Surface
&lt;/h2&gt;

&lt;p&gt;Serverless applications have a large attack surface and comprise hundreds, or even thousands, of functions that can be triggered by many events, including API gateway commands, data streams, database changes, emails, IoT telemetry signals, and more. Serverless functions also ingest data from various third-party libraries and data sources, the majority of which are difficult to inspect using standard application-layer protections, such as web application firewalls. &lt;/p&gt;

&lt;p&gt;Many factors add entry points to serverless architectures, including the vast range of event sources, the large number of small functions that make up a serverless app, and the active exchange of data between deployed functions and third-party services. Taken together, these factors expand the potential attack surface and the risk of sensitive data exposure, manipulation, or destruction. &lt;/p&gt;

&lt;h2&gt;
  
  
  Function Event-Data Injection
&lt;/h2&gt;

&lt;p&gt;At a high level, a function event-data injection attack occurs when a hacker uses hostile, untrusted, and unauthorized data inputs to trick an app into providing unauthorized access to data or executing unintended commands. A serverless application is vulnerable to such injection attacks when it allows malicious user input to slip through the cracks without being filtered, validated, or sanitized. &lt;/p&gt;

&lt;p&gt;These injection attacks can lead to access denial, data corruption, data loss, and even complete host takeover. In extreme cases, the hacker can take total control of an app’s high-level execution and modify its regular flow via a ransomware attack. Some common examples of function event-data injection attacks associated with serverless architectures are: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SQL and NoSQL injection&lt;/li&gt;
&lt;li&gt;Server-side request forgery (SSRF)&lt;/li&gt;
&lt;li&gt;Object deserialization attacks &lt;/li&gt;
&lt;li&gt;Function runtime code injection (e.g., Golang, C#, Java, JavaScript/Node.js, Python)&lt;/li&gt;
&lt;li&gt;XML External Entity&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As you would imagine, serverless functions aren’t immune to the previously mentioned security threats and risks. Your app will still be vulnerable if you have functions or code that use excessive permissions or don’t follow security best practices. &lt;/p&gt;

&lt;h2&gt;
  
  
  Best Practices for Securing Serverless Applications
&lt;/h2&gt;

&lt;p&gt;So how do you secure a serverless app? First, know that designing and implementing security into your app should always be a top priority—even with serverless architectures. Since you’re responsible for managing some parts of your serverless app, you need to adopt best practices that allow you to secure it against attacks, insecure coding practices, errors, and misconfigurations. Here are a few tips to get you started.  &lt;/p&gt;

&lt;h2&gt;
  
  
  Adopt the Principle of Least Privilege
&lt;/h2&gt;

&lt;p&gt;One way to secure serverless applications is to ensure proper authentication and authorization, granting each function only the minimum permissions it needs to perform its intended logic. With the principle of least privilege, you grant only enough access for a function to do its job. Setting out rules for what each function can access is essential for maintaining security in serverless architectures.&lt;/p&gt;

&lt;p&gt;This also allows you to minimize the level of security exposure for all deployed functions and mitigate the impact of any attack. Least privilege access also ensures that each function does exactly what it was designed to do, helping you maintain compliance and improve your security posture.&lt;/p&gt;
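
&lt;p&gt;As an illustration, a least-privilege policy for a single function might look like the following: it grants read access to one DynamoDB table and nothing else (the table name, account ID, and region are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:Query"],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;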

&lt;h2&gt;
  
  
  Monitor and Log Functions
&lt;/h2&gt;

&lt;p&gt;Once you start to use serverless architecture, where the provider takes care of tasks like infrastructure maintenance and scaling, you may discover that things start to move quite quickly, as there is less work to do. Also, because serverless functions are stateless and event driven, it’s very easy to miss most suspicious activities if you don’t have a good monitoring strategy. A better approach for preventing, detecting, and effectively managing security breaches is to adequately log and monitor security-related events. &lt;/p&gt;

&lt;p&gt;You can collect real-time logs from different cloud services and serverless functions, as well as periodically push the logs to a central security information and event-management system. Most cloud providers have a comprehensive log-aggregation service you can leverage. That way, it’s easier to do an audit trail that you can reference whenever you need to hunt security threats. When monitoring your serverless functions, you should collect reports on resource access, malware activities, network activity, authorization and authentication, critical failures, and errors.&lt;/p&gt;
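
&lt;p&gt;Once logs are centralized, you can hunt for suspicious activity with queries. The sketch below uses CloudWatch Logs Insights syntax to surface recent errors and access denials in a Lambda function’s log group (the log group name and match terms are just examples):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Run against the log group /aws/lambda/my-function (name is a placeholder).
fields @timestamp, @message
| filter @message like /(ERROR|AccessDenied|Unauthorized)/
| sort @timestamp desc
| limit 50
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;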

&lt;h2&gt;
  
  
  Define IAM Roles for Each Function
&lt;/h2&gt;

&lt;p&gt;In some cases, a serverless app will contain hundreds, or even thousands, of functions, which makes managing roles and permissions a time-consuming and tedious task. In a bid to make this less demanding, some enterprises fall into a trap: setting a single, wildcard permission level for an entire app that consists of tons of functions. This approach might seem less harmful when experimenting in the sandbox environment, but it can be very dangerous. In fact, it actually increases the security risks faced by serverless applications, as most code in the sandbox environment finds its way to production. &lt;/p&gt;

&lt;p&gt;As you adopt a serverless architecture, you need to think about each function individually. You should also manage individual policies and roles for each function. As a rule of thumb, every serverless function within your application should have only the permissions it needs to complete its logic—nothing more. Even if all your functions have or begin with the same policy, you should always decouple the IAM roles to ensure least privilege access control for the future of your functions. &lt;/p&gt;
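
&lt;p&gt;Frameworks make per-function roles manageable. With AWS SAM, for instance, each function gets its own execution role by default, and you can attach narrowly scoped policy templates per function. Here’s a minimal sketch (resource names are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# template.yaml (AWS SAM)
Resources:
  OrdersReadFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.9
      CodeUri: src/
      Policies:
        # SAM policy template scoped to a single table: read-only access.
        - DynamoDBReadPolicy:
            TableName: !Ref OrdersTable
  OrdersTable:
    Type: AWS::Serverless::SimpleTable
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;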

&lt;h2&gt;
  
  
  Summing Up
&lt;/h2&gt;

&lt;p&gt;New opportunities pave the way for new challenges, and serverless computing is no exception. Despite the security challenges and risks, serverless architecture is a very exciting technological evolution in the world of infrastructure and a boon to many enterprises. &lt;/p&gt;

&lt;p&gt;To address and mitigate security risks, you need to understand the serverless attack vectors and the unique challenges in serverless environments. Most importantly, you need to “shift left” and integrate security throughout the entire software-development lifecycle. &lt;/p&gt;

&lt;p&gt;All serverless applications work under the shared responsibility model, where compliance and security are a shared responsibility between the cloud provider and application owner. The cloud provider is responsible for securing the serverless infrastructure and cloud components (servers, databases, data centers, network elements, the operating system and its configuration, etc.). You are responsible for securing the application layer by enforcing legitimate app behavior, managing access to data and application code, monitoring for security incidents and errors, and so on.  &lt;/p&gt;

&lt;p&gt;Clearly, you need to invest heavily in securing your app before you can reap the benefits of serverless. In this article, I discussed the security risks you should watch out for and the best practices you should adopt to keep your serverless environments secure and safe against insecure coding practices, errors, and misconfigurations. Good luck!&lt;/p&gt;

&lt;p&gt;Are you a tech expert, blogger, influencer, writer, editor, or marketer? &lt;a href="https://iamondemand.com/iod-talent-network/"&gt;Join our talent network&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This blog post was originally posted on &lt;a href="https://iamondemand.com/blog/security-risks-and-challenges-in-the-serverless-world/"&gt;IOD Blog &lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS Lambda Multi-Region &amp; Multi-Account Deployments: Use Cases, Management, &amp; Pitfalls to Avoid</title>
      <dc:creator>We are IOD</dc:creator>
      <pubDate>Wed, 09 Mar 2022 14:10:14 +0000</pubDate>
      <link>https://dev.to/iod/aws-lambda-multi-region-multi-account-deployments-use-cases-management-pitfalls-to-avoid-1n43</link>
      <guid>https://dev.to/iod/aws-lambda-multi-region-multi-account-deployments-use-cases-management-pitfalls-to-avoid-1n43</guid>
      <description>&lt;p&gt;When you first start building a serverless application, you would usually do it in a single AWS account and deploy it to one region. More than likely, you will also not have thousands of users working at hundreds of different companies when you start. But there are good reasons for deploying infrastructure in general—and Lambda functions in particular—to multiple regions, and even to different accounts.&lt;/p&gt;

&lt;p&gt;In this article, I explore the reasons for multi-region and multi-account deployment, as well as how to best manage these deployments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Deploy Lambda Functions to Multiple Regions?
&lt;/h2&gt;

&lt;p&gt;There are many reasons you should deploy a Lambda function to multiple regions. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Redundancy &lt;br&gt;
Redundancy is the primary reason for deploying to multiple regions. Lambda functions are serverless in the sense that you do not have to set up and maintain servers, but they still run on servers that are set up and maintained by AWS. If one AWS region goes down - and you only deployed your Lambda function to that region - that function will go down too. If you deploy it to multiple regions, the functions hosted in other regions can failover when one region goes down.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Unreachable Servers&lt;br&gt;
If you deploy to one region, but suddenly all servers hosting the Lambda service are no longer reachable, your Lambda function cannot be invoked. And AWS will not run it on Lambda servers hosted in other regions if you did not set it to do so. However, if you deploy to multiple regions and one goes down, you can still redirect your users to the regions that remain.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Reducing Latency &lt;br&gt;
You can also use multi-region deployments to reduce latency. There are many causes that increase a Lambda function's latency, but geographical location can be the biggest one. Got customers in Europe and China? Well then, deploy to Europe and China! Even at the speed of light, it will take some time for a request to make it halfway around the world. The latency of a European deployment is only 20 milliseconds for me here in Germany, but 300 milliseconds for deployment in China. That difference is over an order of magnitude in scale! &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Compliance&lt;br&gt;
Finally, there are compliance reasons to consider. Sometimes you have to comply with laws that require you to store data in a specific geographical location. Private data, health-related data, or state secrets should not typically cross borders. If your service does not comply with a country’s data storage laws, you will be prohibited from offering it in that country.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
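
&lt;p&gt;A quick way to check the geographic effect yourself is to time the same request against endpoints in different regions, for example (the URLs are placeholders for regional API Gateway endpoints in front of your Lambda function):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Print the total request time against a European and a Chinese endpoint.
curl -o /dev/null -s -w "eu: %{time_total}s\n" https://abc123.execute-api.eu-central-1.amazonaws.com/prod/ping
curl -o /dev/null -s -w "cn: %{time_total}s\n" https://def456.execute-api.cn-north-1.amazonaws.com.cn/prod/ping
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;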

&lt;h2&gt;
  
  
  Why Deploy Lambda Functions to Multiple Accounts?
&lt;/h2&gt;

&lt;p&gt;Now, let’s look into why you would need multi-account Lambda deployments.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Separating the Data of Multiple Customers&lt;br&gt;
If you deploy your infrastructure in the cloud, you usually share AWS’ servers with other AWS customers. AWS put a lot of thought into the design of AWS accounts in order to keep them isolated from each other, and to not allow competitors to see each other’s data. &lt;br&gt;
This isolation, of course, can be used to your company’s advantage as well. If you have multiple customers, you can deploy each customer into a separate AWS account and ensure that they cannot access each other’s data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Securing Multiple Environments&lt;br&gt;
Also, if you have multiple environments such as development and production, you will not want a bug in one environment affecting the other one. If you have an account for each environment, this is highly unlikely to happen.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Budget Separation&lt;br&gt;
It also helps with budgeting. If you know who uses which account, you only have to look at the aggregated account bill to see who incurred what costs. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  When to Avoid Multi-Region &amp;amp; Multi-Account Deployments
&lt;/h2&gt;

&lt;p&gt;So when might you avoid multi-region and multi-account deployments? Multi-region deployments - and especially multi-account deployments - significantly increase the complexity of your architecture. Many AWS services do not support multi-account or multi-region deployments out of the box, making the integration with Lambda functions that much more difficult.&lt;/p&gt;

&lt;p&gt;For multi-account deployments, you will need extra services like AWS Single Sign-On or AWS Organizations to manage the users of all these accounts.&lt;/p&gt;

&lt;p&gt;If you’re only considering deploying to multiple regions due to latency issues, a tool like CloudFront can automatically cache your data at AWS’ edge locations around the world. You only need to deploy a single CloudFront distribution to an account in order to lower the latency for users in all regions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Which Services Help with Multi-Region and Multi-Account Deployments?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS CloudFormation:&lt;/strong&gt; This is AWS’ main infrastructure-as-code service. It comes with a structure called a stack set, which operates one level above a CloudFormation stack. With stack sets, you can use one CloudFormation template to deploy to multiple regions and accounts, keeping the configuration of the target accounts and regions in a central place (see the CLI sketch after this list).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS CodePipeline:&lt;/strong&gt; This is AWS’ managed CI/CD service, which allows you to define different targets for your delivery pipelines. Given the required permissions, a pipeline can deploy a CloudFormation template to any AWS account. This way, you can build a pipeline that first deploys to a staging account, and later, when someone manually approves the pipeline, it goes on and deploys to production.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Organizations:&lt;/strong&gt; If you wander into multi-account territory, this service (as well as AWS Control Tower, discussed below) is your friend. AWS Organizations helps you manage and configure multiple accounts in a centralized fashion. This enables you to pre-configure what a new account in the organization is allowed to do. For example, when laws require that you deploy in a specific region, you can create accounts that only have access to that region. In doing so, you ensure that you never accidentally violate the law. Organizations also helps with cost management, permissions, and security.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Single Sign-On:&lt;/strong&gt; This service lets you easily manage inter-account access. After all, if every developer needs to deploy to a staging account, they also have to receive the permissions to do so. Manually playing around with IAM users at an inter-account level is cumbersome.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Control Tower:&lt;/strong&gt; This service brings together Organizations and Single Sign-On, and makes it easier to work with the different services required for multi-account architecture.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
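
&lt;p&gt;Here’s a minimal sketch of the stack set flow from the CLI (account IDs, regions, and the template file are placeholders, and it assumes the stack set administration and execution roles already exist):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a stack set from a template containing the Lambda function.
aws cloudformation create-stack-set \
  --stack-set-name my-lambda-stackset \
  --template-body file://lambda-stack.yml \
  --capabilities CAPABILITY_NAMED_IAM

# Deploy stack instances into two accounts across two regions.
aws cloudformation create-stack-instances \
  --stack-set-name my-lambda-stackset \
  --accounts 111111111111 222222222222 \
  --regions eu-west-1 us-east-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;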

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Deploying your infrastructure to multiple regions or accounts increases complexity and - in turn - the risk of something going wrong as well. Keep your architecture as simple as possible to avoid mistakes, while noting that keeping it too simple can get you into trouble too. AWS losing the one region you deployed to can cost you a lot of money, so try to distribute the risk around geographical locations with multi-region deployments.&lt;/p&gt;

&lt;p&gt;Sometimes multi-region and multi-account deployments are even required. If you need to lower latency - or are forced to comply with local laws - you usually will not be able to get around them.&lt;/p&gt;

&lt;p&gt;In particular, B2B SaaS companies can end up with rather extreme isolation requirements when renting out services to different companies. A simple WHERE clause in SQL will not cut it anymore, and two databases running alongside each other in the same account might be too risky.&lt;/p&gt;

&lt;p&gt;While not all AWS products work out-of-the-box with multiple regions in mind, AWS offers several services that address the pain points, so that you do not have to start from scratch.&lt;/p&gt;

&lt;p&gt;Tech content for tech experts by tech experts. If you think you have what it takes and if you want to be part of the IOD talent network and write articles like this one, &lt;a href="https://bit.ly/iod-talent-network-devto"&gt;contact us&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;The original article was posted on &lt;a href="https://bit.ly/aws-lambda-multi-region-devto"&gt;IOD Blog&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>lambda</category>
      <category>serverless</category>
      <category>deployment</category>
    </item>
    <item>
      <title>Don’t Have a DevOps Team? This Is Why You’re Wrong</title>
      <dc:creator>We are IOD</dc:creator>
      <pubDate>Thu, 03 Mar 2022 08:49:19 +0000</pubDate>
      <link>https://dev.to/iod/dont-have-a-devops-team-this-is-why-youre-wrong-4bcm</link>
      <guid>https://dev.to/iod/dont-have-a-devops-team-this-is-why-youre-wrong-4bcm</guid>
      <description>&lt;p&gt;If you’re not sure what “DevOps” means, and whether or not you need a DevOps team in your organization, this article is for you. Here, I provide an overview of DevOps and its various facets, discuss why you most probably want a dedicated DevOps team in your company, and cover those edge cases where you might not need one.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is “DevOps”?
&lt;/h2&gt;

&lt;p&gt;“DevOps” is a workplace culture that merges “development” and “operations.” Before the DevOps methodology was established, engineers worked in silos, focusing solely on their particular area of expertise and usually unwilling to learn about other fields. DevOps eliminates silos by ensuring collaboration between developers and operations engineers throughout the software development lifecycle (SDLC). Teams can thus deliver optimized products much faster.&lt;/p&gt;

&lt;p&gt;The traditional siloed work environment was made up of developers on one side—responsible for writing the software code and making sure it worked on their machines—and operations on the other side, trying their best to run that software in a production environment. From the developer’s perspective, their responsibility ended when the software was released, meaning, any issue that arose in production would be the operation team’s problem. The operations engineers, on the other hand, felt it was not up to them to investigate the code if any bugs manifested in the deployed software, meaning, they would just throw the ball back to the developers. The truth is, in most cases, operations engineers wouldn’t have the necessary skills to debug the software anyway.&lt;/p&gt;

&lt;p&gt;“DevOps” has sought to bridge this gap and, in practice, has taken on a much wider meaning, embracing continuous integration, continuous deployment, automation, observability, cloud architecture, and more. &lt;/p&gt;

&lt;p&gt;As a result of this, you might have noticed that there aren’t too many sysadmins anymore. That’s because they all became DevOps engineers! So in some cases, DevOps engineers could be considered merely glorified sysadmins, while in others, the role reaches far beyond traditional system administration.&lt;/p&gt;

&lt;p&gt;In its essence, I believe DevOps is a philosophy: Use a wide array of tools and techniques in order to deliver a software product efficiently through a number of means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automation (arguably DevOps’ greatest contribution to software engineering)&lt;/li&gt;
&lt;li&gt;Security&lt;/li&gt;
&lt;li&gt;Reliability&lt;/li&gt;
&lt;li&gt;Reproducibility&lt;/li&gt;
&lt;li&gt;Scalability&lt;/li&gt;
&lt;li&gt;Elasticity&lt;/li&gt;
&lt;li&gt;Observability&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Automating Software Delivery
&lt;/h2&gt;

&lt;p&gt;We are now entering the realm of continuous integration (CI) and continuous delivery/deployment (CD), which is at the heart of DevOps. I will speak about them both separately below.&lt;/p&gt;

&lt;h2&gt;
  
  
  Continuous Integration
&lt;/h2&gt;

&lt;p&gt;Technically speaking, CI is not part of DevOps, but a technique that is part of agile software development (although DevOps engineers can contribute, for example, by automating the running of static analysis or unit tests as part of a CI pipeline). CI essentially means that developers commit their changes to the main branch of code quickly and often. &lt;/p&gt;

&lt;p&gt;In the past, teams of developers would often spend weeks or months working separately on different features. When the time to release the software came, they would need to merge all their changes. Usually, the differences would be very large and lead to the dreaded “big bang merge,” where teams of developers would sometimes spend days trying to make each other’s code work together. &lt;/p&gt;

&lt;p&gt;The main advantage of CI is that it avoids individual pieces of work diverging too much and becoming difficult to merge. If a CI pipeline is created with unit tests, static analysis, and other such checks, it allows for quick feedback to developers and thus lets them fix issues before they cause further damage or prevent other developers from working.&lt;/p&gt;
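&lt;p&gt;To make this concrete, here is a minimal sketch of the kind of checks a CI pipeline might run on every commit. The specific tools (flake8 for static analysis, pytest for unit tests) are assumptions for a Python codebase, not part of any particular CI product:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
# Hypothetical CI job: give developers fast feedback on every commit.
set -euo pipefail                  # stop at the first failure

pip3 install -r requirements.txt   # install the project dependencies
flake8 src/                        # static analysis
pytest tests/                      # unit tests
echo "All CI checks passed"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;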

&lt;h2&gt;
  
  
  Continuous Delivery/Deployment
&lt;/h2&gt;

&lt;p&gt;CD can be considered part of DevOps and builds on CI. A CD pipeline automates the delivery of software by building software automatically whenever changes are committed to a code repository and making the artifacts available in the form of a software release. When the pipeline stops at this stage, we call it “Continuous Delivery.” Additionally, a CD pipeline can automatically deploy artifacts, in which case it is called “Continuous Deployment.”&lt;/p&gt;

&lt;p&gt;In the past, building and deploying software were typically manual processes, tasks that were time-consuming and prone to errors. &lt;/p&gt;

&lt;p&gt;The main advantage of CD is that it automatically builds deliverables using a sanitized (and thus entirely controlled) environment, thus freeing up valuable time for engineers to work on more productive endeavors. Of course, the ability to automatically deploy software is certainly attractive too, but this may be one step outside the comfort zone for some engineers and managers. CD pipelines can also include high-level tests, such as integration tests, functional and non-functional tests, etc.&lt;/p&gt;
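&lt;p&gt;As an illustration, a delivery stage often boils down to a few scripted steps: build the artifact in a clean environment, version it, and publish it. This is a hedged sketch; the bucket name and source layout are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
# Hypothetical continuous delivery stage: build, version, and publish an artifact.
set -euo pipefail

VERSION=$(git rev-parse --short HEAD)   # tie the artifact to a specific commit
zip -r "app-${VERSION}.zip" src/        # build the deliverable
aws s3 cp "app-${VERSION}.zip" s3://my-artifact-bucket/releases/   # publish it
# Continuous Deployment would add an automated deploy step after this point.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;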

&lt;h2&gt;
  
  
  Automating Software Security
&lt;/h2&gt;

&lt;p&gt;This sub-branch of DevOps is sometimes called DevSecOps. Its goal is to automate security and best practices in software development and delivery. Also, it makes it easier to comply with security standards, as well as produce and retain the evidence required to prove adherence to such standards.&lt;/p&gt;

&lt;p&gt;Often, in software development, security is an afterthought, something that has to be done at some point but often left to the last moment when there is no time to properly do it. Developers are under pressure to perform and deliver within timeframes that can typically be very tight. Introducing a DevSecOps team may thus be a positive contribution, in the sense that it will establish which security aspects must be met and will use a variety of tools to enforce those requirements.&lt;/p&gt;

&lt;p&gt;DevSecOps can operate at all levels of the software lifecycle, for example (a short sketch of two such checks follows this list):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Static analysis of code&lt;/li&gt;
&lt;li&gt;Automatic running of tests&lt;/li&gt;
&lt;li&gt;Vulnerability scanning of the produced artifacts&lt;/li&gt;
&lt;li&gt;Threat detection (and possibly automated mitigation) when the software is running&lt;/li&gt;
&lt;li&gt;Auditing&lt;/li&gt;
&lt;li&gt;Automatically checking that certain security standards are followed&lt;/li&gt;
&lt;/ul&gt;
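
&lt;p&gt;To sketch two of the checks above, the snippet below assumes two common open-source scanners are installed (pip-audit for dependency CVEs, trivy for container images); the image name is a placeholder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
# Hypothetical DevSecOps checks wired into a pipeline.
set -euo pipefail

pip-audit -r requirements.txt            # scan declared dependencies for known CVEs
trivy image my-registry/my-app:latest    # scan the built container image
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;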

&lt;h2&gt;
  
  
  Automating Reliability
&lt;/h2&gt;

&lt;p&gt;DevOps is often tasked with ensuring that a given system is highly available. This is achieved using load balancers, service meshes, and other tools that automatically detect failed instances and take remedial action. Autoscaling is also an important aspect and is often implemented as an automated process by DevOps engineers.&lt;/p&gt;

&lt;p&gt;The key to all of this is that the whole system must be designed so that each of its components is ephemeral. In this way, any component can instantly be replaced by a new, healthy one, rendering a system that is self-healing. Designing such a system is usually not the remit of developers, but that of the DevOps team. &lt;/p&gt;
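&lt;p&gt;On AWS, for instance, an Auto Scaling group embodies this principle: if an instance is marked unhealthy, it is terminated and replaced automatically. A small sketch (the instance ID is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Mark an instance unhealthy; its Auto Scaling group then terminates
# and replaces it automatically, with no manual intervention.
aws autoscaling set-instance-health \
  --instance-id i-0123456789abcdef0 \
  --health-status Unhealthy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;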

&lt;p&gt;Traditionally, organizations used snowflake servers running monolithic software stacks, with everything on that single server. Such a design is very fragile, with everyone living in fear of the next breakdown and engineers on duty 24/7. Admittedly, you also need engineers on duty in an automated system, just in case, but they would typically seldom be needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automating Reproducibility
&lt;/h2&gt;

&lt;p&gt;There are various tools out there that let you automate the configuration of servers and systems and the provisioning of infrastructure elements (networks, databases, servers, containers). Examples of these are configuration management and infrastructure-as-code (IaC) tools.&lt;/p&gt;

&lt;p&gt;Leveraging these, you can ensure that an exact mirror of a given system can be automatically instantiated at the press of a button. They also let you deploy new versions of software or keep the configuration of servers or serverless services up to date. &lt;/p&gt;

&lt;p&gt;IaC often integrates with CD. Indeed, one of the final stages of a CD pipeline can be the deployment of a software release in a production environment using IaC.&lt;/p&gt;
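&lt;p&gt;In practice, that final stage is often just a couple of non-interactive CLI calls. A minimal sketch using Terraform, assuming the IaC code lives in an infrastructure/ directory:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
# Hypothetical final CD stage: apply infrastructure changes non-interactively.
set -euo pipefail

cd infrastructure/
terraform init -input=false
terraform apply -input=false -auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;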

&lt;h2&gt;
  
  
  When to Avoid DevOps Practices
&lt;/h2&gt;

&lt;p&gt;Compared to traditional, manual software development, DevOps practices require a significant amount of work upfront. This initial investment usually pays for itself many times over in the long term, but if your project is short-lived, it is probably a bad business decision. &lt;/p&gt;

&lt;p&gt;So, in any situation where you want to achieve “good enough” software that won’t be used in production, blindly applying DevOps practices isn’t likely a great idea and will only increase your development time for little added benefit. Typical examples include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Minimum viable product&lt;/li&gt;
&lt;li&gt;Demonstration&lt;/li&gt;
&lt;li&gt;Experiments&lt;/li&gt;
&lt;li&gt;Proof of concept&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In any of the above cases, moving to a production-ready product would usually require re-writing the software from scratch, in which case the DevOps practices can then be planned as part of the overall effort.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The most recurring word in the DevOps world is “automation,” as you probably noticed in this article. As a DevOps engineer, my motto is: “If you can’t reproduce it, you don’t own it.”&lt;/p&gt;

&lt;p&gt;Compared to traditional development, DevOps usually requires more work upfront in order to establish the automation patterns. After this initial period, the productivity of developers is improved, and the effort required by the operations team is greatly reduced. &lt;/p&gt;

&lt;p&gt;Perhaps, you have also noticed that I didn’t mention anything about the cloud. This is intentional because DevOps practices apply to both cloud and on-premises environments. However, in the case of cloud-based workloads, DevOps practices are pretty much mandatory for software teams today. This is because manually provisioning and managing cloud resources is cumbersome and, of course, prone to human error. Many aspects of cloud engineering are also intrinsically tied to DevOps practices.&lt;/p&gt;

&lt;p&gt;In conclusion, it is fair to assume that unless you’re rushing to develop a minimum viable product, a DevOps team will allow you to structure your workloads in a way that is more efficient for both your developers and your operations team—and will definitely make both groups happier. Remember: “DevOps” is a philosophy that encompasses both your development and operations teams, so “just” introducing a DevOps team won’t be enough. You need to implement the necessary cultural changes across your company to make it—and your cloud environment—work.&lt;/p&gt;

&lt;p&gt;This article was originally published on &lt;a href="https://iamondemand.com/blog/dont-have-a-devops-team-this-is-why-youre-wrong/"&gt;IOD Blog.&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
    </item>
    <item>
      <title>
How to setup Lambda using only the AWS CLI</title>
      <dc:creator>We are IOD</dc:creator>
      <pubDate>Thu, 24 Feb 2022 09:37:05 +0000</pubDate>
      <link>https://dev.to/iod/how-to-setup-lambda-using-only-the-aws-cli-3fnd</link>
      <guid>https://dev.to/iod/how-to-setup-lambda-using-only-the-aws-cli-3fnd</guid>
      <description>&lt;p&gt;This article will walk you through all the steps required to create a Lambda function using only the AWS command line. So no AWS console is allowed here!&lt;/p&gt;

&lt;p&gt;Please note that this article won’t go into Lambda layers. Also, it is limited to the Python runtime, so you will need to adapt the commands to another runtime if that’s what you’re using.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;br&gt;
The first thing you’ll need to do is create an IAM user with programmatic access and sufficient permissions to manipulate Lambda functions. The AWS documentation describes how to do this.&lt;/p&gt;

&lt;p&gt;The next step is to install the AWS command line interface. Again, the AWS documentation explains how to do this for your operating system. You will also need to configure your credentials to be used by the AWS CLI. I strongly recommend that you use an explicit profile, not the default profile. If you have two or more profiles and also use the default profile, the likelihood that you will one day enter an AWS command without specifying the --profile option is near 100%, with unintended and probably bad consequences. (This happened to me while manipulating CloudFormation stacks, and, needless to say, I learned my lesson.) &lt;/p&gt;

&lt;p&gt;So, to configure the AWS CLI, run the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws configure --profile test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You will obviously need the code for the Lambda function. In this example, the Lambda function will query the GitHub API to retrieve the list of repositories for a given user and will send that list to an SNS topic. Here’s the code in Python (Save it in a file named “list_github_repos.py.”):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import requests

import boto3

import os



def doit(event, context):

    username = event['username']

    url = f"https://api.github.com/users/{username}/repos"

    headers = {"accept": "application/json"}

    print(f"Querying GitHub on URL: {url}")

    r = requests.get(url, headers=headers)your@email.com

    data = r.json()

    repos = [i['name'] for i in data]

    text = ",".join(repos)

    print(f"GitHub responded: {text}")

    msg = f"Repositories for {username}: {text}"



    sns = boto3.client("sns")

    sns.publish(

            TopicArn=os.environ['SNS_TOPIC_ARN'],

            Message=msg

    )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this case, you need an SNS topic, so go ahead and create it like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws --profile test sns create-topic --name notify-repos
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will return the topic ARN; please make a copy of it, as you will need it later. To really see things in action, subscribe your email address to the SNS topic (Replace “&lt;a href="mailto:your@email.com"&gt;your@email.com&lt;/a&gt;” with your real email address):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws --profile test sns subscribe \

  --topic-arn arn:aws:sns:us-east-1:123456789012:notify-repos \

  --protocol email --notification-endpoint your@email.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will need to go to your email account and confirm the subscription. Please note that it may take a few minutes for the subscription to start delivering emails.&lt;/p&gt;
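
&lt;p&gt;If you want to check the subscription status from the terminal, you can list the subscriptions on the topic; the status should read “PendingConfirmation” until you click the link in the email:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws --profile test sns list-subscriptions-by-topic \
  --topic-arn arn:aws:sns:us-east-1:123456789012:notify-repos
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;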

&lt;p&gt;Finally, you will need a clear understanding of the IAM permissions the Lambda function requires. Here, you’ll need the permissions usually required by a Lambda function (which are related to sending logs to CloudWatch Logs) and also the “Publish” call on an SNS topic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create a Role for the Lambda Function&lt;/strong&gt;&lt;br&gt;
The next step is to create an IAM role for the Lambda function. This role will give the Lambda function permissions to perform specific actions on certain AWS resources. To create the role, you’ll first need an “assume role policy document,” which specifies which entity the role applies to. In this case, it applies to the Lambda service, so the policy document will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{

  “Version”: “2012-10-17”,

  “Statement”: [

    {

      “Effect”: “Allow”,

      “Principal”: {

        “Service”: “lambda.amazonaws.com”

      },

      “Action”: “sts:AssumeRole”

    }

  ]

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save the above JSON text in a file named “assume-role-policy-document.json.” To create the role, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws --profile test iam create-role --role-name list-github-repos-role \

  --assume-role-policy-document file://./assume-role-policy-document.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output of this command will show the role ARN. Please save this, as you will need it later.&lt;/p&gt;

&lt;p&gt;Now, you have to attach some policies to that role. AWS provides the managed policy &lt;code&gt;AWSLambdaBasicExecutionRole&lt;/code&gt;, which just gives the Lambda function permissions to write its logs to CloudWatch Logs. Go ahead and attach that policy to the role:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws --profile test iam attach-role-policy --role-name list-github-repos-role \

  --policy-arn \

  arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In your case, the Lambda function also needs to write to the “notify-repos” SNS topic. The policy document to allow the function to do this is the following (Use the SNS topic ARN you previously saved.):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{

  "Version": "2012-10-17",

  "Statement": [

    {

      "Effect": "Allow",

      "Action": "sns:Publish",

      "Resource": "arn:aws:sns:us-east-1:123456789012:notify-repos"

    }

  ]

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save the above text in a file named “sns-policy-for-lambda.json.” You will now create what is called an “inline policy,” which means that the policy is saved inside the role itself, as opposed to a managed policy, which exists independently. To create this policy inside the role, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws --profile test iam put-role-policy --role-name list-github-repos-role \

  --policy-name publish-to-sns \

  --policy-document file://./sns-policy-for-lambda.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
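
&lt;p&gt;At this point, the role should carry both the managed policy and the inline policy. You can verify this with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws --profile test iam list-attached-role-policies \
  --role-name list-github-repos-role
aws --profile test iam get-role-policy \
  --role-name list-github-repos-role --policy-name publish-to-sns
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;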



&lt;p&gt;&lt;strong&gt;Create the Lambda Function&lt;/strong&gt;&lt;br&gt;
The next step is to create the Lambda function. For this, you need to package it into a zip file, along with any dependencies that are not already provided by the Lambda runtime engine. Our function imports the following modules: “requests,” “boto3,” and “os.” The last two are provided by the Lambda runtime, but you need to package the “requests” module along with your function code. To do this, create a file named “requirements.txt” containing only the word “requests,” like so:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;echo requests &amp;gt; requirements.txt&lt;/code&gt;&lt;br&gt;
Then run the following to install the dependencies in the “pkg” subdirectory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir pkg

pip3 install --target pkg -r requirements.txt

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now run the following commands to package the function and its dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd pkg

zip -r9 ../lambda.zip .

cd ..

rm -rf pkg

zip -g lambda.zip list_github_repos.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can now finally create your Lambda function using this command (Use the SNS topic ARN that you saved earlier.):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws --profile test lambda create-function \

  --function-name list_github_repos \

  --runtime python3.9 \

  --handler list_github_repos.doit \

  --zip-file fileb://./lambda.zip \

  --timeout 300 \

  --environment \

  'Variables={SNS_TOPIC_ARN=arn:aws:sns:us-east-1:123456789012:notify-repos}'

  --role arn:aws:iam::123456789012:role/list-github-repos-role
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A few comments about the above command:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The “handler” option tells the Lambda runtime which Python function to execute when the Lambda function is invoked. The format of the argument is the file name (without the “.py” extension), followed by a dot, followed by the name of the function inside that file.&lt;/li&gt;
&lt;li&gt;The “timeout” option tells the Lambda runtime to abort the execution after the given timeout. Here, I specified 300 seconds (5 minutes), which should be more than enough; the maximum you can set is 15 minutes.&lt;/li&gt;
&lt;li&gt;The “environment” option sets the environment variable SNS_TOPIC_ARN to the ARN of the SNS topic you want to send messages to. This way, you don’t have to hardcode the topic ARN inside the function’s code.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you need to make changes to the function’s code, you can update the Lambda function by packaging the code and its dependencies as before, and then running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws --profile test lambda update-function-code --function-name list_github_repos --zip-file fileb://./lambda.zip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
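
&lt;p&gt;To double-check the runtime, handler, timeout, and environment variables that AWS actually stored, you can inspect the function’s configuration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws --profile test lambda get-function-configuration \
  --function-name list_github_repos
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;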



&lt;p&gt;&lt;strong&gt;Test the Lambda Function&lt;/strong&gt;&lt;br&gt;
Finally, you can test your Lambda function. You need to pass an event (also called a payload), which must be a base64-encoded JSON object. In this example, the function code extracts the “username” key from the JSON object, which is the GitHub username you want to query. So the way to test your function is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws --profile test lambda invoke --function-name list_github_repos --payload $(echo '{"username": "fabricetriboix"}' | base64) /dev/null
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The “/dev/null” at the end is the file to which the AWS command line will write the output of the Lambda function. Here, the Lambda function doesn’t return anything, so there will be no output, but the AWS command line still requires this argument. If the invocation fails, it could be useful to set a real file and inspect its content; that will help you diagnose what the problem is.&lt;/p&gt;

&lt;p&gt;Once you’ve invoked the Lambda function, and provided everything goes well, you should soon receive an email with a list of the public GitHub repositories for your chosen “username.”&lt;/p&gt;
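
&lt;p&gt;If the email never arrives, the function’s logs are the first place to look. Assuming you are using AWS CLI v2, you can tail the log group that Lambda creates automatically:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws --profile test logs tail /aws/lambda/list_github_repos --follow
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;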

&lt;p&gt;&lt;strong&gt;Alternatives to the AWS Command Line&lt;/strong&gt;&lt;br&gt;
You’ll probably agree with me that creating a Lambda function using purely the AWS command line is quite painful, as this article shows. So let’s explore some alternatives.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Console&lt;/strong&gt;&lt;br&gt;
The most obvious alternative is to use the AWS console. It provides a nice GUI and sets some things up for you, such as a role for your function. The main inconvenience is that it is suitable only for simple functions without external dependencies and without layers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Serverless Application Model (SAM)&lt;/strong&gt;&lt;br&gt;
SAM is essentially an extension to AWS CloudFormation that simplifies the creation and management of Lambda functions. It is very complete and is suitable for whole serverless applications. It does require a learning curve, but that might very well be time well-spent if your entire application is serverless.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CloudFormation&lt;/strong&gt;&lt;br&gt;
CloudFormation is the infrastructure-as-code tool offered by AWS. It is proprietary and works on AWS only. In practice, writing the code in CloudFormation would require setting up more or less one resource for each manual step you performed with the AWS command line. On top of that, you will need to set up an S3 bucket to store the zip file, because CloudFormation can’t access a zip file on your computer. So overall, it will be about as painful as the plain command line.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Terraform&lt;/strong&gt;&lt;br&gt;
Terraform is an infrastructure-as-code tool from HashiCorp. Terraform supports many cloud vendors and other environments where resources can be deployed. If using only basic Terraform objects, the only difference with CloudFormation will be the syntax of the code, so just as painful. Terraform does have the advantage of coming with modules, such as this AWS Lambda one, which can simplify your work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CDK&lt;/strong&gt;&lt;br&gt;
The Cloud Development Kit is an open-source project initiated by AWS. It is an infrastructure-as-code tool that allows you to write actual code to describe your infrastructure, as opposed to using a declarative approach, as done by both CloudFormation and Terraform. By default, CDK uses CloudFormation as a backend, but many vendors are providing backends for their own platforms. CDK might be the most concise tool for creating a Lambda function, with a single call being required to do so, as detailed in this JavaScript example.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
In conclusion, I would say that using the AWS CLI to create and manage Lambda functions is possible, although it is quite painful. Possibly, it might be OK to embed AWS commands in a script, although realistically, if you start going down that road, you will probably want to use a real infrastructure-as-code tool.&lt;/p&gt;

&lt;p&gt;For one-offs and experimentations, just using the AWS console probably makes more sense. And for production environments, or if you need traceability and reproducibility, a proper infrastructure-as-code tool such as SAM, CloudFormation, or CDK is definitely a better choice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Do you think you can write an AWS blog post like this one? Maybe even better? &lt;a href="https://bit.ly/iod-talent-network"&gt;Write for us!&lt;/a&gt;&lt;br&gt;
This article was originally posted on &lt;a href="https://iamondemand.com/iod-talent-network/"&gt;IOD's Blog&lt;/a&gt;.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>awslambda</category>
    </item>
    <item>
      <title>
How To Use Terraform like a Pro: Part 2</title>
      <dc:creator>We are IOD</dc:creator>
      <pubDate>Wed, 16 Feb 2022 10:15:12 +0000</pubDate>
      <link>https://dev.to/iod/how-to-use-terraform-like-a-pro-part-2-3g70</link>
      <guid>https://dev.to/iod/how-to-use-terraform-like-a-pro-part-2-3g70</guid>
      <description>&lt;p&gt;In the previous post in this two-part series, I discussed what &lt;a href="https://dev.to/iod/how-to-use-terraform-like-a-pro-part-1-e1n"&gt;Terraform is and the features it supports&lt;/a&gt;. In this post, I’ll explore some use cases to show you how to get the most out of Terraform, simplifying your &lt;a href="https://iamondemand.com/blog/choosing-the-best-deployment-tool-for-your-serverless-applications/"&gt;DevOps environment&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multi-Tier Applications
&lt;/h2&gt;

&lt;p&gt;Multi-tier architecture is the most common pattern for building systems. In this architecture, you generally use a two- or three-tier structure. In a two-tier structure, the first tier has a cluster of web servers, and the second tier is a pool of different databases used by the first tier’s servers. With more complicated systems requiring API servers, caching, middleware, event buses, and so on, you can add a third tier.&lt;/p&gt;

&lt;p&gt;With Terraform, each tier can be segregated as a collection of resources, and you can create dependencies between them using Terraform configuration. This ensures that your databases and middleware are ready before you provision your API and web servers. Terraform’s advantage is that it brings scalability and resilience into the system, as each tier can be scaled automatically using configuration.&lt;/p&gt;
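
&lt;p&gt;Terraform derives this ordering from the references between resources, and you can inspect the resulting dependency graph yourself; the rendering step assumes the Graphviz dot tool is installed:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform graph | dot -Tsvg &amp;gt; graph.svg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;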

&lt;h2&gt;
  
  
  Platform-as-a-Service (PaaS) Setup
&lt;/h2&gt;

&lt;p&gt;PaaS is a great choice if you don’t want to invest too much in building skills in infrastructure. Platforms like Cloud Foundry and Red Hat OpenShift are widely used and are being deployed on AWS, GCP, Azure, and &lt;a href="https://iamondemand.com/blog/which-cloud-provider-is-right-for-you-an-iod-series-part-1/"&gt;other cloud platforms&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;With Terraform, these platforms can be scaled based on demand. They also need regular patching, upgrades, reconfiguration, and extension support, all of which can be managed through Terraform configuration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multi-Cloud Deployment
&lt;/h2&gt;

&lt;p&gt;Due to compliance requirements and/or the need to avoid vendor lock-in, many organizations have started implementing &lt;a href="https://iamondemand.com/blog/getting-started-with-multi-cloud-ci-cd-pipelines/"&gt;multi-cloud deployment&lt;/a&gt;, which helps increase availability, fault tolerance, and system resiliency.&lt;/p&gt;

&lt;p&gt;To support infrastructure as code (IaC), each cloud vendor provides its own configuration tools. However, these tools are cloud specific. That’s where Terraform comes into play: It supports multiple cloud providers from a single configuration language and simplifies the orchestration between each provider’s resources.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multi-Repo Environment Setup
&lt;/h2&gt;

&lt;p&gt;For simple small projects, one Terraform main configuration file in a single directory is a good place to start. However, it will become a monolith over time as resources increase. Also, you’ll need to have multiple environments to support deploying applications.&lt;/p&gt;

&lt;p&gt;Terraform, however, offers several options, such as directories and workspaces to modularize your configuration so that you can manage it smoothly. You can also separate the directories for each environment, which ensures that you only touch the intended infrastructure. For example, making changes to the Dev environment won’t impact the QA or Prod environments. However, this option duplicates the Terraform code and is useful only if your deployment requirements are different for each environment.&lt;/p&gt;

&lt;p&gt;If you want to reuse the Terraform code with different environment parameters, workspace-separated environments are a better option. In this case, you will have a separate state file for each environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multi-Cloud Setup with Terraform
&lt;/h2&gt;

&lt;p&gt;Now that I’ve reviewed a few Terraform use cases, I’ll explore some of them in greater detail to show you how they can be implemented. First, I’ll dive deep into the multi-cloud setup configuration using Terraform. &lt;/p&gt;

&lt;p&gt;Let’s take a simple example: an httpd web server installed on CentOS 8 in both AWS and Azure. Here, the same httpd server is deployed to multiple clouds using Terraform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Create a Common Variable Configuration
&lt;/h2&gt;

&lt;p&gt;To start, create a common configuration file named &lt;code&gt;common-variables.tf&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This file has all the variables shared among other modules. The configuration looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Environment
variable "application_env" {
  type = string
  description = "Application environment like Dev, QA or Prod"
  default = "dev"
}

#Application name
variable "application_name" {
  type = string
  description = "Name of the Application"
  default = "multiclouddemo"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2: Terraform Configuration for Httpd Server on AWS
&lt;/h2&gt;

&lt;p&gt;Now, create a Terraform file that has configuration for httpd server on a CentOS EC2 instance.&lt;/p&gt;

&lt;p&gt;Define a variable file for AWS authentication, AZ, VPC, and CIDR.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#variables.tf
#for brevity, not putting authentication related variables.

#AWS Region
variable "region" {
  type = string
  description = "AWS Region for the VPC"
  default = "ap-southeast-1"
}
#AWS Availability Zone
variable "az" {
  type = string
  description = "AWS AZ"
  default = "ap-southeast-1a"
}
#VPC CIDR
variable "vpc_cidr" {
  type = string
  description = "VPC CIDR"
  default = "10.2.0.0/16"
}
#Subnet CIDR
variable "subnet_cidr" {
  type = string
  description = "Subnet CIDR"
  default = "10.2.1.0/24"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, create a shell script that installs the httpd server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#! /bin/bash
sudo apt-get update
sudo apt-get install -y apache2
sudo systemctl start apache2
sudo systemctl enable apache2
echo "&amp;lt;h1&amp;gt;Deployment on AWS&amp;lt;/h1&amp;gt;" | sudo tee /var/www/html/index.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then create all the resources in the main Terraform file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#for brevity, not putting each and every parameter name. Only keeping the ones that are relevant for the article.

#main.tf
#Initialize the AWS Provider
provider "aws" {
 ---
}
#VPC definition
resource "aws_vpc" "aws-vpc" {
  ----
}
#subnet definition
resource "aws_subnet" "aws-subnet" {
  ---
}
#Define the internet gateway
#Define the route table to the internet
#Assign the public route table to the subnet
#Define the security group for HTTP web server

#Centos 8 AMI
data "aws_ami" "centos_8" {
  most_recent = true
  owners = ["02342412312"]
  filter {
    name = "name"
    values = ["centos/images/hvm-ssd/centos-8.03-amd64-
      server-*"]
  }
  filter {
    name = "virtualization-type"
    values = ["hvm"]
  }
}
#Define Elastic IP for web server
resource "aws_eip" "aws-web-eip" {
  ----
}
# EC2 Instances
resource "aws_instance" "aws-web-server" {
  ami = data.aws_ami.centos_8.id
  instance_type = "t3.micro"
  subnet_id = aws_subnet.aws-subnet.id
  vpc_security_group_ids = [aws_security_group.aws-web-sg.id]
  associate_public_ip_address = true
  source_dest_check = false
  key_name = var.aws_key_pair
  user_data = file("aws-data.sh")
  tags = {
    Name = "${var.application_name}-${var.application_env}-web-server"
    Env = var.application_env
  }
}
#Define Elastic IP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Terraform Configuration for Httpd Server on Azure
&lt;/h2&gt;

&lt;p&gt;Similar to what I just showed you for AWS, you now need to define variables for Azure authentication and resources:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Azure authentication variables

#Location Resource Group
variable "rg_location" {
  type = string
  description = "Location of Resource Group"
  default = "South East"
}
#Virtual Network CIDR
variable "vnet_cidr" {
  type = string
  description = "Vnet CIDR"
  default = "10.3.0.0/16"
}
#Subnet CIDR
variable "subnet_cidr" {
  type = string
  description = "Subnet CIDR"
  default = "10.4.1.0/24"
}
# Define centos linux User related variables
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, create a shell script, similar to the AWS one, which installs the httpd server on Azure with a different message:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#! /bin/bash
sudo apt-get update
sudo apt-get install -y apache2
sudo systemctl start apache2
sudo systemctl enable apache2
echo "&amp;lt;h1&amp;gt;Deployment on Azure&amp;lt;/h1&amp;gt;" | sudo tee /var/www/html/index.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then create all the Azure resources in the main Terraform file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#main.tf

#For brevity, not putting each and every parameter name. Only keeping the ones relevant to the article.
#Configure the Azure Provider
provider "azurerm" {
  --
}
#Define Resource Group
resource "azurerm_resource_group" "azure-resource_grp" {
  --
}
#Define a virtual network
resource "azurerm_virtual_network" "azure-vnet" {
  --
}
#Define a subnet
resource "azurerm_subnet" "azure-subnet" {
  ---
}
#Create Security Group to access Web Server
resource "azurerm_network_security_group" "azure-web-nsg" {
  ---
}
#Associate the Web NSG with the subnet
resource "azurerm_subnet_network_security_group_association" "azure-web-nsg-association" {
  ---
}
#Get a Static Public IP
resource "azurerm_public_ip" "azure-web-ip" {
  ---
}
#Create Network Card for Web Server VM
resource "azurerm_network_interface" "azure-web-nic" {
  ---
}
#Create web server vm
resource "azurerm_virtual_machine" "azure-web-vm" {
  name = "${var.application_name}-${var.application_env}-web-vm"
  location = azurerm_resource_group.azure-resource_grp.location
  resource_group_name = azurerm_resource_group.azure-resource_grp.name
  network_interface_ids = [azurerm_network_interface.azure-web-nic.id]

  storage_image_reference {
   ---
  }
  tags = {
    environment = var.application_env
  }
}
#Output
output "azure-web-server-external-ip" {
  value = azurerm_public_ip.azure-web-ip.ip_address
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now the Terraform configuration is ready for both AWS and Azure. You can run the following commands to create the multi-cloud application using Terraform:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform init 
$ terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One additional piece of configuration is needed to distribute traffic across both AWS and Azure behind the same URL. For that, you can use a DNS-based service such as Amazon Route 53 or Cloudflare. &lt;/p&gt;

&lt;h2&gt;
  
  
  Multi-Repo Environment Application
&lt;/h2&gt;

&lt;p&gt;Earlier, I briefly discussed how you can use directories and workspaces to support multi-repository applications. I’ll now explore how to implement workspaces to reuse the same Terraform configuration for multiple environments.&lt;/p&gt;

&lt;p&gt;Terraform configurations generally have a default workspace. You can check this by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform workspace list
   * default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: The asterisk (*) marks the current workspace.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Create Variables
&lt;/h2&gt;

&lt;p&gt;Start with defining a file named variables.tf:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "aws_region" {
  description = "AWS region where our web application will be deployed."
}

variable "env_prefix" {
  description = "Environment like dev, qa or prod"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2: Define Main Configuration
&lt;/h2&gt;

&lt;p&gt;Next, define a main.tf configuration defining all the resources required for a small web application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#main.tf
provider "aws" {
  region = var.region
}

resource "random_country" "countryname" {
  length    = 20
  separator = "-"
}

resource "aws_s3_bucket" "bucket" {
  bucket = "${var.env_prefix}-${random_country.countryname.id}"
  acl    = "public-read"

  policy = &amp;lt;&amp;lt;EOF
  {
    ---
  }
  EOF

  website {
    index_document = "welcome.html"
    error_document = "error.html"

  }
  force_destroy = true
}

resource "aws_s3_bucket_object" "countryapp" {
  acl          = "public-read"
  key          = "welcome.html"
  bucket       = aws_s3_bucket.bucket.id
  content      = file("${path.module}/assets/welcome.html")
  content_type = "text/html"

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Define Variables for the Command Line Interface (CLI)
&lt;/h2&gt;

&lt;p&gt;Now, define the &lt;code&gt;dev.tfvars&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;region = "ap-southeast-1"
prefix = "dev"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then define the prod.tfvars file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;region = "ap-southeast-1"
prefix = "prod"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These files can be kept in different repositories in order to isolate them. Which repository a file is kept in depends on the roles of the users who will be allowed to access them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Define Output File
&lt;/h2&gt;

&lt;p&gt;The output file will be the same for both of the environments:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "website_endpoint" {
  value = "http://${aws_s3_bucket.bucket.website_endpoint}/index.html"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 5: Create Workspaces
&lt;/h2&gt;

&lt;p&gt;Next, create two workspaces: one for dev and one for prod.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform workspace new dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you create the dev workspace, it will become your current workspace.&lt;/p&gt;

&lt;p&gt;Now, initialize the directory and then apply the dev.tfvars file using the flag -var-file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform init
$ terraform apply -var-file=dev.tfvars
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will execute the configuration in the dev workspace, and the output will show the URL at which the web application can be opened in the browser.&lt;/p&gt;

&lt;p&gt;You can create the prod workspace similarly by applying prod.tfvars:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform workspace new prod
$ terraform init
$ terraform apply -var-file=prod.tfvars
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you should be able to run the web application in a prod environment as well.&lt;/p&gt;

&lt;p&gt;Also, your folder structure will span three repositories. The first repository will have two workspaces: dev and prod. This ensures that state is maintained separately for each environment you apply with the matching &lt;code&gt;-var-file&lt;/code&gt; flag. &lt;/p&gt;

&lt;p&gt;You’ll also notice that there is a separate state file for each environment/workspace.&lt;/p&gt;
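
&lt;p&gt;You can confirm the current workspace at any time, and with the default local backend, each non-default workspace keeps its own state under terraform.tfstate.d:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform workspace show
prod
$ ls terraform.tfstate.d
dev  prod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;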

&lt;p&gt;Here is the structure of the three repositories:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
├── README.md        

├── assets

│   └── index.html

├── main.tf

├── outputs.tf

├── terraform.tfstate.d

│   ├── dev

│   │   └── terraform.tfstate

│   ├── prod

│   │   └── terraform.tfstate

└── variables.tf

├── README.md

├── dev.tfvars

├── README.md

├── prod.tfvars
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In this article, I reviewed several use cases that show how Terraform can help DevOps processes run smoothly and enable you to maintain the infrastructure with versioning, reuse of the code, automated scalability, and much more. While I shared some of the most well-known examples, since Terraform allows the extension of its features, there are many other use cases that show how Terraform can enable IaC.&lt;/p&gt;




&lt;p&gt;If you want to write expert-based articles like this one, &lt;a href="https://iamondemand.com/iod-talent-network/"&gt;join our talent network&lt;/a&gt;. &lt;/p&gt;

</description>
      <category>terraform</category>
      <category>cloudskills</category>
    </item>
  </channel>
</rss>
