<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: We are IOD</title>
    <description>The latest articles on DEV Community by We are IOD (@iamondemand).</description>
    <link>https://dev.to/iamondemand</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F679320%2Fe8b9717c-68c7-4b0e-99bb-823499d0bdc6.png</url>
      <title>DEV Community: We are IOD</title>
      <link>https://dev.to/iamondemand</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/iamondemand"/>
    <language>en</language>
    <item>
      <title>Freelance Tech Marketing Writer – 5 Things That Can Scar You for Life</title>
      <dc:creator>We are IOD</dc:creator>
      <pubDate>Sun, 02 Oct 2022 12:53:41 +0000</pubDate>
      <link>https://dev.to/iamondemand/freelance-tech-marketing-writer-5-things-that-can-scar-you-for-life-5gad</link>
      <guid>https://dev.to/iamondemand/freelance-tech-marketing-writer-5-things-that-can-scar-you-for-life-5gad</guid>
      <description>&lt;p&gt;It needs to be said: Making a living as a freelance tech writer can be pretty challenging at times.&lt;/p&gt;

&lt;p&gt;Unless you’re able to confide in fellow writers once in a while, you might feel you’re the only one having trouble—the only writer who’s not out there raking in new clients hand over fist; not building rarefied new skills with every job; not writing authoritatively for glamorous, big-name tech companies; and not smashing earnings goals month after profitable month.&lt;/p&gt;

&lt;p&gt;The reality is that almost every freelance writer gets a little—or a lot—bashed up on their journey to becoming a successful, independent businessperson. Even the writers you most admire bear some scars from their struggle through the ranks.&lt;/p&gt;

&lt;p&gt;And the challenges persist, regardless of the level of success a writer enjoys. Knowing how to manage those challenges effectively is what distinguishes the winners from the also-rans. This post presents the five key challenges freelance tech marketing writers face and suggests a practical solution that’s easy to implement—which could change your professional life forever!&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Client Acquisition
&lt;/h2&gt;

&lt;p&gt;The most significant challenge—because it’s the axis around which your entire business revolves—is finding the right clients to work with.&lt;/p&gt;

&lt;p&gt;If you come into freelance writing from a corporate environment, it doesn’t take long to figure out that you’re in a new paradigm with one sizable obstacle: To keep your business afloat, you need to spend a good part of your working day doing non-billable marketing and administrative work.&lt;/p&gt;

&lt;p&gt;If only 50–60% of all the time you spend at your desk can be billed to clients, how do you match your previous salary or even earn enough to cover your monthly bills?&lt;/p&gt;

&lt;p&gt;The only solution is to find solid clients who understand the value you have to offer and will pay reasonable fees for your work, which can be like looking for a needle in a haystack. However you organize your search, you’ll have to kiss a lot of frogs.&lt;/p&gt;

&lt;p&gt;Says veteran freelance tech marketing writer Yetunde, “When I started out as a freelancer, I spent entire days and weeks sending out emails, following them up, and sitting in (mostly) ultimately fruitless discovery calls with countless potential clients. Things became easier over time, and I now have a client base, but that doesn’t mean I can stop making an effort to acquire new clients. I continue to spend a lot of time optimizing my website, updating my online portfolio, and actively reaching out to prospects. Client acquisition is a never-ending process for freelancers and time spent on outreach obviously impinges on the time we can spend writing and doing other billable work. As such, finding clients remains my biggest pain point.”&lt;/p&gt;

&lt;p&gt;Also worth bearing in mind is that for tech marketing writers in particular, getting exposure to big brands is typically harder than it is in other industries. Tech companies typically prefer to hire employees via personal referral or to work with agencies, making it hard for individual freelancers to get a foot in the door. Moreover, marketing teams and SMEs often don’t have the time or resources to invest in getting new writers up to speed on complex products. As a result, the tech industry can represent a closed shop for newcomers.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Admin and Paperwork Burden
&lt;/h2&gt;

&lt;p&gt;Have you ever stopped to calculate how much time you spend preparing to work for your various clients, talking about the work as it’s underway, and cleaning up afterwards?&lt;/p&gt;

&lt;p&gt;Contracts, project proposals, briefing forms, status updates, approvals, and invoices can become an enormous paperwork burden when you serve multiple clients—each with their own contractual requirements, payment methods, and work processes. The tech industry comprises a vast range of companies, from global conglomerates to three-person startups. That’s why every freelance tech marketing writer must take the time to create and maintain a dedicated, constantly expanding library of client management templates to suit every eventuality.&lt;/p&gt;

&lt;p&gt;Dealing with multiple clients and company types also means keeping up with a plethora of communications and project management software, apps, platforms, email accounts, and even computers—all flashing and pinging around your workstation, day and night, needing passwords and usernames and reboots… now Jira, now Slack, now Trello, now Basecamp, now Google Docs, now Word.&lt;/p&gt;

&lt;p&gt;As your client count grows, so does the number of editorial, tone-of-voice, and brand style guides to be followed and mastered. Welcoming a new client also means preparing to deal with new editors and with editorial habits and processes that often seem inflexible, even irritating, or just outright wrong.&lt;/p&gt;

&lt;p&gt;The result for a freelance writer serving a wide range of clients in diverse industries may be overwhelm and even burnout.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Lack of Structure
&lt;/h2&gt;

&lt;p&gt;Navigating a lack of structure within a client company can be even more painful for a freelancer than struggling to become familiar with a surfeit of rules and processes around content writing and production in a single client company.&lt;/p&gt;

&lt;p&gt;When former marketing executive Clair stepped into freelance employment after years in a smoothly run corporate environment, it never occurred to her that her new clients might lack the systems and capabilities to implement the strategies she designed—or even to upload her content into their CMS. Her Florence Nightingale instincts leapt to the fore and she soon found herself in a world of trouble.&lt;/p&gt;

&lt;p&gt;Says Clair, “My biggest issue was taking on too many tasks that were outside the scope of my paid work. I frequently found myself working extra hours for exhausted solopreneurs or small teams when I didn’t feel confident requesting extra pay for these ‘small’ tasks. For one client who simply had no marketing support structure whatsoever, I worked on content uploads and similar tasks for a few extra hours every month just to be sure my work was properly implemented. I didn’t consider how that was taking my time from other clients and soon I found myself working full-time hours for the client at a rate way below what I was worth. In such a chaotic set-up, it was impossible to showcase the full value of the strategy I provided because I was doing so many ‘emergency’ tasks outside that strategy. Ultimately, my confidence decreased, causing a ripple effect in my other client relationships where I also undercharged and took on too many admin tasks beyond my scope of work.”&lt;/p&gt;

&lt;p&gt;As a marketing professional with a range of writing-adjacent skills, Clair gave in to the temptation to demonstrate all her capabilities when the need arose. Predictably, she burned out after a few years.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Lack of Sector Knowledge
&lt;/h2&gt;

&lt;p&gt;There are many reasons freelance writers choose tech as a niche. With the global IT industry expected to be worth $13.8 trillion by 2026, it makes sound career sense.&lt;/p&gt;

&lt;p&gt;A full 53% of respondents to IOD’s 2022 survey of tech marketing writers cited superior compensation as their key reason for going into the field. Other motivations included stimulating work (23%), career growth (12%), and steady work (7%).&lt;/p&gt;

&lt;p&gt;However, for a professional writer without any particular tech knowledge or expertise and with no prior work experience in the tech industry, choosing a tech speciality can be a headache. Cloud? DevOps? Cybersecurity? Data engineering? AI?&lt;/p&gt;

&lt;p&gt;Each specialist area has its own target audiences, key stakeholders, concepts, and terminology. How does an outsider become familiar with the territory? And where can he or she start to acquire the sector expertise, the confidence, and the grasp of the specialist language needed to write professionally in this space and build a lucrative and meaningful career?&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Managing Cash Flow
&lt;/h2&gt;

&lt;p&gt;Many beginner freelancers quickly give up in defeat when they experience the income insecurity that comes with independent employment. But even experienced freelancers can find themselves having to chase after the client well after payment is due. Worse, in some cases they are not paid at all, and the time, effort, and cost involved in trying to get the client to cough up the cash isn’t always worth it. Not knowing whether they’ll be able to meet their monthly expenses, many freelancers may feel forced to return to the corporate jobs they’d hoped to leave behind.&lt;/p&gt;

&lt;p&gt;Even for more established freelancers, ensuring a steady flow of income on the back of unstable piecework is a significant challenge. Many clients offer net 30, 60, or even 90-day payment terms, with longer terms often being associated with larger and more prestigious companies and brands.&lt;/p&gt;

&lt;p&gt;Inconsistent income makes it very difficult for freelancers to plan large purchases, schedule all-important breaks, and make important life decisions.&lt;/p&gt;

&lt;h2&gt;
  
  
  The IOD Talent Network
&lt;/h2&gt;

&lt;p&gt;At IOD, we’re leaders in tech content, serving the most well-respected tech brands in cloud, DevOps, cybersecurity, data engineering, and AI. But beyond engineering powerful content, we’ve made it our mission to build a talent network that puts you at the center, addressing all the pain points of freelance writing. &lt;/p&gt;

&lt;p&gt;IOD does all the heavy lifting for you—bringing you clients from leading tech companies, handling client management and administration on your behalf, and protecting your interests at all times.&lt;/p&gt;

&lt;p&gt;As a member of the IOD Talent Network, you’ll get the full benefit of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Proven, streamlined processes&lt;/li&gt;
&lt;li&gt;Expert mentors to help you learn the ropes and grow your expertise&lt;/li&gt;
&lt;li&gt;Exposure to the biggest brands in tech&lt;/li&gt;
&lt;li&gt;Professional editors to elevate your work and implement client style guides so you can focus on research and writing&lt;/li&gt;
&lt;li&gt;Net 10 payment terms&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Discover how freelance tech marketing writers Tina and Bruno successfully leveraged their professional skills by partnering with IOD.&lt;/p&gt;

&lt;p&gt;Looking for a powerful and effective route to consistent income, simplified administration, and a dependable flow of stimulating work? &lt;a&gt;Join the IOD Talent Network&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Which Kubernetes Ingress Is Right for You</title>
      <dc:creator>We are IOD</dc:creator>
      <pubDate>Sun, 11 Sep 2022 08:08:30 +0000</pubDate>
      <link>https://dev.to/iamondemand/which-kubernetes-ingress-is-right-for-you-4gd7</link>
      <guid>https://dev.to/iamondemand/which-kubernetes-ingress-is-right-for-you-4gd7</guid>
      <description>&lt;p&gt;By default, Kubernetes pods and services cannot be accessed outside of the cluster. At some point, however, you may want to turn your K8s applications into full-fledged web services accessible over the Internet. The Kubernetes Ingress resource is one of several ways to accomplish this.&lt;/p&gt;

&lt;p&gt;Kubernetes Ingress allows you to configure various rules for routing traffic to services within your Kubernetes cluster. But Ingress offers more than that: you can use an advanced K8s Ingress solution to load-balance traffic; terminate SSL/TLS; implement name-based virtual hosting; and enable API authentication, security, monitoring, and more. Whether you need a simple reverse proxy that routes traffic to a specific service or a more advanced setup with traffic middleware and complex traffic-splitting rules depends on the requirements of your applications. &lt;/p&gt;

&lt;p&gt;In this article, we’ll try to guide you through various K8s Ingress solutions and use cases to help you find the best K8s Ingress option that fits your needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  K8s Ingress Basics
&lt;/h2&gt;

&lt;p&gt;Before we dive into the discussion of various K8s Ingress solutions, let’s see how ingress is implemented in Kubernetes. K8s has a built-in resource that defines the configuration needed for an ingress controller and services it can route to. In a nutshell, the K8s Ingress resource is just a K8s metadata object that defines URI paths, backing service name and ports, and other metadata (see code example below). &lt;/p&gt;

&lt;p&gt;However, on its own, an Ingress does nothing. For it to work, you have to deploy an Ingress controller and associate the K8s Ingress resource with it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HFLiuInw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ql3rfohx8erwtanemd2c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HFLiuInw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ql3rfohx8erwtanemd2c.png" alt="Image description" width="880" height="574"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;K8s lets you create multiple Ingresses, which is useful when you want to use one Ingress for external traffic and another one for in-cluster traffic between services. &lt;/p&gt;

&lt;h2&gt;
  
  
  Key Things to Consider When Selecting K8s Ingress
&lt;/h2&gt;

&lt;p&gt;The choice of the Kubernetes Ingress controller directly depends on the requirements of your application, the location of your K8s cluster (e.g., on-premises or in the cloud), the design of your microservices architecture, security compliance requirements, and more. &lt;/p&gt;

&lt;p&gt;Below, we list several important things to consider when choosing the right K8s Ingress solution. &lt;/p&gt;

&lt;h2&gt;
  
  
  Protocol Requirements
&lt;/h2&gt;

&lt;p&gt;Traditional APIs typically use standard Layer 7 protocols such as HTTP(s). If your K8s services are based on HTTP(s), you can probably opt for a simple reverse proxy solution. However, this may not be enough if your API relies on other protocols, such as the binary HTTP/2 protocol, gRPC, or WebSockets. &lt;/p&gt;

&lt;p&gt;Also, if your API needs to route low-level traffic, the K8s Ingress controller has to support low-level Layer 4 protocols such as TCP/UDP. &lt;/p&gt;
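&lt;p&gt;As an example of the extra configuration Layer 4 traffic can require: the community NGINX Ingress controller doesn’t express TCP/UDP routes in the Ingress resource at all, but in a separate ConfigMap referenced by the controller’s --tcp-services-configmap flag. A sketch (the namespace and service names are hypothetical):&lt;/p&gt;

```yaml
# Expose a raw TCP service (here, a Postgres instance) through the
# community ingress-nginx controller. The controller must be started
# with --tcp-services-configmap pointing at this ConfigMap.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # external port: "namespace/service:port"
  "5432": "default/postgres:5432"
```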

&lt;h2&gt;
  
  
  Downtime Tolerance
&lt;/h2&gt;

&lt;p&gt;Most statically configured reverse proxies and ingress controllers are reloaded on every configuration update. This causes short downtimes and increased memory consumption while the proxy reloads, during which users will not be able to access your services. &lt;/p&gt;

&lt;p&gt;If you need zero downtime for your API services, a K8s Ingress that supports automatic service discovery and dynamic reconfiguration may be a preferable option for you. &lt;/p&gt;

&lt;h2&gt;
  
  
  Middleware
&lt;/h2&gt;

&lt;p&gt;Modern API services require various middleware for managing traffic. This may include rate limiting for controlling the traffic load on your service, circuit breakers for preventing error loops, authentication middleware, etc. Advanced K8s Ingress controllers let you configure this middleware at the API gateway level. &lt;/p&gt;

&lt;p&gt;Because of this, developers do not need to code special traffic management rules into the application and can instead focus on building a product. &lt;/p&gt;
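&lt;p&gt;For instance, with the community NGINX Ingress controller, basic rate limiting can be declared as annotations on the Ingress itself rather than coded into the application (the limits and service name below are illustrative):&lt;/p&gt;

```yaml
# Rate limiting at the gateway level: at most 10 requests per second
# per client IP, with a cap on concurrent connections as a second guard.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rate-limited-ingress
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "10"
    nginx.ingress.kubernetes.io/limit-connections: "20"
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
```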

&lt;h2&gt;
  
  
  Extensibility
&lt;/h2&gt;

&lt;p&gt;If you don’t know all API and application requirements in advance, you can opt for a more flexible and extensible Ingress solution that lets you add custom plugins and modules for extra functionalities. Unfortunately, not many K8s Ingress solutions are built with such extensibility in mind. &lt;/p&gt;

&lt;h2&gt;
  
  
  Regulatory Compliance
&lt;/h2&gt;

&lt;p&gt;If your company works in a highly regulated industry, you may need an Ingress solution that complies with strict security standards like Federal Information Processing Standards (FIPS). &lt;/p&gt;

&lt;p&gt;Also, for hardening your API security, you may opt for a K8s Ingress solution that supports a web application firewall (WAF) and includes built-in protection against common web vulnerabilities like the OWASP Top 10. Luckily, many K8s Ingress solutions do include security compliance as part of their paid subscription. &lt;/p&gt;

&lt;h2&gt;
  
  
  Overview of Ingress Solutions
&lt;/h2&gt;

&lt;p&gt;Let’s review some popular Ingress solutions and see how they address the requirements listed above. &lt;/p&gt;

&lt;h2&gt;
  
  
  NGINX Ingress
&lt;/h2&gt;

&lt;p&gt;The NGINX Ingress Controller is based on the NGINX reverse proxy and is one of the most popular ingress technologies for Kubernetes, with over 50 million Docker pulls. The product ships in three versions: a community version maintained by the Kubernetes project, and two official versions from F5 NGINX, a free OSS version and a commercial NGINX Plus version. &lt;/p&gt;

&lt;p&gt;Both the community and open-source NGINX versions are quite simple integrations of the NGINX reverse proxy with the Kubernetes Ingress resource. When using these open-source versions, you get basic HTTP(s) routing functionality and SSL features. On the protocol side, the free versions provide support for TCP/UDP protocols and allow for gRPC integration and WebSockets. At the moment, integrating non-HTTP protocols requires additional configuration and engineering effort. &lt;/p&gt;
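&lt;p&gt;As an illustration of that gRPC support, the community NGINX Ingress controller exposes it through a backend-protocol annotation (the hostname, TLS secret, and service name below are placeholders):&lt;/p&gt;

```yaml
# Route gRPC traffic with the community ingress-nginx controller.
# gRPC runs over HTTP/2, so the backend protocol must be declared
# explicitly, and TLS is required for HTTP/2 termination.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grpc-ingress
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - grpc.example.com
    secretName: grpc-tls-cert
  rules:
  - host: grpc.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grpc-service
            port:
              number: 50051
```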

&lt;p&gt;A major limitation of the free NGINX Ingress is that it’s statically configured, which leads to downtime due to configuration updates—not optimal if your application requires zero downtime. &lt;/p&gt;

&lt;p&gt;In sum, if you need a simple ingress solution that routes HTTP(s) traffic to K8s services, these two open-source versions may be enough. The official NGINX OSS version is more stable and backward-compatible than its community-developed counterpart since it does not rely on external third-party tools and Lua modules. It’s fully managed by the NGINX team and any tools and modules used in it undergo extensive interoperability testing. &lt;/p&gt;

&lt;p&gt;Thus, the official NGINX OSS is preferable if you need a stable ingress product with 100% backward compatibility. &lt;/p&gt;

&lt;p&gt;The NGINX paid version is a more advanced ingress implementation with many useful features. In particular, it supports advanced traffic policies (rate limiting), IP ACL, mTLS, and JWT validation. If your app needs to implement IT Ops, you can leverage NGINX Plus support for A/B testing and canary deployments. Also, the paid version has all the building blocks for security compliance including implementation of WAF security middleware, JSON schema validation, and OWASP Top 10 protection. &lt;/p&gt;

&lt;p&gt;It should be noted that NGINX Plus is one of the few K8s Ingress solutions that implement a WAF, which it does via NGINX App Protect. Thus, NGINX Plus covers pretty much all the ingress requirements mentioned above. &lt;/p&gt;

&lt;h2&gt;
  
  
  Kong Ingress Controller
&lt;/h2&gt;

&lt;p&gt;The Kong Ingress Controller is built on the same NGINX proxy core, extended with additional modules and plugins. At the moment, the Kong ecosystem has over 400 enterprise and community plugins for traffic management, API security, monitoring, and more. These plugins are available from third-party developers and are often easy to configure and use. &lt;/p&gt;

&lt;p&gt;There are various plugins for non-HTTP protocols as well, including gRPC and HTTP/2, request middleware, declarative configuration, and advanced load-balancing algorithms. Also, there are Kong plugins that support OpenID Connect, Open Policy Agent, WAFs, and other security features. If there is no plugin that matches your exact needs, you can create a new one using a well-documented plugin development kit (PDK).&lt;/p&gt;
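&lt;p&gt;To give a feel for the plugin model, here’s a sketch of Kong’s declarative approach: a plugin is defined as a custom resource and then attached to an Ingress via an annotation (resource names below are illustrative):&lt;/p&gt;

```yaml
# Declare a Kong rate-limiting plugin as a Kubernetes resource...
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-5-per-minute
plugin: rate-limiting
config:
  minute: 5
  policy: local
---
# ...then attach it to an Ingress with an annotation.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kong-ingress
  annotations:
    konghq.com/plugins: rate-limit-5-per-minute
spec:
  ingressClassName: kong
  rules:
  - http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
```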

&lt;p&gt;Not all Kong Ingress plugins are available for free though. Kong Ingress ships in two versions: the Kong Gateway OSS version and the Enterprise version. The OSS version offers basic API gateway features and open-source plugins. With Enterprise, you get access to a wide range of enterprise plugins, a dev portal (to generate an API and manage API versions), vitals (API analytics and monitoring), and RBAC. Also, Kong Enterprise provides a FIPS 140-2-compliant gateway, ideal for highly regulated industries. &lt;/p&gt;

&lt;p&gt;In sum, the built-in extensibility of the Kong platform means that with the necessary effort, you can achieve feature parity with NGINX Plus and implement many new features not covered by competing ingress solutions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud Ingress
&lt;/h2&gt;

&lt;p&gt;Cloud ingress requires an external cloud-based load balancer to route traffic to Ingress resources. On the Google Cloud Platform (GCP), for instance, an HTTP(s) load balancer is automatically created when you deploy Ingress. &lt;/p&gt;

&lt;p&gt;The major benefit of a cloud-based ingress controller is seamless integration with various cloud services offered by a cloud platform. If you’re running K8s in the cloud, this is a great advantage. For example, the GCE Ingress controller directly integrates with GKE’s cloud IAP, which lets you use Identity-Aware Proxy for the protection of K8s applications. Meanwhile, the Amazon ALB controller creates an Application Load Balancer that integrates with AWS WAF, Cognito, and Route 53. &lt;/p&gt;
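&lt;p&gt;For example, with the AWS Load Balancer Controller, a few annotations on an otherwise standard Ingress are enough to provision an internet-facing ALB (the service name below is a placeholder):&lt;/p&gt;

```yaml
# An Ingress handled by the AWS Load Balancer Controller:
# it provisions an internet-facing Application Load Balancer
# that forwards traffic directly to pod IPs.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
```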

&lt;p&gt;The main disadvantage of cloud-based ingress is that you have to use the specific cloud ingress controller offered by your cloud provider, which leads to vendor lock-in. Also, since cloud ingress requires a load balancer, you will incur additional costs. Load balancers are not free and can be quite expensive, especially if your K8s cluster has many applications and you have to install multiple ingresses. &lt;/p&gt;

&lt;p&gt;Other potential problems to consider with cloud ingress include the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Additional operational complexity. Load balancers require deployment and configuration of various resources such as IPs and DNS certificates, which may be hard to manage in the highly dynamic environment of K8s.&lt;/li&gt;
&lt;li&gt;Performance. Most load balancers are not built for performance, which may be a problem if you need to have multiple ingresses.&lt;/li&gt;
&lt;li&gt;Cloud quota. Cloud projects have a quota for backend services. For example, by default, GCE projects grant a quota of three backend services, which may be insufficient for most Kubernetes clusters.&lt;/li&gt;
&lt;li&gt;Load balancing. Not all ingress controllers support fine-grained control over load-balancing algorithms.&lt;/li&gt;
&lt;li&gt;Cluster size. Most cloud-based ingresses do not support large K8s clusters (1,000+ nodes).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;As this article demonstrates, there are many factors to consider when choosing the right ingress for your clusters. If you’re planning to run a K8s cluster in the cloud, you have pretty much no choice but to use the cloud-provided Ingress controllers and load balancers. When deciding whether to run your K8s clusters in the cloud or on-premises, you should definitely consider the performance, quota, and cluster-size limitations of cloud ingress. For most applications that run in the cloud, including medium-size K8s clusters, cloud ingress offers many benefits, including direct integration with other cloud services. &lt;/p&gt;

&lt;p&gt;If your API does not have strict compliance requirements and can use simple traffic policies, you can opt for the free or community NGINX Ingress. Of the two, the official NGINX OSS version provides better stability and security because it’s directly managed by the NGINX team.&lt;/p&gt;

&lt;p&gt;The Kong API gateway is an excellent choice if your API services need to be highly flexible and you want to have additional plugins. However, some plugins may only be available in the paid Kong version. Also, having multiple plugins may introduce compatibility issues and require more configuration efforts on the part of your engineering team. &lt;/p&gt;

&lt;p&gt;Finally, NGINX Plus may be a preferred solution if your K8s cluster requires strict regulatory compliance and API security. Also, the paid NGINX version meets all other K8s Ingress criteria discussed in this article.&lt;/p&gt;

&lt;p&gt;This article was originally published on the &lt;a href="https://iamondemand.com/blog/which-kubernetes-ingress-is-right-for-you/"&gt;IOD Blog&lt;/a&gt;. &lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>k8s</category>
      <category>ingress</category>
    </item>
    <item>
      <title>Developers Don’t Want Fluff: Ofir Nachmani Talks B2D Best Practices on the DevRelX Podcast</title>
      <dc:creator>We are IOD</dc:creator>
      <pubDate>Mon, 05 Sep 2022 14:44:05 +0000</pubDate>
      <link>https://dev.to/iamondemand/developers-dont-want-fluff-ofir-nachmani-talks-b2d-best-practices-on-the-devrelx-podcast-2nl9</link>
      <guid>https://dev.to/iamondemand/developers-dont-want-fluff-ofir-nachmani-talks-b2d-best-practices-on-the-devrelx-podcast-2nl9</guid>
      <description>&lt;p&gt;Tech marketing these days means speaking directly to the users of your product—the developers. This requires marketing teams to have the technical knowledge to develop content that developers find useful and credible. At IOD, we help companies bridge the gap between technologists and marketers; it’s what our CEO Ofir Nachmani sought to do when he founded the agency. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--W3ynsCDk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/37nx3lfdwb99qa5908wm.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--W3ynsCDk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/37nx3lfdwb99qa5908wm.jpeg" alt="Image description" width="300" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;“I believe that establishing a high-quality, high-volume content operation requires a hybrid approach. This will help the marketer establish a production line that is predictable, sustainable, and recurring—and one you can scale up when the time comes.” – Ofir Nachmani&lt;/p&gt;

&lt;p&gt;Ofir was recently a guest on the DevRelX podcast, hosted by Stathis Georgakopoulos, Product Marketing Manager at SlashData. In the chat, Ofir discusses his road to IOD, talks about the gaps that tech marketing agencies fill, and offers best practices for business-to-developer (B2D) content.&lt;/p&gt;

&lt;p&gt;Read on for highlights from their discussion.&lt;/p&gt;

&lt;h2&gt;
  
  
  From Accidental Blogger to Founder &amp;amp; CEO
&lt;/h2&gt;

&lt;p&gt;Stathis began his conversation with Ofir by asking him a simple question. &lt;/p&gt;

&lt;p&gt;Stathis: As a child, what did you want to be when you grew up?&lt;/p&gt;

&lt;p&gt;Ofir:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;To be a creator, a builder, and to be rich.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I wanted to be rich, but in order to be successful, you need to build, change worlds, and create new things. I’m always trying to find the challenges and pain points of people within a particular industry experience and to build or create solutions to fix the problem. Of course, this can make you rich. But it’s really not about having money. It’s about actually being a creator and solving people’s problems.&lt;/p&gt;

&lt;p&gt;Stathis: What has your journey been like, from being a blogger in the tech world to eventually CEO of IOD?&lt;/p&gt;

&lt;p&gt;Ofir: &lt;/p&gt;

&lt;p&gt;A technology company that I founded was acquired. One of my goals after the acquisition was knowledge sharing, so that the company that acquired us could learn who we were and how to use our product. I first began writing informational content on SharePoint but soon moved to Tumblr because it was easier to use. I hadn’t realized that Tumblr was public, but soon people outside of the company began reading the blog to learn.&lt;/p&gt;

&lt;p&gt;Blogging led to greater influence and more opportunities for content consulting with large tech companies. It was through this consulting work that I recognized an opportunity. What I saw was a very broken world. Startups couldn’t generate content at the capacity needed in order to scale up. And they didn’t consider outsourcing that work to freelancers.&lt;/p&gt;

&lt;p&gt;While large companies had the resources to hire freelancers, I didn’t think they were hiring the right people. The big brands were using writers to generate “fluff” pieces. I thought, “How can they earn credibility when no one in the content development process has any hands-on experience?”&lt;/p&gt;

&lt;h2&gt;
  
  
  Developer Trends
&lt;/h2&gt;

&lt;p&gt;The conversation then shifted into a discussion about trends in the developer ecosystem, with Ofir and Stathis first talking about how experienced developers are playing a larger role in organizational decision-making.&lt;/p&gt;

&lt;p&gt;Stathis: Let’s talk about data! Please pick a graph from Devrelx.com/Trends and tell us what stands out to you and why.&lt;/p&gt;

&lt;p&gt;Ofir: &lt;/p&gt;

&lt;p&gt;I would choose two trends of particular interest to me: developer autonomy and the change in work-life balance. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--h97_3A8d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1vfe5lb1ob57jbpvy78w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--h97_3A8d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1vfe5lb1ob57jbpvy78w.png" alt="Image description" width="880" height="565"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Developer Autonomy
&lt;/h2&gt;

&lt;p&gt;Ofir:&lt;/p&gt;

&lt;p&gt;This trend is not surprising. Fifteen years ago, the decision-maker would have been the CIO or CTO. But today, it’s totally different because of the consumerization of technology. I can take a cybersecurity solution, try it for a month or two, and leave it. I’m not stuck with it, and I don’t need to buy servers to run it. This consumerization gives experienced developers more decision-making power than they ever had before.&lt;/p&gt;

&lt;p&gt;It’s what’s called “bottom-up” adoption of technology. It’s less about the C-level. The experts will research and find a tool, and possibly even pay for a trial themselves. And if the tool works for them, then they will go back to their boss and say, “Listen, we need this.” In most cases, the boss will trust the tech expert.&lt;/p&gt;

&lt;p&gt;At IOD, we generate deeply technical articles, because we want to attract these practitioners and support the bottom-up adoption of technology for our customers’ goals and target audience.&lt;/p&gt;

&lt;h2&gt;
  
  
  Change in Work-Life Balance
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dbQx4deT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6yav313clvl1i62d5tna.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dbQx4deT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6yav313clvl1i62d5tna.png" alt="Image description" width="880" height="565"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ofir:&lt;/p&gt;

&lt;p&gt;Many more workers, particularly developers, began to work remotely full-time during the pandemic. This was not just a change in place, but also a change of mindset. The work style of freelancers has since spilled over into the lives of those who work full time.&lt;/p&gt;

&lt;p&gt;Experienced developers are today making big decisions from a room in their apartment—critical decisions that impact the brand.&lt;/p&gt;

&lt;p&gt;And they’re not just writing code and clocking out at 5:00 p.m. They are testing their features to make sure the code they generated is secure enough. DevOps people need to make sure that production is always running. It’s their responsibility. And because of that, they need more agency to make independent decisions.&lt;/p&gt;

&lt;h2&gt;
  
  
  B2D Marketing Strategy
&lt;/h2&gt;

&lt;p&gt;The idea that developers need to be spoken to in a way that is different from a traditional business-to-business (B2B) marketing strategy is an unfamiliar concept to many marketers. Ofir explained why a business-to-developer (B2D) content strategy is necessary. &lt;/p&gt;

&lt;p&gt;Stathis: Why does a business-to-developer (B2D) company need a tech content marketing strategy and content planning?&lt;/p&gt;

&lt;p&gt;Ofir:&lt;/p&gt;

&lt;p&gt;There is this notion that content is “easy” to do. People think it’s very quick, like “I can generate an article in a week and have it done and published.” But it doesn’t work like that.&lt;/p&gt;

&lt;p&gt;In the world of what we call “expert-based content,” things take time and research. If, for example, you want to do benchmarking and showcase your product against your competition, it can take three months just for testing and research before marketing comes into play.&lt;/p&gt;

&lt;p&gt;Stathis: Where and how do you start building such a plan?&lt;/p&gt;

&lt;p&gt;Ofir:&lt;/p&gt;

&lt;p&gt;First you need to differentiate between marketing strategy and content strategy. Once you have a marketing strategy in place and SEO guidelines, then you can go in and say, “I want to know which topics are the best to share with the world.”&lt;/p&gt;

&lt;p&gt;Tech content creation comes from two perspectives: Internal creativity and market trends. While you may love an idea, is that necessarily what the market wants? You must use both to come up with a strategy that fits the audience that you want to reach.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating Developer-Focused Content
&lt;/h2&gt;

&lt;p&gt;It’s one thing to formulate a strategy, and another to craft content itself. Ofir offered some further insight and suggestions when asked about this. &lt;/p&gt;

&lt;p&gt;Stathis: How do you create tech content that developers will like and engage with? What tips would you offer a B2D marketer?&lt;/p&gt;

&lt;p&gt;Ofir:&lt;/p&gt;

&lt;p&gt;Keep this in mind: Developers today spend about 50% of their working hours doing research. If you waste their time with fluff, they will notify their colleagues that this brand is generating fluff content and warn them not to get near you.&lt;/p&gt;

&lt;p&gt;To generate traction on a topic, you need to start with what we call top-of-the-funnel content. It’s very technical, but broad.&lt;/p&gt;

&lt;p&gt;The next step is what we call the consideration stage. For example, let’s say the audience in the first stage learned about running Jenkins from you, and you wrote about some common pain points but didn’t discuss a solution. At the consideration stage, they would like to see a solution. You need to start generating articles that compare your product to others on the market.&lt;/p&gt;

&lt;p&gt;The third stage is what we call converting content. You need to show off your product and actually create, for example, how-to guides on how to solve common pain points. When you publish this article, it will help the audience understand that you are the one they need.&lt;/p&gt;

&lt;p&gt;Stathis: What are some best practices for building a tech content production machine?&lt;/p&gt;

&lt;p&gt;Ofir:&lt;/p&gt;

&lt;p&gt;Content production requires a team: an expert, a writer, and an editor. With this team, you have everything you need to generate any piece of content, whether it’s high level for the C-suite, or a very deep tech article. &lt;/p&gt;

&lt;p&gt;Make sure that when you’re assembling the team, you start with the expert. A writer is of course important, but who has already done the research and has the institutional knowledge? The team also needs a production manager to monitor timelines.&lt;/p&gt;

&lt;p&gt;I believe that establishing a high-quality, high-volume content operation requires a hybrid approach. What I mean is that in addition to internal marketing work, you also need to work with an external source. This will help the marketer establish a production line that is predictable, sustainable, and recurring—and one you can scale up when the time comes. This would be very hard to do with a single in-house writer or freelancer who is too busy to answer your calls.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Future of the Talent Marketplace
&lt;/h2&gt;

&lt;p&gt;Stathis concluded the conversation by asking Ofir what he was currently reading or learning that intrigued him. &lt;/p&gt;

&lt;p&gt;Stathis: What are you reading right now that gets you excited?&lt;/p&gt;

&lt;p&gt;Ofir:&lt;/p&gt;

&lt;p&gt;I’m currently doing some research into freelance marketplaces. It seems like the industry is moving from small gigs (e.g., Fiverr) to talent marketplaces (small freelance networks of high-quality, well-educated talent).&lt;/p&gt;

&lt;p&gt;Buyers in these networks are no longer looking just for tasks, but for talent. They’re looking for someone to help them solve a problem and to continue a relationship with that individual for a long time.&lt;/p&gt;

&lt;p&gt;This is what we’re doing at IOD. We give our customer a talent—whether an expert or an experienced writer—and they learn about the customer’s needs and essentially become part of their team.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Listen to the &lt;a href="https://bit.ly/3TfQrgh"&gt;full podcast here&lt;/a&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  About IOD
&lt;/h2&gt;

&lt;p&gt;IOD serves some of the most well-respected tech brands in cloud, DevOps, data engineering, cybersecurity, and AI, creating meaningful tech content that strengthens your brand and converts traffic into quality leads.&lt;/p&gt;

&lt;p&gt;IOD’s agile teams of vetted tech experts and professional editors work together to build you a rich content library: technical blogs, white papers, ebooks, tutorials, product comparisons, thought leadership, and more.&lt;/p&gt;

&lt;h2&gt;
  
  
  About SlashData
&lt;/h2&gt;

&lt;p&gt;SlashData is the leading analyst firm in the developer economy, tracking global software developer trends via the largest, most comprehensive developer surveys worldwide.&lt;/p&gt;

&lt;p&gt;Our research helps the top technology firms understand who developers are, what tools they are using, and where they’re headed.&lt;/p&gt;

&lt;p&gt;Developer Economics, SlashData’s flagship research program, tracks technologies and developer trends, from mobile, IoT, cloud, and desktop to games, AR/VR, and machine learning. Our semi-annual surveys reach more than 40,000 developers in over 150 countries and engage developers across all regions, platforms, and developer segments.&lt;/p&gt;

</description>
      <category>devrel</category>
      <category>techmarketing</category>
    </item>
    <item>
      <title>2022 Cloud Security Trends: What Experts Predict</title>
      <dc:creator>We are IOD</dc:creator>
      <pubDate>Tue, 09 Aug 2022 13:39:42 +0000</pubDate>
      <link>https://dev.to/iamondemand/2022-cloud-security-trends-what-experts-predict-3ag5</link>
      <guid>https://dev.to/iamondemand/2022-cloud-security-trends-what-experts-predict-3ag5</guid>
      <description>&lt;p&gt;We’ve been hearing about the transition to the cloud for close to a decade, and over time, many companies have been making gradual moves from on-premises infrastructure to alternatives hosted by AWS, Azure, and Google Cloud. But in the last two years, spurred mainly by the COVID-19 pandemic and work from home policies, companies were forced to make the jump to cloud infrastructure in a matter of weeks.&lt;/p&gt;

&lt;p&gt;This rapid shift, while remarkable, has left some organizations more vulnerable to threats from malicious actors. This same period has seen some of history’s most severe cyberattacks, including those on SolarWinds, Kaseya, Colonial Pipeline, and JBS Foods.&lt;/p&gt;

&lt;p&gt;So while transitioning to the cloud was a revolutionary move for businesses, building and maintaining cloud security is now the vital next step. We asked some leading cloud security experts for their predictions about cloud security in the coming year, covering: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes security&lt;/li&gt;
&lt;li&gt;Security management for multi-cloud environments&lt;/li&gt;
&lt;li&gt;Attack surface management expansion&lt;/li&gt;
&lt;li&gt;SaaS governance maturity&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is a collection of their thoughts and some insights of our own.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes Security
&lt;/h2&gt;

&lt;p&gt;Container-based architecture, and in particular the use of Kubernetes, is growing steadily among businesses. As of December 2021, there were more than 5.6 million Kubernetes developers, representing a 67% increase from 2020.&lt;/p&gt;

&lt;p&gt;Kubernetes and containers allow for faster application development. However, they were designed for developer convenience and not necessarily security. Red Hat’s 2022 State of Kubernetes Security report found that issues with security are hindering even more wide-scale Kubernetes adoption and application innovation. Among the 300 security professionals surveyed, 93% reported at least one security incident in their Kubernetes environment over the past year, while 31% noted it had resulted in customer or revenue loss.&lt;/p&gt;

&lt;p&gt;Many resources offer guidance on Kubernetes security best practices, including the official Kubernetes documentation. Companies should review them carefully as they transition to container-based architecture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VppoSD4F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1idl0h7x9yk3r6lq0al4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VppoSD4F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1idl0h7x9yk3r6lq0al4.png" alt="Image description" width="870" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Multicloud Security Management
&lt;/h2&gt;

&lt;p&gt;A shift to the cloud rarely means migrating all of your data to the infrastructure of one vendor. A multicloud environment where data is split between multiple vendors with private and public options is increasingly becoming the norm. However, the use of this type of environment presents some challenges.&lt;/p&gt;

&lt;p&gt;According to a report from the Cloud Security Alliance, the most common challenges companies face in a multicloud environment include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lack of professional security expertise&lt;/li&gt;
&lt;li&gt;Regulatory and industry compliance concerns&lt;/li&gt;
&lt;li&gt;Lack of visibility into cloud resources&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The report also suggests that these multicloud issues will lead to the development of more security tools to meet the needs of this environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--I6Skul8G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ojzrd4bb6mo4zrzz1mqn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--I6Skul8G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ojzrd4bb6mo4zrzz1mqn.png" alt="Image description" width="874" height="566"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Attack Surface Management Expansion
&lt;/h2&gt;

&lt;p&gt;The move to hybrid and remote work environments exponentially increased the number of attack surfaces with the potential to be exploited. Mobile phones, tablets, home routers, and IoT devices could all be at risk if companies don’t have proper security procedures in place.&lt;/p&gt;

&lt;p&gt;Attack-surface management tools are evolving accordingly, with companies now also monitoring remote hardware, SaaS applications, and third-party supply chain vendors.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--A2AANZHz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vzidkvxpdn7igck6mori.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A2AANZHz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vzidkvxpdn7igck6mori.png" alt="Image description" width="828" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  SaaS Governance Maturity
&lt;/h2&gt;

&lt;p&gt;When thinking about recent high-profile cyberattacks, it is worth considering who the ultimate victims are. In the case of SolarWinds, U.S. government agencies and major corporations were the actual targets and bore the brunt of the damage. In the case of the Kaseya ransomware attack, over 1,500 small businesses that had received the software from their MSPs were ultimately affected. &lt;/p&gt;

&lt;p&gt;These situations and others like them are forcing companies to reevaluate their technology supply chains and develop measures to hold vendors accountable for security breaches.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5dxXpRlP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v9wio1tzj9ilr9gjq29q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5dxXpRlP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v9wio1tzj9ilr9gjq29q.png" alt="Image description" width="854" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;A major shift to the cloud has led to an additional shift in how businesses secure their operations. Cloud security professionals expect us to see an increased focus on Kubernetes security, multi-cloud management, attack surface management, and SaaS vendor accountability in the years to come. &lt;/p&gt;

&lt;p&gt;Teams of tech marketers are keeping up with these security trends and communicating best practices to audiences of all levels.&lt;/p&gt;

&lt;p&gt;What cloud security trend do you see in 2022 and beyond? Let us know!&lt;/p&gt;

&lt;p&gt;This article was originally published on the &lt;a href="https://iamondemand.com/blog/2022-cloud-security-what-experts-predict/"&gt;IOD Blog&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>A Guide for Enterprises – Migrating to the AWS Cloud: Part 1</title>
      <dc:creator>We are IOD</dc:creator>
      <pubDate>Wed, 01 Jun 2022 15:05:49 +0000</pubDate>
      <link>https://dev.to/iod/a-guide-for-enterprises-migrating-to-the-aws-cloud-part-1-53h2</link>
      <guid>https://dev.to/iod/a-guide-for-enterprises-migrating-to-the-aws-cloud-part-1-53h2</guid>
      <description>&lt;p&gt;Amazon S3 this year celebrated its 16th birthday. Launched on Pi Day (March 14) in 2006, the extremely popular cloud storage service was among AWS’ earliest offerings, along with Amazon Simple Queue Service and EC2. With the release of S3, Amazon revolutionized the world of computer storage and forever changed the way organizations look at IT infrastructure—compute, storage and network. &lt;/p&gt;

&lt;p&gt;Today, AWS is the most comprehensive and broadly adopted public cloud platform, with over 200 services, 25 geographic regions, and more than 80 availability zones around the world. AWS enables anyone—from individuals to international Fortune 500 companies—to leverage enterprise-grade services with a cost-efficient pay-as-you-go pricing model.&lt;/p&gt;

&lt;p&gt;Over the past decade, and even more so since the global pandemic disruption, more and more companies have been shifting to public cloud platforms—not only to reduce their physical data-center footprints but also to innovate and adapt more quickly to changing demand. When it came to the enterprise world, certain industries were slower than others to adopt the public cloud, but even now that shift has become ubiquitous. &lt;/p&gt;

&lt;p&gt;Large organizations, however, face inherent challenges regarding cloud adoption, such as procurement, legal, and financial aspects. But the biggest factor for the delayed start across many industries has been a lack of services capable of addressing some of their specific requirements related to geographic location, compliance, and specialized hardware, among others. Still, with the maturity and evolution of cloud services, there is hardly any reason left to prevent organizations from adopting the public cloud.&lt;/p&gt;

&lt;p&gt;This article is the first in a two-part series on moving your enterprise workloads to AWS. In this post, we will highlight some of the key points to consider when getting started. &lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud Migration Models and How to Utilize Them
&lt;/h2&gt;

&lt;p&gt;There are a few well-known strategies to migrate to the public cloud. The most popular approaches are known as rehosting (lift-and-shift), replatforming, and rebuilding.&lt;/p&gt;

&lt;p&gt;All public cloud vendors provide infrastructure-as-a-service functionalities that enable organizations to rehost their existing infrastructure (virtual machines, data storage, network, etc.) to the cloud. According to Gartner, AWS is the current leader in the infrastructure-as-a-service (IaaS) segment, and by being a common denominator across on-premises and public cloud providers, IaaS remains one of the most popular and easiest ways to get started with AWS. &lt;/p&gt;

&lt;p&gt;IaaS provides maximum control but also requires the most management effort, such as configuring the system, monitoring and adjusting resources, and applying security patches. For a successful migration, your IT team will need to understand how AWS works at its core.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ehFX509a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/juvxkbew0q9wz16bwzrv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ehFX509a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/juvxkbew0q9wz16bwzrv.png" alt="Image description" width="880" height="884"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 1: Gartner’s 2020 Magic Quadrant for Cloud &lt;br&gt;
Infrastructure &amp;amp; Platform Services&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;While IaaS is a popular starting point for migration, it may not be the most effective way to use AWS services. Rather, to build modern, cloud-native, scalable, and cost-effective applications, there are other categories to consider, such as platform as a service (PaaS) and software as a service (SaaS). And within these, it is worth exploring the concepts of functions as a service (FaaS) and containers as a service (CaaS), which radically changed the computing paradigm for software engineers. &lt;/p&gt;

&lt;p&gt;These services share the same purpose: to abstract the underlying infrastructure pieces and provide developers with more freedom to focus on the application, rather than the infrastructure. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Platform as a Service&lt;/strong&gt;&lt;br&gt;
PaaS encapsulates platform configurations and OS-level tasks. For example, AWS Elastic Beanstalk automatically handles application deployment, capacity provisioning, load balancing, and autoscaling without additional manual effort. Another great example is Amazon RDS, the managed relational database service that comes with out-of-the-box support for automatic snapshots, Multi-AZ deployments, and read replicas, among many other features.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Software as a Service&lt;/strong&gt;&lt;br&gt;
SaaS encapsulates all internal details and provides an API-based interface to start using the service. One example is Amazon SES (Simple Email Service), which enables the programmatic sending and receiving of emails via an API. Another popular example is AWS Amplify, which enables developers to build and deploy a web or mobile application without any operational overhead. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qIkf10ty--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ybvzv3uhtl3zu20tp091.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qIkf10ty--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ybvzv3uhtl3zu20tp091.png" alt="Image description" width="880" height="562"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 2: Evolution of cloud services (Source: Red Hat)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;While using a single cloud provider such as AWS is typical for most organizations, a multicloud strategy is often the preferred choice for large enterprises. This provides more flexibility in M&amp;amp;A operations and offers additional options such as access to exclusive geographical locations, making it easier for an organization to meet business requirements related to latency or government regulations. &lt;/p&gt;

&lt;h2&gt;
  
  
  Understand Why You Want to Move to AWS
&lt;/h2&gt;

&lt;p&gt;Every organization has different goals and priorities when beginning its cloud migration. Likewise, AWS has many services and features that can be utilized to accommodate different use cases, such as data backup, disaster recovery, high availability, low-cost storage, big-data processing, and more. &lt;/p&gt;

&lt;p&gt;The most important parameter for a successful migration is understanding the core reasoning behind the move. Enterprises should ask themselves: Why do I want to migrate to AWS? The answer will help all stakeholders get on the same page. It will also help IT teams choose the right set of AWS services (based on the different migration models discussed earlier). For example, AWS provides multiple storage services and different types of load balancers, and selecting the right one depends on your use case and business requirements.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security in the Cloud
&lt;/h2&gt;

&lt;p&gt;Security is a critical topic for any organization. Historically, security and compliance concerns have been among the reasons many organizations, especially large enterprises, have been reluctant to adopt the cloud. Over the years, however, AWS has focused on making sure its infrastructure meets the strictest security and compliance standards; it also seeks to offer the proper tools and services for organizations in sectors such as finance, healthcare, and government to be able to run their systems in AWS Cloud. &lt;/p&gt;

&lt;p&gt;There is a common misconception that all cloud workloads must be internet-facing. Naturally, this is not true, and one can easily build a completely private and isolated workload environment. Yet, public-facing workloads such as e-commerce applications were among the first to benefit from cloud-native capabilities such as autoscaling and pay-as-you-go pricing. &lt;/p&gt;

&lt;p&gt;If you are looking to protect your internet-based applications from external threats like DDoS attacks or any of the vulnerabilities on OWASP’s list (injection, broken authentication, sensitive data exposure, etc.), AWS WAF and AWS Shield are great options. These built-in services leverage AWS’ own security expertise and make it easier for organizations to safely build globally distributed applications. &lt;/p&gt;
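&lt;p&gt;As a rough sketch of what configuring such protection involves, the snippet below assembles the JSON body of a hypothetical rate-based AWS WAF (v2) rule that blocks source IPs exceeding a request threshold. The rule name, limit, and metric name are invented for illustration; consult the AWS WAF documentation for the full schema.&lt;/p&gt;

```python
import json

# Hypothetical rate-based AWS WAFv2 rule: block any source IP that
# exceeds 2,000 requests in a 5-minute window. The name, limit, and
# metric name are illustrative, not a recommendation.
rate_limit_rule = {
    "Name": "BlockHighVolumeIPs",
    "Priority": 1,
    "Statement": {
        "RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "BlockHighVolumeIPs",
    },
}

# Serialize to the JSON form a web ACL definition would carry.
print(json.dumps(rate_limit_rule, indent=2))
```

&lt;p&gt;A document like this would typically be attached to a web ACL via the WAF API or an infrastructure-as-code template.&lt;/p&gt;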

&lt;p&gt;Here, I’ll take a closer look at how AWS manages security in the cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Shared Responsibility Model
&lt;/h2&gt;

&lt;p&gt;According to AWS’ shared responsibility model, security and compliance are the shared responsibility of AWS and its customers. While AWS manages “security of the cloud,” the customer manages “security in the cloud.” &lt;/p&gt;

&lt;p&gt;This means that AWS is responsible for protecting the infrastructure running all of its services, including the hardware, software, networking, and data-center facilities. However, customers are responsible for configuring and managing the AWS service(s) they decide to use. For instance, if you use Amazon EC2 instances to host your application, you—not AWS—will be responsible for the configurations and management of those instances. &lt;/p&gt;

&lt;p&gt;The diagram below explains who protects which segments and how much control your IT team has over the public cloud infrastructure:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--o1twWpUF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z3d9id6atiwkjovio37g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--o1twWpUF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z3d9id6atiwkjovio37g.png" alt="Image description" width="880" height="482"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 3: AWS shared responsibility model for cloud services (Source: AWS)&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Organizational Structure
&lt;/h2&gt;

&lt;p&gt;In an on-premises environment, the IT team can organize and restrict different applications with the help of physical networks and boundaries. In an AWS environment, you can run all of your applications in the same account. However, this is not a recommended practice, as it may not be compliant with regulatory requirements (e.g., financial or healthcare applications that require process and data isolation for risk mitigation). &lt;/p&gt;

&lt;p&gt;AWS Organizations is an account-management service that allows your IT team to easily create and manage multiple AWS accounts with the required security controls and supervision. By keeping different environments in different AWS accounts, you can limit potential security threats while simultaneously maintaining overall governance. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DAQZwT6r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i6j6dwqw556rcqsc7zyv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DAQZwT6r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i6j6dwqw556rcqsc7zyv.png" alt="Image description" width="880" height="252"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 4: AWS Organizations can be used to create and manage group accounts (Source: AWS)&lt;/em&gt;&lt;/p&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--87tLgal1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2thfare64d9of04sa6s5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--87tLgal1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2thfare64d9of04sa6s5.png" alt="Image description" width="880" height="508"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 5: AWS Single Sign-On (SSO) with enterprise identity systems like Microsoft Active Directory (Source: AWS)&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Managing Governance and Compliance in AWS
&lt;/h2&gt;

&lt;p&gt;Enterprise IT teams have to maintain the inventory of resources in use. For security and compliance reasons, they also have to regularly update the infrastructure and keep track of the changes. Below, I’ll review a few management and governance services that AWS provides. These are designed with simplicity, scale, and cost-effectiveness in mind, so they’re suitable for organizations of any size.&lt;/p&gt;

&lt;h2&gt;
  
  
  Management Services
&lt;/h2&gt;

&lt;p&gt;In a distributed, multi-account setup, you don’t want to completely depend on a central IT team to manage and perform all tasks manually. This will slow down the formation of a new environment and will also burden your team with unnecessary work. AWS has a number of management services that help IT teams carry out these tasks securely and reliably. &lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Control Tower
&lt;/h2&gt;

&lt;p&gt;AWS Control Tower helps set up a baseline environment in an automated, controlled way that follows organizational policies. Control Tower enables the creation of rules, called guardrails, and provides recommendations for them. These help organizations enforce their policies via service control policies (SCPs) and can also detect policy violations so you stay compliant—functionalities you can automate for both new and existing accounts.&lt;/p&gt;
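&lt;p&gt;To make the idea of an SCP concrete, here is a minimal, hypothetical policy document that denies any action requested outside two approved regions. The statement ID and region list are invented for the example; real policies are usually more nuanced (e.g., exempting global services).&lt;/p&gt;

```python
import json

# Hypothetical service control policy (SCP): deny every action that
# targets a region outside the approved list. Sid and regions are
# invented for illustration.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideApprovedRegions",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": ["us-east-1", "eu-west-1"]
                }
            },
        }
    ],
}

# Serialize to the JSON document you would attach in AWS Organizations.
print(json.dumps(scp, indent=2))
```

&lt;p&gt;Attached to an organizational unit via AWS Organizations, a policy like this caps what any account in that OU can do, regardless of the IAM permissions granted inside the account.&lt;/p&gt;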

&lt;h2&gt;
  
  
  AWS Systems Manager
&lt;/h2&gt;

&lt;p&gt;AWS Systems Manager helps you centralize data from multiple AWS services and automate tasks across AWS resources. The service has some important features, including: &lt;/p&gt;

&lt;p&gt;Sessions Manager: For logging into instances from a web browser (among other things)&lt;br&gt;
Parameter Store: For storing important configurations, like passwords and database connection details, in an encrypted format&lt;br&gt;
Inventory: For collecting the configuration and inventory of instances&lt;br&gt;
Patch Manager: For easily applying software patches to a group of instances&lt;br&gt;
Governance Services&lt;br&gt;
Organizations want to achieve business agility by moving to the cloud, but at the same time, they want to maintain the necessary governance control. There are a few key AWS services worth exploring that provide auditing and compliance capabilities so that you can securely govern your resources at any scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS CloudTrail
&lt;/h2&gt;

&lt;p&gt;AWS CloudTrail is the source-of-truth service for everything that happens in the AWS environment. Virtually every change in your AWS environment is made through a platform API call, and CloudTrail keeps a record of every API call: who made it, what it did, and when it was placed. This helps track user and resource activity in your cloud environment. &lt;/p&gt;
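&lt;p&gt;Each CloudTrail record is a JSON document that answers exactly those questions. A minimal sketch in Python (the record below is invented for illustration, but the field names follow the CloudTrail event schema):&lt;/p&gt;

```python
import json

# A trimmed, hypothetical CloudTrail record; field names follow the
# documented CloudTrail event schema, values are invented.
record = json.loads("""
{
  "eventTime": "2022-04-01T12:34:56Z",
  "eventSource": "ec2.amazonaws.com",
  "eventName": "RunInstances",
  "awsRegion": "eu-west-1",
  "userIdentity": {"type": "IAMUser", "userName": "alice"}
}
""")

# Who made the call, what was called, and when.
who = record["userIdentity"].get("userName", record["userIdentity"]["type"])
what = record["eventSource"].split(".")[0] + ":" + record["eventName"]
when = record["eventTime"]

print(who, "called", what, "at", when)
```

&lt;p&gt;In practice, you would read these records from the S3 bucket CloudTrail delivers to, or query them with Athena, but the shape of each event is the same.&lt;/p&gt;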

&lt;h2&gt;
  
  
  AWS Config
&lt;/h2&gt;

&lt;p&gt;In a large environment, it can be difficult to keep track of or identify changes, as well as maintain a snapshot of the environment at a particular point in time. AWS Config provides the inventory, history, and change notifications of your cloud resources and their configuration to enable better governance and an improved security posture. &lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Trusted Advisor and AWS Well-Architected Tool
&lt;/h2&gt;

&lt;p&gt;After working with thousands of enterprise customers over the years, AWS has gathered together its knowledge of best practices and successful cloud ops into two services: AWS Trusted Advisor and AWS Well-Architected Tool. &lt;/p&gt;

&lt;p&gt;AWS Trusted Advisor analyzes your environment, offering up recommendations for cost, performance, security, fault tolerance, and service limits per proven industry best practices. &lt;/p&gt;

&lt;p&gt;AWS Well-Architected Tool enables engineering teams to assess the state of their workloads and ways of working by comparing them to the latest AWS architecture best practices. This tool is designed to get feedback on different aspects of your application—operational excellence, performance efficiency, reliability, security, sustainability, and cost optimization—and then generates a risk scorecard for each of these pillars.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;There is a saying among large organizations that have successfully migrated to AWS Cloud: “Crawl, walk, run.” What does this mean for you? &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Crawl: Identify and set up a clear plan and resources to build a strong cloud foundation.&lt;/li&gt;
&lt;li&gt;Walk: Migrate and monitor your processes. This phase is all about learning and adopting the best cloud practices.&lt;/li&gt;
&lt;li&gt;Run: Iterate and modernize to reap the benefits of cloud computing. This is where you identify and innovate your business processes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In short, define your goals, find the right strategy and people to accomplish them, and continue on your cloud path. What you learn along the way will help you evolve and adapt. As you probably know by now, cloud computing is here to stay, so the time to move your business to the cloud is now!&lt;/p&gt;

&lt;p&gt;In the next post, we will cover areas such as operational monitoring, resource management, and cloud cost optimization, as well as discuss how to create an effective team culture for successful cloud adoption. &lt;/p&gt;

&lt;p&gt;This article by Bruno Almeida was originally posted on the &lt;a href="https://iamondemand.com/blog/a-guide-for-enterprises-migrating-to-the-aws-cloud-part-1/"&gt;IOD Blog&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Our Take on Kubernetes: 6 Top Articles to Get You up to Speed</title>
      <dc:creator>We are IOD</dc:creator>
      <pubDate>Mon, 16 May 2022 13:31:18 +0000</pubDate>
      <link>https://dev.to/iod/our-take-on-kubernetes-6-top-articles-to-get-you-up-to-speed-49n9</link>
      <guid>https://dev.to/iod/our-take-on-kubernetes-6-top-articles-to-get-you-up-to-speed-49n9</guid>
      <description>&lt;p&gt;In anticipation of the KubeCon + CloudNativeCon conference that will take place in Valencia, Spain, on May 16-20 (and virtually), we wanted to share with you some key takeaways from six recent Kubernetes articles that we found particularly interesting.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. The Top 5 Kubernetes Configuration Mistakes—and How to Avoid Them by Komodor
&lt;/h2&gt;

&lt;p&gt;This article describes how to avoid five common syntax, provisioning, and resource management misconfigurations that can cause cluster-wide performance, availability, and stability issues. For example, poorly configured operators for facilitating third-party integrations can end up wantonly consuming limited resources, causing runtime errors such as OOM (out of memory). Or using a single container to handle all ingress traffic can take down the cluster if there are traffic spikes.&lt;/p&gt;

&lt;p&gt;Our main takeaway is that these and other configuration mistakes must be taken into account during the design, development, and testing stages in order to avoid runtime performance issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. The Ultimate Kubectl Commands Cheat Sheet by Komodor
&lt;/h2&gt;

&lt;p&gt;This article is an invaluable resource on how to properly use the kubectl command line to interact optimally with Kubernetes clusters. The various kubectl options and filters are critical for getting or switching contexts, obtaining the names of containers in a running pod, creating or getting values from secrets, testing RBAC rules, and more.&lt;/p&gt;

&lt;p&gt;Our main takeaway is that complete mastery of the kubectl command is an essential Kubernetes development skill. In addition to this article, be sure to reference the official kubectl page.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Kubernetes Capacity Planning: How to Rightsize the Requests of Your Cluster by Sysdig
&lt;/h2&gt;

&lt;p&gt;Too much capacity is wasteful and needlessly costly. Too little capacity can cause performance bottlenecks. This article provides important insights on the art and science of rightsizing Kubernetes capacity. Our main takeaways are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Make sure to have Prometheus as an add-on for tracking cluster resource usage metrics.&lt;/li&gt;
&lt;li&gt;Use Kubernetes limits and requests whenever you can.&lt;/li&gt;
&lt;li&gt;Size your clusters based on the resources your pods are estimated to need and use.&lt;/li&gt;
&lt;li&gt;Utilize cloud-native autoscaling features if you’re deploying on public clouds.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Although not mentioned explicitly in the article, we would also add the importance of utilizing Kubernetes’ horizontal and vertical pod autoscaling features (HPA and VPA) to rightsize your clusters.&lt;/p&gt;
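&lt;p&gt;The limits and requests mentioned above are declared per container in the pod spec. A minimal sketch (names and values are illustrative, not recommendations):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api-server            # illustrative name
spec:
  containers:
    - name: api
      image: example/api:1.0  # placeholder image
      resources:
        requests:             # what the scheduler reserves on a node
          cpu: "250m"
          memory: "256Mi"
        limits:               # hard ceiling; exceeding memory gets the container OOM-killed
          cpu: "500m"
          memory: "512Mi"
```

&lt;p&gt;Rightsizing is then a matter of comparing these declared values against the actual usage Prometheus reports.&lt;/p&gt;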

&lt;h2&gt;
  
  
  4. Kubernetes 1.24 – What’s New? by Sysdig
&lt;/h2&gt;

&lt;p&gt;Kubernetes 1.24 was released on May 3. This article summarizes the most notable new, evolving, and deprecated features across a number of key categories: APIs, apps, auth, network, nodes, scheduling, and storage.&lt;/p&gt;

&lt;p&gt;Our main takeaway is that, as a Kubernetes developer, it’s important that you stay on top of where the Kubernetes project is headed and what its timeline is moving forward. In addition to this article, two other helpful resources are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Official Kubernetes 1.24 release page&lt;/li&gt;
&lt;li&gt;Release plan and schedule for Kubernetes 1.25&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  5. Rancher vs. Kubernetes: It’s Not Either Or by Kubecost
&lt;/h2&gt;

&lt;p&gt;Kubernetes and Rancher are both important open-source container management projects, each with a large community of users and contributors. This article starts by summarizing the key features of each project:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aawu4T3P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bile9fdatmh7imn46fxe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aawu4T3P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bile9fdatmh7imn46fxe.png" alt="Image description" width="880" height="641"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The main takeaway is that the two are complementary. Kubernetes focuses on orchestrating resources within a single cluster, while Rancher eases Kubernetes cluster management at scale. So, for example, using Rancher to deploy Kubecost across a Rancher project provides end-to-end visibility into and more granular management of Kubernetes cluster costs, as well as cluster health and efficiency.&lt;/p&gt;

&lt;p&gt;We would also like to point out that Rancher is being embraced by cloud providers for managing cloud-native Kubernetes clusters. See AWS’ reference deployment Rancher for Amazon EKS.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Kubernetes kOps: Step-By-Step Example &amp;amp; Alternatives by Kubecost
&lt;/h2&gt;

&lt;p&gt;Kubernetes kOps is an open-source command line tool for automating:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configuration, maintenance, and management of Kubernetes clusters&lt;/li&gt;
&lt;li&gt;Provisioning of the cloud infrastructure to run them&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Although the article points out that there are alternatives to kOps (Kubespray, eksctl, and kubeadm), kOps is the only tool that is both provider-agnostic (or at least will be soon) and able to support infrastructure provisioning. It then goes on to provide a hands-on example of how to use kOps to set up a Kubernetes cluster in AWS.&lt;/p&gt;

&lt;p&gt;Our main takeaway is that tools like kOps are an important part of an organization’s Kubernetes stack, making it easier to manage and orchestrate Kubernetes clusters at scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;The Kubernetes ecosystem is continuously evolving, and we here at IOD make it our business to keep on top of emerging innovations, trends, and tips. In this article, we shared with you our key takeaways on how to: avoid common misconfigurations, fully leverage the kubectl command, rightsize Kubernetes capacity, and incorporate both kOps and Rancher into your Kubernetes stack. We also looked at what’s new (and what’s gone) in the latest version released earlier this month.&lt;/p&gt;

&lt;p&gt;Tap into &lt;a href="https://iamondemand.com/content-types/"&gt;IOD’s extensive talent network&lt;/a&gt; of K8s, DevOps, cloud experts, and more to create content that speaks to devs. &lt;a href="https://iamondemand.com/contact-us/"&gt;Get started today&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>kubecon</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Targeting Developers with Tech Content: 4 Tips for B2D Marketers</title>
      <dc:creator>We are IOD</dc:creator>
      <pubDate>Thu, 12 May 2022 14:08:17 +0000</pubDate>
      <link>https://dev.to/iod/targeting-developers-with-tech-content-4-tips-for-b2d-marketers-2bj3</link>
      <guid>https://dev.to/iod/targeting-developers-with-tech-content-4-tips-for-b2d-marketers-2bj3</guid>
      <description>&lt;p&gt;Over the years, tech content marketers have frequently prioritized writing business content targeted at the C-suite. But, within the past decade, a bottom-up adoption method has gained more ground, inspiring marketers to incorporate increasingly more content for developers into their content strategies.&lt;/p&gt;

&lt;p&gt;Now, writing for developer practitioners has become essential for tech organizations to hit their KPIs and meet their goals. After all, developer team leads &lt;a href="https://www.devrelx.com/trends?lightbox=comp-kisqhm6d3__85a0f937-9ce5-419d-959a-80fd18ac461b_runtime_dataItem-kisqhm6e"&gt;influence technology decisions 67% of the time&lt;/a&gt;, playing a major role in deciding which tools are incorporated into workflows and processes. Creating more content for developers can play a critical role in the sales process, encouraging practitioners as they test free products or product trials.&lt;/p&gt;

&lt;p&gt;However, many brands struggle to create content that resonates with developers. Often, the knowledge gap between tech marketers and practitioners causes “business-to-developer content” (or B2D content) to fall short of a technical audience’s expectations.&lt;/p&gt;

&lt;p&gt;Practitioners have a different relationship to your product than other decision-makers. Since they’re using your product every day, developers need to see what’s in it for them before they choose to work with your brand. Plus, they’re searching for practical, precise content that solves their issues, searching queries about “how to do x” or “bug in y.” Creating B2D content like this builds trust in your product, driving developers to recommend your product or service throughout their organization.&lt;/p&gt;

&lt;p&gt;Here are four ways your brand can master B2D content marketing and start creating tech content that makes developers want to work with you.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Focus on Applications vs. Philosophy
&lt;/h2&gt;

&lt;p&gt;Traditional, high-level marketing content doesn’t resonate with developers, exposing the knowledge gap between marketers and practitioners. That’s because this content focuses too much on the philosophy behind your product rather than how it actually works.&lt;/p&gt;

&lt;p&gt;Developers want to go beyond the theory, seeing the practical ways they can use your product or service to solve their current challenges. But dry technical content is a dime a dozen; even though developers may be used to slogging through technical manuals, that doesn’t mean it’s the best use of their time, especially for a tool not currently in their toolkit. While successful tech content undeniably emphasizes application, step-by-step walkthroughs without context aren’t enough to maintain a developer’s interest, either.&lt;/p&gt;

&lt;p&gt;Instead, they need clear insight into two elements to see if your solution is the best tool to solve their challenges: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The philosophy behind your product—what your product is and how you solve a developer’s issues on a high level, as demonstrated through best practices and customer use cases with technical examples that include diagrams and code snippets.&lt;/li&gt;
&lt;li&gt;Real-world applications for your solution—like actionable how-to or walkthrough content that showcases specific capabilities, features, or workflows and examples of how they can reproduce the same results.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Creating content with these elements builds a developer’s trust in your solution and offers clear insight into how quickly and efficiently developers can benefit from adding your product to their workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Design Easily Scannable, Actionable Content
&lt;/h2&gt;

&lt;p&gt;Most developers won’t call your company’s help desk to explain their current challenge, learn how to use your product, and compare your product to other alternatives. Instead, they typically start by searching on Google, (hopefully) landing on one of your website pages, and exploring for themselves to see how your solution can support their existing needs and workflows. Additionally, when developers encounter a problem they don’t know how to solve, they often turn to &lt;a href="https://www.researchgate.net/publication/315975127_What_Do_Developers_Use_the_Crowd_For_A_Study_Using_Stack_Overflow"&gt;crowdsourcing question and answer websites &lt;/a&gt;like Quora, StackOverflow, or Reddit to source answers to specific questions from other practitioners.&lt;/p&gt;

&lt;p&gt;Yet, we still see marketers try to incorporate tech content into a more traditional, long-form blog post format with more story than necessary. This format doesn’t help developers get the quick, easy answer they need to solve their problems. &lt;a href="https://www.devrelx.com/post/content-that-developers-love"&gt;Experienced tech writer Raphael Mun&lt;/a&gt; recommends structuring tech content more like online recipes instead of traditional corporate blog content.&lt;/p&gt;

&lt;p&gt;Once a developer lands on one of your blog posts, they’ll quickly scroll to gauge how long the article is, what it covers, and how technical it is. Then, they can scan your content, skip past the story, and find a solution more quickly.&lt;/p&gt;

&lt;p&gt;To create scannable and actionable content for developers, provide an introduction to the use case or problem your product addresses. Then, incorporate common questions developers ask on popular question and answer websites as section headers. Including these questions as headers makes it more likely that developers will discover and consult your blog post while searching for answers on Google. Plus, these headers make it easier for developers to scan your content and find the answer they’re looking for.&lt;/p&gt;

&lt;p&gt;Don’t forget to keep your website content current, too. Things change quickly in tech, so it’s important to have content that continually supports developers with accurate code snippets, up-to-date screenshots showing recent platform updates, effective walkthroughs, and popular integrations. Regularly review older content to see if it still aligns with current best practices and confirm that it reflects how your product works without any bugs.&lt;/p&gt;

&lt;p&gt;Timely content makes your brand look more credible to developers and makes them more likely to turn to your website when they’re searching for solutions.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Emphasize One Use Case At a Time
&lt;/h2&gt;

&lt;p&gt;For a marketer, it can be tempting to create high-level content, like listicles, that shows off all the great benefits your platform has to offer. However, developers need to see that your solution works with their existing tech stack and can solve a specific challenge they’re currently experiencing. Maintaining a single scope in your content allows you to give developers the support they’re looking for right away.&lt;/p&gt;

&lt;p&gt;Not sure which use cases to focus on? &lt;a href="https://iamondemand.com/blog/marketing-it-love-hate-or-just-love/"&gt;Leverage internal experts&lt;/a&gt; to serve as a focus group to research and produce relevant, meaningful content that speaks to your target dev audiences. Your product managers should be able to offer insight into the requirements and questions clients have. Then, they should also walk you through the platform and show how your online service helps solve each specific requirement. &lt;/p&gt;

&lt;p&gt;You should also consult clients directly to learn about the problems they’re facing and how they’re solving them. Think of your tech content more like case studies than blog posts; give leading senior dev practitioners at other companies an opportunity to showcase the cool and innovative ways they’re using your product to solve their problems and accomplish their goals on your blog. &lt;a href="https://iamondemand.com/blog/5-key-considerations-for-building-an-authentic-content-plan/"&gt;Interviewing expert practitioners&lt;/a&gt; currently experimenting with your product can make your content even richer, offering insight into the real-world problems your product solves and the practical results your solution provides.&lt;/p&gt;

&lt;p&gt;While showing how other developers solved their challenges, explain the practitioner’s background along with what it took time- and resource-wise to generate those results. This gives developers a realistic view into how they can use your solution to create those results on their own. Then, incorporate testable, “try it yourself” examples for developers to experiment with. Detailed walkthroughs with screenshots and code snippets encourage them to try new things while using your platform.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Don’t Be Afraid to Dive Deep Into Bits and Bytes
&lt;/h2&gt;

&lt;p&gt;Trying to find the information they’re looking for amid business results and marketing claims will often drive a developer to bounce from your website and search for answers on StackOverflow instead. It’s not because developers don’t care about business results—they’re still interested in learning how your solution can decrease mean time to resolution or increase their team’s productivity. However, those results don’t support a developer’s immediate needs.&lt;/p&gt;

&lt;p&gt;Instead of focusing on the benefits of your solution, include links to other blog posts reporting on these business results, keeping content for developers focused on the technology itself. Dive deep into the details specific developers need to see if your solution helps solve their problems.&lt;/p&gt;

&lt;p&gt;Technical content can help your product sell itself if it’s easy to understand and clearly demonstrates the impacts your product has on solving IT problems. Leave the product-centric language and sales content out, focusing instead on the intricate details that support your use cases. Maintaining this focus helps to &lt;a href="https://iamondemand.com/blog/subject-matter-expert-sme-content-paradox-pulling-teeth/"&gt;keep the tone authentic&lt;/a&gt;, helpful, and knowledgeable for your developer audience.&lt;/p&gt;

&lt;p&gt;Plus, not every piece of content should be intended for every developer. Rather than focusing on making generalized content to support wider audiences, home in on the needs of specific developer types—like front-end, back-end, DevOps, or fullstack—with varied experience levels in different dev pillars. For example, create content dealing with a specific aspect (e.g., security or scale) or a specific open-source tool (e.g., Kubernetes). This ensures that you’re speaking the developer’s language with content intended to suit their very specific needs.&lt;/p&gt;

&lt;p&gt;One way to capture the right tone is to ask internal SMEs or external developers who use your product to write content detailing a specific use case. Then, have your marketing team &lt;a href="https://iamondemand.com/blog/the-case-for-shifting-editorial-left-breaking-down-silos-between-marketing-editorial/"&gt;edit the content for clarity&lt;/a&gt;, voice, and flow (including planting the relevant CTAs). This helps marketers successfully target an experienced audience, contribute to the ongoing conversation around your product, and keep developers moving through the sales funnel even if the marketers don’t have the relevant expertise themselves.&lt;/p&gt;

&lt;h2&gt;
  
  
  5 Things You Should Keep in Mind When Creating B2D Content
&lt;/h2&gt;

&lt;p&gt;Creating compelling tech content doesn’t have to be difficult. Remember these five simple rules to start writing content developers will love:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Trust is most important.&lt;/li&gt;
&lt;li&gt;Focus on practicality.&lt;/li&gt;
&lt;li&gt;Keep content tight and to the point.&lt;/li&gt;
&lt;li&gt;Leverage experts and customers as a resource.&lt;/li&gt;
&lt;li&gt;Use specific examples.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And don’t forget: you can always ask for help when you need it! At IOD, we specialize in helping you create exceptional tech content that appeals to developers and keeps them coming back for more. &lt;/p&gt;

&lt;p&gt;Contact us to tap into our extensive network of experienced practitioners and &lt;a href="https://iamondemand.com/content-types/"&gt;start creating better tech content today&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>techmarketin</category>
    </item>
    <item>
      <title>Cloud Computing Acquisitions &amp; Trends – Infographic</title>
      <dc:creator>We are IOD</dc:creator>
      <pubDate>Mon, 02 May 2022 14:13:37 +0000</pubDate>
      <link>https://dev.to/iod/cloud-computing-acquisitions-trends-infographic-1p2f</link>
      <guid>https://dev.to/iod/cloud-computing-acquisitions-trends-infographic-1p2f</guid>
      <description>&lt;p&gt;Keeping our finger on the pulse: IOD’s new infographic reveals the top cloud acquisitions of the last 6 months, including one for $6.5B, highlighting the importance of the identity and authentication space, and a $900M purchase that has put the spotlight on the demand for edge solutions. We also cover 4 key trends that will impact your business in 2022. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kbBSN1YJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mwh0tntlkt7zzqdqufyd.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kbBSN1YJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mwh0tntlkt7zzqdqufyd.jpg" alt="Image description" width="800" height="2000"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The world of cloud is constantly changing, making the need for expert-based tech content greater than ever. From videos and tutorials to blogs and white papers, across DevOps, fintech, cybersecurity, AI, and beyond, IOD combines fresh ideas with deep tech and marketing expertise to make sure your message stands out.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>trend</category>
    </item>
    <item>
      <title>Jenkins and Spinnaker: Turbocharge Your CI/CD With Cloud Native</title>
      <dc:creator>We are IOD</dc:creator>
      <pubDate>Wed, 20 Apr 2022 09:59:44 +0000</pubDate>
      <link>https://dev.to/iod/jenkins-and-spinnaker-turbocharge-your-cicd-with-cloud-native-d1d</link>
      <guid>https://dev.to/iod/jenkins-and-spinnaker-turbocharge-your-cicd-with-cloud-native-d1d</guid>
      <description>&lt;p&gt;Is your organization taking advantage of cloud-native computing? Modern cloud computing is built on a diverse ecosystem of open-source projects and infrastructure. &lt;/p&gt;

&lt;p&gt;Small startups and large enterprises alike depend on open-source projects to build critical container orchestration, CI/CD, and monitoring infrastructure. But how can an open-source project thrive and adapt to be so powerful across a variety of use cases and platforms?&lt;/p&gt;

&lt;p&gt;The Cloud Native Computing Foundation (CNCF), an alliance of users, vendors, and developers, helps to expand the cloud-native community and ecosystem of projects. As stated in their charter statement:&lt;/p&gt;

&lt;p&gt;The Cloud Native Computing Foundation seeks to drive adoption of this paradigm by fostering and sustaining an ecosystem of open source, vendor-neutral projects. We democratize state-of-the-art patterns to make these innovations accessible for everyone.&lt;/p&gt;

&lt;p&gt;The CNCF stewards a wide array of cloud-native tools and software. Utilizing these tools, engineering organizations can turbocharge their existing infrastructure and workflows, extending and adding powerful capabilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud-Native CI/CD
&lt;/h2&gt;

&lt;p&gt;Anyone who has worked with continuous integration or continuous delivery/deployment in the last several years is likely familiar with Jenkins, an open-source automation server aimed at providing end-to-end CI/CD capabilities. Engineering teams have a variety of options when it comes to deploying Jenkins, including internal infrastructure, cloud, and managed service platforms. If you take a look inside the deployment infrastructure at a majority of companies today with significant software assets, you will most likely find a Jenkins install.&lt;/p&gt;

&lt;p&gt;In recent years, the CNCF has stewarded several CI/CD projects with a cloud-native focus. One of those projects is Spinnaker, a multi-cloud continuous delivery tool that initially came from the Netflix engineering team. Spinnaker provides application management and deployment, with the added bonus of native integration with Jenkins, enabling teams to extend their existing capabilities with CNCF tooling.&lt;/p&gt;

&lt;p&gt;This article will examine the three primary ways that teams can integrate both Jenkins and Spinnaker, utilizing the flexibility of Spinnaker to add multi-cloud delivery capabilities to existing CI platforms.&lt;/p&gt;

&lt;h2&gt;
  
  
  Jenkins as a Pipeline Trigger
&lt;/h2&gt;

&lt;p&gt;Using Jenkins as a continuous integration system, with Spinnaker acting as the continuous delivery side, is probably the most familiar and commonly used implementation pattern. Jenkins is a powerful tool for CI, but Version 1 was designed and released before the ubiquitous need for cloud-first deployment scenarios. The cloud-native focus of Spinnaker means that cloud deployments are first-class concerns in the tool, providing a batteries-included implementation pattern for software delivery across a variety of platforms.&lt;/p&gt;

&lt;p&gt;The first step to integrating Jenkins and Spinnaker is to connect them. This assumes you have a Jenkins master (version 1.x or 2.x) installed, as one is required for all of the scenarios presented in this article. Once that’s complete, you only have to add a Jenkins trigger to a Spinnaker pipeline.&lt;/p&gt;
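&lt;p&gt;Connecting Jenkins means registering it as a CI master in Spinnaker’s configuration; the trigger itself is then a small block in the pipeline’s JSON definition. A hedged sketch (the master and job names here are made up):&lt;/p&gt;

```json
{
  "triggers": [
    {
      "type": "jenkins",
      "master": "my-jenkins",
      "job": "app-build",
      "propertyFile": "build.properties",
      "enabled": true
    }
  ]
}
```

&lt;p&gt;With this in place, every successful run of the app-build job kicks off the Spinnaker pipeline, and values from the optional property file are available to later stages.&lt;/p&gt;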

&lt;h2&gt;
  
  
  Use Case
&lt;/h2&gt;

&lt;p&gt;A great example use case for this type of implementation is a hypothetical engineering organization with an existing, on-premises Jenkins deployment. As they make plans to migrate some of their workload to the cloud, there are various options to consider. They can utilize Jenkins to handle delivery and deployment, requiring additional development cycles to configure and integrate Jenkins with a cloud provider. Conversely, they can continue to have Jenkins handle CI and utilize one of the managed services, like AWS CodeDeploy. &lt;/p&gt;

&lt;p&gt;The issue with these two options is that both of them will leave the platform tightly coupled with a single vendor platform, potentially causing “lock-in.” What happens if, in the future, the team needs to expand their service to Google Cloud as well? By going with Spinnaker as their CD platform instead, they’re empowered to scale out to multiple cloud platforms as future needs arise.&lt;/p&gt;

&lt;h2&gt;
  
  
  Jenkins as a Spinnaker Pipeline Stage
&lt;/h2&gt;

&lt;p&gt;What about engineering teams that are further along in their journey to being cloud native? They may already be employing a hybrid or 100% cloud production system. They may have a cloud-first deployment system already in place, such as Spinnaker, but may still need to rely on special integration testing or automation that remains in their legacy Jenkins deployment.&lt;/p&gt;

&lt;p&gt;Fortunately, Spinnaker provides this exact functionality, allowing Jenkins to be defined as a specific pipeline stage. Like the previous integration, the first step is to connect Jenkins and Spinnaker.&lt;/p&gt;

&lt;p&gt;For teams that have an extensive collection of tests and post-build automation, this can be a great way to bridge Jenkins and Spinnaker functionalities during a migration, without consuming precious engineering resources to port and refactor automated testing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Jenkins as a Script Stage
&lt;/h2&gt;

&lt;p&gt;In some cases, deployments require more flexibility in automation and scripting. Scripting languages like Bash and Python are often employed to provide additional capabilities in DevOps workflows, and some CI/CD platforms are fairly limited in what types of custom automation can be defined.&lt;/p&gt;

&lt;p&gt;Spinnaker can also utilize Jenkins as a sandbox environment, allowing the execution of any arbitrary Python, Bash, or Groovy script that might be needed. As before, Jenkins needs to be connected as a CI provider inside Spinnaker; there are some additional steps required to configure Jenkins as a script provider for a pipeline stage, detailed in the Spinnaker documentation.&lt;/p&gt;

&lt;p&gt;Consider the deployment workflow for an app with a UI component. Testing software with UI features has consistently been a thorn in the side of software engineers, who often have to depend on manual, interactive testing to validate that the software functions correctly. In a CI/CD workflow where many deploys might happen per day, that simply isn’t scalable. However, utilizing a Jenkins script stage, engineering teams can create automated UI testing functionality. Plus, a Jenkins script stage with shell scripting allows you to pull a Selenium Docker container into the pipeline environment, providing self-contained, automated UI testing.&lt;/p&gt;
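&lt;p&gt;As a rough sketch, the shell script such a stage runs might look like the one below. The image tag, the test command, and the test path are all placeholders for your own suite, and the readiness check assumes the Selenium server exposes its standard status endpoint:&lt;/p&gt;

```shell
#!/usr/bin/env bash
set -euo pipefail

# Pull up a self-contained browser environment for the UI tests
docker run -d --name selenium -p 4444:4444 selenium/standalone-chrome:latest

# Wait for the Selenium server to report ready before starting the suite
until curl -sf http://localhost:4444/wd/hub/status | grep -q '"ready"'; do
  sleep 2
done

# Run the UI tests against the remote WebDriver at localhost:4444
# (the test command and path are placeholders for your own suite)
python -m pytest tests/ui || status=$?

# Tear the container down regardless of the outcome, then report it
docker rm -f selenium
exit "${status:-0}"
```

&lt;p&gt;Because the browser runs inside a disposable container, the same script behaves identically on every pipeline run, with no interactive testing required.&lt;/p&gt;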

&lt;h2&gt;
  
  
  Take Advantage of the Rich CNCF Ecosystem
&lt;/h2&gt;

&lt;p&gt;Beyond just continuous integration and deployment, a variety of other cloud-native tools call the CNCF community home. By employing these tools, engineering teams can provide their businesses with an end-to-end, cloud-native infrastructure.&lt;/p&gt;

&lt;p&gt;For monitoring and observability, Prometheus has quickly grown to become one of the best choices for modern cloud environments. With its powerful data querying and visualization capabilities, easy integration, and broad language support, it’s easy to see why. In the context of Jenkins and Spinnaker, Prometheus is a perfect fit to monitor both the infrastructure the application lives on, as well as the infrastructure that Spinnaker itself occupies.&lt;/p&gt;

&lt;p&gt;A production-level deployment infrastructure will be generating a lot of event-based data as well. Unfortunately, event producers and consumers don’t always follow a consistent specification when it comes to the format of the event data itself. CNCF has the solution: The CloudEvents specification aims to define a common, easy-to-understand format for describing event data across all major event producers and consumers.&lt;/p&gt;
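&lt;p&gt;For reference, a CloudEvents-formatted event is just a small envelope of required attributes around the payload. A deployment-finished event might look like this (the type, source, and data fields are made up for illustration):&lt;/p&gt;

```json
{
  "specversion": "1.0",
  "type": "com.example.deployment.finished",
  "source": "/spinnaker/pipelines/prod-deploy",
  "id": "9d7a3f2e-1c4b-4d9a-8f6e-2b1a0c3d4e5f",
  "time": "2022-04-01T12:00:00Z",
  "datacontenttype": "application/json",
  "data": {
    "application": "storefront",
    "status": "SUCCEEDED"
  }
}
```

&lt;p&gt;Any consumer that understands the CloudEvents envelope can route or process this event without knowing anything about the producer that emitted it.&lt;/p&gt;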

&lt;p&gt;Deploying container-based workloads to multiple cloud platforms also brings unique security challenges. Teams that make multiple deployments per day need to be able to integrate as much security automation as they can into their deployment pipelines, catching and preventing issues before they make it into production.&lt;/p&gt;

&lt;p&gt;Open Policy Agent provides a “unified toolset and framework for policy across the cloud native stack.” With OPA deployed, an engineering team can configure a specific policy against, say, Dockerfiles. Developers who check in new commits to container-based applications will have their builds validated by the OPA API. Any build or configuration that fails will stop the CI workflow, alerting relevant engineers to a potential issue, and avoiding a possible deployment rollback.&lt;/p&gt;
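&lt;p&gt;Real OPA policies are written in its Rego language, but the shape of such a check is easy to sketch in plain Python. The two rules below (base images must be pinned, containers must not run as root) are hypothetical examples of what a team might enforce, not OPA’s actual syntax:&lt;/p&gt;

```python
def dockerfile_violations(dockerfile: str) -> list:
    """Return policy violations for a Dockerfile, mimicking the kind of
    checks an OPA policy might enforce (rules here are illustrative)."""
    violations = []
    for line in dockerfile.splitlines():
        instruction = line.strip()
        # Rule 1: base images must be pinned, never ':latest' or untagged
        if instruction.upper().startswith("FROM"):
            image = instruction.split()[1]
            if ":" not in image or image.endswith(":latest"):
                violations.append(f"unpinned base image: {image}")
        # Rule 2: containers must not run as root
        if instruction.upper().startswith("USER") and instruction.split()[1] == "root":
            violations.append("container runs as root")
    return violations

bad = "FROM python:latest\nUSER root\n"
good = "FROM python:3.10-slim\nUSER app\n"
print(dockerfile_violations(bad))   # two violations
print(dockerfile_violations(good))  # no violations
```

&lt;p&gt;In a CI workflow, a non-empty violation list would fail the build before the image is ever pushed, which is exactly the feedback loop OPA provides at the API level.&lt;/p&gt;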

&lt;p&gt;If there’s any downside to the CNCF ecosystem, it’s that there isn’t nearly enough space in a blog post to cover all the projects and tools that exist across the cloud-native landscape. To see all the projects in one place, visit the CNCF landscape page. As of this writing, there are 1,477 projects represented!&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud-Native Adds New Capabilities
&lt;/h2&gt;

&lt;p&gt;The strong ecosystem of cloud-native tools can enable organizations to extend their existing infrastructure, adding new, cloud-focused capabilities, such as multi-cloud deployments. &lt;/p&gt;

&lt;p&gt;One caveat: Teams should be empowered to suggest and ultimately engage in a ground-up rebuild if warranted. Not all existing infrastructure makes sense for the cloud, and sometimes it’s more cost-effective and will result in better performance to implement modern design patterns versus trying to graft a modern band-aid onto a legacy platform. Fortunately, the cloud-native ecosystem has a full spectrum of tools to enable this. &lt;/p&gt;

&lt;p&gt;By utilizing solutions such as Spinnaker, an organization gets a platform-agnostic, cloud-first deployment tool backed by a strong open-source community for support, with broad compatibility and integration across a variety of platforms and vendors. Using cloud-native tools, teams can extend and improve their existing architecture while, at the same time, laying the foundation for their eventual path into the modern cloud.&lt;/p&gt;

&lt;p&gt;This article was originally posted on &lt;a href="https://iamondemand.com/blog/jenkins-and-spinnaker-turbocharge-your-ci-cd-with-cloud-native/"&gt;IOD Blog.&lt;/a&gt;&lt;br&gt;
If you are a Cloud expert and you want to become part of a powerful community with tech professionals, &lt;a href="https://iamondemand.com/iod-talent-network/"&gt;join our talent network&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Automating ML Workflow with IBM’s Fabric for Deep Learning (FfDL)</title>
      <dc:creator>We are IOD</dc:creator>
      <pubDate>Wed, 13 Apr 2022 15:55:05 +0000</pubDate>
      <link>https://dev.to/iod/automating-ml-workflow-with-ibms-fabric-for-deep-learning-ffdl-56a</link>
      <guid>https://dev.to/iod/automating-ml-workflow-with-ibms-fabric-for-deep-learning-ffdl-56a</guid>
      <description>&lt;p&gt;Cloud environments provide a lot of benefits for advanced ML development and training including on-demand access to CPUs/GPUs, storage, memory, networking, and security. They also enable distributed training and scalable serving of ML models. However, training ML models in a cloud environment requires a highly customized system that links these different components and services together and allows for managing and consistently orchestrating ML pipelines. Managing a full ML workflow, from data preparation to deployment, is often really hard in a distributed and volatile environment like a cloud compute cluster.&lt;/p&gt;

&lt;p&gt;Another important challenge is the efficient and scalable deployment of ML models. In a distributed compute environment, this requires configuring model servers and creating REST APIs, load balancing remote cluster requests, enabling authentication and security, etc. Also, ML model serving needs to be scalable, highly available, and fault-tolerant. &lt;/p&gt;

&lt;p&gt;Kubernetes is one of the best solutions for managing distributed cloud clusters that addresses the above challenges. IBM’s Fabric for Deep Learning (FfDL) is a DL (Deep Learning) framework that marries advanced ML development and training with Kubernetes. It makes it easy to train and serve ML models based on different ML frameworks (e.g., TensorFlow, Caffe, PyTorch) on Kubernetes.&lt;/p&gt;

&lt;p&gt;In this article, I’ll discuss the architecture and key features of FfDL and show some practical examples of using the framework for training and deploying ML models on Kubernetes. I’ll also address the key limitations of FfDL compared to other ML frameworks for Kubernetes and point out some ways in which it could possibly improve. &lt;/p&gt;

&lt;h2&gt;
  
  
  Description of FfDL Features
&lt;/h2&gt;

&lt;p&gt;FfDL is an open-source DL platform for Kubernetes originally developed by the IBM Research and IBM Watson development teams. The main purpose behind the project was to bridge the gap between ML research and production-grade deployment of ML models in the distributed infrastructure of the cloud. FfDL is the core of many IBM ML products, including Watson Studio’s Deep Learning as a Service (DLaaS), which provides tools for the development of production-grade ML workflows in public cloud environments. &lt;/p&gt;

&lt;p&gt;It’s no surprise that the team behind FfDL chose Kubernetes to automate ML workflows. Kubernetes offers many benefits for the production deployment of ML models including automated lifecycle management (node scheduling, restarts on failure, health checks), a multi-server networking model, DNS and service discovery, security, advanced application update/upgrade patterns, autoscaling, and many more. &lt;/p&gt;

&lt;p&gt;More importantly, by design, Kubernetes is a highly extensible and pluggable platform where users can define their own custom controllers and custom resources integrated with K8s components and orchestration logic. This extensibility is leveraged by FfDL to allow ML workflows to run efficiently on Kubernetes, making use of available K8s orchestration services, APIs, and abstractions while adding the ML-specific logic needed by ML developers.&lt;/p&gt;

&lt;p&gt;This deep integration between FfDL and Kubernetes makes it possible to solve many of the challenges that ML developers face on a daily basis. For the issues listed in the opening section, FfDL offers the following features: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cloud-agnostic deployment of ML models, enabling them to run in any environment where containers and Kubernetes run &lt;/li&gt;
&lt;li&gt;Support for training models developed for several popular DL frameworks, including TensorFlow, PyTorch, Caffe, and Horovod &lt;/li&gt;
&lt;li&gt;Built-in support for training ML models with GPUs&lt;/li&gt;
&lt;li&gt;Fine-grained configuration of ML training jobs using Kubernetes native abstractions and FfDL custom resources&lt;/li&gt;
&lt;li&gt;ML-model lifecycle management using K8s native controllers, schedulers, and FfDL control loops&lt;/li&gt;
&lt;li&gt;Scalability, fault tolerance, and high availability for ML deployments &lt;/li&gt;
&lt;li&gt;Built-in log collection, monitoring, and model evaluation layers for ML training jobs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Plus, FfDL is an efficient way to serve ML models since it uses the Seldon Core serving framework to convert trained models (TensorFlow, PyTorch, H2O, etc.) into gRPC/REST microservices served on Kubernetes.&lt;/p&gt;

&lt;h2&gt;
  
  
  FfDL Architecture
&lt;/h2&gt;

&lt;p&gt;FfDL is deployed as a set of interconnected microservices (pods), each responsible for a specific part of the ML workflow. FfDL relies on Kubernetes to restart these components when they fail and to control their lifecycle. After installing FfDL on your Kubernetes cluster, you should see pods similar to these:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl config set-context $(kubectl config current-context) –namespace=$NAMESPACE
kubectl get pods
# NAME                                 READY     STATUS    RESTARTS   AGE
# alertmanager-7cf6b988b9-h9q6q        1/1       Running   0          5h
# etcd0                                1/1       Running   0          5h
# ffdl-lcm-65bc97bcfd-qqkfc            1/1       Running   0          5h
# ffdl-restapi-8777444f6-7jfcf         1/1       Running   0          5h
# ffdl-trainer-768d7d6b9-4k8ql         1/1       Running   0          5h
# ffdl-trainingdata-866c8f48f5-ng27z   1/1       Running   0          5h
# ffdl-ui-5bf86cc7f5-zsqv5             1/1       Running   0          5h
# mongo-0                              1/1       Running   0          5h
# prometheus-5f85fd7695-6dpt8          2/2       Running   0          5h
# pushgateway-7dd8f7c86d-gzr2g         2/2       Running   0          5h
# storage-0                            1/1       Running   0          5h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In general, the FfDL architecture is based on the following main components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;REST API&lt;/li&gt;
&lt;li&gt;Trainer&lt;/li&gt;
&lt;li&gt;Lifecycle Manager&lt;/li&gt;
&lt;li&gt;Training Job&lt;/li&gt;
&lt;li&gt;Training Data Service&lt;/li&gt;
&lt;li&gt;Web UI&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s briefly discuss what each of these does. &lt;/p&gt;

&lt;h2&gt;
  
  
  REST API
&lt;/h2&gt;

&lt;p&gt;The REST API microservice processes user HTTP requests and passes them to the gRPC Trainer service. It’s an entry point that allows FfDL users to interact with training jobs, configure training parameters, deploy models, and use other features provided by FfDL and Kubernetes. The REST API supports authentication and leverages K8s service registries to load balance client requests, which ensures scalability when serving an ML model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YDe848Tw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dbifponcyb85nhkzr60n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YDe848Tw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dbifponcyb85nhkzr60n.png" alt="Image description" width="880" height="443"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 1: FfDL architecture (Source: GitHub)&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Trainer
&lt;/h2&gt;

&lt;p&gt;The Trainer microservice processes training job requests received via the REST API and saves the training job configuration to the MongoDB database (see Figure 1 above). This microservice can initiate job deployment, serving, halting, or termination by passing respective commands to the Lifecycle Manager. &lt;/p&gt;
&lt;h2&gt;
  
  
  Lifecycle Manager
&lt;/h2&gt;

&lt;p&gt;The FfDL Lifecycle Manager is responsible for launching and managing (pausing, starting, terminating) the training jobs initiated by the Trainer by interacting with the K8s scheduler and cluster manager. The Lifecycle Manager operates according to the following steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Retrieve the training job configuration defined in the YAML manifest.&lt;/li&gt;
&lt;li&gt;Determine the learner pods, parameter servers, sidecar containers, and other components of the job.&lt;/li&gt;
&lt;li&gt;Call the Kubernetes REST API to deploy the job.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Training Job
&lt;/h2&gt;

&lt;p&gt;A training job is the FfDL abstraction that encompasses a group of learner pods and a number of sidecar containers for control logic and logging. FfDL allows for the launching of multiple learner pods for distributed training. A training job can also include parameter servers for asynchronous training with data parallelism. FfDL provides these distributed training features via Open MPI (Message Passing Interface), designed to enable network-agnostic interaction and communication between cluster nodes. The MPI protocol is widely used for enabling all-reduce style distributed ML training (see MPI Operator by Kubeflow).&lt;/p&gt;
&lt;h2&gt;
  
  
  Training Data Service
&lt;/h2&gt;

&lt;p&gt;Each training job has a sidecar logging container (log collector) that collects training data, such as evaluation metrics, visuals, and other artifacts, and sends it to the FfDL Training Data Service (TDS). The FfDL log collectors understand the unique log syntax of each ML framework supported by FfDL. In turn, TDS dynamically emits this information to the users as the job is running. It also permanently stores log data in Elasticsearch for debugging and auditing purposes. &lt;/p&gt;
&lt;h2&gt;
  
  
  Web UI
&lt;/h2&gt;

&lt;p&gt;FfDL ships with a minimalistic Web UI that allows for the uploading of data and model code for training. Overall, the FfDL UI has limited features compared to alternatives such as FloydHub or Kubeflow Central Dashboard.&lt;/p&gt;
&lt;h2&gt;
  
  
  Training ML Models with FfDL
&lt;/h2&gt;

&lt;p&gt;Now that you understand the FfDL architecture, let’s discuss how you can train and deploy ML jobs using this framework. The process is quite straightforward: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a model code written in any supported framework (e.g., TensorFlow, PyTorch, Caffe).&lt;/li&gt;
&lt;li&gt;Containerize the model.&lt;/li&gt;
&lt;li&gt;Expose training data to the job using some object store (e.g., AWS S3).&lt;/li&gt;
&lt;li&gt;Create a manifest with a training job configuration using a FfDL K8s custom resource.&lt;/li&gt;
&lt;li&gt;Train your ML model via the FfDL CLI or FfDL UI.&lt;/li&gt;
&lt;li&gt;Serve the ML model using Seldon Core.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Assuming that you already have working ML model code and training datasets, you can jump right to the FfDL model manifest parameters. The FfDL custom resource lets users define resource requirements for a given job, including requests and limits for GPUs, CPUs, and memory; the number of learner pods to execute the training; paths to training data; etc.&lt;/p&gt;

&lt;p&gt;Below is an example of a FfDL training job manifest from the official documentation. It defines a TensorFlow job for training a simple convolutional neural network:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: tf_convolutional_network_tutorial
description: Convolutional network model using tensorflow
version: "1.0"
gpus: 0
cpus: 0.5
memory: 1Gb
learners: 1

# Object stores that allow the system to retrieve training data.
data_stores:
  - id: sl-internal-os
    type: mount_cos
    training_data:
      container: tf_training_data
    training_results:
      container: tf_trained_model
    connection:
      auth_url: http://s3.default.svc.cluster.local
      user_name: test
      password: test

framework:
  name: tensorflow
  version: "1.5.0-py3"
  command: &amp;gt;
    python3 convolutional_network.py --trainImagesFile ${DATA_DIR}/train-images-idx3-ubyte.gz
      --trainLabelsFile ${DATA_DIR}/train-labels-idx1-ubyte.gz --testImagesFile ${DATA_DIR}/t10k-images-idx3-ubyte.gz
      --testLabelsFile ${DATA_DIR}/t10k-labels-idx1-ubyte.gz --learningRate 0.001
      --trainingIters 2000

evaluation_metrics:
  type: tensorboard
  in: "$JOB_STATE_DIR/logs/tb"
  # (Eventual) Available event types: 'images', 'distributions', 'histograms',
  # 'audio', 'scalars', 'tensors', 'graph', 'meta_graph', 'run_metadata'
  #  event_types: [scalars]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;According to the manifest, the TF training job will run with half a CPU core (cpus: 0.5) and will be processed by one learner. FfDL supports distributed training, meaning there can be multiple learners for the same training job.&lt;/p&gt;

&lt;p&gt;In the data_stores part of the spec, you can specify how FfDL should access the training data and store the training results. Training data can be provided to FfDL using any object storage such as AWS S3 or Google Cloud Storage. After the training, the trained model with corresponding model weights will be stored under the folder specified in the training_results setting.&lt;/p&gt;

&lt;p&gt;The framework section of the manifest defines framework-specific parameters used when starting the learner’s containers. There, you can specify the framework version, initialization values for hyperparameters (e.g., learning rate), the number of iterations, evaluation metrics (e.g., accuracy), and the location of the test and labeled data. You can define pretty much anything your training script exposes.&lt;/p&gt;

&lt;p&gt;Finally, in the evaluation_metrics section, you can define the location of generated logs and artifacts and the way to access them. The FfDL supports TensorBoard, so you can analyze your model’s logs and metrics there. &lt;/p&gt;

&lt;p&gt;After the manifest is written, you can train the model using either the FfDL CLI or FfDL UI. For detailed instructions on how to do this, please see the official docs. &lt;/p&gt;

&lt;h2&gt;
  
  
  Deploying FfDL Models
&lt;/h2&gt;

&lt;p&gt;As I’ve already mentioned, FfDL uses Seldon Core for deploying ML models as REST/gRPC microservices. Seldon is a very powerful serving platform for Kubernetes, and using it with FfDL gives you a lot of useful features out of the box:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multi-framework support (TensorFlow, Keras, PyTorch)&lt;/li&gt;
&lt;li&gt;Containerization of ML models using pre-packaged inference servers&lt;/li&gt;
&lt;li&gt;API endpoints that can be tested with Swagger UI, cURL, or gRPCurl&lt;/li&gt;
&lt;li&gt;Metadata to ensure that each model can be traced back to its training platform, data, and metrics&lt;/li&gt;
&lt;li&gt;Metrics and integration with Prometheus and Grafana&lt;/li&gt;
&lt;li&gt;Auditability and logging integration with Elasticsearch&lt;/li&gt;
&lt;li&gt;Microservice distributed tracing through Jaeger.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Any FfDL model whose runtime inference can be packaged as a Docker container can be managed by Seldon.&lt;/p&gt;

&lt;p&gt;The process of deploying your ML model with FfDL is relatively straightforward. First, you need to deploy Seldon Core to your Kubernetes cluster since it’s not part of the default FfDL installation. Next, you need to build the Seldon model image from your trained model. To do this, you can use S2I (OpenShift’s source-to-image tool) and push the resulting image to Docker Hub.&lt;/p&gt;
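&lt;p&gt;As a sketch, the build-and-publish step might look like this. The model folder, builder image tag, and Docker Hub repository below are placeholders; the exact builder image depends on your framework and Seldon version:&lt;/p&gt;

```shell
# Build a Seldon-compatible model image from the trained-model folder;
# the builder image tag and image names below are placeholders
s2i build ./fashion-mnist-model \
    seldonio/seldon-core-s2i-python3:0.18 \
    my-dockerhub-user/ffdl-fashion-mnist:0.1

# Publish the image so the Kubernetes cluster can pull it
docker push my-dockerhub-user/ffdl-fashion-mnist:0.1
```

&lt;p&gt;The pushed image name is what you reference in the SeldonDeployment manifest’s image field.&lt;/p&gt;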

&lt;p&gt;After this, you need to define the Seldon REST API deployment using a deployment template similar to the one below. Here, I’m using the example from the FfDL Fashion MNIST repo on GitHub:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "apiVersion": "machinelearning.seldon.io/v1alpha2",
  "kind": "SeldonDeployment",
  "metadata": {
    "labels": {
      "app": "seldon"
    },
    "name": "ffdl-fashion-mnist"
  },
  "spec": {
    "annotations": {
      "project_name": "FfDL fashion-mnist",
      "deployment_version": "v1"
    },
    "name": "fashion-mnist",
    "oauth_key": "oauth-key",
    "oauth_secret": "oauth-secret",
    "predictors": [
      {
        "componentSpecs": [{
          "spec": {
            "containers": [
              {
                "image": "",
                "imagePullPolicy": "IfNotPresent",
                "name": "classifier",
                "resources": {
                  "requests": {
                    "memory": "1Mi"
                  }
                },
                "env": [
                  {
                    "name": "TRAINING_ID",
                    "value": ""
                  },
                  {
                    "name": "BUCKET_NAME",
                    "value": ""
                  },
                  {
                    "valueFrom": {
                      "secretKeyRef": {
                          "localObjectReference": {
                      "name" : "bucket-credentials"
                   },
                        "key": "endpoint"
                      }
                    },
                    "name": "BUCKET_ENDPOINT_URL"
                  },
                  {
                    "valueFrom": {
                      "secretKeyRef": {
                          "localObjectReference": {
                      "name" : "bucket-credentials"
                  },
                        "key": "key"
                      }
                    },
                      "name": "BUCKET_KEY"
                  },
                  {
                    "valueFrom": {
                      "secretKeyRef": {
                          "localObjectReference": {
                      "name" : "bucket-credentials"
                   },
                        "key": "secret"
                      }
                    },
                    "name": "BUCKET_SECRET"
                  }
                ]
              }
            ],
            "terminationGracePeriodSeconds": 20
          }
        }],
        "graph": {
          "children": [],
          "name": "classifier",
          "endpoint": {
            "type": "REST"
          },
          "type": "MODEL"
        },
        "name": "single-model",
        "replicas": 1,
        "annotations": {
          "predictor_version": "v1"
        }
      }
    ]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The most important parts of this manifest are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;BUCKET_NAME: The name of the bucket containing your trained model&lt;/li&gt;
&lt;li&gt;image: The name of your Seldon model image on Docker Hub&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are also Seldon-specific configurations of the inference graph and predictors that you can check out in the Seldon Core docs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: FfDL Limitations
&lt;/h2&gt;

&lt;p&gt;As I showed, the FfDL platform provides basic functionality for running ML models on Kubernetes, including training and serving models. However, compared to other available alternatives for Kubernetes such as Kubeflow, the FfDL functionality is somewhat limited. In particular, it lacks flexibility in configuring training jobs for specific ML frameworks. Kubeflow’s TensorFlow Operator, for example, allows you to define distributed training jobs based on all-reduce and asynchronous patterns using TF distribution strategies. The Kubeflow CRD for TensorFlow exposes many more parameters than FfDL, and the FfDL specification for its training custom resource is not as well-documented. &lt;/p&gt;

&lt;p&gt;Similarly, FfDL does not support many important ML workflow features for AutoML, including hyperparameter optimization, and has limited functionality for creating reproducible ML experiments and pipelines, like Kubeflow Pipelines does.&lt;/p&gt;

&lt;p&gt;Also, the process of deploying and managing training jobs on Kubernetes is somewhat dependent on FfDL custom scripts and tools and does not provide a lot of Kubernetes-native resources, which limits the pluggability of the framework. The FfDL documentation for many important aspects of these tools is also limited. For example, there is no detailed description of how to deploy FfDL on various cloud providers. &lt;/p&gt;

&lt;p&gt;Finally, the FfDL UI does not provide as many useful features as FloydHub and Kubeflow Central Dashboard. It just lets users upload their model code to Kubernetes. &lt;/p&gt;

&lt;p&gt;In sum, to be a tool for the comprehensive management of modern ML workflows, FfDL needs more features and better documentation. At this moment, it can be used as a simple way to train and deploy ML models on Kubernetes but not as a comprehensive platform for managing production-grade ML pipelines. &lt;/p&gt;

</description>
      <category>aws</category>
      <category>elb</category>
      <category>alb</category>
      <category>nlb</category>
    </item>
    <item>
      <title>How to Get the Most From AWS Cost Management Tools</title>
      <dc:creator>We are IOD</dc:creator>
      <pubDate>Thu, 07 Apr 2022 08:06:24 +0000</pubDate>
      <link>https://dev.to/iod/how-to-get-the-most-from-aws-cost-management-tools-2hkg</link>
      <guid>https://dev.to/iod/how-to-get-the-most-from-aws-cost-management-tools-2hkg</guid>
      <description>&lt;p&gt;With the adoption of public cloud services on the rise and technical resources such as servers far from sight, companies are forced to address the elephant in the room: How can they manage the cloud costs of day-to-day operations? Or, more specifically, how can they keep costs from spiraling out of control?&lt;/p&gt;

&lt;p&gt;From a business point of view, several benefits have been driving organizations to adopt the public cloud, such as enhanced capacity planning, massive economies of scale from companies like Amazon Web Services (AWS), the ability to trade upfront capital investments (CapEx) for monthly operating expenses (OpEx), and, above all, the ability to truly focus on their business rather than running and maintaining data centers.&lt;/p&gt;

&lt;p&gt;As a market leader in the public cloud space, AWS has paved the way for today’s digital transformation and offers multiple mechanisms for businesses to innovate while keeping costs under control. Yet, those tools and processes are still quite unclear, or even unknown, to many business leaders. &lt;/p&gt;

&lt;p&gt;To better understand cloud costs, let’s start by examining how AWS pricing actually works.&lt;/p&gt;

&lt;h2&gt;
  
  
  How AWS Pricing Works
&lt;/h2&gt;

&lt;p&gt;From the very beginning, AWS has been quite transparent about how their pricing works and how customers can take advantage of it to gain better cost efficiencies. Architects can design systems and optimize costs by picking cloud services that match their usage needs while still having the option to expand later.&lt;/p&gt;

&lt;p&gt;With AWS’ on-demand and pay-as-you-go pricing model, customers can get exactly what they need on a per-hour basis (or even per-second in some cases) while still having at their disposal a reservation-based payment model for long-term and predictable workloads.&lt;/p&gt;

&lt;p&gt;The AWS pricing model, as described in their own whitepaper, follows four key principles that help customers understand the best practices regarding cloud costs and avoid pitfalls. We’ll take a look at each of these principles below.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understand the Fundamentals of Pricing
&lt;/h2&gt;

&lt;p&gt;Every new cloud customer should first learn that there are three aspects that drive costs when using AWS: compute, storage, and outbound data transfer. The weight of each of these will vary according to your product and pricing model.&lt;/p&gt;

&lt;p&gt;Compute usage is typically charged per hour, while storage is often charged per gigabyte of data stored. As for data transfer, with a few exceptions, customers are not charged for inbound data transfers or for transfers between services within the same region. This means you usually don’t pay for data going into your AWS account and really only have to worry about data going out of it, e.g., internet traffic.&lt;/p&gt;
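&lt;p&gt;To make the three cost drivers concrete, here is a back-of-the-envelope monthly estimate in Python. The unit rates are placeholders, not actual AWS prices (real prices vary by service, region, instance type, and usage tier):&lt;/p&gt;

```python
# Hypothetical unit rates -- NOT real AWS prices; check the AWS pricing pages
COMPUTE_PER_HOUR = 0.10       # per instance-hour
STORAGE_PER_GB_MONTH = 0.023  # per GB stored per month
DATA_OUT_PER_GB = 0.09        # per GB transferred out to the internet

def monthly_estimate(instance_hours, stored_gb, gb_out):
    """Sum the three drivers: compute, storage, and outbound data transfer.
    Inbound and same-region transfer are treated as free, as is typical."""
    compute = instance_hours * COMPUTE_PER_HOUR
    storage = stored_gb * STORAGE_PER_GB_MONTH
    transfer = gb_out * DATA_OUT_PER_GB  # only *outbound* traffic is billed
    return round(compute + storage + transfer, 2)

# Two always-on instances (730 hours each), 500 GB stored, 200 GB out
print(monthly_estimate(2 * 730, 500, 200))
```

&lt;p&gt;Even a rough model like this makes it obvious which of the three drivers dominates a given workload, and therefore where to focus optimization effort.&lt;/p&gt;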

&lt;h2&gt;
  
  
  Start Early with Cost Optimization
&lt;/h2&gt;

&lt;p&gt;Don’t wait until your cloud workloads are in production to optimize costs. Customers that come from an on-premises environment often fall into this trap. Cloud adoption is not a mere technical exercise. It requires a cultural change that starts from the very beginning by looking at how cloud costs are planned and allocated. &lt;/p&gt;

&lt;p&gt;Decision makers need full visibility of running costs, and mechanisms to control these should be in place early on. This drives organizations to optimize their costs frequently and with less effort. Also, having such a cost-efficient strategy from the start will give your team peace of mind as your cloud environment grows and becomes more complex.&lt;/p&gt;

&lt;h2&gt;
  
  
  Maximize the Power of Flexibility
&lt;/h2&gt;

&lt;p&gt;You can do this by leveraging cloud-native capabilities, such as launching resources on-demand and turning them off when they’re not needed, instead of keeping services running 24/7. For predictable workloads that need to be constantly running, customers can still leverage a reservation model with a long-term commitment for extra savings. &lt;/p&gt;

&lt;p&gt;This cloud elasticity can save a tremendous amount of money while still giving you the capacity for near-unlimited growth. Also, by using and paying only for the resources you need, you can focus more resources on feature development and innovation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Choose the Right Pricing Model for the Job
&lt;/h2&gt;

&lt;p&gt;In AWS, the same product can have multiple pricing models, so it’s important to research the characteristics of each and choose the best fit for your workload. Pricing models vary from on-demand (pay-as-you-go, without long-term commitment or upfront costs) and dedicated instances (for instances on dedicated hardware) to spot (a mechanism to bid on spare capacity at discounted hourly rates) and reservations (committing and paying for long-term capacity in exchange for a sizable discount).&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Costs Under Control: Tips &amp;amp; Tricks
&lt;/h2&gt;

&lt;p&gt;Once you understand AWS’ pricing principles and use them as a guideline, you can then learn how to make the best use of AWS’ built-in tools. There are a few interesting tricks here that business leaders can implement to help get their cloud costs under control.&lt;/p&gt;

&lt;h2&gt;
  
  
  Consolidated Billing and Reserved Resources
&lt;/h2&gt;

&lt;p&gt;The AWS pricing principles suggest you reserve capacity for predictable workloads and gain substantial discounts. But how does this work in practice? The mechanics are fairly simple, as you can commit to using a certain type of resource (e.g., a certain number of EC2 M5 instances in eu-west-1 region) for a certain period of time (minimum of one year) and receive a discount of up to 75%. The exact amount of the discount depends on various factors, such as the resource type, region, amount of upfront payment, and number of years. &lt;/p&gt;

&lt;p&gt;This does not mean that a specific resource has to always be running. Since the reservation is for a certain resource type, not a specific deployed resource, you are free to stop, terminate, or re-deploy that resource as much as you want as long as you keep using the same type. &lt;/p&gt;

&lt;p&gt;When customers have multiple AWS accounts, one interesting trick is to enroll every account under the same “Organization” and enable consolidated billing. This makes the monthly operational management easier, plus it enables you to use the reserved resource type you purchased across any of your AWS accounts, meaning it becomes significantly more flexible.&lt;/p&gt;

&lt;p&gt;In addition, with the recent introduction of the Savings Plan feature across multiple AWS products, customers can now get insights on potential savings by switching to reserved resources based on their product usage. &lt;/p&gt;

&lt;h2&gt;
  
  
  Billing Alarms &amp;amp; Cost Explorer
&lt;/h2&gt;

&lt;p&gt;When it comes to cloud costs, the worst situation is when you receive an unexpected invoice at the end of the month for used resources that did not bring any business value. &lt;/p&gt;

&lt;p&gt;From an operational point of view, it’s important to not get caught by surprise. Therefore, customers must have ways to receive notifications and react swiftly when something unexpected happens.&lt;/p&gt;

&lt;p&gt;In AWS, customers can leverage a feature named Billing Alarms, which allows you to set up an alarm to notify you of custom-defined conditions. A common scenario is to configure the alarm to send an email notification in case the monthly costs are predicted to go above a certain threshold based on the current usage pattern. This enables you to quickly react and troubleshoot the cause of the sudden increase without waiting until the end of the month. &lt;/p&gt;

&lt;p&gt;For troubleshooting both current and past expenses, AWS customers can use Cost Explorer, a built-in UI tool that provides a visualization and filtering of costs based on different factors, such as service, tagging, and time period. The most popular filtering method is tagging. This is made possible by having your development team tag AWS resources with custom key/value pairs such as use case, owner, department, or cost center. &lt;/p&gt;

&lt;p&gt;For increased awareness, customers can also display billing information using CloudWatch metrics and dashboards. This enables a customized visualization of cost usage and correlates with the system status (e.g., number of requests served).&lt;/p&gt;

&lt;p&gt;These tools make it incredibly easy for decision makers to track and understand how their cloud investment is being spent.&lt;/p&gt;

&lt;h2&gt;
  
  
  Engineering Teams in the Decision-Making Process
&lt;/h2&gt;

&lt;p&gt;It is often said that when using cloud computing, your system scales with a credit card. While the saying is not wrong, it is crucial to know when and why that scaling occurs. &lt;/p&gt;

&lt;p&gt;If customers are unaware of different product pricing and how volume affects them, costs can easily skyrocket. This can be due to the system responding to an increase in demand or a simple development mistake. &lt;/p&gt;

&lt;p&gt;Engineering teams are right at the center when it comes to optimizing costs and utilizing the right type of technical resources. However, one common pitfall is choosing resources based purely on their technical characteristics. The total cost of ownership (TCO) needs to be taken into account for each component while designing the system. The TCO includes the technical specifications, pricing model, and operational costs. &lt;/p&gt;

&lt;p&gt;AWS makes it easier for engineering teams to estimate the cost of their resource choices with its Pricing Calculator tool. This lets teams weigh the pros and cons of their choices and choose the AWS services that suit them best. &lt;/p&gt;

&lt;p&gt;One important consideration to keep in mind is that while some managed serverless services might feel less affordable than a DIY approach with EC2 virtual instances, the human cost of operating those instances yourself often far exceeds any potential savings.&lt;/p&gt;

&lt;p&gt;Software engineering teams working in DevOps should continuously be on the lookout for ways to improve their operations. When talking about specific workloads, this eagerness to improve and adopt best practices should extend to all stakeholders. Bringing everyone to the table and performing frequent assessments, such as AWS Well-Architected Reviews, can pave the way for greater cost-efficiency as well as an increase in innovation. &lt;/p&gt;

&lt;p&gt;Therefore, engineering teams should be an active part of the decision-making process with business leaders. Only by embracing business objectives as a common goal, and maximizing the potential for digital transformation that cloud technologies provide, can businesses truly thrive.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;As businesses move forward in their digital transformation and execute their technology strategy, using a public cloud provider such as AWS gives a tremendous amount of speed and flexibility to accomplish their business goals. &lt;/p&gt;

&lt;p&gt;For anyone using cloud services, it’s critical to understand and control how money is being spent—making sure that only the resources you need are in use and that you are getting the most from each dollar spent.&lt;/p&gt;

&lt;p&gt;With near-unlimited resources just an API-request away, it is fairly easy to go overboard without the proper guidance and boundaries in place. Therefore, make sure to have the proper people and structure in place (e.g., architecture and cloud steering group) that can manage and optimize your cloud investment and usage.&lt;/p&gt;

&lt;p&gt;This article was originally posted on &lt;a href="https://iamondemand.com/blog/how-to-get-the-most-out-of-the-aws-cost-management-tools/"&gt;IOD Blog&lt;/a&gt;.&lt;br&gt;
If you want to write an article like this one and become part of a global talent network, &lt;a href="https://iamondemand.com/iod-talent-network/"&gt;join us&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>management</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Security Risks and Challenges in the Serverless World</title>
      <dc:creator>We are IOD</dc:creator>
      <pubDate>Sun, 27 Mar 2022 11:46:31 +0000</pubDate>
      <link>https://dev.to/iod/security-risks-and-challenges-in-the-serverless-world-2bb8</link>
      <guid>https://dev.to/iod/security-risks-and-challenges-in-the-serverless-world-2bb8</guid>
      <description>&lt;p&gt;Adopting an architecture that gives you complete control over your application and infrastructure (servers, identity management, etc.) is good because of the flexibility it offers, but it’s only sustainable for a while. As your organization grows, things will start to get complicated, and scaling and infrastructure management will become a big challenge. Instead of delegating these responsibilities to developers, why not adopt serverless? This will allow you to shift the responsibility of managing your application infrastructures to a cloud provider.&lt;/p&gt;

&lt;p&gt;Going serverless offers numerous benefits, such as greater scalability, faster time to market, lower operational overhead, and automated scaling—all at a reduced cost. But serverless also comes with some challenges. Like with any technology, serverless applications are susceptible to malicious attacks that can be difficult to protect against. According to an audit by PureSec, 1 in 5 serverless apps has a critical security flaw that attackers can leverage to perform various malicious actions.  &lt;/p&gt;

&lt;p&gt;I have built many serverless applications throughout my software engineering career. In this post, I’ll share some of the best practices I’ve found to be useful for mitigating security risks. &lt;/p&gt;

&lt;h2&gt;
  
  
  Serverless Attack Vectors
&lt;/h2&gt;

&lt;p&gt;Serverless applications are almost never built on functions as a service (FaaS) alone. Rather, they also rely on several third-party components and libraries, connected through networks and events. Every third-party component connected to a serverless app is a potential risk, and your application could be easily exploited or damaged if a component is compromised, malicious, or has insecure dependencies. &lt;/p&gt;

&lt;p&gt;Instead of securing serverless applications using firewalls, antivirus solutions, intrusion prevention/detection systems, or other similar tools, focus on securing your application functions hosted in the cloud. While the cloud provider provisions and maintains the servers that run your code and manages resource allocation dynamically, you still need to ensure that your app is free of the following vulnerabilities, which are unique to serverless: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data vulnerabilities:&lt;/strong&gt; Vulnerabilities that arise due to the movement of data between app functions and third-party services. These vulnerabilities are also introduced when you store app data in non-secure databases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Library vulnerabilities:&lt;/strong&gt; Security vulnerabilities that are introduced when a function uses vulnerable third-party dependencies or libraries.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Access and permission vulnerabilities:&lt;/strong&gt; Vulnerabilities that are introduced when you create policies that allow excessive access or permissions to sensitive functions or data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code vulnerabilities:&lt;/strong&gt; Vulnerabilities that are introduced when you write bad code or vulnerable serverless functions.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Security Risks and Challenges in the Serverless World
&lt;/h2&gt;

&lt;p&gt;As more enterprises adopt and build applications using serverless architectures, it’s really important to keep serverless deployments and services secure. Unfortunately, many enterprises aren’t aware of the security risks in serverless applications, not to mention crafting strategies for mitigating those risks. In this section, I’ll discuss some critical security risks to consider when running serverless applications. &lt;/p&gt;

&lt;h2&gt;
  
  
  Inadequate Monitoring and Logging of Serverless Functions
&lt;/h2&gt;

&lt;p&gt;Serverless apps operate amid a complex web of connections and use different services from various cloud providers across multiple regions. In a serverless application, insufficient function logs lead to missed error reports. Because serverless functions communicate across a network, it’s very easy to lose track of the audit trail or event flow that you need in order to detect and identify what’s happening within the app. &lt;/p&gt;

&lt;p&gt;What’s more, without proper monitoring and logging of serverless functions and events, you won’t be able to identify critical errors, malicious attacks, or insecure flows on time. Eventually, the delay will lead to app downtime that could affect your customers or brand reputation.  &lt;/p&gt;

&lt;h2&gt;
  
  
  Sensitive Data Exposure Due To a Large Attack Surface
&lt;/h2&gt;

&lt;p&gt;Serverless applications have a large attack surface and comprise hundreds, or even thousands, of functions that can be triggered by many events, including API gateway commands, data streams, database changes, emails, IoT telemetry signals, and more. Serverless functions also ingest data from various third-party libraries and data sources, the majority of which are difficult to inspect using standard application-layer protections, such as web application firewalls. &lt;/p&gt;

&lt;p&gt;There are many factors that increase entry points to serverless architectures, including the vast range of event sources, large number of small functions associated with serverless apps, and active exchange of data between deployed functions and third-party services. In addition, all of these factors combined increase the potential attack surface and risk of sensitive data exposure, manipulation, or destruction. &lt;/p&gt;

&lt;h2&gt;
  
  
  Function Event-Data Injection
&lt;/h2&gt;

&lt;p&gt;At a high level, a function event-data injection attack occurs when a hacker uses hostile, untrusted, and unauthorized data inputs to trick an app into providing unauthorized access to data or executing unintended commands. A serverless application is vulnerable to these injection attacks when it allows malicious user data to slip through the cracks without filtering, validating, or sanitizing that data. &lt;/p&gt;

&lt;p&gt;These injection attacks can lead to access denial, data corruption, data loss, and even complete host takeover. In extreme cases, the hacker can take total control of an app’s high-level execution and modify its regular flow via a ransomware attack. Some common examples of function event-data injection attacks associated with serverless architectures are: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SQL and NoSQL injection&lt;/li&gt;
&lt;li&gt;Server-side request forgery (SSRF)&lt;/li&gt;
&lt;li&gt;Object deserialization attacks &lt;/li&gt;
&lt;li&gt;Function runtime code injection (e.g., Golang, C#, Java, JavaScript/Node.js, Python)&lt;/li&gt;
&lt;li&gt;XML External Entity&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As you would imagine, serverless functions aren’t immune to the previously mentioned security threats and risks. Your app will still be vulnerable if you have functions or code that use excessive permissions or don’t follow security best practices. &lt;/p&gt;
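&lt;p&gt;The core mitigation is the same as in any web application: treat every event field as untrusted, validate it against an allow-list, and never build queries by string concatenation. A minimal sketch (the handler shape and field names are hypothetical, with sqlite3 standing in for a real database):&lt;/p&gt;

```python
import re
import sqlite3

def handler(event, _context=None):
    """Hypothetical Lambda-style handler that validates its input."""
    user_id = str(event.get("user_id", ""))
    # Allow-list check: reject anything that isn't a simple identifier
    # instead of trusting whatever arrived in the event payload.
    if not re.fullmatch(r"[A-Za-z0-9_-]{1,32}", user_id):
        return {"statusCode": 400, "body": "invalid user_id"}

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id TEXT, name TEXT)")
    conn.execute("INSERT INTO users VALUES ('u1', 'Alice')")
    # Parameterized query: the bound value can never alter the SQL structure.
    row = conn.execute("SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
    return {"statusCode": 200, "body": row[0] if row else "not found"}
```

&lt;p&gt;An input like &lt;code&gt;u1'; DROP TABLE users;--&lt;/code&gt; fails the allow-list check and is rejected before it ever reaches the database.&lt;/p&gt;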

&lt;h2&gt;
  
  
  Best Practices for Securing Serverless Applications
&lt;/h2&gt;

&lt;p&gt;So how do you secure a serverless app? First, know that designing and implementing security in your app should always be a top priority—even with serverless architectures. Since you’re responsible for managing some parts of your serverless app, you need to adopt best practices that allow you to secure it against attacks, insecure coding practices, errors, and misconfigurations. Here are a few tips to get you started.&lt;/p&gt;

&lt;h2&gt;
  
  
  Adopt the Principle of Least Privilege
&lt;/h2&gt;

&lt;p&gt;One way to secure serverless applications is to ensure proper authentication and authorization, allowing each function to access only the minimum permissions it needs to operate well or perform an intended logic. With the principle of least privilege, you grant only enough access required for a function to do its job. Setting out rules for what each function can access is essential for maintaining security in serverless architectures.&lt;/p&gt;

&lt;p&gt;This also allows you to minimize the level of security exposure for all deployed functions and mitigate the impact of any attack. Least privilege access also ensures that each function does exactly what it was designed to do, helping you maintain compliance and improve your security posture.&lt;/p&gt;
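&lt;p&gt;In IAM terms, a least-privilege policy names only the exact actions and resources a function uses. The policy document below is a sketch (the table ARN and account ID are hypothetical), shown here as a Python dict for illustration:&lt;/p&gt;

```python
# Least-privilege sketch: this function may only read one DynamoDB
# table and nothing else. The table ARN is hypothetical.
read_orders_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:eu-west-1:123456789012:table/orders",
        }
    ],
}

# Anti-pattern for comparison: a wildcard grant such as
# {"Effect": "Allow", "Action": "*", "Resource": "*"} gives every
# function far more access than its logic requires.
```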

&lt;h2&gt;
  
  
  Monitor and Log Functions
&lt;/h2&gt;

&lt;p&gt;Once you start to use serverless architecture, where the provider takes care of tasks like infrastructure maintenance and scaling, things start to move quite quickly, as there is less work to do. And because serverless functions are stateless and event driven, it’s very easy to miss suspicious activity if you don’t have a good monitoring strategy. A better approach for preventing, detecting, and effectively managing security breaches is to adequately log and monitor security-related events. &lt;/p&gt;

&lt;p&gt;You can collect real-time logs from different cloud services and serverless functions, as well as periodically push the logs to a central security information and event-management system. Most cloud providers have a comprehensive log-aggregation service you can leverage. That way, it’s easier to do an audit trail that you can reference whenever you need to hunt security threats. When monitoring your serverless functions, you should collect reports on resource access, malware activities, network activity, authorization and authentication, critical failures, and errors.&lt;/p&gt;
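&lt;p&gt;One practical habit is to emit structured, machine-parseable logs from each function so the central aggregator can filter and alert on them. A minimal sketch (the handler shape and field names are hypothetical):&lt;/p&gt;

```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, _context=None):
    """Hypothetical Lambda-style handler that emits structured logs."""
    # One JSON object per line is easy for a central log-aggregation
    # service to parse, filter, and alert on (e.g., by "action").
    logger.info(json.dumps({
        "action": "order_processed",
        "source": event.get("source", "unknown"),
        "authorized": bool(event.get("user_id")),
    }))
    return {"statusCode": 200}
```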

&lt;h2&gt;
  
  
  Define IAM Roles for Each Function
&lt;/h2&gt;

&lt;p&gt;In some cases, a serverless app will contain hundreds, or even thousands, of functions, which makes managing roles and permissions a time-consuming and tedious task. In a bid to make this less demanding, some enterprises fall into a trap: setting a single, wildcard permission level for an entire app that consists of tons of functions. This approach might seem less harmful when experimenting in the sandbox environment, but it can be very dangerous. In fact, it actually increases the security risks faced by serverless applications, as most code in the sandbox environment finds its way to production. &lt;/p&gt;

&lt;p&gt;As you adopt a serverless architecture, you need to think about each function individually. You should also manage individual policies and roles for each function. As a rule of thumb, every serverless function within your application should have only the permissions it needs to complete its logic—nothing more. Even if all your functions have or begin with the same policy, you should always decouple the IAM roles to ensure least privilege access control for the future of your functions. &lt;/p&gt;

&lt;h2&gt;
  
  
  Summing Up
&lt;/h2&gt;

&lt;p&gt;New opportunities pave the way for new challenges, and serverless computing is no exception. Despite the security challenges and risks, serverless architecture is a very exciting technological evolution in the world of infrastructure and a boon to many enterprises. &lt;/p&gt;

&lt;p&gt;To address and mitigate security risks, you need to understand the serverless attack vectors and the unique challenges in serverless environments. Most importantly, you need to “shift left” and integrate security throughout the entire software-development lifecycle. &lt;/p&gt;

&lt;p&gt;All serverless applications work under the shared responsibility model, where compliance and security are a shared responsibility between the cloud provider and application owner. The cloud provider is responsible for securing the serverless infrastructure and cloud components (servers, databases, data centers, network elements, the operating system and its configuration, etc.). You are responsible for securing the application layer by enforcing legitimate app behavior, managing access to data and application code, monitoring for security incidents and errors, and so on.  &lt;/p&gt;

&lt;p&gt;Clearly, you need to invest heavily in securing your app before you can reap the benefits of serverless. In this article, I discussed the security risks you should watch out for and the best practices you should adopt to keep your serverless environments secure and safe against insecure coding practices, errors, and misconfigurations. Good luck!&lt;/p&gt;

&lt;p&gt;Are you a tech expert, blogger, influencer, writer, editor, or marketer? &lt;a href="https://iamondemand.com/iod-talent-network/"&gt;Join our talent network&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This blog post was originally posted on &lt;a href="https://iamondemand.com/blog/security-risks-and-challenges-in-the-serverless-world/"&gt;IOD Blog &lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
