<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Bassel Al Annan</title>
    <description>The latest articles on DEV Community by Bassel Al Annan (@bassel_alannan).</description>
    <link>https://dev.to/bassel_alannan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F518084%2Fd7d0eda1-c778-4071-8819-151089853c00.jpg</url>
      <title>DEV Community: Bassel Al Annan</title>
      <link>https://dev.to/bassel_alannan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bassel_alannan"/>
    <language>en</language>
    <item>
      <title>Transforming Retail with AI: Enhancing Efficiency, Personalization, and ROI</title>
      <dc:creator>Bassel Al Annan</dc:creator>
      <pubDate>Tue, 19 Nov 2024 14:43:35 +0000</pubDate>
      <link>https://dev.to/bassel_alannan/transforming-retail-with-ai-enhancing-efficiency-personalization-and-roi-521m</link>
      <guid>https://dev.to/bassel_alannan/transforming-retail-with-ai-enhancing-efficiency-personalization-and-roi-521m</guid>
<description>&lt;p&gt;The retail industry is becoming more competitive, driving companies to constantly seek innovative ways to boost efficiency, cut expenses, and maximize long-term returns on investment (ROI). One such solution is adopting artificial intelligence (AI) to simplify operations, improve accuracy, and unlock significant financial benefits. Indeed, the retail industry has witnessed considerable change over the past few years, driven largely by the rise of generative AI (GenAI).&lt;/p&gt;

&lt;p&gt;Born from retail and built for retailers, AWS is a pioneer in cloud services and is uniquely positioned to guide retailers through their transformation with a suite of AI solutions tailored for retail applications. In this blog post, we'll dive into the remarkable effect of integrating AWS AI-powered services in the retail industry, focusing on critical areas where AI can generate impressive cost reductions and drive sustainable long-term ROI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Retail Challenge: Unified Retail Experience&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Designing a unified retail experience that meets modern consumer expectations has long been a significant challenge, one made even more apparent by the COVID-19 pandemic and ongoing supply chain issues. These disruptions fragmented the consumer shopping experience and necessitated a comprehensive approach spanning both online and offline channels. Fortunately, AI now offers a critical way to fill these gaps: predictive analytics improve inventory management, and AI-powered personalization engines tailor interactions to individual preferences, helping retailers boost customer engagement across channels.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A Four-Stage Framework for Success:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;But how can retailers translate the concept of “AI” into practical, real-world applications within the retail industry, and how can they capitalize on it to stay ahead of the competition?&lt;/p&gt;

&lt;p&gt;According to the AWS Cloud Adoption Framework for Artificial Intelligence (CAF-AI), the following four stages should be followed for successful AI adoption:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Envision:&lt;/strong&gt; Identify AI opportunities and get everyone on board to meet business goals.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Align:&lt;/strong&gt; Work with different teams to make sure everyone supports AI adoption.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Launch:&lt;/strong&gt; Start small projects to show AI's benefits and learn from them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scale:&lt;/strong&gt; Grow successful projects into full operations to make a big impact on the organization.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key Use Cases in AI-Driven Retail:&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;Enhanced Customer Experience through Amazon Personalize:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;A study by Twilio found that 39% of businesses struggle with implementing personalization technology, while 62% of consumers expect personalized experiences and may switch brands if they don't get them. Clearly, offering personalized experiences in online shops benefits both retailers and customers; what's needed is a reliable tool that reduces the technical burden on retailers.&lt;/p&gt;

&lt;p&gt;Amazon Personalize is an AI/ML-powered service that uses your data to generate item recommendations for your users. It helps create custom shopping experiences and predicts product recommendations that match individual customer preferences. For example, a retailer could use Amazon Personalize to suggest accessories for a recently purchased item, enhancing the shopping experience, simplifying new content acquisition, and increasing conversion rates.&lt;/p&gt;
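
&lt;p&gt;To make this concrete, here is a minimal Python (boto3) sketch of fetching recommendations from a Personalize campaign at runtime. The campaign ARN and user ID are placeholders, not values from a real deployment:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

# Placeholder campaign ARN and user ID, for illustration only.
personalize_rt = boto3.client("personalize-runtime", region_name="us-east-1")

response = personalize_rt.get_recommendations(
    campaignArn="arn:aws:personalize:us-east-1:123456789012:campaign/retail-demo",
    userId="user-42",
    numResults=5,
)

# Each result carries an item ID and a relevance score.
for item in response["itemList"]:
    print(item["itemId"], item.get("score"))
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;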

&lt;p&gt;&lt;strong&gt;Optimizing Operations with AI-Driven Forecasting:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Efficient inventory management and accurate demand forecasting are essential for reducing costs and ensuring product availability. Retailers are now using AWS services like Amazon Forecast, combined with Amazon SageMaker, to get accurate predictions. These tools help track stock levels in real-time and forecast customer demand based on specific times, locations, historical data, and market trends.&lt;/p&gt;
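
&lt;p&gt;As a rough sketch of what querying a trained forecast looks like in Python (boto3), assuming a forecast has already been generated from historical demand data in Amazon Forecast (the ARN and item ID below are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

# Placeholder forecast ARN and SKU; assumes a forecast was already
# generated from historical demand data.
forecast_query = boto3.client("forecastquery", region_name="us-east-1")

response = forecast_query.query_forecast(
    ForecastArn="arn:aws:forecast:us-east-1:123456789012:forecast/store-demand",
    Filters={"item_id": "sku-1001"},
)

# Print the median (p50) demand prediction per timestamp.
for point in response["Forecast"]["Predictions"]["p50"]:
    print(point["Timestamp"], point["Value"])
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;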

&lt;p&gt;&lt;strong&gt;Intelligent Search and Product Substitution:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Clients often look for specific products and expect useful search features to improve their shopping experience. Online shopping websites are using intelligent search services like Amazon Kendra and Amazon OpenSearch to make searches more intuitive and responsive. For example, when clients type "running shoes" into the search bar at a sporting goods store, the results will show options that fit both "running" and "shoes." If they search for a specific dress that's out of stock, the website will suggest alternative dresses based on their shopping history and interests.&lt;/p&gt;
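
&lt;p&gt;For illustration, a typo-tolerant product search against an Amazon OpenSearch index could look like the following Python sketch using the opensearch-py client. The endpoint and index name are hypothetical, and a real setup would also configure authentication (for example, SigV4 or basic auth):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from opensearchpy import OpenSearch

# Hypothetical domain endpoint and index name.
client = OpenSearch(
    hosts=[{"host": "search-mystore.us-east-1.es.amazonaws.com", "port": 443}],
    use_ssl=True,
)

response = client.search(
    index="products",
    body={
        "query": {
            "multi_match": {
                "query": "running shoes",
                "fields": ["title", "description", "category"],
                "fuzziness": "AUTO",  # tolerate typos in user queries
            }
        }
    },
)

for hit in response["hits"]["hits"]:
    print(hit["_source"]["title"], hit["_score"])
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;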

&lt;p&gt;&lt;strong&gt;Real-Time Fraud Detection and Prevention:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Other common issues that online retailers face are credit card fraud and counterfeit item detection. This is where Amazon SageMaker, an AWS service that offers tools for building, training, and deploying machine learning (ML) models, can prove invaluable. SageMaker helps verify the authenticity of products by comparing uploaded images with official product photos to identify fakes. Additionally, it assists in detecting online transaction fraud by dynamically analyzing information about customers, including their purchase frequency and account activity duration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Content Generation:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI in retail doesn't end here! Most online shops rely on marketing campaigns and need smart solutions to create engaging content for their customers. Amazon Bedrock, a fully managed AWS service, provides retailers with high-performing models that can generate personalized marketing content. It tailors content to each user's interests and adds engaging themes based on related items, using data from social media or purchase history.&lt;/p&gt;
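
&lt;p&gt;As a minimal sketch, here is how a Python (boto3) backend might ask a Bedrock-hosted model for personalized marketing copy. The model ID is just an example; available models and request schemas vary by provider and region:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3, json

# Example model ID; substitute whichever model your account has access to.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

prompt = ("Write a two-sentence product blurb for trail-running shoes, "
          "aimed at a customer who recently bought hiking gear.")

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 200,
        "messages": [{"role": "user", "content": prompt}],
    }),
)

# The response body is a stream; parse it and print the generated text.
print(json.loads(response["body"].read())["content"][0]["text"])
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;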

&lt;p&gt;Summing it all up, AI in retail is no longer optional but a must for future innovation and the growth of companies in the retail industry. I invite you to join me on a journey to modernize your clients' shopping experience at all levels using our AI-driven solutions, powered by AWS.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aws</category>
      <category>retail</category>
      <category>genai</category>
    </item>
    <item>
      <title>How SAP on AWS - Specialty certification Can Boost Your Career: The Ultimate Guide to Exam Preparation</title>
      <dc:creator>Bassel Al Annan</dc:creator>
      <pubDate>Sun, 26 Mar 2023 17:43:55 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-sap-on-aws-specialty-certification-can-boost-your-career-the-ultimate-guide-to-exam-preparation-2gnf</link>
      <guid>https://dev.to/aws-builders/how-sap-on-aws-specialty-certification-can-boost-your-career-the-ultimate-guide-to-exam-preparation-2gnf</guid>
<description>&lt;p&gt;Thanks to the partnership between the two leaders, SAP and AWS, clients and partners can take advantage of a number of benefits, including flexibility, scalability, reliability, and security. By running SAP on AWS, customers can migrate their SAP workloads to the cloud and use AWS's infrastructure services, enabling them to lower costs, boost productivity, and foster innovation. Moreover, by combining these two potent technologies, businesses can build a scalable and adaptable infrastructure that can manage complex business processes and data. Needless to say, engineers with expertise in SAP on AWS are in great demand and can expect to make a good living.&lt;/p&gt;

&lt;h2&gt;The benefits of becoming an SAP on AWS certified engineer&lt;/h2&gt;

&lt;p&gt;By gaining this certification, you can demonstrate your expertise and proficiency in this field, making you a valuable asset to any organization. Moreover, the skills you learn while preparing for the exam will help you develop your technical capabilities and expand your knowledge in cloud computing and enterprise software. &lt;/p&gt;

&lt;p&gt;Fortunately, I was among the first to pass the AWS Certified: SAP on AWS - Specialty Beta Exam back in January 2022, and in this blog post, I'd like to share some of my notes that enabled me to pass this exam on the first attempt and in just two weeks.&lt;/p&gt;

&lt;h2&gt;How to prepare for the SAP on AWS certification exam&lt;/h2&gt;

&lt;p&gt;The best way to prepare for the SAP on AWS exam is to follow a structured approach that covers all the necessary topics and concepts. The AWS website offers a range of resources, including &lt;a href="https://aws.amazon.com/sap/docs/" rel="noopener noreferrer"&gt;whitepapers&lt;/a&gt;, &lt;a href="https://www.udemy.com/course/aws-certified-sap-on-aws-specialty/" rel="noopener noreferrer"&gt;tutorials&lt;/a&gt;, and &lt;a href="https://d1.awsstatic.com/training-and-certification/docs-sap-on-aws-specialty/SAP-on-AWS-Specialty_Sample-Questions.pdf" rel="noopener noreferrer"&gt;practice exams&lt;/a&gt;, to help you prepare. Additionally, SAP offers a certification program that includes comprehensive training materials and hands-on exercises. By combining these resources, you can gain a deep understanding of SAP on AWS and develop the skills you need to pass the exam with confidence. So, let's get into the 47 points you will need to nail this exam!&lt;/p&gt;

&lt;p&gt;1) The services most commonly used for SAP deployments on AWS are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon EC2: Virtual machine or Bare Metal server to host the SAP application &lt;/li&gt;
&lt;li&gt;Amazon EBS: Store root volumes, SAP HANA binaries, data, logs, shared files, and backups.&lt;/li&gt;
&lt;li&gt;Amazon S3: Store file and database backups and archival data.&lt;/li&gt;
&lt;li&gt;Amazon EFS: Store SAP application server files (e.g., /sapmnt) on a shared Linux file system. This can be used with a scale-out SAP HANA topology to store shared and backup file systems across multiple SAP HANA instances.&lt;/li&gt;
&lt;li&gt;Amazon FSx: Store SAP application server (e.g., /sapmnt) on a shared Windows file system&lt;/li&gt;
&lt;li&gt;Amazon VPC: Virtual network for your SAP workloads used to create environment subnets and network isolation.&lt;/li&gt;
&lt;li&gt;AWS VPN: Connect your on-prem datacenter with the AWS network.&lt;/li&gt;
&lt;li&gt;AWS Direct Connect: A dedicated, low-latency, high-bandwidth leased line that connects your datacenter with AWS. Normally used for moving large amounts of data at higher speeds during SAP migrations.&lt;/li&gt;
&lt;li&gt;Amazon Route 53: DNS resolution for SAP applications on AWS.&lt;/li&gt;
&lt;li&gt;Amazon Time Sync Service: Time synchronization for your SAP systems on EC2 instances.&lt;/li&gt;
&lt;li&gt;Amazon CloudWatch: Monitoring SAP systems running on AWS.&lt;/li&gt;
&lt;li&gt;AWS CloudTrail: Audit all AWS account API calls to get more visibility and better security on your SAP workloads.&lt;/li&gt;
&lt;li&gt;AWS CloudFormation: Automate SAP deployments and DR strategies using IaC.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2) Under the AWS Shared Responsibility Model, AWS is only responsible for managing the EC2 hypervisor, while you are responsible for managing the SAP application, databases, and operating system.&lt;/p&gt;

&lt;p&gt;3) You can use your current SAP license while migrating to AWS if it meets the SAP licensing policies.&lt;/p&gt;

&lt;p&gt;4) AWS does not provide or sell SAP licenses.&lt;/p&gt;

&lt;p&gt;5) The SAP Cloud Appliance Library provides users with preconfigured SAP environments that can be run automatically via a launch wizard.&lt;/p&gt;

&lt;p&gt;6) To receive full support for SAP on AWS, you must at least meet the following guidelines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Detailed Monitoring MUST be enabled in Amazon CloudWatch.&lt;/li&gt;
&lt;li&gt;The AWS Data Provider for SAP MUST be installed and configured on your SAP machines so that performance and configuration data is shared with your SAP monitoring tools.&lt;/li&gt;
&lt;li&gt;You MUST have either a Business Support or Enterprise Support plan.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;7) SAP applications can be deployed on AWS using 3 methods:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Manual&lt;/li&gt;
&lt;li&gt;SAP Cloud Appliance Library&lt;/li&gt;
&lt;li&gt;AWS Quick Start&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;8) SAP on AWS comes with two primary architectures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All systems on AWS&lt;/li&gt;
&lt;li&gt;Hybrid&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;9) In the all-systems-on-AWS architecture, all SAP components are either deployed from scratch on AWS or fully migrated from an on-prem datacenter to AWS.&lt;/p&gt;

&lt;p&gt;10) Guidelines for the SAP All-on-AWS architecture:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Network communication between the data center and AWS is handled through either a Site-to-Site VPN or AWS Direct Connect.&lt;/li&gt;
&lt;li&gt;SAProuter is deployed in a public subnet and assigned a public IP.&lt;/li&gt;
&lt;li&gt;The SAProuter should have a dedicated security group that controls the required inbound and outbound access to the SAP support network.&lt;/li&gt;
&lt;li&gt;SAProuter is a proxy used to connect your SAP environment with External Networks such as SAP OSS.&lt;/li&gt;
&lt;li&gt;SAP Solution Manager system and SAProuter should be installed on your AWS network and integrated to the SAP support network (SAP OSS) via a secure network communication (SNC).&lt;/li&gt;
&lt;li&gt;SAP OSS is the official SAP Online Support Network. It includes a knowledge base covering frequently released bug fixes and new enhancements, and helps you check whether a particular SAP Note is present in your SAP system.&lt;/li&gt;
&lt;li&gt;SNC stands for Secure Network Communications and is used to encrypt the connections between SAProuters.&lt;/li&gt;
&lt;li&gt;A NAT gateway is used to keep instances in a private subnet unreachable from the internet while still allowing them outbound connectivity.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;11) Guidelines for the Hybrid AWS Architecture:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Usually ideal for hosting Dev and Staging SAP environments on AWS while keeping production on-prem.&lt;/li&gt;
&lt;li&gt;The client VPC and the on-prem datacenter are connected via AWS VPN or AWS Direct Connect.&lt;/li&gt;
&lt;li&gt;SAP Systems on AWS are managed by the SAProuter and SAP Solution Manager running on-prem.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;12) If no HA is required, all SAP systems should be installed in a single Availability Zone; keeping the application and database tiers close minimizes latency.&lt;/p&gt;

&lt;p&gt;13) To receive SAP Support for your SAP NetWeaver environment you should be running &lt;a href="https://aws.amazon.com/sap/instance-types/" rel="noopener noreferrer"&gt;EC2 instances certified by SAP&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;14) &lt;a href="https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=iaas;ve:23" rel="noopener noreferrer"&gt;Specific EC2 instances&lt;/a&gt; are required to set up an SAP HANA solution on AWS.&lt;/p&gt;

&lt;p&gt;15) Operating Systems supported for SAP on AWS are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SUSE Linux Enterprise Server (SLES)&lt;/li&gt;
&lt;li&gt;SUSE Linux Enterprise Server for SAP Applications (SLES for SAP)&lt;/li&gt;
&lt;li&gt;Red Hat Enterprise Linux (RHEL)&lt;/li&gt;
&lt;li&gt;Red Hat Enterprise Linux for SAP Solutions (RHEL for SAP)&lt;/li&gt;
&lt;li&gt;Microsoft Windows Server&lt;/li&gt;
&lt;li&gt;Oracle Enterprise Linux&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;16) Operating System Licenses Considerations:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwjjgjmyt05ieiqe7u9ru.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwjjgjmyt05ieiqe7u9ru.png" alt="Operating System Licenses" width="800" height="235"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;17) Amazon RDS is only supported for SAP BusinessObjects BI and SAP Commerce (previously known as SAP Hybris Commerce).&lt;/p&gt;

&lt;p&gt;18) Amazon Aurora is only supported for SAP Commerce (previously known as SAP Hybris Commerce). &lt;/p&gt;

&lt;p&gt;19) Database Licenses Considerations:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F23b2eulzngecj1j801wa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F23b2eulzngecj1j801wa.png" alt="Database Licenses" width="800" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;20) You can download the SAP installation media to Amazon EC2 either from the SAP Download Center or directly from your own network.&lt;/p&gt;

&lt;p&gt;21) Ensure that you have sufficient resources via AWS Service Quotas before starting your SAP project.&lt;/p&gt;

&lt;p&gt;22) Use placement groups if you want to place all your Amazon EC2 instances in close proximity for low-latency networking.&lt;/p&gt;

&lt;p&gt;23) Amazon EBS io1 (Provisioned IOPS) volumes are highly recommended for mission-critical, production SAP HANA workloads.&lt;/p&gt;

&lt;p&gt;24) It is best practice to sync the SAP backups to Amazon S3 after the backup files are available on the EC2 instance.&lt;/p&gt;

&lt;p&gt;25) You can back up an SAP HANA system automatically using AWS Systems Manager Run Command together with Amazon CloudWatch Events.&lt;/p&gt;

&lt;p&gt;26) You can attach multiple security groups and ENIs to each SAP HANA machine to isolate client traffic, internal communication, and, if applicable, SAP HANA System Replication (HSR).&lt;/p&gt;

&lt;p&gt;27) Data Aging helps free up SAP HANA memory by moving older, less frequently accessed data to disk (available for SAP Business Suite on HANA (SoH) and SAP S/4HANA).&lt;/p&gt;

&lt;p&gt;28) You can achieve HA for SAP using Overlay IP routing with AWS Network Load Balancer or AWS Transit Gateway.&lt;/p&gt;

&lt;p&gt;29) In an SAP HANA cluster, the source/destination check must be disabled on both EC2 instances that are supposed to receive traffic from the Overlay IP address.&lt;/p&gt;

&lt;p&gt;30) The Overlay IP should be outside your VPC CIDR Range.&lt;/p&gt;
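
&lt;p&gt;As a rough Python (boto3) sketch of the API calls behind points 29 and 30; the instance IDs, route table ID, and overlay IP are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder IDs; the overlay IP (192.168.10.10/32 here) must fall
# outside the VPC CIDR range, as noted above.
for instance_id in ["i-0aaaaaaaaaaaaaaaa", "i-0bbbbbbbbbbbbbbbb"]:
    # Disable the source/destination check so the instance can receive
    # traffic addressed to the overlay IP.
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        SourceDestCheck={"Value": False},
    )

# Point the overlay IP at the currently active HANA node; a cluster
# agent would update this route on failover.
ec2.create_route(
    RouteTableId="rtb-0ccccccccccccccccc",
    DestinationCidrBlock="192.168.10.10/32",
    InstanceId="i-0aaaaaaaaaaaaaaaa",
)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;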

&lt;p&gt;31) SAP HANA has 6 HA/DR options on AWS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatic Recovery &amp;amp; HANA Backup/Restore&lt;/li&gt;
&lt;li&gt;Automatic Recovery &amp;amp; HSR without Data Preload (Warm Standby)&lt;/li&gt;
&lt;li&gt;Automatic Recovery &amp;amp; HSR without Data Preload (Warm Standby + Dev/QA)&lt;/li&gt;
&lt;li&gt;Automatic Recovery &amp;amp; HSR with Data Preload (Hot Standby)&lt;/li&gt;
&lt;li&gt;Automatic Recovery &amp;amp; Multi-Tier HSR (Hot Standby + Out-of-Region DR)&lt;/li&gt;
&lt;li&gt;Automatic Recovery &amp;amp; HSR with Amazon S3 Cross-Region Replication (Hot Standby + Out-of-Region DR)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb83ww4st3nc49bcxlwok.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb83ww4st3nc49bcxlwok.png" alt="Automatic Recovery &amp;amp; HSR" width="800" height="668"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;32) When the preload option is turned on, replicated data is always loaded into the memory of the secondary HANA instance for instant failover. The preload option is usually turned off to lower costs in the failover zone by reducing the instance memory size.&lt;/p&gt;

&lt;p&gt;33) EC2 Auto Scaling for SAP HANA is possible using EC2 snapshots and Amazon EFS.&lt;/p&gt;

&lt;p&gt;34) SAP Rapid Migration Test Program is used to migrate SAP ECC and SAP Business Warehouse to SAP HANA or SAP ASE on AWS using a special export and import process.&lt;/p&gt;

&lt;p&gt;35) The database migration option (DMO) of the Software Update Manager (SUM) is used for heterogeneous database migrations, for example migrating an SAP ABAP system to SAP HANA, or anyDB to SAP HANA.&lt;/p&gt;

&lt;p&gt;36) You can use AWS services such as Amazon S3, Amazon EFS (over AWS Direct Connect), AWS Storage Gateway file interface, and AWS Snowball to transfer your SAP files to AWS during your SAP migration from on-prem to AWS.&lt;/p&gt;

&lt;p&gt;37) AWS Backint Agent for SAP HANA (backup &amp;amp; restore) is used to back up SAP HANA databases directly to Amazon Simple Storage Service (S3) buckets.&lt;/p&gt;

&lt;p&gt;38) SAP HANA HSR is one way to migrate SAP HANA to AWS.&lt;/p&gt;

&lt;p&gt;39) SAP HANA Cockpit also lets you automate database backups through its backup scheduling feature (not recommended).&lt;/p&gt;

&lt;p&gt;40) Customers running SAP with Oracle DB on AWS can use the Oracle Secure Backup (OSB) Cloud Module to integrate Oracle backups with Amazon S3.&lt;/p&gt;

&lt;p&gt;41) AWS Systems Manager is the recommended way to automate HANA backups through the Command Document, Run Command, and Maintenance Windows features.&lt;/p&gt;
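
&lt;p&gt;As a minimal Python (boto3) sketch of triggering a HANA backup via Run Command, assuming the SSM Agent is installed and the hdbsql client is available on the host (the instance ID, user store key, and backup prefix are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

# Placeholder instance ID; the SQL follows SAP's documented
# BACKUP DATA USING BACKINT syntax.
response = ssm.send_command(
    InstanceIds=["i-0aaaaaaaaaaaaaaaa"],
    DocumentName="AWS-RunShellScript",
    Parameters={
        "commands": [
            "sudo -u hdbadm hdbsql -U BACKUP_KEY "
            "\"BACKUP DATA USING BACKINT ('daily_backup')\""
        ]
    },
)
print(response["Command"]["CommandId"])
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;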

&lt;p&gt;42) Oracle Data Guard, SIOS LifeKeeper, and Veritas InfoScale are three methods used to achieve HA for SAP workloads running on an Oracle database.&lt;/p&gt;

&lt;p&gt;43) SAP workloads running on anyDB databases can be backed up to Amazon EBS, with the backups then moved to Amazon S3. This can be automated using AWS Systems Manager.&lt;/p&gt;

&lt;p&gt;44) It is recommended to enable EC2 auto recovery to automatically recover impaired SAP instances using CloudWatch Alarms.&lt;/p&gt;
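
&lt;p&gt;A minimal Python (boto3) sketch of such a recovery alarm, with a placeholder instance ID:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Placeholder instance ID. The "recover" action migrates the instance
# to healthy hardware when the system status check fails.
cloudwatch.put_metric_alarm(
    AlarmName="sap-hana-auto-recover",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0aaaaaaaaaaaaaaaa"}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],
)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;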

&lt;p&gt;45) EC2 High Memory instances only support SUSE Linux Enterprise Server for SAP Applications (SLES for SAP) and Red Hat Enterprise Linux for SAP Solutions (RHEL for SAP) operating systems.&lt;/p&gt;

&lt;p&gt;46) u-*tb1.metal instances can only be launched as Amazon EC2 Dedicated Hosts with host tenancy.&lt;/p&gt;

&lt;p&gt;47) u-*tb1.metal instances that offer 6, 9, or 12 TB of memory can only be launched through the AWS CLI or APIs.&lt;/p&gt;

&lt;h3&gt;Wrapping up: The price, timing, and importance of SAP on AWS certification&lt;/h3&gt;

&lt;p&gt;The SAP on AWS exam costs $300, the same as all Professional and Specialty exams, and you have three hours to finish its 65 questions, which are either multiple choice or multiple response. Before sitting for the exam, expect to need an in-depth understanding of a variety of subjects, such as SAP HANA, SAP NetWeaver, SAP S/4HANA, and SAP Business Suite, and you must obtain a minimum score of 750 to pass! However, with the appropriate preparation, you can certainly raise your odds of success and earn this certification within a reasonable timeframe.&lt;/p&gt;

&lt;p&gt;In conclusion, preparing for the SAP on AWS - Specialty exam can enhance your career prospects by demonstrating your proficiency in this critical technology. Engineers skilled in leveraging the power of AWS are in high demand and can earn excellent salaries. My final advice for preparing for this exam is to follow a structured approach covering all necessary topics and to use the resources available on the AWS and SAP websites. Starting today can be the first step towards a brighter future. Happy learning!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>sap</category>
      <category>awscloud</category>
      <category>awscertified</category>
    </item>
    <item>
      <title>Methods to Secure Amazon AppStream and Amazon WorkSpaces</title>
      <dc:creator>Bassel Al Annan</dc:creator>
      <pubDate>Thu, 20 Oct 2022 15:33:23 +0000</pubDate>
      <link>https://dev.to/bassel_alannan/methods-to-secure-amazon-appstream-and-amazon-workspaces-1dmg</link>
      <guid>https://dev.to/bassel_alannan/methods-to-secure-amazon-appstream-and-amazon-workspaces-1dmg</guid>
<description>&lt;p&gt;Amazon AppStream and Amazon WorkSpaces were among the key technologies organizations used to enable their employees to work remotely through the COVID-19 pandemic. Since the pandemic, organizations have begun to appreciate the real benefits of using Desktop-as-a-Service and application streaming services in the cloud, such as agility, reliability, security, and being fully managed. Speaking of security, most clients have strict security requirements that must be met for compliance reasons. In today's blog, I will walk you through some best practices to help you secure Amazon AppStream and Amazon WorkSpaces.&lt;/p&gt;

&lt;p&gt;So, what are some of the security tools that AWS provides to you by default?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Security groups that act as a virtual firewall to control the traffic for one or more WorkSpaces.&lt;/li&gt;
&lt;li&gt;Network ACLs that act as a second line of defense.&lt;/li&gt;
&lt;li&gt;CloudWatch Events to monitor access.&lt;/li&gt;
&lt;li&gt;Volume encryption through AWS KMS integration.&lt;/li&gt;
&lt;li&gt;A CAPTCHA prompt to limit incorrect login attempts.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;However, some regulations require more and this is where we are going to discuss other topics that explain different ways to secure your Amazon AppStream and Amazon WorkSpaces environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Restricting Access by IP Address&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Although the API endpoints for Amazon AppStream and Amazon WorkSpaces, just like those of many other AWS services (Amazon RDS, Amazon S3, AWS Lambda), are public and accessible from the internet, you can still limit access to these services by IP address using the following methods:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon WorkSpaces:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Limit access to the WorkSpaces using the IP access control list feature. This comes out of the box with Amazon WorkSpaces and is straightforward to use directly from the console.&lt;/p&gt;
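
&lt;p&gt;As a rough Python (boto3) sketch, creating and associating an IP access control group could look like this; the CIDR range and directory ID are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

workspaces = boto3.client("workspaces", region_name="us-east-1")

# Placeholder CIDR and directory ID; only clients from the corporate
# range below will be able to connect.
group = workspaces.create_ip_group(
    GroupName="corporate-office",
    GroupDesc="Allow access from the office network only",
    UserRules=[{"ipRule": "203.0.113.0/24", "ruleDesc": "HQ egress range"}],
)

workspaces.associate_ip_groups(
    DirectoryId="d-1234567890",
    GroupIds=[group["GroupId"]],
)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;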

&lt;ul&gt;
&lt;li&gt;Amazon AppStream:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Limit access to AppStream using SAML 2.0-based authentication (AD FS, Azure AD, Okta, etc.). This requires configuring a source-IP-based filter as an inline policy on the SAML 2.0 federation IAM role.&lt;/p&gt;
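
&lt;p&gt;A minimal Python (boto3) sketch of attaching such an inline policy; the role name, stack ARN, and CIDR range are hypothetical:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3, json

iam = boto3.client("iam")

# Hypothetical role name, stack ARN, and CIDR range. The policy denies
# the streaming action whenever the request does not originate from the
# allowed corporate range.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "appstream:Stream",
        "Resource": "arn:aws:appstream:us-east-1:123456789012:stack/corp-stack",
        "Condition": {"NotIpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
    }],
}

iam.put_role_policy(
    RoleName="AppStreamSAMLFederationRole",
    PolicyName="RestrictStreamingBySourceIp",
    PolicyDocument=json.dumps(policy),
)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;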

&lt;p&gt;Another option would be using AWS PrivateLink endpoints and connecting to your AppStream Fleet through AWS VPN.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enabling Multi-Factor Authentication&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Unfortunately, MFA is still not an "out of the box" option for these two services; however, I will list some workarounds that enable you to use multi-factor authentication.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon WorkSpaces:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In WorkSpaces, the only way to enable MFA is through a &lt;a href="https://en.wikipedia.org/wiki/RADIUS" rel="noopener noreferrer"&gt;RADIUS server&lt;/a&gt; integrated with either an on-premises AD or an AWS Managed Microsoft AD. This approach allows you to use authenticator apps like Google Authenticator: the username and password are first authenticated against your Active Directory, and the RADIUS server is then responsible for authenticating the one-time password (OTP) generated by Google Authenticator. One open-source RADIUS implementation that can be used is &lt;a href="https://freeradius.org/" rel="noopener noreferrer"&gt;FreeRADIUS&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftt5try23uiohh80oudjf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftt5try23uiohh80oudjf.png" alt="Image description" width="788" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon AppStream:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Enforcing MFA for Amazon AppStream can only be achieved by configuring SAML 2.0 federation with your corporate directory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network Protection&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We have also seen cases where clients require network filtering on their WorkSpaces and AppStream fleets for compliance and regulatory reasons, such as PCI DSS Requirement 11.4, which mandates intrusion detection and intrusion prevention systems. Others prefer to conduct domain name filtering to block specific fully qualified domain names (FQDNs) from being accessed within their VPC.&lt;/p&gt;

&lt;p&gt;Previously, clients had to route their ingress and egress traffic through their on-premises firewalls or purchase a firewall appliance subscription from AWS Marketplace to protect their network from Layer 3 to Layer 7 attacks such as IP spoofing, viruses, worms, and trojans. Others relied on securing their network using only security groups and network ACLs to block specific IP addresses and ports. Luckily, AWS announced the general availability of AWS Network Firewall back in November 2020, and it was a game changer for such scenarios. AWS Network Firewall is a fully managed service that helps clients protect their network across their Amazon VPCs and can also act as an IDS/IPS for network flow inspection.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwjbbadqwhy8tu9lhvq5o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwjbbadqwhy8tu9lhvq5o.png" alt="Image description" width="754" height="756"&gt;&lt;/a&gt;&lt;/p&gt;
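
&lt;p&gt;Complementing the diagram above, here is a rough Python (boto3) sketch of creating a stateful rule group for domain filtering; the name, capacity, and domain are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

nfw = boto3.client("network-firewall", region_name="us-east-1")

# Hypothetical rule group that denies outbound access to a specific
# domain; capacity and naming are placeholders.
nfw.create_rule_group(
    RuleGroupName="deny-example-domains",
    Type="STATEFUL",
    Capacity=100,
    RuleGroup={
        "RulesSource": {
            "RulesSourceList": {
                "Targets": [".example.com"],
                "TargetTypes": ["HTTP_HOST", "TLS_SNI"],
                "GeneratedRulesType": "DENYLIST",
            }
        }
    },
)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;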

&lt;p&gt;In this blog, we have discussed some of the many security solutions that can be applied to both Amazon WorkSpaces and Amazon AppStream 2.0 to provide your workforce and organization with robust application streaming and desktop-as-a-service environments. I hope this was informative; stay tuned for more interesting blogs.&lt;/p&gt;

</description>
      <category>vdi</category>
      <category>aws</category>
      <category>security</category>
      <category>daas</category>
    </item>
    <item>
      <title>Migrating Oracle E-Business Suite To AWS</title>
      <dc:creator>Bassel Al Annan</dc:creator>
      <pubDate>Fri, 07 Oct 2022 16:53:29 +0000</pubDate>
      <link>https://dev.to/bassel_alannan/migrating-oracle-e-business-suite-to-aws-hd</link>
      <guid>https://dev.to/bassel_alannan/migrating-oracle-e-business-suite-to-aws-hd</guid>
<description>&lt;p&gt;AWS's experience with Oracle spans more than a decade, with over 10 years of Oracle application experience (JD Edwards, E-Business Suite, PeopleSoft), 1,000+ Oracle EBS instances running on AWS today, and 100k+ Oracle applications running on AWS.&lt;/p&gt;

&lt;p&gt;Organizations running Oracle E-Business Suite workloads are looking for different ways to migrate to AWS but are hesitant when it comes to the technology needed to complete this migration with minimal downtime and real-time replication.&lt;/p&gt;

&lt;p&gt;This is where we usually start our discovery phase with clients, and the primary focus is asking about the expected RPO and RTO for the migration. In brief, the Recovery Time Objective (RTO) is the amount of time a workload needs to restore its processes after a disaster, whereas the Recovery Point Objective (RPO) is the maximum amount of data loss, measured in time, that an organization can tolerate after recovering from a disaster. For example, an RPO of 6 hours means your replication or backups must capture changes at least every 6 hours.&lt;/p&gt;

&lt;p&gt;This blog will target a potential migration that requires an RTO of 12 hours and an RPO of 6 hours. Planning such a migration is sometimes tricky, so expect a thorough analysis of the currently running Oracle EBS environment in order to produce a well-defined solution. For that reason, I have written down some questions that should be taken into consideration during your assessment phase:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;How many Applications and Databases will be part of this migration?&lt;/li&gt;
&lt;li&gt;What current tools are used to back up your Oracle EBS Workloads?&lt;/li&gt;
&lt;li&gt;Are you running your current production databases underlying Oracle E-Business Suite on Oracle Real Application Clusters (Oracle RAC)?&lt;/li&gt;
&lt;li&gt;What operating system and Oracle version are currently in use for the application and database servers? (You can collect this using application discovery tools such as AWS Application Discovery Service and Cloudamize)&lt;/li&gt;
&lt;li&gt;Do you have any plans to upgrade Oracle EBS or the DBMS in the near future?&lt;/li&gt;
&lt;li&gt;What is your current backup frequency for the running Oracle EBS workloads?&lt;/li&gt;
&lt;li&gt;Would you prefer to BYOL for the running operating systems or have it supplied by AWS?&lt;/li&gt;
&lt;li&gt;What is the current total data storage?&lt;/li&gt;
&lt;li&gt;Do you require Database Encryption at rest?&lt;/li&gt;
&lt;li&gt;Do you have any 3rd party integrations with the running Oracle EBS application?&lt;/li&gt;
&lt;li&gt;Do you have a running Oracle Enterprise support plan?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Replicating Oracle EBS to AWS can be achieved with several methods and tools, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data Guard&lt;/li&gt;
&lt;li&gt;Physical Standby&lt;/li&gt;
&lt;li&gt;RMAN&lt;/li&gt;
&lt;li&gt;Transportable Tablespaces&lt;/li&gt;
&lt;li&gt;DataPump&lt;/li&gt;
&lt;li&gt;AWS Snowball&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this scenario, we will assume that the client uses RMAN to automate the backup and restore process and that their Oracle license does not cover Oracle Data Guard. However, if you would like to achieve minimal downtime by using Oracle Data Guard, the migration sequence would look as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Replicate the database from On-prem to AWS using Data Guard&lt;/li&gt;
&lt;li&gt;Establish the replication from On-prem to AWS using Data Guard/Standby&lt;/li&gt;
&lt;li&gt;Replicate the Oracle application files from on-prem to AWS using tools such as rsync or AWS Application Migration Service (AWS MGN)&lt;/li&gt;
&lt;li&gt;Recover the standby node&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So, let's get started with the steps required to migrate your Oracle E-Business Suite to Amazon EC2 using RMAN backup-based duplication, an Oracle-native method recommended by AWS for migrating Oracle EBS applications.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Run pre-clone on the source database and application nodes&lt;/li&gt;
&lt;li&gt;Tar and compress the RMAN database backup together with the archive logs, and copy it to the target node&lt;/li&gt;
&lt;li&gt;Clean up the target database and application node&lt;/li&gt;
&lt;li&gt;Copy the source application binaries and database binaries to the target node&lt;/li&gt;
&lt;li&gt;Catalog the backup pieces and restore the database on Amazon EC2&lt;/li&gt;
&lt;li&gt;Run post-restore on the Target database node&lt;/li&gt;
&lt;li&gt;Untar the application stack on the target EC2 node&lt;/li&gt;
&lt;li&gt;Run post-clone steps on the application node&lt;/li&gt;
&lt;li&gt;Sync your $APPLCSF directory, which includes all your concurrent log ($APPLLOG) and output ($APPLOUT) files&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Some might ask whether Amazon DMS can do all this by replicating the data directly from Oracle EBS to either Amazon RDS or Amazon EC2. Although Amazon DMS looks like a great solution for copying database data from source to destination, AWS doesn't recommend this approach for E-Business Suite databases: replicating or migrating an Oracle E-Business Suite database is unlike any other Oracle database, and you cannot simply use GoldenGate or Amazon DMS to replicate the entire database. Specific data types used in the application might not work well with Amazon DMS, and supportability may be a challenge post-migration.&lt;/p&gt;

&lt;p&gt;In this blog post, we briefly discussed one of the migration patterns used to migrate Oracle E-Business Suite to AWS in a single-zone architecture. In my next post, I will explain how you can perform this migration with real-time replication of the Oracle database and demonstrate the methods and AWS tools needed to achieve a highly available and resilient Oracle EBS environment on AWS.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;References:&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://d1.awsstatic.com/whitepapers/migrate-oracle-e-business-suite.pdf?did=wp_card&amp;amp;trk=wp_card" rel="noopener noreferrer"&gt;https://d1.awsstatic.com/whitepapers/migrate-oracle-e-business-suite.pdf?did=wp_card&amp;amp;trk=wp_card&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-oracle-e-business-suite-to-amazon-rds-custom.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-oracle-e-business-suite-to-amazon-rds-custom.html&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>oracle</category>
      <category>database</category>
      <category>migration</category>
    </item>
    <item>
      <title>Deploying NodeJS Application on Amazon EC2 using AWS CodePipeline</title>
      <dc:creator>Bassel Al Annan</dc:creator>
      <pubDate>Thu, 21 Jan 2021 04:15:46 +0000</pubDate>
      <link>https://dev.to/aws-builders/deploying-nodejs-application-on-amazon-ec2-using-aws-codepipeline-20i1</link>
      <guid>https://dev.to/aws-builders/deploying-nodejs-application-on-amazon-ec2-using-aws-codepipeline-20i1</guid>
      <description>&lt;p&gt;Although most developers are shifting to serverless and containerized architectures for building their applications, EC2 instances are still among the most popular and widely used AWS services. In this blog, I will walk you through the steps required to deploy your scalable NodeJS applications on Amazon EC2 using AWS CodePipeline and cover some of the challenges you might face while setting up this solution. It might seem simple at first, but trust me, it requires more effort than you expect, and that's the main reason I am writing this blog today.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Okay, enough said, now let's rock and roll!&lt;/strong&gt; 🎸&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Services covered in this blog:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/ec2/" rel="noopener noreferrer"&gt;Amazon EC2&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/codepipeline/" rel="noopener noreferrer"&gt;AWS CodePipeline EC2&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/codebuild/" rel="noopener noreferrer"&gt;AWS CodeBuild&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/codedeploy/" rel="noopener noreferrer"&gt;AWS CodeDeploy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://nodejs.org/en/" rel="noopener noreferrer"&gt;NodeJS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/elasticloadbalancing/" rel="noopener noreferrer"&gt;Elastic Load Balancing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/autoscaling" rel="noopener noreferrer"&gt;Amazon Auto Scaling&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pm2.keymetrics.io/" rel="noopener noreferrer"&gt;PM2&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.nginx.com/" rel="noopener noreferrer"&gt;NGINX&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I will assume that you have successfully set up your underlying infrastructure using your preferred method (manually, CDK, CloudFormation, Terraform, etc.).&lt;/p&gt;

&lt;p&gt;So, you have set up your EC2 instances, CodeDeploy agent, and Auto Scaling group, installed the latest NGINX, NodeJS, and PM2 versions on the EC2 instances, and are ready to deploy your NodeJS application via AWS CodePipeline. First, you create a new pipeline project and connect it to your source provider (GitHub, for example), then add CodeBuild for compiling your source code and running unit tests, and finally choose AWS CodeDeploy for deploying your latest releases to Amazon EC2 through the deployment group. The tricky part comes with the buildspec.yml and appspec.yml files, where you define the commands used to build and deploy your code. The first thing that comes to mind is creating the buildspec and appspec files below.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;buildspec.yml file&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 10
    commands:
      - echo Installing
  pre_build:
    commands:
      - echo Installing source NPM dependencies.
      - npm install
  build:
    commands:
      - echo Build started on `date`
      - echo Compiling the Node.js code
      - npm run build
  post_build:
    commands:
      - echo Build completed on `date`
artifacts:
  files:
    - '**/*'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;appspec.yml file&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 0.0
os: linux
files:
  - source: /
    destination: /usr/share/nginx/html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You push your code to your version control system (GitHub in our case) and trigger your first CodePipeline pipeline, and guess what? The pipeline completes successfully at this stage. Excited, we try to run our node script using "npm start", but suddenly we get the below error:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Error: Cannot find module '../package.json'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;But how? We are pretty sure that our package.json file is located under the root directory and the libraries are in the node_modules folder. Honestly speaking, the only manual fix for this issue is to run &lt;code&gt;npm rebuild&lt;/code&gt;, or just remove the node_modules folder and run &lt;code&gt;npm install&lt;/code&gt; again on the EC2 instance, likely because the dependencies installed in the CodeBuild container don't always match the EC2 instance's environment. After doing that, you will be able to start your node script. That's great, but it doesn't meet our requirements: we are looking for a fully automated deployment with zero human intervention. Luckily, the &lt;a href="https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html" rel="noopener noreferrer"&gt;lifecycle event hooks&lt;/a&gt; section of the CodeDeploy appspec.yml file solves this for us: we create a couple of bash scripts that replace the "npm install and build" steps executed by CodeBuild, leaving AWS CodeBuild to handle only the test phase. Here's what our two files look like now:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;buildspec.yml file&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 0.2
phases:
  pre_build:
    commands:
      - echo Installing source NPM dependencies...
      - npm install
  build:
    commands:
      - echo Build started on `date`
      - echo Compiling the Node.js code
      - echo Running unit tests
      - npm test
  post_build:
    commands:
      - echo Build completed on `date`
artifacts:
  files:
    - '**/*'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;appspec.yml file&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 0.0
os: linux
files:
  - source: /
    destination: /usr/share/nginx/html
hooks:
  BeforeInstall:
    - location: scripts/BeforeInstallHook.sh
      timeout: 300
  AfterInstall:
    - location: scripts/AfterInstallHook.sh
      timeout: 300
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;BeforeInstall: Use to run tasks on the instance before the new revision files are copied, such as backing up the current version or updating system packages. In our case, the hook refreshes the OS packages and updates the PM2 daemon.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
set -e
yum update -y
pm2 update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;AfterInstall: Use to run tasks after the revision files have been copied to the instance, such as installing dependencies or changing file permissions. A failed script at this lifecycle event can trigger a rollback. In our case, the hook installs the npm dependencies and builds the application directly on the instance.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
set -e
cd /usr/share/nginx/html
npm install
npm run build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; We set the &lt;code&gt;set -e&lt;/code&gt; flag so that our scripts stop executing as soon as a command fails.&lt;/p&gt;

&lt;p&gt;Another issue you might face even after updating your appspec and buildspec files is: &lt;code&gt;The deployment failed because a specified file already exists at this location: /usr/share/nginx/html/.cache/plugins/somefile.js&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;In our case, we will solve this by simply asking CodeDeploy to replace already existing files using the &lt;code&gt;overwrite: true&lt;/code&gt; option.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Final appspec.yml file&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 0.0
os: linux
files:
  - source: /
    destination: /usr/share/nginx/html
    overwrite: true
hooks:
  BeforeInstall:
    - location: scripts/BeforeInstallHook.sh
      timeout: 300
  AfterInstall:
    - location: scripts/AfterInstallHook.sh
      timeout: 300
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Perfect, we have reached a stage where, after AWS CodePipeline completes successfully, we can start our npm script without facing any issues. It's time to automatically restart our application upon every new deployment using PM2, a process manager responsible for running and managing our Node.js applications.&lt;/p&gt;

&lt;p&gt;Simply run &lt;code&gt;sudo npm install pm2@latest -g&lt;/code&gt; on your EC2 instances, then generate the PM2 ecosystem.config.js file, which declares the applications/services you would like PM2 to run, by executing the command &lt;code&gt;pm2 ecosystem&lt;/code&gt;. PM2 will generate a sample file for you, so make sure it matches your application structure.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;ecosystem.config.js file&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module.exports = {
  apps : [{
    name: "npm",
    cwd: '/usr/share/nginx/html',
    script: "npm",
    args: 'start',
    env: {
      NODE_ENV: "production",
      HOST: '0.0.0.0',
      PORT: '3000',
    },
  }]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this stage, you can simply run &lt;code&gt;pm2 start ecosystem.config.js&lt;/code&gt; and PM2 will start your application for you. But that's not PM2's only power: it can also restart your application automatically upon every new release if you include the watch parameter in the ecosystem.config.js file.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Final ecosystem.config.js file&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module.exports = {
  apps : [{
    name: "npm",
    cwd: '/usr/share/nginx/html',
    script: "npm",
    args: 'start',
    watch: true,
    env: {
      NODE_ENV: "production",
      HOST: '0.0.0.0',
      PORT: '3000',
    },
  }]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wonderful! We have set up a fully automated deployment pipeline that runs unit tests, installs, builds, and deploys the node modules on the Amazon EC2 instances; PM2 then takes care of restarting the application for us.&lt;/p&gt;

&lt;p&gt;Okay, what if our server gets rebooted for some reason? We want our app to start automatically, and this can be accomplished with the &lt;code&gt;pm2 startup&lt;/code&gt; command, executed after starting your application.&lt;/p&gt;
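
&lt;p&gt;As a minimal sketch, assuming a systemd-based Amazon Linux instance and the ec2-user account (adjust the user and home path to your setup):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
# Generate and install a systemd unit so the PM2 daemon starts on boot.
sudo pm2 startup systemd -u ec2-user --hp /home/ec2-user
# Freeze the currently running process list so PM2 resurrects it after reboot.
pm2 save
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;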

&lt;p&gt;Have we missed anything so far? Oh yes! &lt;strong&gt;Autoscaling&lt;/strong&gt;&lt;br&gt;
We want to make sure that our production environment is scalable enough to accommodate heavy loads on our application.&lt;/p&gt;

&lt;p&gt;This can easily be set up through AWS CodeDeploy by updating the deployment group's environment configuration from "Amazon EC2 instances" (the tagging strategy) to "Amazon EC2 Auto Scaling groups". This is a great AWS CodeDeploy feature: it deploys your latest revision to new instances automatically while keeping your desired number of instances healthy throughout the deployment. However, we face another challenge here. &lt;code&gt;pm2 startup&lt;/code&gt; makes sure that your application starts after an instance reboot, but unfortunately it doesn't help when the Auto Scaling group launches new instances, so the application doesn't automatically run in the event of horizontal scaling. But don't worry, I've got your back!&lt;/p&gt;

&lt;p&gt;In order to solve this issue, go to your Launch Configuration settings and add the bash script below to the "user data" section.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash -ex
# restart pm2 and thus node app on reboot
crontab -l | { cat; echo "@reboot sudo pm2 start /usr/share/nginx/html/ecosystem.config.js -i 0 --name \"node-app\""; } | crontab -
# start the server
pm2 start /usr/share/nginx/html/ecosystem.config.js -i 0 --name "node-app"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;There you go! Now you have a highly scalable NodeJS Application that is fully automated using AWS CodePipeline.&lt;/strong&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;I hope this blog has been informative for you all. I have tried as much as possible to make this blog read like a story, because the main purpose of writing it is to show you the many challenges DevOps engineers and developers face when setting up this solution and the various ways to solve them. I will keep updating this project and make sure it has an improvement plan, because I know it can be even better!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;References:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://regbrain.com/article/node-nginx-ec2" rel="noopener noreferrer"&gt;https://regbrain.com/article/node-nginx-ec2&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pm2.keymetrics.io/docs/usage/startup" rel="noopener noreferrer"&gt;https://pm2.keymetrics.io/docs/usage/startup&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-a-node-js-application-for-production-on-ubuntu-20-04" rel="noopener noreferrer"&gt;https://www.digitalocean.com/community/tutorials/how-to-set-up-a-node-js-application-for-production-on-ubuntu-20-04&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cloudnweb.dev/2019/12/a-complete-guide-to-aws-elastic-load-balancer-using-nodejs/" rel="noopener noreferrer"&gt;https://cloudnweb.dev/2019/12/a-complete-guide-to-aws-elastic-load-balancer-using-nodejs/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pm2.keymetrics.io/docs/usage/watch-and-restart/" rel="noopener noreferrer"&gt;https://pm2.keymetrics.io/docs/usage/watch-and-restart/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pm2.keymetrics.io/docs/usage/application-declaration/#cli" rel="noopener noreferrer"&gt;https://pm2.keymetrics.io/docs/usage/application-declaration/#cli&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>javascript</category>
      <category>node</category>
      <category>devops</category>
      <category>aws</category>
    </item>
    <item>
      <title>Serverless – The Deployment Era</title>
      <dc:creator>Bassel Al Annan</dc:creator>
      <pubDate>Wed, 13 Jan 2021 00:48:43 +0000</pubDate>
      <link>https://dev.to/aws-builders/serverless-the-deployment-era-5dd2</link>
      <guid>https://dev.to/aws-builders/serverless-the-deployment-era-5dd2</guid>
      <description>&lt;p&gt;&lt;strong&gt;Keep the business running – always!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Traditionally, developers used to publish new releases to production by updating the Lambda function code for version $LATEST or by pointing an alias to a new function version. Fortunately, AWS introduced Lambda traffic shifting and phased deployments with AWS CodeDeploy. This was definitely a game-changer, and many DevOps engineers adopted this strategy for their day-to-day deployments. However, this was not the case for those using the Serverless Framework, as it didn't support this type of deployment process until the Serverless Plugin Canary Deployments came into effect, implementing canary deployments of Lambda functions and making use of the Lambda traffic shifting feature in combination with AWS CodeDeploy.&lt;/p&gt;

&lt;p&gt;In this post, we will look at how to completely transform the application release approach for such cases and achieve an almost zero-downtime, fully automated deployment strategy with the help of several AWS services, some of which we will discuss later in this blog.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deploying Serverless the right way:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The architecture has been built entirely with the Serverless Framework, which is open-source software that builds, compiles, and packages code for serverless deployment, and then deploys the package to the cloud. The Serverless Framework has also been used to provision serverless AWS services such as AWS Lambda, Amazon Cognito, Amazon API Gateway, Amazon DynamoDB, and many more. In this blog, I will introduce a fully automated CI/CD pipeline using AWS CodePipeline for the new deployment strategy, given the many features this service supports in conjunction with AWS CodeBuild and AWS CodeDeploy. This solution relies on the canary deployment strategy, which shifts a portion of traffic to a new Lambda version over a period of time until all the traffic has been shifted to the new version.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How does this exactly work in a fully automated CI/CD pipeline?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Developers will still push new code to their GitHub repository as usual, but this time CodePipeline starts a source stage within the pipeline to pull the latest code changes from a GitHub branch, and AWS CodeBuild runs the build stage, installing all the Serverless plugins and libraries and executing the "sls deploy" command. At this stage, AWS CodeDeploy deploys the latest code to a new Lambda version and uses the "Live" weighted alias created in the build step to gradually distribute traffic between the latest version and the currently running one. Here's where the canary deployment plugin comes into play: it distributes the traffic between the Lambda versions and uses the Linear10PercentEvery1Minute deployment preference type to shift traffic gradually at one-minute intervals. Moreover, CodeDeploy is also configured to monitor the Lambda function deployment status through Amazon CloudWatch so that it can roll back to the previous running version if the "Errors" metric triggers a CloudWatch alarm, which indicates that the function has failed due to timeouts, memory issues, or unhandled exceptions.&lt;/p&gt;
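
&lt;p&gt;To give you an idea of what this looks like in practice, here is a minimal serverless.yml sketch based on the canary deployments plugin's configuration; the service, function, handler, and alarm names are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;service: my-service

provider:
  name: aws
  runtime: nodejs12.x

plugins:
  - serverless-plugin-canary-deployments

functions:
  hello:
    handler: handler.hello
    deploymentSettings:
      type: Linear10PercentEvery1Minute  # shift 10% of traffic per minute
      alias: Live                        # weighted alias managed by CodeDeploy
      alarms:
        - HelloErrorsAlarm               # CloudWatch alarm that triggers rollback
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;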

&lt;p&gt;In this blog, I wanted to explain this approach and the benefits it brings to organizations, but I will also do my best to write another blog with a step-by-step guide on how to deploy your serverless infrastructure using the canary deployment strategy.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
    </item>
    <item>
      <title>AWS Fargate: From Start to DevSecOps</title>
      <dc:creator>Bassel Al Annan</dc:creator>
      <pubDate>Wed, 13 Jan 2021 00:47:20 +0000</pubDate>
      <link>https://dev.to/bassel_alannan/aws-fargate-from-start-to-devsecops-3l1k</link>
      <guid>https://dev.to/bassel_alannan/aws-fargate-from-start-to-devsecops-3l1k</guid>
      <description>&lt;p&gt;&lt;strong&gt;Test, Roll Back, and Deploy:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this blog, I will go through each AWS service that was leveraged to build a robust infrastructure, such as Amazon ECS on AWS Fargate, AWS CodePipeline, AWS CodeBuild, AWS CodeDeploy, AWS CDK, and several other services that we will discuss later in this article. Together, these services help you securely store and version-control your Node.js application source code, and automatically build, test, and deploy your application to AWS.&lt;/p&gt;

&lt;p&gt;To start with, instead of manually creating all the services mentioned above, AWS CDK (TypeScript) was used to automate the provisioning of the infrastructure shown in the diagram below, using code reviews and revision control to review stack changes and keep an accurate history of the running resources. AWS CDK is an open-source software development framework for defining your cloud application resources using familiar programming languages.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frp4o1nthkopygq9ff1e8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frp4o1nthkopygq9ff1e8.png" alt="Alt Text" width="800" height="603"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will now go through the provisioning process of this application and explain how a fully functional production environment was created with the help of AWS CDK stacks to achieve a successful Dockerized deployment with minimal effort. Three stacks were created: the first is responsible for all the networking components of this infrastructure (VPC, subnets, NAT gateway, internet gateway, etc.); the second creates the ECS cluster along with its task definition and ECS service, plus the Application Load Balancer with two target groups; and the third creates the CI/CD pipeline with all the configuration needed to complete this setup. One thing to mention here is that CDK for CodeDeploy does not currently support the ECS Blue/Green deployment feature, so this had to be configured manually from the console.&lt;/p&gt;

&lt;p&gt;The first stage of the pipeline fetches the latest Git revision of the Dockerized application from GitHub and hands it to the build stage. AWS CodeBuild then builds our Docker image and pushes it to Amazon ECR, which automatically scans the image and returns all discovered vulnerabilities along with the severity of outdated libraries and CVE records. So, what if a high or medium severity vulnerability is found? In that case, a bash script executed in the buildspec.yml checks for high and medium severity vulnerabilities and fails the build automatically when a security threat is discovered, preventing the pipeline from starting the deployment stage; otherwise, the pipeline proceeds to deployment.&lt;/p&gt;
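
&lt;p&gt;A minimal sketch of such a check, assuming the AWS CLI and jq are available in the CodeBuild environment and that REPO_NAME and IMAGE_TAG are exported by earlier build steps (the names are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
set -e
# Wait until the automatic ECR scan of the freshly pushed image completes.
aws ecr wait image-scan-complete --repository-name "$REPO_NAME" \
  --image-id imageTag="$IMAGE_TAG"

# Count the HIGH and MEDIUM severity findings returned by the scan.
FINDINGS=$(aws ecr describe-image-scan-findings --repository-name "$REPO_NAME" \
  --image-id imageTag="$IMAGE_TAG" \
  | jq '[.imageScanFindings.findings[]? | select(.severity == "HIGH" or .severity == "MEDIUM")] | length')

# Fail the build so CodePipeline never reaches the deployment stage.
if [ "$FINDINGS" -gt 0 ]; then
  echo "Found $FINDINGS high/medium severity vulnerabilities -- failing the build."
  exit 1
fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;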

&lt;p&gt;After the build completes successfully, CodeDeploy is triggered and starts by deploying a replacement task set (the green task set) with the latest task definition and Docker image attached to it. Once the ECS task status is "Running", CodeDeploy uses the AllAtOnce method to reroute production traffic to the new replacement task set via the green ALB target group, and it automatically rolls back to the last known good application revision if a deployment fails. Finally, CodeDeploy waits for 1 hour before terminating the original (blue) task set.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fii5d8bitmozt515umaiu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fii5d8bitmozt515umaiu.png" alt="Alt Text" width="800" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In conclusion:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS provides users with several deployment strategies to match each use case and scenario (canary, linear, all-at-once). Moreover, ECS Blue/Green deployments with CodeDeploy also support adding a test listener port if you want to validate your replacement version before traffic is rerouted to it. This allows you to run validation tests with the help of the "AfterAllowTestTraffic" lifecycle hook, a Lambda function that can be referenced in your AppSpec.yml file, as sketched below. I think it's really exciting to start getting your hands dirty with these deployments and gradually improve your use cases to reach a fully DevSecOps strategy that best fits your organizational culture.&lt;/p&gt;
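
&lt;p&gt;For reference, a minimal sketch of an ECS AppSpec file wiring up the test-traffic validation hook; the container name, port, and Lambda function name are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: &lt;TASK_DEFINITION&gt;  # injected by CodePipeline at deploy time
        LoadBalancerInfo:
          ContainerName: "app"             # placeholder container name
          ContainerPort: 3000              # placeholder port
Hooks:
  - AfterAllowTestTraffic: "ValidateTestTrafficLambda"  # placeholder Lambda function
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;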

&lt;p&gt;Happy deploying!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
