<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Daniel Dominguez</title>
    <description>The latest articles on DEV Community by Daniel Dominguez (@dominguezdaniel).</description>
    <link>https://dev.to/dominguezdaniel</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F502480%2F643e5f7f-6959-4426-afea-6d9c356f0cc2.jpg</url>
      <title>DEV Community: Daniel Dominguez</title>
      <link>https://dev.to/dominguezdaniel</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dominguezdaniel"/>
    <language>en</language>
    <item>
      <title>Studying for AWS Certified Solutions Architect</title>
      <dc:creator>Daniel Dominguez</dc:creator>
      <pubDate>Tue, 07 Mar 2023 14:12:08 +0000</pubDate>
      <link>https://dev.to/dominguezdaniel/studying-for-aws-certified-solutions-architect-253i</link>
      <guid>https://dev.to/dominguezdaniel/studying-for-aws-certified-solutions-architect-253i</guid>
      <description>&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/H1ryXS9cDGw"&gt;
  &lt;/iframe&gt;
&lt;br&gt;
The “&lt;a href="https://rebrand.ly/devshelf-001" rel="noopener noreferrer"&gt;AWS Certified Solutions Architect Study Guide: Associate SAA-C01 Exam 2nd Edition&lt;/a&gt;” by Ben Piper and David Clinton is a comprehensive guide to prepare for the Amazon Web Services (AWS) Certified Solutions Architect — Associate exam. The book covers all the relevant topics, including the latest updates and best practices in the cloud computing industry.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Does This Book Cover?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This book covers topics you need to know to prepare for the Amazon Web Services (AWS) Certified Solutions Architect — Associate exam:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Chapter 1: Introduction to Cloud Computing and AWS. This chapter provides an overview of the AWS Cloud computing platform and its core services and concepts.&lt;/p&gt;

&lt;p&gt;Chapter 2: Amazon Elastic Compute Cloud and Amazon Elastic Block Store. This chapter covers EC2 instances — the virtual machines that you can use to run Linux and Windows workloads on AWS. It also covers the Elastic Block Store service that EC2 instances depend on for persistent data storage.&lt;/p&gt;

&lt;p&gt;Chapter 3: AWS Storage. In this chapter, you’ll learn about Simple Storage Service (S3) and Glacier, which provide unlimited data storage and retrieval for AWS services, your applications, and the Internet.&lt;/p&gt;

&lt;p&gt;Chapter 4: Amazon Virtual Private Cloud. This chapter explains Amazon Virtual Private Cloud (Amazon VPC), a virtual network that contains network resources for AWS services.&lt;/p&gt;

&lt;p&gt;Chapter 5: Database Services. In this chapter, you will learn about some different managed database services offered by AWS, including Relational Database Service (RDS), DynamoDB, and Redshift.&lt;/p&gt;

&lt;p&gt;Chapter 6: Authentication and Authorization — AWS Identity and Access Management. This chapter covers AWS Identity and Access Management (IAM), which provides the primary means for protecting the AWS resources in your account.&lt;/p&gt;

&lt;p&gt;Chapter 7: CloudTrail, CloudWatch, and AWS Config. In this chapter, you’ll learn how to log, monitor, and audit your AWS resources.&lt;/p&gt;

&lt;p&gt;Chapter 8: The Domain Name System and Network Routing: Amazon Route 53 and Amazon CloudFront. This chapter focuses on the Domain Name System (DNS) and Route 53, the service that provides public and private DNS hosting for both internal AWS resources and the Internet. It also covers CloudFront, Amazon’s global content delivery network.&lt;/p&gt;

&lt;p&gt;Chapter 9: Simple Queue Service and Kinesis. This chapter explains how to use the principle of loose coupling to create scalable and highly available applications. You’ll learn how Simple Queue Service (SQS) and Kinesis fit into the picture.&lt;/p&gt;

&lt;p&gt;Chapter 10: The Reliability Pillar. This chapter will show you how to architect and integrate AWS services to achieve a high level of reliability for your applications. You’ll learn how to plan around and recover from inevitable outages to keep your systems up and running.&lt;/p&gt;

&lt;p&gt;Chapter 11: The Performance Efficiency Pillar. This chapter covers how to build highly performing systems and use the AWS elastic infrastructure to rapidly scale up and out to meet peak demand.&lt;/p&gt;

&lt;p&gt;Chapter 12: The Security Pillar. In this chapter, you’ll learn how to use encryption and security controls to protect the confidentiality, integrity, and availability of your data and systems on AWS. You’ll also learn about the various security services such as GuardDuty, Inspector, Shield, and Web Application Firewall.&lt;/p&gt;

&lt;p&gt;Chapter 13: The Cost Optimization Pillar. This chapter will show you how to estimate and control your costs in the cloud.&lt;/p&gt;

&lt;p&gt;Chapter 14: The Operational Excellence Pillar. In this chapter, you’ll learn how to keep your systems running smoothly on AWS. You’ll learn how to implement a DevOps mindset using CloudFormation, Systems Manager, and the AWS Developer Tools.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;One of the strengths of the book is its clear and concise writing style, which makes it easy to understand complex concepts. The authors also provide a wealth of practical examples and hands-on exercises to help reinforce the material. Rounding this out is a robust set of review questions and study aids, including a comprehensive glossary, that helps you assess your understanding.&lt;/p&gt;

&lt;p&gt;In addition, the authors provide valuable insights into the exam format and the types of questions you can expect to encounter on the test. This information can be especially helpful for those who are taking the exam for the first time.&lt;/p&gt;

&lt;p&gt;In my personal opinion, the “&lt;a href="https://rebrand.ly/devshelf-001" rel="noopener noreferrer"&gt;AWS Certified Solutions Architect Study Guide: Associate SAA-C01 Exam 2nd Edition&lt;/a&gt;” is an excellent resource for anyone looking to prepare for the AWS Certified Solutions Architect — Associate exam. Whether you are a seasoned AWS professional or just starting out, this book will provide you with the knowledge and skills you need to succeed on the exam and in your career. I highly recommend it!&lt;/p&gt;

&lt;p&gt;📚&lt;a href="https://rebrand.ly/devshelf-001" rel="noopener noreferrer"&gt;Get the Book on Amazon&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://devshelf.co/" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv04cas2nc9gcvaznw3va.jpg" alt="DevShelf" width="800" height="211"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>books</category>
      <category>certification</category>
    </item>
    <item>
      <title>Understanding Amazon Web Services</title>
      <dc:creator>Daniel Dominguez</dc:creator>
      <pubDate>Mon, 06 Mar 2023 16:14:48 +0000</pubDate>
      <link>https://dev.to/dominguezdaniel/understanding-amazon-web-services-19na</link>
      <guid>https://dev.to/dominguezdaniel/understanding-amazon-web-services-19na</guid>
      <description>&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/b-wsOF8gGik"&gt;
  &lt;/iframe&gt;
&lt;br&gt;
"&lt;a href="https://rebrand.ly/devshelf-003" rel="noopener noreferrer"&gt;Amazon Web Services for Dummies&lt;/a&gt;" by Bernard Golden is an excellent resource for those looking to get started with the Amazon Web Services (AWS) platform. The book breaks down complex concepts and makes them accessible to readers of all levels. The book is well-organized, easy to follow, and provides a comprehensive overview of the AWS platform and its various services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Does This Book Cover?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This book covers a wide range of topics, including cloud computing, infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). Golden also provides a thorough explanation of the different AWS services, such as EC2, S3, and RDS, and how they can be used to build scalable, secure, and reliable applications.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Chapter 1: Amazon Web Services Philosophy and Design.&lt;br&gt;
Chapter 2: Introducing the AWS API.&lt;br&gt;
Chapter 3: Introducing the AWS Management Console.&lt;br&gt;
Chapter 4: Setting Up AWS Storage.&lt;br&gt;
Chapter 5: Stretching Out with Elastic Compute Cloud.&lt;br&gt;
Chapter 6: AWS Networking.&lt;br&gt;
Chapter 7: AWS Security.&lt;br&gt;
Chapter 8: Additional Core AWS Services.&lt;br&gt;
Chapter 9: AWS Platform Services.&lt;br&gt;
Chapter 10: AWS Management Services.&lt;br&gt;
Chapter 11: Managing AWS Costs.&lt;br&gt;
Chapter 12: Bringing It All Together: An AWS Application.&lt;br&gt;
Chapter 13: Ten Reasons to Use Amazon Web Services.&lt;br&gt;
Chapter 14: Ten Design Principles for Cloud Applications.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The book also provides a wealth of information on how to use AWS in a cost-effective manner, including tips on how to reduce costs and optimize performance. Additionally, it includes a variety of real-world examples that illustrate the concepts discussed in the book.&lt;/p&gt;

&lt;p&gt;Overall, "&lt;a href="https://rebrand.ly/devshelf-003" rel="noopener noreferrer"&gt;Amazon Web Services for Dummies&lt;/a&gt;" is an excellent resource for anyone looking to get started with AWS. Whether you are a beginner or have some experience with the platform, this book will provide you with the knowledge and skills you need to become proficient in using AWS. I highly recommend this book to anyone interested in cloud computing or AWS.&lt;/p&gt;

&lt;p&gt;📚&lt;a href="https://rebrand.ly/devshelf-003" rel="noopener noreferrer"&gt;Get the Book on Amazon&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://devshelf.co/" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv04cas2nc9gcvaznw3va.jpg" alt="DevShelf" width="800" height="211"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>books</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Building Serverless Applications with AWS Lambda</title>
      <dc:creator>Daniel Dominguez</dc:creator>
      <pubDate>Thu, 02 Mar 2023 14:08:44 +0000</pubDate>
      <link>https://dev.to/dominguezdaniel/building-serverless-applications-with-aws-lambda-1l0k</link>
      <guid>https://dev.to/dominguezdaniel/building-serverless-applications-with-aws-lambda-1l0k</guid>
      <description>&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/gGe022vdK5E"&gt;
  &lt;/iframe&gt;
&lt;br&gt;
“&lt;a href="https://rebrand.ly/devshelf-004" rel="noopener noreferrer"&gt;AWS Lambda in Action: Event-driven serverless applications&lt;/a&gt;” by Danilo Poccia is a comprehensive guide for developers looking to understand and implement serverless applications using Amazon Web Services (AWS) Lambda. The book covers the core concepts of serverless architecture and walks readers through the process of building and deploying real-world applications on AWS Lambda.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Does This Book Cover?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The book explains the benefits of serverless computing and why AWS Lambda is a popular choice for implementing event-driven applications. It also provides clear, step-by-step instructions for setting up the necessary components for serverless computing, including AWS S3, API Gateway, and DynamoDB. Throughout the book, readers will learn how to develop, test, and deploy Lambda functions using Node.js, and also how to integrate other AWS services and tools.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Chapter 1: Running functions in the cloud&lt;br&gt;
Chapter 2: Your first Lambda function&lt;br&gt;
Chapter 3: Your function as a web API&lt;br&gt;
Chapter 4: Managing security&lt;br&gt;
Chapter 5: Using standalone functions&lt;br&gt;
Chapter 6: Managing identities&lt;br&gt;
Chapter 7: Calling functions from a client&lt;br&gt;
Chapter 8: Designing an authentication service&lt;br&gt;
Chapter 9: Implementing an authentication service&lt;br&gt;
Chapter 10: Adding more features to the authentication service&lt;br&gt;
Chapter 11: Building a media-sharing application&lt;br&gt;
Chapter 12: Why event-driven?&lt;br&gt;
Chapter 13: Improving development and testing&lt;br&gt;
Chapter 14: Automating deployment&lt;br&gt;
Chapter 15: Automating infrastructure management&lt;br&gt;
Chapter 16: Calling external services&lt;br&gt;
Chapter 17: Receiving events from other services&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The book is well-organized and easy to follow, making it accessible to both beginner and intermediate-level developers. It provides hands-on examples and practical advice for optimizing serverless applications and avoiding common pitfalls. The author’s writing style is clear and concise, making it easy to understand the concepts being presented.&lt;/p&gt;

&lt;p&gt;In conclusion, “&lt;a href="https://rebrand.ly/devshelf-004" rel="noopener noreferrer"&gt;AWS Lambda in Action&lt;/a&gt;” is a must-read for developers looking to build and deploy serverless applications on AWS. The book provides a solid foundation of knowledge and practical skills for utilizing AWS Lambda, and is a valuable resource for anyone looking to get started with serverless computing.&lt;/p&gt;

&lt;p&gt;📚&lt;a href="https://rebrand.ly/devshelf-004" rel="noopener noreferrer"&gt;Get the Book on Amazon&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://devshelf.co/" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv04cas2nc9gcvaznw3va.jpg" alt="DevShelf" width="800" height="211"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>books</category>
      <category>serverless</category>
      <category>lambda</category>
    </item>
    <item>
      <title>Building and Deploying Machine Learning Models on AWS</title>
      <dc:creator>Daniel Dominguez</dc:creator>
      <pubDate>Wed, 01 Mar 2023 14:39:33 +0000</pubDate>
      <link>https://dev.to/dominguezdaniel/building-and-deploying-ml-models-on-aws-38nh</link>
      <guid>https://dev.to/dominguezdaniel/building-and-deploying-ml-models-on-aws-38nh</guid>
      <description>&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/UXm5nXo3aeg"&gt;
  &lt;/iframe&gt;
&lt;br&gt;
“&lt;a href="https://rebrand.ly/devsheld-005" rel="noopener noreferrer"&gt;Mastering Machine Learning on AWS&lt;/a&gt;” is a comprehensive guide for advanced machine learning in Python using AWS tools such as SageMaker, Apache Spark, and TensorFlow. The authors, Dr. Saket S.R. Mengle and Maximo Gurmendez, expertly break down complex concepts and provide clear, step-by-step instructions for implementing machine learning models on AWS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Does This Book Cover?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The book covers a wide range of topics, including deep learning, transfer learning, and unsupervised learning, and it also provides guidance on how to deploy models and monitor their performance in production. The inclusion of Apache Spark and TensorFlow, in addition to SageMaker, makes the book a valuable resource for anyone looking to expand their knowledge and skills in the field of machine learning.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Chapter 1, Getting Started with Machine Learning for AWS, introduces you to machine learning. It explains why it is necessary for data scientists to learn about machine learning and how AWS can help them to solve various real-world problems. We also discuss the AWS services and tools that will be covered in the book.&lt;/p&gt;

&lt;p&gt;Chapter 2, Classifying Twitter Feeds with Naive Bayes, introduces the basics of the Naive Bayes algorithm and presents a text classification problem that will be addressed by the use of this algorithm and language models. We’ll provide examples explaining how to apply Naive Bayes using scikit-learn and Apache Spark on SageMaker’s BlazingText. Additionally, we’ll explore how to use the ideas behind Bayesian reasoning in more complex scenarios. We will use the Twitter API to stream tweets from two different political candidates and predict who wrote them. We will use scikit-learn, Apache Spark, SageMaker, and BlazingText.&lt;/p&gt;

&lt;p&gt;Chapter 3, Predicting House Value with Regression Algorithms, introduces the basics of regression algorithms and applies them to predict the price of houses given a number of features. We’ll also introduce how to use logistic regression for classification problems. Examples in SageMaker for scikit-learn and Apache Spark will be provided. We’ll be using the Boston Housing Price dataset (&lt;a href="https://www.kaggle.com/c/boston-housing/" rel="noopener noreferrer"&gt;https://www.kaggle.com/c/boston-housing/&lt;/a&gt;) along with scikit-learn, Apache Spark, and SageMaker.&lt;/p&gt;

&lt;p&gt;Chapter 4, Predicting User Behavior with Tree-Based Methods, introduces decision trees, random forests, and gradient-boosted trees. We will explore how to use these algorithms to predict when users will click on ads. Additionally, we will explain how to use AWS EMR and Apache Spark to engineer models at a large scale. We will use the Adform click prediction dataset (&lt;a href="https://doi.org/10.7910/DVN/TADBY7" rel="noopener noreferrer"&gt;https://doi.org/10.7910/DVN/TADBY7&lt;/a&gt;, Harvard Dataverse, V2). We will use the XGBoost, Apache Spark, SageMaker, and EMR libraries.&lt;/p&gt;

&lt;p&gt;Chapter 5, Customer Segmentation Using Clustering Algorithms, introduces the main clustering algorithms by exploring how to apply them for customer segmentation based on consumer patterns. Through AWS SageMaker, we will show how to run these algorithms in scikit-learn and Apache Spark. We will use the e-commerce data from Fabien Daniel (&lt;a href="https://www.kaggle.com/fabiendaniel/customer-segmentation/data" rel="noopener noreferrer"&gt;https://www.kaggle.com/fabiendaniel/customer-segmentation/data&lt;/a&gt;) and scikit-learn, Apache Spark, and SageMaker.&lt;/p&gt;

&lt;p&gt;Chapter 6, Analyzing Visitor Patterns to Make Recommendations, presents the problem of finding similar users based on their navigation patterns in order to recommend custom marketing strategies. Collaborative filtering and distance-based methods will be introduced with examples in scikit-learn and Apache Spark on AWS SageMaker. We will use Kwan Hui Lim’s Theme Park Attraction Visits dataset (&lt;a href="https://sites.google.com/site/limkwanhui/datacode" rel="noopener noreferrer"&gt;https://sites.google.com/site/limkwanhui/datacode&lt;/a&gt;), Apache Spark, and SageMaker.&lt;/p&gt;

&lt;p&gt;Chapter 7, Implementing Deep Learning Algorithms, introduces you to the main concepts behind deep learning and explains why it has become so relevant in today’s AI-powered products. The aim of this chapter is not to discuss the theoretical details of deep learning, but to explain the algorithms with examples and provide a high-level conceptual understanding of deep learning algorithms. This will give you a platform to understand what you will be implementing in the next chapters.&lt;/p&gt;

&lt;p&gt;Chapter 8, Implementing Deep Learning with TensorFlow on AWS, goes through a series of practical image-recognition problems and explains how to address them with TensorFlow on AWS. TensorFlow is a very popular deep learning framework that can be used to train deep neural networks. This chapter will explain how TensorFlow can be installed and used to train deep learning models using toy datasets. In this chapter, we’ll use the MNIST handwritten digits dataset (&lt;a href="http://yann.lecun.com/exdb/mnist/" rel="noopener noreferrer"&gt;http://yann.lecun.com/exdb/mnist/&lt;/a&gt;), along with TensorFlow and SageMaker.&lt;/p&gt;

&lt;p&gt;Chapter 9, Image Classification and Detection with SageMaker, revisits the image classification problem we dealt with in the previous chapters, but using SageMaker’s image classification algorithm and object detection algorithm. We’ll use the Caltech256 dataset (&lt;a href="http://www.vision.caltech.edu/Image_Datasets/Caltech256/" rel="noopener noreferrer"&gt;http://www.vision.caltech.edu/Image_Datasets/Caltech256/&lt;/a&gt;) and AWS SageMaker.&lt;/p&gt;

&lt;p&gt;Chapter 10, Working with AWS Comprehend, explains the functionality of an AWS tool called Comprehend, which is a natural language processing tool that performs various useful tasks.&lt;/p&gt;

&lt;p&gt;Chapter 11, Using AWS Rekognition, explains how to use Rekognition, which is an image recognition tool that uses deep learning. You will learn an easy way of applying image recognition in your applications.&lt;/p&gt;

&lt;p&gt;Chapter 12, Building Conversational Interfaces Using AWS Lex, explains that AWS Lex is a tool that allows programmers to build conversational interfaces. This chapter introduces you to topics such as natural language understanding using deep learning.&lt;/p&gt;

&lt;p&gt;Chapter 13, Creating Clusters on AWS, addresses how one of the key problems in deep learning is understanding how to scale and parallelize learning on multiple machines. In this chapter, we’ll examine different ways to create clusters of learners. In particular, we’ll focus on how to parallelize deep learning pipelines through distributed TensorFlow and Apache Spark.&lt;/p&gt;

&lt;p&gt;Chapter 14, Optimizing Models in Spark and SageMaker, explains that the models that are trained on AWS can be further optimized to run smoothly in production environments. In this section, we will discuss various tricks that you can use to improve the performance of your algorithms.&lt;/p&gt;

&lt;p&gt;Chapter 15, Tuning Clusters for Machine Learning, explains that many data scientists and machine learning practitioners face the problem of scale when attempting to run machine learning data pipelines at scale. In this chapter, we focus primarily on EMR, which is a very powerful tool for running very large machine learning jobs. There are many ways to configure EMR, and not every setup works for every scenario. We will go through the main configurations of EMR and explain how each configuration works for different objectives. Additionally, we’ll present other ways to run big data pipelines through AWS.&lt;/p&gt;

&lt;p&gt;Chapter 16, Deploying Models Built on AWS, discusses deployment. At this point, you will have your model built on AWS and would like to ship it to production. There are a variety of different contexts in which models should be deployed. In some cases, it’s as easy as generating a CSV of actions that would be fed to some system. Often, we just need to deploy a web service that’s capable of making predictions. However, there are many times in which we need to deploy these models to complex, low-latency, or edge systems. We will go through the different ways you can deploy machine learning models to production in this chapter.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;One of the strengths of the book is its hands-on approach, with numerous examples and practical exercises that allow readers to apply their newfound knowledge and skills. The authors also do a great job of explaining the underlying theories and concepts in a way that is accessible to both technical and non-technical readers.&lt;/p&gt;

&lt;p&gt;Overall, “&lt;a href="https://rebrand.ly/devsheld-005" rel="noopener noreferrer"&gt;Mastering Machine Learning on AWS&lt;/a&gt;” is an excellent resource for anyone looking to get started or further their knowledge in machine learning on AWS. It is well-written, well-organized, and packed with practical information and examples, making it an essential guide for anyone working in the field of machine learning and AWS.&lt;/p&gt;

&lt;p&gt;📚&lt;a href="https://rebrand.ly/devsheld-005" rel="noopener noreferrer"&gt;Get the Book on Amazon&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://devshelf.co/" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv04cas2nc9gcvaznw3va.jpg" alt="DevShelf" width="800" height="211"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>books</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Developing Efficient Serverless Applications with AWS Lambda</title>
      <dc:creator>Daniel Dominguez</dc:creator>
      <pubDate>Wed, 01 Mar 2023 08:01:18 +0000</pubDate>
      <link>https://dev.to/dominguezdaniel/developing-efficient-serverless-applications-with-aws-lambda-17i7</link>
      <guid>https://dev.to/dominguezdaniel/developing-efficient-serverless-applications-with-aws-lambda-17i7</guid>
      <description>&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/LY_vsa2ohJM"&gt;
  &lt;/iframe&gt;
&lt;br&gt;
“&lt;a href="https://rebrand.ly/devshelf-002" rel="noopener noreferrer"&gt;Mastering AWS Lambda&lt;/a&gt;” by Yohan Wadia and Udita Gupta is a comprehensive guide to building and deploying serverless applications on Amazon Web Services (AWS). The book covers all the essential concepts of serverless computing and focuses on the use of AWS Lambda as the platform for implementing serverless applications. The authors provide clear and concise explanations of the core concepts and features of AWS Lambda, along with hands-on examples and best practices for building scalable and robust serverless applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Does This Book Cover?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The book starts with an introduction to serverless computing and its benefits, followed by a deep dive into the various components of AWS Lambda, such as triggers, functions, and event sources. The authors also cover topics like security, monitoring, and troubleshooting, providing guidance on how to build and deploy secure and reliable serverless applications. Additionally, the book covers advanced topics such as serverless architectures, custom domains, and edge computing.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Chapter 1: Introducing AWS Lambda&lt;br&gt;
Chapter 2: Writing Lambda Functions&lt;br&gt;
Chapter 3: Testing Lambda Functions&lt;br&gt;
Chapter 4: Event-Driven Model&lt;br&gt;
Chapter 5: Extending AWS Lambda with External Services&lt;br&gt;
Chapter 6: Build and Deploy Serverless Applications with AWS Lambda&lt;br&gt;
Chapter 7: Monitoring and Troubleshooting AWS Lambda&lt;br&gt;
Chapter 8: Introducing the Serverless Application Framework&lt;br&gt;
Chapter 9: AWS Lambda — Use Cases&lt;br&gt;
Chapter 10: Next Steps with AWS Lambda&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;One of the strengths of this book is its hands-on approach. Throughout the book, the authors provide step-by-step instructions and code samples to help readers understand how to build and deploy their own serverless applications on AWS. This approach makes the book an excellent resource for developers who are new to serverless computing, as well as those who are looking to expand their knowledge and skills.&lt;/p&gt;

&lt;p&gt;I think “&lt;a href="https://rebrand.ly/devshelf-002" rel="noopener noreferrer"&gt;Mastering AWS Lambda&lt;/a&gt;” is a well-written and comprehensive guide to building and deploying serverless applications on AWS. The authors have done an excellent job of explaining complex concepts in a clear and concise manner, making it an accessible resource for developers of all levels. If you are looking to dive into the world of serverless computing or want to learn more about AWS Lambda, I highly recommend this book.&lt;/p&gt;

&lt;p&gt;📚&lt;a href="https://rebrand.ly/devshelf-002" rel="noopener noreferrer"&gt;Get the Book on Amazon&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://devshelf.co/" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv04cas2nc9gcvaznw3va.jpg" alt="DevShelf" width="800" height="211"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>discuss</category>
    </item>
    <item>
      <title>Building AI Models with OpenAI and AWS</title>
      <dc:creator>Daniel Dominguez</dc:creator>
      <pubDate>Fri, 16 Dec 2022 12:22:34 +0000</pubDate>
      <link>https://dev.to/dominguezdaniel/building-ai-models-with-openai-and-aws-4ha7</link>
      <guid>https://dev.to/dominguezdaniel/building-ai-models-with-openai-and-aws-4ha7</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Starting with &lt;a href="https://openai.com/" rel="noopener noreferrer"&gt;OpenAI&lt;/a&gt; and &lt;a href="https://aws.amazon.com/" rel="noopener noreferrer"&gt;AWS&lt;/a&gt; can be a great way to access the latest tools and technologies for building and deploying advanced artificial intelligence (AI) models. With AWS, you can easily spin up powerful compute resources and take advantage of the scalability and reliability of the cloud.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;First, let's talk about &lt;a href="https://openai.com/" rel="noopener noreferrer"&gt;OpenAI&lt;/a&gt;. OpenAI is a research institute that was founded in 2015 with the goal of promoting and developing friendly AI. Since its inception, OpenAI has made a number of significant contributions to the field of AI, including the development of advanced language processing models like &lt;a href="https://en.wikipedia.org/wiki/GPT-3" rel="noopener noreferrer"&gt;GPT-3&lt;/a&gt; and the creation of the Dota 2-playing AI, &lt;a href="https://en.wikipedia.org/wiki/OpenAI_Five" rel="noopener noreferrer"&gt;OpenAI Five&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Next, let's talk about &lt;a href="https://aws.amazon.com/" rel="noopener noreferrer"&gt;AWS&lt;/a&gt;. Amazon Web Services is a cloud computing platform that offers a wide range of services, including storage, analytics, and machine learning. With AWS, you can easily access and use powerful computing resources without having to invest in expensive hardware.&lt;/p&gt;

&lt;p&gt;To get started with OpenAI and AWS, the first thing you'll need to do is sign up for an AWS account. Creating an account is free (you pay only for the resources you use), and you can do it by visiting the AWS website and following the instructions there. Once you have an AWS account, you'll be able to access a wide range of services, including machine learning and AI-related services.&lt;/p&gt;

&lt;p&gt;Here are some steps to get started:&lt;/p&gt;

&lt;p&gt;1- Set up an AWS account. To create an AWS account, visit the AWS website and click on the "Create an AWS account" button. You will be asked to provide some basic information and to set up a billing method.&lt;/p&gt;

&lt;p&gt;2- Install the &lt;a href="https://aws.amazon.com/cli/" rel="noopener noreferrer"&gt;AWS CLI&lt;/a&gt;. The AWS CLI is a command-line tool that allows you to manage your AWS resources from the terminal. To install the AWS CLI, follow the instructions on the AWS website.&lt;/p&gt;

&lt;p&gt;3- Configure the AWS CLI. After installing the AWS CLI, you will need to configure it with your AWS credentials. This can be done using the &lt;code&gt;aws configure&lt;/code&gt; command. You will need to provide your access key ID and secret access key, which can be created in the &lt;a href="https://console.aws.amazon.com/iam/home" rel="noopener noreferrer"&gt;IAM console&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;4- Create an IAM user. To manage your AWS resources securely, create an IAM user with the appropriate permissions instead of using your root account. This can be done using the AWS CLI or the IAM console. Note that the &lt;a href="https://openai.com/api/" rel="noopener noreferrer"&gt;OpenAI API&lt;/a&gt; itself is authenticated with an OpenAI API key, not with IAM.&lt;/p&gt;

&lt;p&gt;5- Create an &lt;a href="https://aws.amazon.com/s3/" rel="noopener noreferrer"&gt;S3&lt;/a&gt; bucket. An S3 bucket is a storage location where you can store your data and models. To create an S3 bucket, use the &lt;code&gt;aws s3 mb&lt;/code&gt; command, followed by the name of the bucket you want to create.&lt;/p&gt;

&lt;p&gt;6- Create an &lt;a href="https://aws.amazon.com/ec2/" rel="noopener noreferrer"&gt;EC2&lt;/a&gt; instance. An EC2 instance is a virtual machine that you can use to run your OpenAI workloads. To create an EC2 instance, use the &lt;code&gt;aws ec2 run-instances&lt;/code&gt; command, followed by the instance type and other configuration options.&lt;/p&gt;

&lt;p&gt;7- Install the &lt;a href="https://github.com/openai/openai-python" rel="noopener noreferrer"&gt;OpenAI Python&lt;/a&gt; library. To install the OpenAI Python library, use the &lt;code&gt;pip install openai&lt;/code&gt; command. This will allow you to access the OpenAI API and start building your AI models.&lt;/p&gt;

&lt;p&gt;8- Start building your AI applications. With the OpenAI Python library installed, you can call the OpenAI API, for example through the &lt;code&gt;openai.Completion.create()&lt;/code&gt; method, which lets you specify the model you want to use and the prompt you want it to complete.&lt;/p&gt;
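&lt;p&gt;As a rough sketch of what such a call looks like at the HTTP level (the OpenAI Python library wraps a request like this one), here the model name, prompt, and key are placeholder assumptions, and the request is built but not sent:&lt;/p&gt;

```python
import json
import urllib.request

# Placeholder key -- in practice, load it from an environment variable
# or a secrets manager; never hard-code credentials.
OPENAI_API_KEY = "sk-your-key-here"

# Example body of a text-completion request; "text-davinci-003" is one
# of the GPT-3 models the API exposed around the time of writing.
payload = {
    "model": "text-davinci-003",
    "prompt": "Write a one-line greeting for a developer blog.",
    "max_tokens": 32,
}

request = urllib.request.Request(
    "https://api.openai.com/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer " + OPENAI_API_KEY,
    },
    method="POST",
)

# urllib.request.urlopen(request) would send it; the JSON response
# carries the generated text under choices[0].text.
```

&lt;p&gt;The official &lt;code&gt;openai&lt;/code&gt; Python library hides this plumbing behind a single method call; the payload fields above are the same ones you pass to it.&lt;/p&gt;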

&lt;p&gt;OpenAI offers a number of pre-trained models that you can use to perform various tasks, including language processing, image recognition, and more. You can use these models directly, or you can fine-tune them to fit your specific needs.&lt;/p&gt;

&lt;p&gt;To use OpenAI's pre-trained models, you don't download them; you access them through the OpenAI API using your API key. Code running in your AWS account, for example on an EC2 instance, can call the API and use the results in your own applications or workflows.&lt;/p&gt;

&lt;p&gt;In conclusion, starting with OpenAI and AWS can be a great way to access the latest tools and technologies for building and deploying advanced AI models. By signing up for an AWS account and connecting to OpenAI's API, you'll be able to access a wide range of powerful AI tools and resources. With these tools, you can develop advanced AI applications, perform advanced analytics, and much more.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Follow me on Twitter for the upcoming updates &lt;a href="https://twitter.com/dominguezdaniel" rel="noopener noreferrer"&gt;@dominguezdaniel&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;




</description>
      <category>openai</category>
      <category>aws</category>
      <category>ai</category>
      <category>python</category>
    </item>
    <item>
      <title>Underlying Principles of Blockchain</title>
      <dc:creator>Daniel Dominguez</dc:creator>
      <pubDate>Sat, 07 May 2022 02:26:37 +0000</pubDate>
      <link>https://dev.to/dominguezdaniel/five-basic-principles-of-blockchain-1pf8</link>
      <guid>https://dev.to/dominguezdaniel/five-basic-principles-of-blockchain-1pf8</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;With blockchain, we can imagine a world in which contracts are embedded in digital code and stored in transparent, shared databases, where they are protected from deletion, tampering, and revision. In this world every agreement, every process, every task, and every payment would have a digital record and signature that could be identified, validated, stored, and shared.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Intermediaries like lawyers, brokers, and bankers might no longer be necessary. Individuals, organizations, machines, and algorithms would freely transact and interact with one another with little friction. This is the immense potential of blockchain.&lt;/p&gt;

&lt;p&gt;Here are five basic principles underlying the technology:&lt;/p&gt;

&lt;p&gt;1- Distributed database. Each party on a blockchain has access to the entire database and its complete history. No single party controls the data or the information. Every party can verify the records of its transaction partners directly, without an intermediary.&lt;/p&gt;

&lt;p&gt;2- Peer-to-peer transmission. Communication occurs directly between peers instead of through a central node. Each node stores and forwards information to all other nodes.&lt;/p&gt;

&lt;p&gt;3- Transparency with pseudonymity. Every transaction and its associated value are visible to anyone with access to the system. Each node, or user, on a blockchain has a unique 30-plus-character alphanumeric address that identifies it. Users can choose to remain anonymous or provide proof of their identity to others. Transactions occur between blockchain addresses.&lt;/p&gt;

&lt;p&gt;4- Irreversibility of records. Once a transaction is entered in the database and the accounts are updated, the records cannot be altered, because they’re linked to every transaction record that came before them (hence the term “chain”). Various computational algorithms and approaches are deployed to ensure that the recording on the database is permanent, chronologically ordered, and available to all others on the network.&lt;/p&gt;

&lt;p&gt;5- Computational logic. The digital nature of the ledger means that blockchain transactions can be tied to computational logic and in essence programmed. So users can set up algorithms and rules that automatically trigger transactions between nodes.&lt;/p&gt;
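&lt;p&gt;Principle 4 (irreversibility) can be illustrated with a toy hash chain in Python: each block stores the hash of the block before it, so tampering with any record invalidates every later link. This is a simplified sketch, not a real blockchain implementation:&lt;/p&gt;

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents, including the previous block's hash,
    # so every record is linked to the record before it.
    data = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(data).hexdigest()

def build_chain(transactions):
    chain = []
    prev = "0" * 64  # genesis marker: there is no previous block
    for tx in transactions:
        block = {"tx": tx, "prev": prev}
        chain.append(block)
        prev = block_hash(block)
    return chain

chain = build_chain(["alice pays bob 5", "bob pays carol 2"])

# Each block's "prev" field is the hash of the block before it:
print(chain[1]["prev"] == block_hash(chain[0]))  # True

# Tampering with an earlier record breaks the link to every later block:
chain[0]["tx"] = "alice pays bob 500"
print(chain[1]["prev"] == block_hash(chain[0]))  # False
```

&lt;p&gt;Real blockchains add consensus, signatures, and peer-to-peer replication on top of this linking, but the "chain" part is exactly this dependency of every hash on all prior records.&lt;/p&gt;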

&lt;p&gt;In summary, the underlying principles of blockchain technology make it a powerful and disruptive technology that has the potential to transform many industries.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Follow me on Twitter for the upcoming updates &lt;a href="https://twitter.com/dominguezdaniel" rel="noopener noreferrer"&gt;@dominguezdaniel&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;




</description>
      <category>blockchain</category>
      <category>crypto</category>
      <category>ethereum</category>
      <category>web3</category>
    </item>
    <item>
      <title>Exploring the World of DApps</title>
      <dc:creator>Daniel Dominguez</dc:creator>
      <pubDate>Thu, 31 Mar 2022 14:08:31 +0000</pubDate>
      <link>https://dev.to/dominguezdaniel/what-is-a-dapp-53dn</link>
      <guid>https://dev.to/dominguezdaniel/what-is-a-dapp-53dn</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;A DApp is a kind of Internet application whose backend runs on a decentralized peer-to-peer network and whose source code is open source. No single node in the network has complete control over the DApp.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Depending on the functionality of the DApp, different data structures are used to store application data. For example, the Bitcoin DApp uses the blockchain data structure.&lt;/p&gt;

&lt;p&gt;These peers can be any computer connected to the Internet; therefore, it becomes a big challenge to detect and prevent peers from making invalid changes to the application data or sharing incorrect information with others. So we need some form of consensus among the peers about whether the data published by a peer is right or wrong.&lt;/p&gt;

&lt;p&gt;There is no central server in a DApp to coordinate the peers and decide what is right and wrong; therefore, it becomes really difficult to solve this challenge. &lt;/p&gt;

&lt;p&gt;There are certain protocols (specifically called consensus protocols) to tackle this challenge. Consensus protocols are designed specifically for the type of data structure the DApp uses. &lt;/p&gt;

&lt;p&gt;For example, Bitcoin uses the proof-of-work protocol to achieve consensus.&lt;/p&gt;
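&lt;p&gt;A deliberately simplified toy version of the idea (not Bitcoin's actual algorithm): a peer must search for a nonce that makes the block hash meet a difficulty target, while any other peer can verify the result with a single hash:&lt;/p&gt;

```python
import hashlib

def proof_of_work(data, difficulty=2):
    # Search for a nonce that makes the hash start with `difficulty`
    # zero hex digits. Finding the nonce is costly; verifying it takes
    # a single hash.
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = proof_of_work("example block payload")
print(digest.startswith("00"))  # True

# Any peer can verify the work with one hash:
check = hashlib.sha256(f"example block payload:{nonce}".encode()).hexdigest()
print(check == digest)  # True
```

&lt;p&gt;Raising the difficulty makes the search exponentially more expensive, which is what makes rewriting history on a proof-of-work chain impractical.&lt;/p&gt;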

&lt;p&gt;Every DApp needs a client for the user to interact with it. To use a DApp, we first need a node in the network, which we get by running the DApp's node server ourselves and then connecting the client to that node server.&lt;/p&gt;

&lt;p&gt;Nodes of a DApp provide only an API and let the developer community build various clients on top of it. Some DApp developers officially provide a client. Clients of DApps should be open source and should run locally on the user's machine; otherwise, the whole idea of decentralization fails.&lt;/p&gt;

&lt;p&gt;But this architecture of a client is cumbersome to set up, especially if the user is a non-developer; therefore, clients are usually hosted and/or nodes are hosted as a service to make the process of using a DApp easier.&lt;/p&gt;

&lt;p&gt;What are distributed applications?&lt;/p&gt;

&lt;p&gt;Distributed applications are applications that are spread across multiple servers instead of just one. This is necessary when application data and traffic become huge and application downtime cannot be tolerated.&lt;/p&gt;

&lt;p&gt;In distributed applications, data is replicated among various servers to achieve high availability of data. &lt;/p&gt;

&lt;p&gt;Centralized applications may or may not be distributed, but decentralized applications are always distributed. &lt;/p&gt;




&lt;p&gt;&lt;em&gt;Follow me on Twitter for the upcoming updates &lt;a href="https://twitter.com/dominguezdaniel" rel="noopener noreferrer"&gt;@dominguezdaniel&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;




</description>
      <category>blockchain</category>
      <category>dapp</category>
      <category>web3</category>
    </item>
    <item>
      <title>Tracking Key Development Metrics</title>
      <dc:creator>Daniel Dominguez</dc:creator>
      <pubDate>Wed, 20 Oct 2021 15:13:44 +0000</pubDate>
      <link>https://dev.to/dominguezdaniel/on-development-metrics-2k3g</link>
      <guid>https://dev.to/dominguezdaniel/on-development-metrics-2k3g</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;How can you measure productivity, efficiency, and effectiveness, along with other indicators that can tell you about the quality of the overall development experience?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let’s start with terminology. This may seem pretty basic, but software development is a human-intensive process. Much of its success hinges on a developer’s productivity and efficiency. These terms are often used interchangeably but they are actually different metrics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Developer productivity is the ratio of output (i.e., the quantity of work completed) to input (i.e., effort and time). This metric is easy to measure, but it doesn’t tell the whole story.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Developer efficiency is the amount of resources (i.e., time, computing power, human effort) used per unit of output. It accounts for all the input required, so you can see whether the team is doing enough of the right kind of work.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Developer effectiveness evaluates important contexts that allow your organization to address low morale, stress/burnout, and other factors that can contribute to poorer quality and high turnover in the long term.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
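&lt;p&gt;A minimal, entirely hypothetical example makes the difference between the first two ratios concrete:&lt;/p&gt;

```python
# Hypothetical sprint numbers, only to make the two ratios concrete.
work_items_completed = 24    # output: quantity of work completed
developer_days_spent = 30    # input: effort and time

# Productivity: output per unit of input.
productivity = work_items_completed / developer_days_spent

# Efficiency: resources consumed per unit of output.
efficiency = developer_days_spent / work_items_completed

print(productivity)  # 0.8 work items per developer-day
print(efficiency)    # 1.25 developer-days per work item
```

&lt;p&gt;The two numbers are reciprocals here only because the example is simple; in practice efficiency would account for every input (compute, reviews, coordination), not just developer-days.&lt;/p&gt;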

&lt;p&gt;Below you’ll find a list of other metrics that you can use to verify a successful impact on your software development process:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Number of merges per developer/day&lt;/strong&gt;&lt;br&gt;
Less time spent jumping between different tools and looking for information means more time to focus on shipping code. A second level of bottlenecks can be identified if you categorize contributions by domain (services, web, data, etc).&lt;/p&gt;
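&lt;p&gt;Computing this metric from a merge log is straightforward; here is a small sketch over hypothetical data:&lt;/p&gt;

```python
from collections import Counter
from datetime import date

# Hypothetical merge log: (developer, day the PR was merged).
merges = [
    ("ana", date(2021, 10, 18)),
    ("ana", date(2021, 10, 18)),
    ("ben", date(2021, 10, 18)),
    ("ana", date(2021, 10, 19)),
]

# Merges per developer per day.
per_dev_day = Counter(merges)

print(per_dev_day[("ana", date(2021, 10, 18))])  # 2
print(per_dev_day[("ben", date(2021, 10, 18))])  # 1
```

&lt;p&gt;Adding a domain field (services, web, data, etc.) to each entry and counting by it would surface the second level of bottlenecks mentioned above.&lt;/p&gt;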

&lt;p&gt;&lt;strong&gt;Deploys to production&lt;/strong&gt;&lt;br&gt;
A close cousin to the metric above. How many times does an engineer push changes into production?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MTTR (mean time to recovery)&lt;/strong&gt;&lt;br&gt;
With clear ownership of all the pieces in your microservices ecosystem and all tools integrated into one place, it’s quick for teams to find the root cause of failures and fix them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;T-shapedness&lt;/strong&gt;&lt;br&gt;
A T-shaped engineer is someone who is able to contribute to different domains of engineering. Teams with T-shaped people have fewer bottlenecks and can therefore deliver more consistently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fragmentation&lt;/strong&gt;&lt;br&gt;
Software Templates help drive standardization in your software ecosystem. By measuring the variance in technology between different software components, it's possible to get a sense of the overall fragmentation in your ecosystem. Examples could include: framework versions, languages, deployment methods and various code quality measurements.&lt;/p&gt;

&lt;p&gt;In conclusion, there is no universal answer on how to measure development success, but a healthy mix of qualitative and quantitative metrics mapped to your organization’s goals is key to that evaluation.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Follow me on Twitter for the upcoming updates &lt;a href="https://twitter.com/dominguezdaniel" rel="noopener noreferrer"&gt;@dominguezdaniel&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;




</description>
      <category>devjournal</category>
      <category>programming</category>
      <category>productivity</category>
      <category>management</category>
    </item>
    <item>
      <title>Monitoring Performance for AWS Lambda</title>
      <dc:creator>Daniel Dominguez</dc:creator>
      <pubDate>Wed, 13 Oct 2021 15:54:49 +0000</pubDate>
      <link>https://dev.to/dominguezdaniel/monitoring-aws-lambda-3gfn</link>
      <guid>https://dev.to/dominguezdaniel/monitoring-aws-lambda-3gfn</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Monitoring AWS Lambda performance helps you identify any issues, and it can also send you alerts and notify you of anything you might need to know. The world is slowly getting to a point where machines and computers will be flawless, but until then, if we let them perform various tasks for us, we could at least monitor their performance.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Monitoring your AWS Lambda functions is an important aspect of ensuring the stability and reliability of your application. By keeping track of key performance indicators (KPIs) and monitoring for potential issues, you can proactively identify and fix problems before they affect your users.&lt;/p&gt;

&lt;p&gt;Below are the top five AWS Lambda monitoring tools:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Datadog&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.datadoghq.com/" rel="noopener noreferrer"&gt;Datadog&lt;/a&gt; provides the unity of metrics, logs, and traces. Aggregating events and metrics from more than 200 technologies such as Amazon Web Services, MongoDB, Slack, Docker, Chef, and many others. Datadog also explores enriched data, searches and analyzes log data while tracing requests across the distributed systems, and alerts you on app performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Dashbird&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dashbird.io/" rel="noopener noreferrer"&gt;Dashbird&lt;/a&gt; is excellent in providing error alerts and also in monitoring support. Dashbird collects and analyzes CloudWatch logs while zeroing the effects on your AWS Lambda performance. Integration with the Slack account is also possible, and that brings alerts about early exits, crashes, cold starts, timeouts, runtime errors, etc., to your development chat. Dashbird’s error diagnostics, advanced log searching, and function statistics are only a few of the benefits Dashbird offers to its users.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Logz.io&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://logz.io/" rel="noopener noreferrer"&gt;Logz.io&lt;/a&gt; offers a managed ELK service that scales and performs well without the need for upgrades or capacity management. Logz.io security is enterprise-grade; it keeps your data private and secure while complying with key industry standards. Logz.io also goes beyond the ELK stack to provide an enterprise-class log analytics platform with features like integrated alerts and multiple sub-accounts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Thundra&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.thundra.io/" rel="noopener noreferrer"&gt;Thundra&lt;/a&gt; started as a serverless monitoring platform but later switched to targeting more general services. While they’re still a good choice for serverless systems, their tools can now be used for containers and virtual machines too.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Lumigo&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://lumigo.io/" rel="noopener noreferrer"&gt;Lumigo&lt;/a&gt; offers visual debugging, and it also comes with tracing, metrics, and alert support, but, differing from Thundra, it is more focused on serverless monitoring, from the architecture down to function logs and traces. Lumigo also comes with a Lambda Layer/Extension for Python and Node.js runtimes to instrument Lambda functions.&lt;/p&gt;

&lt;p&gt;In conclusion, learning how to approach serverless monitoring will surely make your life (and work) much easier. With a proper understanding of the AWS infrastructure, you are one step closer to observability for your Lambda functions. There is a cost to climbing that learning curve, but it’s a small one compared to the benefits you’ll obtain.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Follow me on Twitter for the upcoming updates &lt;a href="https://twitter.com/dominguezdaniel" rel="noopener noreferrer"&gt;@dominguezdaniel&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;




</description>
      <category>aws</category>
      <category>monitoring</category>
      <category>performance</category>
      <category>devjournal</category>
    </item>
    <item>
      <title>Thinking on AWS as Lego Blocks</title>
      <dc:creator>Daniel Dominguez</dc:creator>
      <pubDate>Fri, 01 Oct 2021 18:31:39 +0000</pubDate>
      <link>https://dev.to/dominguezdaniel/think-on-aws-as-lego-blocks-2e1h</link>
      <guid>https://dev.to/dominguezdaniel/think-on-aws-as-lego-blocks-2e1h</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Think of Lego blocks. Very simple in appearance and functionality on their own. However, put them together in the right way, with the right pieces, and you can create something highly complex and amazing.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The basic Lego unit is the 1x1. It’s the smallest block, connects to any other piece and hurts a lot when you step on it with bare feet.&lt;/p&gt;

&lt;p&gt;The building blocks of AWS are Storage, Compute, Networking, Security, Monitoring, Media Processing, Analytics, and AI.&lt;/p&gt;

&lt;p&gt;Once you’ve mastered creating simple buildings and boxes on top of your baseplate using your starter Lego set, you can graduate to a full-blown Lego kit with triangles, curved pieces and half moons to create more sophisticated things like castles, cars and bridges. In a similar fashion, after you get the hang of the AWS basics, you can add more specialized AWS services like DynamoDB, ElastiCache, Systems Manager, EFS, and Lambda.&lt;/p&gt;

&lt;p&gt;No matter how sophisticated you get, everything is still rooted in those basic 1x1 building blocks.&lt;/p&gt;

&lt;p&gt;But Lego bricks aren’t just about size or shape; they’re also about color. You can have the same brick in a variety of different colors, just as some AWS services are variations of others. For example, Amazon Elastic Container Service (ECS), Amazon Elastic Kubernetes Service (EKS), and Fargate serve the same purpose but do it in slightly different ways, whereas Elastic Beanstalk, OpsWorks, and CloudFormation all allow you to manage your infrastructure in an automated way but with different levels of control.&lt;/p&gt;

&lt;p&gt;What you build with Lego or AWS, and how you build it, is limited only by your imagination. But beware: just like a tower of Lego bricks, unless you build a sound structure, your creation will fall over. A tall, skinny tower without an intelligently designed structure will eventually collapse, and the same holds true in AWS. With experience, expertise, and continued education, you can create a structure strong enough to support your application and users.&lt;/p&gt;

&lt;p&gt;AWS is a collection of many different Lego pieces that each have their own simple function. Although each service on its own may have limited functionality, combined with other services they can form highly complex and practical applications.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Follow me on Twitter for the upcoming updates &lt;a href="https://twitter.com/dominguezdaniel" rel="noopener noreferrer"&gt;@dominguezdaniel&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;




</description>
      <category>aws</category>
      <category>lego</category>
      <category>devjournal</category>
      <category>programming</category>
    </item>
    <item>
      <title>Scaling AWS EC2 Instances</title>
      <dc:creator>Daniel Dominguez</dc:creator>
      <pubDate>Wed, 28 Jul 2021 15:41:16 +0000</pubDate>
      <link>https://dev.to/dominguezdaniel/scaling-aws-ec2-instances-1ai</link>
      <guid>https://dev.to/dominguezdaniel/scaling-aws-ec2-instances-1ai</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Migrating workloads to AWS comes with many advantages. You can operate workloads in new ways. When you only pay for what you use and can add capacity within minutes, the world of auto-scaling opens up.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When your workload is idling, you remove capacity and lower your expenses. When the workload is busy, you add capacity to keep the users happy. An excellent idea that works perfectly in theory. In practice, many pitfalls are waiting for you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vertical vs Horizontal scaling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Vertical scaling refers to the practice of upgrading the hardware of a machine that runs your workloads. Instead of running on 16 cores, you can run on 32 cores. The problem here is that you cannot add as many cores as you wish; CPUs with unlimited cores have not been invented. Besides that, a single CPU with a high core count is usually more expensive than several CPUs with lower core counts.&lt;/p&gt;

&lt;p&gt;On AWS, it is not possible to change the hardware specs of a running EC2 instance. Vertical scaling always comes with a short downtime. You have to stop the EC2 instance, change the instance type, and start the EC2 instance again.&lt;/p&gt;

&lt;p&gt;Horizontal scaling is about adding more machines, usually by using cheap commodity hardware.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Auto Scaling Groups&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The name Auto Scaling Group (ASG) is misleading. The surprising fact is this: ASGs don’t provide auto-scaling capabilities themselves. An ASG has one job: keep the number of running EC2 instances in sync with a desired number of instances. The ASG itself does not modify the desired number of instances; an external trigger is responsible for adjusting it. By increasing the number, the ASG will add EC2 instances (scale-out). By decreasing the number, the ASG will terminate EC2 instances (scale-in).&lt;/p&gt;

&lt;p&gt;AWS provides four options for Auto Scaling:&lt;/p&gt;

&lt;p&gt;1- Simple Scaling: This option relies on a CloudWatch Alarm. The alarm watches a metric like network or CPU utilization or the requests per target behind a load balancer. If the metric reaches a predefined threshold, a scaling policy is triggered. &lt;/p&gt;

&lt;p&gt;2- Step Scaling: Step scaling also requires a CloudWatch Alarm, but you can react differently based on how bad things are. You could add one instance if the CPU is utilized more than 60%. But if CPU utilization grows past 70%, you want to add two instances. If it rises past 80%, you add four instances. If it grows past 90%, you add eight.&lt;/p&gt;

&lt;p&gt;3- Target Tracking: With target tracking, you ask AWS to keep a metric within a specific range. For example, you can tell AWS to keep your CPU utilization inside an ASG at 50%. AWS will adjust the number of instances in a way to reach the target. No CloudWatch Alarm is required here. AWS will create alarms on your behalf.&lt;/p&gt;

&lt;p&gt;4- AWS Auto Scaling: AWS Auto Scaling can be confusing. It also provides target tracking, which it calls dynamic scaling. The difference is that you also get predictive scaling. To do so, AWS Auto Scaling takes the past 14 days into account to predict the next two days. This works great for workloads that are less used on weekends or during the night. One-off spikes like Black Friday and large end-of-month batch jobs will not be predicted.&lt;/p&gt;
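&lt;p&gt;The step-scaling thresholds described above can be sketched as a simple lookup. The percentages mirror the example in this post, not AWS defaults:&lt;/p&gt;

```python
def instances_to_add(cpu_utilization):
    # Step thresholds from the example above (illustrative values):
    # the busier the fleet, the bigger the scaling step.
    steps = [(90, 8), (80, 4), (70, 2), (60, 1)]
    for threshold, count in steps:
        if cpu_utilization >= threshold:
            return count
    return 0  # below the lowest threshold: no scale-out needed

print(instances_to_add(65))  # 1
print(instances_to_add(85))  # 4
print(instances_to_add(95))  # 8
```

&lt;p&gt;In AWS you declare these steps in the scaling policy itself and CloudWatch evaluates them; the function here only illustrates the decision logic.&lt;/p&gt;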

&lt;p&gt;In summary, no matter which option you choose, don’t forget to run extensive load tests to validate your configuration. Load testing is very time-consuming: you have to wait a long time to see how the system behaves as load patterns change.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Follow me on Twitter for the upcoming updates &lt;a href="https://twitter.com/dominguezdaniel" rel="noopener noreferrer"&gt;@dominguezdaniel&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;




</description>
      <category>aws</category>
      <category>ec2</category>
      <category>cloud</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
