<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mark Freedman</title>
    <description>The latest articles on DEV Community by Mark Freedman (@markfreedman).</description>
    <link>https://dev.to/markfreedman</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F301532%2F603965f6-9ab4-4d4c-aadf-e750a9e55306.jpeg</url>
      <title>DEV Community: Mark Freedman</title>
      <link>https://dev.to/markfreedman</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/markfreedman"/>
    <language>en</language>
    <item>
      <title>How Vertical AI Agents Work</title>
      <dc:creator>Mark Freedman</dc:creator>
      <pubDate>Sat, 01 Mar 2025 19:20:34 +0000</pubDate>
      <link>https://dev.to/markfreedman/how-vertical-ai-agents-work-2io5</link>
      <guid>https://dev.to/markfreedman/how-vertical-ai-agents-work-2io5</guid>
      <description>&lt;p&gt;Now that we’ve talked about what Vertical AI Agents are, let’s take a look at how they actually work. These AI agents don’t start out intelligent. They have to be trained, just like people learn skills over time. But instead of sources like teachers and YouTube videos, they learn from data — often a massive amount of data. This training makes them really good at one job, whether it’s spotting fraud, assisting doctors, or even recommending new songs.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Just a quick note — I’m writing these articles for readers who may not be familiar with these terms, or who aren’t in the software field. Even if you do have this experience, you may pick up something new.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;Feeding the AI – Where Data Comes From&lt;/h2&gt;

&lt;p&gt;So before a vertical AI can do its job, it needs information. Imagine trying to play a new video game without ever looking at the rules. You may make some educated guesses based on general knowledge you’ve gained playing games over the years. But you’d really have no idea exactly what to do. AI is the same way; it needs examples to learn from.&lt;/p&gt;

&lt;h3&gt;Where Does the Data Come From?&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Healthcare AI: Medical images, patient records, doctor’s notes&lt;/li&gt;
&lt;li&gt;Finance AI: Transaction logs, fraud reports, stock market trends&lt;/li&gt;
&lt;li&gt;Retail AI: Purchase history, customer behavior, inventory levels&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Retrieval-Augmented Generation (RAG) is a method that helps AI find the best and most useful information from huge amounts of data, improving what it learns. We’ll dig into this a bit more in the next article.&lt;/p&gt;
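&lt;p&gt;To make that concrete, here’s a toy sketch of the retrieval step. The documents and the scoring rule are made up for illustration; real RAG systems search with vector embeddings rather than raw keyword overlap.&lt;/p&gt;

```python
# Toy sketch of the "retrieval" step in RAG: score documents by keyword
# overlap with the question, then hand the best match to the model as
# extra context. Real systems use vector embeddings instead.
def retrieve(question, documents):
    q_words = set(question.lower().split())

    def overlap(doc):
        return len(q_words & set(doc.lower().split()))

    return max(documents, key=overlap)

docs = [
    "Fraud alerts are raised when a transaction looks unusual.",
    "X-ray images are scanned for early signs of disease.",
]
best = retrieve("How does the AI detect fraud in a transaction?", docs)
print(best)
```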

&lt;h3&gt;Cleaning the Data: Getting Rid of the Mess&lt;/h3&gt;

&lt;p&gt;AI can’t learn properly if the data is a mess. Garbage in leads to garbage out. Imagine trying to read a book full of spelling mistakes and missing pages. It wouldn’t make sense. That’s why data has to be cleaned before AI can use it. Cleaning means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Removing duplicate or useless data&lt;/li&gt;
&lt;li&gt;Filling in missing information&lt;/li&gt;
&lt;li&gt;Making sure all data follows the same format&lt;/li&gt;
&lt;/ul&gt;
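&lt;p&gt;Here’s a minimal sketch of those three cleaning steps on a list of records. The field names are made up for illustration; real pipelines usually use a library like pandas.&lt;/p&gt;

```python
# Minimal data-cleaning sketch: drop duplicates, fill in missing
# values, and normalize the format of each record.
def clean(records, default_age=0):
    seen, cleaned = set(), []
    for r in records:
        key = (r.get("name"), r.get("age"))
        if key in seen:             # 1. remove duplicate data
            continue
        seen.add(key)
        r = dict(r)
        if r.get("age") is None:    # 2. fill in missing information
            r["age"] = default_age
        r["name"] = str(r["name"]).strip().title()  # 3. consistent format
        cleaned.append(r)
    return cleaned

rows = [{"name": " mark ", "age": 30},
        {"name": " mark ", "age": 30},   # duplicate row
        {"name": "dana", "age": None}]   # missing value
print(clean(rows))
```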

&lt;h2&gt;Training the AI – Learning from Examples&lt;/h2&gt;

&lt;p&gt;Once the data is ready, it’s time to teach the AI. There are different ways AI can learn, just like us.&lt;/p&gt;

&lt;h3&gt;Three Ways AI Learns:&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Supervised Learning: The AI is given “labeled data,” meaning it’s told what’s right and wrong. Think of a teacher grading homework and showing students their mistakes.&lt;/li&gt;
&lt;li&gt;Unsupervised Learning: The AI is given data but not told what’s right or wrong. Instead, it finds patterns by itself, like a kid sorting Legos into colors without being told how.&lt;/li&gt;
&lt;li&gt;Reinforcement Learning: AI learns by trial and error, like playing a video game and figuring out what works based on scores and rewards.&lt;/li&gt;
&lt;/ul&gt;
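&lt;p&gt;A tiny supervised-learning sketch: the “labeled data” is a list of (features, label) pairs, and the model predicts by copying the label of the closest training example. The numbers are invented; real models learn far richer patterns than nearest-neighbor lookup.&lt;/p&gt;

```python
# 1-nearest-neighbor: the simplest possible "learn from labeled
# examples" model. Prediction = label of the closest training point.
def predict(train, x):
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    features, label = min(train, key=lambda pair: dist(pair[0], x))
    return label

labeled = [((1.0, 1.0), "normal"), ((9.0, 9.0), "fraud")]
print(predict(labeled, (8.5, 9.2)))   # closest to the "fraud" example
```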

&lt;p&gt;AI developers often use pre-trained AI models (like using a pre-made cake mix instead of baking from scratch). APIs (Application Programming Interfaces) allow software to access these models and build on them even faster. We’ll talk more about APIs in a future article. They have multiple uses.&lt;/p&gt;
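&lt;p&gt;Using a hosted pre-trained model usually means sending JSON to an HTTP endpoint. The endpoint URL and field names below are hypothetical, since every provider defines its own; this sketch only shows the general shape of such a request.&lt;/p&gt;

```python
import json

# Build (but don't send) a request to a hypothetical hosted-model API.
# "api.example.com", "model", and "prompt" are made-up names; check
# your provider's documentation for the real ones.
def build_request(prompt, model="example-model"):
    return {
        "url": "https://api.example.com/v1/generate",  # hypothetical
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"model": model, "prompt": prompt}),
    }

req = build_request("Is this transaction fraudulent?")
print(req["body"])
```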

&lt;h2&gt;Fine-Tuning – Polishing AI’s Knowledge&lt;/h2&gt;

&lt;p&gt;Training gives AI a good starting point, but it’s not perfect. It needs fine-tuning to do its job well. This step is like practicing for a test — learning from mistakes and improving over time.&lt;/p&gt;

&lt;h3&gt;How AI Gets Fine-Tuned:&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Adjusting settings to improve accuracy (we’ll talk about what these settings are in a future article)&lt;/li&gt;
&lt;li&gt;Testing on new, unseen data&lt;/li&gt;
&lt;li&gt;Removing biases that could cause unfair results&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Many AI systems keep learning even after they’re deployed. With workflow automation, AI models can keep updating themselves without needing to be re-trained from scratch every time new data appears.&lt;/p&gt;
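&lt;p&gt;The first step, adjusting a setting to improve accuracy, can be sketched with the simplest possible “setting”: a decision threshold tuned against held-out data. The scores and labels are invented for illustration.&lt;/p&gt;

```python
# Try several values of one setting (a decision threshold) and keep
# the one that scores best on data the model has not seen before.
def accuracy(threshold, held_out):
    correct = sum((score >= threshold) == is_fraud
                  for score, is_fraud in held_out)
    return correct / len(held_out)

# (model_score, actually_fraud?) pairs the model was never trained on
held_out = [(0.95, True), (0.80, True), (0.40, False), (0.10, False)]
best = max([0.3, 0.5, 0.7, 0.9], key=lambda t: accuracy(t, held_out))
print(best, accuracy(best, held_out))
```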

&lt;h2&gt;Putting AI to Work – Deployment&lt;/h2&gt;

&lt;p&gt;Once AI is ready, it’s time to put it to use. There are different ways to use an AI system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Inside apps or software (for example, an AI that detects fraud in banking apps)&lt;/li&gt;
&lt;li&gt;As an API (used by a chatbot that helps customer support teams)&lt;/li&gt;
&lt;li&gt;Standalone programs (like a medical AI that scans X-rays for doctors)&lt;/li&gt;
&lt;li&gt;Inside existing tools we use daily (like Slack, WhatsApp, etc.)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Continuous Learning – Keeping AI Smart Over Time&lt;/h2&gt;

&lt;p&gt;Just like we need to keep learning to stay sharp and relevant in our field, AI needs updates too. If an AI is trained on old data, it might make bad or outdated decisions. To remain useful, it must keep learning from new data.&lt;/p&gt;

&lt;h3&gt;How AI Stays Up To Date:&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;New training data helps it stay accurate&lt;/li&gt;
&lt;li&gt;Feedback from users helps it adjust and improve&lt;/li&gt;
&lt;li&gt;Error checking helps fix mistakes and biases&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As with initial training, AI systems can use Retrieval-Augmented Generation (RAG) to pull in the latest information whenever they make decisions. This helps them avoid outdated answers and stay fresh.&lt;/p&gt;

&lt;h3&gt;What is Inference?&lt;/h3&gt;

&lt;p&gt;Once AI is trained, it needs to use what it learned to make decisions. This is called inference. It’s when AI looks at new input and predicts an answer based on what it was taught.&lt;/p&gt;
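&lt;p&gt;In code, the key point is that training already produced the model’s parameters; inference just applies them to new input without any further learning. Here’s a deliberately oversimplified sketch, where the entire “model” is one learned number.&lt;/p&gt;

```python
# Inference sketch: the parameter below is frozen. At inference time we
# only apply it to new input; no learning happens here.
LEARNED_THRESHOLD = 500.0   # pretend this value came from training

def infer(transaction_amount):
    return "fraud" if transaction_amount > LEARNED_THRESHOLD else "ok"

print(infer(42.50))      # ok
print(infer(9_999.00))   # fraud
```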

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Example:&lt;/strong&gt; A fraud detection AI is trained on thousands of real fraud cases. When it sees a new transaction, it infers whether it looks like fraud or not.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Example:&lt;/strong&gt; A medical AI learns from millions of X-ray images. When it sees a new X-ray, it infers if there’s a problem.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Wrapping Up&lt;/h2&gt;

&lt;p&gt;Vertical AI Agents don’t start smart. They learn from data, get trained, and keep improving over time. But learning isn’t enough. AI needs tools to help it stay useful. From Retrieval-Augmented Generation (RAG) to APIs and workflow automation, these technologies help AI work more efficiently, stay current, and improve over time. Improving enough to think for itself? Well, that’s a debate for another time.&lt;/p&gt;

&lt;p&gt;Next time, we’ll dive deeper into how these tools work and how they help AI stay accurate, relevant, and powerful.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
    </item>
    <item>
      <title>Introduction to Vertical AI Agents</title>
      <dc:creator>Mark Freedman</dc:creator>
      <pubDate>Mon, 24 Feb 2025 13:00:46 +0000</pubDate>
      <link>https://dev.to/markfreedman/introduction-to-vertical-ai-agents-mnl</link>
      <guid>https://dev.to/markfreedman/introduction-to-vertical-ai-agents-mnl</guid>
<description>&lt;h2&gt;What is AI, and How Do Vertical AI Agents Fit In?&lt;/h2&gt;

&lt;p&gt;Artificial Intelligence (AI) is impacting many industries, but not all AI works the same way. Some AI systems act like generalists — they can do a lot of different things but aren’t great at any one job. A jack of all trades, master of none. Others are trained specialists, so they can excel in one specific area. These are Vertical AI Agents.&lt;/p&gt;

&lt;h2&gt;AI vs. Regular Software&lt;/h2&gt;

&lt;p&gt;Typical software follows predefined instructions, like a cookbook where every recipe has step-by-step directions. But AI learns from experience by looking at patterns in data and improving its output over time.&lt;/p&gt;

&lt;p&gt;For example, look at an e-commerce recommendation engine like Amazon’s. Instead of following hard-coded rules, it studies what people browse and buy, like a store employee who learns what items to suggest based on what a customer has liked before.&lt;/p&gt;
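&lt;p&gt;A toy version of that “learn from what people buy” idea: count which items were bought together with the customer’s item across past orders, and recommend the most frequent companion. The orders are invented, and real recommenders are vastly more sophisticated, but the pattern-from-data idea is the same.&lt;/p&gt;

```python
from collections import Counter

# Co-occurrence recommender: no hard-coded rules, just counts of what
# past customers bought together.
def recommend(item, past_orders):
    companions = Counter()
    for order in past_orders:
        if item in order:
            companions.update(i for i in order if i != item)
    return companions.most_common(1)[0][0]

orders = [{"guitar", "strings"}, {"guitar", "strings", "tuner"},
          {"guitar", "capo"}, {"piano", "bench"}]
print(recommend("guitar", orders))   # strings
```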

&lt;h2&gt;Broad AI vs. Narrow AI – “Swiss Army Knife vs. Precision Tool”&lt;/h2&gt;

&lt;p&gt;AI generally falls into two categories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;General AI (the Swiss Army knife)&lt;/strong&gt; can handle a variety of tasks but isn’t deeply specialized in one. Think about Siri, Alexa, or ChatGPT out-of-the-box. They can answer random questions but they aren’t experts in any one subject. But when models like ChatGPT are fine-tuned, they can become more focused, moving closer toward specialized AI.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Narrow AI (the precision tool)&lt;/strong&gt; is built for a specific task. For example, an AI designed to detect fraudulent credit card transactions is focused on that task and wouldn’t be useful for answering trivia questions or writing blog post outlines.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;What is a Vertical AI Agent? – “The AI That Masters a Single Industry”&lt;/h2&gt;

&lt;p&gt;A &lt;strong&gt;Vertical AI Agent&lt;/strong&gt; is an AI system built to work in one specific industry (“vertical”). It’s designed to handle tasks within that domain better than a general AI could. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Healthcare AI&lt;/strong&gt; that assists doctors by analyzing medical images for early disease detection.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Finance AI&lt;/strong&gt; that monitors transactions to detect fraud before it happens.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Music AI&lt;/strong&gt; that helps sound engineers tweak levels for perfectly balanced tracks (something I’ve been using for my music).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AI agents don’t try to be everything to everyone. Rather, they focus on one field, like a developer who specializes in a single tech stack and eventually becomes an expert.&lt;/p&gt;

&lt;h2&gt;Other Info&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;AI Training and Model Optimization:&lt;/strong&gt; AI models don’t start out smart. They need to be trained. The more relevant data they analyze, the better they get. It’s like debugging code — test, refine, iterate, and optimize until it works correctly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why More Data Improves AI Accuracy:&lt;/strong&gt; AI works best when trained on diverse, high-quality (and often high-volume) data. The more real examples it can learn from, the better its predictions become. Just as a developer who writes more code gets better at troubleshooting and problem-solving.&lt;/p&gt;

&lt;h2&gt;What’s Next?&lt;/h2&gt;

&lt;p&gt;Now that we know what Vertical AI Agents are, my next article will explain how they’re built. We’ll break down how they gather topic-specific data, train on real-world examples, and get deployed to solve specific problems.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
    </item>
    <item>
      <title>Dropbox Cautionary Tale</title>
      <dc:creator>Mark Freedman</dc:creator>
      <pubDate>Tue, 03 Nov 2020 19:50:19 +0000</pubDate>
      <link>https://dev.to/markfreedman/dropbox-cautionary-tale-3b2e</link>
      <guid>https://dev.to/markfreedman/dropbox-cautionary-tale-3b2e</guid>
<description>&lt;p&gt;This is my first post in a long time. You know: pandemic, elections, world-falling-apart distractions. I’ll be prepping for my next AWS cert soon. But in the meantime, I have a cautionary tale about Dropbox.&lt;/p&gt;

&lt;p&gt;I love Dropbox. It has saved me so much time. It’s the most reliable synchronizing software I’ve used. But beware of a fatal flaw I came across last week, which was almost a disaster.&lt;/p&gt;

&lt;p&gt;I’m a backup fanatic. I have four copies of everything locally, and three more copies of everything in cloud services.&lt;/p&gt;

&lt;p&gt;But I did not include my Dropbox folder in that set of backups. Because, you know, Dropbox.&lt;/p&gt;

&lt;p&gt;On my 8-year-old MacBook Air (my traveling Mac machine), I have my Dropbox folder on an external drive. The internal drive is simply too small. If I restart the system without the drive connected, I properly get a simple warning, and I temporarily disable it. But last week the machine was already booted up, and the drive was connected. While moving things around to clean the room it was in, the drive got disconnected. I was unaware that this happened, so I did not yet reconnect it.&lt;/p&gt;

&lt;p&gt;But Dropbox considered that to mean “delete all Dropbox files.”&lt;/p&gt;

&lt;p&gt;All my files were disappearing on all the machines where I had Dropbox installed. I mean like five different machines. I noticed this when a development project I was working on started failing. I saw it was due to files disappearing. Panicked, I scrambled to turn off the automatic start setting of Dropbox on each machine and shut it down.&lt;/p&gt;

&lt;p&gt;Too late.&lt;/p&gt;

&lt;p&gt;But then I remembered I have the 30 day deletion recovery feature on Dropbox, so I desperately tried restoring. Over a million files. The site could not handle it. Spinning cursor. Spinning. Spinning. Canceling, I then tried restoring subfolders piecemeal. Spinning. Spinning. Error message.&lt;/p&gt;

&lt;p&gt;I started digging through the site looking for a number to call to ask if I could get a physical hard drive sent to me with all my deleted files. I never found such an option (still don’t know if that’s available).&lt;/p&gt;

&lt;p&gt;More panic.&lt;/p&gt;

&lt;p&gt;Had lunch to take a breath. Then I realized, luckily, because the drive was never reconnected all the files were still on that drive! Saved! It took me several days, but now all my files are restored.&lt;/p&gt;

&lt;p&gt;And now my Dropbox folder is part of my regular backup process.&lt;/p&gt;

&lt;p&gt;I hope this helps others avoid this situation. We already have enough to worry about these days. Stay safe, everyone.&lt;/p&gt;

</description>
      <category>technology</category>
      <category>dropbox</category>
      <category>lesson</category>
      <category>recovery</category>
    </item>
    <item>
      <title>AWS Elastic Compute Cloud (EC2)</title>
      <dc:creator>Mark Freedman</dc:creator>
      <pubDate>Mon, 20 Jan 2020 13:20:43 +0000</pubDate>
      <link>https://dev.to/markfreedman/aws-elastic-compute-cloud-ec2-39cb</link>
      <guid>https://dev.to/markfreedman/aws-elastic-compute-cloud-ec2-39cb</guid>
      <description>&lt;p&gt;&lt;a href="https://aws.amazon.com/ec2/"&gt;EC2&lt;/a&gt; is essentially a virtual server in the cloud. It can be available within minutes after setting it up. Compare that to how long it would take to provision and prepare a physical server in your own datacenter. Even ordering it and awaiting initial configuration and shipment can take weeks. General knowledge about EC2 is one of the key categories in the &lt;a href="https://dev.to/markfreedman/aws-certified-cloud-practitioner-2mi3"&gt;AWS Certified Cloud Practitioner&lt;/a&gt; exam.&lt;/p&gt;

&lt;h4&gt;Instance Types&lt;/h4&gt;

&lt;p&gt;EC2 is region-specific, so we should launch instances in a region that makes sense for latency and regulatory reasons. When we set up an EC2 instance, we get to choose from a large selection of pre-canned images across several different Linux offshoots and Microsoft Windows OSs. There are several &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html"&gt;instance types&lt;/a&gt; for these operating systems that we can select from. Here’s a &lt;a href="https://ec2instances.info/"&gt;great chart&lt;/a&gt; to help find the most appropriate instances to use.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; &lt;em&gt;There’s a common mnemonic we can use to help us remember the different instance types, but that likely won’t appear on the&lt;/em&gt; &lt;strong&gt;AWS Certified Cloud Practitioner&lt;/strong&gt; &lt;em&gt;exam:&lt;/em&gt; &lt;strong&gt;FIGHT DR MCPXZ&lt;/strong&gt; &lt;em&gt;(Fight Dr. McPixie), although with a recent change, the newer mnemonic could be&lt;/em&gt; &lt;strong&gt;FIGHT DR MACPXZ&lt;/strong&gt;&lt;em&gt;, due to the A1 class that was added in 2018. Keep in mind that some exams may refer to the older mnemonic.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In general, these are the existing type categories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;F – For &lt;strong&gt;F&lt;/strong&gt;PGA (&lt;a href="https://simple.wikipedia.org/wiki/Field-programmable_gate_array"&gt;Field Programmable Gate Arrays&lt;/a&gt;) (&lt;a href="https://aws.amazon.com/ec2/instance-types/f1/"&gt;F1 instances&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;I – For &lt;strong&gt;I&lt;/strong&gt;OPS (&lt;a href="https://aws.amazon.com/ec2/instance-types/#Storage_Optimized"&gt;Storage Optimized&lt;/a&gt;, backed by &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types"&gt;IOPS SSD EBS&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;G – For &lt;strong&gt;G&lt;/strong&gt;raphics (&lt;a href="https://aws.amazon.com/ec2/instance-types/#Accelerated_Computing"&gt;Accelerated Computing&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;H – &lt;strong&gt;H&lt;/strong&gt;igh Disk Throughput (&lt;a href="https://aws.amazon.com/ec2/instance-types/i3/"&gt;I3 instances&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;T – Cheap general purpose, like &lt;a href="https://aws.amazon.com/ec2/instance-types/t2/"&gt;&lt;strong&gt;T&lt;/strong&gt;2&lt;/a&gt; Micro&lt;/li&gt;
&lt;li&gt;D – &lt;strong&gt;D&lt;/strong&gt;ensity (&lt;a href="https://aws.amazon.com/about-aws/whats-new/2015/03/now-available-d2-instances-the-latest-generation-of-amazon-ec2-dense-storage-instances/"&gt;D2 instances&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;R – &lt;strong&gt;R&lt;/strong&gt;AM (&lt;a href="https://aws.amazon.com/ec2/instance-types/high-memory/"&gt;High Memory instances&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;M – &lt;strong&gt;M&lt;/strong&gt;ain choice for general-purpose apps (&lt;a href="https://aws.amazon.com/ec2/instance-types/m5/"&gt;M class instances&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;A – &lt;strong&gt;A&lt;/strong&gt;RM-based workloads (&lt;a href="https://aws.amazon.com/ec2/instance-types/a1/"&gt;A1 instances&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;C – &lt;strong&gt;C&lt;/strong&gt;ompute (&lt;a href="https://aws.amazon.com/ec2/instance-types/c5/"&gt;C class instances&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;P – &lt;strong&gt;P&lt;/strong&gt;ics (graphics) (&lt;a href="https://aws.amazon.com/ec2/instance-types/g3/"&gt;G class instances&lt;/a&gt;) (an alternative is now &lt;a href="https://aws.amazon.com/about-aws/whats-new/2018/11/amazon-ec2-elastic-gpus-is-now-amazon-elastic-graphics/"&gt;Amazon EC2 Elastic GPUs&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;X – Extreme Memory (&lt;a href="https://aws.amazon.com/ec2/instance-types/x1/"&gt;&lt;strong&gt;X&lt;/strong&gt;1&lt;/a&gt; and &lt;a href="https://aws.amazon.com/ec2/instance-types/x1e/"&gt;X1e&lt;/a&gt; instances)&lt;/li&gt;
&lt;li&gt;Z – Extreme Memory and CPU (&lt;a href="https://aws.amazon.com/ec2/instance-types/z1d/"&gt;&lt;strong&gt;z&lt;/strong&gt;1d instances&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Aside from these AWS-supplied instance images, we can also create instances from a &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html"&gt;saved image (AMI)&lt;/a&gt; that we created from previously configured instances, or from AMIs purchased from the &lt;a href="https://aws.amazon.com/marketplace"&gt;AWS Marketplace&lt;/a&gt;. This is often used for &lt;a href="https://aws.amazon.com/ec2/autoscaling/"&gt;Auto Scaling&lt;/a&gt; launch configurations and &lt;a href="https://aws.amazon.com/elasticloadbalancing/"&gt;Elastic Load Balancing&lt;/a&gt; target groups. This will be covered in another article.&lt;/p&gt;

&lt;p&gt;We should always design for failure. So, at minimum, we should run EC2 instances in more than one availability zone in the region.&lt;/p&gt;

&lt;h4&gt;Security&lt;/h4&gt;

&lt;p&gt;When we create an instance, we also need to create a &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-rules-reference.html"&gt;Security Group&lt;/a&gt; to poke holes in the firewall for ports from specific IP address(es) or from anywhere: 0.0.0.0/0. Think of this as a virtual firewall at the instance level. By default, the SSH port (22) is opened up. But other common ports we may want to open up are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;HTTP (80)&lt;/li&gt;
&lt;li&gt;HTTPS (443)&lt;/li&gt;
&lt;li&gt;RDP (3389)&lt;/li&gt;
&lt;/ul&gt;
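&lt;p&gt;As a sketch, these ingress rules can be expressed in the &lt;code&gt;IpPermissions&lt;/code&gt; shape that the EC2 API (for example, boto3’s &lt;code&gt;authorize_security_group_ingress&lt;/code&gt;) expects. The office CIDR below is a made-up documentation address; in practice we’d lock SSH down to known IPs rather than 0.0.0.0/0.&lt;/p&gt;

```python
# Build an EC2 security-group ingress rule in the IpPermissions shape.
# 0.0.0.0/0 means "from anywhere"; fine for a demo, risky in production.
def ingress_rule(port, cidr="0.0.0.0/0"):
    return {"IpProtocol": "tcp", "FromPort": port, "ToPort": port,
            "IpRanges": [{"CidrIp": cidr}]}

web_rules = [
    ingress_rule(80),                          # HTTP
    ingress_rule(443),                         # HTTPS
    ingress_rule(22, cidr="203.0.113.0/24"),   # SSH from one office only
]
print(web_rules)
```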

&lt;p&gt;If we want an additional layer of access control at the subnet level (including explicit deny rules for specific IP addresses), we’d apply &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html"&gt;network ACLs (NACLs)&lt;/a&gt; to the subnets in our EC2 instances’ VPC.&lt;/p&gt;

&lt;h4&gt;Storage&lt;/h4&gt;

&lt;p&gt;When setting up an EC2 instance, we also have to configure the storage we want attached to our instance. We do that by specifying the &lt;a href="https://aws.amazon.com/ebs/"&gt;Elastic Block Storage (EBS)&lt;/a&gt; type(s) to attach. These are virtual disks in the cloud, and are created in the same availability zone (AZ) as the EC2 instance. Each virtual storage device is auto-replicated:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;SSD&lt;/strong&gt; (Solid-State Drive)

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GP2&lt;/strong&gt; is a general purpose SSD, often used as the main root volume.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IO1&lt;/strong&gt; is a provisioned IOPS SSD, a high-performance drive type best suited for high-performance databases.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HDD&lt;/strong&gt; (Magnetic Drive)

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;HDD&lt;/strong&gt; drives cannot be boot volumes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ST1&lt;/strong&gt; is a throughput-optimized HDD, which is a low-cost volume for frequently accessed, throughput-intensive workloads, such as big data and log processing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SC1&lt;/strong&gt; is a “cold” HDD, which is the lowest cost option for less frequently accessed workloads, such as file servers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Magnetic&lt;/strong&gt; is a previous generation EBS type, and is being phased out.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;Instance Access&lt;/h4&gt;

&lt;p&gt;Once we get our EC2 instance(s) configured and started, we’ll often need direct access to the machines. The most common method is via &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html"&gt;SSH&lt;/a&gt; (port 22). Upon launch of an EC2 instance, we’re prompted to select or create a “key pair” (&lt;a href="https://en.wikipedia.org/wiki/Public-key_cryptography"&gt;public/private key&lt;/a&gt;) that we’ll need to SSH into Linux instances and to obtain a password to &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/connecting_to_windows_instance.html"&gt;RDP into Windows instances&lt;/a&gt;. This creates a private key (.PEM file) that can be used directly from a Linux-based OS (including MacOS) to SSH into the instance. To use this key file from Windows, we’d need to use a utility like &lt;a href="https://www.putty.org/"&gt;PuTTY&lt;/a&gt; to convert the key file into a .PPK file and SSH into the instance.&lt;/p&gt;

&lt;p&gt;From a Linux-based OS, after saving the .PEM file, we need to apply read-only rights to the file owner by running &lt;code&gt;chmod 400 keyname.pem&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;From a local machine, we can connect via SSH by running &lt;code&gt;ssh ec2-user@x.x.x.x -i keyname.pem&lt;/code&gt;, where x.x.x.x is the IP address we can grab from the AWS console’s &lt;strong&gt;IPv4 Public IP&lt;/strong&gt; field on the EC2 instance &lt;strong&gt;Description&lt;/strong&gt; panel.&lt;/p&gt;

&lt;p&gt;Note that the &lt;a href="https://aws.amazon.com/cli/"&gt;AWS CLI&lt;/a&gt; does not use this SSH key pair; it authenticates with IAM access keys instead. Although this is a topic for another article, please note that those access keys are stored locally (in plain text) in the &lt;code&gt;~/.aws&lt;/code&gt; folder. If we want to run CLI commands from the EC2 instance itself, it is far more secure to apply &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html"&gt;IAM Roles&lt;/a&gt; to the instance instead. If anyone ever got access to the EC2 instance and found credentials stored in its file system, they could act with the full permissions those credentials grant.&lt;/p&gt;

&lt;h4&gt;Pricing&lt;/h4&gt;

&lt;p&gt;I cover EC2 pricing in much more detail &lt;a href="https://dev.to/markfreedman/aws-pricing-and-billing-part-1-4f0g"&gt;in another article&lt;/a&gt;. In general, there are four main EC2 pricing models:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;On-Demand&lt;/strong&gt; (low-cost and flexible)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reserved&lt;/strong&gt; (steady-state, predictable usage)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dedicated&lt;/strong&gt; (for regulatory requirements)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spot&lt;/strong&gt; (flexible start and end times)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This may be the last article I can write before my first AWS certificate exam, so wish me luck 🙂&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ec2</category>
      <category>certification</category>
      <category>ebs</category>
    </item>
    <item>
      <title>AWS Simple Storage Service (S3)</title>
      <dc:creator>Mark Freedman</dc:creator>
      <pubDate>Sat, 18 Jan 2020 16:52:12 +0000</pubDate>
      <link>https://dev.to/markfreedman/aws-simple-storage-service-s3-1fa6</link>
      <guid>https://dev.to/markfreedman/aws-simple-storage-service-s3-1fa6</guid>
      <description>&lt;p&gt;&lt;em&gt;Last updated: 2020-04-05&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;I’ll be updating my AWS articles from time to time, as I learn more. I got my first cert — the AWS Certified Cloud Practitioner certification — on January 22nd, but as I took the practice exams (5 exams, 2x each) and the actual exam, I learned about gaps in my knowledge. So I’ll be filling those in through the articles I wrote beforehand.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/s3/"&gt;S3&lt;/a&gt; is object-based flat file unlimited storage. It’s unlimited, but that doesn’t mean we should throw files up there without thinking — storage still costs money. It’s not block-based, so it’s not meant for storing operating systems or live databases. But any type of file can be stored (including database file backups), and each can be from 0 bytes up to 5 TB. General knowledge about S3 is one of the key categories in the &lt;a href="https://dev.to/markfreedman/aws-certified-cloud-practitioner-2mi3"&gt;AWS Certified Cloud Practitioner&lt;/a&gt; exam.&lt;/p&gt;

&lt;h4&gt;Buckets&lt;/h4&gt;

&lt;p&gt;Files are stored in &lt;strong&gt;buckets&lt;/strong&gt;, which we can think of as root-level folders. Bucket names must be globally unique because they resolve to URLs, which are global. Most of the time, we wouldn’t expose these URLs publicly except for static S3 websites.&lt;/p&gt;

&lt;p&gt;&lt;del&gt;When we name our buckets, AWS automatically postfixes “S3” or the bucket's region to the name, depending on the region. Here are the two naming examples:&lt;/del&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;del&gt;&lt;strong&gt;us-east-1&lt;/strong&gt;: &lt;code&gt;https://my-unique-bucket-name.s3.amazon.com/&lt;/code&gt;&lt;/del&gt;&lt;/li&gt;
&lt;li&gt;&lt;del&gt;&lt;strong&gt;All other regions&lt;/strong&gt;: &lt;code&gt;https://my-unique-bucket-name.us-west-2.amazon.com/&lt;/code&gt;&lt;/del&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;del&gt;Based on the above, we may wonder why the names still need to be globally unique instead of regionally. My answer: I don't know. Maybe it's a legacy reason.&lt;/del&gt;&lt;/p&gt;

&lt;p&gt;&lt;del&gt;I recommend using reverse-domain naming using an appropriate domain you own. For example, I start all my bucket names with com.markfreedman. An exception would be when we host a static site. In that case, we need to use a normal domain name (in my case, &lt;code&gt;markfreedman.com&lt;/code&gt;, although I already have this hosted elsewhere).&lt;/del&gt;&lt;/p&gt;

&lt;p&gt;Based on this &lt;a href="https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/"&gt;article&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html"&gt;this documentation&lt;/a&gt;, please note that the bucket URL naming convention is changing. AWS supports both path-style requests and virtual hosted-style requests. But any buckets created after September 30, 2020 will only support virtual hosted-style requests. Also, the region should be specified in the URL. We could leave out the region, but there’s a slight bit of overhead due to AWS forcing a 307 redirect to the specific region (us-east-1 is checked first).&lt;/p&gt;

&lt;p&gt;Here are the updated virtual hosted-style naming examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Regional-specific&lt;/strong&gt;: &lt;code&gt;https://my-unique-bucket-name.s3.us-east-1.amazonaws.com/&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Without specifying a region (will 307 redirect)&lt;/strong&gt;: &lt;code&gt;https://my-unique-bucket-name.s3.amazonaws.com/&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
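&lt;p&gt;A quick sketch of how those two virtual hosted-style URL forms are assembled. The bucket name is made up; real bucket names must be globally unique.&lt;/p&gt;

```python
# Build an S3 virtual hosted-style URL. Omitting the region yields the
# shorter form, which AWS may answer with a 307 redirect.
def bucket_url(bucket, region=None):
    if region:
        return f"https://{bucket}.s3.{region}.amazonaws.com/"
    return f"https://{bucket}.s3.amazonaws.com/"

print(bucket_url("my-unique-bucket-name", "us-east-1"))
print(bucket_url("my-unique-bucket-name"))
```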

&lt;p&gt;I’m changing my recommended bucket naming convention slightly, due to the Bucket Names with Dots section of this &lt;a href="https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/"&gt;article&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;I recommend using reverse-domain naming using an appropriate domain you own, but replacing dots with dashes. For example, I now start all my bucket names with &lt;code&gt;com-markfreedman&lt;/code&gt;. An exception would be when we host a static site. In that case, we need to use a normal domain name (in my case, &lt;code&gt;markfreedman.com&lt;/code&gt;, although I already have this hosted elsewhere).&lt;/p&gt;
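&lt;p&gt;That convention is easy to automate: reverse the parts of a domain you own and join them with dashes instead of dots.&lt;/p&gt;

```python
# Turn a domain you own into a dash-separated reverse-domain bucket
# prefix, avoiding the problems dots cause in bucket names.
def bucket_prefix(domain):
    return "-".join(reversed(domain.split(".")))

print(bucket_prefix("markfreedman.com"))   # com-markfreedman
```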

&lt;p&gt;Although bucket names must be globally unique, storage of the buckets themselves is region-specific. We should select a bucket’s region based on latency requirements. If most access would be from a certain region, create the bucket in the closest available AWS region. Using CloudFront can alleviate this need, though.&lt;/p&gt;

&lt;p&gt;Public access is blocked by default. AWS requires us to be explicit in exposing buckets to the public Internet. All those stories of hacked data (often exposed S3 buckets) should make us thankful for this default. We can secure buckets with IAM policies (bucket policies).&lt;/p&gt;

&lt;p&gt;We can also set lifecycle rules on a bucket, which specify which storage class to move its objects to, and when to move them. More on storage classes below.&lt;/p&gt;

&lt;h4&gt;
  
  
  Objects (Files)
&lt;/h4&gt;

&lt;p&gt;When we upload a file to an S3 bucket, AWS considers the file name to be the &lt;strong&gt;key&lt;/strong&gt;, and refers to it as the &lt;strong&gt;key&lt;/strong&gt; in the S3 APIs and SDKs. The S3 data model is a flat structure; there’s no hierarchy of subfolders (sub-buckets?). This is why I described buckets as root-level folders. However, you can simulate a logical folder hierarchy by separating portions of the key name with forward slashes (/).&lt;/p&gt;
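&lt;p&gt;The S3 list APIs expose this simulation through their prefix and delimiter parameters. Here’s a rough sketch of the idea in plain Python (the &lt;code&gt;list_folder&lt;/code&gt; helper is hypothetical):&lt;/p&gt;

```python
def list_folder(keys, prefix, delimiter="/"):
    """Mimic S3's prefix + delimiter listing: return the immediate
    'subfolders' and 'files' under a prefix in a flat key space."""
    folders, files = set(), []
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter in rest:
            # Everything up to the next delimiter acts like a subfolder.
            folders.add(prefix + rest.split(delimiter)[0] + delimiter)
        else:
            files.append(key)
    return sorted(folders), files

keys = ["photos/2020/cat.jpg", "photos/2020/dog.jpg", "photos/readme.txt", "notes.txt"]
print(list_folder(keys, "photos/"))
# (['photos/2020/'], ['photos/readme.txt'])
```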

&lt;p&gt;The file content is referred to as the &lt;strong&gt;value&lt;/strong&gt;. Therefore, an S3 file is sometimes referred to as a key/value pair.&lt;/p&gt;

&lt;p&gt;Files can be versioned and encrypted, and can carry additional metadata. We can secure files (objects) with policies that target individual objects, and set ACLs at the file (object) level. By default, the resource owner has full ACL rights to the file. For extra protection, we can require &lt;a href="https://en.wikipedia.org/wiki/Multi-factor_authentication"&gt;multi-factor authentication (MFA)&lt;/a&gt; in order to delete an object.&lt;/p&gt;

&lt;p&gt;When we upload a file to an S3 bucket, we’ll know the upload was successful if an HTTP 200 code is returned. This matters most when uploading programmatically; if we upload manually, the console tells us whether it succeeded.&lt;/p&gt;

&lt;p&gt;We can expect 99.99% availability, but AWS only guarantees 99.9%. But it also guarantees 99.999999999% durability (11 x 9s). So we can be confident that our files will always be there.&lt;/p&gt;
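&lt;p&gt;AWS illustrates 11 nines by saying that if you store 10,000,000 objects, you can expect to lose a single object roughly once every 10,000 years. The arithmetic checks out:&lt;/p&gt;

```python
# Eleven nines of durability means an expected annual loss rate of
# one object in 100 billion. For 10,000,000 stored objects:
durability = 0.99999999999
objects = 10_000_000
expected_losses_per_year = objects * (1 - durability)
print(expected_losses_per_year)       # ~0.0001 objects lost per year
print(1 / expected_losses_per_year)   # ~10,000 years per lost object
```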

&lt;p&gt;There are specific “data consistency” rules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Read after Write Consistency&lt;/strong&gt; — when new files are uploaded, we can read the file immediately afterwards.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Eventual Consistency&lt;/strong&gt; — when files are updated or deleted, immediately attempting to read the file afterwards may result in the old file content. It can take a short period of time (perhaps a few seconds or more) to propagate throughout AWS (replication, cache cleaning), which is why we may see the old file.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;(Update, 2020-04-05: I originally mentioned that new files use POST and updates use PUT. But according to what I can gather from their docs and some online Q&amp;amp;A, POST is actually an alternate to PUT that enables browser-based uploads to S3. Parameters can either be passed via HTTP headers by using PUT, or passed via form fields by using POST, no matter if the object is new or being replaced. From what I can tell, S3 doesn't really "replace" objects per se, since versioning is an option.)&lt;/em&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Storage Classes
&lt;/h4&gt;

&lt;p&gt;S3 supports tiered storage classes, which we can change on demand at the object level; we don’t specify a class at bucket creation time. Lifecycle rules, on the other hand, are defined at the bucket level, and govern the objects in that bucket:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3 Standard&lt;/strong&gt; (most common) is designed to sustain loss of 2 facilities concurrently, and has the best performance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3 IA&lt;/strong&gt; (Infrequently Accessed) is lower cost, but we’re charged a retrieval fee.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3 One Zone IA&lt;/strong&gt; is a lower cost version of S3 IA that stores data in a single availability zone. It’s the only tier that isn’t replicated across 3 or more zones.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3 Intelligent Tiering&lt;/strong&gt; lets AWS automatically move data to the most cost-effective tier by monitoring usage patterns. For most buckets, I recommend using this, although it’s best for long-lived data with unpredictable access patterns.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3 Glacier&lt;/strong&gt; is a secure, durable, low cost archival tier, which allows for configurable retrieval times, from minutes to hours. It provides query-in-place functionality for data analysis of archived data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3 Glacier Deep Archive&lt;/strong&gt; is the lowest cost tier, but it requires up to 12 hour retrieval time. This is great for archived data that doesn’t need to be readily available.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3 RRS&lt;/strong&gt; is Reduced Redundancy Storage, but is being phased out. It appears to be similar to S3 One Zone IA.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
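&lt;p&gt;To make lifecycle transitions concrete, here’s a sketch of how a rule might map an object’s age to a storage class. The thresholds and class choices are hypothetical, not AWS defaults:&lt;/p&gt;

```python
# Hypothetical lifecycle rule: objects move to cheaper classes as they
# age. Checked from oldest threshold to newest.
TRANSITIONS = [
    (365, "DEEP_ARCHIVE"),   # rarely needed after a year
    (90, "GLACIER"),         # archive after 90 days
    (30, "STANDARD_IA"),     # infrequent access after 30 days
]

def storage_class(age_days: int) -> str:
    for threshold, cls in TRANSITIONS:
        if age_days >= threshold:
            return cls
    return "STANDARD"

print(storage_class(5))     # STANDARD
print(storage_class(45))    # STANDARD_IA
print(storage_class(400))   # DEEP_ARCHIVE
```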

&lt;h4&gt;
  
  
  Pricing
&lt;/h4&gt;

&lt;p&gt;The prices we’re charged for S3 (&lt;a href="https://dev.to/markfreedman/aws-pricing-and-billing-part-2-770"&gt;covered in another article&lt;/a&gt;) are based on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Storage&lt;/li&gt;
&lt;li&gt;Requests&lt;/li&gt;
&lt;li&gt;Storage Management&lt;/li&gt;
&lt;li&gt;Data Transfer&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html"&gt;Transfer Acceleration&lt;/a&gt;, which enables fast transfer to distant locations by using &lt;a href="https://aws.amazon.com/cloudfront/"&gt;CloudFront&lt;/a&gt; edge locations, making use of backbone networks (much larger network “pipes”).&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/replication.html"&gt;Cross Region Replication&lt;/a&gt;, which automatically replicates to another region bucket for disaster recovery purposes.&lt;/li&gt;
&lt;li&gt;We can also configure buckets to require the &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/RequesterPaysBuckets.html"&gt;requester pay&lt;/a&gt; for access.&lt;/li&gt;
&lt;li&gt;If we have multiple accounts under an &lt;a href="https://aws.amazon.com/organizations/"&gt;Organization&lt;/a&gt;, S3 offers us volume discounts when we enable &lt;a href="https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/consolidated-billing.html"&gt;Consolidated Billing&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>s3</category>
      <category>devops</category>
      <category>certification</category>
    </item>
    <item>
      <title>AWS Pricing and Billing (Part 3)</title>
      <dc:creator>Mark Freedman</dc:creator>
      <pubDate>Sat, 18 Jan 2020 15:56:35 +0000</pubDate>
      <link>https://dev.to/markfreedman/aws-pricing-and-billing-part-3-3mf7</link>
      <guid>https://dev.to/markfreedman/aws-pricing-and-billing-part-3-3mf7</guid>
      <description>&lt;p&gt;In &lt;a href="https://dev.to/markfreedman/aws-pricing-and-billing-part-2-770"&gt;my last article&lt;/a&gt;, we discussed S3 pricing, which was an entire topic on its own. In this part, we’ll discuss other key service charges, and the options and decisions we need to make when planning.&lt;/p&gt;

&lt;p&gt;I don’t believe the exams will ask about specific prices, as these can always change. But the important thing is understanding the relative pricing, so you could be able to make intelligent cost analysis decisions.&lt;/p&gt;

&lt;p&gt;Because there are often many details and variables that go into cost calculation, I’ll also link directly to the AWS pages for each service cost.&lt;/p&gt;

&lt;h4&gt;
  
  
  Snowball
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Snowball&lt;/strong&gt; is a physical petabyte-scale device used for migrating gigantic data sets into and out of AWS S3 storage. It’s much cheaper and faster to move on-premises data into and out of AWS using a physical device, especially when we’re talking petabytes.&lt;/p&gt;

&lt;p&gt;There’s a service fee per “job.” A job is the shipping/loading/shipping/unloading of a single Snowball device. There are two sizes of Snowball: 50 TB and 80 TB. The service fee for each is $200 and $250 ($320 for Singapore and Seoul), respectively. Standard shipping charges are extra. There’s also a daily charge for holding onto the device, although the first 10 days are free. Beyond 10 days, we’re charged an extra $15 ($20 for Singapore and Seoul) per day. It’s free to transfer the data into S3, but transferring data out of S3 runs between 3 and 5 cents per GB, depending upon region.&lt;/p&gt;

&lt;h4&gt;
  
  
  Elastic Block Store (EBS)
&lt;/h4&gt;

&lt;p&gt;In &lt;a href="https://dev.to/markfreedman/aws-pricing-and-billing-part-1-4f0g"&gt;my first article on billing and pricing&lt;/a&gt;, we discussed the support options’ costs and EC2 pricing. There are a few services that work hand-in-hand with EC2 that we’ll discuss here. First off, an EC2 instance (or any server instance) is useless without disk storage. EBS provides AWS’s virtual disks in the cloud. These disks are not tied to any single EC2 instance; keeping them independent lets us detach a volume from one EC2 instance and attach it to another, like moving a physical hard drive between physical servers. But as with physical drives, an EBS volume can only be attached to a single EC2 instance at a time. In addition, EBS volumes can only be attached to EC2 instances in the same availability zone.&lt;/p&gt;

&lt;p&gt;EBS pricing is based on media type and provisioned storage (not necessarily what’s physically used). The monthly prices shown are current as of this writing and subject to change, but they still give an idea of relative costs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;General purpose SSD (gp2)&lt;/strong&gt; volumes are 10 cents per GB. These are perfect for boot drives and general file storage purposes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Provisioned IOPS SSD (io1)&lt;/strong&gt; volumes are 12.5 cents per GB. Because tens of thousands of IOPS can be provisioned on a single volume, we’re also charged 6.5 cents per provisioned IOPS per month. These are mainly for I/O-intensive and database workloads.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Throughput Optimized HDD (st1)&lt;/strong&gt; volumes are 4.5 cents per GB. These are magnetic drives good for large, sequential workloads and data warehouses.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cold HDD (sc1)&lt;/strong&gt; volumes are 2.5 cents per GB. These magnetic drives are for inexpensive block storage of infrequently accessed data, where cost matters more than performance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;EBS Snapshots&lt;/strong&gt; are incremental backups of any EBS volume, charged at 5 cents per GB of actual data stored, not provisioned size.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
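&lt;p&gt;As a rough sketch, the per-GB EBS rates quoted above can be combined into a monthly estimate. The rates are the ones from this article and will drift over time; note the assumption here that the io1 surcharge is per provisioned IOPS:&lt;/p&gt;

```python
# Per-GB monthly rates from the article (as of this writing; they change).
EBS_RATES = {"gp2": 0.10, "io1": 0.125, "st1": 0.045, "sc1": 0.025}
IO1_PER_IOPS = 0.065  # io1 also bills per provisioned IOPS per month

def ebs_monthly(volume_type: str, provisioned_gb: int, iops: int = 0) -> float:
    cost = EBS_RATES[volume_type] * provisioned_gb
    if volume_type == "io1":
        cost += IO1_PER_IOPS * iops
    return round(cost, 2)

print(ebs_monthly("gp2", 100))          # 10.0
print(ebs_monthly("io1", 500, 3000))    # 257.5
```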

&lt;h4&gt;
  
  
  Relational Database Service (RDS)
&lt;/h4&gt;

&lt;p&gt;There are no up-front fees for setting up RDS use. But Amazon prices RDS use by the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clock hours of server time.&lt;/li&gt;
&lt;li&gt;Instance type and size.&lt;/li&gt;
&lt;li&gt;Provisioned storage.&lt;/li&gt;
&lt;li&gt;Additional storage.&lt;/li&gt;
&lt;li&gt;Requests.&lt;/li&gt;
&lt;li&gt;Engine.&lt;/li&gt;
&lt;li&gt;Data transfer.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The monthly prices differ by instance type and database engine:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Aurora&lt;/strong&gt; could potentially be a more cost-effective option if you can take advantage of the performance gains over MySQL and PostgreSQL, and you are looking for no-touch administration. But it could be up to 20% more expensive. Storage is 10 cents per GB and I/O is 20 cents per million requests. Backup storage runs between 2 and 2.5 cents per GB, depending on region.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;MySQL, PostgreSQL, and MariaDB&lt;/strong&gt; costs vary significantly based upon instance type and region. Single-AZ on-demand costs can run from 1.7 cents per hour to almost $14 per hour! These prices double when running in multiple AZs. As with EC2 instances, we can also get significant discounts by reserving instances. 1-year terms can save us around 25%, while 3-year terms can save us around 50%. The savings can even increase as the instance type grows.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Oracle and SQL Server&lt;/strong&gt; prices can vary quite a bit depending upon license and instance size. I’d avoid these engines unless you have very specific needs.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
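&lt;p&gt;To see what the reserved-instance discounts mean in practice, here’s a back-of-the-envelope sketch. The hourly rate is made up for illustration; the 25% and 50% discounts are the rough figures mentioned above:&lt;/p&gt;

```python
HOURS_PER_MONTH = 730

def rds_monthly(hourly: float, multi_az: bool = False, discount: float = 0.0) -> float:
    # Multi-AZ roughly doubles the on-demand rate; reservations discount it.
    rate = hourly * (2 if multi_az else 1) * (1 - discount)
    return round(rate * HOURS_PER_MONTH, 2)

on_demand = rds_monthly(0.10)                   # hypothetical $0.10/hour instance
one_year = rds_monthly(0.10, discount=0.25)     # ~25% off with a 1-year term
three_year = rds_monthly(0.10, discount=0.50)   # ~50% off with a 3-year term
print(on_demand, one_year, three_year)          # 73.0 54.75 36.5
```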

&lt;h4&gt;
  
  
  DynamoDB
&lt;/h4&gt;

&lt;p&gt;There are two “capacity modes” for &lt;strong&gt;DynamoDB&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;On-Demand&lt;/strong&gt; capacity doesn’t require us to predict usage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Provisioned&lt;/strong&gt; capacity offers significant savings, if we can predict our needs.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In either mode, we’re charged for the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Read request units.&lt;/li&gt;
&lt;li&gt;Write request units.&lt;/li&gt;
&lt;li&gt;Data storage.&lt;/li&gt;
&lt;li&gt;Continuous backups.&lt;/li&gt;
&lt;li&gt;On-demand backups.&lt;/li&gt;
&lt;li&gt;Backup table restore.&lt;/li&gt;
&lt;li&gt;Global tables.&lt;/li&gt;
&lt;li&gt;Accelerator (DAX).&lt;/li&gt;
&lt;li&gt;Streams.&lt;/li&gt;
&lt;li&gt;Data transfer.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Pricing details for all of the above are too numerous to go through here, so I recommend reading all the details on the &lt;a href="https://aws.amazon.com/dynamodb/pricing/"&gt;AWS site&lt;/a&gt;. Suffice it to say that &lt;strong&gt;DynamoDB&lt;/strong&gt; is generally a lot cheaper than &lt;strong&gt;RDS&lt;/strong&gt;, so reserve &lt;strong&gt;RDS&lt;/strong&gt; for where you really need relational access.&lt;/p&gt;
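&lt;p&gt;As a taste of the on-demand math, here’s a sketch using the rates listed on the AWS pricing page as of this writing (treat the numbers as illustrative; check the page for current rates and regions):&lt;/p&gt;

```python
# On-demand rates as of this writing (assumption; verify on the AWS page).
WRITE_PER_MILLION = 1.25
READ_PER_MILLION = 0.25
STORAGE_PER_GB = 0.25
FREE_STORAGE_GB = 25

def dynamodb_on_demand(reads_m: float, writes_m: float, storage_gb: float) -> float:
    cost = reads_m * READ_PER_MILLION + writes_m * WRITE_PER_MILLION
    cost += max(0, storage_gb - FREE_STORAGE_GB) * STORAGE_PER_GB
    return round(cost, 2)

# 20M reads, 5M writes, 40 GB stored in a month:
print(dynamodb_on_demand(20, 5, 40))   # 15.0
```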

&lt;h4&gt;
  
  
  CloudFront
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;CloudFront&lt;/strong&gt; is relatively cheap considering the benefits. There are no up-front fees.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Free Tier&lt;/strong&gt; consists of 50 GB of outbound data transfer and 2 million HTTP/HTTPS requests per month for the first year.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;On-Demand&lt;/strong&gt; pricing for data transferred out to the Internet is on a sliding scale based on volume per month, and by region. This can range from 2 to 17 cents per GB. For regional transfer inside of AWS, the prices range from 2 to 16 cents per GB. HTTP method calls are all around 1 cent per 10,000 calls.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Elastic Load Balancer (ELB)
&lt;/h4&gt;

&lt;p&gt;In order to calculate &lt;strong&gt;Elastic Load Balancer&lt;/strong&gt; pricing, we first have to understand what a &lt;strong&gt;Load Balancer Capacity Unit (LCU)&lt;/strong&gt; is. It’s based on the highest of the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;New connections per second (up to 25 = 1 &lt;strong&gt;LCU&lt;/strong&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Active connections per minute (up to 3,000 active connections per minute = 1 &lt;strong&gt;LCU&lt;/strong&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Processed bytes (1 GB per hour = 1 &lt;strong&gt;LCU&lt;/strong&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Rule evaluations (up to 1,000 per second = 1 &lt;strong&gt;LCU&lt;/strong&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;ELBs are charged at a bit over 2 cents per hour (or partial hour), differing slightly by region. In addition, we’re charged 0.8 cents per LCU-hour (as described above). So if you think about it, we’d be paying well under a dollar a day for a load balancer; probably around $15 or so per month. Definitely not where we’ll take a big hit budget-wise.&lt;/p&gt;
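&lt;p&gt;Putting the hourly and LCU-hour charges together (rates as quoted above; they vary slightly by region):&lt;/p&gt;

```python
HOURS_PER_MONTH = 730
HOURLY = 0.0225       # "a bit over 2 cents" per hour
PER_LCU_HOUR = 0.008  # 0.8 cents per LCU-hour

def elb_monthly(avg_lcus: float) -> float:
    return round((HOURLY + PER_LCU_HOUR * avg_lcus) * HOURS_PER_MONTH, 2)

print(elb_monthly(1))   # roughly $22 a month, still well under a dollar a day
```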

&lt;h4&gt;
  
  
  Lambda
&lt;/h4&gt;

&lt;p&gt;Serverless computing has become enormously popular over the last few years, especially as we consider solutions for &lt;a href="https://microservices.io/"&gt;microservices&lt;/a&gt;. AWS Lambda allows us to write small functions to respond to several types of events in a “single responsibility principle” model — and if you think about it, virtually everything in computing is triggered by an “event.” We can choose from various popular languages for coding these, which eliminates any real learning curve. AWS handles all the provisioning, scaling, and infrastructure for us at very low cost:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Request pricing&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Free tier: 1 million requests per month. Great bargain.&lt;/li&gt;
&lt;li&gt;20 cents per 1 million requests thereafter.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Duration pricing&lt;/strong&gt; — this is based on processing time and allocated memory.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;400,000 GB-seconds per month free (equivalent to 3.2 million seconds of compute at the minimum 128 MB allocation).&lt;/li&gt;
&lt;li&gt;$0.00001667 for every GB-second used thereafter.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Please keep in mind that we are still charged for the resources our Lambda functions use, such as S3 and database transactions.&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
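&lt;p&gt;Here’s how the request and duration charges combine, using the rates above. The traffic numbers are made up for illustration:&lt;/p&gt;

```python
FREE_REQUESTS = 1_000_000
PRICE_PER_MILLION = 0.20
FREE_GB_SECONDS = 400_000
PRICE_PER_GB_SECOND = 0.00001667

def lambda_monthly(requests: int, avg_ms: float, memory_mb: int) -> float:
    req_cost = max(0, requests - FREE_REQUESTS) / 1_000_000 * PRICE_PER_MILLION
    # Duration is billed in GB-seconds: execution time weighted by memory.
    gb_seconds = requests * (avg_ms / 1000) * (memory_mb / 1024)
    duration_cost = max(0.0, gb_seconds - FREE_GB_SECONDS) * PRICE_PER_GB_SECOND
    return round(req_cost + duration_cost, 2)

# 5M requests/month at 100 ms average with 512 MB allocated: only
# 250,000 GB-seconds, so the free tier covers all of the compute.
print(lambda_monthly(5_000_000, 100, 512))   # 0.8
```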

&lt;p&gt;One of the criticisms of Lambda is its cold startup time when it hasn’t been used for awhile. Each Lambda function is containerized, so even though containers are much more efficient than VMs, there is still start-up latency. AWS offers &lt;strong&gt;Provisioned Concurrency Pricing&lt;/strong&gt; to help alleviate this issue, at a cost. As they say on their site, this “keeps functions initialized and hyper-ready to respond in double-digit milliseconds.” &lt;strong&gt;Provisioned Concurrency Pricing&lt;/strong&gt; involves several variables, so it’s best to look at the &lt;a href="https://aws.amazon.com/lambda/pricing/#Provisioned_Concurrency_Pricing"&gt;examples on the AWS website&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Free is for Me
&lt;/h4&gt;

&lt;p&gt;AWS gives us several free service options as well:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://aws.amazon.com/free/free-tier-faqs/"&gt;Free user tier&lt;/a&gt; for new accounts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Free EC2 micro instances for a year.&lt;/li&gt;
&lt;li&gt;Free S3 usage tier.&lt;/li&gt;
&lt;li&gt;Free Elastic Block Store (EBS).&lt;/li&gt;
&lt;li&gt;Free Elastic Load Balancing (ELB).&lt;/li&gt;
&lt;li&gt;Free data transfer.&lt;/li&gt;
&lt;li&gt;Etc.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;Free services&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPC (virtual data center in the cloud)&lt;/li&gt;
&lt;li&gt;Elastic Beanstalk (but provisioned resources aren’t free)&lt;/li&gt;
&lt;li&gt;CloudFormation (but provisioned resources aren’t free)&lt;/li&gt;
&lt;li&gt;Identity Access Management (IAM)&lt;/li&gt;
&lt;li&gt;Auto Scaling (but provisioned resources aren’t free)&lt;/li&gt;
&lt;li&gt;Opsworks (similar to Elastic Beanstalk)&lt;/li&gt;
&lt;li&gt;Amplify&lt;/li&gt;
&lt;li&gt;AppSync&lt;/li&gt;
&lt;li&gt;CodeStar&lt;/li&gt;
&lt;li&gt;Consolidated Billing&lt;/li&gt;
&lt;li&gt;Cost Explorer&lt;/li&gt;
&lt;li&gt;AMIs (exception: if creating from a running instance, we pay for running a micro instance (about 2 cents per hour, depending on region) plus some EBS fees).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Pricing Calculators
&lt;/h4&gt;

&lt;p&gt;AWS provides us with a couple of tools for estimating costs. Apparently, this is a big part of the &lt;a href="https://dev.to/markfreedman/aws-certified-cloud-practitioner-2mi3"&gt;AWS Certified Cloud Practitioner&lt;/a&gt; exam, so practice using them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://calculator.s3.amazonaws.com/index.html"&gt;Simple Monthly Calculator&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Allows us to build out environments for estimating costs.&lt;/li&gt;
&lt;li&gt;Calculates running-cost estimates.&lt;/li&gt;
&lt;li&gt;Hosted on S3 (trivia).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/tco-calculator/"&gt;Total Cost of Ownership (TCO) Calculator&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This is a great tool for generating reports to convince C-level management of the cost benefits of moving to the cloud.&lt;/li&gt;
&lt;li&gt;Allows us to compare on-premise vs. AWS cloud costs.&lt;/li&gt;
&lt;li&gt;Breaks costs down into four categories (each including overhead such as space, power, and cooling):

&lt;ul&gt;
&lt;li&gt;Server costs&lt;/li&gt;
&lt;li&gt;Storage costs&lt;/li&gt;
&lt;li&gt;Network costs&lt;/li&gt;
&lt;li&gt;IT labor costs&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Again, practice, practice, practice.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>pricing</category>
      <category>billing</category>
      <category>devops</category>
    </item>
    <item>
      <title>AWS Pricing and Billing (Part 2)</title>
      <dc:creator>Mark Freedman</dc:creator>
      <pubDate>Fri, 10 Jan 2020 15:18:46 +0000</pubDate>
      <link>https://dev.to/markfreedman/aws-pricing-and-billing-part-2-770</link>
      <guid>https://dev.to/markfreedman/aws-pricing-and-billing-part-2-770</guid>
      <description>&lt;p&gt;&lt;em&gt;Last updated: 2020-02-18&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;I’ll be updating my AWS articles from time to time, as I learn more. I got my first cert — the AWS Certified Cloud Practitioner certification — on January 22nd, but as I took the practice exams (5 exams, 2x each) and the actual exam, I learned about gaps in my knowledge. So I’ll be filling those in through the articles I wrote beforehand.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://dev.to/markfreedman/aws-pricing-and-billing-part-1-4f0g"&gt;my last article&lt;/a&gt;, we discussed the support options’ costs and EC2 pricing. This is a large topic, so I’m going to need a third part, since S3 pricing alone takes up a lot of space.&lt;/p&gt;

&lt;p&gt;I don’t believe the exams will ask about specific prices, as these can always change. But the important thing is understanding the relative pricing, so you could be able to make intelligent cost analysis decisions.&lt;/p&gt;

&lt;h4&gt;
  
  
  S3 Pricing
&lt;/h4&gt;

&lt;p&gt;S3 is charged based on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Storage class&lt;/li&gt;
&lt;li&gt;Storage volume&lt;/li&gt;
&lt;li&gt;Storage time duration&lt;/li&gt;
&lt;li&gt;Requests (GET, PUT, COPY)&lt;/li&gt;
&lt;li&gt;Lifecycle rules&lt;/li&gt;
&lt;li&gt;Pattern monitoring for Intelligent-Tiering&lt;/li&gt;
&lt;li&gt;Data transfer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In general, monthly storage pricing rates go down as more storage is used. I’m not going to get into many specifics here because the prices can be in flux. You can always see the latest prices and details on the AWS S3 pricing page.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Storage and Transfer Requests:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I haven’t covered S3 storage classes in detail yet, so for now, I’ll give brief descriptions, and give you relative pricing between the classes. These are all monthly charges:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;S3 Standard&lt;/strong&gt; – this is the most commonly used class for frequent millisecond access. Storage is charged (as of this writing) on a sliding scale between 2 and 2.5 cents per GB, plus a half cent per 1,000 inbound transfer requests (note that “transfer requests” is different from “transfer volume,” discussed later), and a minuscule 0.04 cents per 1,000 outbound transfers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;S3 Standard – Infrequent Access&lt;/strong&gt; – as the name implies, this is for infrequently accessed storage, but also with millisecond access. Storage is a lot cheaper, though; around half the price of &lt;strong&gt;S3 Standard&lt;/strong&gt;. The savings over &lt;strong&gt;S3 Standard&lt;/strong&gt; is only for storage. Since it should be infrequently accessed, transfer request costs are more expensive (around double the cost of &lt;strong&gt;S3 Standard&lt;/strong&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;S3 One Zone – Infrequent Access&lt;/strong&gt; – Similar to the above, but only in a single availability zone (AZ). Storage is slightly cheaper than the above; about 1 cent per GB. Transfer rates are the same.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;S3 Glacier&lt;/strong&gt; – long-term archival backups with potential access delay time of several hours, but cheap, with a storage price under 0.5 cents per GB. Inbound transfer requests is a half cent per GB, and outbound is the same as &lt;strong&gt;S3 Standard&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;S3 Glacier Deep Archive&lt;/strong&gt; – similar to the above, but with up to a 12 hour delay, and only accessed once or twice per year, but very cheap, with a storage price at a minuscule 1/10 of a cent per GB. Transfer request costs are the same as above.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;First Year S3 Standard&lt;/strong&gt; – this is free for up to 5 GB with 20,000 outbound transfers, 2,000 inbound transfer requests and lists, and 15 GB transfer out per month.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One other transfer request cost is lifecycle transitions (we can create rules to move objects to other storage classes based on criteria we define, or use &lt;strong&gt;Intelligent Tiering&lt;/strong&gt;, which does this automatically based on usage patterns). To transfer to another standard storage class, the price is currently 1 cent per 1,000 requests. Transferring to a glacier class is 5 cents per 1,000 requests.&lt;/p&gt;

&lt;p&gt;There are also costs for retrieving data from glacier storage:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Glacier Expedited Retrieval&lt;/strong&gt; – $10 per 1000 requests plus 3 cents per GB.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Glacier Standard Retrieval&lt;/strong&gt; – 5 cents and 1 cent, respectively.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Glacier Bulk Retrieval&lt;/strong&gt; – 2.5 cents and 1/4 cent, respectively. Bulk is simply the slowest (and cheapest) retrieval tier, typically taking 5 to 12 hours, and is designed for cost-effectively retrieving large amounts of data, even petabytes, at once.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Glacier Deep Archive&lt;/strong&gt; doesn’t have an expedited option, but standard is twice the cost of &lt;strong&gt;Glacier&lt;/strong&gt; and bulk is the same price as &lt;strong&gt;Glacier&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Data Transfer Volume:
&lt;/h4&gt;

&lt;p&gt;As mentioned above, we are also charged for data transfer volume per month:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Transferring data &lt;strong&gt;&lt;em&gt;into&lt;/em&gt;&lt;/strong&gt; S3 from the Internet is always free. The same is true for S3 data transferred to EC2 instances in the same region as the S3 buckets. Data transferred out to CloudFront is also free.&lt;/li&gt;
&lt;li&gt;Transferring data out to the Internet from S3 is free up to 1 GB.&lt;/li&gt;
&lt;li&gt;The next 9.999 TB is currently charged at 9 cents per GB.&lt;/li&gt;
&lt;li&gt;The next 40 TB is currently 8.5 cents per GB.&lt;/li&gt;
&lt;li&gt;The next 100 TB is currently 7 cents per GB.&lt;/li&gt;
&lt;li&gt;Anything over 150 TB is currently 5 cents per GB.&lt;/li&gt;
&lt;li&gt;Remember, these are all per-month costs.&lt;/li&gt;
&lt;/ul&gt;
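&lt;p&gt;The sliding scale above is easy to turn into a small calculator (assuming 1 TB = 1,000 GB for simplicity; rates as quoted and subject to change):&lt;/p&gt;

```python
# Tier sizes in GB, with the per-GB rate for each tier.
TIERS = [
    (1, 0.0),         # first 1 GB free
    (9_999, 0.09),    # next 9.999 TB
    (40_000, 0.085),  # next 40 TB
    (100_000, 0.07),  # next 100 TB
]
OVERFLOW_RATE = 0.05  # anything over 150 TB

def transfer_out_cost(gb: float) -> float:
    cost, remaining = 0.0, gb
    for size, rate in TIERS:
        chunk = min(size, remaining)
        cost += chunk * rate
        remaining -= chunk
        if remaining == 0:
            break
    cost += remaining * OVERFLOW_RATE
    return round(cost, 2)

print(transfer_out_cost(1))      # 0.0  (free tier)
print(transfer_out_cost(101))    # 9.0  (100 GB at 9 cents)
```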

&lt;p&gt;Transferring data from S3 to other regions is generally charged at 2 cents per GB, with the exception of N. Virginia, which is 1 cent per GB.&lt;/p&gt;

&lt;p&gt;Finally, &lt;strong&gt;Transfer Acceleration&lt;/strong&gt; (uses edge locations to boost speed) has a premium cost on top of the standard. This is generally 4 cents per GB, with the exception of inbound transfers to regions outside of the US, Europe and Japan, which is 8 cents per GB.&lt;/p&gt;

&lt;p&gt;If we have multiple accounts under an &lt;a href="https://aws.amazon.com/organizations/"&gt;Organization&lt;/a&gt;, S3 offers us volume discounts when we enable &lt;a href="https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/consolidated-billing.html"&gt;Consolidated Billing&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>billing</category>
      <category>pricing</category>
      <category>s3</category>
    </item>
    <item>
      <title>AWS Pricing and Billing (Part 1)</title>
      <dc:creator>Mark Freedman</dc:creator>
      <pubDate>Thu, 09 Jan 2020 18:32:45 +0000</pubDate>
      <link>https://dev.to/markfreedman/aws-pricing-and-billing-part-1-4f0g</link>
      <guid>https://dev.to/markfreedman/aws-pricing-and-billing-part-1-4f0g</guid>
      <description>&lt;p&gt;&lt;em&gt;Last updated: 2020-02-22&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;I’ll be updating my AWS articles from time to time, as I learn more. I got my first cert — the AWS Certified Cloud Practitioner certification — on January 22nd, but as I took the practice exams (5 exams, 2x each) and the actual exam, I learned about gaps in my knowledge. So I’ll be filling those in through the articles I wrote beforehand.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;One of the critical categories of questions for the AWS Certified Cloud Practitioner exam is billing. It’s estimated to be from 12% to 20% of the exam.&lt;/p&gt;

&lt;p&gt;Every service and support plan has its own pricing model, and I’ll try to clarify all of these here. This is a large topic, so I’m going to split it over three articles.&lt;/p&gt;

&lt;p&gt;I don't believe the exams will ask about specific prices, as these can always change. But the important thing is understanding the relative pricing, so you could be able to make intelligent cost analysis decisions.&lt;/p&gt;

&lt;h4&gt;
  
  
  Support Plans
&lt;/h4&gt;

&lt;p&gt;There are four basic plans offered by AWS, tiered similarly to other services you may be familiar with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Basic&lt;/strong&gt; – This is the free tier. We get no direct tech support from Amazon. They’ll only provide us with accounting support and access to forums. 7 Trusted Advisor (future topic) checks are included.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Developer&lt;/strong&gt; – $29 and up (sliding scale) per month. We get everything included in Basic, plus a primary tech contact with 12-24 hour response time via email only. They’ll respond within 12 hours if the system is impaired or down. But they’ll provide no 3rd-party support. We’re on our own for that.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Business&lt;/strong&gt; – $100 and up (sliding scale) per month. We get everything included in Developer, but with 24×7 support, 1 hour or less response time for urgent cases (system down), within 4 hours for impaired system issues. They will help with 3rd-party issues. Communication is available via email, chat, and phone. We also have full access to Trusted Advisor checks, and access to the AWS Support API, which seems really cool, but I’ve never used it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Enterprise&lt;/strong&gt; – $15k and up (sliding scale) per month. We get everything in Business, plus a dedicated Technical Account Manager (TAM), a personal Support Concierge, access to Event Management, seasonal promotions, events, and migrations, and 15 minute priority response time for critical issues. It seems like a huge jump in price between Business and Enterprise, but I guess they take into account that an enterprise can afford to pay a lot more for really top-notch support.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
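&lt;p&gt;The “sliding scale” works out to the greater of the plan’s monthly floor or a percentage of our AWS usage. Here’s a simplified sketch using only the first pricing tier of each scale (3% for Developer, 10% for Business, per the AWS support pricing pages as of this writing; the real schedules add lower percentages at higher usage levels):&lt;/p&gt;

```python
# Simplified: first pricing tier only; treat the percentages as
# illustrative and check the AWS support pricing pages.
def developer_support(monthly_usage: float) -> float:
    return round(max(29.0, 0.03 * monthly_usage), 2)

def business_support(monthly_usage: float) -> float:
    return round(max(100.0, 0.10 * monthly_usage), 2)

print(developer_support(500))    # 29.0  (3% of $500 is below the $29 floor)
print(business_support(5_000))   # 500.0
```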

&lt;h4&gt;
  
  
  AWS Partner Network (APN) Support Options
&lt;/h4&gt;

&lt;p&gt;Although there’s no direct AWS billing around the following two services, for the certification it’s important to be aware of what’s available. Also, as we earn our AWS certifications, we’ll be directly helping our company if one of its goals is to be included in the third-party &lt;a href="https://aws.amazon.com/partners/"&gt;AWS Partner Network&lt;/a&gt;. There are several requirements for each tier, and our companies will gain plenty of benefits as their members become certified.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/partners/consulting/"&gt;&lt;strong&gt;APN Consulting Partners&lt;/strong&gt;&lt;/a&gt;: These are professional service firms, like system integrators and VARs (value-added resellers), that can supplement our internal team’s knowledge and help us take full advantage of AWS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/partners/technology/"&gt;&lt;strong&gt;APN Technology Partners&lt;/strong&gt;&lt;/a&gt;: These partners provide pre-built SaaS or PaaS solutions, development and security tools we can install on the AWS platform, AWS services we can integrate with, and hardware and network products. They are created by ISVs (independent software vendors) and are often made available on the huge &lt;a href="https://aws.amazon.com/marketplace/"&gt;AWS Marketplace&lt;/a&gt;. We never need to start from scratch to get our company up and running quickly on AWS.&lt;/p&gt;

&lt;h4&gt;
  
  
  Billing Preferences and Alerts
&lt;/h4&gt;

&lt;p&gt;We should put cost controls in place before the environment grows. I recommend turning on all these &lt;a href="https://console.aws.amazon.com/billing/home?#/preferences"&gt;billing preference settings&lt;/a&gt; so there are no surprises:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Receive PDF Invoice by Email&lt;/li&gt;
&lt;li&gt;Receive Free Tier Usage Alerts&lt;/li&gt;
&lt;li&gt;Receive Billing Alerts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I also recommend setting alarms for when certain spending thresholds are reached. Alarms are set via &lt;a href="https://aws.amazon.com/cloudwatch/"&gt;CloudWatch&lt;/a&gt;: select the Billing metrics. You can choose from several individual metrics/services, but to start out, I recommend setting one on the Total Estimated Charge. The options are pretty self-explanatory. To be alerted, you’ll also need to select an SNS (Simple Notification Service) topic (future article) and the email addresses to notify. Please note that only email subscriptions within the account are allowed.&lt;/p&gt;
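&lt;p&gt;As a rough sketch of what that alarm looks like under the hood (assuming Python with boto3; the threshold, topic ARN, and account number below are made up), we can build the parameters for CloudWatch’s &lt;code&gt;put_metric_alarm&lt;/code&gt; call and inspect them before sending anything:&lt;/p&gt;

```python
# Sketch of a CloudWatch billing alarm, assuming boto3 is available and an
# SNS topic already exists. All specific values below are hypothetical.

def billing_alarm_params(threshold_usd, sns_topic_arn):
    """Build the parameter dict for CloudWatch's put_metric_alarm call."""
    return {
        "AlarmName": f"total-estimated-charges-over-{threshold_usd}-usd",
        "Namespace": "AWS/Billing",        # the billing metrics namespace
        "MetricName": "EstimatedCharges",  # total estimated charge metric
        "Dimensions": [{"Name": "Currency", "Value": "USD"}],
        "Statistic": "Maximum",
        "Period": 21600,                   # evaluate every 6 hours
        "EvaluationPeriods": 1,
        "Threshold": float(threshold_usd),
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [sns_topic_arn],   # notify this SNS topic
    }

params = billing_alarm_params(50, "arn:aws:sns:us-east-1:123456789012:billing-alerts")
# To actually create the alarm (requires AWS credentials):
# import boto3
# boto3.client("cloudwatch", region_name="us-east-1").put_metric_alarm(**params)
```

&lt;p&gt;Since billing metrics live only in us-east-1, the CloudWatch client would have to be created in that region.&lt;/p&gt;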

&lt;p&gt;Keep in mind that CloudWatch can only track our estimated charges, not our actual resource utilization. Also, coverage targets for our reserved EC2 instances can only be set in Budgets or the Cost Explorer, not in CloudWatch.&lt;/p&gt;

&lt;h4&gt;
  
  
  Billing Tools
&lt;/h4&gt;

&lt;p&gt;We can enable &lt;a href="https://aws.amazon.com/aws-cost-management/aws-cost-explorer/"&gt;Cost Explorer&lt;/a&gt; to visualize and manage our costs and usage over time. We have to explicitly enable it before it starts tracking our usage, but once enabled, we automatically get recommendations to help reduce our costs.&lt;/p&gt;

&lt;p&gt;We can use the &lt;a href="https://aws.amazon.com/aws-cost-management/aws-cost-and-usage-reporting/"&gt;Cost and Usage Reports&lt;/a&gt; for detailed, granular hourly and daily usage reports that can be exported to a spreadsheet. This also has to be explicitly enabled before it starts tracking usage. These reports include additional service, pricing, and reserved instance metadata to help us analyze our usage and change how we allocate our resources. They also make great use of the tags we assign to our resources, for better categorization.&lt;/p&gt;

&lt;p&gt;As an FYI, billing metric data is stored only in the us-east-1 (N. Virginia) region.&lt;/p&gt;

&lt;h4&gt;
  
  
  General Service Pricing Info
&lt;/h4&gt;

&lt;p&gt;A major benefit of using cloud services is that instead of CapEx (capital expenditures), where we pay fixed, up-front sunk costs, cloud computing costs are OpEx (operational expenditures), where we pay for what we use, like electricity, water, or gas. Done well, this can reduce overall costs dramatically.&lt;/p&gt;
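&lt;p&gt;To make the CapEx/OpEx difference concrete, here’s a toy comparison with entirely made-up numbers (in whole cents, to keep the arithmetic exact):&lt;/p&gt;

```python
# Toy CapEx-vs-OpEx comparison with hypothetical numbers, just to show the
# shape of the trade-off: a fixed up-front server purchase vs. paying per
# hour actually used. All amounts are in cents.

CAPEX_SERVER_COST_CENTS = 1_200_000  # hypothetical up-front hardware cost
ON_DEMAND_RATE_CENTS = 10            # hypothetical cloud cost per instance-hour

def cloud_cost_cents(hours_used):
    """OpEx: we pay only for the hours we actually run."""
    return ON_DEMAND_RATE_CENTS * hours_used

# A workload that runs 8 hours a day, 250 days a year:
yearly_hours = 8 * 250               # 2000 hours
print(cloud_cost_cents(yearly_hours))  # 20000 cents, i.e. $200 for the year
```

&lt;p&gt;The point isn’t the specific numbers; it’s that the cloud bill scales with usage rather than being sunk before the first request is served.&lt;/p&gt;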

&lt;p&gt;Pricing is based on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Paying as you go&lt;/li&gt;
&lt;li&gt;Paying for what’s used&lt;/li&gt;
&lt;li&gt;Paying less as more is used&lt;/li&gt;
&lt;li&gt;Paying even less when reserving capacity&lt;/li&gt;
&lt;li&gt;Paying even less as AWS grows&lt;/li&gt;
&lt;li&gt;Custom pricing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In general, services are charged by compute, storage, and outbound data transfer. They are priced transparently and independently. This enables our businesses to be fully elastic and allows us to focus on innovation. We don’t have to pay for services that aren’t running.&lt;/p&gt;

&lt;p&gt;A key point to remember during the exams: data transfer &lt;em&gt;in&lt;/em&gt; to AWS is free, but we are billed for data transfer &lt;em&gt;out&lt;/em&gt; and for data transfer &lt;em&gt;between&lt;/em&gt; regions.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;a href="https://aws.amazon.com/ec2/pricing/"&gt;EC2 Pricing Models&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;There are very flexible pricing models for EC2:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://aws.amazon.com/ec2/pricing/on-demand/"&gt;On-Demand Instances&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This is the default purchasing option.&lt;/li&gt;
&lt;li&gt;It’s low-cost and flexible.&lt;/li&gt;
&lt;li&gt;We’re charged by the hour or by the second, depending upon the instance type (compute power, memory size, etc.) and operating system.&lt;/li&gt;
&lt;li&gt;It’s best for short-term, unpredictable workloads, and for first-time apps when starting out. Always monitor usage to see if you could benefit from other plans in the long run.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/ec2/pricing/reserved-instances/"&gt;Reserved Instances&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;These are best for long-term savings.&lt;/li&gt;
&lt;li&gt;It’s good for steady-state, predictable usage or reserved capacity.&lt;/li&gt;
&lt;li&gt;We have to make a commitment over a period of time, though: 1 or 3 years.&lt;/li&gt;
&lt;li&gt;These can be shared between multiple AWS accounts in the same organization.&lt;/li&gt;
&lt;li&gt;Unused instances can be sold in the &lt;a href="https://aws.amazon.com/ec2/purchasing-options/reserved-instances/marketplace/"&gt;Reserved Instance Marketplace&lt;/a&gt;, so it’s not a huge concern if the need disappears before the commitment period ends.&lt;/li&gt;
&lt;li&gt;It’s priced based on term, instance attributes, and offering class. These are the offering classes. Please note that all the percentages are approximate and subject to change; I’m just providing them to give you a general idea:

&lt;ul&gt;
&lt;li&gt;Standard – up to a 75% savings. We can’t change the attributes, though.&lt;/li&gt;
&lt;li&gt;Convertible – up to a 54% savings. We can upgrade attributes (but not downgrade; this is why it’s best to start small and work your way up).&lt;/li&gt;
&lt;li&gt;Scheduled – we can reserve instances for specific recurring periods of time. Savings vary depending on the selected periods. (Terms for all of these are 1- or 3-year contracts; of course, 3 years gives us more savings.)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Payment term options:

&lt;ul&gt;
&lt;li&gt;All upfront (~40% discount over On-Demand)&lt;/li&gt;
&lt;li&gt;Partial upfront (~39% discount over On-Demand)&lt;/li&gt;
&lt;li&gt;No upfront (~36% discount over On-Demand)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/ec2/dedicated-hosts/"&gt;Dedicated Hosts&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This is the most expensive option.&lt;/li&gt;
&lt;li&gt;These are physical servers dedicated for a customer’s use.&lt;/li&gt;
&lt;li&gt;They give us visibility and control over how instances are placed.&lt;/li&gt;
&lt;li&gt;This is useful when regulatory requirements need to be met.&lt;/li&gt;
&lt;li&gt;They are single-tenant (physical isolation) as opposed to the multi-tenant instances used in other options.&lt;/li&gt;
&lt;li&gt;They also let us use our own third-party software licenses that are bound to physical cores or sockets.&lt;/li&gt;
&lt;li&gt;These are offered in both On-Demand and Reserved (up to 70% savings).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/ec2/pricing/dedicated-instances/"&gt;Dedicated Instances&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hardware is dedicated to a single customer.&lt;/li&gt;
&lt;li&gt;These instances are physically isolated at the host hardware level from other AWS accounts.&lt;/li&gt;
&lt;li&gt;The instances may share hardware with other instances from the same AWS account that aren’t necessarily dedicated instances.&lt;/li&gt;
&lt;li&gt;These are also offered in both On-Demand and Reserved, as well as Spot Instances (up to 90% savings).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/ec2/spot/"&gt;Spot Instances&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This will bring the biggest savings — up to 90%.&lt;/li&gt;
&lt;li&gt;This allows us to take advantage of unused EC2 capacity sitting out there in the AWS cloud. &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://medium.com/@mrpowers/playing-the-aws-ec2-spot-market-74b703454f4f"&gt;We bid on Spot instances&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;These are only useful if our apps have flexible start and end times.&lt;/li&gt;
&lt;li&gt;These are good for non-critical background tasks, like AWS Batch use.&lt;/li&gt;
&lt;li&gt;These instances can be terminated at any time, so the applications running on them must be able to handle interruptions and intelligent restarts.&lt;/li&gt;
&lt;li&gt;If AWS terminates a Spot instance (for example, when the current Spot price exceeds our maximum price), we aren’t charged for partial-hour usage. But if we terminate it ourselves, we are charged for the partial hour.&lt;/li&gt;
&lt;li&gt;We decide the type of task needed upfront:

&lt;ul&gt;
&lt;li&gt;Load balancing workloads&lt;/li&gt;
&lt;li&gt;Flexible workloads&lt;/li&gt;
&lt;li&gt;Big data workloads&lt;/li&gt;
&lt;li&gt;Defined duration workloads (1 to 6 hours)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/free/"&gt;Free Tier to start out&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We get 750 hours per month of free EC2 micro-instance (t2.micro) usage for the first year.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
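&lt;p&gt;The Spot partial-hour rule above can be sketched as a small function. The rates here are hypothetical, and note that newer Spot billing is per-second for Linux instances, so treat this as the exam-era hourly rule of thumb rather than current billing:&lt;/p&gt;

```python
# A sketch of the exam-era hourly Spot billing rule: if AWS interrupts the
# instance, the final partial hour is free; if we terminate it ourselves,
# the partial hour is billed. Rates are in cents to keep arithmetic exact.

def spot_charge_cents(hours_run, hourly_rate_cents, terminated_by_aws):
    """Return the charge in cents for a Spot instance run."""
    full_hours = int(hours_run)            # completed whole hours
    has_partial = hours_run != full_hours  # a partial hour remains
    billable = full_hours
    if has_partial and not terminated_by_aws:
        billable += 1                      # we pay for the partial hour
    return billable * hourly_rate_cents

print(spot_charge_cents(2.5, 4, terminated_by_aws=True))   # 8  -- AWS terminated
print(spot_charge_cents(2.5, 4, terminated_by_aws=False))  # 12 -- we terminated
```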

&lt;p&gt;Please bear in mind that we’re also charged on the number of instances, the type of load balancing we use, auto scaling, detailed monitoring, and the use of elastic IP addresses. We’re also charged for resources used by EC2 instances, such as Elastic Block Storage (EBS), EBS snapshots, AMIs, and the actual drives on the instances themselves. We’ll be talking about these more in detail in the next billing articles, and we’ll discuss more about what these services are in later articles.&lt;/p&gt;

&lt;p&gt;Just one more point about EC2 and &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html"&gt;Elastic IP&lt;/a&gt; addresses: Elastic IPs are only charged while allocated but &lt;em&gt;unused&lt;/em&gt;. They are &lt;em&gt;not&lt;/em&gt; charged while associated with a running EC2 instance. They’re relatively cheap, but we should release them as soon as we no longer need them.&lt;/p&gt;
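&lt;p&gt;That Elastic IP rule boils down to a one-line predicate (a sketch of the rule as stated here; the instance ID below is made up, and current AWS pricing pages should be checked for changes):&lt;/p&gt;

```python
# The Elastic IP billing rule as a tiny predicate: an allocated address
# accrues charges only while it is NOT associated with a running instance.

def eip_is_billed(allocated, associated_instance_id):
    """Return True if this Elastic IP accrues charges."""
    return allocated and associated_instance_id is None

print(eip_is_billed(True, None))         # True  -- allocated but unused
print(eip_is_billed(True, "i-0abc123"))  # False -- attached to an instance
```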

</description>
      <category>aws</category>
      <category>billing</category>
      <category>ec2</category>
      <category>pricing</category>
    </item>
    <item>
      <title>AWS Identity and Access Management (IAM)</title>
      <dc:creator>Mark Freedman</dc:creator>
      <pubDate>Tue, 07 Jan 2020 18:08:22 +0000</pubDate>
      <link>https://dev.to/markfreedman/aws-identity-and-access-management-iam-52d0</link>
      <guid>https://dev.to/markfreedman/aws-identity-and-access-management-iam-52d0</guid>
      <description>&lt;p&gt;&lt;em&gt;Last updated: 2020-02-11&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;I’ll be updating my AWS articles from time to time, as I learn more. I got my first cert — the AWS Certified Cloud Practitioner certification — on January 22nd, but as I took the practice exams (5 exams, 2x each) and the actual exam, I learned about gaps in my knowledge. So I’ll be filling those in through the articles I wrote beforehand.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;One of the first things you’ll do after creating an AWS account is add users and groups via the &lt;a href="https://aws.amazon.com/iam/"&gt;IAM console&lt;/a&gt;. We will initially do this as the “root” account user. But be careful — root accounts are not restricted in any way. Therefore, I strongly recommend following the &lt;strong&gt;Security Status&lt;/strong&gt; checklist FIRST when visiting the IAM console:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Delete your root access keys (only shown when logged in as the root user)

&lt;ul&gt;
&lt;li&gt;The root account should only be used for console access, never for programmatic access, so its access keys should be deleted. I’ll discuss what access keys are for later.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Activate &lt;a href="https://aws.amazon.com/iam/features/mfa/"&gt;MFA (multi-factor authentication)&lt;/a&gt; on your root account

&lt;ul&gt;
&lt;li&gt;ALWAYS do this for the root account.&lt;/li&gt;
&lt;li&gt;We can use a “virtual” device, such as the &lt;a href="https://en.wikipedia.org/wiki/Google_Authenticator"&gt;&lt;strong&gt;Google Authenticator&lt;/strong&gt; app or browser extension&lt;/a&gt;. You’ll generate a QR code to scan. I like to add the account to both the app and the Chrome extension: while the QR code is displayed, I scan it in the app, and then scan it again from the extension. Once you move to the next step, you’ll never see that QR code again, so make sure you capture it both ways first. Also, you can’t take a screenshot and scan the image from the Chrome extension afterwards; it won’t let you. So do it while you’re still on the IAM console page.&lt;/li&gt;
&lt;li&gt;We can also use a U2F security key or other hardware MFA device instead. I’ve seen recommendations that we should use a hardware device for root accounts, but I’ve never done that. In a high-security company with exceptionally sensitive data, that may be a requirement.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Create individual &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html"&gt;IAM users&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;Aside from creating an initial user with admin rights, and giving a user access to the &lt;strong&gt;Billing&lt;/strong&gt; console (future article), I’d avoid using the root account for anything beyond this initial checklist.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Use &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html"&gt;IAM groups&lt;/a&gt; to assign permissions

&lt;ul&gt;
&lt;li&gt;It almost never makes sense to assign permissions directly to individual users. Even if you initially have a single user, create a group to assign the user to, and apply permissions to the group. It’s much easier to assign users to multiple groups if we have to, rather than to assign permissions haphazardly to individual users.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Apply an &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_passwords_account-policy.html"&gt;IAM password policy&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;This is where we specify the minimum length, what characters are allowed, reuse rules, expiration rules, and if an admin is required to reset the password after expiration or if the user can do this on their own.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Rotate your &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html"&gt;access keys&lt;/a&gt; (not shown when logged in as root)

&lt;ul&gt;
&lt;li&gt;This is more of a user account recommendation than a checklist item.&lt;/li&gt;
&lt;li&gt;I’m assuming this isn’t shown for the root account because the first checklist item is to delete any root access keys to begin with. Again, I’ll discuss what access keys are for in a bit.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Sign-in Link
&lt;/h4&gt;

&lt;p&gt;At the top of the IAM console, we’ll see an &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started_how-users-sign-in.html"&gt;IAM users sign-in link&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QSsiSiQl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/8rkzgnvagb9c4hmt30qa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QSsiSiQl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/8rkzgnvagb9c4hmt30qa.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By clicking the &lt;strong&gt;Customize&lt;/strong&gt; link, we can create an easy-to-remember alias instead of the numeric AWS account number as the first part of the URL. This URL lets users go directly to the console login page. Be aware, though, that if you have access to several AWS accounts and store your passwords in a manager tool such as LastPass or 1Password, what the browser pre-fills into the login fields can be misleading. Even if you customize the fields in the password manager, don’t trust the pre-filled values; always select the specific account from the password manager dropdown. Even then, I’ve sometimes had to enter the values manually, which is a real PITA.&lt;/p&gt;

&lt;h4&gt;
  
  
  Creating a Group
&lt;/h4&gt;

&lt;p&gt;Now that we’ve made it through the checklist, it’s time to create an “admins” group (or whatever you want to call it) from the &lt;strong&gt;Groups&lt;/strong&gt; menu item on the left. When you hit the step for attaching policies, note that we are limited to 10 managed policies per group. That shouldn’t be an issue, because a single policy can grant many permissions. For this group, we should apply just a single policy: &lt;strong&gt;AdministratorAccess&lt;/strong&gt;. Since this encompasses all permissions, we don’t need to add any other policies to the group. That’s all we need to do to create a group.&lt;/p&gt;

&lt;h4&gt;
  
  
  Creating a User
&lt;/h4&gt;

&lt;p&gt;Now we can create users to assign to this group. Let’s create a single user for now by clicking the &lt;strong&gt;Users&lt;/strong&gt; menu item on the left. Note that we can create several users at a time if they share the same access types and permissions. Also note that it’s at the user level where we decide what type of “access” the user can have; not at the group level:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iTEgd-Mj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/gi7vdue8m7i11j5zg7r7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iTEgd-Mj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/gi7vdue8m7i11j5zg7r7.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Programmatic access&lt;/strong&gt;, which gives the user the right to access AWS via the CLI (future article), API, or SDK. Programmatic access requires the creation of an access key. Recall that we deleted the access keys for the root user, because that account should only ever have console access.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Management Console&lt;/strong&gt; access, which gives the user the right to log into this console.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I strongly recommend creating separate user accounts for programmatic vs. console access, although if you’re just creating a temporary user to learn AWS, a single user is fine for access to both.&lt;/p&gt;

&lt;p&gt;When we select console access, the &lt;strong&gt;Console password&lt;/strong&gt; fields are displayed, where we can request an autogenerated or manual password. I recommend requiring the users we create change their password upon first login, so we really only need to request “autogenerated” here. Let’s enforce that by selecting the &lt;strong&gt;Require password reset&lt;/strong&gt; option as well. FYI, setting this option automatically assigns the &lt;strong&gt;IAMUserChangePassword&lt;/strong&gt; policy to the user so they’ll be able to do this. Aside from this, users have zero permissions when first created.&lt;/p&gt;

&lt;h4&gt;
  
  
  Access Keys
&lt;/h4&gt;

&lt;p&gt;After we create the user, we’ll see a “success” page, where we’ll also see the system-generated access key. Once we leave this page, we will never, ever be able to see the secret access key or password again. So I recommend either saving them into a secure location (such as the LastPass “vault” or similar tool), or downloading the CSV file containing all the values (not very secure). If we do lose this, it’s not the end of the world. We’ll just have to regenerate a new access key and delete or inactivate the lost one. From this page, we can also request AWS send an email to the user with this info.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--y8_Vr5xS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/n4v7o7vc9wqvxd0chtp4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--y8_Vr5xS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/n4v7o7vc9wqvxd0chtp4.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Adding a User to a Group
&lt;/h4&gt;

&lt;p&gt;The next step is to add the user to a group — in our case, the admins group we created earlier. We could instead select the option to copy permissions from another user, or attach policies directly, but as I mentioned earlier, I don’t recommend that.&lt;/p&gt;

&lt;p&gt;We can skip over the Tags page for now; that’s a separate article, and not essential at this time.&lt;/p&gt;

&lt;h4&gt;
  
  
  More on &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html"&gt;Policies&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;We should take a look at the &lt;strong&gt;Policies option&lt;/strong&gt; (select from the menu on the left). Policies can “allow” or “deny” specific rights. We’ll see the hundreds (and hundreds, and hundreds) of pre-canned AWS policies. We can create our own as well. Under the hood, policies are defined as JSON documents. When we create our own policies, we’ll often write the JSON directly, although there is a visual editor as well. But this is for a future article.&lt;/p&gt;
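&lt;p&gt;To make that JSON concrete, here’s a minimal policy document granting read-only access to a single hypothetical S3 bucket (the bucket name is made up):&lt;/p&gt;

```python
import json

# A minimal IAM policy document: allow listing and reading objects in one
# hypothetical S3 bucket. "Version": "2012-10-17" is the current policy
# language version identifier, not a date you change.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",  # could also be "Deny"
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",    # the bucket itself
                "arn:aws:s3:::example-bucket/*",  # the objects inside it
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```

&lt;p&gt;This is exactly the shape of document the visual editor produces behind the scenes.&lt;/p&gt;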

&lt;p&gt;Since there are so many policies, AWS makes it easy for us to filter these by type (“customer managed” — ones we create, AWS managed, etc.) or by how they’re used. We can also enter a partial search string.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html"&gt;Roles&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;One other thing you may be interested in on the IAM console is a feature called &lt;strong&gt;Roles&lt;/strong&gt;. These are not associated with a specific user or group within the account. We can use roles for granting/delegating permissions to resources (including IAM users in other accounts).&lt;/p&gt;

&lt;p&gt;Roles allow us to avoid sharing long-term access keys. For example, if we want to access S3 from the AWS CLI on an EC2 instance, we can avoid storing access keys/secret keys on the machine by giving the EC2 instance S3 access via a role that has S3 access policies attached.&lt;/p&gt;
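&lt;p&gt;For that EC2-assumes-a-role pattern, the role needs a trust policy letting the EC2 service assume it; permissions policies (like S3 access) are attached to the role separately. This is the standard trust document for that case:&lt;/p&gt;

```python
import json

# Trust policy that lets the EC2 service assume a role, so instances with
# this role receive temporary credentials instead of stored access keys.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},  # who may assume it
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```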

&lt;p&gt;Applications can also assume roles, similar to AWS users assuming roles.&lt;/p&gt;

&lt;p&gt;Of course, if you have no idea what I'm talking about, don't worry -- this will be a future article.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_getting-report.html"&gt;Credential Reports&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;Finally, we can generate and download a &lt;strong&gt;Credential report&lt;/strong&gt; that shows user account details, which an admin should review on a regular basis.&lt;/p&gt;

&lt;p&gt;One final note — although many AWS services are available at the “region” level, IAM settings are “global.”&lt;/p&gt;

</description>
      <category>aws</category>
      <category>iam</category>
      <category>security</category>
    </item>
    <item>
      <title>AWS Certified Cloud Practitioner</title>
      <dc:creator>Mark Freedman</dc:creator>
      <pubDate>Mon, 06 Jan 2020 17:56:05 +0000</pubDate>
      <link>https://dev.to/markfreedman/aws-certified-cloud-practitioner-2mi3</link>
      <guid>https://dev.to/markfreedman/aws-certified-cloud-practitioner-2mi3</guid>
      <description>&lt;p&gt;&lt;em&gt;Last updated: 2020-02-11&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;I’ll be updating my AWS articles from time to time, as I learn more. I got my first cert — the AWS Certified Cloud Practitioner certification — on January 22nd, but as I took the practice exams (5 exams, 2x each) and the actual exam, I learned about gaps in my knowledge. So I’ll be filling those in through the articles I wrote beforehand.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://aws.amazon.com/certification/certified-cloud-practitioner/"&gt;AWS Certified Cloud Practitioner&lt;/a&gt; exam is the most basic &lt;a href="https://aws.amazon.com/"&gt;AWS&lt;/a&gt; certification you could obtain. It certifies that you have a basic knowledge of the core features and benefits of AWS. It’s a great kickoff point if you want to work towards other AWS certifications.&lt;/p&gt;

&lt;p&gt;I’ve never had a high opinion of certifications in my field. Most exams seemed to better gauge how good someone is at taking tests or memorizing facts than actually displaying understanding and knowledge. But the AWS exams appear different, and seemed to be designed to be a lot more representative of someone’s knowledge and understanding.&lt;/p&gt;

&lt;p&gt;It’s been many years since I’ve actively pursued a certification, but at this point in my career I feel becoming certified in AWS (and cloud computing, in general) is worth the time and effort. The benefits of cloud computing are enormous; it’s the most game-changing development since I entered the field. I’ve forced myself to make use of AWS and Azure just to remain current, and to make better solution choices.&lt;/p&gt;

&lt;p&gt;In order to pass the most basic AWS exam, we have to be well aware of these benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cloud computing allows businesses to trade capital expenses for variable operational expenses.&lt;/li&gt;
&lt;li&gt;We can take advantage of massive economies of scale. By sharing costs with other AWS users, machine cycles are used much more efficiently, saving all of us an enormous amount of money.&lt;/li&gt;
&lt;li&gt;We no longer have to guess our capacity needs ahead of time. We can quickly start small, and by analyzing our usage and needs (including assistance from the &lt;a href="https://aws.amazon.com/premiumsupport/technology/trusted-advisor/"&gt;AWS Trusted Advisor&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/ce-what-is.html"&gt;Cost Explorer&lt;/a&gt;) we can easily scale up (and down) as-needed, both manually and automatically. This is one of my favorite benefits.&lt;/li&gt;
&lt;li&gt;There are no long-term contracts, although there are ways to save even more money by committing to 1 or 3 year contracts for certain resources.&lt;/li&gt;
&lt;li&gt;As you can tell, cloud computing gives us tremendous increase of speed and agility. Small (even single-person) teams can quickly test and implement solutions that were impossible or cost-restrictive just a decade or so ago.&lt;/li&gt;
&lt;li&gt;We can go global in mere minutes by deploying and replicating our solutions across regions.&lt;/li&gt;
&lt;li&gt;Not only are the costs of starting up incredibly small (even free for the first year), we no longer have to pay for running and maintaining our own data centers. From experience, I can’t even begin to express how huge this benefit is. We can focus on business solutions instead of constantly worrying about infrastructure, which was a full-time job in itself.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As I work towards my certifications, I’m going to be writing a series of articles. This will help me prepare for the exams (and better understand and use AWS). But I hope this will also help you work towards those same goals. Right now, cloud computing expertise is in great demand, and it is still hard to find experts. This can be a huge boost for anyone’s tech career, whether you have an IT or developer background. If you’ve always considered yourself a well-rounded tech person who is often involved in development, architecture, and implementation of solutions, moving into &lt;a href="https://en.wikipedia.org/wiki/DevOps"&gt;DevOps&lt;/a&gt; is a natural next step.&lt;/p&gt;

&lt;p&gt;My subsequent posts may be delayed, since I’m creating several of them while I work towards my certs, and I’ll publish them fairly quickly once I’m done. I want to make sure they’re well thought-out and accurate before I post them.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>certification</category>
      <category>devops</category>
      <category>career</category>
    </item>
  </channel>
</rss>
