<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: jcmullis</title>
    <description>The latest articles on DEV Community by jcmullis (@jcmullis).</description>
    <link>https://dev.to/jcmullis</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F392167%2F093d0be1-8bdd-4e7e-a6d4-06616963bde0.jpg</url>
      <title>DEV Community: jcmullis</title>
      <link>https://dev.to/jcmullis</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jcmullis"/>
    <language>en</language>
    <item>
      <title>Utilizing CloudWatch Alarms for monitoring cost and performance</title>
      <dc:creator>jcmullis</dc:creator>
      <pubDate>Tue, 25 Aug 2020 01:35:47 +0000</pubDate>
      <link>https://dev.to/jcmullis/utilizing-cloudwatch-alarms-for-monitoring-cost-and-performance-4gip</link>
      <guid>https://dev.to/jcmullis/utilizing-cloudwatch-alarms-for-monitoring-cost-and-performance-4gip</guid>
      <description>&lt;p&gt;In my last post, I explored one of the many options in automation to cut costs of operation within AWS by setting up scheduled EC2 shutdowns. As I stated in my last post, my free-tier benefits officially end in August, which lead me to explore ways of cutting costs. One of my favorite ways to monitor and control costs is by utilizing the all-powerful CloudWatch!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--32Lpc8Tv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://geekylane.com/wp-content/uploads/2019/05/How-to-create-a-billing-alarm-on-AWS-CloudWatch-.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--32Lpc8Tv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://geekylane.com/wp-content/uploads/2019/05/How-to-create-a-billing-alarm-on-AWS-CloudWatch-.png" alt="Alt text of image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;CloudWatch is extremely versatile, with a well-designed dashboard showing metrics that include CPU utilization, disk reads and writes, and much more. It is an excellent tool for monitoring AWS services, and when paired with AWS SNS it can notify you when critical changes occur.&lt;/p&gt;

&lt;p&gt;In my last post I discussed scheduled shutdowns of EC2 instances using CloudWatch Events. What if you want to know whether the EC2 instances you're using are efficient for the tasks you run? What if you want alerts based on the billing costs of those instances, or an alert sent when one or more instances unexpectedly change state? This is where CloudWatch really shines!&lt;/p&gt;

&lt;p&gt;I wanted to try out a few different scenarios in my own account to keep an eye on some specific metrics. These are the steps I followed to create alarms for EC2 state changes, EC2 billing and general account billing.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;First off, I created SNS topics for all three alarms I wanted to monitor and subscribed to each of them. I used email as my notification protocol of choice, though SMS text could come in handy as well. Be sure to confirm the subscriptions from your own email.&lt;/li&gt;
&lt;li&gt;For my EC2 instance state-change alarm, I went into CloudWatch and created an event rule. I selected EC2 as the service name and "EC2 Instance State-change Notification" as the event type. Finally, for my target I chose SNS and selected the topic created for EC2 state changes. Once this was complete, I ran a simple test: I stopped an EC2 instance and checked that I received the alert stating the change had occurred. Voila! Simple as that, my first alarm was successfully created.&lt;/li&gt;
&lt;li&gt;My next alarm to set up in CloudWatch was the EC2 billing metric. With these alarms you can set a threshold (in this case, a dollar amount), and once that limit is surpassed an alarm fires. This is fantastic for monitoring projected costs for specific services. More of the same: select the specific metric to monitor, choose the SNS topic created for the alert, and add a brief description of what the alert is for.&lt;/li&gt;
&lt;li&gt;Finally, the most useful alarm (in my opinion) is the account billing alarm, which has its own dashboard built into CloudWatch and can be set up in minutes. It monitors the estimated cost of your account for the month and alerts you once the set threshold is surpassed. This has been crucial for me as my free tier ends. I have already discovered ways of cutting costs and found areas where I could save! Whew!&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/9HQRIttS5C4Za/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/9HQRIttS5C4Za/giphy.gif" alt="Alt text of image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS offers many different avenues of cost management, along with reports that can highlight areas for improvement. Cost Explorer, AWS Budgets and AWS Compute Optimizer are all fantastic as well. But for the depth of metrics it can monitor and the versatility of its performance, I will always be partial to CloudWatch.&lt;/p&gt;

&lt;p&gt;Jerry&lt;/p&gt;

</description>
      <category>aws</category>
      <category>architecture</category>
      <category>devops</category>
      <category>cloudwatch</category>
    </item>
    <item>
      <title>Stopping EC2 instances on a scheduled basis</title>
      <dc:creator>jcmullis</dc:creator>
      <pubDate>Mon, 17 Aug 2020 19:32:25 +0000</pubDate>
      <link>https://dev.to/jcmullis/stopping-ec2-instances-on-a-scheduled-basis-hf1</link>
      <guid>https://dev.to/jcmullis/stopping-ec2-instances-on-a-scheduled-basis-hf1</guid>
      <description>&lt;p&gt;The end of August will officially mark a full year of learning AWS and with that comes an end to my free-tier benefits. This had me thinking that maybe it was time to look into discussing AWS budgeting and ways to cut costs. While the majority of my projects that I have completed thus far have been heavily involved with serverless, I thought this would be a great opportunity to look into EC2 cost-saving options as well.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--72aSGhG7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/knayst0dykepnmgt7hta.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--72aSGhG7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/knayst0dykepnmgt7hta.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Those familiar with AWS are well aware of the benefits of Spot Instances, proper load-balancing and auto-scaling setup, and simply choosing the correct instance type for your usage criteria in the first place. Beyond that, there are still other ways to cut EC2 costs. The first option that came to mind was using CloudWatch Events to schedule shutdown of EC2 instances during off-hours. &lt;br&gt;
For example: say we have a rather large development business that needs multiple M5 instances up and running during normal business hours, Monday through Friday, 7 a.m. to 5 p.m. We could easily schedule shutdown of these instances at 6 p.m. each night and spin them back up at 6 a.m. the next morning, leaving them shut down over the weekend. That is 60 running hours out of the 168 in a week, roughly a 64 percent reduction in instance-hours, which could add up to substantial savings over the span of a year. So, what are the steps I would take to do this? Glad you asked!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/12WPxqBJAwOuIM/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/12WPxqBJAwOuIM/giphy.gif" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;First things first, I created a Lambda function from scratch using Python 3.8 as my runtime and added a custom role. This custom role includes the standard CloudWatch Logs access (CreateLogGroup, CreateLogStream and PutLogEvents) but also includes the EC2 actions DescribeInstances, DescribeRegions, StartInstances and StopInstances. I named this role, saved it and began writing my function code. Here's a screen capture of what this looked like in my policy manager:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jPLsGsm2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/b7gv0un35tpggy9kn976.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jPLsGsm2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/b7gv0un35tpggy9kn976.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
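&lt;p&gt;In JSON form, a custom policy along these lines would grant the permissions described above. The resource ARNs are left broad here for simplicity; in a real account you would scope them down to specific log groups and instances.&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeRegions",
        "ec2:StartInstances",
        "ec2:StopInstances"
      ],
      "Resource": "*"
    }
  ]
}
```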

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Using boto3, I wrote a small amount of code that gets the list of regions, iterates through each one, obtains only the running instances (accomplished with a filter) and finally stops each instance while returning "Stopped Instance" and the instance ID. Once this is complete, it's also necessary to increase the function timeout slightly so there is enough time to run through the code and shut down the instances.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Testing the function is the obvious next step. This is to ensure we have proper permissions and our code runs successfully.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now that the Lambda function is working properly, we have to set up a trigger that will invoke it. Within CloudWatch Events, I created a rule with a cron expression, set a time along with a Mon-Fri range, and set the target as my Lambda function. Once the scheduled time is reached, the rule triggers the target (my Lambda function) and, if all works well, shuts down the running EC2 instances.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I tested this successfully and plan on exploring more ways in the near future to cut costs within the realm of EC2. &lt;/p&gt;

&lt;p&gt;Jerry&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>python</category>
      <category>devops</category>
    </item>
    <item>
      <title>Transcoding video with AWS Lambda and Elastic Transcoder</title>
      <dc:creator>jcmullis</dc:creator>
      <pubDate>Sun, 07 Jun 2020 20:36:03 +0000</pubDate>
      <link>https://dev.to/jcmullis/transcoding-video-with-aws-lambda-and-elastic-transcoder-18d7</link>
      <guid>https://dev.to/jcmullis/transcoding-video-with-aws-lambda-and-elastic-transcoder-18d7</guid>
      <description>&lt;p&gt;I recently completed a small project in which I used AWS resources to automate the process of transcoding 4K video files. With the combination of S3, Lambda, Elastic Transcoder pipeline and SNS I was able to do this task effortlessly. Once a file is uploaded to an S3 source bucket it is then converted and saved in the transcoded complete bucket. SNS notifies of the initiation of this process and of its completion. This is incredibly useful for image handling as well. Below are the steps I took to complete this, along with my findings once finished.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Before beginning this project, I created the initial resources needed to get things started. First, I created three separate S3 buckets and attached bucket policies to allow public access: one bucket for the source uploaded content, one for the completed transcoded files and lastly one for thumbnail images.&lt;/li&gt;
&lt;li&gt;The next step was to create and subscribe to an SNS topic. For protocols, I chose email and SMS notifications. Once the subscription was created, I verified my email. You can also publish a test message through SNS to verify that all notification endpoints are receiving messages.&lt;/li&gt;
&lt;li&gt;With the initial resources set up, it was time to create and initiate my Elastic Transcoder pipeline. I named it and set my input bucket to my S3 source bucket. For transcoded files I selected my transcoding-completed bucket, and for thumbnails I selected my thumbnails S3 bucket. With the pipeline initiated, I made note of my pipeline ID for later use.&lt;/li&gt;
&lt;li&gt;Now on to the Lambda function. I created it from scratch using Python 3.7 (with the Boto3 SDK) as my runtime, along with a custom role and policy.&lt;/li&gt;
&lt;li&gt;At this point I configured an S3 trigger for my function. I made my source bucket the trigger and selected "All object create events". Essentially, this tells Lambda that a file has been added and that my function has a task to complete. I then added the function code into the editor and saved. Finally, I added an environment variable with the key "PIPELINE_ID" and the actual pipeline ID as the value.&lt;/li&gt;
&lt;li&gt;Now it's finally time to test. I downloaded a few short 4K clips and uploaded the first to my S3 source bucket. I was notified via text and email that a transcoding job had begun, and again when it completed. I then checked my transcoding-completed bucket and my thumbnails bucket and verified the files were there. With everything working properly, my simple transcoding project was complete. Finally, I checked my CloudWatch logs and confirmed the runs there as well.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This was a fun little project and could be incredibly helpful with image conversion as well. Lambda can be useful for tasks such as this and I plan to dive deeper with more complicated workflows. Thanks for reading! &lt;/p&gt;

&lt;p&gt;Jerry C. Mullis&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>tutorial</category>
      <category>functional</category>
    </item>
    <item>
      <title>The Cloud Resume Challenge</title>
      <dc:creator>jcmullis</dc:creator>
      <pubDate>Thu, 21 May 2020 17:49:28 +0000</pubDate>
      <link>https://dev.to/jcmullis/the-cloud-resume-challenge-2mic</link>
      <guid>https://dev.to/jcmullis/the-cloud-resume-challenge-2mic</guid>
      <description>&lt;p&gt;I started the cloud resume challenge as a recommendation by friends on Reddit. I'm aspiring to be an AWS cloud engineer and needed more hands on experience. After 14 years of healthcare management and working the frontlines of the hospital setting to treat patients, I am excited to pursue a new challenge in life with this career move into the world of cloud. I currently have obtained 4 AWS certifications and greatly enjoy learning more about AWS tech. This resume challenge, presented by Forrest Brazeal, took me days to accomplish and after several trial and error setups I finally completed the challenge. Here were my steps and findings along the way:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;First I created a simple HTML resume and included my certifications and experience. I created a GitHub repository and added my assets, CSS and index.html file. I used Atom to write the code and push it to my GitHub repository.&lt;/li&gt;
&lt;li&gt;The next step of the challenge was to upload these files to my newly created S3 bucket. I also set up the bucket to host a static website.&lt;/li&gt;
&lt;li&gt;Next I obtained a domain name through Route 53, created an SSL certificate through AWS Certificate Manager and tied it all together with a CloudFront distribution. Once these steps were complete, I could visit my newly created site and confirm that it was indeed secured with SSL.&lt;/li&gt;
&lt;li&gt;I skipped around the challenge somewhat and went ahead and created my website counter in JavaScript. I added these files to both my repository and my S3 bucket. One challenge I wasn't prepared for was having to invalidate my CloudFront distribution's cache after changing files in the S3 bucket. With that done, my website counter appeared on my static website and worked properly.&lt;/li&gt;
&lt;li&gt;The next step was to build a DynamoDB table and format it to count visits made to my site. Creating the database was simple, but the steps that followed gave me real challenges.&lt;/li&gt;
&lt;li&gt;I created a Lambda function with proper IAM roles/policies to access DynamoDB, and set it up to send data to the database once triggered. One of the requirements of this challenge was to use Python. This was slightly difficult for me, as I'd never used Python before and struggled to find the correct code to actually send the data to the database in the correct structure and increment.&lt;/li&gt;
&lt;li&gt;At this point I configured an API with both GET and PUT methods, then deployed and tested them successfully in the console. I also used Postman to run GET and PUT tests. Once this was complete, I deployed my prod API and attached its invoke URL to my website counter's JavaScript code. I attached my Lambda function to the PUT method and the counter API URL as an HTTP endpoint for the GET method. The page pulls the current visit count and triggers the Lambda function to put the updated number into my DynamoDB table. Tested and working appropriately.&lt;/li&gt;
&lt;li&gt;Finally, I used AWS CodePipeline to configure deployment to my S3 bucket. I connected it with my GitHub repository so that code and documentation changes made going forward would be reflected in the bucket. I will be attaching this short write-up to my GitHub repository, and if all goes well it will be displayed on my static website.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Finished product: &lt;a href="https://www.jcmresume.com"&gt;https://www.jcmresume.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This was a great learning experience and I'm incredibly excited about making a career change into the world of Cloud. &lt;/p&gt;

&lt;p&gt;Jerry C. Mullis &lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>codepen</category>
      <category>sql</category>
    </item>
  </channel>
</rss>
