<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Chloe McAteer</title>
    <description>The latest articles on DEV Community by Chloe McAteer (@chloemcateer3).</description>
    <link>https://dev.to/chloemcateer3</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F361195%2Fc8edcab6-6a9f-4f6b-8b66-8c089b1146c2.jpg</url>
      <title>DEV Community: Chloe McAteer</title>
      <link>https://dev.to/chloemcateer3</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/chloemcateer3"/>
    <language>en</language>
    <item>
      <title>Resizing EC2 instances without downtime</title>
      <dc:creator>Chloe McAteer</dc:creator>
      <pubDate>Mon, 12 Jun 2023 21:05:27 +0000</pubDate>
      <link>https://dev.to/aws-builders/resizing-ec2-instances-without-downtime-34gn</link>
      <guid>https://dev.to/aws-builders/resizing-ec2-instances-without-downtime-34gn</guid>
      <description>&lt;p&gt;As your project grows and user base expands, scaling up your compute resources becomes essential. In this blog I will explore the process of resizing AWS EC2 instances without incurring downtime and discuss some considerations to ensure a seamless transition.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real world walkthrough
&lt;/h2&gt;

&lt;p&gt;When scaling your infrastructure, it is important to regularly evaluate whether your current instance sizes are meeting the demands of your project. Consider factors such as CPU utilisation and user growth to determine when resizing becomes necessary.&lt;/p&gt;

&lt;p&gt;I have been working on a project that recently went live, and over the past few months its user base has been continually growing. I have auto scaling policies in place to scale out when CPU utilisation reaches a certain level - however, there comes a point when you realise you need to scale up your instances instead of continuously scaling out.&lt;/p&gt;

&lt;p&gt;I use Terraform for all my AWS infrastructure as code, so this is where my EC2 configuration is defined. The config is set up so that there should always be at least one healthy instance.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resource "aws_autoscaling_group" "example_ec2_asg" {
  name                      = "example-asg"
  vpc_zone_identifier       = [aws_subnet.private_subnet_1.id, aws_subnet.private_subnet_2.id]
  launch_configuration      = aws_launch_configuration.example_ecs_launch_config.name
  force_delete              = true
  health_check_grace_period = 10

  desired_capacity = 2
  min_size         = 1
  max_size         = var.max_size

  lifecycle {
    create_before_destroy = true
  }
}

# EC2 launch configuration
resource "aws_launch_configuration" "example_ecs_launch_config" {
  name_prefix          = "ecs_launch_config-"
  image_id             = data.aws_ami.example_ami.id
  iam_instance_profile = aws_iam_instance_profile.ecs_agent.name
  security_groups      = [aws_security_group.ecs_tasks_sg.id]
  instance_type               = var.ec2_instance_type // passing in as var
  associate_public_ip_address = false
  user_data                   = &amp;lt;&amp;lt;-EOF
    #!/bin/bash
    echo ECS_CLUSTER=${aws_ecs_cluster.main.name} &amp;gt;&amp;gt; /etc/ecs/ecs.config
  EOF

  depends_on = [aws_ecs_cluster.main]
  lifecycle {
    create_before_destroy = true
  }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the code snippet above you can see the launch configuration for my EC2 instances. You will notice that I pass the instance type in as a variable - this is because I use different instance sizes depending on the deployment environment (e.g. QA, test, production).&lt;/p&gt;

&lt;p&gt;You can see I have a lifecycle rule set up, which specifies that a new instance is always created before one is destroyed. I also have a desired capacity of 2, so two instances are running at any time.&lt;/p&gt;
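&lt;p&gt;As a sketch, the instance type variable referenced in the launch configuration (&lt;code&gt;var.ec2_instance_type&lt;/code&gt;) might be declared like this - the description and default shown here are assumptions, not taken from my actual config:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
variable "ec2_instance_type" {
  description = "EC2 instance type for the ECS cluster, set per environment"
  type        = string
  default     = "t2.small" # e.g. overridden to t2.medium for production
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;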

&lt;h3&gt;
  
  
  The Goal 
&lt;/h3&gt;

&lt;p&gt;My EC2 instances are currently &lt;code&gt;t2.small&lt;/code&gt; and I want to update them to be &lt;code&gt;t2.medium&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Worry
&lt;/h3&gt;

&lt;p&gt;For code deployments I have rolling deployments set up, but my worry was that this change is bigger than just a code update: it updates the actual compute the code is running on. &lt;br&gt;
I was afraid that if I updated my Terraform configuration and applied it, it might try to replace all instances at once and cause downtime. &lt;br&gt;
I decided to test this update in a test AWS environment before trying it in QA or Production, so that while I figured out the process I wouldn't cause any issues for the development team or end customers.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Plan
&lt;/h3&gt;

&lt;p&gt;I updated the Terraform variable for the environment to &lt;code&gt;t2.medium&lt;/code&gt; and ran &lt;code&gt;terraform plan&lt;/code&gt; to view the pending changes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa6zpg2iu98lqzirbkcwb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa6zpg2iu98lqzirbkcwb.png" alt="Screenshot of terraform plan"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the screenshot, you can see the Terraform plan showing that there will be a &lt;code&gt;force replacement&lt;/code&gt; of the EC2 instance to update the type.&lt;/p&gt;

&lt;p&gt;To see whether it would cause downtime, I applied the change in my test environment. Once the Terraform run had finished, I went into the EC2 console and could see that both of my instances were still &lt;code&gt;t2.small&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;I had thought that when the deployment completed, each instance would start updating to the new size. However, this is not the case - because I updated the instance size in the launch configuration, which only applies when a new instance is launched.&lt;/p&gt;
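&lt;p&gt;One way to confirm this from the command line is to list the instances in the auto scaling group and check their current types - the instance ID below is a placeholder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# List the instance IDs in the auto scaling group
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names example-asg \
  --query 'AutoScalingGroups[0].Instances[].InstanceId' --output text

# Check the current type of each instance
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[].Instances[].InstanceType' --output text
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;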

&lt;h3&gt;
  
  
  The Test
&lt;/h3&gt;

&lt;p&gt;I have two instances running, so I decided to stop one of them to test my theory and see what would happen. Within the AWS console (again in my test environment), I selected one of my instances and chose to stop it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhsm9wroauiu3152wnfp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhsm9wroauiu3152wnfp.png" alt="Screenshot of stopping EC2 instance"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When I clicked this, I began hitting my API's health endpoint to verify traffic was still being routed to the running instance, and when I checked back in the console I could see a new medium-sized instance was automatically being created.&lt;br&gt;
I hit the health endpoint a few more times to verify everything was still running as expected while the new instance was being created, and it was!&lt;/p&gt;
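&lt;p&gt;A simple way to keep checking availability while the instances are swapped over is a polling loop like the one below - the endpoint URL is a placeholder for your own service's health check:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
#!/bin/bash
# Print the HTTP status of the health endpoint every 5 seconds;
# anything other than 200 indicates traffic is not being routed correctly
while true; do
  curl -s -o /dev/null -w "%{http_code}\n" "https://example.com/health"
  sleep 5
done
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;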

&lt;p&gt;Once the new medium instance was in a ready state and I could see in ECS that the service was up and running, I stopped the second small instance. Again, once it was stopped, I could see a new medium instance being created in its place and registered in ECS.&lt;/p&gt;

&lt;p&gt;I now have confidence that I can do this upgrade on my QA and Production environments without any end user downtime.&lt;/p&gt;

&lt;h2&gt;
  
  
  Considerations
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Utilise an infrastructure-as-code tool to help keep track of configuration updates.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Implement rolling deployments to ensure one deployment is successful before you start another.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ensure your desired instance count is higher than one, to enable a rolling update strategy and to help in the event one of your instances becomes unhealthy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ensure you have a health endpoint set up, to allow you to easily test the service during the infrastructure update and make sure traffic is being routed as expected.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Have a rollback strategy in place in the event that something fails during your update.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Resizing EC2 instances without incurring downtime is an essential aspect of managing a growing project. By following best practices such as the ones above you can seamlessly resize your instances to meet the changing demands of your application and confidently scale your infrastructure while providing a smooth user experience.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>womenintech</category>
      <category>devops</category>
    </item>
    <item>
      <title>Dynamo DB Cross Region Migration</title>
      <dc:creator>Chloe McAteer</dc:creator>
      <pubDate>Sat, 24 Oct 2020 10:05:35 +0000</pubDate>
      <link>https://dev.to/aws-builders/dynamo-db-cross-region-migration-3l36</link>
      <guid>https://dev.to/aws-builders/dynamo-db-cross-region-migration-3l36</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fn8u6lkqjp9q47a0ztgm5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fn8u6lkqjp9q47a0ztgm5.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When setting up a database such as DynamoDB, it is important to choose a region close to where your users are located, in order to reduce latency when sending and retrieving data. However, if you have realised that your database is not in the optimal region for your users, it is possible to do a cross-region restore with DynamoDB. In this blog, I am going to take you through restoring a table and updating it with CloudFormation templates!&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview of steps we will take:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Create a backup from the Point-in-Time Recovery&lt;/li&gt;
&lt;li&gt;Restore the backup to a new region&lt;/li&gt;
&lt;li&gt;Import the newly restored table into CloudFormation&lt;/li&gt;
&lt;li&gt;Update the CloudFormation stack for the new table with the template of the original table&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Creating the backup
&lt;/h2&gt;

&lt;p&gt;When you create an on-demand backup, a timestamp of the request is catalogued. The backup is created asynchronously by applying all of the changes made up to the time of the request to the last full table snapshot.&lt;/p&gt;

&lt;p&gt;To create a backup of a table, click on the &lt;code&gt;Backups&lt;/code&gt; tab on the left hand side panel in the DynamoDB console and click on the blue &lt;code&gt;Create backup&lt;/code&gt; button at the top.&lt;/p&gt;
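&lt;p&gt;If you prefer the CLI, the same backup can be created with a single command - the table and backup names below are examples:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Create an on-demand backup of the table
aws dynamodb create-backup \
  --table-name my-table \
  --backup-name my-table-backup
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;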

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbl9mrhrh33nnx5z9v58x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbl9mrhrh33nnx5z9v58x.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will then prompt you for the name of the table you wish to backup and also for a name for the backup itself:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3mrmpu9mdtbclhni2bxv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3mrmpu9mdtbclhni2bxv.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the backup has been created you will see it listed on the console and you are able to select it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fh7rtpyrpw41olf35ka4t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fh7rtpyrpw41olf35ka4t.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the backup selected you can then press the Restore backup button at the top!  &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 For more information on backups check out the docs &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Backup.Tutorial.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;!  &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Cross Region Restore
&lt;/h2&gt;

&lt;p&gt;To restore the table you’ll need to provide a name for the new table; in this case I kept it the same as the original, as it will be moving regions:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fg9slwxnx09gdmqiwug15.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fg9slwxnx09gdmqiwug15.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To restore this table into a different region, all you have to do is select &lt;code&gt;Cross Region&lt;/code&gt; and specify the region you want to restore it to; in this case I want to move it to EU West:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9ndbyhvgi60rdexqw0h5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9ndbyhvgi60rdexqw0h5.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You are presented with an overview of the restore and then you can simply hit &lt;code&gt;Restore&lt;/code&gt; at the bottom!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fi1k0igmtt4oh43sp37ld.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fi1k0igmtt4oh43sp37ld.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And now if you go to the region you selected, you will be able to see that the table is being restored:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fhuusr270cdth223oimry.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fhuusr270cdth223oimry.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once it is restored, the table's status will be set to &lt;code&gt;Active&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgszlbva1kms0pwnvt1yi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgszlbva1kms0pwnvt1yi.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, the restored table is not actually identical to the original table. It is missing the following settings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Point-In-Time-Recovery&lt;/li&gt;
&lt;li&gt;Auto scaling policies&lt;/li&gt;
&lt;li&gt;AWS IAM policies&lt;/li&gt;
&lt;li&gt;CloudWatch metrics &amp;amp; alarms&lt;/li&gt;
&lt;li&gt;Tags&lt;/li&gt;
&lt;li&gt;Stream settings&lt;/li&gt;
&lt;li&gt;Time to Live&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;To set the above items to match the original table, we will want to use the CloudFormation template of the original table!&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;
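&lt;p&gt;For context, most of these settings map to properties on the &lt;code&gt;AWS::DynamoDB::Table&lt;/code&gt; resource, so the original template can simply declare them again. The fragment below is illustrative only - the names and values are assumptions:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
ExampleTable:
  Type: AWS::DynamoDB::Table
  Properties:
    TableName: my-table
    PointInTimeRecoverySpecification:
      PointInTimeRecoveryEnabled: true
    TimeToLiveSpecification:
      AttributeName: expiresAt
      Enabled: true
    StreamSpecification:
      StreamViewType: NEW_AND_OLD_IMAGES
    Tags:
      - Key: environment
        Value: production
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;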

&lt;blockquote&gt;
&lt;p&gt;💡 For more information on restores check out the docs &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Restore.Tutorial.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;!  &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Importing to CloudFormation
&lt;/h2&gt;

&lt;p&gt;As the table has already been created through the restore, it doesn’t have a CloudFormation template associated with it yet. However, we can import this resource into a CloudFormation stack through the following steps:&lt;/p&gt;

&lt;p&gt;In the CloudFormation console in the selected region (in this case Ireland), click &lt;code&gt;Create Stack&lt;/code&gt; and in the drop-down menu select &lt;code&gt;With existing resources&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F94nb1ltvdxmrykxli8x9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F94nb1ltvdxmrykxli8x9.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once selected you will need to specify a template; for this you can select &lt;code&gt;Upload a template file&lt;/code&gt;. When importing resources into a stack, no changes are allowed to the existing resources, so we can’t add any of the tags or extra permissions at this point; we need a template that reflects the restored table as it exists right now! I have created an example CloudFormation template that matches the exact specifications of the table that is created during a restore and the role associated with it, which can be viewed &lt;a href="https://github.com/chloeMcAteer/blog-resources/blob/main/restoreTable.yaml" rel="noopener noreferrer"&gt;here&lt;/a&gt;, so you can use it and select Next!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6aazd29c8qky4zi7qrnv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6aazd29c8qky4zi7qrnv.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you will need to provide an identifier to map the logical IDs in the template to the existing resources; in this case I used the name of the DynamoDB table and the name of the role:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F37tvzdeirntd3w1gjmd9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F37tvzdeirntd3w1gjmd9.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then you just have to give the stack a name:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F2wku3qcvl04z61rf2ijx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F2wku3qcvl04z61rf2ijx.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And you're presented with an overview of the changes that will occur, in this case the imports required:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3aw2a51f94qftuzkafu4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3aw2a51f94qftuzkafu4.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the import is complete, we can now move onto updating the stack!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fyl66hqv41i7wbdgeonab.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fyl66hqv41i7wbdgeonab.png" alt="Alt Text"&gt;&lt;/a&gt;  &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 For more information on importing existing resources into CloudFormation, check out the docs &lt;a href="https://aws.amazon.com/blogs/aws/new-import-existing-resources-into-a-cloudformation-stack/" rel="noopener noreferrer"&gt;here&lt;/a&gt;!  &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Updating the CloudFormation Template
&lt;/h2&gt;

&lt;p&gt;Now that the resources are imported into a CloudFormation stack, we can update them to use the same template that the original DynamoDB table and attached role had. To do this, select the new stack and select &lt;code&gt;Update&lt;/code&gt; at the top right-hand side:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3p7yljyruswadmjseamo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3p7yljyruswadmjseamo.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this, we just want to &lt;code&gt;Replace current template&lt;/code&gt; and give it the template we used to create the original table to make sure everything is kept consistent. The template for my original table can be found &lt;a href="https://github.com/chloeMcAteer/blog-resources/blob/main/testBlogTable.yaml" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnz8iswetxm75fgxds5bs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnz8iswetxm75fgxds5bs.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You’ll then see the resources that are going to be modified and you can go ahead and select &lt;code&gt;Update Stack&lt;/code&gt; at the bottom:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ft4cd33cibrdpzzmp5xzg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ft4cd33cibrdpzzmp5xzg.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;And that’s it! The table has now been migrated from one region to another, with identical settings 🎉&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can check the tags and permissions of the table in the new region and you will see that they have been updated!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Analysing Chocolate with Athena</title>
      <dc:creator>Chloe McAteer</dc:creator>
      <pubDate>Fri, 14 Aug 2020 18:06:06 +0000</pubDate>
      <link>https://dev.to/aws-builders/analysing-chocolate-with-athena-f8k</link>
      <guid>https://dev.to/aws-builders/analysing-chocolate-with-athena-f8k</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aZRV-r1c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/jq4ef7gxyxxjp83eylcj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aZRV-r1c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/jq4ef7gxyxxjp83eylcj.png" alt="Alt Text" width="760" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Recently, I have been wanting to up my game when it comes to analysing data — so I decided to use this as an opportunity to take AWS Athena for a whirl and see what it’s capable of.&lt;/p&gt;

&lt;p&gt;Throughout this blog I am going to try and understand Athena and the features it has while working with a chocolate dataset.&lt;/p&gt;

&lt;h2&gt;
  
  
  So what is Athena? 🤔
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qwTdUdcK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/bmp720jyjas0cv574epq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qwTdUdcK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/bmp720jyjas0cv574epq.png" alt="Alt Text" width="250" height="176"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/athena/?whats-new-cards.sort-by=item.additionalFields.postDateTime&amp;amp;whats-new-cards.sort-order=desc"&gt;AWS Athena&lt;/a&gt; is an interactive query service that analyses data using standard SQL. Athena is able to work with both structured and unstructured data and can work directly with data stored in s3!&lt;/p&gt;

&lt;h2&gt;
  
  
  What we’ll be using 👩‍💻
&lt;/h2&gt;

&lt;p&gt;In this blog we are going to be integrating with a number of different AWS services including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;S3&lt;/li&gt;
&lt;li&gt;IAM&lt;/li&gt;
&lt;li&gt;Glue&lt;/li&gt;
&lt;li&gt;Athena&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We are going to be storing our data in an S3 bucket and then using a Glue crawler to create the table schema required by Athena — don’t worry if this sounds a bit scary now, we will be going through and explaining this step by step.&lt;/p&gt;

&lt;h2&gt;
  
  
  Show me the data 🍫
&lt;/h2&gt;

&lt;p&gt;I am going to be using a &lt;a href="https://www.kaggle.com/rtatman/chocolate-bar-ratings"&gt;chocolate dataset from Kaggle&lt;/a&gt;, which is a CSV file containing over 1700 ratings for chocolate bars and includes information regarding the type of bean being used, the regional origin and the percentage of cocoa they contain. The rating is a score between 1–5 (5 being great and 1 being unpleasant.)&lt;/p&gt;

&lt;h2&gt;
  
  
  Storing the data 🗄
&lt;/h2&gt;

&lt;p&gt;Before we dive straight into working with Athena we need to put our data in AWS Simple Storage Service (S3).&lt;/p&gt;

&lt;p&gt;You will need to create a bucket within s3 that has two folders inside it, one for the chocolate dataset and one for the results of the queries.&lt;/p&gt;

&lt;p&gt;If you have not worked with S3 before, check out my &lt;a href="https://medium.com/@chloemcateer/aws-simple-storage-service-s3-852ad6920d3a"&gt;previous post&lt;/a&gt; that will guide you though creating your bucket, uploading data and creating folders!&lt;/p&gt;

&lt;h2&gt;
  
  
  Athena Time ⏰
&lt;/h2&gt;

&lt;p&gt;So in this tutorial we want to use Athena to run the following queries against our data:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Get all the countries listed in the dataset&lt;/li&gt;
&lt;li&gt;Sort countries by rating&lt;/li&gt;
&lt;li&gt;Discover the relationship between cocoa solids percentage and rating&lt;/li&gt;
&lt;/ul&gt;
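
&lt;p&gt;Sketched in Athena's standard SQL, these queries might look like the following; the table and column names are assumptions based on the dataset's CSV headers, so adjust them to whatever your Glue crawler generates:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
-- All distinct countries listed in the dataset
SELECT DISTINCT company_location FROM chocolate_ratings;

-- Countries sorted by average rating
SELECT company_location, AVG(rating) AS avg_rating
FROM chocolate_ratings
GROUP BY company_location
ORDER BY avg_rating DESC;

-- Average rating for each cocoa percentage
SELECT cocoa_percent, AVG(rating) AS avg_rating
FROM chocolate_ratings
GROUP BY cocoa_percent
ORDER BY cocoa_percent;
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;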

&lt;h2&gt;
  
  
  Lets dive straight in 🏊‍♂️
&lt;/h2&gt;

&lt;p&gt;In the AWS console we will navigate to Athena:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--v1k1VL6n--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/fqnebl3pxjch2skez31u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--v1k1VL6n--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/fqnebl3pxjch2skez31u.png" alt="Alt Text" width="700" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once Athena opens we can go ahead and click on get started:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jxPtangC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/y9vicreyk8628gaopkzy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jxPtangC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/y9vicreyk8628gaopkzy.png" alt="Alt Text" width="700" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Connecting the Data 🧩
&lt;/h2&gt;

&lt;p&gt;First thing we will need to do is connect to the data that we have stored in S3. In Athena you will see at the top left hand side of the screen there is an option to &lt;code&gt;Connect Data Source&lt;/code&gt;, we will want to select this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KR8cth8L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/2zg37b8lpvlz4adl6lgm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KR8cth8L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/2zg37b8lpvlz4adl6lgm.png" alt="Alt Text" width="700" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once selected, you will see this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NpKuyi8K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/egxgi9u5xl0rc1okmmat.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NpKuyi8K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/egxgi9u5xl0rc1okmmat.png" alt="Alt Text" width="700" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will allow us to choose our data source and connection method. We will choose S3 as our data source, since this is where our data lives, and for the metadata catalog we will go with the default, which is AWS Glue.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Glue I hear you asking? 🤔
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In Athena, tables are formed from the metadata definitions of the data’s schema. However, since S3 holds just the data, we need to use a &lt;code&gt;Glue Data Catalog&lt;/code&gt; to store metadata about what lives within our selected S3 location (e.g. location, structure, column names, data types). It is this metadata that allows Athena to query our dataset!&lt;/p&gt;

&lt;p&gt;One thing you might be wondering at this point is: how are we going to get this metadata to store in our Glue Data Catalog? Well, this is where &lt;code&gt;Glue Crawlers&lt;/code&gt; come into play!&lt;/p&gt;

&lt;p&gt;We can use a Glue Crawler to automatically extract our metadata, and create our table definitions!&lt;/p&gt;
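&lt;p&gt;If you prefer scripting to clicking through the console, the same crawler can also be defined programmatically. The sketch below is not part of this tutorial's setup; the crawler name, role ARN, database name and S3 path are made-up placeholders. It simply builds the request parameters for the Glue &lt;code&gt;CreateCrawler&lt;/code&gt; API:&lt;/p&gt;

```javascript
// Hypothetical sketch: building CreateCrawler parameters for the AWS Glue API.
// The name, role ARN, database and S3 path below are made-up placeholders.
function buildCrawlerParams(name, roleArn, databaseName, s3Path) {
  return {
    Name: name,
    Role: roleArn,                       // IAM role the crawler assumes
    DatabaseName: databaseName,          // Glue database the tables land in
    Targets: { S3Targets: [{ Path: s3Path }] },
  };
}

const crawlerParams = buildCrawlerParams(
  "chocolate-crawler",                                 // hypothetical name
  "arn:aws:iam::123456789012:role/glue-crawler-role",  // hypothetical role
  "chocolate_db",                                      // hypothetical database
  "s3://my-chocolate-bucket/data/"                     // hypothetical path
);
```

&lt;p&gt;These parameters would then be passed to a Glue client's create-crawler call; everything else in this tutorial is done through the console.&lt;/p&gt;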

&lt;p&gt;After we chose our data source and connection method in the previous step, this screen is displayed, and it is here that we select to set up our Glue Crawler:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WT8n93Yy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/k909pjk70ywembseq4ko.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WT8n93Yy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/k909pjk70ywembseq4ko.png" alt="Alt Text" width="700" height="208"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When we go ahead and click to connect &lt;code&gt;AWS Glue&lt;/code&gt;, it will open up the Glue console for us:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nnz8PF_v--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/iaalls9skrxvbktb1tfb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nnz8PF_v--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/iaalls9skrxvbktb1tfb.png" alt="Alt Text" width="700" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here we select &lt;code&gt;Get Started&lt;/code&gt; then select &lt;code&gt;Add table using a Crawler&lt;/code&gt; :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5qBT8RjS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/xr4nnnsbabfdkudti9ax.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5qBT8RjS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/xr4nnnsbabfdkudti9ax.png" alt="Alt Text" width="700" height="322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This prompts us to give our crawler a name:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NeGBQw_3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/qlcfio4w25mbdqy0n5z5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NeGBQw_3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/qlcfio4w25mbdqy0n5z5.png" alt="Alt Text" width="700" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With our crawler named, we now also need to select Data Stores as our crawler source type:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FFSI-L5J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/bvmvuell67xu9gsuj0qz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FFSI-L5J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/bvmvuell67xu9gsuj0qz.png" alt="Alt Text" width="700" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we need to actually point the crawler to our S3 bucket. A connection is typically not required for Amazon Simple Storage Service (S3) sources/targets, so we can leave this part blank:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--L-fyRd_n--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/8sr40w2dv6d9lst3jcll.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--L-fyRd_n--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/8sr40w2dv6d9lst3jcll.png" alt="Alt Text" width="700" height="289"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating an IAM role
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;An IAM role is an Identity and Access Management entity that defines a set of permissions for making AWS service requests.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Our next step involves creating an IAM role to allow the crawler to have permission to access the data that we have put in S3.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zrhne7Fp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/nyzfsvsn73mbmev7ivh3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zrhne7Fp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/nyzfsvsn73mbmev7ivh3.png" alt="Alt Text" width="700" height="329"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When working with data that is constantly changing, you might have new data being added to S3 hourly, daily or monthly. For this, you can schedule the crawler so it is always working with your most up-to-date data, by creating crawler schedules expressed in cron format. However, for this tutorial we are just going to select the &lt;code&gt;Run on Demand&lt;/code&gt; setting, as we only have the one dataset and want to trigger the crawler ourselves:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eindHeFC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/vae7fwldbnj4d977d8hl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eindHeFC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/vae7fwldbnj4d977d8hl.png" alt="Alt Text" width="700" height="329"&gt;&lt;/a&gt;&lt;/p&gt;
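&lt;p&gt;For reference, if you did want to schedule the crawler instead of running it on demand, AWS Glue schedules use a six-field cron expression wrapped in &lt;code&gt;cron(...)&lt;/code&gt;. The example schedules below are illustrative only, not part of this tutorial:&lt;/p&gt;

```javascript
// Illustrative Glue-style schedule expressions (AWS wraps a six-field
// cron expression in cron(...)). These example schedules are made up.
const schedules = {
  hourly:  "cron(0 * * * ? *)",   // top of every hour
  daily:   "cron(0 3 * * ? *)",   // 03:00 UTC every day
  monthly: "cron(0 3 1 * ? *)",   // 03:00 UTC on the 1st of each month
};

// A quick structural check: cron(...) wrapper containing six fields.
function isGlueCron(expr) {
  const m = expr.match(/^cron\((.+)\)$/);
  return Boolean(m) && m[1].split(/\s+/).length === 6;
}
```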

&lt;p&gt;Now we have nearly got our crawler set up; we just need to add a database for the data to be stored in:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LKlQnu5S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/n24l2e1n7te217rthb5r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LKlQnu5S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/n24l2e1n7te217rthb5r.png" alt="Alt Text" width="634" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the database has been created, you are presented with an overall summary. If everything looks good, click &lt;code&gt;Finish&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vryyay5K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/ik4byrl58jb5va6h4dc0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vryyay5K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/ik4byrl58jb5va6h4dc0.png" alt="Alt Text" width="700" height="329"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With our crawler set up, we can go ahead and kick it off by selecting &lt;code&gt;Run Now&lt;/code&gt;, and we will then be notified once the crawler has run:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PBl8H8o5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/l9uuo53n4hzl47kzslj5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PBl8H8o5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/l9uuo53n4hzl47kzslj5.png" alt="Alt Text" width="700" height="228"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The crawler has now gone through our data and inspected portions of it to determine the schema. Once we click in to view it, we can see that it has been able to pick out each of the column names and the data type for each column:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YV_8Gugk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/lz00ec47tk64wp0zyfvh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YV_8Gugk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/lz00ec47tk64wp0zyfvh.png" alt="Alt Text" width="466" height="463"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now if we flip back to Athena, we can see that our database and table have now been populated with what we have just created:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Vz_MWiu2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/3w6lkeifbpkvfijwxtfs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Vz_MWiu2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/3w6lkeifbpkvfijwxtfs.png" alt="Alt Text" width="373" height="208"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One last thing we need to set up before we query our data is the results location. To do this, you can click on the link at the top of the page in the blue notification box:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--I2KCPawW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/x0lslopylti4j3tyrfax.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--I2KCPawW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/x0lslopylti4j3tyrfax.png" alt="Alt Text" width="700" height="231"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Athena needs to know where the results from each query should be stored. For this we want to direct it to the results folder we created in S3:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KjNelaM8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/2yq5ew3sz1v76xayzra3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KjNelaM8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/2yq5ew3sz1v76xayzra3.png" alt="Alt Text" width="700" height="332"&gt;&lt;/a&gt;&lt;/p&gt;
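&lt;p&gt;As an aside, this same results location appears as &lt;code&gt;ResultConfiguration.OutputLocation&lt;/code&gt; if you submit queries through the Athena &lt;code&gt;StartQueryExecution&lt;/code&gt; API instead of the console. Below is a minimal sketch of the request parameters, assuming a hypothetical bucket and database name:&lt;/p&gt;

```javascript
// Hypothetical sketch: parameters for Athena's StartQueryExecution API.
// The database name and S3 results path below are made-up placeholders.
function buildQueryParams(sql, database, outputLocation) {
  return {
    QueryString: sql,
    QueryExecutionContext: { Database: database },
    ResultConfiguration: { OutputLocation: outputLocation },
  };
}

const queryParams = buildQueryParams(
  "SELECT * FROM athena_chocolate_analyser;",
  "chocolate_db",                       // hypothetical database name
  "s3://my-chocolate-bucket/results/"   // results folder created in S3
);
```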

&lt;h2&gt;
  
  
  Let the fun begin 🎬
&lt;/h2&gt;

&lt;p&gt;Now that all our setup is done, we can dive in and start querying the data!&lt;/p&gt;

&lt;p&gt;To query the data we can use standard SQL commands such as &lt;code&gt;SELECT&lt;/code&gt;, &lt;code&gt;FROM&lt;/code&gt;, &lt;code&gt;WHERE&lt;/code&gt;, &lt;code&gt;GROUP BY&lt;/code&gt;, &lt;code&gt;ORDER BY&lt;/code&gt;, etc. I will go over some of these below, but to actually run the queries we need to enter them into the Query Panel in Athena, which is shown in the screenshot below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ot1wy3h4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/7cy3plkya21ko16vn47q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ot1wy3h4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/7cy3plkya21ko16vn47q.png" alt="Alt Text" width="700" height="222"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To start off, I am just going to select all the data, to make sure everything is set up correctly and we are getting data back. To do this I am going to run the following query in the query panel:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT * FROM athena_chocolate_analyser;
~~~~~

and we can see, everything has been set up correctly and we are receiving results back:

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/kflk8i4c4spwayyifob6.png)

Now let’s try out some of our other queries! First up, getting a list of all the countries contained in the dataset:

~~~~
SELECT DISTINCT companyLocation
FROM athena_chocolate_analyser;
~~~~

Here we have also used the `DISTINCT` statement, to make sure that we aren’t getting back duplicates! This gives us back the following list:

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/uhd50vr28mb1pa0blbwl.png)

Next we want to sort countries by their ratings to see which ones produce the highest rated bars, for this I used the following query:

~~~~
SELECT companylocation,
         AVG(rating) AS averageRating
FROM athena_chocolate_analyser
WHERE
  companylocation IS NOT NULL AND rating IS NOT NULL
GROUP BY  companylocation
ORDER BY averageRating DESC
~~~~

Here I have thrown in a couple more SQL statements for example `AVG` to find the average rating, `AS` to create a alias temporary name for a column, `NOT NULL` to make sure we aren’t getting any null or empty values back and then also the `GROUP BY` and `ORDER BY` statements to group and sort the data returned!

Which then brings back a list sorted by the average rating for that country, so we can see that the highest rated chocolate comes from Chile!

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/tm6kgw4q9pnrvpy5djs8.png)

Our final query, is to try and see the relationship between the percentage of cocoa and the average rating. To do this I used the following query to find the average rating and the cocoa percentage and to group the results by the cocoa percentage:

~~~~
SELECT cocoapercent,
         AVG(rating) AS averageRating
FROM athena_chocolate_analyser
WHERE
  cocoapercent IS NOT NULL AND rating IS NOT NULL
GROUP BY cocoapercent
ORDER BY averageRating DESC
~~~~

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/tvmkieip9fts23tz6wft.png)

I find this all really impressive, as it’s super easy and fast to query the data to get these results!

You can view the history of the query’s ran against the data here in the history tab, which is useful to look back on:

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/n38havz75a72ibvwbyn4.png)

And if you want to save any of your query results, you can click the `Save As` button at the bottom of the query panel and this will save your results into the results folder you have set up in s3 — you will notice in the screen shot about that each query has a unique identifier called a `Query ID`. This makes it easier work with/find query result files.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/pqq6eb0slssfw8xifn3r.png)

##Conclusion

I have really enjoyed my first attempt working with Athena, it seems super fast and powerful. With its ability to query data sitting in S3 and export results, I can already see so many real world use cases for example; querying billing/usage reports, to gather insights on spending.

I plan on taking a more detailed look into it, now that I have scratched the surface, so watch this space for more blogs to come!

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
    </item>
    <item>
      <title>DynamoDB Scan Vs Query</title>
      <dc:creator>Chloe McAteer</dc:creator>
      <pubDate>Thu, 14 May 2020 21:52:51 +0000</pubDate>
      <link>https://dev.to/chloemcateer3/dynamodb-scan-vs-query-2p0p</link>
      <guid>https://dev.to/chloemcateer3/dynamodb-scan-vs-query-2p0p</guid>
      <description>&lt;p&gt;DynamoDB is Amazon's managed NoSQL database service. This blog will be focusing on data retrieval and how it is critical to think about what your data will look like, to make an informed decision about your database design.&lt;/p&gt;

&lt;p&gt;When working with DynamoDB there are really two ways of retrieving information: one being scanning and filtering, and the other being querying the database! So what is the difference, and which should I use?&lt;/p&gt;

&lt;p&gt;Before we get started, something we will be talking about a lot is partition keys, so let's start with a short definition of what this is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Partition Key&lt;/strong&gt; - Is a primary key that DynamoDB uses to partition the data and determine storage.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  First things first, what is scanning?
&lt;/h3&gt;

&lt;p&gt;Scanning involves reading each and every item in the database. It allows you to add filters if you are looking for something in particular, so that only items matching your requirements are returned. However, every single record still needs to be read, as the filter is only applied &lt;strong&gt;after&lt;/strong&gt; the scan has taken place!&lt;/p&gt;

&lt;h3&gt;
  
  
  Example:
&lt;/h3&gt;

&lt;p&gt;If we had the following data, and say we set the employeeID as the &lt;strong&gt;partition key&lt;/strong&gt; when we set up the database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;item&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;employeeID&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;S&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;4beb73f0-2fc0-41b2-a8e9&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="c1"&gt;//Set as partition key on DB creation&lt;/span&gt;
    &lt;span class="na"&gt;startDate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;S&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;2020-05-09T17:53:00+00:00&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nl"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;S&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;example-name&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;S&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;example-title&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We could scan the database using the following as our scan params:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;params&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;TableName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;employees&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;ProjectionExpression&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;employeeID, name&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;FilterExpression&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;title = :title&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;ExpressionAttributeValues&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
         &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;:title&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;S&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;example-title&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above code snippet would scan each item and then filter for items whose title matches the one specified! The filter expression here could filter on any of the columns/attributes in this database (e.g. employeeID, startDate, name, title).&lt;/p&gt;
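&lt;p&gt;To make the "filter after the scan" behaviour concrete, here is a small conceptual sketch in plain JavaScript (not the AWS SDK, and the sample rows are made up) showing that every record is read even when only a few match the filter:&lt;/p&gt;

```javascript
// Conceptual sketch of scan-and-filter: every item is read first,
// then the filter is applied, mirroring DynamoDB's scan behaviour.
function scanWithFilter(items, filter) {
  let itemsRead = 0;
  const results = [];
  for (const item of items) {
    itemsRead++;                          // every record is read...
    if (filter(item)) results.push(item); // ...filter applied afterwards
  }
  return { results, itemsRead };
}

// Hypothetical table contents for illustration.
const table = [
  { employeeID: "1", title: "example-title" },
  { employeeID: "2", title: "other-title" },
  { employeeID: "3", title: "example-title" },
];

const { results, itemsRead } = scanWithFilter(
  table,
  (item) => item.title === "example-title"
);
// itemsRead is 3 even though only 2 items match the filter
```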

&lt;h3&gt;
  
  
  But what is querying and why is it different?
&lt;/h3&gt;

&lt;p&gt;Querying allows you to retrieve data in a quick and efficient fashion, as it involves accessing the physical locations where the data is stored. However, the main difference here is that you would need to specify an equality condition for the &lt;strong&gt;partition key&lt;/strong&gt;, in order to query!&lt;/p&gt;

&lt;h3&gt;
  
  
  Example:
&lt;/h3&gt;

&lt;p&gt;If we take the same example again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;item&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;employeeID&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;S&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;4beb73f0-2fc0-41b2-a8e9&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="c1"&gt;//Set as partition key on DB creation&lt;/span&gt;
    &lt;span class="na"&gt;startDate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;S&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;2020-05-09T17:53:00+00:00&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;S&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;example-name&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;S&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;example-title&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since we want to query the table this time, we can make use of employeeID as the partition key and we would be able to write query params like this, where our &lt;code&gt;KeyConditionExpression&lt;/code&gt; is looking for a particular ID:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;queryParams&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;tableName&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;employees&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;KeyConditionExpression&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;employeeID = :id&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;ExpressionAttributeValues&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;:id&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;4beb73f0-2fc0-41b2-a8e9&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By using the partition key, the query is more efficient, as it doesn't need to read each item in the database; DynamoDB stores and retrieves each item based on this partition key value!&lt;/p&gt;
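&lt;p&gt;Conceptually, a query behaves like a keyed lookup rather than a full read. Here is a tiny illustrative sketch in plain JavaScript (not the AWS SDK), using the example item from above:&lt;/p&gt;

```javascript
// Conceptual sketch: a query uses the partition key to jump straight to
// the stored item, instead of reading the whole table like a scan does.
const byEmployeeID = new Map([
  ["4beb73f0-2fc0-41b2-a8e9", { name: "example-name", title: "example-title" }],
]);

function queryByPartitionKey(index, id) {
  return index.get(id); // single keyed lookup, no full table read
}

const employee = queryByPartitionKey(byEmployeeID, "4beb73f0-2fc0-41b2-a8e9");
```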

&lt;h3&gt;
  
  
  But what if we want to Query for something that is not the partition key?
&lt;/h3&gt;

&lt;p&gt;What if I want to query by another value that is not the partition key? E.g. what if we only have the employee's name and want to get all their details by that name?&lt;/p&gt;

&lt;p&gt;At the minute, with our current setup, we would not be able to write a query for this because, as I mentioned before, queries need to use the partition key in the equality condition! However, there is still a way we could query for this without having to do a scan.&lt;/p&gt;

&lt;h3&gt;
  
  
  Secondary Indexes
&lt;/h3&gt;

&lt;p&gt;Using secondary indexes allows us to create a subset of attributes from a table with an alternative key, giving query operations a different access point.&lt;/p&gt;

&lt;p&gt;You can create multiple secondary indexes on a table, which gives your applications access to a lot more query patterns.&lt;/p&gt;

&lt;p&gt;We can create a secondary index in DynamoDB by specifying the partition key for it and naming the index:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2F0WcXoe8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2F0WcXoe8.png"&gt;&lt;/a&gt;&lt;/p&gt;
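&lt;p&gt;The same index can also be added programmatically. The sketch below builds parameters for the DynamoDB &lt;code&gt;UpdateTable&lt;/code&gt; API to create a global secondary index; the names mirror the example above, but this is an illustrative sketch (provisioned throughput settings, required for provisioned-capacity tables, are omitted):&lt;/p&gt;

```javascript
// Hypothetical sketch: UpdateTable parameters that add a global secondary
// index keyed on "name", mirroring the console setup shown above.
// Throughput settings are omitted here (as for an on-demand table).
function buildAddIndexParams(tableName, indexName, keyAttribute) {
  return {
    TableName: tableName,
    AttributeDefinitions: [
      { AttributeName: keyAttribute, AttributeType: "S" }, // string key
    ],
    GlobalSecondaryIndexUpdates: [
      {
        Create: {
          IndexName: indexName,
          KeySchema: [{ AttributeName: keyAttribute, KeyType: "HASH" }],
          Projection: { ProjectionType: "ALL" }, // copy all attributes
        },
      },
    ],
  };
}

const indexParams = buildAddIndexParams("employees", "name-index", "name");
```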

&lt;p&gt;Now with our secondary index set up, we can go ahead and query using it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;queryParams&lt;/span&gt;  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;TableName&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;employees&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;IndexName&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;name-index&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ProjectionExpression&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;employeeID, startDate, name, title&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;KeyConditionExpression&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;name= :name&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ExpressionAttributeValues&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; 
            &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;:name&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;S&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;example-name&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt; 
    &lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that we are using the new secondary index within our query. We can now find the employee details by using the employee's name!&lt;/p&gt;

&lt;p&gt;Setting up secondary indexes does have associated costs, but when working with large amounts of data, it can really increase the performance and efficiency of data retrieval, since items can be fetched based on storage location without having to read every item in the whole database.&lt;/p&gt;

&lt;p&gt;To improve efficiency further, you could also look into adding composite keys or indexes, which can be made up of a partition key and a &lt;a href="https://aws.amazon.com/blogs/database/using-sort-keys-to-organize-data-in-amazon-dynamodb/" rel="noopener noreferrer"&gt;sort key&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scan or Query?
&lt;/h3&gt;

&lt;p&gt;So coming back to our main question, when do we use scan and when does it make sense to use query?&lt;/p&gt;

&lt;p&gt;And honestly, it all depends on the size and amount of data you are working with!&lt;/p&gt;

&lt;p&gt;If you are working with a small amount of data, you could totally go for scanning and filtering the database and not have to worry about adding all these extra keys. If the data is already small, the scan won't take long anyway, so adding in things like secondary indexes to partition the data into even smaller sets isn't likely to increase your performance by a significant amount, and therefore might not be worth the additional overhead of implementing them.&lt;/p&gt;

&lt;p&gt;However, if you are working with large amounts of data that is likely to keep growing, it is really worth spending the time to make sure you choose the right secondary indexes. &lt;/p&gt;

&lt;p&gt;When creating a database with indexes, it is really beneficial to spend time considering what queries you are likely to be doing. Understanding what data you will need to retrieve will help you choose your partition keys. Taking the initial time to think this through will make sure your database is set up the right way for you to retrieve data from it in the quickest, most efficient manner! Failure to think about this up front may limit your data access points down the line.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Transforming Teaching with Teachingo - Update #4</title>
      <dc:creator>Chloe McAteer</dc:creator>
      <pubDate>Mon, 20 Apr 2020 20:00:36 +0000</pubDate>
      <link>https://dev.to/chloemcateer3/transforming-teaching-with-teachingo-update-4-1cgj</link>
      <guid>https://dev.to/chloemcateer3/transforming-teaching-with-teachingo-update-4-1cgj</guid>
      <description>&lt;p&gt;&lt;em&gt;This is an update on our #TwilioHackathon project progress - you can see the original post here: &lt;div class="ltag__link"&gt;
  &lt;a href="/chloemcateer3" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xP7J_8jY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://res.cloudinary.com/practicaldev/image/fetch/s--hlbaCqb8--/c_fill%2Cf_auto%2Cfl_progressive%2Ch_150%2Cq_auto%2Cw_150/https://dev-to-uploads.s3.amazonaws.com/uploads/user/profile_image/361195/c8edcab6-6a9f-4f6b-8b66-8c089b1146c2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xP7J_8jY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://res.cloudinary.com/practicaldev/image/fetch/s--hlbaCqb8--/c_fill%2Cf_auto%2Cfl_progressive%2Ch_150%2Cq_auto%2Cw_150/https://dev-to-uploads.s3.amazonaws.com/uploads/user/profile_image/361195/c8edcab6-6a9f-4f6b-8b66-8c089b1146c2.jpg" alt="chloemcateer3"&gt;&lt;/a&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="/chloemcateer3/transforming-teaching-with-teachingo-4lc5" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Transforming Teaching with Teachingo - #TwilioHackathon Submission&lt;/h2&gt;
      &lt;h3&gt;Chloe McAteer ・ Apr 6 '20 ・ 4 min read&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#twiliohackathon&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#javascript&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#node&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#react&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;
&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Authentication
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/5bgMxHIDzsayEyeTWg/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/5bgMxHIDzsayEyeTWg/giphy.gif" alt="Brooklyn Nine Nine, Gina's Face ID!" width="478" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because the software is going to be used by schools, it needs to be secure - we wanted to ensure that not just anyone can access it and join any lesson! We set up accounts for students and teachers to ensure that only the students who belong to a particular class can access it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Keeping passwords secure
&lt;/h2&gt;

&lt;p&gt;Of course, we didn't want to store users' passwords directly in the database. If the passwords were stored in plain text, anyone with access to the database - whether an attacker or a developer carrying out maintenance - would be able to see exactly what people had set as their passwords, and the security of the system would be breached.&lt;/p&gt;

&lt;p&gt;To overcome this, we needed some way of salting and hashing them. We decided to use &lt;a href="https://www.npmjs.com/package/bcrypt"&gt;Bcrypt&lt;/a&gt;, as we had some previous experience using it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/cd58YHM3cZbNe6aq9X/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/cd58YHM3cZbNe6aq9X/giphy.gif" alt="Brooklyn Nine Nine, Terry's passwords!" width="480" height="250"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As soon as a user creates an account, I use Bcrypt to salt &amp;amp; hash the password and then store the hashed version of the password in the database. Then once a user tries to log in we can use the Bcrypt &lt;code&gt;.compare()&lt;/code&gt; function to compare the password the user entered with the hashed version from the database to authenticate them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Handling User Sessions
&lt;/h2&gt;

&lt;p&gt;As an extra layer of security, to ensure that users have been authenticated to use the application's services, the project creates a session token when the user logs in. To facilitate this, we decided to utilise &lt;a href="https://jwt.io/"&gt;JSON Web Tokens (JWT)&lt;/a&gt;. Doing so ensures that no one can bypass login and access the service's pages by changing the URL or hitting the backend API directly.&lt;/p&gt;

&lt;p&gt;Once a user successfully logs in, a session token is created for them, and this token is sent with every request the user makes. When a request is handled, we check two things: is the token valid, and has it expired? If both checks pass, the request is carried out; if either fails, a 401 error is thrown, as the user is not authorised!&lt;/p&gt;

</description>
      <category>twiliohackathon</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Transforming Teaching with Teachingo - Update #2</title>
      <dc:creator>Chloe McAteer</dc:creator>
      <pubDate>Sat, 18 Apr 2020 16:17:19 +0000</pubDate>
      <link>https://dev.to/chloemcateer3/transforming-teaching-with-teachingo-update-2-12bm</link>
      <guid>https://dev.to/chloemcateer3/transforming-teaching-with-teachingo-update-2-12bm</guid>
      <description>&lt;p&gt;&lt;em&gt;This is an update on our #TwilioHackathon project progress - you can see the original post here: &lt;div class="ltag__link"&gt;
  &lt;a href="/chloemcateer3" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F361195%2Fc8edcab6-6a9f-4f6b-8b66-8c089b1146c2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F361195%2Fc8edcab6-6a9f-4f6b-8b66-8c089b1146c2.jpg" alt="chloemcateer3"&gt;&lt;/a&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="/chloemcateer3/transforming-teaching-with-teachingo-4lc5" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Transforming Teaching with Teachingo - #TwilioHackathon Submission&lt;/h2&gt;
      &lt;h3&gt;Chloe McAteer ・ Apr 6 '20&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#twiliohackathon&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#javascript&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#node&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#react&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;
&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Data is the new oil 🛢
&lt;/h2&gt;

&lt;p&gt;When thinking about the different user flows for an application like this, it is clear to see that there are a lot of different data points being generated, which meant we had to think about adding a persistence layer to our application.&lt;/p&gt;

&lt;p&gt;So we initially spent some time thinking about what the best way to store this data was. Both of us have previous experience working with NoSQL, so we considered using &lt;a href="https://www.mongodb.com/" rel="noopener noreferrer"&gt;MongoDB&lt;/a&gt; since it is quick &amp;amp; easy to get up and running. &lt;/p&gt;

&lt;p&gt;However, once we took the time to understand all the data we would be working with, we realised we would need to store the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Users' email address, password, name, mobile number and whether they were a student or a teacher&lt;/li&gt;
&lt;li&gt;Class names, the teacher who teaches each class and the students who attend it&lt;/li&gt;
&lt;li&gt;Lesson time/date, number of questions asked in the lesson, lesson feedback etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/XoM3WIZDHMYwQKlXsC/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/XoM3WIZDHMYwQKlXsC/giphy.gif" alt="Brooklyn Nine Nine, There is another way"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From seeing this, it was clear that it made more sense for us to opt for a more structured database approach: the relationships between the different data points were more complex than we had initially thought. After a quick brainstorm about the database platform, we ultimately settled on &lt;a href="https://www.postgresql.org/" rel="noopener noreferrer"&gt;PostgreSQL&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Integrations.js ✨
&lt;/h3&gt;

&lt;p&gt;Having decided on the platform, we needed to understand how our Node.js service could interact with a Postgres instance. I came across &lt;a href="https://knexjs.org/" rel="noopener noreferrer"&gt;Knex.js&lt;/a&gt; which is an SQL query builder that can be used with Postgres!&lt;/p&gt;

&lt;p&gt;It allowed me to define schemas for each table within the code and create functions for getting, adding and removing data from the db - I was amazed by how powerful it was and how much of the heavy lifting it could do out of the box.&lt;/p&gt;
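&lt;p&gt;To give a feel for the query-builder style, here is a toy builder in miniature - a deliberately simplified sketch of the chained API, not how Knex is actually implemented:&lt;/p&gt;

```javascript
// Toy query builder illustrating the chained style that Knex exposes.
class QueryBuilder {
  constructor(table) {
    this.table = table;
    this.cols = ['*'];
    this.wheres = [];
  }
  select(...cols) {
    this.cols = cols;
    return this; // returning `this` is what makes the chaining work
  }
  where(col, val) {
    this.wheres.push([col, val]);
    return this;
  }
  // Render the accumulated calls into parameterised SQL.
  toSQL() {
    let sql = `select ${this.cols.join(', ')} from ${this.table}`;
    if (this.wheres.length) {
      sql += ' where ' + this.wheres.map(([c]) => `${c} = ?`).join(' and ');
    }
    return { sql, bindings: this.wheres.map(([, v]) => v) };
  }
}

const built = new QueryBuilder('users').select('id', 'email').where('type', 'teacher').toSQL();
console.log(built.sql); // select id, email from users where type = ?
```

&lt;p&gt;Knex chains the same way (&lt;code&gt;knex('users').select('id').where(...)&lt;/code&gt;), but it also handles schema migrations, seeding and actually running the generated SQL against Postgres for you.&lt;/p&gt;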

&lt;p&gt;&lt;a href="https://i.giphy.com/media/5qFQhmVkF0mfDoymL5/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/5qFQhmVkF0mfDoymL5/giphy.gif" alt="Brooklyn Nine Nine, Boyle holding Terry"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It also meant that once someone else pulled down the repository, they could run the database migrations to get all the tables set up the correct way!&lt;/p&gt;

&lt;p&gt;Knex also allowed me to define and generate seed data for the application, which meant I could put large amounts of dummy data into the database.&lt;/p&gt;

&lt;p&gt;We now have our database up and working, but we did face some problems along the way when it came to actually modelling it - for example, duplication of data and overcomplicated tables.&lt;/p&gt;

</description>
      <category>twiliohackathon</category>
      <category>postgres</category>
      <category>node</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Transforming Teaching with Teachingo - #TwilioHackathon Submission</title>
      <dc:creator>Chloe McAteer</dc:creator>
      <pubDate>Mon, 06 Apr 2020 21:48:34 +0000</pubDate>
      <link>https://dev.to/chloemcateer3/transforming-teaching-with-teachingo-4lc5</link>
      <guid>https://dev.to/chloemcateer3/transforming-teaching-with-teachingo-4lc5</guid>
      <description>&lt;h2&gt;
  
  
  The Team
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://twitter.com/PMc_A"&gt;Peter&lt;/a&gt; and I are two software engineers from Belfast, Northern Ireland who graduated from university last summer! Whenever we discovered the Twilio/DEV hackathon, we thought it was a great opportunity to jump into something that we can really get stuck in to.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7pUmttoj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://pbs.twimg.com/media/ETjqQhiWsAALO6y%3Fformat%3Djpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7pUmttoj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://pbs.twimg.com/media/ETjqQhiWsAALO6y%3Fformat%3Djpg" alt="" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the hackathon running for the majority of April, we were able to really take our time with the idea and build something that could have a real impact in the world right now.&lt;/p&gt;

&lt;p&gt;Given the current state of affairs in the world, everyone is flocking to the internet and various software/resources for communicating. &lt;em&gt;Everyone.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Most governments around the world have temporarily closed educational institutions in an attempt to contain the spread of the COVID-19 pandemic.&lt;br&gt;
These global closures are impacting over 89% of the world's student population (source - &lt;a href="https://en.unesco.org/covid19/educationresponse"&gt;https://en.unesco.org/covid19/educationresponse&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;With these closures, schools across the globe are attempting to transition to online, remote learning. With some of our personal connections working in the education space, they had voiced their frustration with the lack of general tools in the wild that they could use to fit their needs - specifically, when it comes to video conferencing software.&lt;/p&gt;

&lt;p&gt;Sure, there are a bunch of services that provide the facility to video call one another, but they are mostly aimed at corporate businesses rather than education.&lt;/p&gt;
&lt;h2&gt;
  
  
  Our Proposed Solution
&lt;/h2&gt;

&lt;p&gt;We are creating an E-Learning platform that is specific to teachers in order to fulfil their different needs when teaching a remote lesson. Some of these features include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Video conferencing including: screen share, the ability to mute users and facilitate live chat.&lt;/li&gt;
&lt;li&gt;Automated attendance checker.&lt;/li&gt;
&lt;li&gt;Automated message sent to students that did not attend lesson.&lt;/li&gt;
&lt;li&gt;Reported lesson statistics - who/how many asked questions in live chat, what percentage of the class attended (list of who did and who didn't).&lt;/li&gt;
&lt;li&gt;Request a transcript or recording of the lesson (could be emailed to the students who attended or to the students that missed the class).&lt;/li&gt;
&lt;li&gt;Feedback request - students can show they understand the topic being taught via red, amber, green feedback mid-lesson.&lt;/li&gt;
&lt;li&gt;General student feedback about topics post-lesson.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  What we built
&lt;/h2&gt;
&lt;h4&gt;
  
  
  Category Submission: COVID-19 Communications/Interesting Integrations
&lt;/h4&gt;
&lt;h3&gt;
  
  
  Choosing the right technology 📚
&lt;/h3&gt;

&lt;p&gt;We wanted to make our solution platform agnostic, so we opted to create a web application that both teachers and students could use.&lt;/p&gt;

&lt;p&gt;We decided to play to our strengths: with both of us having some experience with React and a lot of experience with JavaScript, we decided to build all the things with JS. For more details on our tech choices, check out our first progress update blog 👇&lt;/p&gt;


&lt;div class="ltag__link"&gt;
  &lt;a href="/pmca" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5zZVOQlC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://res.cloudinary.com/practicaldev/image/fetch/s--uC5X_3dp--/c_fill%2Cf_auto%2Cfl_progressive%2Ch_150%2Cq_auto%2Cw_150/https://dev-to-uploads.s3.amazonaws.com/uploads/user/profile_image/151942/738844a8-80e2-4b49-8975-358145a24d64.jpg" alt="pmca"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="/pmca/transforming-teaching-with-teachingo-update-1-5df2" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Transforming Teaching with Teachingo - Update #1&lt;/h2&gt;
      &lt;h3&gt;Peter McAree ・ Apr 14 '20 ・ 2 min read&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#twiliohackathon&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#javascript&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#node&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#react&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


&lt;h3&gt;
  
  
  Let's talk data 🔢
&lt;/h3&gt;

&lt;p&gt;With our tech stack chosen, it was time to really think about the data that would be passing through our systems. How would it be structured? Where would it be stored? And most importantly, how would it be secured? Check out more about our data decisions in our second progress update blog:&lt;/p&gt;


&lt;div class="ltag__link"&gt;
  &lt;a href="/chloemcateer3" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xP7J_8jY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://res.cloudinary.com/practicaldev/image/fetch/s--hlbaCqb8--/c_fill%2Cf_auto%2Cfl_progressive%2Ch_150%2Cq_auto%2Cw_150/https://dev-to-uploads.s3.amazonaws.com/uploads/user/profile_image/361195/c8edcab6-6a9f-4f6b-8b66-8c089b1146c2.jpg" alt="chloemcateer3"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="/chloemcateer3/transforming-teaching-with-teachingo-update-2-12bm" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Transforming Teaching with Teachingo - Update #2&lt;/h2&gt;
      &lt;h3&gt;Chloe McAteer ・ Apr 18 '20 ・ 2 min read&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#twiliohackathon&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#postgres&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#node&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#javascript&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


&lt;h3&gt;
  
  
  Prepare to Launch 🚀
&lt;/h3&gt;

&lt;p&gt;With just a simple spike of our front-end application, server and database created, we wanted to hit the ground running and set up a CI/CD pipeline to automate deployment. Check out how we set it up below 👇&lt;/p&gt;


&lt;div class="ltag__link"&gt;
  &lt;a href="/pmca" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5zZVOQlC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://res.cloudinary.com/practicaldev/image/fetch/s--uC5X_3dp--/c_fill%2Cf_auto%2Cfl_progressive%2Ch_150%2Cq_auto%2Cw_150/https://dev-to-uploads.s3.amazonaws.com/uploads/user/profile_image/151942/738844a8-80e2-4b49-8975-358145a24d64.jpg" alt="pmca"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="/pmca/transforming-teaching-with-teachingo-update-3-5daf" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Transforming Teaching with Teachingo - Update #3&lt;/h2&gt;
      &lt;h3&gt;Peter McAree ・ Apr 19 '20 ・ 3 min read&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#twiliohackathon&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#node&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#react&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#javascript&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


&lt;h3&gt;
  
  
  Security, Security, Security 🕵️‍♀️
&lt;/h3&gt;

&lt;p&gt;As the application is going to be used as an educational tool, it is essential that it is secure - to see some of the security measures we have taken, have a read of our blog number 4:&lt;/p&gt;


&lt;div class="ltag__link"&gt;
  &lt;a href="/chloemcateer3" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xP7J_8jY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://res.cloudinary.com/practicaldev/image/fetch/s--hlbaCqb8--/c_fill%2Cf_auto%2Cfl_progressive%2Ch_150%2Cq_auto%2Cw_150/https://dev-to-uploads.s3.amazonaws.com/uploads/user/profile_image/361195/c8edcab6-6a9f-4f6b-8b66-8c089b1146c2.jpg" alt="chloemcateer3"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="/chloemcateer3/transforming-teaching-with-teachingo-update-4-1cgj" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Transforming Teaching with Teachingo - Update #4&lt;/h2&gt;
      &lt;h3&gt;Chloe McAteer ・ Apr 20 '20 ・ 2 min read&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#twiliohackathon&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#javascript&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


&lt;h3&gt;
  
  
  Twilio Time ⏰
&lt;/h3&gt;

&lt;p&gt;The next step was to check out Twilio's services, SDKs and more to see what they could offer us. At first glance, we couldn't believe what Twilio could do out of the box, and we jumped straight into working with it. Take a look at how we got started, and at some of our code snippets, in progress update number 5!&lt;/p&gt;


&lt;div class="ltag__link"&gt;
  &lt;a href="/pmca" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5zZVOQlC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://res.cloudinary.com/practicaldev/image/fetch/s--uC5X_3dp--/c_fill%2Cf_auto%2Cfl_progressive%2Ch_150%2Cq_auto%2Cw_150/https://dev-to-uploads.s3.amazonaws.com/uploads/user/profile_image/151942/738844a8-80e2-4b49-8975-358145a24d64.jpg" alt="pmca"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="/pmca/transforming-teaching-with-teachingo-update-5-58g2" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Transforming Teaching with Teachingo - Update #5&lt;/h2&gt;
      &lt;h3&gt;Peter McAree ・ Apr 28 '20 ・ 3 min read&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#twiliohackathon&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#node&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#javascript&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#twilio&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;
 
&lt;h2&gt;
  
  
  Demo Link
&lt;/h2&gt;

&lt;p&gt;We recorded a &lt;a href="https://drive.google.com/open?id=1h9GgnBrbDLK4oHnUhiDgYJ1T0ddvsEOa"&gt;short demo&lt;/a&gt; of the main features in Teachingo.&lt;/p&gt;

&lt;p&gt;You can check out the deployed application &lt;a href="https://confident-pike-86a4c7.netlify.app"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Link to Code
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/pmc-a"&gt;
        pmc-a
      &lt;/a&gt; / &lt;a href="https://github.com/pmc-a/teachingo-client"&gt;
        teachingo-client
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Web application that powers the Teachingo platform 🧠
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;



&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/pmc-a"&gt;
        pmc-a
      &lt;/a&gt; / &lt;a href="https://github.com/pmc-a/teachingo-api"&gt;
        teachingo-api
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Node.js service that powers the Teachingo Client 🚀
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;GitHub Profiles:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/chloeMcAteer"&gt;Chloe - chloeMcAteer&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/pmc-a"&gt;Peter - pmc-a&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Completed Features
&lt;/h2&gt;

&lt;p&gt;When we started this hackathon, we had an endless list of possible features we wanted to add to the application. Unfortunately, due to time constraints, it wasn't possible to add everything we wanted, but below is a full list of the features we &lt;strong&gt;achieved&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ability to securely log in as a teacher or student&lt;/li&gt;
&lt;li&gt;Ability to view upcoming lessons&lt;/li&gt;
&lt;li&gt;Ability to start/join a video call&lt;/li&gt;
&lt;li&gt;Ability to mute/unmute mic, turn on/off camera and share screen&lt;/li&gt;
&lt;li&gt;Ability to live chat with everyone in the lesson to ask questions&lt;/li&gt;
&lt;li&gt;Ability for the teacher to view summary lesson statistics at the end of the lesson&lt;/li&gt;
&lt;li&gt;Ability for the teacher to send an SMS to students who missed the lesson&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Some additional features that we wanted to add and didn't have time to complete:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ability for students to show red, amber or green to highlight their understanding of the topic&lt;/li&gt;
&lt;li&gt;Ability to request a transcript/recording of the lesson&lt;/li&gt;
&lt;li&gt;Ability to obtain general student feedback at the end of the lesson&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Additional Resources/Info
&lt;/h2&gt;

&lt;p&gt;We hope you like our submission! We have been tweeting our progress throughout the hackathon, so if you want to see our journey, check us out at &lt;a href="https://twitter.com/chloeMcAteer3"&gt;@chloeMcAteer3&lt;/a&gt; &amp;amp; &lt;a href="https://twitter.com/PMc_A"&gt;@PMc_A&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>twiliohackathon</category>
      <category>javascript</category>
      <category>node</category>
      <category>react</category>
    </item>
  </channel>
</rss>
