<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jason Wiker</title>
    <description>The latest articles on DEV Community by Jason Wiker (@wiker).</description>
    <link>https://dev.to/wiker</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F781083%2Fd9da5101-39a1-471e-8d64-1d0f78b1feca.png</url>
      <title>DEV Community: Jason Wiker</title>
      <link>https://dev.to/wiker</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/wiker"/>
    <language>en</language>
    <item>
      <title>How I'm moving into Web3 as a Web Developer</title>
      <dc:creator>Jason Wiker</dc:creator>
      <pubDate>Tue, 28 Dec 2021 22:48:16 +0000</pubDate>
      <link>https://dev.to/wiker/new-post-772</link>
      <guid>https://dev.to/wiker/new-post-772</guid>
      <description>&lt;p&gt;8 years ago I started getting into cryptocurrencies as a hobby. I had found a currency you may know about called &lt;a href="https://dogecoin.com/"&gt;"Dogecoin"&lt;/a&gt;. I was initially skeptical of a currency based on a meme but after finding the community that was forming around the new coin I was hooked. I setup my computer to mine the coins and even bought hats to help &lt;a href="https://www.theguardian.com/technology/2014/mar/27/nascar-dogecoin-sponsor-josh-wise-talladega-superspeedway"&gt;Sponsor a nascar driver&lt;/a&gt; and fund the &lt;a href="https://www.theguardian.com/technology/2014/jan/20/jamaican-bobsled-team-raises-dogecoin-winter-olympics"&gt;Jamaican bobsled team&lt;/a&gt;. After a while though I sold the coins when the first bear market hit. I now regret that decision 🤣&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--a7mnU_jJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hg94peel489xz92y9efp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--a7mnU_jJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hg94peel489xz92y9efp.png" alt="dogecar" width="880" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Recently, with the rise of NFTs and Web3, my passion has returned. I see the community currently forming around Ethereum with things like &lt;a href="https://ethereum.org/en/dao/"&gt;DAOs&lt;/a&gt;, and it brought me right back to 2014 when I was mining Doge in my bedroom for fun. Not only that, but as a developer I see the power of Ethereum and other blockchains, and I have started to consume any content I can. The issue is that the field moves so quickly that it's hard to get an idea of what you need to start building. So I decided to document what I have been doing, in case it helps someone else out.&lt;/p&gt;

&lt;p&gt;I'm going to assume you have at least a baseline knowledge of JavaScript and basic web development: you should be able to spin up something like a React site and a Node server, and then deploy them. If not, you can jump over to YouTube, find one of the hundreds of tutorials, and come back here.&lt;/p&gt;

&lt;h1&gt;
  
  
  Building a Theoretical Base
&lt;/h1&gt;

&lt;p&gt;While there are multiple blockchains currently competing over who will be the primary &lt;a href="https://www.binance.com/en/blog/fiat/layer-1-blockchain-tokens-everything-you-need-to-know-421499824684903155"&gt;Layer 1&lt;/a&gt; solution, the biggest developer ecosystem currently exists around Ethereum, so that's where I'd recommend starting. The first thing I did was read the &lt;a href="https://ethereum.org/en/developers/docs/"&gt;Ethereum Developer documentation&lt;/a&gt;, which gives you a good primer on what smart contracts are, what gas is, and the other more theoretical parts of the current blockchain landscape.&lt;/p&gt;
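&lt;p&gt;To make the gas concept concrete: the fee for a transaction is simply the gas it uses multiplied by the gas price, which is quoted in gwei (1 ETH = 10^9 gwei = 10^18 wei). A quick sketch of the arithmetic in JavaScript, using an assumed 100 gwei price:&lt;/p&gt;

```javascript
// Transaction fee = gas used * gas price.
// A plain ETH transfer always uses 21,000 gas; prices are quoted in gwei.
const WEI_PER_GWEI = 10n ** 9n;
const WEI_PER_ETH = 10n ** 18n;

function feeInWei(gasUsed, gasPriceGwei) {
  return BigInt(gasUsed) * BigInt(gasPriceGwei) * WEI_PER_GWEI;
}

// A simple transfer at an assumed 100 gwei gas price:
const fee = feeInWei(21000, 100);
console.log(fee);                               // 2100000000000000n (wei)
console.log(Number(fee) / Number(WEI_PER_ETH)); // 0.0021 (ETH)
```

&lt;p&gt;Real gas prices fluctuate constantly, which is a big part of why the Layer 2 solutions covered later in this post exist.&lt;/p&gt;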

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bFwNR-sH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3x8nmwod8bfw9s4zqi3i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bFwNR-sH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3x8nmwod8bfw9s4zqi3i.png" alt="Ethereum Docs" width="880" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Getting Hands-On with Smart Contracts
&lt;/h1&gt;

&lt;p&gt;The next thing you probably want to do is develop a deeper understanding of Solidity and how to write smart contracts, because they are going to be the backend of your new Web3 apps. The best way I found was to go through the &lt;a href="https://cryptozombies.io/"&gt;CryptoZombies tutorial&lt;/a&gt;. It will walk you through the Solidity syntax as well as how to optimize your gas fees. After this I also read through the &lt;a href="https://docs.openzeppelin.com/contracts/4.x/"&gt;OpenZeppelin docs&lt;/a&gt; for the various ERC standards, because you will be using them extensively when writing your own contracts. By the end you should have a good enough handle on Ethereum smart contracts to start writing your own.&lt;/p&gt;
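&lt;p&gt;Before you even reach the Solidity, it helps to see how small the core idea of an ERC721 token is: a ledger mapping token IDs to owner addresses, with transfers gated on current ownership. Here is a rough sketch of that core in plain JavaScript (illustrative only, not real contract code; a real contract adds approvals, events, and safety checks, which OpenZeppelin's audited implementations handle for you):&lt;/p&gt;

```javascript
// Minimal sketch of the ERC721 core idea: a ledger of tokenId -> owner,
// where only the current owner can transfer a token.
class TinyNftLedger {
  constructor() {
    this.owners = new Map(); // tokenId -> owner address
  }
  mint(to, tokenId) {
    if (this.owners.has(tokenId)) throw new Error("token already minted");
    this.owners.set(tokenId, to);
  }
  ownerOf(tokenId) {
    const owner = this.owners.get(tokenId);
    if (owner === undefined) throw new Error("nonexistent token");
    return owner;
  }
  transferFrom(from, to, tokenId) {
    if (this.ownerOf(tokenId) !== from) throw new Error("not the owner");
    this.owners.set(tokenId, to);
  }
}

const ledger = new TinyNftLedger();
ledger.mint("0xalice", 1);
ledger.transferFrom("0xalice", "0xbob", 1);
console.log(ledger.ownerOf(1)); // 0xbob
```

&lt;p&gt;On a real blockchain this ledger is replicated and enforced by every node, which is what makes the ownership record trustworthy.&lt;/p&gt;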

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kdtw09Ce--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/87z4w9m8r9dwfpdzpp81.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kdtw09Ce--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/87z4w9m8r9dwfpdzpp81.png" alt="CryptoZombies" width="880" height="599"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Your First Dapp
&lt;/h1&gt;

&lt;p&gt;I like to learn by doing, so the next thing I did was build a demo NFT marketplace by following &lt;a href="https://www.youtube.com/watch?v=GKJBEEXUha0"&gt;this tutorial&lt;/a&gt; by &lt;a href="https://dev.to/dabit3"&gt;Nader Dabit&lt;/a&gt;. It goes through:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The basics of building an &lt;a href="https://docs.openzeppelin.com/contracts/3.x/erc721"&gt;ERC721&lt;/a&gt; token contract as well as an NFT Marketplace contract&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How to deploy the contract to the &lt;a href="https://polygon.technology/"&gt;Polygon Network&lt;/a&gt;, an Ethereum layer 2 sidechain you can use to minimize gas fees (more on that later)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Setting up a &lt;a href="https://hardhat.org/"&gt;Hardhat development environment&lt;/a&gt; to test and deploy your Solidity contracts&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scaffolding a basic Next.js app with &lt;a href="https://docs.ethers.io/v5/"&gt;Ethers.js&lt;/a&gt; and &lt;a href="https://github.com/Web3Modal/web3modal"&gt;Web3Modal&lt;/a&gt; to interact with your deployed contracts&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After all this you should have a baseline understanding of full-stack Ethereum development, but the field is moving so quickly that there is still much more to learn.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dHRqJBur--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1lalfewic8p72vv9k6j8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dHRqJBur--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1lalfewic8p72vv9k6j8.png" alt="Metaverse market" width="880" height="492"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Diving Deeper
&lt;/h1&gt;

&lt;p&gt;Now that we have built a foundation for blockchain development, we can start diving deeper into the field. I have been listening to podcasts like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://podcast.app/chris-dixon-and-naval-ravikant-the-wonders-of-web-how-to-pick-the-right-hill-to-climb-finding-the-right-amount-of-crypto-regulation-friends-with-benefits-and-the-untapped-potential-of-nft-e300330060/?utm_source=ios&amp;amp;utm_medium=share"&gt;This Tim Ferris episode&lt;/a&gt; where he goes over the current Web3 Landscape with two large investors in the field&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://podcast.app/vitalik-buterin-creator-of-ethereum-on-understanding-ethereum-eth-vs-btc-eth-scaling-plans-and-timelines-nfts-future-considerations-life-extension-and-more-featuring-naval-ravikant-e130582179/?utm_source=ios&amp;amp;utm_medium=share"&gt;This Tim Ferris episode&lt;/a&gt; with Vitalik Buterin who is one of the founders of Ethereum&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Any episodes from the &lt;a href="https://podcast.app/bankless-p1013243/?utm_source=ios&amp;amp;utm_medium=share"&gt;Bankless podcast &lt;/a&gt; which will keep you updated on the Crypto universe as well as educating you in the process&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I have also been reading &lt;a href="https://github.com/sherminvo/TokenEconomyBook/wiki"&gt;Token Economy&lt;/a&gt;, which you can read on GitHub for free. It's one of the best books currently available for a deeper dive into the theory of blockchains and token economies.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1tCTn0eh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pa7oleuypy0yrj3l863a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1tCTn0eh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pa7oleuypy0yrj3l863a.png" alt="token book" width="880" height="594"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have also been using many of the current Ethereum dapps like OpenSea, and I even bought an ENS domain (wiker.eth 😎). You can view the current top dapps on &lt;a href="https://dappradar.com/rankings"&gt;DappRadar&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1l9_UA8O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b2x5lhrh6kb18cng3a39.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1l9_UA8O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b2x5lhrh6kb18cng3a39.png" alt="dapp radar" width="880" height="609"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Exploring outside Ethereum
&lt;/h1&gt;

&lt;p&gt;Ethereum currently has very high gas fees, which makes building dapps on it cost-prohibitive. The Ethereum organization has a roadmap for fixing this, though, and one approach is &lt;a href="https://ethereum.org/en/developers/docs/scaling/layer-2-rollups/"&gt;Layer 2 rollup solutions&lt;/a&gt;. I have been exploring some of them, like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://polygon.technology/"&gt;Polygon&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://offchainlabs.com/"&gt;Arbitrum&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.optimism.io/"&gt;Optimism&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can also take a look at some of the Layer 1 solutions outside Ethereum, like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://solana.com/"&gt;Solana&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.avax.network/"&gt;Avalanche&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.binance.org/en/smartChain"&gt;Binance Smart Chain&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;I'm super excited about everything going on in Web3, and I hope this helped you get a better idea of how to start developing and building apps. This is only the beginning, though, and I hope you're able to learn even more about the ecosystem. If you have any questions, feel free to reach out, and all the best on your Web3 journey!&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>javascript</category>
      <category>web3</category>
      <category>react</category>
    </item>
    <item>
      <title>Data Pipeline for Mobile Analytics</title>
      <dc:creator>Jason Wiker</dc:creator>
      <pubDate>Mon, 27 Dec 2021 01:04:00 +0000</pubDate>
      <link>https://dev.to/wiker/creating-an-aws-data-lake-and-real-time-pipeline-for-mobile-app-analytics-with-aws-cdk-29n9</link>
      <guid>https://dev.to/wiker/creating-an-aws-data-lake-and-real-time-pipeline-for-mobile-app-analytics-with-aws-cdk-29n9</guid>
      <description>&lt;p&gt;Organizations that successfully generate business value from their data will outperform their peers. These leaders are able to do new types of analytics like machine learning over new sources like log files, data from click-streams, social media, and internet connected devices stored in data lakes. This helps them to identify, and act upon opportunities for business growth faster by attracting and retaining customers, boosting productivity, proactively maintaining devices, and making informed decisions.&lt;/p&gt;

&lt;p&gt;Given the breadth of services that AWS offers for analytics, starting this data transformation can be difficult. To help with this, I have created a demonstration data lake for a fictional music iOS app called “Xerris Music”. In this app, users can stream music from all their favourite artists, similar to Spotify or Apple Music. The app has started to take off, and the company wants to better understand the listening and usage habits of its customers.&lt;/p&gt;

&lt;p&gt;To accomplish this we will create a data pipeline leveraging AWS Amplify, Pinpoint and Kinesis Firehose. The pipeline will send the data to a landing S3 data lake where we will store all our raw data. We will use an AWS Glue crawler to catalog the raw data, then use a scheduled Glue job that runs a Spark ETL script to flatten the data and reformat it to Parquet. This transformed data will be stored in a different bucket, from which we can use other AWS tools to run queries, build dashboards and create machine learning models. In this demo we will use AWS Athena, an ad-hoc query engine that runs SQL queries against data stored in S3. All of this will be provisioned as code with the AWS CDK.&lt;/p&gt;

&lt;p&gt;Here is a quick visual overview of the architecture we are going to build:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5cjQwTqy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/4260/1%2AWrNNOgZiPI4GMbCMiHceFA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5cjQwTqy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/4260/1%2AWrNNOgZiPI4GMbCMiHceFA.png" alt="" width="880" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Amplify Setup
&lt;/h2&gt;

&lt;p&gt;To start off we will get the iOS app set up with Amplify and Pinpoint. AWS Amplify is a set of tools and services that can be used together or on their own to help front-end web and mobile developers build scalable full-stack applications. AWS Pinpoint is the service we will use for tracking user metrics. First, make sure you have the AWS Amplify CLI installed on your machine:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install -g @aws-amplify/cli
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Then configure the CLI to your AWS account:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;amplify configure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Then open up the iOS project and initialize Amplify and Pinpoint. Note the name you use for the Pinpoint project when asked; it will be used later to hook up the rest of the pipeline:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;amplify init
amplify add analytics
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Now we need to initialize Amplify in our App Delegate file:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;

&lt;p&gt;Now we can push up all the changes we have made and create the Amplify infrastructure in AWS:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;amplify push
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;The next step is the app itself, which looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tdLe9bvw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2340/1%2AsjHWqpKnB1so2gZDhHvY3A.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tdLe9bvw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2340/1%2AsjHWqpKnB1so2gZDhHvY3A.jpeg" alt="" width="880" height="1904"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For demonstration purposes this app doesn’t actually play any music. Instead, it takes your Last.fm username and pulls your listening history from their API, so we have a large set of data to work with. From the API we can get the date a track was played, the artist, and tags like the genre to add more dimensions to the data.&lt;/p&gt;

&lt;p&gt;We take these properties and create a “SongListen” event to be stored. Pinpoint also stores events such as “SessionStart” and “SessionEnd” by default to add more data to your analytics.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
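&lt;p&gt;The gist with the Swift for this may not render in the feed, but the shape of the mapping is simple enough to sketch. In JavaScript for brevity (the field names on the incoming track object are illustrative, not the exact Last.fm response shape):&lt;/p&gt;

```javascript
// Sketch: turn one Last.fm listening-history entry into a "SongListen"
// analytics event. Field names here are illustrative.
function toSongListenEvent(track) {
  return {
    name: 'SongListen',
    attributes: {
      artist: track.artist,
      song: track.name,
      genre: track.tags[0] ?? 'unknown', // first tag stands in for genre
      playedAt: track.playedAt,          // when the track was scrobbled
    },
  };
}

const event = toSongListenEvent({
  artist: 'Daft Punk',
  name: 'Harder, Better, Faster, Stronger',
  tags: ['electronic', 'french house'],
  playedAt: '2021-12-20T18:04:00Z',
});
console.log(event.name); // SongListen
```

&lt;p&gt;The Amplify analytics client records events of this shape, and Pinpoint adds the session events around them automatically.&lt;/p&gt;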



&lt;h2&gt;
  
  
  Pipeline Infrastructure
&lt;/h2&gt;

&lt;p&gt;Now that we have the data being sent to Pinpoint, it’s time to set up the rest of the pipeline to push the data to our S3 lake. We will use Kinesis Firehose for this part, a service that reliably loads near-real-time streaming data into data lakes, data stores, and analytics services. It can capture, transform, and deliver streaming data to Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, generic HTTP endpoints, and service providers like Datadog, New Relic, MongoDB, and Splunk.&lt;/p&gt;

&lt;p&gt;We will use CDK to set this part up, so move over to an infrastructure folder in the project and start up a new CDK stack:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CDK init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Inside our newly created stack, let’s start by creating the landing bucket to store our data. Then we will create a Kinesis Firehose stream to ingest the Pinpoint data into the landing bucket, and finally an event stream in Pinpoint to stream the data into Kinesis Firehose. Also included are the various IAM roles each piece of infrastructure needs to access the others. Note that you need to enter the Pinpoint application ID you generated with Amplify in the previous step:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
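&lt;p&gt;In case the gist above doesn’t render in your reader, here is a rough sketch of the shape of that stack in CDK JavaScript (aws-cdk-lib v2; the construct names, role wiring, and the pinpointAppId placeholder are illustrative, not the post’s exact code):&lt;/p&gt;

```javascript
const cdk = require('aws-cdk-lib');
const s3 = require('aws-cdk-lib/aws-s3');
const iam = require('aws-cdk-lib/aws-iam');
const firehose = require('aws-cdk-lib/aws-kinesisfirehose');
const pinpoint = require('aws-cdk-lib/aws-pinpoint');

class PipelineStack extends cdk.Stack {
  constructor(scope, id, props) {
    super(scope, id, props);
    const pinpointAppId = 'YOUR_PINPOINT_APP_ID'; // from `amplify add analytics`

    // Landing bucket for the raw event data.
    const landingBucket = new s3.Bucket(this, 'LandingBucket');

    // Role Firehose assumes to write into the bucket.
    const firehoseRole = new iam.Role(this, 'FirehoseRole', {
      assumedBy: new iam.ServicePrincipal('firehose.amazonaws.com'),
    });
    landingBucket.grantWrite(firehoseRole);

    // Firehose delivery stream: Pinpoint events in, S3 out.
    const stream = new firehose.CfnDeliveryStream(this, 'EventStream', {
      deliveryStreamType: 'DirectPut',
      s3DestinationConfiguration: {
        bucketArn: landingBucket.bucketArn,
        roleArn: firehoseRole.roleArn,
      },
    });

    // Role Pinpoint assumes to publish into Firehose.
    const pinpointRole = new iam.Role(this, 'PinpointRole', {
      assumedBy: new iam.ServicePrincipal('pinpoint.amazonaws.com'),
    });
    pinpointRole.addToPolicy(new iam.PolicyStatement({
      actions: ['firehose:PutRecordBatch', 'firehose:DescribeDeliveryStream'],
      resources: [stream.attrArn],
    }));

    // Wire the Pinpoint app's event stream to Firehose.
    new pinpoint.CfnEventStream(this, 'PinpointEventStream', {
      applicationId: pinpointAppId,
      destinationStreamArn: stream.attrArn,
      roleArn: pinpointRole.roleArn,
    });
  }
}
module.exports = { PipelineStack };
```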

&lt;p&gt;Now if you deploy this stack as-is, you can go into the app and test it out. Enter your Last.fm username, press play, and you should see data populate in the new landing bucket of our data lake.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TUwnGKN1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3860/1%2AvFG7nWB4hKiFzEyFv0wCFA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TUwnGKN1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3860/1%2AvFG7nWB4hKiFzEyFv0wCFA.png" alt="" width="880" height="471"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Data ETL and Cataloging Infrastructure
&lt;/h2&gt;

&lt;p&gt;Now that we have our data stored in the landing bucket can we start generating insights from it?&lt;/p&gt;

&lt;p&gt;Not yet.&lt;/p&gt;

&lt;p&gt;The data from Pinpoint is stored in nested JSON files, which are difficult to query and read. Not only that, we don’t even know what schema is available to read from. To help with this step we are going to use AWS Glue, a serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, machine learning, and application development. AWS Glue provides the capabilities needed for data integration so that you can start analyzing your data and putting it to use in minutes instead of months.&lt;/p&gt;

&lt;p&gt;We will start by creating a Glue crawler that looks at the landing bucket and generates an initial schema from the data. This schema will be used later to transform the data.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;p&gt;After the crawler runs, we can take a look at the generated schema and see that most of it is nested inside structs:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HFcHhOHD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/6036/1%2ARIcK2WkE3edu8tyNd1mkDw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HFcHhOHD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/6036/1%2ARIcK2WkE3edu8tyNd1mkDw.png" alt="" width="880" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we can take a look at the Spark script for the ETL. It takes the data in the landing bucket, flattens it using the built-in “Relationalize” transform that Glue offers, then writes the data in Parquet format into the new bucket we will use for transformed data. Parquet is a flat columnar storage format designed to be more efficient and performant than row-based formats like CSV or TSV.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
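&lt;p&gt;The Spark script itself is in the gist above, but the core idea of “Relationalize”, collapsing nested structs into dotted top-level columns, can be shown in a few lines of plain JavaScript (a conceptual sketch only; the real transform runs in Spark and also pivots arrays out into separate tables, and the sample field names are illustrative):&lt;/p&gt;

```javascript
// Relationalize-style flattening: nested structs become dotted column
// names, e.g. a "session" struct with "start" inside it yields "session.start".
function isStruct(value) {
  if (value === null) return false;
  if (Array.isArray(value)) return false;
  return typeof value === 'object';
}

function flatten(record, prefix = '', out = {}) {
  for (const [key, value] of Object.entries(record)) {
    const column = prefix ? prefix + '.' + key : key;
    if (isStruct(value)) {
      flatten(value, column, out); // recurse into the nested struct
    } else {
      out[column] = value;         // leaf value becomes a flat column
    }
  }
  return out;
}

const raw = {
  event_type: 'SongListen',
  attributes: { artist: 'Gorillaz', tags: ['alternative'] },
  session: { start_timestamp: 1640390400000 },
};
console.log(flatten(raw));
```

&lt;p&gt;The flat columns are what make the data easy to query with SQL later in Athena.&lt;/p&gt;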
&lt;p&gt;Now we can create the rest of the infrastructure needed to finish transforming our data. This involves uploading our Spark script to S3, setting up buckets for the Glue temp data and the transformed lake, creating the Glue job with all the parameters it needs to read and write, and finally creating a crawler to catalog the transformed data so we can query it later in Athena.&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;p&gt;We can run the final crawler and take a look at our new schema, which includes all our data:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DVdorxBg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/6120/1%2AG9NLsfcbxKMBZ95LAKnYRw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DVdorxBg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/6120/1%2AG9NLsfcbxKMBZ95LAKnYRw.png" alt="" width="880" height="543"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Analyzing our Data
&lt;/h2&gt;

&lt;p&gt;This is what we have been waiting for from the start. Now that our data is streamed and transformed in near real time, we can use AWS Athena to gather insights from it. Athena works with the Glue schema we created, helping you query and see all the data that is available. You can try any standard SQL query on the dataset:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XcsSbt4P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/7124/1%2AW5SvYnarz7aY3u5RSSpt_Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XcsSbt4P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/7124/1%2AW5SvYnarz7aY3u5RSSpt_Q.png" alt="" width="880" height="459"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s see what the top artists are:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DTu7rj0U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/5336/1%2AatvMLgCxn3Nr8iWp6mg8ow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DTu7rj0U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/5336/1%2AatvMLgCxn3Nr8iWp6mg8ow.png" alt="" width="880" height="620"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s see some of the top tags:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--euqFLur---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/5216/1%2AC_oZAso-cnxgwfCFahfwtg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--euqFLur---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/5216/1%2AC_oZAso-cnxgwfCFahfwtg.png" alt="" width="880" height="590"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We have built out a full data lake and pipeline, and now we can unlock the power of our data in ways we could not before. And because it is created entirely as code with CDK, we can reproduce it in multiple environments quickly and easily.&lt;/p&gt;

&lt;p&gt;Thank you for reading; you can find the full code &lt;a href="https://github.com/xerris/DataLakeDemo"&gt;here&lt;/a&gt;. If you want to find out more about these and other AWS services, feel free to get in touch with us at &lt;a href="https://www.xerris.com/"&gt;Xerris&lt;/a&gt; and we can help you craft innovative cloud-focused solutions for your business.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GCDW0SZK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A5Ugd2NTu8GSIyYfVvAfztw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GCDW0SZK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A5Ugd2NTu8GSIyYfVvAfztw.png" alt="" width="296" height="150"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>datascience</category>
      <category>javascript</category>
      <category>aws</category>
      <category>ios</category>
    </item>
    <item>
      <title>AWS Timestream with Apple HealthKit</title>
      <dc:creator>Jason Wiker</dc:creator>
      <pubDate>Mon, 27 Dec 2021 01:01:00 +0000</pubDate>
      <link>https://dev.to/wiker/aws-timestream-introduction-with-apple-healthkit-grafana-and-aws-cdk-328d</link>
      <guid>https://dev.to/wiker/aws-timestream-introduction-with-apple-healthkit-grafana-and-aws-cdk-328d</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FDmraepv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AdSk3llUj1pFaU2DGxgjgXg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FDmraepv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AdSk3llUj1pFaU2DGxgjgXg.png" alt="" width="296" height="150"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS Timestream is a managed time-series database, something like DynamoDB for time data. It makes it easy to store and analyze trillions of events per day, up to 1,000 times faster and at as little as 1/10th the cost of relational databases. It is designed for IoT, DevOps, or general analytics use cases where you have large amounts of data at various time intervals and need a way to store it without the overhead of server management. To show its power, I have created an example where, if you have an iPhone, it will take the activity data from the Health app and send it into Timestream via an AWS Lambda function. We can then visualize the data with Grafana to view detailed trends in our activity. All the infrastructure is created with the AWS CDK to leverage the benefits of infrastructure as code (IaC). Xerris has other posts on CDK if you want to hear more about the benefits of that tool.&lt;/p&gt;

&lt;p&gt;Here is a quick visual overview of the architecture we are going to build:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--67Cyi1oj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/4616/1%2Au6UtW75aV2EbT2p5MY7-0Q.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--67Cyi1oj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/4616/1%2Au6UtW75aV2EbT2p5MY7-0Q.jpeg" alt="" width="880" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Timestream Lambda Handler
&lt;/h2&gt;

&lt;p&gt;We will start off with the Lambda Node.js function code, which takes in the data and pushes it into Timestream. Note that you will need to package a newer version of the AWS SDK in your node modules, because the version bundled with Lambda does not yet support Timestream.&lt;/p&gt;

&lt;p&gt;First the Timestream client “writeClient” is created, and then we have a route at the path /healthInput that takes all the records and inserts them as measures in Timestream. Measures are the primary value you want to track per record; in this case we are tracking the step count. The other way to attach additional data is through dimensions. Here the only metadata we include is the duration, which we set to hourly, so each entry represents an hour’s worth of steps.&lt;/p&gt;

&lt;p&gt;After that we just tell the Timestream client which database and table to write to, and that covers the extent of this function. You could add a lot more engineering depth here, but I wanted to focus primarily on writing to Timestream and showing what is possible.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
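&lt;p&gt;The handler is embedded above as a gist; in case it doesn’t render, here is the shape of a Timestream write sketched in Node.js. The buildStepRecord helper and the database/table names are illustrative, but the record fields follow the WriteRecords API:&lt;/p&gt;

```javascript
// Build one Timestream record: the step count is the measure, and extra
// metadata (the hourly duration) rides along as a dimension.
function buildStepRecord(steps, timestampMs) {
  return {
    Dimensions: [{ Name: 'duration', Value: 'hourly' }],
    MeasureName: 'steps',
    MeasureValue: String(steps),
    MeasureValueType: 'BIGINT',
    Time: String(timestampMs),
    TimeUnit: 'MILLISECONDS',
  };
}

// The records are then passed to the SDK's TimestreamWrite client,
// roughly like this (names illustrative):
//   writeClient.writeRecords({
//     DatabaseName: 'healthDb',
//     TableName: 'stepsTable',
//     Records: records,
//   }).promise();
const records = [buildStepRecord(4231, 1640390400000)];
console.log(records[0].MeasureValue); // 4231
```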


&lt;h2&gt;
  
  
  Infrastructure Setup with CDK
&lt;/h2&gt;

&lt;p&gt;Now let’s spin up the AWS architecture with CDK, written in C#. This covers the creation of the Lambda, the API Gateway in front of it, and the Timestream database itself. For further details you can see other Xerris blog posts on CDK.&lt;/p&gt;

&lt;p&gt;The one thing I want to call out about Timestream setup is the memory store and the magnetic store. The memory store is a more expensive but faster storage tier, and the magnetic store is a cheaper but slower one. When creating the table you are required to give retention periods for both tiers. Data you send to Timestream starts in the memory store, moves to the magnetic store when the memory retention period ends, and is finally deleted when the magnetic retention period ends.&lt;/p&gt;

&lt;p&gt;So you need to make sure that any data you are adding to Timestream has a timestamp within the memory store retention period, because all data needs to start there. Another thing to note is that currently there is no way to set either of these values in CDK, so if you want to change the defaults you need to update them manually. This will surely improve as time goes on, but Timestream is still a new service, so this is to be expected.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
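&lt;p&gt;To make that constraint concrete, here is a small helper (the retention values used are illustrative, not Timestream defaults) that checks whether a record’s timestamp still falls inside the memory store window and can therefore be written:&lt;/p&gt;

```javascript
// Timestream rejects writes whose timestamp is older than the memory store
// retention period, because all incoming data must land in the memory store.
// Retention is given in hours; times are epoch milliseconds.
function isWritable(timestampMs, memoryRetentionHours, nowMs) {
  const windowMs = memoryRetentionHours * 60 * 60 * 1000;
  return !(nowMs - timestampMs > windowMs);
}
```

&lt;p&gt;For example, with a 24-hour memory store retention, an hourly entry from two days ago would be rejected, so sending a month of step history requires a memory retention long enough to cover it.&lt;/p&gt;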


&lt;h2&gt;
  
  
  Sending the activity data using HealthKit on iOS
&lt;/h2&gt;

&lt;p&gt;Now let’s create a quick iOS app that will send the data to our Lambda. All the code is available, but I won’t go into too much detail. All you need to know is that we authorize the app to access HealthKit, pull the step data for the last month, and then send it off to the Lambda we created. You will need to update the URL of the request on line 58 with the default domain of the API Gateway that was created for you. In a production iOS app you would use packages like Alamofire for the API requests, but here I am using pure Swift and iOS APIs to keep everything simple. This also includes the code for the view, which is written in SwiftUI.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
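&lt;p&gt;To sketch the shape of the data the app sends (the sample format below is a hypothetical stand-in for the HealthKit types, not the actual gist code), grouping raw step samples into the hourly totals our Lambda expects might look like:&lt;/p&gt;

```javascript
// Group raw step samples into hourly totals, matching the hourly
// "duration" entries the Lambda writes to Timestream.
// The sample shape ({ timestamp, steps }) is hypothetical.
function toHourlyTotals(samples) {
  const hourMs = 60 * 60 * 1000;
  const buckets = new Map();
  for (const s of samples) {
    const hourStart = Math.floor(s.timestamp / hourMs) * hourMs;
    buckets.set(hourStart, (buckets.get(hourStart) || 0) + s.steps);
  }
  return [...buckets.entries()].map(([timestamp, steps]) => ({ timestamp, steps }));
}
```

&lt;p&gt;Each resulting { timestamp, steps } pair then becomes one hourly measure in Timestream.&lt;/p&gt;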
&lt;br&gt;
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--heoTVVvN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2340/1%2AS-VK2sO6LBYaqKrKgcO5IA.jpeg" alt="" width="880" height="1904"&gt;

&lt;p&gt;When you run the app on your iPhone you will see that it pulls and shows your steps for the day, and then in the background pushes your data to the Lambda, which populates Timestream.&lt;/p&gt;

&lt;p&gt;Now let’s open up the AWS console and explore the Timestream query editor to see how our data is being populated. Timestream can be queried with standard SQL, so here we are doing a SELECT * on the table, limited to 10 results. We can see the data that came from our iPhone:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Qx5wpzk5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/6788/1%2AbpeHSWM7NcgOGhS3ht7kfA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Qx5wpzk5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/6788/1%2AbpeHSWM7NcgOGhS3ht7kfA.png" alt="" width="880" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Visualizing with Grafana
&lt;/h2&gt;

&lt;p&gt;Now we can start to visualize the data we have created in Timestream. For this I am using Grafana which is a multi-platform open source analytics and interactive visualization web application. It provides charts, graphs, and alerts for the web when connected to supported data sources. We are going to be connecting it to Timestream here to visualize our activity data.&lt;/p&gt;

&lt;p&gt;First, install Grafana on your machine from &lt;a href="https://grafana.com/get"&gt;here&lt;/a&gt; and log in as the admin user. You will also need to install the Timestream plugin from &lt;a href="https://grafana.com/grafana/plugins/grafana-timestream-datasource/installation"&gt;here&lt;/a&gt;. Now go to Configuration -&amp;gt; Data Sources and add our Timestream table. Set up the auth provider, region, and endpoint settings, and we can start playing with the data. Go to the Dashboards area and create a new Graph visualization. For the query, we are going to pull all of the data in our table:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT * FROM $__database.$__table
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now go and play with the data and see the times of day you are most active!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7LGRs6Ti--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/5332/1%2AdUD2H0PFj08K12zawIwoyQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7LGRs6Ti--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/5332/1%2AdUD2H0PFj08K12zawIwoyQ.png" alt="" width="880" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Overall, while Timestream is a new service, I found it very easy to use, and because there are no servers to manage and no capacity to provision, you can just focus on building your applications. It provides high-throughput ingestion, rapid point-in-time queries through its memory store, and fast analytical queries through its cost-optimized magnetic store. On top of that, you pay only for the data you ingest, store, and query.&lt;/p&gt;

&lt;p&gt;Thank you for reading, and you can find the full code &lt;a href="https://github.com/xerris/timestreamDemo"&gt;here&lt;/a&gt;. If you want to find out more about Timestream and other AWS services, feel free to get in touch with us at &lt;a href="https://www.xerris.com/"&gt;Xerris&lt;/a&gt; and we can help you craft innovative cloud-focused solutions for your business.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FDmraepv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AdSk3llUj1pFaU2DGxgjgXg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FDmraepv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AdSk3llUj1pFaU2DGxgjgXg.png" alt="" width="296" height="150"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>database</category>
      <category>ios</category>
      <category>aws</category>
    </item>
    <item>
      <title>EKS Clusters using CDK</title>
      <dc:creator>Jason Wiker</dc:creator>
      <pubDate>Mon, 27 Dec 2021 00:59:00 +0000</pubDate>
      <link>https://dev.to/wiker/create-both-development-and-production-ready-aws-eks-clusters-using-aws-cdk-5fcb</link>
      <guid>https://dev.to/wiker/create-both-development-and-production-ready-aws-eks-clusters-using-aws-cdk-5fcb</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dWkZSKLw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2920/1%2AQEK5_rDJd1bT9Woo-UxCSQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dWkZSKLw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2920/1%2AQEK5_rDJd1bT9Woo-UxCSQ.png" alt="" width="880" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How to maintain a Kubernetes cluster in AWS the easiest way possible.&lt;/p&gt;

&lt;p&gt;As a &lt;a href="https://www.xerris.com"&gt;Xerris&lt;/a&gt; Solutions Architect, I sometimes get customers asking how to maintain a Kubernetes cluster in AWS the easiest way possible. Kubernetes is becoming the de facto standard for running container workloads and provides many benefits over traditional virtual machine-based architectures. It enables you to scale your compute resources seamlessly while providing fast development and deployment cycles and fast rollbacks. The cost of this has traditionally been the high level of administration that comes with maintaining your own cluster, but AWS has a managed service that aims to ease exactly this problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS EKS AND CDK
&lt;/h2&gt;

&lt;p&gt;AWS EKS is a fully managed Kubernetes service that frees you from having to deal with day-to-day cluster maintenance and instead lets you focus on the applications running on your cluster. It is also deeply integrated into the AWS ecosystem with services such as Amazon CloudWatch, Auto Scaling Groups, AWS Identity and Access Management (IAM), and Amazon Virtual Private Cloud (VPC). In addition, AWS automatically applies the latest security patches to your cluster, so you know that any known vulnerabilities are taken care of.&lt;/p&gt;

&lt;p&gt;When it comes to deploying your cluster you have a couple of options as well, including CloudFormation and Terraform, but the tool I am going to use here is the AWS Cloud Development Kit (CDK). It allows you to describe your infrastructure using existing programming languages like C# or Python.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In this post, we will set up both a cost effective development cluster and a highly-available production cluster from scratch using CDK and C#.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we start we need to get our CLI set up. Install the AWS CLI on your machine with a user who has been given AdministratorAccess. This is due to the extensive access that CDK needs; in a production environment this should be limited to only the permissions required for creating the infrastructure. Next we need to install the CDK, which can be done through npm:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install -g aws-cdk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Now let’s create a CDK project:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cdk init app --language dotnet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;You can open up your project in Visual Studio and install the CDK NuGet Package and we can get started. It’s going to be a lot of code but at the end it should all come together into a neat and maintainable way to manage your infrastructure.&lt;/p&gt;
&lt;h2&gt;
  
  
  VPC Setup
&lt;/h2&gt;

&lt;p&gt;First let’s start by setting up the VPC that our cluster will reside in. We are setting it up with large subnets to allow for future growth in our cluster, as well as creating them across 4 AZs for maximum availability.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
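&lt;p&gt;As a concrete illustration of that layout (the 10.0.0.0/16 base and the /19 split are my own example numbers, not values from the gist), carving a /16 VPC into eight equal /19 subnets gives one public and one private subnet in each of the 4 AZs, with 8,192 addresses apiece:&lt;/p&gt;

```javascript
// Split a /16 VPC CIDR into eight /19 blocks (8192 addresses each):
// enough for one public and one private subnet in each of 4 AZs.
// A /19 block spans 32 values in the third octet.
function splitIntoSlash19(vpcCidr) {
  const [a, b] = vpcCidr.split(".").map(Number);
  return Array.from({ length: 8 }, (_, i) => `${a}.${b}.${i * 32}.0/19`);
}
```

&lt;p&gt;Leaving room like this up front avoids having to re-address the VPC later as the cluster grows.&lt;/p&gt;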



&lt;h2&gt;
  
  
  ECR Setup
&lt;/h2&gt;

&lt;p&gt;We will need a place to store our container images so we will create an ECR repository for each deployment environment we plan to run in.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;h2&gt;
  
  
  EKS Setup
&lt;/h2&gt;

&lt;p&gt;Now we have to actually create our cluster. This involves setting up an administrator IAM role to access the cluster as well as outputs that allow us to quickly extract these values and login after the stack has finished creating.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
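&lt;p&gt;Those outputs are what let you assemble the kubectl login command without digging through the console. As a sketch (the cluster name, role ARN, and region below are placeholders), building the command from the stack outputs looks like:&lt;/p&gt;

```javascript
// Assemble the `aws eks update-kubeconfig` command from the values the
// stack outputs after deployment: cluster name, admin role ARN, and region.
function kubeconfigCommand(clusterName, roleArn, region) {
  return `aws eks update-kubeconfig --name ${clusterName} ` +
    `--role-arn ${roleArn} --region ${region}`;
}
```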


&lt;h2&gt;
  
  
  Node Group Setup
&lt;/h2&gt;

&lt;p&gt;Here we are setting up an Abstract class that both our development and production node groups can inherit from. In here we also describe the basic autoscaling policy that both will use.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;h2&gt;
  
  
  Development Node Group Setup
&lt;/h2&gt;

&lt;p&gt;Spot instances are a great way to save money on workloads that can tolerate being terminated without notice. While a production environment might not fit here, a development workload is a perfect situation to leverage spot instances to save cost. Here we are calculating our spot price from the list price and a discount, then launching our cluster with m5.large nodes in an ASG.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
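&lt;p&gt;The spot price calculation above can be sketched like this (the m5.large list price and the discount are placeholder numbers, not the values from the gist):&lt;/p&gt;

```javascript
// Compute a spot bid price from the on-demand list price and a discount,
// rounded to four decimal places as EC2 spot prices are typically quoted.
function spotPrice(listPriceUsd, discount) {
  return Number((listPriceUsd * (1 - discount)).toFixed(4));
}
```

&lt;p&gt;For example, bidding at a 60% discount off a $0.096/hour list price gives a $0.0384/hour spot bid.&lt;/p&gt;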


&lt;h2&gt;
  
  
  Production Node Group Setup
&lt;/h2&gt;

&lt;p&gt;For our production node group we want to maximize availability and reliability. We do this by creating our node groups across all AZs in our VPC. We also define our autoscaler manifest, which will be created after cluster creation.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;h2&gt;
  
  
  Putting it all Together
&lt;/h2&gt;

&lt;p&gt;Here we put all the pieces together into two stacks: a development stack with the spot instance node group, and a production stack with the high availability node group.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;h2&gt;
  
  
  Deploying our Infrastructure
&lt;/h2&gt;

&lt;p&gt;To deploy the development environment all you need to do is run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cdk bootstrap
cdk deploy dev-demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Or the production environment:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cdk bootstrap
cdk deploy prod-demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then once everything is deployed it will output what you need to configure your kubectl to connect to your cluster. The format will be similar to:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws eks update-kubeconfig --name demo--role-arn arn:aws:iam::123456789:role/dev-demo-cluster-cluster-administrator --region us-west-2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then to start the autoscaler:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f autoscaler.yaml&lt;br&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Deleting our Infrastructure
&lt;/h2&gt;

&lt;p&gt;Once you are done you can delete everything by running:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cdk delete dev-demo&lt;br&gt;
cdk delete prod-demo&lt;br&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;Now that we have our cluster up and running, we can look at tools like Flux to deploy our images, or monitoring tools like Prometheus. That is out of scope for this post, but the sky is the limit with Kubernetes, and its extensibility leads to lots of great workflows. Thank you for reading, and you can find the full CDK code &lt;a href="https://github.com/xerris/KubernetesDemo"&gt;here&lt;/a&gt;. If you want to find out more about scaling up your infrastructure using Kubernetes, feel free to get in touch with us at &lt;a href="https://www.xerris.com"&gt;Xerris&lt;/a&gt; and we can help you craft innovative cloud-focused solutions for your business.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GCDW0SZK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A5Ugd2NTu8GSIyYfVvAfztw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GCDW0SZK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A5Ugd2NTu8GSIyYfVvAfztw.png" alt="" width="296" height="150"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>eks</category>
      <category>aws</category>
      <category>kubernetes</category>
      <category>csharp</category>
    </item>
  </channel>
</rss>
