<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Bill "The Vest Guy" Penberthy</title>
    <description>The latest articles on DEV Community by Bill "The Vest Guy" Penberthy (@billvest).</description>
    <link>https://dev.to/billvest</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F463010%2F5a3de33a-2c88-45aa-8c5b-86419fba8680.jpg</url>
      <title>DEV Community: Bill "The Vest Guy" Penberthy</title>
      <link>https://dev.to/billvest</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/billvest"/>
    <language>en</language>
    <item>
      <title>.NET and Amazon EventBridge</title>
      <dc:creator>Bill "The Vest Guy" Penberthy</dc:creator>
      <pubDate>Tue, 06 Sep 2022 14:58:30 +0000</pubDate>
      <link>https://dev.to/aws-builders/net-and-amazon-eventbridge-nf0</link>
      <guid>https://dev.to/aws-builders/net-and-amazon-eventbridge-nf0</guid>
      <description>&lt;p&gt;As briefly mentioned in an earlier post, &lt;a href="https://aws.amazon.com/eventbridge/" rel="noopener noreferrer"&gt;Amazon EventBridge&lt;/a&gt; is a serverless event bus service designed to deliver data from applications and services to a variety of targets. It uses a different methodology than does SNS to distribute events.&lt;/p&gt;

&lt;p&gt;The event producers submit their events to the service bus. From there, a set of rules determines what messages get sent to which recipients. This flow is shown in Figure 1.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F09%2Fimage-8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F09%2Fimage-8.png" alt="Figure 1. Message flow through Amazon EventBridge."&gt;&lt;/a&gt;&lt;em&gt;Figure 1. Message flow through Amazon EventBridge.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The key difference between SNS and EventBridge is that in SNS you send your message to a topic, so the sender makes some decisions about where the message is going. These topics can be very broadly defined and domain-focused so that any application interested in order-related messages subscribes to the order topic, but this still obligates the sender to have some knowledge of the messaging system.&lt;/p&gt;

&lt;p&gt;In EventBridge you simply toss messages into the bus and the rules sort them to the appropriate destination. Thus, unlike SNS, where the messages themselves don’t really matter as much as the topic, in EventBridge you can’t define rules without an understanding of the message to which you want to apply them. With that in mind, we’ll go in a bit of a different order now and start with using EventBridge within a .NET application; that way we’ll have a definition of the message on which we want to apply rules.&lt;/p&gt;

&lt;h2&gt;
  
  
  .NET and Amazon EventBridge
&lt;/h2&gt;

&lt;p&gt;The first step to interacting with EventBridge from within your .NET application is to install the appropriate NuGet package, &lt;strong&gt;AWSSDK.EventBridge&lt;/strong&gt;. This will also install &lt;strong&gt;AWSSDK.Core&lt;/strong&gt;. Once you have the NuGet package, you can access the appropriate APIs by adding several using statements:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

using Amazon.EventBridge;
using Amazon.EventBridge.Model;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You will also need to ensure that you have added:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

using System.Collections.Generic;
using System.Text.Json;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;These namespaces provide access to the &lt;strong&gt;AmazonEventBridgeClient&lt;/strong&gt; class that manages the interaction with the EventBridge service as well as the models that are represented in the client methods. As with SNS, you can manage all aspects of creating the various EventBridge parts such as service buses, rules, endpoints, etc. You can also use the client to push events to the bus, which is what we will do now. Let’s first look at the complete code and then we will walk through the various sections.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

static void Main(string[] args)
{
    var client = new AmazonEventBridgeClient();

    var order = new Order();

    var message = new PutEventsRequestEntry
    {
        Detail = JsonSerializer.Serialize(order),
        DetailType = "CreateOrder",
        EventBusName = "default",
        Source = "ProDotNetOnAWS"
    };

    var putRequest = new PutEventsRequest
    {
        Entries = new List&amp;lt;PutEventsRequestEntry&amp;gt; { message }
    };

    var response = client.PutEventsAsync(putRequest).Result;
    Console.WriteLine(
        $"Request processed with ID of #{response.ResponseMetadata.RequestId}");
    Console.ReadLine();
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The first thing we are doing in the code is newing up our &lt;strong&gt;AmazonEventBridgeClient&lt;/strong&gt; so that we can use the &lt;strong&gt;PutEventsAsync&lt;/strong&gt; method, which is the method used to send the event to EventBridge. That method expects a &lt;strong&gt;PutEventsRequest&lt;/strong&gt; object that has an &lt;strong&gt;Entries&lt;/strong&gt; field, a list of &lt;strong&gt;PutEventsRequestEntry&lt;/strong&gt; objects. There should be a &lt;strong&gt;PutEventsRequestEntry&lt;/strong&gt; object for every event that you want to be processed by EventBridge, so a single push to EventBridge can include multiple events.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Tip:&lt;/strong&gt; One model of event-based architecture is to use multiple small messages that imply different items of interest. Processing an order, for example, may result in a message regarding the order itself as well as messages regarding each of the products included in the order so that the inventory count can be managed correctly. This means the Product domain doesn’t listen for order messages; it only pays attention to product messages. Each of these approaches has its own advantages and disadvantages.&lt;/em&gt;&lt;/p&gt;
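&lt;p&gt;As a sketch of how the tip above might look in code, the order flow could put the order event and one per-product event into EventBridge in a single call. The “ProductOrdered” detail type here is a hypothetical example, and note that a single &lt;strong&gt;PutEvents&lt;/strong&gt; call accepts a limited number of entries (10 at the time of writing), so larger batches need to be split.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

var client = new AmazonEventBridgeClient();
var order = new Order();

var entries = new List&amp;lt;PutEventsRequestEntry&amp;gt;
{
    new PutEventsRequestEntry
    {
        Detail = JsonSerializer.Serialize(order),
        DetailType = "CreateOrder",
        EventBusName = "default",
        Source = "ProDotNetOnAWS"
    }
};

// One entry per product so the Product domain only has
// to pay attention to product messages
foreach (var orderDetail in order.OrderDetails)
{
    entries.Add(new PutEventsRequestEntry
    {
        Detail = JsonSerializer.Serialize(orderDetail),
        DetailType = "ProductOrdered",
        EventBusName = "default",
        Source = "ProDotNetOnAWS"
    });
}

// All of the events are submitted in one PutEvents call
var response = client.PutEventsAsync(
    new PutEventsRequest { Entries = entries }).Result;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;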

&lt;p&gt;The &lt;strong&gt;PutEventsRequestEntry&lt;/strong&gt; contains the information to be sent. It has the following properties:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Detail&lt;/strong&gt; – a valid JSON object that cannot be more than 100 levels deep.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DetailType&lt;/strong&gt; – a string that provides information about the kind of detail contained within the event.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EventBusName&lt;/strong&gt; – a string that determines the appropriate event bus to use. If absent, the event will be processed by the default bus.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resources&lt;/strong&gt; – a List that contains ARNs which the event primarily concerns. May be empty.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Source&lt;/strong&gt; – a string that defines the source of the event.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Time&lt;/strong&gt; – a string that sets the time stamp of the event. If not provided, EventBridge will use the time stamp of when the Put call was processed.&lt;/li&gt;
&lt;/ul&gt;
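&lt;p&gt;For completeness, a single entry that sets all of the properties above might look like the following sketch. The ARN is a placeholder, and note that the .NET SDK exposes &lt;strong&gt;Time&lt;/strong&gt; as a DateTime rather than a raw string.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

var entry = new PutEventsRequestEntry
{
    Detail = "{\"Id\":1001}",
    DetailType = "CreateOrder",
    EventBusName = "default",
    Resources = new List&amp;lt;string&amp;gt;
    {
        "arn:aws:sns:us-west-2:123456789012:ProDotNetOnAWS"
    },
    Source = "ProDotNetOnAWS",
    Time = DateTime.UtcNow
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;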

&lt;p&gt;In our code, we only set the &lt;em&gt;Detail&lt;/em&gt;, &lt;em&gt;DetailType&lt;/em&gt;, &lt;em&gt;EventBusName&lt;/em&gt;, and &lt;em&gt;Source&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;This code is set up in a console application, so running the application gives results similar to those shown in Figure 2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F09%2Fimage-9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F09%2Fimage-9.png" alt="Figure 2. Console application that sent a message through EventBridge"&gt;&lt;/a&gt;&lt;em&gt;Figure 2. Console application that sent a message through EventBridge.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We then used &lt;a href="https://www.telerik.com/fiddler" rel="noopener noreferrer"&gt;Progress Telerik Fiddler&lt;/a&gt; to view the request so we can see the message that was sent. The JSON from this message is shown below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
    "Entries":
    [
        {
            "Detail": "{\"Id\":0,
                        \"OrderDate\":
                                \"0001-01-01T00:00:00\",
                        \"CustomerId\":0,
                        \"OrderDetails\":[]}",
            "DetailType": "CreateOrder",
            "EventBusName": "default",
            "Source": "ProDotNetOnAWS"
        }
    ]
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now that we have the message that we want to process in EventBridge, the next step is to set up EventBridge. At a high level, configuring EventBridge in the AWS console is simple.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring EventBridge in the Console
&lt;/h2&gt;

&lt;p&gt;You can find Amazon EventBridge by searching in the console or by going into the &lt;em&gt;Application Integration&lt;/em&gt; group. Your first step is to decide whether you wish to use your account’s default event bus or create a new one. Creating a custom event bus is simple, as all you need to provide is a name, but we will use the default event bus.&lt;/p&gt;

&lt;p&gt;Before going any further, you should translate the event that you sent to the event that EventBridge will be processing. You do this by going into Event buses and selecting the default event bus. This will bring you to the Event bus detail page. On the upper right, you will see a button &lt;strong&gt;Send events&lt;/strong&gt;. Clicking this button will bring you to the Send events page where you can configure an event. Using the values from the JSON we looked at earlier, fill out the values as shown in Figure 3.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F09%2Fimage-10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F09%2Fimage-10.png" alt="Figure 3. Getting the “translated” event for EventBridge"&gt;&lt;/a&gt;&lt;em&gt;Figure 3. Getting the “translated” event for EventBridge.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Once filled out, clicking the &lt;strong&gt;Review&lt;/strong&gt; button brings up a window with a JSON object. Copy and paste this JSON as we will use it shortly. The JSON that we got is displayed below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
  "version": "0",
  "detail-type": "CreateOrder",
  "source": "ProDotNetOnAWS",
  "account": "992271736046",
  "time": "2022-08-21T19:48:09Z",
  "region": "us-west-2",
  "resources": [],
  "detail": "{\"Id\":0,\"OrderDate\":\"0001-01-01T00:00:00\",\"CustomerId\":0,\"OrderDetails\":[]}"
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The next step is to create a rule that will evaluate the incoming messages and route them to the appropriate recipient. To do so, click on the &lt;strong&gt;Rules&lt;/strong&gt; menu item and then the &lt;strong&gt;Create rule&lt;/strong&gt; button. This will bring up &lt;em&gt;Step 1 of the Create rule wizard&lt;/em&gt;. Here, you define the rule by giving it a name that must be unique per event bus, select the event bus on which the rule will run, and choose between &lt;em&gt;Rule with an event pattern&lt;/em&gt; and &lt;em&gt;Schedule&lt;/em&gt;. Selecting to create a schedule rule will create a rule that is run regularly on a specified schedule. We will choose to create a rule with an event pattern.&lt;/p&gt;

&lt;p&gt;Step 2 of the wizard allows you to select the &lt;strong&gt;Event source&lt;/strong&gt;. You have three options: &lt;em&gt;AWS events or EventBridge partner events&lt;/em&gt;, &lt;em&gt;Other&lt;/em&gt;, or &lt;em&gt;All events&lt;/em&gt;. The first option references the ability to set rules that identify specific AWS or EventBridge partner services such as Salesforce, GitHub, or Stripe, while the last option allows you to set up destinations that will be forwarded every event that comes through the event bus. We typically see this last option when there is a requirement to log events in a database as they come in, or some similar special business rule. We will select &lt;strong&gt;Other&lt;/strong&gt; so that we can handle custom events from our application(s).&lt;/p&gt;

&lt;p&gt;You next can add in a sample event. You don’t have to take this action, but it is recommended when writing and testing the event pattern or any filtering criteria. Since we have a sample message, we will select &lt;strong&gt;Enter my own&lt;/strong&gt; and paste the sample event into the box as shown in Figure 4.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F09%2Fimage-11.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F09%2Fimage-11.png" alt="Figure 4. Adding a Sample Event when configuring EventBridge"&gt;&lt;/a&gt;&lt;em&gt;Figure 4. Adding a Sample Event when configuring EventBridge.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Be warned, however, that if you just paste the event directly into the sample event box it will not work, as the matching algorithms will reject it as invalid without an id added into the JSON, as highlighted by the golden arrow in Figure 4.&lt;/p&gt;

&lt;p&gt;Once you have your sample event input, the next step is to create the &lt;strong&gt;Event pattern&lt;/strong&gt; that will determine where this message should be sent. Since we are using a custom event, select the &lt;strong&gt;Custom patterns (JSON editor)&lt;/strong&gt; option. This will bring up a JSON editor window in which you enter your rule. There is a drop-down of helper functions that will help you put the proper syntax into the window but, of course, there is no option for simple matching – you have to know that syntax already. Fortunately, it is identical to the rule itself, so an event pattern that will select every event that has a detail-type of “CreateOrder” is:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
  "detail-type": ["CreateOrder"]
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Adding this into the JSON editor and selecting the &lt;strong&gt;Test pattern&lt;/strong&gt; button will validate that the sample event matches the event pattern. Once you have successfully tested your pattern, select the &lt;strong&gt;Next&lt;/strong&gt; button to continue.&lt;/p&gt;
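&lt;p&gt;Event patterns are not limited to a single field. Fields in a pattern are combined with AND, while the values in each array are combined with OR. So a pattern that selects two detail types (the second one hypothetical), but only from our source, would look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
  "source": ["ProDotNetOnAWS"],
  "detail-type": ["CreateOrder", "UpdateOrder"]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;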

&lt;p&gt;You should now be on the &lt;em&gt;Step 3 Select Target(s)&lt;/em&gt; screen, where you configure the targets that will receive the event. There are three different target types that you can select from: &lt;em&gt;EventBridge event bus&lt;/em&gt;, &lt;em&gt;EventBridge API Destination&lt;/em&gt;, or &lt;em&gt;AWS Service&lt;/em&gt;. Clicking on each of the different target types will change the set of information that you will need to provide to detail the target. We will examine two of these in more detail, the EventBridge API destination and the AWS service, starting with the AWS service.&lt;/p&gt;

&lt;p&gt;Selecting the &lt;em&gt;AWS service&lt;/em&gt; radio button brings up a drop-down list of AWS services that can be targeted. Select the &lt;strong&gt;SNS&lt;/strong&gt; target option. This will bring up a drop-down list of the available topics. Select the topic we worked with in the previous section and click the &lt;strong&gt;Next&lt;/strong&gt; button. You will then have the option to configure &lt;em&gt;Tags&lt;/em&gt; and can then &lt;strong&gt;Create rule&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Once we had this rule configured, we re-ran our code to send an event from the console. Within several seconds we received the email: the event was sent from our console application running on our local machine to EventBridge, where the rule filtered the event to SNS, which then created and sent the email containing the order information that we submitted from the console.&lt;/p&gt;

&lt;p&gt;Now that we have verified the rule the fun way, let’s go back into it and make it more realistic. You can edit the targets for a rule by going into &lt;strong&gt;Rules&lt;/strong&gt; from the Amazon EventBridge console and selecting the rule that you want to edit. This will bring up the details page. Click on the &lt;strong&gt;Targets&lt;/strong&gt; tab and then click the &lt;em&gt;Edit&lt;/em&gt; button. This will bring you back to the &lt;em&gt;Step 3 Select Target(s)&lt;/em&gt; screen. From here you can choose to add an additional target (you can have up to 5 targets for each rule) or replace the target that pointed to the SNS service. We chose to replace our existing target.&lt;/p&gt;

&lt;p&gt;Since we are looking at using EventBridge to communicate between various microservices in our application, we will configure the target to go to a custom endpoint. Doing so requires that we choose a &lt;strong&gt;Target type&lt;/strong&gt; of &lt;em&gt;EventBridge API destination&lt;/em&gt;. We will then choose to &lt;em&gt;Create a new API destination&lt;/em&gt;, which will provide all of the destination fields that we need to configure. These fields are listed below.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Name&lt;/strong&gt; – the name of the API destination. Destinations can be reused in different rules, so make sure the name is clear.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Description&lt;/strong&gt; – optional value describing the destination.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API destination endpoint&lt;/strong&gt; – the URL to the endpoint which will receive the event.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HTTP Method&lt;/strong&gt; – the HTTP method used to send the event, can be any of the HTTP methods.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Invocation rate limit per second&lt;/strong&gt; – an optional value, defaulted to 300, that caps the number of invocations per second. Events arriving faster than this limit are throttled, so a value smaller than your expected event rate means that some events may end up not being delivered.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The next section to configure is the &lt;em&gt;Connection&lt;/em&gt;. The connection contains information about authorization as every API request must have some kind of security method enabled. &lt;em&gt;Connections&lt;/em&gt; can be reused as well, and there are three different Authorization types supported. These types are:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Basic (Username/Password)&lt;/strong&gt; – where a username and password combination is entered into the connection definition.&lt;br&gt;
&lt;strong&gt;OAuth Client Credentials&lt;/strong&gt; – where you enter the OAuth configuration information such as Authorization endpoint, Client ID, and Client secret.&lt;br&gt;
&lt;strong&gt;API Key&lt;/strong&gt; – which adds up to 5 key/value pairs in the header.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Once you have configured your authorization protocol you can select the &lt;strong&gt;Next&lt;/strong&gt; button to once again complete moving through the EventBridge rules creation UI.&lt;/p&gt;

&lt;p&gt;There are two approaches that are commonly used when mapping rules to target API endpoints. The first is a single endpoint per type of expected message. This means that, for example, if you were expecting “OrderCreated” and “OrderUpdated” messages then you would create two separate endpoints, one to handle each message. The second approach is to create a generic endpoint for your service to which all inbound EventBridge messages are sent; the code within the service then evaluates each message and manages it from there.&lt;/p&gt;
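&lt;p&gt;A minimal sketch of the second, generic-endpoint approach using an ASP.NET Core minimal API might look like the following; the route and the handler bodies are assumptions for illustration.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

var app = WebApplication.CreateBuilder(args).Build();

// Single endpoint that receives every EventBridge event
// and routes it based on the detail-type field
app.MapPost("/events", async (HttpRequest request) =&amp;gt;
{
    using var doc = await JsonDocument.ParseAsync(request.Body);
    var detailType = doc.RootElement
        .GetProperty("detail-type").GetString();

    switch (detailType)
    {
        case "OrderCreated":
            // hand off to the order-created handler
            break;
        case "OrderUpdated":
            // hand off to the order-updated handler
            break;
    }

    return Results.Ok();
});

app.Run();

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;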

&lt;h2&gt;
  
  
  Modern Event Infrastructure Creation
&lt;/h2&gt;

&lt;p&gt;So far, we have managed all the event management through the console, creating topics and subscriptions in SNS and rules, connections, and targets in EventBridge. However, taking this approach in the real world will be extremely painful. Instead, modern applications are best served by modern methods of creating services; methods that can be run on their own without any human intervention. There are two approaches that we want to touch on now, Infrastructure-as-Code (IaC) and in-application code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Infrastructure-as-Code
&lt;/h2&gt;

&lt;p&gt;Using &lt;a href="https://aws.amazon.com/cloudformation/" rel="noopener noreferrer"&gt;AWS CloudFormation&lt;/a&gt; or the &lt;a href="https://aws.amazon.com/cdk/" rel="noopener noreferrer"&gt;AWS Cloud Development Kit&lt;/a&gt; within the build and release process allows developers to manage the growth of their event infrastructure as their usage of events grows. Typically, the work breaks down so that the teams building the systems that send events are responsible for creating the infrastructure required for sending, while the teams building the systems that listen for events manage the creation of the receiving infrastructure. Thus, if you are planning on using SNS, the sending system would have the responsibility for adding the applicable topic(s) while the receiving system would be responsible for adding the appropriate subscription(s) to the topics in which they are interested.&lt;/p&gt;
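&lt;p&gt;As a small illustration of that work breakdown, a sending system’s AWS CDK stack (in C#) might create the topic while the receiving system’s stack adds its subscription. This is a sketch assuming the Amazon.CDK.AWS.SNS constructs and a placeholder endpoint URL; in practice the receiving stack would look up the existing topic rather than share a variable.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

// Sending system's stack: owns the topic
var ordersTopic = new Topic(this, "OrdersTopic", new TopicProps
{
    TopicName = "Order"
});

// Receiving system's stack: owns its subscription
ordersTopic.AddSubscription(
    new UrlSubscription("https://example.com/events/order"));

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;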

&lt;p&gt;Using IaC to build out your event infrastructure allows you to scale your use of events easily and quickly. It also makes it easier to manage any changes that you may feel are necessary, as it is very common for the messaging approach to be adjusted several times as you determine the level of messaging that is appropriate for the interactions needed within your overall system.&lt;/p&gt;

&lt;h2&gt;
  
  
  In-Application Code
&lt;/h2&gt;

&lt;p&gt;In-application code is a completely different approach from IaC, as the code to create the infrastructure resides within your application. This approach is commonly used in “configuration-oriented design”, where configuration is used to define the role(s) that each application plays. An example of a configuration that could be used when an organization is using SNS is below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
     "sendrules": [{"name": "Order", "key": "OrdersTopic"}],
     "receiverules": [{"name": "ProductUpdates",
                       "key": "Products",
                       "endpoint": "$URL/events/product"}]
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The code in the application would then ensure that every entry in the &lt;em&gt;sendrules&lt;/em&gt; property has the appropriate topic created; using the example above, the &lt;em&gt;name&lt;/em&gt; value represents the topic name and the &lt;em&gt;key&lt;/em&gt; value represents the value that will be used within the application to map to the “Order” topic in SNS. The code in the application would then evaluate the &lt;em&gt;receiverules&lt;/em&gt; value and create subscriptions for each entry.&lt;/p&gt;
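&lt;p&gt;A sketch of what that code might do, using the &lt;strong&gt;AWSSDK.SimpleNotificationService&lt;/strong&gt; package, is below; the &lt;em&gt;config&lt;/em&gt; object holding the parsed rules is a hypothetical stand-in.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

var sns = new AmazonSimpleNotificationServiceClient();

// CreateTopicAsync is idempotent - if a topic with this
// name already exists its ARN is simply returned
foreach (var rule in config.SendRules)
{
    await sns.CreateTopicAsync(
        new CreateTopicRequest { Name = rule.Name });
}

// Subscribe this application's endpoint to each topic
// it wants to receive
foreach (var rule in config.ReceiveRules)
{
    var topic = await sns.CreateTopicAsync(
        new CreateTopicRequest { Name = rule.Name });
    await sns.SubscribeAsync(new SubscribeRequest
    {
        TopicArn = topic.TopicArn,
        Protocol = "https",
        Endpoint = rule.Endpoint
    });
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;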

&lt;p&gt;This seems like a lot of extra work, but for environments that do not support IaC, this may be the easiest way to allow developers to manage the building of the event infrastructure. We have seen this approach built as a framework library included in every application that used events, where every application provided a configuration file that represented the messages it was sending and receiving. This framework library would evaluate the service(s) to see if there was anything that needed to be added and, if so, add it.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>dotnet</category>
      <category>cloud</category>
      <category>architecture</category>
    </item>
    <item>
      <title>.NET and Amazon Simple Notification Service (SNS)</title>
      <dc:creator>Bill "The Vest Guy" Penberthy</dc:creator>
      <pubDate>Fri, 02 Sep 2022 14:17:26 +0000</pubDate>
      <link>https://dev.to/aws-builders/net-and-amazon-simple-notification-service-sns-334c</link>
      <guid>https://dev.to/aws-builders/net-and-amazon-simple-notification-service-sns-334c</guid>
      <description>&lt;p&gt;SNS, as you can probably guess from its name, is a straightforward service that uses pub\sub messaging to deliver messages. Pub\Sub, or Publish\Subscribe, messaging is an asynchronous communication method. This model includes the publisher who sends the data, a subscriber that receives the data, and the message broker that handles the coordination between the publisher and subscriber. In this case, &lt;a href="https://aws.amazon.com/sns"&gt;Amazon SNS&lt;/a&gt; is the message broker because it handles the message transference from publisher to subscriber.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note&lt;/strong&gt; – The language used when looking at events and messaging as we did in the &lt;a href="https://dev.to/aws-builders/modern-net-application-design-5gp9"&gt;previous post&lt;/a&gt; can be confusing. Messaging is the pattern we discussed last post. Messages are the data being sent and are part of both events and messaging. The term “message” is considered interchangeable with notification or event – even to the point where you will see articles about the messaging pattern that refer to the messages as events.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The main responsibility of the message broker is to determine which subscribers should be sent what messages. It does this using a topic. A topic can be thought of as a category that describes the data contained within the message. These topics are defined based on your business. There will be times that a broad approach is best, so perhaps topics for “Order” and “Inventory” where all messages for each topic are sent. Thus, the order topic could include messages for “Order Placed” and “Order Shipped” and the subscribers will get all of those messages. There may be other times where a very narrow focus is more appropriate, in which case you may have an “Order Placed” topic and an “Order Shipped” topic where systems can subscribe to them independently. Both approaches have their strengths and weaknesses.&lt;/p&gt;

&lt;p&gt;When you look at the concept of messaging, where one message has one recipient, the advantage that a service like SNS offers is the ability to distribute a single message to multiple recipients as shown in Figure 1, which is one of the key requisites of event-based architecture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Db6jArw0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/09/image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Db6jArw0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/09/image.png" alt="Figure 1. Pub\Sub pattern using Amazon SNS" width="825" height="422"&gt;&lt;/a&gt;&lt;em&gt;Figure 1. Pub\Sub pattern using Amazon SNS&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now that we have established that SNS can be effectively used when building in an event-based architecture, let’s go do just that!&lt;/p&gt;

&lt;h2&gt;
  
  
  Using AWS Toolkit for Visual Studio
&lt;/h2&gt;

&lt;p&gt;If you’re a Visual Studio user, you can do a lot of the configuration and management through the toolkit. Going into Visual Studio and examining the AWS Explorer will show that one of the options is Amazon SNS. At this point, you will not be able to expand the service in the tree control because you have not yet started to configure it. Right-clicking on the service will bring up a menu with three options, &lt;em&gt;Create topic&lt;/em&gt;, &lt;em&gt;View subscriptions&lt;/em&gt;, and &lt;em&gt;Refresh&lt;/em&gt;. Let’s get started by creating our first topic. Click on the &lt;strong&gt;Create topic&lt;/strong&gt; link and create a topic. We created a topic named “ProDotNetOnAWS” – it seems to be a trend... Once you save the topic you will see it show up in the AWS Explorer.&lt;/p&gt;

&lt;p&gt;Right-click on the newly created topic and select to &lt;strong&gt;View topic&lt;/strong&gt;. This will add the topic details screen into the main window as shown in Figure 2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--syI0Natm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/09/image-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--syI0Natm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/09/image-1.png" alt="Figure 2. SNS topic details screen in the Toolkit for Visual Studio" width="825" height="346"&gt;&lt;/a&gt;&lt;em&gt;Figure 2. SNS topic details screen in the Toolkit for Visual Studio&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In the details screen, you will see a button to &lt;strong&gt;Create New Subscription&lt;/strong&gt;. Click this button to bring up the &lt;em&gt;Create New Subscription&lt;/em&gt; popup window. There are two fields that you can complete, &lt;em&gt;Protocol&lt;/em&gt; and &lt;em&gt;Endpoint&lt;/em&gt;. The protocol field is a dropdown and contains various choices.&lt;/p&gt;

&lt;h3&gt;
  
  
  HTTP or HTTPS Protocol Subscription
&lt;/h3&gt;

&lt;p&gt;The first two of these choices are &lt;strong&gt;HTTP&lt;/strong&gt; and &lt;strong&gt;HTTPS&lt;/strong&gt;. Choosing one of these protocols will result in SNS making an HTTP (or HTTPS) POST to the configured endpoint. This POST will result in a JSON document with the following name-value pairs.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Message&lt;/strong&gt; – The content of the message that was published to the topic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MessageId&lt;/strong&gt; – A universally unique identifier for each message that was published.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Signature&lt;/strong&gt; – A Base64-encoded signature of the Message, MessageId, Subject, Type, Timestamp, and TopicArn values.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SignatureVersion&lt;/strong&gt; – The version of the signature used.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SigningCertUrl&lt;/strong&gt; – The URL of the certificate that was used to sign the message.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Subject&lt;/strong&gt; – The optional subject parameter used when the notification was published to a topic. Where the topic is broadly based, the subject can be used to narrow down the subscriber audience.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Timestamp&lt;/strong&gt; – The time (GMT) when the notification was published.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TopicArn&lt;/strong&gt; – The Amazon Resource Name (ARN) of the topic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Type&lt;/strong&gt; – The type of message being sent. For an SNS message, this type is &lt;em&gt;Notification&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At a minimum, your subscribing system will care about the &lt;em&gt;Message&lt;/em&gt; field, as it contains the information that was provided by the publisher. One of the biggest advantages of using an HTTP or HTTPS protocol subscription is that the subscribed system does not have to do anything other than accept the message that is submitted. There is no special library to consume and no special interactions that must happen – just an endpoint that accepts requests.&lt;/p&gt;
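
&lt;p&gt;To make that concrete, here is a sketch of what such an endpoint could look like as an ASP.NET Core minimal API handler. The route name and the processing logic are our own inventions, not anything SNS requires, and the code assumes &lt;em&gt;System.Text.Json&lt;/em&gt; for parsing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app.MapPost("/sns-endpoint", async (HttpRequest request) =&amp;gt;
{
    // Read the raw JSON document that SNS POSTs to the endpoint
    using var reader = new StreamReader(request.Body);
    var body = await reader.ReadToEndAsync();

    using var doc = JsonDocument.Parse(body);
    var root = doc.RootElement;

    // The Type field tells us what kind of SNS message this is
    if (root.GetProperty("Type").GetString() == "Notification")
    {
        var message = root.GetProperty("Message").GetString();
        // Hand the message off to your own processing logic here
    }

    return Results.Ok();
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;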

&lt;p&gt;There are some considerations to keep in mind as you think about using SNS to manage your event notifications. There are several different ways to manage the receipt of these notifications. The first is to create a single endpoint for each topic to which you subscribe. This keeps each endpoint very discrete and responsible for handling only one thing, which is usually considered a plus in the programming world. However, it also means that the subscribing service takes on external dependencies on multiple endpoints. Changing an endpoint URL, for example, will now require coordination across multiple systems.&lt;/p&gt;

&lt;p&gt;On the other hand, you can create a single endpoint that acts as the recipient of messages across multiple topics. The code within the endpoint identifies each message and forwards it to the appropriate process. This approach abstracts away any rework within the system, as all of those changes happen below this single, broadly scoped endpoint. We have seen both approaches work successfully; it really comes down to your own business needs and how you see your systems evolving as you move forward.&lt;/p&gt;

&lt;h3&gt;
  
  
  Other Protocol Subscriptions
&lt;/h3&gt;

&lt;p&gt;There are other protocol subscriptions that are available in the toolkit. The next two in the list are &lt;em&gt;Email&lt;/em&gt; and &lt;em&gt;Email (JSON)&lt;/em&gt;. Notifications sent under these protocols are sent to the email address that is entered as the endpoint value. The difference is in the body of the email: with Email, the Message field of the notification becomes the body, while with Email (JSON) the body is a JSON object very similar to the one used when working with the HTTP/HTTPS protocols. There are some business-to-business needs for this, such as sending a confirmation to a third party upon processing an order; but you will generally find any discussion of these two protocols under Application-to-Person (A2P) in the documentation and examples.&lt;/p&gt;

&lt;p&gt;The next protocol that is available in the toolkit is &lt;em&gt;Amazon SQS&lt;/em&gt;. Amazon Simple Queue Service (SQS) is a queue service that follows the messaging pattern that we discussed earlier where one message has one recipient and one recipient only.&lt;/p&gt;

&lt;p&gt;The last protocol available in the toolkit is &lt;em&gt;Lambda&lt;/em&gt;. Choosing this protocol means that a specified Lambda function will be called with the message payload set as an input parameter. This option makes a great deal of sense if you are building a system based on serverless functions. Of course, you could also use the HTTP/HTTPS protocol and call an endpoint that fronts the Lambda function; but the direct approach removes much of that intermediate processing.&lt;/p&gt;
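
&lt;p&gt;To give a feel for the Lambda side, the following is a minimal handler sketch based on the &lt;em&gt;Amazon.Lambda.SNSEvents&lt;/em&gt; NuGet package; the class and method names here are our own choices.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using Amazon.Lambda.Core;
using Amazon.Lambda.SNSEvents;

public class Function
{
    // SNS invokes this handler with the message payload as input
    public void Handler(SNSEvent snsEvent, ILambdaContext context)
    {
        foreach (var record in snsEvent.Records)
        {
            context.Logger.LogLine($"Subject: {record.Sns.Subject}");
            context.Logger.LogLine($"Message: {record.Sns.Message}");
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;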

&lt;p&gt;Choosing either the SQS or Lambda protocols will activate the &lt;em&gt;Add permission for SNS topic to send messages to AWS resources&lt;/em&gt; checkbox as shown in Figure 3.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EW09IMFT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/09/image-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EW09IMFT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/09/image-2.png" alt="Figure 3. Create New Subscription window in the Toolkit for Visual Studio" width="759" height="426"&gt;&lt;/a&gt;&lt;em&gt;Figure 3. Create New Subscription window in the Toolkit for Visual Studio&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Checking this box will create the necessary permissions allowing the topic to interact with AWS resources. This is not necessary if you are using HTTP/HTTPS or Email.&lt;/p&gt;

&lt;p&gt;For the sake of this walk-through, we used an approach that is ridiculous for enterprise systems; we selected the Email (JSON) protocol. Why? So we could show you the next few steps in a way that you can easily duplicate. This is important because all you can do in the Toolkit is create the topic and the subscription. However, as shown in Figure 4, this leaves the subscription in a &lt;em&gt;PendingConfirmation&lt;/em&gt; state.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CiArmRz0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/09/image-3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CiArmRz0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/09/image-3.png" alt="Figure 4. Newly created SNS topic subscription in Toolkit for Visual Studio" width="825" height="266"&gt;&lt;/a&gt;&lt;em&gt;Figure 4. Newly created SNS topic subscription in Toolkit for Visual Studio&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Subscriptions in this state are not yet fully configured, as they need to be confirmed before they are able to start receiving messages. Confirmation happens after a &lt;em&gt;SubscriptionConfirmation&lt;/em&gt; message is sent to the endpoint, which happens automatically when creating a new subscription through the Toolkit. The JSON we received in email is shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Type" : "SubscriptionConfirmation",
  "MessageId" : "b1206608-7661-48b1-b82d-b1a896797605",
  "Token" : "TOKENVALUE", 
  "TopicArn" : "arn:aws:sns:xxxxxxxxx:ProDotNetOnAWS",
  "Message" : "You have chosen to subscribe to the topic arn:aws:sns:xxxxxxx:ProDotNetOnAWS.\nTo confirm the subscription, visit the SubscribeURL included in this message.",
  "SubscribeURL" : "https://sns.us-west-2.amazonaws.com/?Action=ConfirmSubscription&amp;amp;TopicArn=xxxxxxxx",
  "Timestamp" : "2022-08-20T19:18:27.576Z",
  "SignatureVersion" : "1",
  "Signature" : "xxxxxxxxxxxxx==",
  "SigningCertURL" : "https://sns.us-west-2.amazonaws.com/SimpleNotificationService-56e67fcb41f6fec09b0196692625d385.pem"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;em&gt;Message&lt;/em&gt; indicates the action that needs to be taken – you need to visit the &lt;strong&gt;SubscribeURL&lt;/strong&gt; that is included in the message. Clicking that link will bring you to a confirmation page in your browser like that shown in Figure 5.&lt;/p&gt;
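
&lt;p&gt;When the endpoint is one of your own services rather than an email inbox, the confirmation can also be done programmatically. Here is a sketch using the SDK client, with placeholder values taken from the &lt;em&gt;SubscriptionConfirmation&lt;/em&gt; payload:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var client = new AmazonSimpleNotificationServiceClient();

// TopicArn and Token come from the SubscriptionConfirmation message
var response = await client.ConfirmSubscriptionAsync(
    new ConfirmSubscriptionRequest
    {
        TopicArn = "arn of the topic being confirmed",
        Token = "Token value from the SubscriptionConfirmation message"
    });

Console.WriteLine($"Subscription ARN: {response.SubscriptionArn}");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;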

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4vZ43ZF1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/09/image-4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4vZ43ZF1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/09/image-4.png" alt="Figure 5. Subscription confirmation message displayed in browser" width="825" height="157"&gt;&lt;/a&gt;&lt;em&gt;Figure 5. Subscription confirmation message displayed in browser&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Refreshing the topic in the Toolkit will show you that the &lt;em&gt;PendingConfirmation&lt;/em&gt; message is gone and has been replaced with a real &lt;em&gt;Subscription ID&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using the Console
&lt;/h2&gt;

&lt;p&gt;The process for using the console is very similar to the process we just walked through in the Toolkit. You can get to the service by searching in the console for Amazon SNS or by going into the &lt;em&gt;Application Integration&lt;/em&gt; group under the services menu. Once there, select &lt;strong&gt;Create topic&lt;/strong&gt;. At this point, you will start to see some differences in the experiences.&lt;/p&gt;

&lt;p&gt;The first is that you have a choice of topic &lt;em&gt;Type&lt;/em&gt;, as shown in Figure 6: &lt;em&gt;FIFO (first-in, first-out)&lt;/em&gt; or &lt;em&gt;Standard&lt;/em&gt;, with FIFO selected by default. Selecting FIFO means that the service will follow the messaging architectural approach that we went over earlier, with exactly-once message delivery and strictly preserved message ordering. The Standard type, on the other hand, supports “at least once message delivery” and best-effort ordering, trading those strict guarantees for higher throughput and a broader set of subscription options.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IzgbFHMQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/09/image-5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IzgbFHMQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/09/image-5.png" alt="Figure 6. Creating an SNS topic in the AWS Console" width="825" height="428"&gt;&lt;/a&gt;&lt;em&gt;Figure 6. Creating an SNS topic in the AWS Console&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Figure 6 also displays a checkbox labeled &lt;em&gt;Content-based message deduplication&lt;/em&gt;. This selection is only available when the &lt;em&gt;FIFO&lt;/em&gt; type is selected. When checked, SNS generates the deduplication value itself by hashing the body of each message; otherwise, the publisher must supply a unique deduplication ID with every message. SNS uses that value to determine whether a particular message has already been delivered.&lt;/p&gt;
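
&lt;p&gt;To see how this plays out when publishing from .NET, here is a sketch of a &lt;strong&gt;PublishRequest&lt;/strong&gt; aimed at a FIFO topic; the values are placeholders. &lt;em&gt;MessageGroupId&lt;/em&gt; is always required for FIFO topics, while &lt;em&gt;MessageDeduplicationId&lt;/em&gt; is only needed when content-based deduplication is turned off.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var client = new AmazonSimpleNotificationServiceClient();

var request = new PublishRequest
{
    TopicArn = "arn of a topic ending in .fifo",
    Message = "Order 1234 was placed",

    // Messages sharing a group ID are delivered in order
    MessageGroupId = "order-1234",

    // Only required when content-based deduplication is disabled
    MessageDeduplicationId = Guid.NewGuid().ToString()
};

var response = await client.PublishAsync(request);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;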

&lt;p&gt;Another difference between creating a topic in the console vs in the toolkit is that you can optionally set preferences around message encryption, access policy, delivery status logging, delivery retry policy (HTTP/S), and, of course, tags. Let’s look in more detail at two of those preferences. The first of these is the &lt;em&gt;Delivery retry policy&lt;/em&gt;. This allows you to set retry rules for how SNS will retry sending failed deliveries to HTTP/S endpoints; these are the only endpoints that support retry. You can manage the following values:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Number of retries&lt;/strong&gt; – Defaults to 3 but can be any value between 1 and 100.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Retries without delay&lt;/strong&gt; – Defaults to 0 and represents how many of those retries should happen before the system starts waiting between retries.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Minimum delay&lt;/strong&gt; – Defaults to 20 seconds, with a range from 1 to the value of the Maximum delay.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maximum delay&lt;/strong&gt; – Defaults to 20 seconds, with a range from the Minimum delay to 3,600.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Retry backoff function&lt;/strong&gt; – Defaults to &lt;em&gt;Linear&lt;/em&gt;. There are four options: &lt;em&gt;Exponential&lt;/em&gt;, &lt;em&gt;Arithmetic&lt;/em&gt;, &lt;em&gt;Linear&lt;/em&gt;, and &lt;em&gt;Geometric&lt;/em&gt;. Each of those functions processes the timing for retries differently. You can see the differences between these options at &lt;a href="https://docs.aws.amazon.com/sns/latest/dg/sns-message-delivery-retries.html"&gt;https://docs.aws.amazon.com/sns/latest/dg/sns-message-delivery-retries.html&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
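
&lt;p&gt;These same values can also be set from code by writing the topic’s &lt;em&gt;DeliveryPolicy&lt;/em&gt; attribute as JSON. The sketch below assumes the attribute shape described in the SNS delivery retry documentation; the numbers are just examples.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// JSON shape per the SNS DeliveryPolicy documentation
var policyJson = @"{
  ""http"": {
    ""defaultHealthyRetryPolicy"": {
      ""numRetries"": 5,
      ""numNoDelayRetries"": 0,
      ""minDelayTarget"": 20,
      ""maxDelayTarget"": 60,
      ""backoffFunction"": ""exponential""
    }
  }
}";

await client.SetTopicAttributesAsync(new SetTopicAttributesRequest
{
    TopicArn = "arn of the topic",
    AttributeName = "DeliveryPolicy",
    AttributeValue = policyJson
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;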

&lt;p&gt;The second preference that is available in the console but not the toolkit is &lt;em&gt;Delivery status logging&lt;/em&gt;. This preference will log delivery status to CloudWatch Logs. You have two values to determine. The first is &lt;em&gt;Log delivery status for these protocols&lt;/em&gt;, which presents a series of checkboxes for &lt;em&gt;AWS Lambda&lt;/em&gt;, &lt;em&gt;Amazon SQS&lt;/em&gt;, &lt;em&gt;HTTP/S&lt;/em&gt;, &lt;em&gt;Platform application endpoint&lt;/em&gt;, and &lt;em&gt;Amazon Kinesis Data Firehose&lt;/em&gt;. These last two options are a preview of the next big difference between working through the toolkit and working through the console.&lt;/p&gt;

&lt;h2&gt;
  
  
  Additional Subscriptions in the Console
&lt;/h2&gt;

&lt;p&gt;Once you have finished creating the topic, you can then create a subscription. There are several protocols available for use in the console that are not available in the toolkit. These include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Amazon Kinesis Data Firehose&lt;/strong&gt; – configure this subscription to go to Kinesis Data Firehose. From there you can send notifications to Amazon S3, Amazon Redshift, Amazon OpenSearch Service, and third-party service providers such as Datadog, New Relic, MongoDB, and Splunk.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Platform-application endpoint&lt;/strong&gt; – this protocol sends the message to an application on a mobile device. Push notification messages sent to a mobile endpoint can appear in the mobile app as message alerts, badge updates, or even sound alerts. Go to &lt;a href="https://docs.aws.amazon.com/sns/latest/dg/sns-mobile-application-as-subscriber.html"&gt;https://docs.aws.amazon.com/sns/latest/dg/sns-mobile-application-as-subscriber.html&lt;/a&gt; for more information on configuring your SNS topic to deliver to a mobile device.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SMS&lt;/strong&gt; – this protocol delivers text messages, or SMS messages, to SMS-enabled devices. Amazon SNS supports SMS messaging in several regions, and you can send messages to more than 200 countries and regions. An interesting aspect of SMS is that your account starts in an SMS sandbox, a non-production environment with a set of limits. Once you are convinced that everything is correct, you must create a case with AWS Support to move your account out of the sandbox and actually start sending messages to non-limited numbers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that we have configured our SNS topic and subscription, let’s next look at sending a message.&lt;/p&gt;

&lt;h2&gt;
  
  
  .NET and Amazon SNS
&lt;/h2&gt;

&lt;p&gt;The first step to interacting with SNS from within your .NET application is to install the appropriate NuGet package, &lt;strong&gt;AWSSDK.SimpleNotificationService&lt;/strong&gt;. This will also install &lt;strong&gt;AWSSDK.Core&lt;/strong&gt;. Once you have the NuGet package, you can access the appropriate APIs by adding the following using statements:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using Amazon.SimpleNotificationService;
using Amazon.SimpleNotificationService.Model;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These namespaces provide access to the &lt;strong&gt;AmazonSimpleNotificationServiceClient&lt;/strong&gt; class that manages the interaction with the SNS service as well as the models that are represented in the client methods. There are a lot of different types of interactions that you can support with this client. A list of the more commonly used methods is displayed below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;PublishAsync&lt;/strong&gt; – Send a message to a specific topic for processing by SNS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PublishBatchAsync&lt;/strong&gt; – Send multiple messages to a specific topic for processing by SNS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SubscribeAsync&lt;/strong&gt; – Subscribe a new endpoint to a topic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;UnsubscribeAsync&lt;/strong&gt; – Remove an endpoint’s subscription to a topic.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These four methods allow you to add and remove subscriptions as well as publish messages. There are dozens of other methods available from that client, including the ability to manage topics and confirm subscriptions.&lt;/p&gt;
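
&lt;p&gt;As an illustration of the subscription methods, the following sketch creates an HTTPS subscription and then removes it; the topic ARN and endpoint URL are placeholders.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var client = new AmazonSimpleNotificationServiceClient();

var subscribeResponse = await client.SubscribeAsync(new SubscribeRequest
{
    TopicArn = "arn of the topic",
    Protocol = "https",
    Endpoint = "https://example.com/sns-endpoint"
});

// For HTTP/S endpoints this reads "pending confirmation" until the
// endpoint confirms the subscription
Console.WriteLine($"Subscription ARN: {subscribeResponse.SubscriptionArn}");

// Removing the subscription only needs the ARN returned above
await client.UnsubscribeAsync(subscribeResponse.SubscriptionArn);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;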

&lt;p&gt;The code below is a complete console application that sends a message to a specific topic.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;static void Main(string[] args)
{
    string topicArn = "Arn for the topic to publish";
    string messageText = "Message from ProDotNetOnAWS_SNS";

    var client = new AmazonSimpleNotificationServiceClient();

    var request = new PublishRequest
    {
        TopicArn = topicArn,
        Message = messageText,
        Subject = Guid.NewGuid().ToString()
    };

    var response = client.PublishAsync(request).Result;

    Console.WriteLine(
       $"Published message ID: {response.MessageId}");

    Console.ReadLine();
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, the topic needs to be identified by its &lt;em&gt;ARN&lt;/em&gt; rather than simply the topic name. Publishing a message entails instantiating the client and then defining a &lt;strong&gt;PublishRequest&lt;/strong&gt; object. This object contains all of the fields that we intend to send to the recipient, which in our case is simply the subject and message. Running the application presents a console as shown in Figure 7.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VGqHeo0n--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/09/image-6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VGqHeo0n--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/09/image-6.png" alt="Figure 7. Console application that sent message through SNS" width="825" height="71"&gt;&lt;/a&gt;&lt;em&gt;Figure 7. Console application that sent message through SNS&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The message that was processed can be seen in Figure 8. Note the MessageId values are the same as in Figure 7.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lUm67q8q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/09/image-7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lUm67q8q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/09/image-7.png" alt="Figure 8. Message sent through console application" width="825" height="429"&gt;&lt;/a&gt;&lt;em&gt;Figure 8. Message sent through console application&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;We have only touched on the capabilities of Amazon SNS and its capacity to help implement event-driven architecture. However, there is another AWS service that is even more powerful, Amazon EventBridge. Let’s look at that in the next post.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>dotnet</category>
      <category>eventdriven</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Modern .NET Application Design</title>
      <dc:creator>Bill "The Vest Guy" Penberthy</dc:creator>
      <pubDate>Tue, 30 Aug 2022 17:59:43 +0000</pubDate>
      <link>https://dev.to/aws-builders/modern-net-application-design-5gp9</link>
      <guid>https://dev.to/aws-builders/modern-net-application-design-5gp9</guid>
      <description>&lt;p&gt;In this post, we will go over several modern application design architectures, with the predominant one being event and message-based architecture. After this mostly theoretical discussion, we will move into practical implementation. We will do this by going over two different AWS services. The first of these services, Amazon Simple Notification Service (SNS), is a managed messaging service that allows you to decouple publishers from subscribers. The second service is Amazon EventBridge, which is a serverless event bus. As we are going over each, we will also review the inclusion of these services into a .NET application so that you can see how it works.&lt;/p&gt;

&lt;h2&gt;
  
  
  Modern Application Design
&lt;/h2&gt;

&lt;p&gt;The growth in the public cloud and its ability to quickly scale computing resources up and down has made the building of complex systems much easier. Let’s start by looking at what the Microservice Extractor for .NET does for you. For those of you unaware of this tool, you can check out the user guide or see a blog article on its use. Basically, the tool analyzes your code and helps you determine what areas of the code you can split out into a separate microservice. Figure 1 shows the initial design and the design after the extractor was run.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qpfHMV2o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/08/image-20.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qpfHMV2o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/08/image-20.png" alt="Figure 1. Pre and Post design after running the Microservice Extractor" width="825" height="358"&gt;&lt;/a&gt;&lt;em&gt;Figure 1. Pre and Post design after running the Microservice Extractor&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Why is this important? Well, consider the likely usage of this system. If you think about a typical e-commerce system, you will see that the inventory logic, the logic that was extracted, is a highly used set of logic. It is needed to work with the catalog pages. It is needed when working with orders, or with the shopping cart. This means that this logic may act as a bottleneck for the entire application. To get around this with the initial design means that you would need to deploy additional web applications to ease the load off and minimize this bottleneck.&lt;/p&gt;

&lt;h3&gt;
  
  
  Evolving into Microservices
&lt;/h3&gt;

&lt;p&gt;However, the extractor allows us to use a different approach. Instead of horizontally scaling the entire application, you scale only the set of logic that gets the most use. This allows you to minimize the number of resources necessary to keep the application running optimally. There is another benefit to this approach: you now have an independently managed application, which means that it can have its own development and deployment processes and can be interacted with independently of the rest of the application stack. This means that a fully realized microservices approach could look more like that shown in Figure 2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--65hda9If--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/08/image-21.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--65hda9If--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/08/image-21.png" alt="Figure 2. Microservices-based system design" width="825" height="538"&gt;&lt;/a&gt;&lt;em&gt;Figure 2. Microservices-based system design&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This approach allows you to scale each web service as needed. You may only need one “Customer” web service running but need multiples of the “Shopping Cart” and “Inventory” services running to ensure performance. This approach also means you can do work in one of the services, say “Shopping Cart”, and not have to worry about testing anything within the other services, because those won’t have been impacted – and you can be positive of that because they are completely separate codebases.&lt;/p&gt;

&lt;p&gt;This more decoupled approach also allows you to manage business changes more easily.&lt;/p&gt;

&lt;p&gt;Note – Tightly coupled systems have dependencies between the systems that affect the flexibility and reusability of the code. Loosely coupled, or decoupled, systems have minimal dependencies between each other and allow for greater code reuse and flexibility.&lt;/p&gt;

&lt;p&gt;Consider Figure 3 and what it would have taken to build this new mobile application with the “old-school” approach. There most likely would have been some duplication of business logic, which means that as each of the applications evolve, they would likely have drifted apart. Now, that logic is in a single place so it will always be the same for both applications (excluding any logic put into the UI part of the application that may evolve differently – but who does that?)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Zd9CdZ-0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/08/image-22.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Zd9CdZ-0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/08/image-22.png" alt="Figure 3. Microservices-based system supporting multiple applications" width="825" height="524"&gt;&lt;/a&gt;&lt;em&gt;Figure 3. Microservices-based system supporting multiple applications&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;One look at Figure 3 shows how this system is much more loosely coupled than was the original application. However, there is still a level of coupling within these different subsystems. Let’s look at those next and figure out what to do about them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deep Dive into Decoupling
&lt;/h2&gt;

&lt;p&gt;Without looking any deeper into the systems than the drawing in Figure 3, you should see one aspect of tight coupling that we haven’t addressed: the “Source Database.” Yes, this shared database indicates that there is still a less-than-optimal coupling between the different web services. Think about how we used the Extractor to pull out the “Inventory” service so we could scale it independently of the regular application. We did not do the same to the database that is being accessed by all these web services. So, we still have that quandary, only now at the database layer rather than the business logic layer.&lt;/p&gt;

&lt;p&gt;The next logical step in decoupling these systems would be to break out the database responsibilities as well, resulting in a design like that shown in Figure 4.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cv8-4R32--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/08/image-23.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cv8-4R32--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/08/image-23.png" alt="Figure 4. Splitting the database to support decoupled services" width="825" height="518"&gt;&lt;/a&gt;&lt;em&gt;Figure 4. Splitting the database to support decoupled services&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Unfortunately, it is not that easy. Think about what is going on within each of these different services; how useful is a “Shopping Cart” or an “Order” without any knowledge of the “Inventory” being added to the cart, or sold? Sure, those services do not need to know everything about “Inventory”, but they need to either interact with the “Inventory” service or go directly into the database to get information. These two options are shown in Figure 5.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cmrgIu1D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/08/image-24.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cmrgIu1D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/08/image-24.png" alt="Figure 5. Sharing data through a shared database or service-to-service calls" width="825" height="352"&gt;&lt;/a&gt;&lt;em&gt;Figure 5. Sharing data through a shared database or service-to-service calls&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As you can see, however, we have just added back in some coupling, as in either approach the “Order” and “Shopping Cart” services now have dependencies on the “Inventory” service in some form or another. However, this may be unavoidable based on certain business requirements – those requirements that mean the “Order” needs to know about “Inventory.” Before we stress out too much about this design, let’s break this “need to know” down further by adding one more consideration: when does the application need to know about the data? This helps us understand the required consistency.&lt;/p&gt;

&lt;h2&gt;
  
  
  Strong Consistency
&lt;/h2&gt;

&lt;p&gt;Strong consistency means that all applications and systems see the same data at the same time. The solutions in Figure 5 represent this approach because, regardless of whether you are calling the database directly or through the web service, you are seeing the most current set of the data, and it is available immediately after the data is persisted. There may easily be requirements that demand this. However, there may just as easily be requirements where a slight delay between the “Inventory” service and the “Shopping Cart” service knowing information may be acceptable.&lt;/p&gt;

&lt;p&gt;For example, consider how a change in inventory availability (the quantity available for the sale of a product) may affect the shopping cart system differently than the order system. The shopping cart represents items that have been selected as part of an order, so inventory availability is important to it – it needs to know that the items are available before those items can be processed as an order. But when does it need to know that? That’s where the business requirements come into play. If the user must know about the change right away, that will likely require some form of strong consistency. If, on the other hand, the inventory availability is only important when the order is placed, then strong consistency is not as necessary. That means there may be a case for eventual consistency.&lt;/p&gt;

&lt;h2&gt;
  
  
  Eventual Consistency
&lt;/h2&gt;

&lt;p&gt;As the name implies, data will be consistent within the various services eventually – not right away. This difference may be as slight as milliseconds or it can be seconds or even minutes, all depending upon business needs and system design. The smaller the timeframe necessary, the more likely you will need to use strong consistency. However, there are plenty of instances where seconds and even minutes are ok. An order, for example, needs some information about a product so that it has context. This could be as simple as the product name or more complex relationships such as the warehouses and storage locations for the products. But the key factor is that changes in this data are not really required to be available immediately to the order system. Does the order system need to know about a new product added to the inventory list? Probably not – as it is highly unlikely that this new product will be included in an order within milliseconds of becoming active.  Being available within seconds should be just fine. Figure 6 shows a time series graph of the differences between strong and eventual consistency.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Xl0TZ2F2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/08/image-25.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Xl0TZ2F2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/08/image-25.png" alt="Figure 6. Time series showing the difference between strong and eventual consistency" width="825" height="471"&gt;&lt;/a&gt;&lt;em&gt;Figure 6. Time series showing the difference between strong and eventual consistency&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;What does the concept of eventual consistency mean when we look at Figure 3, which showed how these three services can have some coupling? It gives us the option for a paradigm shift. Our assumption up to this point has been that data is stored in a single source, whether all the data is stored in one big database or each service has its own database – such as the Inventory service “owning” the Inventory database that stores all the inventory information. Thus, any system needing inventory data would have to go through these services/databases in some way.&lt;/p&gt;

&lt;p&gt;Our paradigm already understands and accepts the concept of a microservice being responsible for maintaining its own data – that relationship between the inventory service and the inventory database. The shift is around the definition of the data that should be persisted in each microservice's database. Currently, for example, the order system stores only data that describes orders – which is why we need the ability to somehow pull data from the inventory system. However, some of that inventory information is obviously critical to the order, so instead of making a call to the inventory system, we store that critical inventory-related data in the order system itself. Think what that would be like.&lt;/p&gt;
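To make that concrete, here is a minimal, hypothetical sketch of what it might look like (the `ProductSnapshot` and `OrderLine` names are invented for illustration): the order record carries its own copy of just the inventory data it cares about, so reading an order never requires a call to the inventory service.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProductSnapshot:
    # The slice of inventory data the order system cares about,
    # copied into the order system's own database at order time.
    product_id: str
    name: str
    warehouse: str

@dataclass
class OrderLine:
    product: ProductSnapshot  # local copy; no call to the inventory service
    quantity: int

order_line = OrderLine(
    product=ProductSnapshot("SKU-123", "Vest", "WH-EAST"),
    quantity=2,
)
print(order_line.product.name)
```

If the inventory system later changes the warehouse for that product, an event (covered below) would update this local copy eventually, rather than the order system querying inventory on every read.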

&lt;h2&gt;
  
  
  Oh No! Not duplicated data!
&lt;/h2&gt;

&lt;p&gt;Yes, this means some data may be saved in multiple places. And you know what? That’s ok. Because it is not going to be all the data, but just those pieces of data that the other systems may care about. That means the databases in a system may look like those shown in Figure 7 where there may be overlap in the data being persisted.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--t7YSVOfk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/08/image-26.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--t7YSVOfk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/08/image-26.png" alt="Figure 7. Data duplication between databases" width="825" height="305"&gt;&lt;/a&gt;&lt;em&gt;Figure 7. Data duplication between databases&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This data overlap or duplication is important because it eliminates the coupling that we identified when we realized that the inventory data was important to other systems. By including the interesting data in each of the subsystems, we no longer have that coupling, and that means our system will be much more resilient.&lt;/p&gt;

&lt;p&gt;If we continued to have that dependency between systems, then an outage in the inventory system means that there would also be an outage in the shopping cart and order systems, because those systems have that dependency upon the inventory system for data. With this data being persisted in multiple places, an outage in the inventory system will NOT cause any outage in those other systems. Instead, those systems will continue to happily plug along without any concern for what is going on over in inventory-land.  It can go down, whether intentionally because of a product release or unintentionally, say by a database failure, and the rest of the systems continue to function. That is the beauty of decoupled systems, and why modern system architectural design relies heavily on decoupling business processes.&lt;/p&gt;

&lt;p&gt;We have shown the importance of decoupling and how the paradigm shift of allowing some duplication of data can lead to that decoupling. However, we haven’t touched on how we would do this. In this next section, we will go into one of the most common ways to drive this level of decoupling and information sharing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Designing a Messaging or Event-Based Architecture
&lt;/h2&gt;

&lt;p&gt;The key to this level of decoupling is that one system notifies the other systems when data has changed. The most powerful method for doing this is through either messaging or events. While both provide approaches for sending information to other systems, they represent different forms of communication and follow different rules.&lt;/p&gt;

&lt;h2&gt;
  
  
  Messaging
&lt;/h2&gt;

&lt;p&gt;Conceptually, the differences are straightforward. Messaging is used when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Transient Data is needed&lt;/em&gt; – this data is only stored until the message consumer has processed the message or it hits a timeout or expiration period.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Two-way Communication is desired&lt;/em&gt; – also known as a request/reply approach; one system sends a request message, and the receiving system sends a response message in reply.&lt;/li&gt;
&lt;li&gt; &lt;em&gt;Reliable Targeted Delivery&lt;/em&gt; – Messages are generally targeted to a specific entity. Thus, by design, a message can have one and only one recipient as the message will be removed from the queue once the first system processes it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even though messages tend to be targeted, they still provide decoupling because there is no requirement that the targeted system be available when the message is sent. If the target system is down, the message will be stored until the system is back up and accepting messages. Any missed messages will be processed in first-in, first-out (FIFO) order, and the targeted system will be able to independently catch up on its work without affecting the sending system.&lt;/p&gt;
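The semantics above can be sketched in a few lines of illustrative code (this is a toy in-memory model, not how SQS or any real broker is implemented): messages queue up while the consumer is down, are handed out in FIFO order, and are removed once processed so only one recipient ever sees each message.

```python
from collections import deque

class MessageQueue:
    """Toy point-to-point queue: one logical recipient, FIFO,
    messages held until a consumer processes them."""
    def __init__(self):
        self._messages = deque()

    def send(self, message):
        # The sender does not care whether the consumer is up right now;
        # the message is simply stored until it is processed.
        self._messages.append(message)

    def receive(self):
        # Removing the message on receipt is what makes delivery
        # once-and-only-once: no other consumer will ever see it.
        return self._messages.popleft() if self._messages else None

queue = MessageQueue()
queue.send("order-1")   # the consumer is "down" while these are sent
queue.send("order-2")
received = [queue.receive(), queue.receive()]  # consumer catches up, FIFO
print(received)
```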

&lt;p&gt;When we look at the decoupling we discussed earlier, it becomes apparent that messaging may not be the best way to support eventual consistency as there is more than one system that could be interested in the data within the message. And, by design, messaging isn’t a big fan of this happening. So, with these limitations, when would messaging make sense?&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note&lt;/strong&gt;: There are technical design approaches that allow you to send a single message that can be received by multiple targets. This is done through a recipient list, where the message sender sends a single message and then there is code around the recipient list that duplicates that message to every target in the list. We won’t go into these approaches here.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The key thing to consider about messaging is that it focuses on assured delivery and once-and-only-once processing. This provides insight into the types of operations best supported by messaging. An example is the web application submitting an order. Think of the chaos if this order was received and processed by some services but not the order service. Instead, this submission should be a message targeted at the order service. Sure, in many instances we handle this as an HTTP request (note the similarities between a message and an HTTP request), but that may not always be the best approach. Instead, our ordering system sends a message that is assured of delivery to a single target.&lt;/p&gt;

&lt;h2&gt;
  
  
  Events
&lt;/h2&gt;

&lt;p&gt;Events, on the other hand, are traditionally used to represent “something that happened” – an action performed by the service that some other systems may find interesting. Events are for when you need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Scalable consumption&lt;/em&gt; – multiple systems may be interested in the content within a single event&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;History&lt;/em&gt; – the history of the “thing that happened” is useful. Generally, the database will provide the current state of information. The event history provides insight into when and what caused changes to that data. This can be very valuable insight.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Immutable data&lt;/em&gt; – since an event represents “something that already happened” the data contained in an event is immutable – that data cannot be changed. This allows for very accurate tracing of changes, including the ability to recreate database changes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Events are generally designed to be sent by a system, with that system having no concern about whether other systems receive the event or act upon it. The sender fires the event and then forgets about it.&lt;/p&gt;
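The contrast with messaging can be sketched the same way (again a toy illustration, not any real event bus): the publisher fires and forgets, every subscriber gets the event, and an append-only history records everything that happened.

```python
class EventBus:
    """Toy fire-and-forget fan-out: the publisher does not know or
    care who (if anyone) is subscribed."""
    def __init__(self):
        self._subscribers = []
        self.history = []  # append-only record of "things that happened"

    def subscribe(self, handler):
        self._subscribers.append(handler)

    def publish(self, event):
        self.history.append(event)          # events are never modified
        for handler in self._subscribers:   # every subscriber gets a copy
            handler(event)

bus = EventBus()
cart_updates, order_updates = [], []
bus.subscribe(cart_updates.append)    # shopping cart system
bus.subscribe(order_updates.append)   # order system

# A tuple keeps the event immutable, matching the "already happened" rule.
bus.publish(("InventoryChanged", "SKU-123", 40))
print(len(bus.history))
```

Note how this mirrors the decoupled design: the inventory side publishes one event, and however many systems care about inventory changes each receive it and update their own local copy of the data.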

&lt;p&gt;When you consider the decoupled design that we worked through earlier, it becomes quickly obvious that events are the best approach to provide any changed inventory data to the other systems. In the next article, we will jump right into Amazon Simple Notification Service (SNS), and talk more about events within our application using SNS as our guide.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS Schema Conversion Tool</title>
      <dc:creator>Bill "The Vest Guy" Penberthy</dc:creator>
      <pubDate>Fri, 26 Aug 2022 13:54:53 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-schema-conversion-tool-dpn</link>
      <guid>https://dev.to/aws-builders/aws-schema-conversion-tool-dpn</guid>
      <description>&lt;p&gt;The &lt;a href="https://aws.amazon.com/dms/schema-conversion-tool/"&gt;AWS Schema Conversion Tool&lt;/a&gt; is designed to make cross-engine database migrations more predictable. It does this by automatically converting not only the source data, but also most of the database objects such as views, stored procedures, and functions. If you think back to the previous section, you may recall that there was no mention of those database objects; the objective was simply to move all of the database tables. And, since that is all our sample database had, that was quite sufficient. However, many “enterprisey” databases will have these database objects. The Schema Conversion Tool will help you with those.&lt;/p&gt;

&lt;p&gt;Firstly, the schema conversion tool is a downloadable tool, available for use on Microsoft Windows, Fedora Linux, and Ubuntu Linux. You can access the download links at &lt;a href="https://aws.amazon.com/dms/schema-conversion-tool"&gt;https://aws.amazon.com/dms/schema-conversion-tool&lt;/a&gt;. We will use the Windows version of the tool for our walkthrough. Second, the tool will only migrate relational data into Amazon RDS or Amazon Redshift. Table 1 displays the source and target database combinations supported by the tool.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Source&lt;/th&gt;
&lt;th&gt;Aurora MySQL&lt;/th&gt;
&lt;th&gt;Aurora PGSQL&lt;/th&gt;
&lt;th&gt;MariaDB&lt;/th&gt;
&lt;th&gt;MySQL&lt;/th&gt;
&lt;th&gt;PGSQL&lt;/th&gt;
&lt;th&gt;SQL Server&lt;/th&gt;
&lt;th&gt;Redshift&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Oracle&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Oracle Data Warehouse&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Azure SQL Database&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Microsoft SQL Server&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Teradata&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;IBM Netezza&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Greenplum&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;HPE Vertica&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MySQL&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PostgreSQL (PGSQL)&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;IBM DB2 LUW&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;IBM Db2 for z/OS&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SAP ASE&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Amazon Redshift&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Azure Synapse Analytics&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Snowflake&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Table 1. Databases available as sources and targets for Schema Conversion Tool&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Clicking the download tool link will start the downloading of a zip file. Once the file is downloaded, extract the content to a working directory. There will be a .msi installation file and two folders. Run the installation file and start the application when the installation is completed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring the Source
&lt;/h2&gt;

&lt;p&gt;Upon your first run of the tool, you will be presented with the terms of use. Accepting these terms will open the application and present the &lt;em&gt;Create a new database migration project&lt;/em&gt; screen as shown in Figure 1.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YwlZ5YNN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://billthevestguy.com/wp-content/uploads/2022/08/image-12.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YwlZ5YNN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://billthevestguy.com/wp-content/uploads/2022/08/image-12.png" alt="Figure 1. Create a new database migration project screen in SCT" width="825" height="472"&gt;&lt;/a&gt;&lt;em&gt;Figure 1. Create a new database migration project screen in SCT&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We selected Microsoft SQL Server as our source engine, which enabled the three radio buttons that give some direction as to how the conversion process should proceed. The three choices are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I want to switch engines and optimize for the cloud (default)&lt;/li&gt;
&lt;li&gt;I want to keep the same engine but optimize it for the cloud&lt;/li&gt;
&lt;li&gt;I want to see a combined report for database engine switch and optimization to the cloud.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each of these selections will alter the logic of the migration project, for example, selecting to keep the same engine will provide you with a different set of destinations than selecting to switch engines.&lt;/p&gt;

&lt;p&gt;Completing the fields in Step 1 and clicking the &lt;strong&gt;Next&lt;/strong&gt; button will take you to the &lt;em&gt;Step 2 – Connect to the source database&lt;/em&gt; screen as shown in Figure 2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--r0LcwEVL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://billthevestguy.com/wp-content/uploads/2022/08/image-13.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--r0LcwEVL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://billthevestguy.com/wp-content/uploads/2022/08/image-13.png" alt="Figure 2. Specifying connection information for the source database" width="825" height="472"&gt;&lt;/a&gt;&lt;em&gt;Figure 2. Specifying connection information for the source database&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As shown in Figure 2 there are four fields that have the upper left corner of the field displaying a red tick. Those fields are required for the connection, and three of them are fields that you should be well acquainted with by now, the &lt;em&gt;Server Name&lt;/em&gt;, &lt;em&gt;User name&lt;/em&gt;, and &lt;em&gt;Password&lt;/em&gt; (when accessing the source database using SQL Server Authentication). However, the last field, &lt;em&gt;Microsoft SQL Server driver path&lt;/em&gt;, is a new one and points to the directory in which the Microsoft SQL Server JDBC driver is located, which we didn’t have installed. Fortunately, AWS helpfully provides a page with links to the various database drivers at &lt;a href="https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Installing.html"&gt;https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Installing.html&lt;/a&gt;. You will need to install drivers for both your source and target databases. We went through and downloaded the drivers for SQL Server (our source database) and the drivers for Amazon Aurora MySQL (our destination database). Once the appropriate JDBC drivers are installed, you can point to the SQL Server driver path as shown in Figure 3.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--q-EN2Y9A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://billthevestguy.com/wp-content/uploads/2022/08/image-14.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--q-EN2Y9A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://billthevestguy.com/wp-content/uploads/2022/08/image-14.png" alt="Figure 3. Specifying SQL Server JDBC path" width="825" height="52"&gt;&lt;/a&gt;&lt;em&gt;Figure 3. Specifying SQL Server JDBC path&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Once you have your server, authentication, and driver path filled out you can click the &lt;strong&gt;Test connection&lt;/strong&gt; button to ensure everything works as expected. If that is successful, you can select &lt;strong&gt;Next&lt;/strong&gt; to continue.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note:&lt;/strong&gt; Our Microsoft SQL Server JDBC download contained three different .jar files, jre8, jre11, and jre17. The tool would allow the selection of jre8 and jre11 but would not allow the selection of the jre17 file. This will likely change as the tool continues to evolve.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The tool will next display a screen that indicates that it is loading the metadata for the server. This metadata includes databases, schemas, and triggers. Once that loading is completed, you will be in &lt;em&gt;Step 3. Choose a schema&lt;/em&gt; where you will get a list of all databases and the schemas available within each one. This list includes all of the system databases, such as master, model, msdb, and tempdb. You will probably not want to include those! Once you have selected the schema(s), click the &lt;strong&gt;Next&lt;/strong&gt; button. You will see the “Loading metadata” screen again as the tool gets all the database objects based upon your selected schema(s). This process will take a few minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Database Migration Assessment Screen
&lt;/h2&gt;

&lt;p&gt;Once completed, you will be taken to the &lt;em&gt;Step 4. Run the Database migration assessment&lt;/em&gt; screen. The first thing that you will see is the assessment report. This report was created by the tool taking all the metadata that it found and analyzing how well it would convert to the various target databases. At the top of the report is the &lt;em&gt;Executive summary&lt;/em&gt;. This lists all of the potential target platforms and summarizes the types of actions that need to be taken. An example of this report is shown in Figure 4.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iGn6r0nx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://billthevestguy.com/wp-content/uploads/2022/08/image-15.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iGn6r0nx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://billthevestguy.com/wp-content/uploads/2022/08/image-15.png" alt="Figure 4. Executive summary of the migration assessment report" width="825" height="264"&gt;&lt;/a&gt;&lt;em&gt;Figure 4. Executive summary of the migration assessment report&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Immediately under the executive summary is a textual analysis of the data in the chart. Each of the line items is described with an estimate of the percentage of database storage objects and database code objects that can be automatically converted. In our case, both Amazon RDS for MySQL and Amazon Aurora (MySQL compatible) can be converted at 100%. None of the other target platforms scored that high.&lt;/p&gt;

&lt;p&gt;Additional detail is displayed further down the page as shown in Figure 5.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_Eom2sH6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://billthevestguy.com/wp-content/uploads/2022/08/image-16.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_Eom2sH6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://billthevestguy.com/wp-content/uploads/2022/08/image-16.png" alt="Figure 5. Details on migrating to Amazon Aurora MySQL" width="825" height="375"&gt;&lt;/a&gt;&lt;em&gt;Figure 5. Details on migrating to Amazon Aurora MySQL&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This section demonstrates that 1 schema, 10 tables, 17 constraints, 7 procedures and 1 scalar function can be successfully converted to Amazon Aurora (MySQL compatible).&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring the Destination
&lt;/h2&gt;

&lt;p&gt;Once you have completed your review of the potential destination, click the &lt;strong&gt;Next&lt;/strong&gt; button. This will bring you to &lt;em&gt;Step 5. Choose a target&lt;/em&gt; page where you select the target engine and configure the connection to the target database. When we got to the page Amazon RDS for MySQL was selected as the target engine, so we went with that and created a new Amazon RDS for MySQL instance in the RDS console, making sure that we enabled external access. Filling out the connection information and clicking the &lt;strong&gt;Test connection&lt;/strong&gt; button demonstrated that we had filled the information out appropriately, so we clicked the &lt;strong&gt;Finish&lt;/strong&gt; button.&lt;/p&gt;

&lt;h2&gt;
  
  
  Completing the Migration
&lt;/h2&gt;

&lt;p&gt;This brings you to the project page as shown in Figure 6.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ra4QKx-f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://billthevestguy.com/wp-content/uploads/2022/08/image-17.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ra4QKx-f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://billthevestguy.com/wp-content/uploads/2022/08/image-17.png" alt="Figure 6. Schema conversion tool conversion project" width="825" height="482"&gt;&lt;/a&gt;&lt;em&gt;Figure 6. Schema conversion tool conversion project&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Just like with DMS, the conversion tool gives you the ability to add mapping and transformation rules. You do this by clicking on the &lt;strong&gt;Main view&lt;/strong&gt; icon in the toolbar and selecting the &lt;strong&gt;Mapping view&lt;/strong&gt;. This changes the center section of the screen. In this section you can add Transformation rules. These transformation rules, just as with DMS, allow you to alter the name of items that are going to be migrated. You can create a rule where you create the appropriate filters to determine which objects will be affected, and you have the following options on how the names will be changed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add prefix&lt;/li&gt;
&lt;li&gt;Add suffix&lt;/li&gt;
&lt;li&gt;Convert lowercase&lt;/li&gt;
&lt;li&gt;Convert uppercase&lt;/li&gt;
&lt;li&gt;Move to&lt;/li&gt;
&lt;li&gt;Remove prefix&lt;/li&gt;
&lt;li&gt;Remove suffix&lt;/li&gt;
&lt;li&gt;Rename&lt;/li&gt;
&lt;li&gt;Replace prefix&lt;/li&gt;
&lt;li&gt;Replace suffix&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These different transformations are useful when working with database schemas that use older naming approaches, such as a prefix of “t” before the name to show that the object is a table, or “v” to indicate that it’s a view. We will not be using any transformations as part of our conversion.&lt;/p&gt;
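To see what such a rule does in practice, here is a hypothetical sketch of prefix-style renaming (the rule names loosely mirror the options listed above, but this is purely illustrative and not the Schema Conversion Tool's actual rule engine or API):

```python
def apply_transformations(name, rules):
    """Apply a list of (action, value) rename rules to an object name.
    Illustrative only; SCT applies its rules internally."""
    for action, value in rules:
        if action == "remove-prefix" and name.startswith(value):
            name = name[len(value):]
        elif action == "add-suffix":
            name = name + value
        elif action == "convert-lowercase":
            name = name.lower()
    return name

# A legacy table named "tCustomer" loses its "t" prefix during conversion.
renamed = apply_transformations("tCustomer", [("remove-prefix", "t")])
print(renamed)
```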

&lt;p&gt;Since we are converting our &lt;em&gt;ProDotNetOnAWS&lt;/em&gt; database and its &lt;em&gt;dbo&lt;/em&gt; schema, you need to go to the left window where the SQL Server content is displayed, right-click on the &lt;strong&gt;dbo&lt;/strong&gt; schema, and select &lt;strong&gt;Convert schema&lt;/strong&gt; from the popup menu. You will get an additional popup that shows the schema being converted for the destination. Once completed, the right window will look like Figure 7, which shows that the schema has been copied over along with tables, procedures, views, and functions (if you have all of those).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SnhHlhtP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://billthevestguy.com/wp-content/uploads/2022/08/image-18.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SnhHlhtP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://billthevestguy.com/wp-content/uploads/2022/08/image-18.png" alt="Figure 7. Schema converted to source database" width="570" height="1001"&gt;&lt;/a&gt;&lt;em&gt;Figure 7. Schema converted to source database&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Note that this has not yet been applied to the destination server; it is instead a local representation of what the schema will look like once applied. Your next step is to apply the changes to the destination. You do this by right-clicking on the destination schema and selecting &lt;strong&gt;Apply to database&lt;/strong&gt;. This will bring up a pop-up confirmation window, after which you will see the schema being processed. The window will close once completed.&lt;/p&gt;

&lt;p&gt;At this point, your schema has been transferred to the destination database. Figure 8 shows the destination database in MySQL Workbench, and you can see that the schema defined in the tool has been successfully migrated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qIzCaHqp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://billthevestguy.com/wp-content/uploads/2022/08/image-19.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qIzCaHqp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://billthevestguy.com/wp-content/uploads/2022/08/image-19.png" alt="Figure 8. Viewing converted schema in MySQL Workbench" width="825" height="624"&gt;&lt;/a&gt;&lt;em&gt;Figure 8. Viewing converted schema in MySQL Workbench&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Once your data has been migrated, the last step is to convert your code so that it can access your new database.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>database</category>
      <category>migration</category>
      <category>cloud</category>
    </item>
    <item>
      <title>AWS Database Migration Service (Part 2 of 2)</title>
      <dc:creator>Bill "The Vest Guy" Penberthy</dc:creator>
      <pubDate>Thu, 25 Aug 2022 14:38:25 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-database-migration-service-part-2-of-2-e1f</link>
      <guid>https://dev.to/aws-builders/aws-database-migration-service-part-2-of-2-e1f</guid>
      <description>&lt;p&gt;In the &lt;a href="https://dev.to/aws-builders/aws-database-migration-service-part-1-of-2-3e6f"&gt;previous article&lt;/a&gt; we went over the high-level approach around using &lt;a href="https://aws.amazon.com/dms/"&gt;AWS DMS&lt;/a&gt; and then created the replication instance on which your migration processing will run and then created the source and target endpoints that manage the connection to the source and target databases. The last step is to create the database migration task.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating your Database Migration Task
&lt;/h2&gt;

&lt;p&gt;So far, we have defined the resource set that is going to do the work as well as the places where data will be coming from and where it will be going. There is one area that we have not yet defined, and that is the database migration task. This task defines the work that will be done. As part of this task, you can specify which tables to use, define any special processing, configure logging, etc. Let’s take a look at creating one of these tasks.&lt;/p&gt;

&lt;p&gt;First, go into the &lt;em&gt;Database migration tasks&lt;/em&gt; screen in the AWS DMS console and then click the &lt;strong&gt;Create task&lt;/strong&gt; button. This will bring up the creation screen, with the first section being &lt;em&gt;Task configuration&lt;/em&gt;. This section allows you to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provide a &lt;em&gt;Task identifier&lt;/em&gt; or name for the task&lt;/li&gt;
&lt;li&gt;Select the &lt;em&gt;Replication instance&lt;/em&gt; to use&lt;/li&gt;
&lt;li&gt;Select the &lt;em&gt;Source database&lt;/em&gt; endpoint&lt;/li&gt;
&lt;li&gt;Select the &lt;em&gt;Target database&lt;/em&gt; endpoint&lt;/li&gt;
&lt;li&gt;Select the &lt;em&gt;Migration type&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Migration type is where you tell DMS the kind of work that you want this task to perform. There are three different options that you can select. The first is to &lt;em&gt;Migrate existing data&lt;/em&gt;. Using this migration type means that you’re looking to do a one-time copy of the data, which is ideal for a one-time migration. The next option is to &lt;em&gt;Migrate existing data and replicate ongoing changes&lt;/em&gt;. The name pretty much describes what is going on with this approach, and it is most appropriate when you need to run both the source and target systems in parallel but want them to stay as updated as possible. This approach is especially common in data lake scenarios where data is being moved from a transactional system to an analytics or reporting system. The last migration type option is to &lt;em&gt;Replicate data changes only&lt;/em&gt;, where you replicate any changes in data but do not perform the one-time migration.&lt;/p&gt;

&lt;p&gt;The next major section to complete when creating a migration task is the &lt;em&gt;Task settings&lt;/em&gt;. Task settings control the behavior of your task and can be configured through a &lt;em&gt;Wizard&lt;/em&gt; or through a &lt;em&gt;JSON editor&lt;/em&gt;. We will use the wizard mode so that we can more easily talk about the major settings.&lt;/p&gt;

&lt;p&gt;The first item to configure is the &lt;em&gt;Target table preparation mode&lt;/em&gt;, or how DMS should prepare the tables at the target endpoint. There are three options: &lt;em&gt;Do nothing&lt;/em&gt;, &lt;em&gt;Drop tables on target&lt;/em&gt;, and &lt;em&gt;Truncate&lt;/em&gt;. With the “do nothing” option, existing target tables are left untouched, though any tables that do not exist will be created. When you select to drop the tables, DMS will drop and recreate all affected tables. Truncating means that all tables and metadata remain, but all of the data is removed.&lt;/p&gt;

&lt;p&gt;The next item to configure is &lt;em&gt;Include LOB columns in replication&lt;/em&gt;. LOBs are large objects, and you can choose whether or not to include those columns in the target data. You have three options: the first is &lt;em&gt;Don’t include LOB columns&lt;/em&gt; and the second is &lt;em&gt;Full LOB mode&lt;/em&gt;, both of which are rather straightforward. The third option is &lt;em&gt;Limited LOB mode&lt;/em&gt;. In this mode, DMS will truncate each LOB to a defined size, the &lt;em&gt;Maximum LOB size (KB)&lt;/em&gt; value.&lt;/p&gt;

&lt;p&gt;You then can configure whether you want to &lt;em&gt;Enable validation&lt;/em&gt;. Checking this box will cause DMS to compare the source and target data immediately after the full load is performed. This ensures your data is migrated correctly, but it takes additional time to perform, and thus increases cost. You next can &lt;em&gt;Enable CloudWatch logs&lt;/em&gt;. There are also some advanced task settings, but we won’t go into those as part of this discussion.&lt;/p&gt;
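&lt;p&gt;If you flip the &lt;em&gt;Task settings&lt;/em&gt; section over to the JSON editor, the wizard choices we just walked through appear as one settings document. A minimal sketch of such a document, assuming truncate mode, limited LOB mode at 32 KB, and validation and logging enabled (the key names follow the DMS task-settings schema; the specific values are illustrative, not recommendations):&lt;/p&gt;

```python
import json

# Illustrative task-settings document, roughly equivalent to the wizard
# choices discussed above. Values here are examples only.
task_settings = {
    "TargetMetadata": {
        "SupportLobs": True,         # include LOB columns
        "FullLobMode": False,        # use limited LOB mode instead
        "LimitedSizeLobMode": True,
        "LobMaxSize": 32,            # Maximum LOB size (KB)
    },
    "FullLoadSettings": {
        # DO_NOTHING | DROP_AND_CREATE | TRUNCATE_BEFORE_LOAD
        "TargetTablePrepMode": "TRUNCATE_BEFORE_LOAD",
    },
    "ValidationSettings": {"EnableValidation": True},
    "Logging": {"EnableLogging": True},  # Enable CloudWatch logs
}

print(json.dumps(task_settings, indent=2))
```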

&lt;p&gt;The next section is &lt;em&gt;Table mappings&lt;/em&gt;. This section is where you define the rules about what data is moved and how it is moved. At a high-level, you will create a &lt;em&gt;Selection rule&lt;/em&gt;, which determines the data that you wish to replicate, and then you can create a &lt;em&gt;Transformation rule&lt;/em&gt; that modifies the selected data before it is provided to the destination endpoint. The table mappings section also gives you the opportunity to use a Wizard approach or a JSON editor to enter all table mappings in JSON. We will walk through using the wizard.&lt;/p&gt;

&lt;p&gt;The first step is to select the &lt;strong&gt;Add new selection rule&lt;/strong&gt; button. This expands the selection rule section as shown in Figure 1.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2qHarxle--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/08/image-9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2qHarxle--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/08/image-9.png" alt="Figure 1. Creating selection rules for a database migration task" width="825" height="500"&gt;&lt;/a&gt;&lt;em&gt;Figure 1. Creating selection rules for a database migration task&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Expanding the &lt;em&gt;Schema&lt;/em&gt; drop-down will show that there is only one option – to &lt;em&gt;Enter a schema&lt;/em&gt;. Selecting this option will add another textbox in which you can provide the &lt;em&gt;Source name&lt;/em&gt;. This allows you to limit, by schema, the data that is being selected. You can enter % to select all schemas in the database, or enter the schema name. You do the same for the Source table name, entering % if you want all the tables replicated. Once you have those defined, you then select the appropriate &lt;em&gt;Action&lt;/em&gt;, to either &lt;em&gt;Include&lt;/em&gt; or &lt;em&gt;Exclude&lt;/em&gt; the items that fit your selection criteria. You can create as many rules as desired; however, you must always have at least one rule with an &lt;strong&gt;include&lt;/strong&gt; action.&lt;/p&gt;

&lt;p&gt;Once you have the selection rule configured you can &lt;em&gt;Add column filter&lt;/em&gt;. This allows you to limit the number and type of records. A column filter requires the &lt;em&gt;Column name&lt;/em&gt;, one or more &lt;em&gt;Conditions&lt;/em&gt;, and then one or more comparative values. You have the following options for the conditions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Less than or equal to&lt;/li&gt;
&lt;li&gt;Greater than or equal to&lt;/li&gt;
&lt;li&gt;Equal to&lt;/li&gt;
&lt;li&gt;Not equal to&lt;/li&gt;
&lt;li&gt;Equal to or between two values&lt;/li&gt;
&lt;li&gt;Not between two values&lt;/li&gt;
&lt;li&gt;Null&lt;/li&gt;
&lt;li&gt;Not null&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can create any number of column filters for each selection rule.&lt;/p&gt;
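&lt;p&gt;Behind the wizard, the selection rule and its column filters are stored as table-mapping JSON. A sketch of what a single rule with one filter looks like, assuming an include of every table in the &lt;em&gt;dbo&lt;/em&gt; schema filtered to rows where a hypothetical &lt;em&gt;SaleAmount&lt;/em&gt; column is at least 100 (keys follow the DMS table-mapping schema; the schema, column, and rule names are illustrative):&lt;/p&gt;

```python
import json

# One selection rule with a column filter, in DMS table-mapping form.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-dbo",
            "object-locator": {"schema-name": "dbo", "table-name": "%"},
            "rule-action": "include",
            "filters": [
                {
                    "filter-type": "source",
                    "column-name": "SaleAmount",   # hypothetical column
                    "filter-conditions": [
                        # "gte" is the greater-than-or-equal condition
                        {"filter-operator": "gte", "value": "100"}
                    ],
                }
            ],
        }
    ]
}

print(json.dumps(table_mappings, indent=2))
```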

&lt;p&gt;Once you have completed your selection rule you can then add one or more &lt;em&gt;Transformation rules&lt;/em&gt;. These rules allow you to change or transform schema, table, or column names of some or all the items that you have selected. Since we are simply copying the database across, we do not need to add any of these, especially since any changes will likely break our code!&lt;/p&gt;
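&lt;p&gt;For reference only, since we are skipping them here: a transformation rule is just another entry in the same table-mapping JSON. A sketch of one that would rename a table on the target (keys per the DMS table-mapping schema; the table and rule names are made up for illustration):&lt;/p&gt;

```python
# A transformation rule that renames one table on the target.
# We do NOT add this in our walkthrough; it is shown only for shape.
transformation_rule = {
    "rule-type": "transformation",
    "rule-id": "2",
    "rule-name": "rename-orders",          # illustrative name
    "rule-target": "table",
    "object-locator": {"schema-name": "dbo", "table-name": "Orders"},
    "rule-action": "rename",
    "value": "orders_archive",             # new table name on the target
}
```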

&lt;p&gt;Your next option is to determine whether you want to &lt;em&gt;Enable premigration assessment run&lt;/em&gt;. This will warn you of any potential migration issues. Checking the box will expand the UI and present you with a set of Assessments to run as shown in Figure 2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yJNJVLNP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/08/image-10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yJNJVLNP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/08/image-10.png" alt="Figure 2. Enabling premigration assessment run on a scheduled task" width="825" height="998"&gt;&lt;/a&gt;&lt;em&gt;Figure 2. Enabling premigration assessment run on a scheduled task&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Once you have all of your selection and transformation rules created, you can select to &lt;em&gt;Start migration task&lt;/em&gt; either &lt;em&gt;Automatically on Create&lt;/em&gt;, the default, or &lt;em&gt;Manually later&lt;/em&gt;. Lastly, add any tags that you desire and click the &lt;strong&gt;Create task&lt;/strong&gt; button.&lt;/p&gt;
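&lt;p&gt;Everything we just clicked through corresponds roughly to a single &lt;em&gt;CreateReplicationTask&lt;/em&gt; API call. A sketch of its parameter set, assuming the one-time-copy migration type and a select-everything mapping (the task name is an example, the ARNs are placeholders, and the mapping document is abbreviated):&lt;/p&gt;

```python
import json

# Rough CreateReplicationTask parameter set; ARNs are placeholders.
create_task_params = {
    "ReplicationTaskIdentifier": "copy-tradeyourtools",        # example name
    "SourceEndpointArn": "arn:aws:dms:...:endpoint:SOURCE",    # placeholder
    "TargetEndpointArn": "arn:aws:dms:...:endpoint:TARGET",    # placeholder
    "ReplicationInstanceArn": "arn:aws:dms:...:rep:INSTANCE",  # placeholder
    "MigrationType": "full-load",  # "Migrate existing data"
    # TableMappings is passed as a JSON string: include all schemas/tables.
    "TableMappings": json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
}
```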

&lt;p&gt;This will bring you back to the database migration tasks list screen, where you will see your task being created. Once created, you can either start the task manually or let it start automatically if you configured it that way. You will be able to watch the table count move from &lt;em&gt;Tables queued&lt;/em&gt; to &lt;em&gt;Tables loading&lt;/em&gt; to &lt;em&gt;Tables loaded&lt;/em&gt; as they are processed. Returning to the AWS DMS Dashboard will show that there is 1 Load complete, as shown in Figure 3.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Tvt9aS21--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/08/image-11.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Tvt9aS21--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/08/image-11.png" alt="Figure 3. Dashboard showing completed migration task" width="825" height="416"&gt;&lt;/a&gt;&lt;em&gt;Figure 3. Dashboard showing completed migration task&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;For those cases where you simply want to migrate data sets with minimal changes, other than perhaps renaming some columns, the Database Migration Service works like a dream. It is relatively painless to set up and powerful enough to move data between servers, even servers of dissimilar types, as when we just copied data from SQL Server to Amazon Aurora. However, there is a tool that will help you move more disparate data between different database engines. We will take a look at that tool next.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>database</category>
      <category>migration</category>
      <category>cloud</category>
    </item>
    <item>
      <title>AWS Database Migration Service (Part 1 of 2)</title>
      <dc:creator>Bill "The Vest Guy" Penberthy</dc:creator>
      <pubDate>Tue, 16 Aug 2022 02:32:19 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-database-migration-service-part-1-of-2-3e6f</link>
      <guid>https://dev.to/aws-builders/aws-database-migration-service-part-1-of-2-3e6f</guid>
      <description>&lt;p&gt;The &lt;a href="https://aws.amazon.com/dms/" rel="noopener noreferrer"&gt;AWS Database Migration Service (AWS DMS)&lt;/a&gt; was designed to help quickly and securely migrate databases into AWS. The premise is that the source database remains available during the migration to help minimize application downtown. AWS DMS supports homogeneous migrations such as SQL Server to SQL Server or Oracle to Oracle as well as some heterogeneous migrations between different platforms. You can also use the service to continuously replicate data from any supported source to any supported target, meaning you can use DMS for both one-time replications as well as ongoing replications. AWS DMS works with relational databases and NoSQL databases as well as other types of data stores. One thing to note, however, is that at least one end of your migration must be on an AWS service, you cannot use AWS DMS to migrate between two on-premises databases.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Does it Work?
&lt;/h2&gt;

&lt;p&gt;You can best think of DMS as replication software running on a server in the cloud. There are literally dozens of these kinds of tools, some cloud-based, some that you install locally to move data between on-premises systems. DMS’s claim to fame is that you only pay for the work that you have it perform – there is no licensing fee for the service itself, as there is with most of the other software solutions.&lt;/p&gt;

&lt;p&gt;Figure 1 shows DMS at a high level. The green box in Figure 1 is the overall service and contains three major subcomponents. Two of these are endpoints used to connect to the source and target databases, and the third is the replication instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F08%2Fimage.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F08%2Fimage.png" alt="Figure 1. A high-level look at AWS Data Migration Service" width="800" height="400"&gt;&lt;/a&gt;&lt;em&gt;Figure 1. A high-level look at AWS Data Migration Service&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The replication instance is an Amazon EC2 instance that provides the resources necessary to carry out the database migration. Because it is a managed replication instance, you can get high availability and failover support if you choose a Multi-AZ deployment.&lt;/p&gt;

&lt;p&gt;AWS DMS uses this replication instance to connect to your source database through the source endpoint. The instance then reads the source data and performs any data formatting necessary to make it compatible with the target database. The instance then loads that data into the target database. Much of this processing is done in memory, however large data sets may need to be buffered onto disk as part of the transfer. Logs and other replication-specific data are also written onto the replication instance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a Replication Instance
&lt;/h2&gt;

&lt;p&gt;Enough about the way that it is put together, let’s jump directly into creating a migration service, and we will go over the various options as they come up in the process.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note:&lt;/strong&gt; Not all EC2 instance classes are available for use as a replication instance. As of the time of this writing, only T3 (general purpose), C5 (compute-optimized), and R5 (memory-optimized) Amazon EC2 instance classes can be used. You can use a t3.micro instance under the AWS Free Tier, however, there is a chance that you may be charged if the utilization of the instance over a rolling 24-hour period exceeds the baseline utilization. This will not be a problem in our example, but it may be with other approaches, especially if you use ongoing replication.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You can get to the AWS DMS console by searching for “DMS” or by going into the &lt;em&gt;Migration &amp;amp; Transfer&lt;/em&gt; service group and selecting it there. Click the &lt;strong&gt;Create replication instance&lt;/strong&gt; button once you get to the console landing page. This will take you to the creation page. Remember as you go through this that all we are doing here is creating the EC2 instance that DMS will use for processing, so all the questions will be around that.&lt;/p&gt;

&lt;p&gt;The fields that you can enter in the &lt;em&gt;Replication instance configuration&lt;/em&gt; section are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Name&lt;/strong&gt; – must be unique across all replication instances in the current region&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Descriptive Amazon Resource Name (ARN)&lt;/strong&gt; – This field is optional, but it allows you to use a friendly name for the ARN rather than the typical set of nonsense that AWS creates by default. This value cannot be changed after creation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Description&lt;/strong&gt; – Short description of the instance&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Instance class&lt;/strong&gt; – This is where you select the instance class on which your migration process will be running.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Engine version&lt;/strong&gt; – This option allows the targeting of previous versions of DMS, or the software that runs within the instance class – though we have no idea why you would ever target an older version.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Allocated storage&lt;/strong&gt; – The amount of storage space that you want in your instance. This is where items like log files will be stored; it will also be used for disk caching if the instance’s memory is not sufficient to handle all of the processing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;VPC&lt;/strong&gt; – Where the instance should be run.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi AZ&lt;/strong&gt; – You can choose between Production workload which will set up multi-AZ or Dev or test workload which will create the instance in a single AZ.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Publicly accessible&lt;/strong&gt; – This is necessary if you are looking to connect to databases outside of your VPC, or even outside of AWS.&lt;/li&gt;
&lt;/ul&gt;
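&lt;p&gt;For readers who prefer scripting the console, the fields above map onto the &lt;em&gt;CreateReplicationInstance&lt;/em&gt; API. A sketch of the equivalent parameters, assuming the Free Tier t3.micro and a dev/test (single-AZ) workload; the identifier and storage size are examples, not recommendations:&lt;/p&gt;

```python
# Rough CreateReplicationInstance parameter set for a dev/test instance.
create_instance_params = {
    "ReplicationInstanceIdentifier": "dev-replication",  # unique per region
    "ReplicationInstanceClass": "dms.t3.micro",          # Free Tier eligible
    "AllocatedStorage": 50,          # GB: logs plus disk caching headroom
    "MultiAZ": False,                # "Dev or test workload" in the console
    "PubliclyAccessible": True,      # needed to reach databases outside the VPC
}
```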

&lt;p&gt;There are three additional sections that you can configure. The first of these is &lt;em&gt;Advanced security and network configuration&lt;/em&gt;, where you can define the specific subnet group for your replication instance, the availability zone in which it should run, the VPC security groups that you want assigned to it, and the AWS Key Management Service key that you would like used.&lt;/p&gt;

&lt;p&gt;The next section is &lt;em&gt;Maintenance&lt;/em&gt;, where you can define the weekly maintenance window that AWS will use for maintaining the DMS engine software and operating system. You must have this configured, and AWS will set up a default window for you. The last section that you can configure is, of course, &lt;em&gt;Tags&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Once you click the &lt;strong&gt;Create&lt;/strong&gt; button you will see that your replication instance is being created as shown in Figure 2. This creation process will take several minutes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F08%2Fimage-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F08%2Fimage-1.png" alt="Figure 2. Creating a DMS replication instance" width="800" height="400"&gt;&lt;/a&gt;&lt;em&gt;Figure 2. Creating a DMS replication instance&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now that you have a replication instance, the next step is to create your endpoints.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating your Source and Target Endpoints
&lt;/h2&gt;

&lt;p&gt;As briefly mentioned above, the endpoints manage the connection to your source and target databases. They are managed independently from the replication instance because there are many cases where there are multiple replications that talk to a single source or target, such as copying one set of data to one target and another set of data from the same source to a second target such as shown in Figure 3.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F08%2Fimage-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F08%2Fimage-2.png" alt="Figure 3. Multiple replications against a single source endpoint" width="800" height="400"&gt;&lt;/a&gt;&lt;em&gt;Figure 3. Multiple replications against a single source endpoint&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;To create an endpoint, go into &lt;em&gt;Endpoints&lt;/em&gt; and select &lt;strong&gt;Create endpoint&lt;/strong&gt;. This will bring up the &lt;em&gt;Create endpoint&lt;/em&gt; screen. Your first option is to define the Endpoint type, as shown in Figure 4.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F08%2Fimage-3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F08%2Fimage-3.png" alt="Figure 4. Endpoint type options when creating a DMS endpoint" width="800" height="400"&gt;&lt;/a&gt;&lt;em&gt;Figure 4. Endpoint type options when creating a DMS endpoint&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Your first option when creating the endpoint is to determine whether it is going to be a source or a target endpoint. You might think this wouldn’t really matter, because a database connection is a database connection whether you are reading or writing, but DMS has made decisions about which databases it supports reading from and which databases you can write to, and, as you can likely predict, they are not the same list. Table 1 lists the different databases supported for each endpoint type, as of the time of this writing.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Database&lt;/th&gt;
&lt;th&gt;As Source&lt;/th&gt;
&lt;th&gt;As Target&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Oracle v10.2 and later&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SQL Server 2005 and later&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MySQL 5.5 and later&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MariaDB 10.0.24 and later&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PostgreSQL 9.4 and later&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SAP Adaptive Server Enterprise (ASE) 12.5 and above&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;IBM DB2 multiple versions&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Redis 6.x&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Azure SQL Database&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Google Cloud for MySQL&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;All RDS instance databases&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Amazon S3&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Amazon DocumentDB&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Amazon OpenSearch Service&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Amazon ElastiCache for Redis&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Amazon Kinesis Data Streams&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Amazon DynamoDB&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Amazon Neptune&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Apache Kafka&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Table 1. Databases available as sources and targets&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The next option in the &lt;em&gt;Endpoint type&lt;/em&gt; section is a checkbox to &lt;em&gt;Select RDS DB instance&lt;/em&gt;. Checking this box will bring up a dropdown containing a list of RDS instances as shown in Figure 5.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F08%2Fimage-4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F08%2Fimage-4.png" alt="Figure 5. Selecting an RDS database when creating an endpoint" width="800" height="400"&gt;&lt;/a&gt;&lt;em&gt;Figure 5. Selecting an RDS database when creating an endpoint&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The next section is the &lt;em&gt;Endpoint configuration&lt;/em&gt;. It has two primary parts: the first allows you to name the endpoint and select the type of database to which you are connecting, and the second is &lt;em&gt;Endpoint settings&lt;/em&gt;, where you can define any additional settings needed to access a specific database. Selecting the &lt;em&gt;Source/Target engine&lt;/em&gt; will expand the form, adding some additional fields.&lt;/p&gt;

&lt;p&gt;The first of these fields is &lt;em&gt;Access to endpoint database&lt;/em&gt;. There are two options available and the choice you make will change the rest of the form. These two options are &lt;em&gt;AWS Secrets Manager&lt;/em&gt;, where you use stored secrets for the login credentials, or &lt;em&gt;Provide access information manually&lt;/em&gt; where you manually configure the database connection.&lt;/p&gt;

&lt;p&gt;Selecting to use &lt;em&gt;AWS Secrets Manager&lt;/em&gt; will bring up additional fields as described below. These fields are used to fetch and access the appropriate secret.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Secret ID&lt;/strong&gt; – the actual secret to be used when logging into the database&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IAM role&lt;/strong&gt; – the IAM role that grants Amazon DMS the appropriate permissions to use the necessary secret&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secure Socket Layer (SSL) mode&lt;/strong&gt; – whether to use SSL when connecting to the database.&lt;/li&gt;
&lt;/ul&gt;
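&lt;p&gt;As a sketch, the Secrets Manager path above corresponds to settings on the &lt;em&gt;CreateEndpoint&lt;/em&gt; call. Below is an illustrative parameter set for a SQL Server source endpoint; the endpoint name and database are examples, and the secret ID and role ARN are placeholders you would replace with your own:&lt;/p&gt;

```python
# Rough CreateEndpoint parameter set for a SQL Server source using
# Secrets Manager credentials. IDs and ARNs are placeholders.
create_endpoint_params = {
    "EndpointIdentifier": "tradeyourtools-source",  # example name
    "EndpointType": "source",
    "EngineName": "sqlserver",
    "SslMode": "require",                           # SSL mode choice
    "MicrosoftSQLServerSettings": {
        "SecretsManagerSecretId": "my-sqlserver-secret",           # placeholder
        "SecretsManagerAccessRoleArn": "arn:aws:iam::...:role/X",  # placeholder
        "DatabaseName": "TradeYourTools",                          # example
    },
}
```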

&lt;p&gt;Selecting to &lt;em&gt;Provide access information manually&lt;/em&gt; brings up the various fields necessary to connect to that identified engine. Figure 6 shows what this looks like when connecting to a SQL Server, and hopefully, all these values look familiar because we have used them multiple times in earlier articles.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F08%2Fimage-5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F08%2Fimage-5.png" alt="Figure 6. Providing SQL Server information manually for an endpoint" width="800" height="400"&gt;&lt;/a&gt;&lt;em&gt;Figure 6. Providing SQL Server information manually for an endpoint&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The next section is the &lt;em&gt;Endpoint settings&lt;/em&gt; section. The purpose of this section is to add any additional settings that may be necessary for this particular instance of the database to which it is connecting. There are two ways in which you can provide this information. The first is through a &lt;em&gt;Wizard&lt;/em&gt;, while the second is through an &lt;em&gt;Editor&lt;/em&gt;. When using the Wizard approach, clicking the &lt;strong&gt;Add new setting&lt;/strong&gt; button will bring up a &lt;em&gt;Setting \ Value&lt;/em&gt; row, with the &lt;em&gt;Setting&lt;/em&gt; being a drop-down list of known settings as shown in Figure 7. These values will be different for each engine as well as whether you are using the endpoint as a source or a target.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F08%2Fimage-6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F08%2Fimage-6.png" alt="Figure 7. Endpoint settings section when creating a SQL Server endpoint" width="800" height="400"&gt;&lt;/a&gt;&lt;em&gt;Figure 7. Endpoint settings section when creating a SQL Server endpoint&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Selecting to use the &lt;em&gt;Editor&lt;/em&gt; approach will bring up a large text box where you can enter the endpoint settings in JSON format. This would likely be the best approach if you need to configure multiple DMS endpoints with the same additional settings.&lt;/p&gt;
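&lt;p&gt;To give a feel for the Editor box: the same settings the wizard offers as drop-down rows become one JSON document that you could paste into each endpoint you configure. A small sketch for a SQL Server endpoint, where both the chosen settings and their values are illustrative assumptions rather than recommendations:&lt;/p&gt;

```python
import json

# Two illustrative SQL Server endpoint settings as the Editor's JSON.
endpoint_settings = {
    "BcpPacketSize": 16384,   # packet size used for bulk-copy loads
    "UseBcpFullLoad": True,   # use BCP during the full-load phase
}

print(json.dumps(endpoint_settings, indent=2))
```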

&lt;p&gt;Once you have the &lt;em&gt;Endpoint configuration&lt;/em&gt; section complete, the next section is &lt;em&gt;KMS key&lt;/em&gt;, where you select the appropriate key to be used when encrypting the data that you have input into the configuration. The next section is &lt;em&gt;Tags&lt;/em&gt;. The last section, entitled &lt;em&gt;Test endpoint connection (optional)&lt;/em&gt;, is shown in Figure 8 and is where you can test all the information that you have just filled out.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F08%2Fimage-7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F08%2Fimage-7.png" alt="Figure 8. Testing an endpoint configuration" width="800" height="400"&gt;&lt;/a&gt;&lt;em&gt;Figure 8. Testing an endpoint configuration&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;There are two values that you must identify before you can run the test: the VPC and the replication instance that you want to use, which is why we had you create the replication instance first! These are necessary because they are the resources that will do the work of connecting to the database. Once the values are selected, click the &lt;strong&gt;Run test&lt;/strong&gt; button. After a surprisingly long wait, during which you see indications that the test is running, you should get confirmation that your test was successful. This output is shown in Figure 9.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F08%2Fimage-8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F08%2Fimage-8.png" alt="Figure 9. Successful test on an endpoint configuration" width="800" height="400"&gt;&lt;/a&gt;&lt;em&gt;Figure 9. Successful test on an endpoint configuration&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Obviously, you will need to configure at least one source endpoint and one target endpoint before you can run DMS end to end. However, you also need to make sure that you have each of them configured before you can configure the database migration task. We’ll finish that up in the next article!&lt;/p&gt;

</description>
      <category>database</category>
      <category>cloud</category>
      <category>migration</category>
      <category>aws</category>
    </item>
    <item>
      <title>Deploying New Container Using AWS App2Container</title>
      <dc:creator>Bill "The Vest Guy" Penberthy</dc:creator>
      <pubDate>Thu, 07 Jul 2022 03:19:08 +0000</pubDate>
      <link>https://dev.to/aws-builders/deploying-new-container-using-aws-app2container-2k2i</link>
      <guid>https://dev.to/aws-builders/deploying-new-container-using-aws-app2container-2k2i</guid>
      <description>&lt;p&gt;In our last article, we went through the containerization of a running application. The last step of this process is to deploy the container.  The default approach is to deploy a container image to ECR and then create the CloudFormation templates to run that image in Amazon ECS using Fargate. If you would prefer to deploy to Amazon EKS instead, you will need to go to the &lt;em&gt;deployment.json&lt;/em&gt; file in the output directory. This editable file contains the default settings for the application, ECR, ECS, and EKS. We will walk through each of the major areas in turn.&lt;/p&gt;

&lt;p&gt;The first section is responsible for defining the application and is shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"a2CTemplateVersion": "1.0",
"applicationId": "iis-tradeyourtools-6bc0a317",
"imageName": "iis-tradeyourtools-6bc0a317",
"exposedPorts": [
       {
              "localPort": 80,
              "protocol": "http"
       }
],
"environment": [],
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;em&gt;applicationId&lt;/em&gt; and the &lt;em&gt;imageName&lt;/em&gt; are values we have seen before when going through App2Container. The &lt;em&gt;exposedPorts&lt;/em&gt; value should contain all of the IIS ports configured for the application. The one used in the example was not configured for HTTPS, but if it were, there would be another entry for that port. The &lt;em&gt;environment&lt;/em&gt; value allows you to enter, as key/value pairs, any environment variables the application uses. Unfortunately, App2Container cannot determine those on its own because it analyzes the running code rather than the code base. In our example, no environment variables are necessary.&lt;/p&gt;
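
&lt;p&gt;If your application did use environment variables, you would add them to this array by hand. A purely hypothetical entry might look like the following; the name and value are illustrative, and the exact field casing is an assumption patterned on the other sections of this file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"environment": [
       {
              "name": "APP_LOG_LEVEL",
              "value": "Warning"
       }
],
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;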

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note&lt;/strong&gt; – If you aren’t sure whether your application accesses any environment variables, you can see which ones are available by going to &lt;strong&gt;System -&amp;gt; Advanced system settings -&amp;gt; Environment variables&lt;/strong&gt;. This will give you a list of the available variables, which you can then evaluate for relevance to your application.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The next section is quite small and contains the ECR configuration. The ECR repository that will be created is named with the &lt;em&gt;imageName&lt;/em&gt; from above and tagged with the value in &lt;em&gt;ecrRepoTag&lt;/em&gt;, as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"ecrParameters": {
       "ecrRepoTag": "latest"
},
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are using the value &lt;em&gt;latest&lt;/em&gt; as our version tag.&lt;/p&gt;

&lt;p&gt;There are two remaining sections in the deployment.json file. The first is the ECS setup information with the second being the EKS setup information. We will first look at the ECS section. This entire section is listed below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"ecsParameters": {
       "createEcsArtifacts": true,
       "ecsFamily": "iis-tradeyourtools-6bc0a317",
       "cpu": 2,
       "memory": 4096,
       "dockerSecurityOption": "",
       "enableCloudwatchLogging": false,
       "publicApp": true,
       "stackName": "a2c-iis-tradeyourtools-6bc0a317-ECS",
       "resourceTags": [
              {
                     "key": "example-key",
                     "value": "example-value"
              }
       ],
       "reuseResources": {
              "vpcId": "vpc-f4e4d48c",
              "reuseExistingA2cStack": {
                     "cfnStackName": "",
                     "microserviceUrlPath": ""
              },
              "sshKeyPairName": "",
              "acmCertificateArn": ""
       },
       "gMSAParameters": {
              "domainSecretsArn": "",
              "domainDNSName": "",
              "domainNetBIOSName": "",
              "createGMSA": false,
              "gMSAName": ""
       },
       "deployTarget": "FARGATE",
       "dependentApps": []
},
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The most important value here is &lt;em&gt;createEcsArtifacts&lt;/em&gt;, which, when set to true, means that deploying with App2Container will deploy the image into ECS. The next ones to look at are &lt;em&gt;cpu&lt;/em&gt; and &lt;em&gt;memory&lt;/em&gt;. These values are only used for Linux containers, so in our case they do not matter because this is a Windows container. The next two values, &lt;em&gt;dockerSecurityOption&lt;/em&gt; and &lt;em&gt;enableCloudwatchLogging&lt;/em&gt;, are only changed in special cases, so they will generally stay at their default values. The next value, &lt;em&gt;publicApp&lt;/em&gt;, determines whether the application will be configured into a public subnet with a public endpoint. This is set to &lt;em&gt;true&lt;/em&gt; because that is the behavior we want. The next value, &lt;em&gt;stackName&lt;/em&gt;, defines the name of the CloudFormation stack, while the value after that, &lt;em&gt;resourceTags&lt;/em&gt;, is the set of custom tags to add to the ECS task definition. The file ships with a placeholder key/value pair; any entry whose key is &lt;strong&gt;example-key&lt;/strong&gt; is ignored, so only tags with other keys will actually be added.&lt;/p&gt;

&lt;p&gt;The next section, &lt;em&gt;reuseResources&lt;/em&gt;, is where you can configure whether you wish to use any pre-existing resources, namely a VPC, whose ID goes in the &lt;em&gt;vpcId&lt;/em&gt; value. When left blank, as shown below, App2Container will create a new VPC.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"reuseResources": {
     "vpcId": "",
     "reuseExistingA2cStack": {
            "cfnStackName": "",
            "microserviceUrlPath": ""
     },
     "sshKeyPairName": "",
     "acmCertificateArn": ""
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running the deployment with these settings will result in a brand-new VPC being created. This means that, by default, you wouldn’t be able to connect in or out of the VPC without making changes to it. If, however, you have an existing VPC that you want to use, update the &lt;em&gt;vpcId&lt;/em&gt; key with the ID of the appropriate VPC.&lt;/p&gt;
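
&lt;p&gt;For example, to reuse an existing VPC you change only the &lt;em&gt;vpcId&lt;/em&gt; value; the ID below is a placeholder, not a real VPC:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"reuseResources": {
     "vpcId": "vpc-0a1b2c3d4e5f67890",
     "reuseExistingA2cStack": {
            "cfnStackName": "",
            "microserviceUrlPath": ""
     },
     "sshKeyPairName": "",
     "acmCertificateArn": ""
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;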

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note&lt;/strong&gt;: App2Container requires that the supplied VPC has a route table associated with at least two subnets and an internet gateway. The CloudFormation template for the ECS service requires this so that there is a route from your service to the internet from at least two different AZs for availability. Currently, there is no way for you to define these subnets. If your VPC is not set up properly, you will receive a &lt;strong&gt;Resource creation failures: PublicLoadBalancer: At least two subnets in two different Availability Zones must be specified&lt;/strong&gt; message.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You can also choose to reuse an existing stack created by App2Container. Doing this ensures that the application is deployed into the existing VPC and that the URL for the new application is added to the existing Application Load Balancer rather than to a new one.&lt;/p&gt;

&lt;p&gt;The next value, &lt;em&gt;sshKeyPairName&lt;/em&gt;, is the name of the EC2 key pair used for the instances on which your container runs. Using this rather defeats the point of using containers, so we left it blank as well. The last value, &lt;em&gt;acmCertificateArn&lt;/em&gt;, is for the AWS Certificate Manager ARN that you want to use if you are enabling HTTPS on the created ALB. This parameter is required if you use an HTTPS endpoint for your ALB; remember, as we went over earlier, this means the request forwarded into the application will be on port 80 and unencrypted, because TLS is terminated at the ALB.&lt;/p&gt;
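
&lt;p&gt;If you do enable HTTPS on the ALB, the certificate is referenced by its full ARN. The value below is a made-up example showing the format only:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"acmCertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/12345678-1234-1234-1234-123456789012"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;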

&lt;p&gt;The next set of configuration values is part of the &lt;em&gt;gMSAParameters&lt;/em&gt; section. This becomes important to manage if your application relies upon group Managed Service Accounts (gMSAs) in Active Directory. This can only be used if deploying to EC2, not Fargate (more on this later). The individual values are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;domainSecretsArn&lt;/em&gt; – The AWS Secrets Manager ARN containing the domain credentials required to join the ECS nodes to Active Directory.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;domainDNSName&lt;/em&gt; – The DNS Name of the Active Directory the ECS nodes will join.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;domainNetBIOSName&lt;/em&gt; – The NetBIOS name of the Active Directory to join.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;createGMSA&lt;/em&gt; – A flag determining whether to create the gMSA Active Directory security group and account using the name supplied in the gMSAName field.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;gMSAName&lt;/em&gt; – The name of the Active Directory account the container should use for access.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are two fields remaining, &lt;em&gt;deployTarget&lt;/em&gt; and &lt;em&gt;dependentApps&lt;/em&gt;. For &lt;em&gt;deployTarget&lt;/em&gt; there are two valid values for .NET applications running on Windows: &lt;em&gt;fargate&lt;/em&gt; and &lt;em&gt;ec2&lt;/em&gt;. You can only deploy to Fargate if your container is based on Windows Server 2019 or more recent, which is only possible if your worker machine, the one you used for containerizing, was running Windows Server 2019+. Also, you cannot deploy to Fargate if you are using gMSA.&lt;/p&gt;

&lt;p&gt;The value &lt;em&gt;dependentApps&lt;/em&gt; is interesting, as it handles those applications that AWS defines as “complex Windows applications”. We won’t go into it in more detail here, but you can go to &lt;a href="https://docs.aws.amazon.com/app2container/latest/UserGuide/summary-complex-win-apps.html"&gt;https://docs.aws.amazon.com/app2container/latest/UserGuide/summary-complex-win-apps.html&lt;/a&gt; if you are interested in learning more about these types of applications.&lt;/p&gt;

&lt;p&gt;The next section in the deployment.json file is &lt;em&gt;eksParameters&lt;/em&gt;. You will see that most of these parameters are the same as the ECS parameters we just went over. The only differences are the &lt;em&gt;createEksArtifacts&lt;/em&gt; parameter, which needs to be set to true if deploying to EKS, and, in the gMSA section, the &lt;em&gt;gMSAName&lt;/em&gt; parameter, which has inexplicably been renamed &lt;em&gt;gMSAAccountName&lt;/em&gt;.&lt;/p&gt;
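
&lt;p&gt;A trimmed sketch of that section is shown below. Aside from those two differences, assume the remaining fields mirror the &lt;em&gt;ecsParameters&lt;/em&gt; section above; the exact field list in your generated file may differ:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"eksParameters": {
       "createEksArtifacts": true,
       "gMSAParameters": {
              "domainSecretsArn": "",
              "domainDNSName": "",
              "domainNetBIOSName": "",
              "createGMSA": false,
              "gMSAAccountName": ""
       }
},
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;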

&lt;p&gt;Once you have the deployment file set as desired, you next deploy the container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PS C:\App2Container&amp;gt; app2container generate app-deployment --application-id APPID --deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This process takes several minutes, and you should get an output like Figure 1. The gold arrow points to the URL where you can go see your deployed application – go ahead and look at it to confirm that it has been successfully deployed and is running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jj_gu_oh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/07/image-5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jj_gu_oh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/07/image-5.png" alt="Figure 1. Output from generating an application deployment in App2Container" width="825" height="247"&gt;&lt;/a&gt;&lt;em&gt;Figure 1. Output from generating an application deployment in App2Container&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Logging in to the AWS console and going to Amazon ECR will show you the ECR repository that was created to store your image as shown in Figure 2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8y552AQt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/07/image-6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8y552AQt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/07/image-6.png" alt="Figure 2. Verifying the new container image is available in ECR" width="825" height="187"&gt;&lt;/a&gt;&lt;em&gt;Figure 2. Verifying the new container image is available in ECR&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Once everything has been deployed and verified, you can poke around in ECS to see how it is all put together. Remember, though, if you are looking to make modifications, it is highly recommended that you use the CloudFormation templates: make the changes there and then re-upload them as a new version. That way you will be able to easily redeploy as needed and not worry about losing any changes you may have made. You can either alter the templates in the CloudFormation section of the console, or you can find the templates in your App2Container working directory, update those, and then use them to update the stack.&lt;/p&gt;
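
&lt;p&gt;As a sketch of that last approach, an edited local template can be pushed back up with the AWS CLI. The stack name below matches the &lt;em&gt;stackName&lt;/em&gt; from our deployment.json, but the template file name is a placeholder, and depending on the resources in the template you may also need to pass the --capabilities option:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PS C:\App2Container&amp;gt; aws cloudformation update-stack --stack-name a2c-iis-tradeyourtools-6bc0a317-ECS --template-body file://ecs-template.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;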

</description>
      <category>aws</category>
      <category>dotnet</category>
      <category>docker</category>
      <category>app2container</category>
    </item>
    <item>
      <title>Containerizing a Running Application with AWS App2Container</title>
      <dc:creator>Bill "The Vest Guy" Penberthy</dc:creator>
      <pubDate>Tue, 05 Jul 2022 13:25:39 +0000</pubDate>
      <link>https://dev.to/aws-builders/containerizing-a-running-application-with-aws-app2container-1dj2</link>
      <guid>https://dev.to/aws-builders/containerizing-a-running-application-with-aws-app2container-1dj2</guid>
      <description>&lt;p&gt;Now that we have gone through containerizing an existing application where you have access to the source code, let’s look at containerizing a .NET application in a different way. This approach is for applications that are already running and where you may not have access to the source code, do not control its deployment, or for other reasons do not want to change the source code as we did earlier. Instead, you want to containerize the application by simply “picking it up off its server” and moving it into a container. Until recently, that was not a simple thing to do. However, AWS created a tool to help you do just that. Let’s look at it now.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is AWS App2Container?
&lt;/h2&gt;

&lt;p&gt;AWS App2Container is a command-line tool that is designed to help migrate .NET web applications into a container format. You can learn more about and download this tool at &lt;a href="https://aws.amazon.com/app2container/"&gt;https://aws.amazon.com/app2container/&lt;/a&gt;.  It also does Java, but hey, we're all about .NET, so we won’t talk about that anymore! You can see the process in Figure 1, but at a high level, there are five major steps.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5rSyRFPx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://billthevestguy.com/wp-content/uploads/2022/07/image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5rSyRFPx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://billthevestguy.com/wp-content/uploads/2022/07/image.png" alt="Figure 1. How AWS App2Container works" width="825" height="213"&gt;&lt;/a&gt;&lt;em&gt;Figure 1. How AWS App2Container works&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;These steps are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Inventory&lt;/strong&gt; – This step inventories the server, looking for running applications that can be containerized. At the time of writing, App2Container supports ASP.NET 3.5 and greater applications running in IIS 7.5+ on Windows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Analyze&lt;/strong&gt; – A chosen application is analyzed in detail to identify dependencies including known cooperating processes and network port dependencies. You can also manually add any dependencies that App2Container was unable to find.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Containerize&lt;/strong&gt; – In this step, all the application artifacts discovered during the “Analyze” phase are “dockerized.” &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create&lt;/strong&gt; – This step creates the various deployment artifacts (generally as CloudFormation templates) such as ECS task or Kubernetes pod definitions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy&lt;/strong&gt; – Store the image in Amazon ECR and deploy to ECS or EKS as desired.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;There are three different modes in which you can use App2Container. The first is a mode where you perform the steps on two different machines. If using this approach, App2Container must be installed on both machines. The first machine, the &lt;em&gt;Server&lt;/em&gt;, is the machine on which the application(s) that you want to containerize is running. You will run the first two steps on the server. The second machine, the &lt;em&gt;Worker&lt;/em&gt;, is the machine that will perform the final three steps of the process based on artifacts that you copy from the server. The second mode is when you perform all the steps on the same machine, so it basically fills both the server and worker roles. The third mode is when you run all the commands on your worker machine, connecting to the server machine using the Windows Remote Management (WinRM) protocol. This approach has the benefit of not having to install App2Container on the server, but it also means that you must have WinRM installed and running. We will not be demonstrating this mode.&lt;/p&gt;

&lt;p&gt;App2Container is a command-line tool that has some prerequisites that must be installed before the tool will run. These prerequisites are listed below.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS CLI&lt;/strong&gt; – must be installed on both server and worker&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PowerShell 5.0+&lt;/strong&gt; – must be installed on both server and worker&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Administrator rights&lt;/strong&gt; – You must be running as a Windows administrator&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Appropriate permissions&lt;/strong&gt; – You must have AWS credentials stored on the worker machine as was discussed in the earlier chapters when installing the AWS CLI.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker tools&lt;/strong&gt; – Docker version 17.07 or later must be installed on worker&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Windows Server OS&lt;/strong&gt; – Your worker system must run a Windows OS version that supports containers, namely Windows Server 2016 or 2019. If working in server/worker mode, the server system must be Windows Server 2008+.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Free Space&lt;/strong&gt; – 20-30 GB of free space should be available on both server and worker&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The currently supported types of applications are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simple ASP.NET applications running on a single server&lt;/li&gt;
&lt;li&gt;A Windows service running on a single server&lt;/li&gt;
&lt;li&gt;Complex ASP.NET applications that depend on WCF, running on a single server or multiple servers&lt;/li&gt;
&lt;li&gt;Complex ASP.NET applications that depend on Windows services or processes outside of IIS, running on a single server or multiple servers&lt;/li&gt;
&lt;li&gt;Complex, multi-node IIS or Windows service applications, running on a single server or multiple servers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are also two types of applications that are not supported:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ASP.NET applications that use files and registries outside of IIS web application directories&lt;/li&gt;
&lt;li&gt;ASP.NET applications that depend on features of a Windows operating system version prior to Windows Server Core 2016&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that we have described App2Container as well as the .NET applications on which it will and will not work, the next step is to show how to use the tool.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using AWS App2Container to Containerize an Application
&lt;/h2&gt;

&lt;p&gt;We will first describe the application that we are going to containerize. We have installed a .NET Framework 4.7.2 application onto a Windows EC2 instance that supports containers; the AMI we used is shown in Figure 2. Please note that since EC2 regularly revises its AMIs, you may see a different ID.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UtuQ1I3s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://billthevestguy.com/wp-content/uploads/2022/07/image-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UtuQ1I3s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://billthevestguy.com/wp-content/uploads/2022/07/image-1.png" alt="Figure 2. AMI used to host the website to containerize" width="825" height="129"&gt;&lt;/a&gt;&lt;em&gt;Figure 2. AMI used to host the website to containerize&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The application is connected to an RDS SQL Server instance for database access using Entity Framework, and the connection string is stored in the &lt;em&gt;web.config&lt;/em&gt; file.&lt;/p&gt;

&lt;p&gt;The next step, now that we have a running application, is to download the AWS App2Container tool. You can access the tool by going to &lt;a href="https://aws.amazon.com/app2container/"&gt;https://aws.amazon.com/app2container/&lt;/a&gt; and clicking the &lt;strong&gt;Download AWS App2Container&lt;/strong&gt; button at the top of the page. This will bring you to the &lt;em&gt;Install App2Container&lt;/em&gt; page in the documentation, which has a link to download a zip file containing the App2Container installation package. Download the file and extract it to a folder on the server. If you are working in server/worker mode, then download and extract the file on both machines. After you unzip the downloaded file, you should have 5 files, one of which is another zipped file.&lt;/p&gt;

&lt;p&gt;Open PowerShell and navigate to the folder containing App2Container. You must then run the install script.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PS C:\App2Container&amp;gt; .\install.ps1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will see the script running through several checks and then present some terms and conditions text that will require you to respond with a y to continue. You will then be able to see the tool complete its installation.&lt;/p&gt;

&lt;p&gt;The next step is to initialize and configure App2Container. If using server/worker mode, then you will need to do this on each machine. You start the initializing with the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PS C:\App2Container&amp;gt; app2container init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It will then prompt you for a &lt;em&gt;Workspace directory path for artifacts&lt;/em&gt; value. This is where the files from the analysis and any containerization will be stored. Press &lt;strong&gt;Enter&lt;/strong&gt; to accept the default value or enter a new directory. It will then ask for an &lt;em&gt;Optional AWS Profile&lt;/em&gt;. You can press &lt;strong&gt;Enter&lt;/strong&gt; if you have a default profile set up, or enter the name of a different profile to use.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note&lt;/strong&gt;: It is likely that a server running the application you want to containerize does not have the appropriate profile available. If not, you can set one up by running the aws configure command to set up your CLI installation that App2Container will use to create and upload the created container.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Next, the initialization will ask you for an &lt;em&gt;Optional S3 bucket for application artifacts&lt;/em&gt;. Providing a value in this step will result in the tool output also being copied to the provided bucket. You can press &lt;strong&gt;Enter&lt;/strong&gt; to use the default of “no bucket”; however, at the time of this writing you must have this value configured so that it can act as storage for moving the container image into ECR. We used an S3 bucket called “prodotnetonaws-app2container”. The next initialization step asks whether you wish to &lt;em&gt;Report usage metrics to AWS? (Y/N)&lt;/em&gt;. No personal or confidential information is gathered, so we recommend that you press &lt;strong&gt;Enter&lt;/strong&gt; to accept the default of “Y”. The following initialization prompt asks if you want to &lt;em&gt;Automatically upload logs and App2Container generated artifacts on crashes and internal errors? (Y/N)&lt;/em&gt;. We want AWS to know as soon as possible if something went wrong, so we selected “y”. The last initialization prompt asks whether to &lt;em&gt;Require images to be signed using Docker Content Trust (DCT)? (Y/N)&lt;/em&gt;. We selected the default value, “n”. The initialization will then display the path in which the artifacts will be created and stored. Figure 3 shows our installation when completed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_Gx91qfk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://billthevestguy.com/wp-content/uploads/2022/07/image-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_Gx91qfk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://billthevestguy.com/wp-content/uploads/2022/07/image-2.png" alt="Figure 3. Output from running the App2Container initialization" width="825" height="135"&gt;&lt;/a&gt;&lt;em&gt;Figure 3. Output from running the App2Container initialization&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;For those of you using the server/worker mode approach, take note of the application artifact directory displayed in the last line of the command output as this will contain the artifacts that you will need to move to the worker machine. Now that the application is initialized, the next step is to take the inventory of eligible applications running on the server. You do this by issuing the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PS C:\App2Container&amp;gt; app2container inventory
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output from this command is a JSON object collection that has one entry for each application. The output on our EC2 server is shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
     "iis-demo-site-a7b69c34": {
          "siteName": "Demo Site",
          "bindings": "http/*:8080:",
          "applicationType": "IIS"
      },
      "iis-tradeyourtools-6bc0a317": {
          "siteName": "TradeYourTools",
          "bindings": "http/*:80:",
          "applicationType": "IIS"
      }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, there are two applications on our server: the “Trade Your Tools” app we described earlier, as well as another website, “Demo Site”, that is running under IIS and is bound to port 8080. The key for each entry is the application ID, which you will need moving forward.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note&lt;/strong&gt;: You can only containerize one application at a time. If you wish to containerize multiple applications from the same server you will need to repeat the following steps for each one of those applications.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The next step is to analyze the specific application that you are going to containerize. You do that with the following command, replacing the application ID (APPID) in the command with your own.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PS C:\App2Container&amp;gt; app2container analyze --application-id APPID
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will see a lot of flashing that shows the progress output as the tool analyzes the application, and when it is complete you will get output like that shown in Figure 4.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fcbNqsfK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://billthevestguy.com/wp-content/uploads/2022/07/image-3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fcbNqsfK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://billthevestguy.com/wp-content/uploads/2022/07/image-3.png" alt="Figure 4. Output from running the App2Container analyze command" width="825" height="160"&gt;&lt;/a&gt;&lt;em&gt;Figure 4. Output from running the App2Container analyze command&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The primary output from this analysis is the analysis.json file that is listed in the command output. Locating and opening that file will allow you to see the information that the tool gathered about the application, much of which is a capture of the IIS configuration for the site running your application. We won’t show the contents of the file here, as it is several hundred lines long; however, much of the content can be edited as you see necessary.&lt;/p&gt;

&lt;p&gt;The next steps branch depending upon whether you are using a single server or using the server/worker mode.&lt;/p&gt;

&lt;h3&gt;
  
  
  When containerizing on a single server
&lt;/h3&gt;

&lt;p&gt;Once you are done reviewing the artifacts created from the analysis, the next step is to containerize the application. You do this with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PS C:\App2Container&amp;gt; app2container containerize --application-id APPID
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The processing in this step may take some time to run, especially if, like us, you used a free-tier low-powered machine! Once completed, you will see output like Figure 5.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6Ug4H0i2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://billthevestguy.com/wp-content/uploads/2022/07/image-4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6Ug4H0i2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://billthevestguy.com/wp-content/uploads/2022/07/image-4.png" alt="Figure 5. Output from containerizing an application in App2Container" width="825" height="170"&gt;&lt;/a&gt;&lt;em&gt;Figure 5. Output from containerizing an application in App2Container&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;At this point, you are ready to deploy your container and can skip to the “Deploying…” section if you don’t care about containerizing using server/worker mode.&lt;/p&gt;

&lt;h3&gt;
  
  
  When containerizing using server/worker mode
&lt;/h3&gt;

&lt;p&gt;Once you are done reviewing the artifacts created from the analysis, the next step is to extract the application. This will create the archive that will need to be moved to the worker machine for containerizing. If you provided an S3 bucket during initialization, the tool will also upload the archive to that bucket; otherwise, you must manually copy the file. The command to extract the application is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PS C:\App2Container&amp;gt; app2container extract --application-id APPID
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will process, and you should get a simple “Extraction successful” message. &lt;/p&gt;

&lt;p&gt;Returning to the artifact directory that was displayed when initializing App2Container, you will see a new zip file named with your Application ID. Copy this file to the worker machine.&lt;/p&gt;

&lt;p&gt;Once you are on the worker machine and App2Container has been initialized, the next step is to containerize the content from the archive. You do that with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PS C:\App2Container&amp;gt; app2container containerize --input-archive PathToZip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output from this step matches the output from running the containerization on a single server and can be seen in Figure 5 above.&lt;/p&gt;

&lt;p&gt;The next article will show how to deploy this containerized application into AWS.&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>aws</category>
      <category>docker</category>
      <category>app2container</category>
    </item>
    <item>
      <title>Containerizing a .NET Core-based Application for AWS</title>
      <dc:creator>Bill "The Vest Guy" Penberthy</dc:creator>
      <pubDate>Mon, 27 Jun 2022 13:52:30 +0000</pubDate>
      <link>https://dev.to/aws-builders/containerizing-a-net-core-based-application-for-aws-1pd8</link>
      <guid>https://dev.to/aws-builders/containerizing-a-net-core-based-application-for-aws-1pd8</guid>
      <description>&lt;p&gt;In our last post in this series, we talked about Containerizing a .NET 4.x Application for deployment onto AWS, and as you may have seen it was a somewhat convoluted affair. Containerizing a .NET Core type application is much easier, because a lot of the hoops that you must leap through to manage a Windows container will not be necessary. Instead, all AWS products, as well as IDEs, will support this out the gate.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using Visual Studio
&lt;/h2&gt;

&lt;p&gt;We have already gone through adding container support using Visual Studio, and doing it now with a .NET Core-based application does not change that part of the process at all. What does change, however, is the ease of getting the newly containerized application into AWS. Once the Dockerfile has been added, the “&lt;em&gt;Publish to AWS&lt;/em&gt;” option available when right-clicking on the project name in the Solution Explorer is greatly expanded. Since our objective is to get this application deployed to Amazon ECR, choose &lt;em&gt;Push Container Images to Amazon Elastic Container Registry&lt;/em&gt; and click the &lt;strong&gt;Publish&lt;/strong&gt; button. You will see the process walk through a few steps, and it will end with a message stating that the image has been successfully deployed into ECR.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using JetBrains Rider
&lt;/h2&gt;

&lt;p&gt;The process of adding a container using JetBrains Rider is very similar to the process used in Visual Studio. Open your application in Rider, right-click the project, select &lt;strong&gt;Add&lt;/strong&gt;, and then &lt;strong&gt;Docker Support&lt;/strong&gt; as shown in Figure 1.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WwS3zMp9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/06/image-7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WwS3zMp9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i0.wp.com/billthevestguy.com/wp-content/uploads/2022/06/image-7.png" alt="Figure 1. Adding Docker Support in JetBrains Rider." width="825" height="584"&gt;&lt;/a&gt;&lt;em&gt;Figure 1. Adding Docker Support in JetBrains Rider.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This will bring up a window where you select the &lt;strong&gt;Target OS&lt;/strong&gt;, in this case, Linux. Once you have finished, you will see a &lt;em&gt;Dockerfile&lt;/em&gt; show up in your solution. Unfortunately, the AWS Toolkit for Rider does not currently support deploying the new container image to ECR. This means that any deployment to the cloud must be done with the AWS CLI or the AWS Tools for PowerShell, and it would be the same as the upload process used when storing a Windows container in ECR that we went over in an earlier post.&lt;/p&gt;

&lt;p&gt;As you can see, a .NET Core-based application is both easier to containerize and easier to deploy into AWS.&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>docker</category>
      <category>jetbrainsrider</category>
      <category>visualstudio</category>
    </item>
    <item>
      <title>Containerizing a .NET 4.x Application for AWS</title>
      <dc:creator>Bill "The Vest Guy" Penberthy</dc:creator>
      <pubDate>Thu, 23 Jun 2022 15:24:04 +0000</pubDate>
      <link>https://dev.to/aws-builders/containerizing-a-net-framework-4x-application-for-aws-4a6m</link>
      <guid>https://dev.to/aws-builders/containerizing-a-net-framework-4x-application-for-aws-4a6m</guid>
      <description>&lt;p&gt;In this post we are going to demonstrate ways in which you can containerize your applications for deployment into the cloud, the next step in minimizing resource usage and likely saving money. This article is different from the previous entries in this series because those were a discussion of containers and running them within the &lt;a href="https://aws.amazon.com" rel="noopener noreferrer"&gt;AWS&lt;/a&gt; infrastructure while this post is much more practical and based upon getting to that point from an existing non-containerized application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using Visual Studio
&lt;/h2&gt;

&lt;p&gt;Adding container support using Visual Studio is straightforward.&lt;/p&gt;

&lt;h3&gt;
  
  
  Adding Docker Support
&lt;/h3&gt;

&lt;p&gt;Open an old ASP.NET Framework 4.7 application or create a new one. Once open, right-click on the project name, select &lt;strong&gt;Add&lt;/strong&gt;, and then &lt;strong&gt;Docker Support&lt;/strong&gt; as shown in Figure 1.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F06%2Fimage.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F06%2Fimage.png" alt="Figure 1. Adding Docker Support to an application."&gt;&lt;/a&gt;&lt;em&gt;Figure 1. Adding Docker Support to an application.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Your Output view, when set to show output from Container Tools, will show multiple steps being performed, and then it should finish successfully. When completed, you will see two new files in the Solution Explorer: &lt;strong&gt;Dockerfile&lt;/strong&gt; and a subordinate &lt;strong&gt;.dockerignore&lt;/strong&gt; file. You will also see that your default Debug setting has changed to Docker. You can see both changes in Figure 2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F06%2Fimage-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F06%2Fimage-1.png" alt="Figure 2. Changes in Visual Studio after adding Docker support."&gt;&lt;/a&gt;&lt;em&gt;Figure 2. Changes in Visual Studio after adding Docker support.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You can test the support by clicking the Docker button. This will build the container, run it under your local Docker Desktop, and then open your default browser. This time, rather than going to a localhost URL, the browser will go to an IP address, and if you compare the IP address in the URL to your local IP you will see that they are not the same. That is because this new IP address points to the container running on your system.&lt;/p&gt;

&lt;p&gt;Before closing the browser and stopping the debug process, you will be able to confirm that the container is running by using the Containers view in Visual Studio as shown in Figure 3.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F06%2Fimage-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F06%2Fimage-2.png" alt="Figure 3. Using the Containers view in Visual Studio to see the running container"&gt;&lt;/a&gt;&lt;em&gt;Figure 3. Using the Containers view in Visual Studio to see the running container.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You can also use &lt;a href="https://www.docker.com/products/docker-desktop/" rel="noopener noreferrer"&gt;Docker Desktop&lt;/a&gt; to view running containers. Open Docker Desktop and select &lt;strong&gt;Containers&lt;/strong&gt; / &lt;strong&gt;Apps&lt;/strong&gt;. This will bring you to a list of the running containers and apps, one of which will be the container that you just started as shown in Figure 4.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F06%2Fimage-3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F06%2Fimage-3.png" alt="Figure 4. Viewing a running container in Docker Desktop"&gt;&lt;/a&gt;&lt;em&gt;Figure 4. Viewing a running container in Docker Desktop.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Once these steps have been completed, you are ready to save your container in ECR, just as we covered earlier in the series.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploying your Windows Container to ECR
&lt;/h2&gt;

&lt;p&gt;There are, however, some complications, as the AWS Toolkit for Visual Studio does not support the container deployment options we saw in an earlier article when working with Windows containers. Instead, we are going to use the &lt;a href="https://aws.amazon.com/powershell/" rel="noopener noreferrer"&gt;AWS PowerShell tools&lt;/a&gt; to build and publish your image to ECR. At a high level, the steps are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Build your application in Release mode&lt;/em&gt;. This is the only way that Visual Studio puts the appropriate files in the right place, namely the obj\Docker\publish subdirectory of your project directory. You can see this value called out in the last line of your Dockerfile: COPY ${source:-obj/Docker/publish} .&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Refresh your ECR authentication token&lt;/em&gt;. You need this later in the process so that you can log in to ECR to push the image.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Build the Docker image&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Tag the image&lt;/em&gt;. This creates the image tag on the repository.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Push the image to the server&lt;/em&gt;. This copies the image into ECR.&lt;/li&gt;
&lt;/ul&gt;
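&lt;p&gt;The steps above can also be strung together as a dry run that just prints each command so you can inspect it before executing it for real. The region, account ID, and repository name below are hypothetical placeholders, and the login line is the AWS CLI equivalent of the PowerShell Get-ECRLoginCommand flow:&lt;/p&gt;

```shell
# Placeholder values -- substitute your own region, account, and repository.
REGION="us-east-1"
ACCOUNT="123456789012"
REPO="prodotnetonaws"
URI="${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com/${REPO}"

# Print (not run) each remaining step so it can be reviewed first.
echo "aws ecr get-login-password --region ${REGION} | docker login --username AWS --password-stdin ${URI}"
echo "docker build -t ${REPO} ."
echo "docker tag ${REPO}:latest ${URI}:latest"
echo "docker push ${URI}:latest"
```

&lt;p&gt;Drop the echo wrappers to execute the commands; the Release build must still be done in Visual Studio first so that obj\Docker\publish is populated.&lt;/p&gt;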

&lt;p&gt;Let’s walk through them now. The first step is to build your application in Release mode. However, before you can do that, you will need to stop your currently running container. You can do that through either Docker Desktop or the Containers view in Visual Studio. If you do not do this, your build will fail because you will not be able to overwrite the necessary files. Once that is done, your Release mode build should run without a problem.&lt;/p&gt;

&lt;p&gt;Next, open PowerShell and navigate to your project directory. This directory needs to be the one that contains the Dockerfile. The first thing we will do is set the authentication context. We do that by first getting the command to execute and then executing that command, which is why this process has two steps.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$loginCommand = Get-ECRLoginCommand -Region &amp;lt;repository region&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Invoke-Expression $loginCommand.Command
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This refreshes the authentication token for ECR. The remaining commands are based upon an existing ECR repository. You can access this information through the AWS Explorer by clicking on the repository name. This will bring up the details page as shown in Figure 5.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F06%2Fimage-4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F06%2Fimage-4.png" alt="Figure 5. Viewing a running container in Docker Desktop"&gt;&lt;/a&gt;&lt;em&gt;Figure 5. Viewing a running container in Docker Desktop.&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;The value marked 1 is the repository name and the value marked 2 is the repository URI. You will need both of those values for the remaining steps. Build the image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t &amp;lt;repository&amp;gt; .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next step is to tag the image. In this example we are marking this version as the latest by appending “:latest” to both the repository name and the URI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker tag &amp;lt;repository&amp;gt;:latest &amp;lt;URI&amp;gt;:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The last step is to push the image to the server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker push &amp;lt;URI&amp;gt;:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will see a lot of work going on as everything is pushed to the repository, but eventually it will finish processing and you will be able to see your new image in the repository.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note&lt;/strong&gt;: Not all container services on AWS support Windows containers. Amazon ECS on AWS Fargate is one of the services that does as long as you make the appropriate choices as you configure your tasks. There are detailed directions to doing just that at &lt;a href="https://aws.amazon.com/blogs/containers/running-windows-containers-with-amazon-ecs-on-aws-fargate/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/containers/running-windows-containers-with-amazon-ecs-on-aws-fargate/&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;While Visual Studio offers a menu-driven approach to containerizing your application, you always have the option to containerize your application manually.&lt;/p&gt;

&lt;h2&gt;
  
  
  Containerizing Manually
&lt;/h2&gt;

&lt;p&gt;Containerizing an application manually requires several steps. You’ll need to create your Docker file and then coordinate the build of the application so that it works with the Docker file you created. We’ll start with those steps first, and we’ll do it using JetBrains Rider. The first thing you’ll need to do is to add a Docker file to your sample application, called &lt;strong&gt;Dockerfile&lt;/strong&gt;. This file needs to be in the root of your active project directory. Once you have this added to the project, right-click the file to open the &lt;em&gt;Properties&lt;/em&gt; window and change the &lt;strong&gt;Build action&lt;/strong&gt; to &lt;em&gt;None&lt;/em&gt; and the &lt;strong&gt;Copy to output directory&lt;/strong&gt; to &lt;em&gt;Do not copy&lt;/em&gt; as shown in Figure 6.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F06%2Fimage-5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F06%2Fimage-5.png" alt="Figure 6. Build properties for the new Docker file"&gt;&lt;/a&gt;&lt;em&gt;Figure 6. Build properties for the new Docker file.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is important because it makes sure that the Docker file itself will not end up deployed into the container.&lt;/p&gt;

&lt;p&gt;Now that we have the file, let’s start adding the instructions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8-windowsservercore-ltsc2019
ARG source
WORKDIR /inetpub/wwwroot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These commands define the source image with &lt;strong&gt;FROM&lt;/strong&gt;, define an argument, and then set the working directory from which the code is going to run on the container. The source image that we have defined includes support for ASP.NET and .NET Framework 4.8, &lt;em&gt;mcr.microsoft.com/dotnet/framework/aspnet:4.8&lt;/em&gt;, and is based on Windows Server 2019, &lt;em&gt;windowsservercore-ltsc2019&lt;/em&gt;. There is an image for Windows Server 2022, &lt;em&gt;windowsservercore-ltsc2022&lt;/em&gt;, but it may not be usable for you if you are not running the most current version of Windows on your machine.&lt;/p&gt;

&lt;p&gt;The last part that we need to do is to configure the Dockerfile to include the compiled application. However, before we can do that, we need to build the application in such a way that we can access the deployed bits. This is done by publishing the application. In Rider, you publish the application by right-clicking on the project and selecting the &lt;strong&gt;Publish&lt;/strong&gt; option. This will give you the option to publish to either a &lt;em&gt;Local folder&lt;/em&gt; or a &lt;em&gt;Server&lt;/em&gt;. Choosing a local folder brings up the configuration screen where you can select the directory in which to publish, as shown in Figure 7.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F06%2Fimage-6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F06%2Fimage-6.png" alt="Figure 7. Selecting a publish directory"&gt;&lt;/a&gt;&lt;em&gt;Figure 7. Selecting a publish directory.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It will be easiest if you select a directory underneath the project directory; we recommend within the bin directory so that the IDEs will tend to ignore it. Clicking the &lt;strong&gt;Run&lt;/strong&gt; button will publish the app to the directory. The last step is to add one more command to the Dockerfile where you point the source command to the directory in which you published the application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;COPY ${source:-bin/release} .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you add this last line into the Dockerfile, you are ready to deploy the Windows container to ECR using the steps that we went through in the last section.&lt;/p&gt;
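&lt;p&gt;Assembled, the complete Dockerfile built up over this section comes to just four lines:&lt;/p&gt;

```dockerfile
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8-windowsservercore-ltsc2019
ARG source
WORKDIR /inetpub/wwwroot
COPY ${source:-bin/release} .
```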

&lt;p&gt;Now that we have walked through two different approaches for containerizing your older .NET Framework-based Windows application, the next step is to do the same with a .NET Core-based application. As you will see, this process is a lot easier because we will build the application onto a Linux-based container so you will see a lot of additional support in the IDEs. Let’s look at that in our next post.&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>aws</category>
      <category>docker</category>
    </item>
    <item>
      <title>.NET and AWS App Runner</title>
      <dc:creator>Bill "The Vest Guy" Penberthy</dc:creator>
      <pubDate>Tue, 21 Jun 2022 13:30:03 +0000</pubDate>
      <link>https://dev.to/aws-builders/net-and-aws-app-runner-50fa</link>
      <guid>https://dev.to/aws-builders/net-and-aws-app-runner-50fa</guid>
      <description>&lt;p&gt;The newest entry into AWS container management is designed to help remove the amount of configuration and management that you must use when working with containers. &lt;a href="https://aws.amazon.com/apprunner/" rel="noopener noreferrer"&gt;App Runner&lt;/a&gt; is a fully managed service that automatically builds and deploys the application as well creating the load balancer. App Runner also manages the scaling up and down based upon traffic. What do you, as a developer, have to do to get your container running in App Runner? Let’s take a look.&lt;/p&gt;

&lt;p&gt;First, log into AWS and go to the App Runner console home. If you click on the &lt;em&gt;Services&lt;/em&gt; link you will find App Runner under the &lt;strong&gt;Compute&lt;/strong&gt; section rather than the Containers section, even though its purpose is to easily run containers. Click on the &lt;em&gt;Create an App Runner Service&lt;/em&gt; button to get the Step 1 page as shown in Figure 1.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2021%2F11%2Ffigure0519.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2021%2F11%2Ffigure0519.png" alt="Figure 1. Creating an App Runner service"&gt;&lt;/a&gt;&lt;em&gt;Figure 1. Creating an App Runner service&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The first section, &lt;strong&gt;Source&lt;/strong&gt;, requires you to identify where the container image that you want to deploy is stored. At the time of this writing, you can choose either a container registry, &lt;a href="https://aws.amazon.com/ecr/" rel="noopener noreferrer"&gt;Amazon ECR&lt;/a&gt;, or a source code repository. Since we already loaded an image into ECR in the last post, let us move forward with this option by ensuring &lt;em&gt;Container registry&lt;/em&gt; and &lt;em&gt;Amazon ECR&lt;/em&gt; are selected, and then clicking the Browse button to bring up the image selection screen as shown in Figure 2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2021%2F11%2Ffigure0520.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2021%2F11%2Ffigure0520.png" alt="Figure 2 – Selecting a container image from ECR"&gt;&lt;/a&gt;&lt;em&gt;Figure 2 – Selecting a container image from ECR&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In this screen we selected the “prodotnetonaws” image repository that we created in the last post and the container image with the tag of “latest”.&lt;/p&gt;

&lt;p&gt;Once you have completed the &lt;strong&gt;Source&lt;/strong&gt; section, the next step is to determine the &lt;strong&gt;Deployment settings&lt;/strong&gt; for your container. Here, your choices are to use the &lt;em&gt;Deployment trigger&lt;/em&gt; of &lt;strong&gt;Manual&lt;/strong&gt;, which means that you must fire off each deployment yourself using the App Runner console or the AWS CLI, or &lt;strong&gt;Automatic&lt;/strong&gt;, where App Runner watches your repository, deploying the new version of the container every time the image changes. In this case, we will choose &lt;em&gt;Manual&lt;/em&gt; so that we have full control of the deployment.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Warning&lt;/strong&gt;: When you have your deployment settings set to Automatic, every time the image is updated App Runner will trigger a deployment. This may be appropriate in a development or even test environment, but it is unlikely that you will want to use this in a production setting.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The last thing you need to do on this page is give App Runner an &lt;em&gt;ECR access role&lt;/em&gt; that it will use to access ECR. In this case, we will select &lt;strong&gt;Create new service role&lt;/strong&gt; and App Runner will pre-select a &lt;em&gt;Service name role&lt;/em&gt;. Click the Next button when completed.&lt;/p&gt;

&lt;p&gt;The next step is entitled &lt;strong&gt;Configure service&lt;/strong&gt; and is designed to, surprisingly enough, help you configure the service. There are five sections on this page: &lt;em&gt;Service settings&lt;/em&gt;, &lt;em&gt;Auto scaling&lt;/em&gt;, &lt;em&gt;Health check&lt;/em&gt;, &lt;em&gt;Security&lt;/em&gt;, and &lt;em&gt;Tags&lt;/em&gt;. Only the first section is expanded; all of the other sections need to be expanded before you can see their options.&lt;/p&gt;

&lt;p&gt;The first section, Service settings, with default settings can be seen in Figure 3.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2021%2F11%2Ffigure0521.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2021%2F11%2Ffigure0521.png" alt="Figure 3 – Service settings in App Runner"&gt;&lt;/a&gt;&lt;em&gt;Figure 3 – Service settings in App Runner&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Here you set the &lt;em&gt;Service name&lt;/em&gt;, select the &lt;em&gt;Virtual CPU &amp;amp; memory&lt;/em&gt;, configure any optional &lt;em&gt;Environmental variables&lt;/em&gt; that you may need, and determine the TCP &lt;em&gt;Port&lt;/em&gt; that your service will use. If you are using the sample application that we loaded into ECR in the previous post, you will need to change the port value from the default 8080 to port 80 so that it will serve the application we configured in the container. You also have the ability, under &lt;em&gt;Additional configuration&lt;/em&gt;, to add a Start command which will be run on launch. This is generally left blank if you have configured the entry point within the container image. We gave the service the name “ProDotNetOnAWS-AR” and left all the rest of the settings in this section at their defaults.&lt;/p&gt;

&lt;p&gt;The next section is &lt;strong&gt;Auto scaling&lt;/strong&gt;, and there are two major options, &lt;em&gt;Default configuration&lt;/em&gt; and &lt;em&gt;Custom configuration&lt;/em&gt;, each of which provides the ability to set the auto scaling values as shown in Figure 4.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2021%2F11%2Ffigure0522.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2021%2F11%2Ffigure0522.png" alt="Figure 4 – Setting the Auto scaling settings in App Runner"&gt;&lt;/a&gt;&lt;em&gt;Figure 4 – Setting the Auto scaling settings in App Runner&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The first of these auto scaling values is &lt;em&gt;Concurrency&lt;/em&gt;. This value represents the maximum number of concurrent requests that an instance can process before App Runner scales up the service. The default configuration sets this at 100 requests; you can customize it by using the &lt;em&gt;Custom configuration&lt;/em&gt; setting.&lt;/p&gt;

&lt;p&gt;The next value is &lt;em&gt;Minimum size&lt;/em&gt;, or the number of instances that App Runner provisions for your service, regardless of concurrent usage. This means that there may be times where some of these provisioned instances are not being used. You will be charged for the memory usage of all provisioned instances, but only for the CPU of those instances that are actually handling traffic. The default configuration for minimum size is set to 1 instance.&lt;/p&gt;

&lt;p&gt;The last value is &lt;em&gt;Maximum size&lt;/em&gt;. This value represents the maximum number of instances to which your service will scale; once your service reaches the maximum size there will be no additional scaling no matter the number of concurrent requests. The default configuration for maximum size is 25 instances.&lt;/p&gt;
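&lt;p&gt;Taken together, concurrency and maximum size bound how much simultaneous traffic the service can absorb before requests start queuing. A quick sanity check with the default values described above:&lt;/p&gt;

```shell
# Ceiling on simultaneous requests = concurrency per instance x maximum
# instance count, using App Runner's default configuration values.
CONCURRENCY=100   # default Concurrency per instance
MAX_SIZE=25       # default Maximum size
CEILING=$(( CONCURRENCY * MAX_SIZE ))
echo "${CEILING}"   # prints 2500
```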

&lt;p&gt;If any of the default values do not match your need, you will need to create a custom configuration, which will give you control over each of these configuration values. To do this, select &lt;em&gt;Custom configuration&lt;/em&gt;. This will display a drop-down that contains all of the App Runner configurations you have available (currently it will only have “DefaultConfiguration” because we have yet to define a different configuration) and an &lt;strong&gt;Add new&lt;/strong&gt; button. Clicking this button will bring up the entry screen as shown in Figure 5.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2021%2F11%2Ffigure0523.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2021%2F11%2Ffigure0523.png" alt="Figure 5 – Customizing auto scaling in App Runner"&gt;&lt;/a&gt;&lt;em&gt;Figure 5 – Customizing auto scaling in App Runner&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The next section after you configure auto scaling is &lt;strong&gt;Health check&lt;/strong&gt;. The first value you set in this section is the &lt;em&gt;Timeout&lt;/em&gt;, which describes the amount of time, in seconds, that the load balancer will wait for a health check response. The default timeout is 5 seconds. You can also set the &lt;em&gt;Interval&lt;/em&gt;, which is the number of seconds between health checks of each instance; it defaults to 10 seconds. Finally, you can set the &lt;em&gt;Unhealthy&lt;/em&gt; and &lt;em&gt;Healthy&lt;/em&gt; thresholds. The unhealthy threshold is the number of consecutive health check failures after which an instance is considered unhealthy and needs to be recycled, while the healthy threshold is the number of consecutive successful health checks necessary for an instance to be considered healthy. The defaults are 5 requests for unhealthy and 1 request for healthy.&lt;/p&gt;

&lt;p&gt;You next can assign an IAM role to the instance in the &lt;strong&gt;Security&lt;/strong&gt; section. This IAM role will be used by the running application if it needs to communicate to other AWS services, such as S3 or a database server. The last section is &lt;strong&gt;Tags&lt;/strong&gt;, where you can enter one or more tags to the App Runner service.&lt;/p&gt;

&lt;p&gt;Once you have finished configuring the service, clicking the &lt;strong&gt;Next&lt;/strong&gt; button will bring you to the review screen. Clicking the &lt;strong&gt;Create and deploy&lt;/strong&gt; button on this screen will give the approval for App Runner to create the service, deploy the container image, and run it so that the application is available. You will be presented with the service details page and a banner that informs you that “Create service is in progress.” This process will take several minutes, and when completed will take you to the properties page as shown in Figure 6.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2021%2F11%2Ffigure0524.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2021%2F11%2Ffigure0524.png" alt="Figure 6 – After App Runner is completed"&gt;&lt;/a&gt;&lt;em&gt;Figure 6 – After App Runner is completed&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Once the service is created and the status is displayed as &lt;em&gt;Running&lt;/em&gt;, you will see a value for a &lt;em&gt;Default domain&lt;/em&gt; which represents the external-facing URL. Clicking on it will bring up the home page for your containerized sample application.&lt;/p&gt;

&lt;p&gt;There are five tabs displayed under the domain: &lt;em&gt;Logs&lt;/em&gt;, &lt;em&gt;Activity&lt;/em&gt;, &lt;em&gt;Metrics&lt;/em&gt;, &lt;em&gt;Configuration&lt;/em&gt;, and &lt;em&gt;Custom domain&lt;/em&gt;. The Logs tab displays the &lt;em&gt;Event&lt;/em&gt;, &lt;em&gt;Deployment&lt;/em&gt;, and &lt;em&gt;Application&lt;/em&gt; logs for this App Runner service. This is where you can look for any problems during deployment or while the container itself is running. Under the &lt;em&gt;Event log&lt;/em&gt; section you should be able to see the listing of events from the service creation. The &lt;em&gt;Activity&lt;/em&gt; tab is very similar, in that it displays a list of activities taken by your service, such as creation and deployment.&lt;/p&gt;

&lt;p&gt;The next tab, &lt;em&gt;Metrics&lt;/em&gt;, tracks metrics related to the entire App Runner service. This is where you can see information on HTTP connections and requests, as well as track the changes in the number of used and unused instances. By going into the sample application (at the default domain) and clicking around the site, you should see these values change and a graph become available that provides insight into these various activities.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;Configuration&lt;/em&gt; tab allows you to view and edit many of the settings that we set in the original service creation. There are two different sections where you can make edits. The first is at the &lt;em&gt;Source and deployment&lt;/em&gt; level, where you can change the container source and whether the deployment will be manual or automatically happen when the repository is updated. The second section where you can make changes is at the &lt;em&gt;Configure service&lt;/em&gt; level where you are able to change your current settings for autoscaling, health check, and security.&lt;/p&gt;

&lt;p&gt;The last tab on the details page is &lt;em&gt;Custom domain&lt;/em&gt;. The default domain will always be available for your application; however, it is likely that you will want to have other domain names pointed to it – we certainly wouldn’t want to use &lt;a href="https://SomeRandomValue.us-east-2.awsapprunner.com" rel="noopener noreferrer"&gt;https://SomeRandomValue.us-east-2.awsapprunner.com&lt;/a&gt; for our company’s web address. Linking domains is straightforward from the App Runner side. Simply click the &lt;strong&gt;Link domain&lt;/strong&gt; button and input the custom domain that you would like to link; we of course used “prodotnetonaws.com”. Note that this does not include “www”, because that usage is currently not supported through the App Runner console. Once you enter the custom domain name, you will be presented with the &lt;em&gt;Configure DNS&lt;/em&gt; page as shown in Figure 7.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2021%2F11%2Ffigure0525.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2021%2F11%2Ffigure0525.png" alt="Figure 7 – Configuring a custom domain in App Runner"&gt;&lt;/a&gt;&lt;em&gt;Figure 7 – Configuring a custom domain in App Runner&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This page contains a set of certificate validation records that you need to add to your Domain Name System (DNS) so that App Runner can validate that you own or control the domain. You will also need to add CNAME records to your DNS to target the App Runner domain; you will need to add one record for the custom domain and another for the www subdomain if so desired. Once the certificate validation records are added to your DNS, the custom domain status will become &lt;em&gt;Active&lt;/em&gt; and traffic will be directed to your App Runner instance. This validation can take anywhere from a few minutes up to 48 hours, depending upon your DNS provider.&lt;/p&gt;
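
&lt;p&gt;As an illustration, the resulting DNS entries end up looking something like the following zone-file fragment. The record names and values here are made up – the real ones come from the &lt;em&gt;Configure DNS&lt;/em&gt; page:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;; certificate validation record (actual name/value come from App Runner)
_abc123.prodotnetonaws.com.     CNAME  _def456.acm-validations.aws.

; point the www subdomain at the App Runner default domain
www.prodotnetonaws.com.         CNAME  SomeRandomValue.us-east-2.awsapprunner.com.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Note that many DNS providers do not allow a CNAME on the apex domain itself and instead offer an ALIAS or ANAME record type for that purpose.&lt;/p&gt;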

&lt;p&gt;Once your App Runner instance is up and running for the first time, there are several actions that you can take on it, as shown in the upper right corner of Figure 7. The first is the orange &lt;strong&gt;Deploy&lt;/strong&gt; button. This will deploy the container from either the container or source code repository, depending upon your configuration. You can also &lt;strong&gt;Delete&lt;/strong&gt; the service, which is straightforward, as well as &lt;strong&gt;Pause&lt;/strong&gt; the service. There are some things to consider when you pause your App Runner service. The first is that your application will lose all state – much as if you were deploying a new service. The second consideration is that if you are pausing your service because of a code defect, you will not be able to redeploy a new (presumably fixed) container without first resuming the service. &lt;/p&gt;
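
&lt;p&gt;These same actions are available from the AWS CLI as well. Pausing and resuming a service looks like the following sketch, where the service ARN placeholder is illustrative:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;C:\&amp;gt;aws apprunner pause-service --service-arn &amp;lt;service-arn&amp;gt;
C:\&amp;gt;aws apprunner resume-service --service-arn &amp;lt;service-arn&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;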

</description>
      <category>aws</category>
      <category>dotnet</category>
      <category>containers</category>
    </item>
    <item>
      <title>.NET and Amazon Elastic Container Registry (ECR)</title>
      <dc:creator>Bill "The Vest Guy" Penberthy</dc:creator>
      <pubDate>Thu, 16 Jun 2022 22:59:27 +0000</pubDate>
      <link>https://dev.to/aws-builders/net-and-amazon-elastic-container-registry-ecr-59g5</link>
      <guid>https://dev.to/aws-builders/net-and-amazon-elastic-container-registry-ecr-59g5</guid>
      <description>&lt;p&gt;&lt;a href="https://aws.amazon.com/ecr" rel="noopener noreferrer"&gt;Amazon Elastic Container Registry&lt;/a&gt; (ECR) is a system designed to support storing and managing Docker and Open Container Initiative (OCI) images and OCI-compatible artifacts. ECR can act as a private container image repository, where you can store your own images, a public container image repository for managing publicly accessible images, and to manage access to other public image repositories. ECR also provides lifecycle policies, that allow you to manage the lifecycles of images within your repository, image scanning, so that you can identify any potential software vulnerabilities, and cross-region and cross-account image replication so that your images can be available wherever you need them.&lt;/p&gt;

&lt;p&gt;As with the rest of AWS services, ECR is built on top of other services. For example, Amazon ECR stores container images in Amazon S3 buckets so that server-side encryption is available by default. Or, if needed, you can use server-side encryption with KMS keys stored in &lt;a href="https://aws.amazon.com/kms" rel="noopener noreferrer"&gt;AWS Key Management Service&lt;/a&gt; (AWS KMS), all of which you can configure as you create the registry. As you can likely guess, IAM manages access rights to the images, supporting everything from strict rules to anonymous access to support the concept of a public repository.&lt;/p&gt;

&lt;p&gt;There are several different ways to create a repository. The first is through the ECR console by selecting the Create repository button. This will take you to the Create repository page as shown in Figure 1.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2021%2F10%2Ffigure05-ecr1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2021%2F10%2Ffigure05-ecr1.png" alt="AWS Console screen for creating an ECR repository"&gt;&lt;/a&gt;&lt;em&gt;Figure 1. Creating an ECR repository in the AWS console&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Through this page you can set the &lt;em&gt;Visibility&lt;/em&gt; settings, &lt;em&gt;Image scan settings&lt;/em&gt;, and &lt;em&gt;Encryption settings&lt;/em&gt;. There are two visibility settings, &lt;em&gt;Private&lt;/em&gt; and &lt;em&gt;Public&lt;/em&gt;. Private repositories are managed through permissions managed in IAM and are part of the AWS Free Tier, with 500 MB-month of storage for one year for your private repositories. Public repositories are openly visible and available for open pulls. Amazon ECR offers you 50 GB-month of always-free storage for your public repositories, and you can transfer 500 GB of data to the internet for free from a public repository each month anonymously (without using an AWS account). If you authenticate to a public repository on ECR, you can transfer 5 TB of data to the internet for free each month, and you get unlimited bandwidth for free when transferring data from a public repository in ECR to AWS compute resources in any region.&lt;/p&gt;

&lt;p&gt;Enabling &lt;strong&gt;Scan on push&lt;/strong&gt; means that every image that is uploaded to the repository will be scanned. This scanning is designed to help identify any software vulnerabilities in the uploaded image, and will automatically run every 24 hours, but turning this setting on ensures that the image is checked before it can ever be used. The scanning makes use of the Common Vulnerabilities and Exposures (CVEs) database from the Clair project, outputting a list of scan findings. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note&lt;/strong&gt;: Clair is an open-source project that was created for the static analysis of vulnerabilities in application containers (currently including OCI and Docker). The goal of the project is to enable a more transparent view of the security of container-based infrastructure - the project was named Clair after the French term which translates to clear, bright, or transparent.&lt;/em&gt;&lt;/p&gt;
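
&lt;p&gt;Scan on push can also be toggled after the fact from the AWS CLI, and a scan can be started manually for an existing image. The following is a sketch, with the repository and tag names as illustrations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;C:\&amp;gt;aws ecr put-image-scanning-configuration ^
    --repository-name prodotnetonaws ^
    --image-scanning-configuration scanOnPush=true

C:\&amp;gt;aws ecr start-image-scan --repository-name prodotnetonaws --image-id imageTag=latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;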

&lt;p&gt;The last section is &lt;em&gt;Encryption&lt;/em&gt; settings. When this is enabled, as shown in Figure 2, ECR will use AWS Key Management Service (KMS) to manage the encryption of the images stored in the repository, rather than the default encryption approach. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2021%2F10%2Ffigure05-ecr2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2021%2F10%2Ffigure05-ecr2.png" alt="AWS Console screen for creating an ECR repository"&gt;&lt;/a&gt;&lt;em&gt;Figure 2. Encryption settings&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You can use either the default settings, where ECR creates a default key (with an alias of aws/ecr) or you can &lt;em&gt;Customize encryption settings&lt;/em&gt; and either select a pre-existing key or create a new key that will be used for the encryption.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pull Through Cache Repositories
&lt;/h2&gt;

&lt;p&gt;Everything we have gone through so far is to upload your own container image into the repository. However, as we will cover in depth a bit later in this post, at the heart of virtually all of your container images is a base container image, generally created by a vendor such as Microsoft or Docker. These base images are typically downloaded from a public repository. However, there may be instances where you prefer to source all images from Amazon Elastic Container Registry to take advantage of its high availability and security. If so, the pull-through cache repositories may be just what you are looking for as they will pull down referenced images from the source and cache them within ECR.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note&lt;/strong&gt;: The pull-through cache repository feature was added for re:Invent 2021, so the available public repositories are gradually being added. It is possible that the repository that you may want to pull through from is not yet available.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Creating a pull-through cache repository is a relatively simple process. First, begin by selecting &lt;strong&gt;Private registry&lt;/strong&gt; from the left-menu, and then click the &lt;strong&gt;Edit&lt;/strong&gt; button in the &lt;em&gt;Pull through cache&lt;/em&gt; panel to change settings. This will bring up the &lt;em&gt;Pull through cache&lt;/em&gt; configuration page, where you click the &lt;strong&gt;Add rule&lt;/strong&gt; button. This will bring up the Create window as shown in Figure 3.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa46qp4jkt0w2jc15kiqa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa46qp4jkt0w2jc15kiqa.png" alt="Creating a pull-through cache"&gt;&lt;/a&gt;&lt;em&gt;Figure 3 Creating a pull through cache&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The first drop-down, &lt;em&gt;Public registry&lt;/em&gt;, contains all the pre-configured public registries available; as you can see we selected &lt;em&gt;ECR Public&lt;/em&gt;. Clicking the &lt;strong&gt;Save&lt;/strong&gt; button will bring you back to the list of pull-through cache rules.&lt;/p&gt;
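
&lt;p&gt;The same rule can be created through the AWS CLI with the &lt;em&gt;create-pull-through-cache-rule&lt;/em&gt; command. This sketch uses the ECR Public upstream, matching the selection above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;C:\&amp;gt;aws ecr create-pull-through-cache-rule ^
    --ecr-repository-prefix ecr-public ^
    --upstream-registry-url public.ecr.aws
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;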

&lt;p&gt;Since this is a pull-through, you need to combine the pull-through URL with the referenced source when building a pull URL, using the format &lt;em&gt;&amp;lt;account-id&amp;gt;.dkr.ecr.&amp;lt;region&amp;gt;.amazonaws.com/&amp;lt;prefix&amp;gt;/&amp;lt;source-repository&amp;gt;:&amp;lt;tag&amp;gt;&lt;/em&gt;. If you look into a container image definition you may see the following lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Build runtime image
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using the pull-through cache means that, if you were going to use the ASP.NET Core base image from Bitnami that is available from the ECR Public repository, you would change the FROM command to reference the pull-through source, as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Build runtime image
FROM xxxxxxxxx.dkr.ecr.us-east-2.amazonaws.com/ecr-public/bitnami/aspnet-core:latest
WORKDIR /app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Be aware that this is more than a simple pull-through, as the image that you reference will be cached in ECR so that additional calls to that same image will not have to go to the final source repository. This means that the storage used will be charged to your account.&lt;/p&gt;

&lt;h2&gt;
  
  
  Other approaches for creating an ECR repo
&lt;/h2&gt;

&lt;p&gt;Just as with all the other services that we have talked about so far, there is the UI-driven way to build an ECR repository like we just went through, as well as several other approaches to creating a repo.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS CLI
&lt;/h3&gt;

&lt;p&gt;You can create an ECR repository in the AWS CLI using the create-repository command as part of the ECR service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;C:\&amp;gt;aws ecr create-repository 
    --repository-name prodotnetonaws 
    --image-scanning-configuration scanOnPush=true 
    --region us-east-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can control all of the basic repository settings through the CLI just as you can when creating the repository through the ECR console, including assigning encryption keys and defining the repository URI.&lt;/p&gt;
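
&lt;p&gt;For example, a sketch of creating a repository that uses a customer-managed KMS key might look like the following, where the key ARN is a placeholder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;C:\&amp;gt;aws ecr create-repository ^
    --repository-name prodotnetonaws ^
    --encryption-configuration encryptionType=KMS,kmsKey=&amp;lt;key-arn&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;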

&lt;h3&gt;
  
  
  AWS Tools for PowerShell
&lt;/h3&gt;

&lt;p&gt;And, as you probably aren’t too surprised to find out, you can also create an ECR repository using AWS Tools for PowerShell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;C:\&amp;gt; New-ECRRepository 
-RepositoryName prodotnetonaws
-ImageScanningConfiguration_ScanOnPush $true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Just as with the CLI, you have the ability to fully configure the repository as you create it.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Toolkit for Visual Studio
&lt;/h3&gt;

&lt;p&gt;When using the AWS Toolkit for Visual Studio, you must depend upon the extension’s built-in default values, because the only thing that you can control through the AWS Explorer is the repository name, as shown in Figure 4. As you may notice, the AWS Explorer does not have its own node for ECR and instead puts the Repositories sub-node under the Amazon Elastic Container Service (ECS) node. This is a legacy of the time before ECR became its own service, but it is still an effective way to access and work with repositories in Visual Studio.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F05%2Fimage-28.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2022%2F05%2Fimage-28.png" alt="Creating a pull-through cache"&gt;&lt;/a&gt;&lt;em&gt;Figure 4, Creating a repository in Visual Studio&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Once you create a repository in Visual Studio, reviewing it in the ECR console will show that it was created with the default settings: a private repository with both “Scan on push” and “KMS encryption” disabled.&lt;br&gt;
At this point, the easiest way to show how this all works is to create an image and upload it into the repository. We will then be able to use this container image as we go through the various AWS container management services.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note&lt;/strong&gt;: You will not be able to complete many of these exercises without Docker installed on your machine. You will find download and installation instructions for Docker Desktop at &lt;a href="https://www.docker.com/products/docker-desktop" rel="noopener noreferrer"&gt;https://www.docker.com/products/docker-desktop&lt;/a&gt;. Once you have Desktop installed you will be able to locally build and run container images.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We will start by creating a simple ASP.NET Core sample web application in Visual Studio through &lt;em&gt;File -&amp;gt; New Project&lt;/em&gt; and selecting the &lt;em&gt;ASP.NET Core Web App (C#)&lt;/em&gt; project template. You then name your project and select where to save the source code. Once that is completed, you will get a new screen that asks for additional information. The &lt;strong&gt;Enable Docker&lt;/strong&gt; checkbox defaults to unchecked, so make sure you check it and then select the Docker OS to use, which in this case is Linux. This will create a simple solution that includes a Dockerfile as shown in Figure 5.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2021%2F10%2Ffigure05-ecr5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2021%2F10%2Ffigure05-ecr5.png" alt="New .NET solution with Dockerfile"&gt;&lt;/a&gt;&lt;em&gt;Figure 5. New .NET solution with Dockerfile&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If you look at the contents of the generated Dockerfile you will see that it is very similar to the Dockerfile that we walked through earlier, containing the instructions to restore and build the application, publish the application, and then copy the published application bits to the final image, setting the ENTRYPOINT.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["SampleContainer/SampleContainer.csproj", "SampleContainer/"]
RUN dotnet restore "SampleContainer/SampleContainer.csproj"
COPY . .
WORKDIR "/src/SampleContainer"
RUN dotnet build "SampleContainer.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "SampleContainer.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "SampleContainer.dll"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you look at your build options in Visual Studio as shown in Figure 6 you will see additional ones available for containers. The Docker choice, for example, will work through the Dockerfile, start the final container image within Docker, and then connect the debugger to that container so that you can debug as usual.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2021%2F10%2Ffigure05-ecr6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2021%2F10%2Ffigure05-ecr6.png" alt="Build options in a containerized application"&gt;&lt;/a&gt;&lt;em&gt;Figure 6. Build options in a containerized application&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note&lt;/strong&gt;: If you want to see what is going on within the container build process, go to the output window, change the Show output from drop-down to Container Tools and then debug the application in Docker. You will see Docker commands being processed.&lt;/em&gt;&lt;/p&gt;
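
&lt;p&gt;You can also exercise the same Dockerfile outside of Visual Studio with the Docker CLI. The commands below are a sketch; they assume you run them from the solution root (since the Dockerfile copies from the SampleContainer folder), and the image name is our own choice:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;C:\&amp;gt;docker build -f SampleContainer/Dockerfile -t samplecontainer .
C:\&amp;gt;docker run -p 8080:80 samplecontainer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;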

&lt;p&gt;The next step is to create the container image and persist it into the repository. To do so, right click the project name in the Visual Studio Solution Explorer and select &lt;em&gt;Publish Container to AWS&lt;/em&gt; to bring up the wizard as shown in Figure 7.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2021%2F10%2Ffigure05-ecr7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2021%2F10%2Ffigure05-ecr7.png" alt="Publish Container to AWS wizard"&gt;&lt;/a&gt;&lt;em&gt;Figure 7. Publish Container to AWS wizard&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Figure 7 shows that the repository that we just created is selected as the repository for saving, and the &lt;em&gt;Publish only the Docker image to Amazon Elastic Container Registry&lt;/em&gt; option in the &lt;strong&gt;Deployment Target&lt;/strong&gt; was selected (these are not the default values for each of these options). Once you have this configured, click the &lt;strong&gt;Publish&lt;/strong&gt; button. You’ll see the window in the wizard grind through a lot of processing, then a console window may pop up to show you the actual upload of the image, and then the wizard will automatically close if successful.&lt;/p&gt;

&lt;p&gt;You can see the new repository in Visual Studio, and logging into Amazon ECR and going into the &lt;em&gt;prodotnetonaws&lt;/em&gt; repository (the one we uploaded the image into), as shown in Figure 8, will show that there is now an image available within the repository with a &lt;em&gt;latest&lt;/em&gt; tag, just as configured in the wizard. You can click the Copy URI icon to get the URL that you will use when working with this image. We recommend that you do this now and paste it somewhere easy to find, as that is the value you will use to access the image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2021%2F10%2Ffigure05-ecr8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fbillthevestguy.com%2Fwp-content%2Fuploads%2F2021%2F10%2Ffigure05-ecr8.png" alt="Container image stored in Amazon ECR"&gt;&lt;/a&gt;&lt;em&gt;Figure 8. Container image stored in Amazon ECR&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now that you have a container image stored in the repository, in the next post, we will look at how you could use it!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>dotnet</category>
      <category>containers</category>
    </item>
  </channel>
</rss>
