<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: chaitanya.dev</title>
    <description>The latest articles on DEV Community by chaitanya.dev (@chaitanyasuvarna).</description>
    <link>https://dev.to/chaitanyasuvarna</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F460067%2F7590af7e-5f61-41ca-be0e-b7044cbea54e.png</url>
      <title>DEV Community: chaitanya.dev</title>
      <link>https://dev.to/chaitanyasuvarna</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/chaitanyasuvarna"/>
    <language>en</language>
    <item>
      <title>Using Dapper over EntityFramework for database operations in .NET Core</title>
      <dc:creator>chaitanya.dev</dc:creator>
      <pubDate>Mon, 28 Jun 2021 17:15:58 +0000</pubDate>
      <link>https://dev.to/chaitanyasuvarna/using-dapper-over-entityframework-for-database-operations-in-net-core-2l46</link>
      <guid>https://dev.to/chaitanyasuvarna/using-dapper-over-entityframework-for-database-operations-in-net-core-2l46</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;We’re entering a new world in which data may be more important than software.&lt;br&gt;
--Tim O’Reilly&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In most of the use cases for today’s software, data access plays an important role.&lt;br&gt;
.NET Core supports multiple framework options for storing and accessing data in your application, be it a microservice or a web application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Entity Framework Core&lt;/strong&gt; is one of the most commonly recommended options for applications working with relational databases. &lt;a href="https://docs.microsoft.com/en-us/ef/core/" rel="noopener noreferrer"&gt;EF Core&lt;/a&gt; is an object-relational mapper (ORM) that enables .NET developers to persist objects to and from a data source. It eliminates most of the data access code developers would otherwise need to write.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dapper&lt;/strong&gt; is a simple object mapper for .NET that holds the title of &lt;strong&gt;King of Micro ORM&lt;/strong&gt; for its speed: it is virtually as fast as using a raw ADO.NET data reader. &lt;a href="https://github.com/DapperLib/Dapper" rel="noopener noreferrer"&gt;Dapper&lt;/a&gt; operates directly on the &lt;code&gt;IDbConnection&lt;/code&gt; interface, which database providers such as SQL Server, Oracle, and MySQL implement for their databases.&lt;/p&gt;

&lt;p&gt;Dapper is known for its high performance and is favoured by developers who want to write the SQL themselves for optimal performance, or who rely on stored procedures.&lt;br&gt;
But how fast is it compared to EF Core?&lt;/p&gt;

&lt;p&gt;To figure this out, as always, I have created a demo project &lt;a href="https://github.com/chaitanya-suvarna/efcore-dapper-benchmark" rel="noopener noreferrer"&gt;here&lt;/a&gt; that compares the same read and write operations between EF Core and Dapper using &lt;a href="https://benchmarkdotnet.org/index.html" rel="noopener noreferrer"&gt;BenchmarkDotNet&lt;/a&gt;. I have used SQL Server as the relational database to insert and retrieve data.&lt;/p&gt;
&lt;h2&gt;
  
  
  Structure of the database
&lt;/h2&gt;

&lt;p&gt;For this demo, I have created two tables called &lt;code&gt;Athletes&lt;/code&gt; and &lt;code&gt;Sports&lt;/code&gt;. Before the execution of the benchmarks, I am also clearing down the database and reseeding the data with 16 records in &lt;code&gt;Athletes&lt;/code&gt; and 5 records in &lt;code&gt;Sports&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj856ph7p9sntvfzo69ag.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj856ph7p9sntvfzo69ag.png" title="Athletes table structure" alt="Athletes table structure"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmp80rc0m4l2sh392qoae.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmp80rc0m4l2sh392qoae.png" title="Sports table structure" alt="Sports table structure"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Read Operation
&lt;/h2&gt;

&lt;p&gt;I have created repository methods, one each for EF Core and Dapper, to fetch data from the database. Both use the same criteria: fetch the names of all indoor-sport athletes older than 25.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
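&lt;p&gt;The gist with the actual repository methods is embedded in the original post. As a rough sketch only (the &lt;code&gt;Athlete&lt;/code&gt;/&lt;code&gt;Sport&lt;/code&gt; entity shapes and the &lt;code&gt;AppDbContext&lt;/code&gt; name here are illustrative, not the exact code from the demo repo), the two read approaches might look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// EF Core: the LINQ query is translated to SQL by the provider.
public async Task&lt;List&lt;string&gt;&gt; GetIndoorAthleteNamesEfAsync(AppDbContext context)
{
    return await context.Athletes
        .Where(a =&gt; a.Age &gt; 25 &amp;&amp; a.Sport.Type == "Indoor")
        .Select(a =&gt; a.Name)
        .ToListAsync();
}

// Dapper: you write the SQL yourself and Dapper maps the result.
public async Task&lt;List&lt;string&gt;&gt; GetIndoorAthleteNamesDapperAsync(IDbConnection connection)
{
    const string sql = @"SELECT a.Name FROM Athletes a
                         JOIN Sports s ON s.Id = a.SportId
                         WHERE a.Age &gt; @Age AND s.Type = @Type";
    var names = await connection.QueryAsync&lt;string&gt;(sql, new { Age = 25, Type = "Indoor" });
    return names.ToList();
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;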


&lt;h2&gt;
  
  
  Write Operation
&lt;/h2&gt;

&lt;p&gt;I have created repository methods, one each for EF Core and Dapper, to insert &lt;code&gt;Athlete&lt;/code&gt; data into the database. Both methods insert the same data.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
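&lt;p&gt;Again as a sketch only (not the exact code from the demo repo), the two insert methods might look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// EF Core: the change tracker generates the INSERT on SaveChanges.
public async Task InsertAthleteEfAsync(AppDbContext context, Athlete athlete)
{
    context.Athletes.Add(athlete);
    await context.SaveChangesAsync();
}

// Dapper: a hand-written, parameterised INSERT.
public async Task InsertAthleteDapperAsync(IDbConnection connection, Athlete athlete)
{
    const string sql = @"INSERT INTO Athletes (Name, Age, SportId)
                         VALUES (@Name, @Age, @SportId)";
    await connection.ExecuteAsync(sql, athlete);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;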


&lt;h2&gt;
  
  
  Performance benchmarks for read and write
&lt;/h2&gt;

&lt;p&gt;I have used BenchmarkDotNet to benchmark the performance for the database read and write operations using both EF Core and Dapper.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;BenchmarkDotNet&lt;/strong&gt; helps you to transform methods into benchmarks, track their performance, and share reproducible measurement experiments. The results for these benchmarks are presented in a user-friendly form that highlights all the important facts about your experiment.&lt;/p&gt;

&lt;p&gt;I have set the iteration count to &lt;strong&gt;100 iterations&lt;/strong&gt;. This can be configured by assigning the desired value to the &lt;code&gt;numberOfIterations&lt;/code&gt; variable.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
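&lt;p&gt;A sketch of what such a benchmark class might look like (the &lt;code&gt;[MemoryDiagnoser]&lt;/code&gt;, &lt;code&gt;[GlobalSetup]&lt;/code&gt; and &lt;code&gt;[Benchmark]&lt;/code&gt; attributes are standard BenchmarkDotNet; the repository types and method names are illustrative placeholders, not the exact code from the demo repo):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// MemoryDiagnoser adds the memory allocation columns seen in the summary.
[MemoryDiagnoser]
public class ReadBenchmarks
{
    private const int numberOfIterations = 100;

    private readonly EfCoreAthleteRepository _efRepository = new EfCoreAthleteRepository();
    private readonly DapperAthleteRepository _dapperRepository = new DapperAthleteRepository();

    [GlobalSetup]
    public void Setup()
    {
        // Clear down the database and reseed the Athletes and Sports tables here.
    }

    [Benchmark]
    public async Task ReadWithEfCore()
    {
        for (var i = 0; i &lt; numberOfIterations; i++)
            await _efRepository.GetIndoorAthleteNamesAsync();
    }

    [Benchmark]
    public async Task ReadWithDapper()
    {
        for (var i = 0; i &lt; numberOfIterations; i++)
            await _dapperRepository.GetIndoorAthleteNamesAsync();
    }
}

// Entry point: BenchmarkRunner.Run&lt;ReadBenchmarks&gt;();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;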


&lt;h2&gt;
  
  
  Benchmark results
&lt;/h2&gt;

&lt;p&gt;Running the benchmark project produces the summary below, which shows that &lt;strong&gt;Dapper is almost 2x faster than EF Core for both reads and writes&lt;/strong&gt; of the same kind.&lt;/p&gt;

&lt;p&gt;The summary also shows that the memory allocated for Dapper operations is far less than the memory allocated for EF Core operations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkmuibshxujygl230o4a8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkmuibshxujygl230o4a8.png" title="Benchmark summary" alt="Benchmark summary"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The detailed results provide in-depth statistical information for both the read and write benchmarks, including a histogram of the time taken per database operation. BenchmarkDotNet also allows the measured data to be exported in different formats, including plots.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpl9hazxvktr6mmegm5pw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpl9hazxvktr6mmegm5pw.png" title="Detailed summary for read operations&amp;lt;br&amp;gt;
" alt="Detailed summary for read operations"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhx6tb1wf1sj6cl5a31r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhx6tb1wf1sj6cl5a31r.png" title="Detailed summary for write operations" alt="Detailed summary for write operations"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While the benchmark results clearly show that Dapper outperforms EF Core for both reads and writes, &lt;strong&gt;in most instances the performance for database writes is almost comparable&lt;/strong&gt;. For this reason, some developers prefer Dapper for read operations and EF Core for write operations involving complex objects, since transforming those objects into Dapper queries can itself be a cumbersome task.&lt;/p&gt;

&lt;p&gt;Dapper is safe from SQL injection when parameters are passed via anonymous objects, and it also lets you use custom, performance-tuned queries, giving you finer control over database operations, e.g. using &lt;code&gt;with(nolock)&lt;/code&gt; in select queries.&lt;/p&gt;
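&lt;p&gt;For example (a sketch using Dapper’s &lt;code&gt;QueryAsync&lt;/code&gt; against the demo’s &lt;code&gt;Athletes&lt;/code&gt; table):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// @Age is sent as an ADO.NET parameter, never concatenated into the SQL,
// so the query is safe from SQL injection. Provider-specific hints such as
// WITH (NOLOCK) can be written directly into the query text.
const string sql = @"SELECT Name FROM Athletes WITH (NOLOCK) WHERE Age &gt; @Age";
var names = await connection.QueryAsync&lt;string&gt;(sql, new { Age = 25 });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;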

&lt;p&gt;Thus we have seen why Dapper is known as the King of Micro ORM, and how, in our example, BenchmarkDotNet helped us show that it performs better than EF Core. It is important to note that the performance of both EF Core and Dapper varies with the use case and the type of database operations being performed, but &lt;strong&gt;Dapper in general performs better for database read operations&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;You can find the demo project I created to run this benchmark in my GitHub repo: &lt;a href="https://github.com/chaitanya-suvarna/efcore-dapper-benchmark" rel="noopener noreferrer"&gt;https://github.com/chaitanya-suvarna/efcore-dapper-benchmark&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I hope you found this interesting and useful.&lt;/p&gt;

&lt;p&gt;Thank you for reading!&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>database</category>
      <category>performance</category>
      <category>sql</category>
    </item>
    <item>
      <title>Event Sourcing pattern for microservices in .Net Core</title>
      <dc:creator>chaitanya.dev</dc:creator>
      <pubDate>Sat, 05 Jun 2021 15:44:17 +0000</pubDate>
      <link>https://dev.to/chaitanyasuvarna/event-sourcing-pattern-for-microservices-in-net-core-5adk</link>
      <guid>https://dev.to/chaitanyasuvarna/event-sourcing-pattern-for-microservices-in-net-core-5adk</guid>
      <description>&lt;p&gt;Event Sourcing pattern is a &lt;a href="https://martinfowler.com/bliki/DomainDrivenDesign.html" rel="noopener noreferrer"&gt;Domain Driven Design&lt;/a&gt; pattern that defines an approach to handling operations on data that’s driven by a sequence of events, each of which is recorded in an append-only store.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why should we follow an architecture pattern?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Every project’s software development life cycle has 2 opposing forces, &lt;strong&gt;the force of doing things&lt;/strong&gt; and &lt;strong&gt;the force of doing things right&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;While both lead to working software, &lt;strong&gt;the force of doing things&lt;/strong&gt;, or the force of getting things done as I like to call it, leads to a greater amount of rework.&lt;/p&gt;

&lt;p&gt;Not following a tried-and-tested architecture pattern, and simply pushing for a tangible output as soon as possible, almost always leads to rework. The rework may stem from issues that surface later in the life cycle because the domain was defined incorrectly, from frequent design changes caused by a lack of domain knowledge at the design phase, or from the inevitable technical debt which one has to pay at some point.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is Event Sourcing pattern? How is it different?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The typical approach for most applications that work with data is to maintain the current state of the data by updating it as users work with it. For example, in the traditional CRUD model a typical data process is to read data from the store, make some modifications to it, and update the &lt;strong&gt;current state of the data&lt;/strong&gt; with the new values—often by using transactions that lock the data.&lt;/p&gt;

&lt;p&gt;The fundamental idea of &lt;strong&gt;Event Sourcing&lt;/strong&gt; is that of ensuring &lt;strong&gt;every change to the state&lt;/strong&gt; of an application is &lt;strong&gt;captured in an event object&lt;/strong&gt;, and that these event objects are themselves stored in the sequence they were applied in the &lt;strong&gt;EventStore&lt;/strong&gt;. Not only can we query these events, we can also use the event log to reconstruct past states.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Some key facts to remember while implementing Event Sourcing pattern&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;An event is something that has happened in the past.&lt;/li&gt;
&lt;li&gt;Events are expressions of the &lt;a href="https://martinfowler.com/bliki/UbiquitousLanguage.html" rel="noopener noreferrer"&gt;ubiquitous language&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Events are not imperative and are named using past tense verbs.&lt;/li&gt;
&lt;li&gt;Have a persistent store of events. (Event Store)&lt;/li&gt;
&lt;li&gt;Append-only, no deletes. A delete or update action (update event) is also appended to the event store to maintain the history; no existing records are ever modified.&lt;/li&gt;
&lt;li&gt;Replay the (related) events to get the last known state of the entity. (Event Aggregator)&lt;/li&gt;
&lt;/ol&gt;
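&lt;p&gt;Point 3 above translates directly into how an event type is declared. A minimal sketch in C# (the event and property names here are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Named with a past-tense verb, and immutable once created:
// a record of something that has already happened.
public record ReservationCreatedEvent(
    Guid EventId,
    DateTime TimeStamp,
    string ConferenceId,
    int SeatsReserved);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;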

&lt;h2&gt;
  
  
  &lt;strong&gt;How does Event Sourcing work?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Let’s take an example of a conference management system that needs to track the number of completed bookings for a conference so that it can check whether there are seats still available when a potential attendee tries to make a booking.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6njyrmqbr5kxvkictj9x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6njyrmqbr5kxvkictj9x.png" alt="Conference management design"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In our conference management system we have 3 microservices for the sake of this example. This example also implements the &lt;a href="https://docs.microsoft.com/en-us/azure/architecture/patterns/cqrs" rel="noopener noreferrer"&gt;CQRS pattern&lt;/a&gt; along with the Event Sourcing pattern.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reservation&lt;/strong&gt; : On receiving the &lt;code&gt;CreateReservationCommand&lt;/code&gt; or &lt;code&gt;UpdateReservationCommand&lt;/code&gt; it creates the corresponding &lt;code&gt;ReservationCreatedEvent&lt;/code&gt;/&lt;code&gt;ReservationUpdatedEvent&lt;/code&gt; and publishes the event to the microservices listening to the event(s) on the message bus.
It also checks for the availability of seats for a given conference and availability of the attendee at the specific time slot by using &lt;code&gt;GetConferenceDetailsQuery&lt;/code&gt; and &lt;code&gt;GetAttendeeStatusQuery&lt;/code&gt; respectively.
It also persists the event to the &lt;strong&gt;Event Store&lt;/strong&gt; which is used when a &lt;code&gt;GetConferenceReservationStatusQuery&lt;/code&gt; is received. The &lt;strong&gt;Event Aggregator&lt;/strong&gt; is used to create the current reservation state of a conference by aggregating the sequence of events stored in the Event Store.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Attendee&lt;/strong&gt; : On receiving the &lt;code&gt;ReservationCreatedEvent&lt;/code&gt;/&lt;code&gt;ReservationUpdatedEvent&lt;/code&gt; in the &lt;strong&gt;Event Store&lt;/strong&gt;, an attendee object is created/updated in the database.
On receiving the &lt;code&gt;GetAttendeeStatusQuery&lt;/code&gt; to check the attendee’s availability for a specific time slot, the &lt;strong&gt;Event Aggregator&lt;/strong&gt; is used to aggregate the sequence of ReservationEvents and find out the availability of the attendee for the specific event.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Conference&lt;/strong&gt; : The Conference microservice uses the &lt;strong&gt;Event Aggregator&lt;/strong&gt; to aggregate the &lt;code&gt;ReservationCreatedEvent&lt;/code&gt;/&lt;code&gt;ReservationUpdatedEvent&lt;/code&gt; in the &lt;strong&gt;Event Store&lt;/strong&gt; to create the current state of number of seats available for the conference. This can be used to send the response for &lt;code&gt;GetConferenceReservationStatusQuery&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now let’s find out what the terms &lt;strong&gt;Event Store&lt;/strong&gt; and &lt;strong&gt;Event Aggregator&lt;/strong&gt;, which were repeatedly used in our example, mean.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Event Store&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In software, persistence consists of four key operations through which developers manipulate the state of the system: CREATE, UPDATE, and DELETE, which alter the state, and QUERIES, which read the state without altering it. A key principle of software, especially CQRS-inspired software, is that asking a question should not change the answer.&lt;/p&gt;

&lt;p&gt;Similarly, an &lt;strong&gt;Event Store&lt;/strong&gt; is where events are persisted as streams of immutable events. The event store is the permanent source of truth, so event data should never be updated; the only way to undo a change to an entity is to add a compensating event to the event store.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;CREATE&lt;/code&gt; operation in an event-based persistence scenario is not much different from a classic CREATE. The event is created with the following attributes in the Event Store.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;EventId&lt;/strong&gt; : used to uniquely identify the event.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Event TimeStamp&lt;/strong&gt; : used to identify when the event was created and is also used to identify the sequence of events.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EventName&lt;/strong&gt; : used to recognise the type of Event&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EventData&lt;/strong&gt; : the Event object stored in a serialised form&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Aggregator Id&lt;/strong&gt; : used to identify the entity to which the event belongs. This is used while materialising the view for the entity&lt;/li&gt;
&lt;/ul&gt;
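&lt;p&gt;A minimal sketch of how these attributes could map to a persistence class (the class name and property types are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// One row per event; EventData holds the serialised event object.
public class StoredEvent
{
    public long EventId { get; set; }        // uniquely identifies the event
    public DateTime TimeStamp { get; set; }  // creation time; defines the sequence
    public string EventName { get; set; }    // recognises the type of event
    public string EventData { get; set; }    // event object in serialised (e.g. JSON) form
    public string AggregatorId { get; set; } // entity the event belongs to
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;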

&lt;h4&gt;
  
  
  EventStore table in database: &lt;a&gt;&lt;/a&gt;
&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;EventId&lt;/th&gt;
&lt;th&gt;Event TimeStamp&lt;/th&gt;
&lt;th&gt;EventName&lt;/th&gt;
&lt;th&gt;EventData&lt;/th&gt;
&lt;th&gt;Aggregator Id&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;2019-10-31T01:48:52&lt;/td&gt;
&lt;td&gt;ReservationCreatedEvent&lt;/td&gt;
&lt;td&gt;{ ReservationCreated..&lt;/td&gt;
&lt;td&gt;1000129&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;2019-10-31T02:12:35&lt;/td&gt;
&lt;td&gt;ReservationCreatedEvent&lt;/td&gt;
&lt;td&gt;{ ReservationCreated..&lt;/td&gt;
&lt;td&gt;1000129&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;2019-10-31T02:30:15&lt;/td&gt;
&lt;td&gt;ReservationUpdatedEvent&lt;/td&gt;
&lt;td&gt;{ ReservationUpdated..&lt;/td&gt;
&lt;td&gt;1000129&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;2019-10-31T07:23:18&lt;/td&gt;
&lt;td&gt;ReservationCreatedEvent&lt;/td&gt;
&lt;td&gt;{ ReservationCreated..&lt;/td&gt;
&lt;td&gt;1000129&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;UPDATE&lt;/code&gt; operation on an entity is different from a classic update. Here you &lt;strong&gt;don’t override an existing record&lt;/strong&gt;, but &lt;strong&gt;add another record&lt;/strong&gt; with the same attributes as a create operation: a unique event ID, the event timestamp, the event name, the serialised update-event data, and the aggregator ID.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;DELETE&lt;/code&gt; operation is analogous to UPDATE. Here too you &lt;strong&gt;don’t delete any existing record&lt;/strong&gt;, but &lt;strong&gt;add another record to represent the delete event&lt;/strong&gt;. The deletion is therefore logical: it records that the entity with a given ID is no longer valid and should not be considered for the purposes of the business, which is conveyed by a distinct event name.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Event Aggregator&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The main aspect of event sourcing is the persistence of messages, which lets you keep track of all changes in the state of the application. Recording individual events doesn’t give you the current state directly; however, by reading back the log of events, you can rebuild the present state of the entity. This is what is commonly called the replay of events, and it is done by the &lt;strong&gt;Event Aggregator&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Event aggregation is a two-step operation. First, you fetch all events stored for a given aggregate, in sequence, using the AggregateId and TimeStamp. Second, you walk through those events, extract the relevant information from each, and copy it into a fresh instance of the aggregate of choice.&lt;/p&gt;

&lt;p&gt;A key function one expects out of an event-based data store is the ability to return the full or partial stream of events. This function is necessary to rebuild the state of an aggregate out of recorded events. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jd6u4lkv3l6da780ttc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jd6u4lkv3l6da780ttc.png" alt="Example for conference booking system from MS Docs"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In our example, let’s consider the conference with conferenceId &lt;strong&gt;1000129&lt;/strong&gt; has the capacity of &lt;strong&gt;50 seats&lt;/strong&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Reservation m/s receives a &lt;code&gt;CreateReservationCommand&lt;/code&gt; to &lt;strong&gt;reserve 1 seat&lt;/strong&gt; for conference &lt;strong&gt;1000129&lt;/strong&gt; followed by &lt;strong&gt;2 seats&lt;/strong&gt;. Therefore 2 CreateEvents are stored in the event store.&lt;/li&gt;
&lt;li&gt;Reservation m/s then receives an &lt;code&gt;UpdateReservationCommand&lt;/code&gt; to &lt;strong&gt;cancel&lt;/strong&gt; the reservation for &lt;strong&gt;1 seat&lt;/strong&gt; for conference &lt;strong&gt;1000129&lt;/strong&gt;. 1 UpdateEvent is stored in the Event Store.&lt;/li&gt;
&lt;li&gt;Reservation m/s receives a &lt;code&gt;CreateReservationCommand&lt;/code&gt; to reserve &lt;strong&gt;1 seat&lt;/strong&gt; for conference &lt;strong&gt;1000129&lt;/strong&gt;. 1 CreateEvent is stored in the event store.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Reservation m/s now receives the &lt;code&gt;GetConferenceReservationStatusQuery&lt;/code&gt;. The command handler calls the &lt;strong&gt;Event Aggregator&lt;/strong&gt;, ReservationAggregator, which reads the conferenceId &lt;strong&gt;1000129&lt;/strong&gt; from the query and fetches all events in the EventStore where AggregatorId=&lt;strong&gt;1000129&lt;/strong&gt;. It finds 4 events (3 CreateEvents and 1 UpdateEvent). By processing these events, the seat availability is calculated and the &lt;code&gt;SeatAvailabilityAggregate&lt;/code&gt; is created.&lt;br&gt;
The Reservation m/s then responds that 47 of the 50 seats are available.&lt;/p&gt;
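&lt;p&gt;The replay performed by the ReservationAggregator could be sketched as follows (the event payload types, the aggregate constructor, and the &lt;code&gt;_eventStore&lt;/code&gt; source are illustrative placeholders; &lt;code&gt;System.Text.Json&lt;/code&gt; is assumed for deserialisation):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public SeatAvailabilityAggregate GetSeatAvailability(string conferenceId, int capacity)
{
    // Step 1: fetch the aggregate's events, in sequence.
    var events = _eventStore
        .Where(e =&gt; e.AggregatorId == conferenceId)
        .OrderBy(e =&gt; e.TimeStamp);

    // Step 2: fold the events into the current state.
    var reserved = 0;
    foreach (var e in events)
    {
        switch (e.EventName)
        {
            case "ReservationCreatedEvent":
                reserved += JsonSerializer.Deserialize&lt;ReservationCreated&gt;(e.EventData).Seats;
                break;
            case "ReservationUpdatedEvent":
                reserved -= JsonSerializer.Deserialize&lt;ReservationUpdated&gt;(e.EventData).SeatsCancelled;
                break;
        }
    }
    return new SeatAvailabilityAggregate(conferenceId, capacity - reserved);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;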

&lt;h2&gt;
  
  
  &lt;strong&gt;Advantages of Event Sourcing pattern&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Performance improvement in inserting/updating data&lt;/strong&gt; — events are immutable and can be stored using an append-only operation, so we can expect far fewer (or no) deadlocks and concurrency exceptions when an event is written to the Event Store.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simplification of implementation&lt;/strong&gt; – Events don’t directly update a data store. They’re simply recorded for handling at the appropriate time. This can simplify implementation and management. However, the domain model must still be designed to protect itself from requests that might result in an inconsistent state.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Audit trail&lt;/strong&gt; — since the state is constructed from a sequence of events, it is possible to extract a detailed log from the beginning up to the current date. This becomes very useful for a production incident post-mortem. The list of events can also be used to analyse application performance, detect user behaviour trends, or obtain other useful business information.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Projections &amp;amp; queries&lt;/strong&gt; — The append-only storage of events can be used to monitor actions taken against a data store, regenerate the current state as materialised views or projections by replaying the events at any time. In addition, the requirement to use compensating events to cancel changes provides a history of changes that were reversed, which wouldn’t be the case if the model simply stored the current state.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thus we have seen how the Event Sourcing pattern can be used in microservices built with .NET Core.&lt;/p&gt;

&lt;p&gt;The resources I referred to for a better understanding of the Event Sourcing pattern, and to write this post, are listed below:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://app.pluralsight.com/library/courses/modern-software-architecture-domain-models-cqrs-event-sourcing/table-of-contents" rel="noopener noreferrer"&gt;Modern Software Architecture&lt;/a&gt; course by Dino Esposito on Pluralsight.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/architecture/patterns/event-sourcing" rel="noopener noreferrer"&gt;Microsoft Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://martinfowler.com/eaaDev/EventAggregator.html" rel="noopener noreferrer"&gt;Martin Fowler’s blog&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I wrote this post while learning about and implementing the Event Sourcing pattern in the code I develop. Some parts may be missed or not completely covered. Please feel free to reach out to me; I’d love to talk about software development patterns and learn along the way.&lt;/p&gt;

&lt;p&gt;I hope you enjoyed reading this.&lt;/p&gt;

&lt;p&gt;Thank you!&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>microservices</category>
      <category>architecture</category>
      <category>eventdriven</category>
    </item>
    <item>
      <title>Creating Deployment &amp; Rollback SQL Scripts from EntityFrameworkCore migrations</title>
      <dc:creator>chaitanya.dev</dc:creator>
      <pubDate>Mon, 05 Apr 2021 13:56:56 +0000</pubDate>
      <link>https://dev.to/chaitanyasuvarna/creating-deployment-rollback-sql-scripts-from-entityframeworkcore-migrations-5e75</link>
      <guid>https://dev.to/chaitanyasuvarna/creating-deployment-rollback-sql-scripts-from-entityframeworkcore-migrations-5e75</guid>
      <description>&lt;p&gt;If you have worked on an application implemented in dotnet core, chances are high that changes in data models and database schemas are managed using EF Core. The migrations feature in EF Core provides a way to incrementally update the database schema to keep it in sync with the application’s data model while preserving existing data in the database.&lt;/p&gt;

&lt;p&gt;There are various strategies for applying EF Core migrations, with some being more appropriate for production environments, and others for the development lifecycle.&lt;/p&gt;

&lt;p&gt;Microsoft’s &lt;a href="https://docs.microsoft.com/en-us/ef/core/managing-schemas/migrations/applying?tabs=dotnet-core-cli"&gt;EF Core documentation&lt;/a&gt; suggests that the recommended way to deploy migrations to a production database is by generating SQL scripts. The advantages of this strategy are stated as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SQL scripts can be reviewed for accuracy; this is important since applying schema changes to production databases is a potentially dangerous operation that could involve data loss.&lt;/li&gt;
&lt;li&gt;In some cases, the scripts can be tuned to fit the specific needs of a production database.&lt;/li&gt;
&lt;li&gt;SQL scripts can be used in conjunction with a deployment technology, and can even be generated as part of your CI process.&lt;/li&gt;
&lt;li&gt;SQL scripts can be provided to a DBA, and can be managed and archived separately.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s have a look at how to generate SQL scripts for your migrations.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Generating SQL scripts for applying migrations&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The command below, executed with the .NET Core CLI, generates a SQL script for your migrations, covering everything from a blank database to the latest migration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet ef migrations script
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Script generation accepts the following two arguments to indicate which range of migrations should be generated:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;from&lt;/code&gt; migration should be the last migration applied to the database before running the script. If no migrations have been applied, specify 0 (this is the default).&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;to&lt;/code&gt; migration is the last migration that will be applied to the database after running the script. This defaults to the last migration in your project.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can also generate the SQL script from a specific migration up to the latest one by adding the &lt;code&gt;from&lt;/code&gt; migration name, as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet ef migrations script FromMigrationName
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you prefer to generate a SQL script from a specified &lt;code&gt;from&lt;/code&gt; migration to a specified &lt;code&gt;to&lt;/code&gt; migration, you can pass both, as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet ef migrations script FromMigrationName ToMigrationName
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Generating Rollback SQL Scripts for your migrations&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Whenever you deploy changes to a higher environment, such as UAT, another testing environment, or Production/DR, it is essential to have a rollback script ready. If issues arise during deployment, you can roll back the changes so that user experience or testing is not impacted.&lt;/p&gt;

&lt;p&gt;You can generate rollback scripts using the same EF Core script generation command; the only difference is that the &lt;code&gt;from&lt;/code&gt; and &lt;code&gt;to&lt;/code&gt; migrations are inverted.&lt;/p&gt;

&lt;p&gt;If the command that generates the SQL script for applying migrations looks like this&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet ef migrations script ThirdMigrationName FifthMigrationName 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;then the command to generate the rollback SQL script would be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet ef migrations script FifthMigrationName ThirdMigrationName
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want to roll back all migrations, specify &lt;strong&gt;0&lt;/strong&gt; as the &lt;code&gt;to&lt;/code&gt; argument; the &lt;code&gt;from&lt;/code&gt; argument should be the migration from which you want the rollback script to start.&lt;/p&gt;
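&lt;p&gt;For example, continuing with the hypothetical migration names used above, a script that rolls everything back down to an empty database could be generated like this:&lt;/p&gt;

```shell
# Roll back all migrations, from FifthMigrationName down to a blank database
dotnet ef migrations script FifthMigrationName 0
```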

&lt;h3&gt;
  
  
  &lt;strong&gt;Idempotent SQL scripts&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;EF Core also supports generating &lt;strong&gt;idempotent&lt;/strong&gt; scripts, which internally check which migrations have already been applied (via the migrations history table), and only apply missing ones. This is useful if you don’t exactly know what the last migration applied to the database was, or if you are deploying to multiple databases that may each be at a different migration.&lt;/p&gt;

&lt;p&gt;The following command generates an idempotent script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet ef migrations script --idempotent
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It is good practice to always generate idempotent scripts so that you don’t end up applying the same migration to your database multiple times, leaving the database in an inconsistent state.&lt;/p&gt;
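&lt;p&gt;In practice you will usually want the generated script in a file that can be reviewed and deployed; the &lt;code&gt;--output&lt;/code&gt; option of &lt;code&gt;dotnet ef&lt;/code&gt; does this:&lt;/p&gt;

```shell
# Write an idempotent migration script to a file for review and deployment
dotnet ef migrations script --idempotent --output migrations.sql
```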

&lt;p&gt;Before deploying to production, the DBA should always review the script for accuracy. The production database may contain minor changes that are not present in the other environments, and a DBA who is aware of these changes can update the script accordingly.&lt;/p&gt;

&lt;p&gt;Thus, we saw how we can generate SQL scripts and rollback scripts for EF Core migrations.&lt;/p&gt;

&lt;p&gt;I hope you found this informative and helpful.&lt;/p&gt;

&lt;p&gt;Thank you for reading!&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>sql</category>
      <category>entityframework</category>
    </item>
    <item>
      <title>Implementing factory pattern for dependency injection in .NET core</title>
      <dc:creator>chaitanya.dev</dc:creator>
      <pubDate>Tue, 23 Mar 2021 15:53:40 +0000</pubDate>
      <link>https://dev.to/chaitanyasuvarna/implementing-factory-pattern-for-dependency-injection-in-net-core-537l</link>
      <guid>https://dev.to/chaitanyasuvarna/implementing-factory-pattern-for-dependency-injection-in-net-core-537l</guid>
      <description>&lt;p&gt;ASP.NET Core supports the dependency injection (DI) software design pattern, which is a technique for achieving &lt;a href="https://docs.microsoft.com/en-us/dotnet/architecture/modern-web-apps-azure/architectural-principles#dependency-inversion"&gt;Inversion of Control (IoC)&lt;/a&gt; between classes and their dependencies.&lt;/p&gt;

&lt;p&gt;A &lt;em&gt;dependency&lt;/em&gt; is an object that another object depends on. If a class has a hardcoded dependency, i.e. the object is instantiated inside the class, this creates problems: replacing the dependency with another object requires modifying the class, and the class also becomes difficult to unit test.&lt;/p&gt;

&lt;p&gt;By using dependency injection, we move the creation and binding of the dependent objects outside of the class that depends on them, solving the problems of hardcoded dependencies and making the classes loosely coupled.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Factory Pattern?
&lt;/h3&gt;

&lt;p&gt;The Factory Pattern was first introduced in the book “&lt;em&gt;Design Patterns: Elements of Reusable Object-Oriented Software&lt;/em&gt;” by the “Gang of Four” (Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides).&lt;/p&gt;

&lt;p&gt;The Factory Pattern is a design pattern which defines an interface for creating an object, but lets the classes that depend on that interface decide which class to instantiate. It abstracts the process of object creation so that the type of object to be instantiated can be determined at run time by the class that wants to instantiate it.&lt;/p&gt;

&lt;h3&gt;
  
  
  When is Factory Pattern for DI required?
&lt;/h3&gt;

&lt;p&gt;The Factory Pattern is useful when multiple classes implement an interface and another class depends on that interface, but determines at run time, based on user input, which type of object it wants to instantiate for that interface.&lt;/p&gt;

&lt;p&gt;To understand this scenario further, let’s take an example of an interface called &lt;code&gt;IShape&lt;/code&gt;.&lt;/p&gt;
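&lt;p&gt;The embedded gist does not appear in this feed, so here is a minimal sketch of what &lt;code&gt;IShape&lt;/code&gt; could look like. The member names are my assumption based on the description that follows, not the original gist:&lt;/p&gt;

```csharp
// A hypothetical IShape contract: each shape reads its own dimensions
// from the user and knows how to compute its surface area and volume.
public interface IShape
{
    void GetInput();
    double GetSurfaceArea();
    double GetVolume();
}
```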


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;And we also have two classes called Cube and Sphere that implement &lt;code&gt;IShape&lt;/code&gt;.&lt;/p&gt;
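&lt;p&gt;Again, the original gist is not embedded here; a sketch of the two implementations, assuming the &lt;code&gt;IShape&lt;/code&gt; interface above, might look like this:&lt;/p&gt;

```csharp
using System;

// Hypothetical implementations; the gist in the linked repo may differ.
public class Cube : IShape
{
    private double _side;

    public void GetInput()
    {
        Console.Write("Enter the side length: ");
        _side = double.Parse(Console.ReadLine());
    }

    public double GetSurfaceArea() => 6 * _side * _side;   // 6a^2
    public double GetVolume() => _side * _side * _side;    // a^3
}

public class Sphere : IShape
{
    private double _radius;

    public void GetInput()
    {
        Console.Write("Enter the radius: ");
        _radius = double.Parse(Console.ReadLine());
    }

    public double GetSurfaceArea() => 4 * Math.PI * _radius * _radius;        // 4πr^2
    public double GetVolume() => 4.0 / 3.0 * Math.PI * Math.Pow(_radius, 3);  // (4/3)πr^3
}
```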


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;Now, imagine we have a service class called &lt;code&gt;ShapeCalculationService&lt;/code&gt; which depends on &lt;code&gt;IShape&lt;/code&gt;. It takes input from the user to choose either a cube or a sphere, and based on that input it has to instantiate the &lt;code&gt;Cube&lt;/code&gt; or &lt;code&gt;Sphere&lt;/code&gt; object at run time. How would that be possible?&lt;/p&gt;

&lt;p&gt;The service class needs the corresponding shape object to read the input and display the surface area and volume for the shape.&lt;/p&gt;

&lt;p&gt;This scenario, where multiple classes implement an interface and the object to be instantiated is decided at runtime, is where the Factory Pattern comes into use.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to implement Factory Pattern for DI?
&lt;/h3&gt;

&lt;p&gt;Continuing with the above example, we will implement the factory pattern in this scenario.&lt;br&gt;
To abstract the instantiation of the correct shape object at run time, we will create a &lt;code&gt;ShapeFactory&lt;/code&gt; class whose responsibility is to resolve which concrete class needs to be instantiated for a given user selection.&lt;/p&gt;
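&lt;p&gt;Since the gist is not embedded in this feed, here is a sketch of such a factory. The &lt;code&gt;GetShape&lt;/code&gt; method name and the selection strings are my assumptions; see the linked repo for the author’s actual code:&lt;/p&gt;

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

public class ShapeFactory
{
    private readonly IServiceProvider _serviceProvider;

    public ShapeFactory(IServiceProvider serviceProvider)
    {
        _serviceProvider = serviceProvider;
    }

    // The factory only decides WHICH concrete class is needed; it delegates
    // instantiation (and resolution of that class's own dependencies)
    // to the built-in service container.
    public IShape GetShape(string selection)
    {
        switch (selection?.ToLowerInvariant())
        {
            case "cube":
                return _serviceProvider.GetRequiredService<Cube>();
            case "sphere":
                return _serviceProvider.GetRequiredService<Sphere>();
            default:
                throw new ArgumentException($"Unknown shape: {selection}");
        }
    }
}
```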


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;Here you can see that the &lt;code&gt;ShapeFactory&lt;/code&gt; class has a dependency on &lt;code&gt;IServiceProvider&lt;/code&gt;: the factory only resolves which concrete class needs to be instantiated, and asks the built-in service container of .NET Core to create the instance and resolve its dependencies.&lt;/p&gt;

&lt;p&gt;This is because we want to rely on the &lt;strong&gt;IoC container of .NET Core&lt;/strong&gt; to resolve our dependencies, and we don’t want to change our factory every single time a new dependency is introduced in either the &lt;code&gt;Cube&lt;/code&gt; or &lt;code&gt;Sphere&lt;/code&gt; class.&lt;br&gt;
This further &lt;strong&gt;decouples&lt;/strong&gt; the code and makes it easier to manage.&lt;/p&gt;

&lt;p&gt;The place where you set up the &lt;code&gt;ServiceCollection&lt;/code&gt; for your project should look like the example below, so that .NET Core’s DI can figure out the dependencies required by each service and resolve them at run time.&lt;/p&gt;

&lt;p&gt;In my example, since this is a simple console application, this code resides in &lt;code&gt;Program.cs&lt;/code&gt;&lt;/p&gt;
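&lt;p&gt;The gist itself is missing from this feed; a minimal sketch of that &lt;code&gt;Program.cs&lt;/code&gt; registration (the &lt;code&gt;Run&lt;/code&gt; entry point is my assumption) could be:&lt;/p&gt;

```csharp
using Microsoft.Extensions.DependencyInjection;

public class Program
{
    public static void Main()
    {
        var services = new ServiceCollection();

        // Register the concrete shapes so the container can build them
        // (and any dependencies they declare) at run time.
        services.AddTransient<Cube>();
        services.AddTransient<Sphere>();
        services.AddTransient<ShapeFactory>();
        services.AddTransient<ShapeCalculationService>();

        var provider = services.BuildServiceProvider();
        provider.GetRequiredService<ShapeCalculationService>().Run();
    }
}
```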


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;Finally, I’d also like you to see the actual &lt;code&gt;ShapeCalculationService&lt;/code&gt; that takes input from the user and uses &lt;code&gt;ShapeFactory&lt;/code&gt; to get the shape object at run time, which is then used to display the surface area and the volume.&lt;/p&gt;
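&lt;p&gt;As the embedded gist is not included here, this is a sketch of what that service might look like (member names are assumptions; the repo linked below has the real version):&lt;/p&gt;

```csharp
using System;

public class ShapeCalculationService
{
    private readonly ShapeFactory _shapeFactory;

    public ShapeCalculationService(ShapeFactory shapeFactory)
    {
        _shapeFactory = shapeFactory;
    }

    public void Run()
    {
        Console.Write("Choose a shape (cube/sphere): ");
        var selection = Console.ReadLine();

        // The concrete IShape is decided here, at run time, by the factory.
        var shape = _shapeFactory.GetShape(selection);
        shape.GetInput();

        Console.WriteLine($"Surface area: {shape.GetSurfaceArea()}");
        Console.WriteLine($"Volume: {shape.GetVolume()}");
    }
}
```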


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Thus, we have seen how we can inject a factory to take full control over the creation of our dependencies at run time, while still using .NET Core’s IoC container to resolve them.&lt;/p&gt;

&lt;p&gt;If you are like me and need a small yet complete demo solution to clearly understand how this works, I have created a demo project in my GitHub repository which should help.&lt;br&gt;
You can have a look at it here: &lt;a href="https://github.com/chaitanya-suvarna/DotNetCoreFactoryPattern"&gt;DotNetCoreFactoryPattern&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I hope you found this interesting. Thanks for reading!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>The $81 million Bangladesh bank heist that was assisted with improper software security practices in place</title>
      <dc:creator>chaitanya.dev</dc:creator>
      <pubDate>Mon, 30 Nov 2020 06:59:39 +0000</pubDate>
      <link>https://dev.to/chaitanyasuvarna/the-81-million-bangladesh-bank-heist-that-was-assisted-with-improper-software-security-practices-in-place-m6l</link>
      <guid>https://dev.to/chaitanyasuvarna/the-81-million-bangladesh-bank-heist-that-was-assisted-with-improper-software-security-practices-in-place-m6l</guid>
      <description>&lt;p&gt;In February 2016, hackers broke into Bangladesh’s central bank and were able to steal &lt;strong&gt;$81 million&lt;/strong&gt; from the bank. The ‘breaking in’ was not done physically, instead they broke into the bank’s computer systems due to improper security controls in place. The hackers used a malware that allowed them to hack into the bank’s &lt;a href="https://www.swift.com/about-us/discover-swift/messaging-and-standards"&gt;SWIFT&lt;/a&gt; software to transfer money, as well as hide their tracks.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;What is a SWIFT message?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;SWIFT&lt;/strong&gt; stands for the Society for Worldwide Interbank Financial Telecommunication and is a consortium that operates a trusted and closed computer network for communication between member banks around the world. The consortium, which dates back to the 1970s, is based in Belgium and is overseen by the National Bank of Belgium and a committee composed of representatives from the US Federal Reserve, the Bank of England, the European Central Bank, the Bank of Japan and other major banks. The SWIFT platform has some &lt;strong&gt;11,000 users&lt;/strong&gt; and processes about &lt;strong&gt;25 million communications a day&lt;/strong&gt;, most of them money transfer transactions. Financial institutions and brokerage houses that use SWIFT have codes that identify each institution, as well as credentials that authenticate and verify transactions. A SWIFT message contains &lt;a href="https://www.investopedia.com/articles/personal-finance/050515/how-swift-system-works.asp"&gt;instructions&lt;/a&gt; conveyed through this standardised system of codes.&lt;/p&gt;

&lt;p&gt;Now that we know what SWIFT messages are, let’s have a look at the sequence of events that took place leading to this heist and also discuss the learnings from this event.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Pre-heist&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In January, weeks before the heist, the hackers obtained the computer credentials of a SWIFT operator at Bangladesh Bank by installing malware on the bank’s systems. They did a series of test runs, logging into the system briefly several times between Jan 24 and Feb 2. One day they left monitoring software running on the bank’s SWIFT system; on another they deleted files from a database. All this was to make sure they had the necessary control over the systems.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;February 4, Thursday&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;During the late evening of Feb 4, when most of the bank’s employees were off work, the hackers initiated 35 fraudulent payment orders via SWIFT, worth &lt;strong&gt;$951 million&lt;/strong&gt;, from the Bangladesh Bank’s account with the &lt;a href="https://en.wikipedia.org/wiki/Federal_Reserve_Bank_of_New_York"&gt;Federal Reserve Bank of New York&lt;/a&gt;. Of these 35 requests, 30 worth $851 million were flagged for review by the Fed, while 5 were granted: &lt;strong&gt;$20 million to Sri Lanka&lt;/strong&gt; and &lt;strong&gt;$81 million to the Philippines&lt;/strong&gt;. The successful transactions were forwarded to correspondent banks, to be later transferred to the destination bank accounts. The $20 million transfer to Sri Lanka raised suspicion at Deutsche Bank, one of the routing banks, because the &lt;strong&gt;hackers misspelled the word&lt;/strong&gt; “Foundation” as “&lt;strong&gt;Fundation&lt;/strong&gt;”, putting the transaction on hold until clarifications were received from Bangladesh Bank. The Fed also sent multiple notifications to the Bangladesh Bank but received no response.&lt;br&gt;
After the requests were sent, the malware checked the SWIFT messaging system on the terminal and deleted any incoming or confirmation messages relating to the fraudulent transfers before they were sent to the office printer, thus covering all tracks.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;February 5, Friday&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Friday being a weekend day in Bangladesh, only a handful of staff came into the bank, finding an empty printer tray that would normally contain the automatically printed SWIFT transaction messages. They also found that the printer was broken, which was not unusual. The boss asked for the printer to be repaired and went on with his daily routine. At the same time, the $81 million landed in 4 bank accounts at the Manila branch of a Philippine bank called &lt;a href="https://en.wikipedia.org/wiki/Rizal_Commercial_Banking_Corporation"&gt;RCBC&lt;/a&gt;. These accounts were later found to have been created with fraudulent IDs. The funds were then transferred to a foreign exchange broker to be converted to Philippine pesos, returned to RCBC, and consolidated in a single account.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;February 6, Saturday&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The employees of Bangladesh Bank returned to work on Saturday around 9am and tried again to use the printer, only to discover that the SWIFT software would not start, showing an error that a file named NROFF.EXE “is missing or changed”. Only when they finally got access to the SWIFT messaging system, after a series of approvals from senior officials to access it by other means and manually print the SWIFT messages, did they realise what had happened. They promptly &lt;strong&gt;contacted the New York Fed&lt;/strong&gt; through the phone, email and fax details available on its official website, but there was &lt;strong&gt;no response&lt;/strong&gt; from the office as it was the &lt;strong&gt;weekend&lt;/strong&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;February 8, Monday&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;SWIFT remotely fixed the messaging system, and Bangladesh Bank realised that the money had gone to RCBC in the Philippines. They sent a SWIFT message to RCBC asking them to STOP the transfers, but 8th Feb was a &lt;strong&gt;public holiday in the Philippines&lt;/strong&gt; due to the Chinese New Year, and the message was received only a day later.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;February 9, Tuesday&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The SWIFT message sent to RCBC asking them to STOP the transfers was sent as a &lt;strong&gt;normal SWIFT message&lt;/strong&gt; and not as a CANCEL message. Because of this, it was added to a pile of other routine messages at the RCBC headquarters and forwarded to the RCBC branch holding the accounts. By the time the branch got to the message, the money in the accounts had already been transferred to other accounts, with most of it ending up in &lt;strong&gt;4 Philippine casinos&lt;/strong&gt;.&lt;br&gt;
Casinos in the Philippines are not covered by anti-money laundering laws, which means there are gaps in record-keeping around where money goes once a casino obtains it.&lt;br&gt;
To this day the entirety of the $81 million has not been recovered, and there are multiple ongoing investigations at Bangladesh Bank, the Philippines’ Anti-Money Laundering Council and the US Fed.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Why was this heist successful?&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Carefully designed heist&lt;/strong&gt; by the hackers. The 4 RCBC bank accounts used to transfer the money had been created back in May 2015 with fake IDs and lay dormant with just $500 in them for months until the attack. The hackers also planned around the fact that Friday would be a weekend day in Bangladesh and Monday a holiday in the Philippines, so that all parties would never be available at the same time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Delay at Bangladesh Bank&lt;/strong&gt;. Had they proactively investigated why the SWIFT acknowledgement messages were not printed on Friday, they could have discovered the heist and contacted the New York Fed while it was still open on Friday, instead of during off-duty hours on Saturday.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No 24*7 support by the Federal Reserve Bank of New York&lt;/strong&gt; for such emergencies, and dependency on an automated system that mostly reviews just the format of SWIFT messages and flags suspicious ones for manual review.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Weak Anti-Money Laundering laws in the Philippines&lt;/strong&gt;, which allowed the money to vanish into the casino system, where customer and account details can be kept private without proper record-keeping.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lack of proper software security controls&lt;/strong&gt; at the Bangladesh Bank, which was one of the major reasons this hack was successful, and which I’d like to discuss further.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;How did improper software security practices enable the heist?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The Bangladesh Bank had &lt;strong&gt;not protected its computer systems with a firewall&lt;/strong&gt;, and it had used &lt;strong&gt;second-hand $10 electronic switches&lt;/strong&gt; to network the computers linked to the SWIFT global payment system. Hackers may have exploited these weaknesses after Bangladesh Bank connected a new electronic payment system, known as real-time gross settlement (RTGS), in November the previous year. One report also said that the malware might have been installed when an employee accessed their mailbox on the &lt;strong&gt;same network&lt;/strong&gt; as the SWIFT system and opened a &lt;strong&gt;contaminated email&lt;/strong&gt;, which a proper firewall could easily have prevented.&lt;br&gt;
“Banks should conduct SWIFT transactions only on computers that are &lt;strong&gt;isolated from other devices&lt;/strong&gt; on their networks”, says Sean Sullivan, an adviser at the security firm F-Secure. “It should be a &lt;strong&gt;dedicated computer for its single task&lt;/strong&gt;”, Sullivan says. Despite SWIFT’s warnings, the bank had &lt;strong&gt;not segregated its SWIFT server&lt;/strong&gt; from the rest of the computer network. In addition, the bank had &lt;strong&gt;not updated&lt;/strong&gt; the SWIFT systems to the latest version of the software, which included the &lt;strong&gt;latest security patches&lt;/strong&gt;.&lt;br&gt;
Without these controls in place, the hackers got into the bank’s network with enough access to override local security settings, and hid in plain sight for months, gaining an understanding of the bank’s business operations and collecting user credentials and other information needed to get into the SWIFT server. This helped the hackers turn one of SWIFT’s defining features, its &lt;strong&gt;global reach&lt;/strong&gt;, into a vulnerability.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;SWIFT’s statement&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In a statement provided to Information Security Media Group, SWIFT notes that it is aware of the risks and was taking steps to help banks shore up security.&lt;/p&gt;

&lt;p&gt;“We understand that the malware is designed to hide the traces of fraudulent payments from customers’ local database applications and can only be installed on users’ local systems by attackers that have successfully identified and exploited weaknesses in their local security,” the statement says. “We have developed a facility to assist customers in enhancing their security and to spot inconsistencies in their local database records.”&lt;/p&gt;

&lt;p&gt;“However, the key defense against such attack scenarios &lt;strong&gt;remains for users to implement appropriate security measures in their local environments&lt;/strong&gt; to safeguard their systems – in particular those used to access SWIFT – against such potential security threats. Such protections should be implemented by users to prevent the injection of malware into, or any misappropriation of, their interfaces and other core systems.”&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Do we pay enough attention to our security controls and the technology debt?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Everyone working in a software development role has come across the &lt;strong&gt;dilemma&lt;/strong&gt; of having to prioritise between features focused on &lt;strong&gt;enhancing security controls&lt;/strong&gt; in applications/products that are currently live, and features that deliver &lt;strong&gt;additional functionality&lt;/strong&gt;. There is also the &lt;strong&gt;technical debt&lt;/strong&gt; backlog that we accrue for reasons such as lack of time or understanding. This technical debt can lead to security loopholes in our systems that make it easier for someone with malicious intent to break in and cause damage.&lt;br&gt;
More often than not, teams put off paying down this technical debt, or putting proper security controls and practices in place, until it is too late. Bangladesh Bank made the mistake of not having a proper firewall and other network security features that would have isolated and protected the SWIFT server, hindering the hackers from getting into the network with their malware, and that mistake cost millions, lost in a matter of hours.&lt;/p&gt;


&lt;blockquote class="ltag__twitter-tweet"&gt;
      &lt;div class="ltag__twitter-tweet__media ltag__twitter-tweet__media__video-wrapper"&gt;
        &lt;div class="ltag__twitter-tweet__media--video-preview"&gt;
          &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rnyLUA3f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/ext_tw_video_thumb/1332141933137858560/pu/img/yR-NEovqW0iQxh4x.jpg" alt="unknown tweet media content"&gt;
          &lt;img src="/assets/play-butt.svg" class="ltag__twitter-tweet__play-butt" alt="Play butt"&gt;
        &lt;/div&gt;
        &lt;div class="ltag__twitter-tweet__video"&gt;
          
            
          
        &lt;/div&gt;
      &lt;/div&gt;

  &lt;div class="ltag__twitter-tweet__main"&gt;
    &lt;div class="ltag__twitter-tweet__header"&gt;
      &lt;img class="ltag__twitter-tweet__profile-image" src="https://res.cloudinary.com/practicaldev/image/fetch/s--CWczz8GT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/profile_images/1323752440718454785/nHBxEcsr_normal.jpg" alt="Simpsons Against DevOps profile image"&gt;
      &lt;div class="ltag__twitter-tweet__full-name"&gt;
        Simpsons Against DevOps
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__username"&gt;
        @simpsonsops
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__twitter-logo"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ir1kO05j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-f95605061196010f91e64806688390eb1a4dbc9e913682e043eb8b1e06ca484f.svg" alt="twitter logo"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__body"&gt;
      "it has been scheduled" 
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__date"&gt;
      02:01 AM - 27 Nov 2020
    &lt;/div&gt;


    &lt;div class="ltag__twitter-tweet__actions"&gt;
      &lt;a href="https://twitter.com/intent/tweet?in_reply_to=1332142380816863232" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fFnoeFxk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-reply-action-238fe0a37991706a6880ed13941c3efd6b371e4aefe288fe8e0db85250708bc4.svg" alt="Twitter reply action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/retweet?tweet_id=1332142380816863232" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k6dcrOn8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-retweet-action-632c83532a4e7de573c5c08dbb090ee18b348b13e2793175fea914827bc42046.svg" alt="Twitter retweet action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/like?tweet_id=1332142380816863232" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SRQc9lOp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-like-action-1ea89f4b87c7d37465b0eb78d51fcb7fe6c03a089805d7ea014ba71365be5171.svg" alt="Twitter like action"&gt;
      &lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/blockquote&gt;


&lt;h3&gt;
  
  
  &lt;strong&gt;Aftermath&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The New York Fed has now set up a &lt;strong&gt;24-hour hotline for emergency calls&lt;/strong&gt; from some 250 account holders, mostly central banks, around the world. SWIFT has advised banks using the &lt;strong&gt;SWIFT Alliance Access system&lt;/strong&gt; to strengthen their cyber security posture and ensure they are following SWIFT security guidelines. The case threatened to reinstate the Philippines to the &lt;a href="https://en.wikipedia.org/wiki/Financial_Action_Task_Force_on_Money_Laundering"&gt;Financial Action Task Force on Money Laundering&lt;/a&gt; blacklist of countries that made insufficient efforts against money laundering. The Bangladesh Bank continued its efforts to retrieve the stolen money and had only recovered about $15 million, mostly from a gaming junket operator based in Metro Manila. In February 2019, the Federal Reserve pledged it would help Bangladesh Bank recover the money and SWIFT has also decided to help the central bank rebuild its infrastructure.&lt;/p&gt;

&lt;p&gt;This case made for interesting research, and I have written this blog post based on multiple articles that I read online. As I am not an expert, I may have gotten certain details wrong; if you spot any, please reach out to me.&lt;/p&gt;

&lt;p&gt;I have written this post to highlight how important software security practices are, and how a minor oversight in security controls can let individuals cause disruption remotely, even with trusted systems in place like SWIFT, which is used worldwide for transferring huge amounts of money.&lt;/p&gt;

&lt;p&gt;I hope you enjoyed reading this post.&lt;/p&gt;

&lt;p&gt;Thank you!&lt;/p&gt;

</description>
      <category>security</category>
      <category>controls</category>
      <category>swift</category>
      <category>bank</category>
    </item>
    <item>
      <title>How a developer broke the internet by un-publishing his package containing 11 lines of code</title>
      <dc:creator>chaitanya.dev</dc:creator>
      <pubDate>Sun, 22 Nov 2020 13:52:27 +0000</pubDate>
      <link>https://dev.to/chaitanyasuvarna/how-a-developer-broke-the-internet-by-un-publishing-his-package-containing-11-lines-of-code-31ei</link>
      <guid>https://dev.to/chaitanyasuvarna/how-a-developer-broke-the-internet-by-un-publishing-his-package-containing-11-lines-of-code-31ei</guid>
      <description>&lt;p&gt;All Javascript developers might have used &lt;a href="https://www.npmjs.com/"&gt;npm&lt;/a&gt; at some point in their lifetime. npm is the default package manager for &lt;a href="https://nodejs.org/en/"&gt;node.js&lt;/a&gt;. For those who don’t know what npm is, npm – short for &lt;strong&gt;Node Package Manager&lt;/strong&gt; is a package manager for the JavaScript programming language. npm, Inc. is a subsidiary of GitHub. npm can manage packages that are dependencies of a particular project, allowing users to reuse modules or pieces of code that are already distributed and present in npm’s remote registry. A package, that your project depends on, can itself have a dependency on another package, which has a dependency on another package and so on.. But the good thing is, with npm you don’t have to worry about the other dependencies, npm handles it for you and does all of it for &lt;strong&gt;free&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Now that we know what npm is and how it works, let’s see what happened on 22nd March 2016 that caused heavily used packages like React, Node and Babel to break, with JavaScript programmers around the world receiving a strange error message when they tried to run their code.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Background&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://kodfabrik.com/"&gt;Azer Koçulu&lt;/a&gt; is an open-source developer who had been publishing and maintaining his packages on npm for other developers to use and include in their packages. Out of his ~270 packages on npm, one of them was called &lt;code&gt;kik&lt;/code&gt;, which helped programmers set up templates for their projects. &lt;br&gt;
&lt;a href="https://www.kik.com/"&gt;Kik&lt;/a&gt; also happens to be the name of a freeware instant messaging mobile app available on both android and iOS, from the company &lt;strong&gt;Kik interactive&lt;/strong&gt; based in Ontario, Canada.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;The E-mail&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;One fine day, Koçulu received an email from one of Kik’s patent agents asking him to rename his package called &lt;code&gt;kik&lt;/code&gt;, as they were planning to publish a package on npm and Koçulu’s package could have caused confusion. The entire email thread can be found &lt;a href="https://medium.com/@mproberts/a-discussion-about-the-breaking-of-the-internet-3d4d2a83aa4d"&gt;here&lt;/a&gt;, but to give you the gist, Azer declined Kik’s request. Bob Stratton, Kik’s patent agent, then took Kik’s request for the name &lt;code&gt;kik&lt;/code&gt; to the npm team, again citing the company’s trademark and the potential for confusion.&lt;br&gt;
npm decided to side with Kik and took &lt;code&gt;kik&lt;/code&gt; away from Azer, handing the name over to the company.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;The Liberation of Modules&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;On finding out that npm had sided with the corporation, Koçulu wrote to npm saying that he wanted all of the packages he had published taken down, or to be told how he could take them all down quickly himself. Two days after Koçulu sent this email, on Tuesday 22nd March, programmers all over the world were left staring at broken builds and failed installations. Among the many lines of errors, one read:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm ERR! 404 'left-pad' is not in the npm registry.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What this error means is that the code you’re trying to build or run requires a package called &lt;code&gt;left-pad&lt;/code&gt; that does not exist in the npm registry (the one we talked about at the start of this post). Where did this package named ‘left-pad’ go?&lt;br&gt;
It turns out Koçulu did exactly what he had written in his email: he unpublished all his packages from npm, and &lt;strong&gt;left-pad&lt;/strong&gt; was one of them. He wrote a &lt;a href="https://kodfabrik.com/journal/i-ve-just-liberated-my-modules"&gt;blog post&lt;/a&gt; explaining why he had unpublished all his modules. “This situation made me realize that NPM is someone’s private land where corporate is more powerful than the people, and I do open source because Power To The People,” Koçulu said in his blog.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Un-Un-publishing left-pad&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To fix all the failing projects and packages around the world, on 23rd March Laurie Voss, CTO and co-founder of npm, decided to do something unconventional: &lt;strong&gt;restore the unpublished left-pad 0.0.3&lt;/strong&gt; that apps required. His &lt;a href="https://twitter.com/seldo/status/712414588281552900"&gt;tweet&lt;/a&gt; read, “&lt;strong&gt;Un-un-publishing&lt;/strong&gt; is an unprecedented action that we’re taking given the severity and widespread nature of breakage, and isn’t done lightly.”&lt;br&gt;
With that, all the failing packages started building and running successfully, thus fixing the internet. Laurie also said, “Even within npm we’re not unanimous that this was the right call, but I cannot see hundreds of builds failing every second and not fix it.” While I agree with him on this, it also made me wonder about a couple of things that we’re going to talk about next.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;What was left-pad ?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Let’s have a look at the contents of left-pad and try to figure out why so many projects all over the world were making use of this package.&lt;br&gt;
left-pad, as the name suggests, pads the left-hand side of strings with characters or spaces, and the entire package contains only 11 lines of code. &lt;strong&gt;11 lines&lt;/strong&gt;. This is left-pad in its entirety.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module.exports = leftpad; 
function leftpad (str, len, ch) {
   str = String(str);
   var i = -1;
   if (!ch &amp;amp;&amp;amp; ch !== 0) ch = ' ';
   len = len - str.length;
   while (++i &amp;lt; len) {
     str = ch + str;
   }
   return str;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Why did this cause so many packages on NPM to break?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;React, Babel, and a bunch of other high-profile packages on npm broke on March 22nd 2016 because all of them took on a &lt;strong&gt;dependency&lt;/strong&gt; on the package &lt;code&gt;left-pad&lt;/code&gt; for a simple string left-padding function. Most programmers facing these build errors might never even have heard the name left-pad, but their code was breaking because their apps depended on some packages, which in turn depended on other packages, and somewhere down that chain one of the dependencies was &lt;strong&gt;left-pad&lt;/strong&gt;.&lt;br&gt;
Ideally, programmers don’t have to worry about all these dependencies, as npm takes care of them and has always been reliable in doing so. In this case though, the package was unpublished; there was no way npm could find this dependency, causing these unforeseen errors.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Why did so many packages depend on left-pad to, well, left pad?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;With only 11 lines of code, left-pad is just a function exported as a module, yet so many packages took a dependency on it rather than their developers writing a basic left-padding function themselves. It would hardly take a well-versed programmer a few minutes to write such a function, and yet they decided to depend on another developer for it. Tying together multiple third-party dependencies and developing a project with minimal code of your own should not be considered ideal. Any issue with a third-party dependency will cause your code to break, and you’ll be dependent on another developer to fix their work before your project can function properly again. This has to be considered a serious issue when web services like Facebook, for example, become indirectly dependent on code written by programmers who don’t even know the impact their code might be having down the line.&lt;/p&gt;
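&lt;p&gt;Just to show how little is involved, here’s a minimal sketch of the same functionality (the function name is mine, not from the package): modern JavaScript even ships &lt;code&gt;String.prototype.padStart&lt;/code&gt; built in since ES2017, so for single-character padding you barely need any code at all.&lt;br&gt;
&lt;/p&gt;

```javascript
// A hand-rolled left pad in a couple of lines, using the built-in
// String.prototype.padStart (ES2017). For a single padding character
// this behaves like the left-pad package.
function leftPad(str, len, ch = ' ') {
  return String(str).padStart(len, String(ch));
}

console.log(leftPad(5, 3, '0'));  // "005"
console.log(leftPad('abc', 5));   // "  abc"
```

&lt;p&gt;(One caveat: for multi-character pad strings, &lt;code&gt;padStart&lt;/code&gt; and left-pad differ slightly, but that edge case is rarely what callers want anyway.)&lt;/p&gt;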

&lt;h3&gt;
  
  
  &lt;strong&gt;How many dependencies are too many dependencies?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Have we become so lazy that we require a &lt;a href="https://www.npmjs.com/package/isarray"&gt;package&lt;/a&gt; to check if an object &lt;strong&gt;isArray&lt;/strong&gt;? The package contains &lt;strong&gt;one line of code&lt;/strong&gt; and is downloaded 39,001,468 times weekly as I write this post. Do we really need to publish packages containing just one function? And should we be creating dependencies on packages for a few lines of code that we can easily write?&lt;br&gt;
I think this method of software development needs to change, and dependencies should be taken on ‘&lt;strong&gt;libraries&lt;/strong&gt;’ that provide an array of interrelated, complex functionalities. Why would you import a package to add, subtract and multiply instead of importing a package that provides all Math functionality?&lt;br&gt;
The ease of writing code by adding dependency after dependency for minimal functionality leads to difficulties in maintaining that code, since you have no control over third-party packages, and it multiplies your points of failure.&lt;br&gt;
Dependencies should be kept to a minimum, and taken only on well-known libraries that offer complex functionality that would be hard or time-consuming to write oneself, so that the added risk of failure is worth it.&lt;/p&gt;
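&lt;p&gt;To put the &lt;strong&gt;isArray&lt;/strong&gt; example in concrete terms: the check that package performs has been built into the language as &lt;code&gt;Array.isArray&lt;/code&gt; since ES5, and even without it the classic one-liner is trivial to write yourself, as sketched here.&lt;br&gt;
&lt;/p&gt;

```javascript
// The built-in, available since ES5 — no dependency required.
console.log(Array.isArray([1, 2, 3])); // true
console.log(Array.isArray('nope'));    // false

// And the classic fallback one-liner, should you ever need it:
function isArray(obj) {
  return Object.prototype.toString.call(obj) === '[object Array]';
}

console.log(isArray([])); // true
console.log(isArray({})); // false
```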

&lt;p&gt;To all programmers, all I’ll say is this: the next time you need a small, simple piece of functionality in your project, choose to write the few lines yourself rather than add a dependency on an unknown package. Take as a cautionary example the frustrated programmers who chose to directly or indirectly depend on 11 lines of code instead of writing them themselves.&lt;/p&gt;

&lt;p&gt;I have not written this blog to discuss the ethics or legality of the actions taken by Kik, npm or Azer, as I am no expert in that area and my opinions there would carry little weight.&lt;/p&gt;

&lt;p&gt;I have written it to discuss how the methodology of depending on so many small APIs (should one-liner functions even classify as APIs?) can be disastrous, and to encourage a re-think of this way of programming.&lt;br&gt;
I’d also suggest going through this &lt;a href="https://www.reddit.com/r/programming/comments/4bjss2/an_11_line_npm_package_called_leftpad_with_only/"&gt;reddit post&lt;/a&gt; to see what other programmers’ opinions on this event are.&lt;/p&gt;

&lt;p&gt;All of this blog’s content comes from Google searches and reading articles, and I may have got some things wrong; please reach out to me if I have missed something.&lt;/p&gt;

&lt;p&gt;Note: The featured image is by &lt;a href="https://xkcd.com/"&gt;xkcd&lt;/a&gt; and all credits go to the artist.&lt;/p&gt;

&lt;p&gt;I hope you enjoyed reading this post!&lt;br&gt;
Thank you.&lt;/p&gt;

</description>
      <category>npm</category>
      <category>programming</category>
      <category>practices</category>
      <category>api</category>
    </item>
    <item>
      <title>Fetch Service Status &amp; Storage Info remotely in C# using WMI</title>
      <dc:creator>chaitanya.dev</dc:creator>
      <pubDate>Sun, 20 Sep 2020 08:24:39 +0000</pubDate>
      <link>https://dev.to/chaitanyasuvarna/fetch-service-status-storage-info-remotely-in-c-using-wmi-56ck</link>
      <guid>https://dev.to/chaitanyasuvarna/fetch-service-status-storage-info-remotely-in-c-using-wmi-56ck</guid>
      <description>&lt;p&gt;I was recently working on a &lt;a href="https://github.com/chaitanya-suvarna/RemoteServerStatus/"&gt;pet project&lt;/a&gt; where I wanted to find the status of some services on &lt;strong&gt;multiple server instances&lt;/strong&gt; and display it on a web page, so that I’d have a single page to look at to check whether any of my application services had &lt;strong&gt;stopped&lt;/strong&gt; on any of the server instances. I also wanted to be able to &lt;strong&gt;Start&lt;/strong&gt; or &lt;strong&gt;Stop&lt;/strong&gt; these services from the same page. With this in place I’d no longer need to log into any of the remote servers to check up on the services. As an add-on, I also wanted to be able to look at the storage information on these servers so that I could act on any drives that were filling up quickly.&lt;/p&gt;

&lt;p&gt;While looking around trying to figure out the easiest way to create such a Web App I came across &lt;a href="https://docs.microsoft.com/en-us/windows/win32/wmisdk/wmi-start-page"&gt;&lt;strong&gt;Windows Management Instrumentation&lt;/strong&gt;&lt;/a&gt; which is the infrastructure for management data and operations on &lt;strong&gt;Windows-based&lt;/strong&gt; operating systems. Developers can use WMI to &lt;strong&gt;remotely monitor the hardware and software&lt;/strong&gt; on remote computers. Remote connections for managed code are accomplished through the &lt;a href="https://docs.microsoft.com/en-us/dotnet/api/system.management?view=dotnet-plat-ext-3.1"&gt;System.Management&lt;/a&gt; namespace. &lt;/p&gt;

&lt;p&gt;Let’s have a look at how we can go about fetching Service and Storage info remotely using WMI with the System.Management namespace.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Setting up ConnectionOptions&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;ConnectionOptions&lt;/strong&gt; class from the &lt;code&gt;System.Management&lt;/code&gt; namespace specifies all the settings required to make a WMI connection. You can create a &lt;code&gt;ConnectionOptions&lt;/code&gt; object and use it without setting any of its properties to connect to the remote computer with default connection options. In this example I have set up some of the properties based on my connection requirements for the remote servers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ConnectionOptions connection = new ConnectionOptions();
connection.Username = "User";
connection.Password = "AStrongPassword";
connection.Authority = "ntlmdomain:DOMAINNAME";
connection.EnablePrivileges = true;
connection.Authentication = AuthenticationLevel.Default;
connection.Impersonation = ImpersonationLevel.Impersonate;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Connecting to remote server using ManagementScope&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;ManagementScope&lt;/strong&gt; class represents a scope for management operations. You can initialize a new &lt;code&gt;ManagementScope&lt;/code&gt; with a specific path and then connect the scope object to a namespace on a remote computer using the &lt;code&gt;ConnectionOptions&lt;/code&gt; object.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ManagementScope scope = new ManagementScope(
                        $"\\\\{serverName}\\root\\CIMV2", connection);
scope.Connect();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Fetching management information using ManagementObjectSearcher and ObjectQuery&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;ObjectQuery&lt;/strong&gt; class is used to specify a query in the &lt;code&gt;ManagementObjectSearcher&lt;/code&gt;.&lt;br&gt;
The &lt;strong&gt;ManagementObjectSearcher&lt;/strong&gt; class is used to retrieve a collection of management objects based on a specified query. This class is one of the more commonly used entry points to retrieving management information. In this example I have created the &lt;code&gt;ObjectQuery&lt;/code&gt; to fetch &lt;strong&gt;LogicalDisk info&lt;/strong&gt; and I am using &lt;code&gt;ManagementObjectSearcher&lt;/code&gt; to get each disk’s info.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ObjectQuery query = new ObjectQuery("SELECT * FROM Win32_LogicalDisk");

ManagementObjectSearcher searcher = new ManagementObjectSearcher(scope, query);

foreach (ManagementObject managementObject in searcher.Get())
{
   Console.WriteLine("Drive Name :" +
                      managementObject["Name"].ToString());
   Console.WriteLine("Volume Size :" +
                      managementObject["Size"].ToString());
   Console.WriteLine("Free Space :" +
                      managementObject["FreeSpace"].ToString());
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Similarly we can also fetch &lt;strong&gt;Service information&lt;/strong&gt; like below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ObjectQuery query = new ObjectQuery("SELECT * FROM Win32_Service");

ManagementObjectSearcher searcher = new ManagementObjectSearcher(scope, query);

foreach (ManagementObject managementObject in searcher.Get())
{
   Console.WriteLine("Service Name :" +
                      managementObject["DisplayName"].ToString());
   Console.WriteLine("Service State :" +
                      managementObject["State"].ToString());
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Using InvokeMethod() on a ManagementObject&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We can use the method &lt;strong&gt;InvokeMethod()&lt;/strong&gt; on a &lt;code&gt;ManagementObject&lt;/code&gt; to perform operations on it. In the example below, I have created the &lt;code&gt;ObjectQuery&lt;/code&gt; to fetch the &lt;code&gt;ManagementObject&lt;/code&gt; representing a specific service whose &lt;code&gt;DisplayName&lt;/code&gt; is stored in service_name. I then call &lt;code&gt;InvokeMethod()&lt;/code&gt; to invoke the &lt;code&gt;StartService&lt;/code&gt; method with no options, which will &lt;strong&gt;start the service on the remote server&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Note: the query string must stay on one line (or use a verbatim string)
ObjectQuery query = new ObjectQuery(
    "SELECT * FROM Win32_Service WHERE DisplayName = '" + service_name + "'");

using (ManagementObjectSearcher searcher = new
                                   ManagementObjectSearcher(scope, query))
{
  foreach (ManagementObject myservice in searcher.Get())
  {
     myservice.InvokeMethod("StartService", null);
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Thus we have seen how we can use the &lt;code&gt;System.Management&lt;/code&gt; namespace and its classes to fetch management info from Windows machines and perform actions remotely. You can see the entire ASP.NET Core Web App that I have created on my GitHub profile &lt;a href="https://github.com/chaitanya-suvarna/RemoteServerStatus"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I hope you found this interesting. Thanks for reading!&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>devops</category>
      <category>webdev</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Writing unit tests for HttpClient using NUnit and Moq in C#</title>
      <dc:creator>chaitanya.dev</dc:creator>
      <pubDate>Mon, 07 Sep 2020 16:23:09 +0000</pubDate>
      <link>https://dev.to/chaitanyasuvarna/writing-unit-tests-for-httpclient-using-nunit-and-moq-in-c-37jh</link>
      <guid>https://dev.to/chaitanyasuvarna/writing-unit-tests-for-httpclient-using-nunit-and-moq-in-c-37jh</guid>
      <description>&lt;p&gt;You have a class that sends a GET request using &lt;strong&gt;HttpClient&lt;/strong&gt; and consumes the response and performs further actions. Now that you have written this class you want to go ahead and write Unit Tests using &lt;a href="https://nunit.org/"&gt;NUnit&lt;/a&gt; for this class so that you can make sure the right &lt;strong&gt;URI&lt;/strong&gt; is being called with the correct &lt;strong&gt;Request Headers&lt;/strong&gt; and &lt;strong&gt;Request Method&lt;/strong&gt;. How do you go about this?&lt;/p&gt;

&lt;p&gt;Let’s take a look at the demo project that I have created to understand this better. You can find this project on my Github Page &lt;a href="https://github.com/chaitanya-suvarna/NUnitTestForHttpClientDemo"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I have created a class called &lt;code&gt;EmployeeApiClientService&lt;/code&gt; that has a method called &lt;code&gt;GetEmployeeAsync()&lt;/code&gt;, which takes an employeeId as an input parameter, sends a GET request to the employee API with the employeeId in the URI, and returns the &lt;code&gt;Employee&lt;/code&gt; object.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class EmployeeApiClientService
{
    private readonly HttpClient employeeHttpClient;

    public EmployeeApiClientService(HttpClient httpClient)
    {
        employeeHttpClient = httpClient;
    }

    //environment specific variables should always be set in a separate config file or database. 
    //For the sake of this example I'm initialising them here.
    public static string testDatabase = "SloughDB";
    public static string environment = "TEST";

    public async Task&amp;lt;Employee&amp;gt; GetEmployeeAsync(int employeeId)
    {
        Employee employee = null;

        //Add headers
        employeeHttpClient.DefaultRequestHeaders.Add("Accept", "application/json");
        employeeHttpClient.DefaultRequestHeaders.Add("tracking-id", Guid.NewGuid().ToString());

        //Conditional Headers
        if (environment == "TEST")
        {
            employeeHttpClient.DefaultRequestHeaders.Add("test-db", testDatabase);
        }

        HttpResponseMessage response = await employeeHttpClient.GetAsync($"http://dummy.restapiexample.com/api/v1/employee/{employeeId}");
        if (response.IsSuccessStatusCode)
        {
            employee = JsonConvert.DeserializeObject&amp;lt;Employee&amp;gt;(await response.Content.ReadAsStringAsync());
        }
        return employee;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you have finished writing your class, you start writing your unit tests, but you are stuck. There’s &lt;strong&gt;no interface for the HttpClient class&lt;/strong&gt; that you can mock with &lt;a href="https://github.com/moq/moq"&gt;Moq&lt;/a&gt;, and you do not want to hit the actual endpoints during the unit tests. What do you do here?&lt;/p&gt;

&lt;p&gt;There are multiple ways to tackle this problem :&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Method 1 : Write a wrapper class for the HttpClient class&lt;/strong&gt;&lt;br&gt;
This method requires you to write a wrapper class, e.g. HttpClientWrapper, implement all of HttpClient’s methods that you use in the wrapper, and then take a dependency on this wrapper instead of HttpClient in your actual class.&lt;br&gt;
Then you can mock the wrapper class in your unit tests and verify the request details. For me, this seemed like too much work and not a neat implementation.&lt;/p&gt;
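&lt;p&gt;Just as a sketch of what that wrapper might look like (the names here are hypothetical, not from the demo project), you expose only the members your code actually uses behind an interface that Moq can implement:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//Hypothetical wrapper: expose only the HttpClient members you use
public interface IHttpClientWrapper
{
    Task&amp;lt;HttpResponseMessage&amp;gt; GetAsync(string requestUri);
}

public class HttpClientWrapper : IHttpClientWrapper
{
    private readonly HttpClient httpClient;

    public HttpClientWrapper(HttpClient httpClient)
    {
        this.httpClient = httpClient;
    }

    //Delegate straight through to the real HttpClient
    public Task&amp;lt;HttpResponseMessage&amp;gt; GetAsync(string requestUri)
        =&amp;gt; httpClient.GetAsync(requestUri);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;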

&lt;p&gt;So I looked around a bit and found out that the HttpClient has a constructor overload that takes a &lt;strong&gt;HttpMessageHandler&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;public HttpClient(HttpMessageHandler handler);&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;And that’s how I came to the second method.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Method 2 : Mock HttpMessageHandler and pass it to your HttpClient class&lt;/strong&gt;&lt;br&gt;
HttpMessageHandler has one protected method, &lt;code&gt;SendAsync()&lt;/code&gt;, which is the underlying method called by all of HttpClient’s GET/POST/PATCH/PUT async methods. All we have to do is mock this class and set up SendAsync to accept and return our desired values as per our test cases. We can use &lt;strong&gt;Moq&lt;/strong&gt; for this purpose.&lt;/p&gt;

&lt;p&gt;Moq is a &lt;strong&gt;Mocking Framework&lt;/strong&gt; used in .NET to isolate units to be tested from the underlying dependencies. We can create a Mock for HttpMessageHandler and pass it to the overloaded constructor for HttpClient.&lt;/p&gt;

&lt;p&gt;Let’s see how we can implement this.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Mock HttpMessageHandler&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We will create a mock object of HttpMessageHandler using Moq and pass it to the HttpClient class constructor and pass this HttpClient object to our EmployeeApiClientService constructor in the Test &lt;strong&gt;SetUp&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;EmployeeApiClientService employeeApiClientService;
Mock&amp;lt;HttpMessageHandler&amp;gt; httpMessageHandlerMock;

[SetUp]
public void setUp()
{
    httpMessageHandlerMock = new Mock&amp;lt;HttpMessageHandler&amp;gt;(MockBehavior.Strict);
    HttpClient httpClient = new HttpClient(httpMessageHandlerMock.Object);
    employeeApiClientService = new EmployeeApiClientService(httpClient);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Setup SendAsync method&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Moq does not allow us to set up the &lt;code&gt;SendAsync()&lt;/code&gt; method directly, because it is &lt;strong&gt;protected&lt;/strong&gt; in the HttpMessageHandler class and cannot be accessed outside the class.&lt;br&gt;
We can use the &lt;strong&gt;Moq.Protected&lt;/strong&gt; API, which gives us some additional methods on the mocked object, letting us access protected members by name via the &lt;code&gt;.Protected()&lt;/code&gt; method.&lt;br&gt;
We will now &lt;strong&gt;Setup&lt;/strong&gt; the &lt;code&gt;SendAsync()&lt;/code&gt; method of the mocked HttpMessageHandler object so that it returns status code 200 with an Employee object in JSON format, for which we use the &lt;code&gt;ReturnsAsync()&lt;/code&gt; method. We’ll also mark this setup with &lt;code&gt;Verifiable()&lt;/code&gt; so that we can verify the number of calls to this method, the request details etc. in the assertion section.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;httpMessageHandlerMock.Protected().Setup&amp;lt;Task&amp;lt;HttpResponseMessage&amp;gt;&amp;gt;(
    "SendAsync",
    ItExpr.IsAny&amp;lt;HttpRequestMessage&amp;gt;(),
    ItExpr.IsAny&amp;lt;CancellationToken&amp;gt;()
    ).ReturnsAsync(new HttpResponseMessage()
    {
       StatusCode = HttpStatusCode.OK,
       Content = new StringContent(JsonConvert.SerializeObject(new Employee()), Encoding.UTF8, "application/json")
    }).Verifiable();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Verify the call to SendAsync&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In the assertion section of our unit test, we’ll verify that the &lt;code&gt;SendAsync()&lt;/code&gt; method is called only &lt;strong&gt;once&lt;/strong&gt;, that it is called with a &lt;strong&gt;GET&lt;/strong&gt; request, that the request targets the expected &lt;strong&gt;URI&lt;/strong&gt;, and that the request contains all the required &lt;strong&gt;headers&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;httpMessageHandlerMock.Protected().Verify(
    "SendAsync",
    Times.Exactly(1), 
    ItExpr.Is&amp;lt;HttpRequestMessage&amp;gt;(req =&amp;gt;
    req.Method == HttpMethod.Get  
    &amp;amp;&amp;amp; req.RequestUri.ToString() == targetUri // verify the RequestUri is as expected
    &amp;amp;&amp;amp; req.Headers.GetValues("Accept").FirstOrDefault() == "application/json" 
    &amp;amp;&amp;amp; req.Headers.GetValues("tracking-id").FirstOrDefault() != null 
    &amp;amp;&amp;amp; (environment.Equals("TEST") ? 
                      req.Headers.GetValues("test-db").FirstOrDefault() == testDatabase : 
                      !req.Headers.Contains("test-db")) // parenthesise the ?: so it doesn't swallow the whole condition
    ),
    ItExpr.IsAny&amp;lt;CancellationToken&amp;gt;()
    );
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Complete Test Class&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The complete test class with one test case to test the Request is created correctly looks something like this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[TestFixture]
class EmployeeApiClientServiceTests
{
    EmployeeApiClientService employeeApiClientService;
    Mock&amp;lt;HttpMessageHandler&amp;gt; httpMessageHandlerMock;

    //environment specific variables should always be set in a separate config file or database. 
    //For the sake of this example I'm initialising them here.
    string testDatabase = "SloughDB";
    string environment = "TEST";

    [SetUp]
    public void setUp()
    {
        httpMessageHandlerMock = new Mock&amp;lt;HttpMessageHandler&amp;gt;(MockBehavior.Strict);
        HttpClient httpClient = new HttpClient(httpMessageHandlerMock.Object);
        employeeApiClientService = new EmployeeApiClientService(httpClient);
    }


    [Test]
    public async Task GivenICallGetEmployeeAsyncWithValidEmployeeId_ThenTheEmployeeApiIsCalledWithCorrectRequestHeadersAsync()
    {
        //Arrange
        int employeeId = 1;
        string targetUri = $"http://dummy.restapiexample.com/api/v1/employee/{employeeId}";
        //Setup sendAsync method for HttpMessage Handler Mock
        httpMessageHandlerMock.Protected().Setup&amp;lt;Task&amp;lt;HttpResponseMessage&amp;gt;&amp;gt;(
            "SendAsync",
            ItExpr.IsAny&amp;lt;HttpRequestMessage&amp;gt;(),
            ItExpr.IsAny&amp;lt;CancellationToken&amp;gt;()
            )
            .ReturnsAsync(new HttpResponseMessage()
            {
                StatusCode = HttpStatusCode.OK,
                Content = new StringContent(JsonConvert.SerializeObject(new Employee()), Encoding.UTF8, "application/json")
            })
            .Verifiable();

        //Act
        var employee = await employeeApiClientService.GetEmployeeAsync(employeeId);

        //Assert
        Assert.IsInstanceOf&amp;lt;Employee&amp;gt;(employee);

        httpMessageHandlerMock.Protected().Verify(
            "SendAsync",
            Times.Exactly(1), // verify number of times SendAsync is called
            ItExpr.Is&amp;lt;HttpRequestMessage&amp;gt;(req =&amp;gt;
            req.Method == HttpMethod.Get  // verify the HttpMethod for request is GET
            &amp;amp;&amp;amp; req.RequestUri.ToString() == targetUri // verify the RequestUri is as expected
            &amp;amp;&amp;amp; req.Headers.GetValues("Accept").FirstOrDefault() == "application/json" //Verify Accept header
            &amp;amp;&amp;amp; req.Headers.GetValues("tracking-id").FirstOrDefault() != null //Verify tracking-id header is added
            &amp;amp;&amp;amp; (environment.Equals("TEST") ? req.Headers.GetValues("test-db").FirstOrDefault() == testDatabase : //Verify test-db header is added only for TEST environment
                                            !req.Headers.Contains("test-db")) // parenthesise the ?: so it doesn't swallow the whole condition
            ),
            ItExpr.IsAny&amp;lt;CancellationToken&amp;gt;()
            );
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Thus we have seen how we can easily write unit tests for &lt;strong&gt;HttpClient&lt;/strong&gt; calls, verifying various aspects of the requests and how your class processes the responses. This approach also works for POST/PUT/PATCH requests, as all of these HttpClient methods use HttpMessageHandler’s &lt;code&gt;SendAsync()&lt;/code&gt; method under the hood.&lt;/p&gt;

&lt;p&gt;I hope you found this interesting. Thanks for reading!&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>testing</category>
      <category>tutorial</category>
      <category>moq</category>
    </item>
    <item>
      <title>SQL Server Service Broker for Asynchronous Applications</title>
      <dc:creator>chaitanya.dev</dc:creator>
      <pubDate>Sat, 05 Sep 2020 09:41:18 +0000</pubDate>
      <link>https://dev.to/chaitanyasuvarna/sql-server-service-broker-for-asynchronous-applications-2ghg</link>
      <guid>https://dev.to/chaitanyasuvarna/sql-server-service-broker-for-asynchronous-applications-2ghg</guid>
      <description>&lt;p&gt;Imagine an online retail web application with an order fulfilment system that comes into play whenever a customer wants to place an order. This system might comprise different business processes such as payment processing, a CRM, inventory management, shipping etc., each with its own database. When one of these systems is down, customers may not be able to place an order unless there is a way to queue their order request and guarantee that it will be delivered to the subsequent systems for processing. This is where an &lt;strong&gt;asynchronous queueing&lt;/strong&gt; and &lt;strong&gt;messaging system&lt;/strong&gt; like &lt;strong&gt;SQL Server Service Broker&lt;/strong&gt; comes into the picture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SQL Server Service Broker&lt;/strong&gt; provides native support for messaging and queuing in the SQL Server Database Engine and Azure SQL Managed Instance. Application developers can use Service Broker to &lt;strong&gt;distribute data workloads across several databases&lt;/strong&gt; without programming complex communication and messaging internals. Service Broker ensures that all tasks are managed in the context of transactions to assure &lt;strong&gt;reliability&lt;/strong&gt; and technical &lt;strong&gt;consistency&lt;/strong&gt;. It’s part of SQL Server so if you’re already utilising SQL Server for your project, then there’s &lt;strong&gt;no additional licensing required&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Now that we know what SQL Server Service Broker is, let’s look at how we can use it in our order fulfilment system. We can leverage Service Broker to exchange data between the business processes. Even if there is a payment processing outage, a customer’s order will be accepted and a request for payment processing will be &lt;strong&gt;queued&lt;/strong&gt; and held as a &lt;strong&gt;message&lt;/strong&gt;. The system can continue accepting other orders and queueing payment request messages. Once the payment processing system is back online, it’ll &lt;strong&gt;dequeue and process&lt;/strong&gt; the payment request messages. Once that’s done, the subsequent CRM systems will be updated with the sale information and the shipping systems can ship the products to the customer who placed the order. Service Broker ensures that the payment processing request reaches the destination system, and as the processing task is done in the context of a transaction, it ensures consistency.&lt;/p&gt;

&lt;p&gt;Let’s try and set up a simple Service Broker queue and see how we can send or receive messages through Service Broker.&lt;/p&gt;

&lt;p&gt;Things you’ll need:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;SQL Server&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Developer Edition available &lt;a href="https://www.microsoft.com/en-gb/sql-server/sql-server-downloads"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Microsoft Docs&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Documentation for reference &lt;a href="https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/sql-server-service-broker?view=sql-server-ver15"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Enabling Service Broker in database&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Let’s start off by creating a database and enabling Service Broker for it.&lt;br&gt;
Note: The &lt;code&gt;ROLLBACK IMMEDIATE&lt;/code&gt; option ensures no users remain connected when we enable Service Broker by rolling back any open transactions; this is acceptable here because we are working in our &lt;strong&gt;DEV&lt;/strong&gt; database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE DATABASE PaymentProcessingDB
GO
ALTER DATABASE PaymentProcessingDB
      SET ENABLE_BROKER
      WITH ROLLBACK IMMEDIATE;
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
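
&lt;p&gt;You can confirm that Service Broker is enabled by querying the &lt;code&gt;sys.databases&lt;/code&gt; catalog view (a quick sanity check, assuming the database name created above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;--is_broker_enabled = 1 means Service Broker is active for the database
SELECT name, is_broker_enabled
FROM sys.databases
WHERE name = N'PaymentProcessingDB';
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;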



&lt;h2&gt;
  
  
  &lt;strong&gt;Create Message Types&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A message type is the basic object of the messaging infrastructure; it defines what kind of messages can and cannot be sent.&lt;br&gt;
You set this by defining the ‘VALIDATION’ of the message type as one of &lt;code&gt;EMPTY&lt;/code&gt;, &lt;code&gt;NONE&lt;/code&gt;, &lt;code&gt;WELL_FORMED_XML&lt;/code&gt; or &lt;code&gt;VALID_XML WITH SCHEMA COLLECTION&lt;/code&gt;.&lt;br&gt;
We’ll create a &lt;strong&gt;PaymentRequestMessage&lt;/strong&gt; which will be sent by the service initiating the &lt;code&gt;DIALOG&lt;/code&gt; and a &lt;strong&gt;PaymentResponseMessage&lt;/strong&gt; which will be sent back to the &lt;strong&gt;Initiator Service&lt;/strong&gt; by the &lt;strong&gt;Target Service&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE MESSAGE TYPE
       [PaymentRequestMessage]
       VALIDATION = WELL_FORMED_XML;
CREATE MESSAGE TYPE
       [PaymentResponseMessage]
       VALIDATION = WELL_FORMED_XML;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Create Contract&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;CONTRACT&lt;/code&gt; defines the kinds of messages that can be sent by the INITIATOR – &lt;em&gt;the party that sends the first message&lt;/em&gt; – and the TARGET – &lt;em&gt;the party that receives the message and sends a response&lt;/em&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE CONTRACT [PaymentProcessContract]
      ([PaymentRequestMessage]
       SENT BY INITIATOR,
       [PaymentResponseMessage]
       SENT BY TARGET
      );
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Create Queues&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A queue is a storage space where messages reside. You can also query a queue to see what messages it currently contains.&lt;br&gt;
We will create two queues: the &lt;strong&gt;TargetQueue&lt;/strong&gt; will hold messages sent by the INITIATOR to the TARGET, and the &lt;strong&gt;InitiatorQueue&lt;/strong&gt; will hold messages sent by the TARGET back to the INITIATOR.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE QUEUE PaymentProcess_TargetQueue;
GO
CREATE QUEUE PaymentProcess_InitiatorQueue;
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Create Services&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A service is created on top of a queue and defines which contract must be adhered to when sending messages to that queue. A queue can have multiple services adding messages to it, each adhering to a different contract.&lt;br&gt;
For our example we are going to create two services, one for the Target queue and one for the Initiator queue, both adhering to the PaymentProcessContract.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE SERVICE
       [PaymentProcess_TargetService]
       ON QUEUE PaymentProcess_TargetQueue
       ([PaymentProcessContract]);
CREATE SERVICE
       [PaymentProcess_InitiatorService]
       ON QUEUE PaymentProcess_InitiatorQueue
       ([PaymentProcessContract]);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
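
&lt;p&gt;Assuming the objects above were created successfully, you can verify them through the &lt;code&gt;sys.services&lt;/code&gt; and &lt;code&gt;sys.service_queues&lt;/code&gt; catalog views (a quick verification step, not part of the message flow):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;--List the services and queues we just created
SELECT name FROM sys.services;
SELECT name FROM sys.service_queues;
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;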



&lt;h2&gt;
  
  
  &lt;strong&gt;Send a Request Message&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To send a message from the InitiatorService to the TargetService we will start a &lt;code&gt;DIALOG&lt;/code&gt; and specify the &lt;code&gt;CONTRACT&lt;/code&gt; that will be used for it. The &lt;code&gt;DIALOG&lt;/code&gt; is identified by a unique identifier (GUID), which will be used for all the conversations that will be sent to the TargetQueue.&lt;/p&gt;

&lt;p&gt;Once the &lt;code&gt;DIALOG&lt;/code&gt; has started, we can send messages on the &lt;code&gt;CONVERSATION&lt;/code&gt; using that same identifier, specifying the &lt;code&gt;MESSAGE TYPE&lt;/code&gt; and the message body, which for our example contains a PaymentProcessingRequest number.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DECLARE @conversation_handle UNIQUEIDENTIFIER;
DECLARE @message_body XML;

BEGIN TRANSACTION;


--Begin conversation
BEGIN DIALOG @conversation_handle
     FROM SERVICE [PaymentProcess_InitiatorService]
     TO SERVICE N'PaymentProcess_TargetService'
     ON CONTRACT [PaymentProcessContract]
     WITH ENCRYPTION = OFF;

SELECT @message_body = N'&amp;lt;PaymentRequestMessage&amp;gt;PAYREQ001&amp;lt;/PaymentRequestMessage&amp;gt;';

--send message on conversation using the same GUID
SEND ON CONVERSATION @conversation_handle
     MESSAGE TYPE [PaymentRequestMessage] 
(@message_body);

COMMIT TRANSACTION;
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You might have noticed that queues are never mentioned when sending messages on a &lt;code&gt;CONVERSATION&lt;/code&gt;; a &lt;code&gt;DIALOG&lt;/code&gt; is started between two services, and each service logically defines the contract to be adhered to and the queue its messages are delivered to.&lt;br&gt;
Once the message has been sent to the TargetService, we can view it in the TargetQueue with a simple select statement as below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT *, CAST(message_body AS XML) AS message_body_xml
FROM PaymentProcess_TargetQueue
;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Receiving and Processing a Request Message&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The initiator has sent the PaymentRequestMessage to the Payment processing system and it is now free to accept other orders. The Payment processing system will receive messages from the TargetQueue &lt;strong&gt;asynchronously&lt;/strong&gt; and respond back to the &lt;code&gt;INITIATOR&lt;/code&gt; once the processing is done.&lt;/p&gt;

&lt;p&gt;To process the request message, we will have to first &lt;code&gt;RECEIVE&lt;/code&gt; the message from the TargetQueue. Once the message is received we can enter the logic to process the request message, but for simplicity we are just going to display the message contents here. When we receive the message body, we also receive the &lt;strong&gt;conversation handle&lt;/strong&gt; and &lt;strong&gt;message type&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;We can use this &lt;strong&gt;conversation handle&lt;/strong&gt; to send a response message on the same conversation, which will arrive in the initiator queue. We can also &lt;strong&gt;check the message type&lt;/strong&gt; we received and process the message accordingly. As mentioned earlier, in this example we are just going to respond with a PaymentResponseMessage containing the PaymentProcessingRequest number, indicating we have finished processing the payment request.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Receive the request and send a reply
DECLARE @conversation_handle UNIQUEIDENTIFIER;
DECLARE @message_body XML;
DECLARE @message_type_name sysname;

BEGIN TRANSACTION;

WAITFOR
( RECEIVE TOP(1)
    @conversation_handle = conversation_handle,
    @message_body = message_body,
    @message_type_name = message_type_name
  FROM PaymentProcess_TargetQueue
), TIMEOUT 1000;

SELECT @message_body AS ReceivedPaymentRequestMsg;

--check message type
IF (@message_type_name = N'PaymentRequestMessage')
BEGIN
     DECLARE @reply_message_body XML;


--You can enter the logic to process the message here
     SELECT @reply_message_body = N'&amp;lt;PaymentResponseMessage&amp;gt;PAYREQ001&amp;lt;/PaymentResponseMessage&amp;gt;';

     SEND ON CONVERSATION @conversation_handle
          MESSAGE TYPE [PaymentResponseMessage]
     (@reply_message_body);
END

SELECT @reply_message_body AS PaymentResponseMessage;

COMMIT TRANSACTION;
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can then check for the PaymentResponseMessage sent to the InitiatorQueue by executing the below select query on the InitiatorQueue.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Check for the reply message in the initiator queue
SELECT *, CAST(message_body AS XML) AS message_body_xml
FROM PaymentProcess_InitiatorQueue;
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Understanding the relationship between DIALOG, CONTRACT, SERVICE and QUEUE&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Because the response message is sent on the same &lt;code&gt;DIALOG&lt;/code&gt; (using the same conversation handle), the contract &lt;strong&gt;PaymentProcessContract&lt;/strong&gt; specified in the &lt;code&gt;DIALOG&lt;/code&gt; is used to send the message.&lt;br&gt;
The PaymentProcessContract specifies that a &lt;strong&gt;PaymentResponseMessage&lt;/strong&gt; is sent by the &lt;code&gt;Target&lt;/code&gt; to the &lt;code&gt;Initiator&lt;/code&gt;, and the &lt;code&gt;DIALOG&lt;/code&gt; specifies that the conversation initiator is the &lt;strong&gt;PaymentProcess_InitiatorService&lt;/strong&gt;.&lt;br&gt;
The PaymentProcess_InitiatorService logically represents the &lt;strong&gt;PaymentProcess_InitiatorQueue&lt;/strong&gt;, and thus the response message arrives at the InitiatorQueue.&lt;/p&gt;

&lt;p&gt;I hope this helps you understand the &lt;strong&gt;relationship&lt;/strong&gt; between &lt;code&gt;DIALOG&lt;/code&gt;, &lt;code&gt;CONTRACT&lt;/code&gt;, &lt;code&gt;SERVICE&lt;/code&gt; and &lt;code&gt;QUEUE&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Ending the Conversation&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Now that we have sent the PaymentResponseMessage back to the InitiatorQueue we are going to use the same method to RECEIVE the message from the queue.&lt;br&gt;
If the received message is of the type PaymentResponseMessage we will use &lt;code&gt;END CONVERSATION&lt;/code&gt; with the &lt;strong&gt;conversation handle&lt;/strong&gt; received from the response message.&lt;/p&gt;

&lt;p&gt;Idle conversations occupy space and are of no further use, so it is good practice to &lt;code&gt;END CONVERSATION&lt;/code&gt; once you have received the desired response, clearing the conversation out.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DECLARE @message_body XML;
DECLARE @conversation_handle UNIQUEIDENTIFIER;
DECLARE @message_type_name sysname;

BEGIN TRANSACTION;

WAITFOR
( RECEIVE TOP(1)
    @conversation_handle = conversation_handle,
    @message_body = CAST(message_body AS XML),
    @message_type_name = message_type_name
  FROM PaymentProcess_InitiatorQueue
), TIMEOUT 1000;

IF (@message_type_name = N'PaymentResponseMessage')
BEGIN
    END CONVERSATION @conversation_handle;
END

SELECT @message_body AS ReceivedPaymentResponseMessage;

COMMIT TRANSACTION;
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
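
&lt;p&gt;If you want to see which conversations are still open at any point, the &lt;code&gt;sys.conversation_endpoints&lt;/code&gt; catalog view lists every conversation endpoint and its state (a diagnostic query, not part of the message flow):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;--state_desc shows values such as CONVERSING, DISCONNECTED_INBOUND or CLOSED
SELECT conversation_handle, state_desc, far_service
FROM sys.conversation_endpoints;
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;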



&lt;p&gt;We have now ended the conversation on the Initiator side; this sends an EndDialog message to the TargetQueue. If you check the TargetQueue using the below query, you should see the EndDialog message.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;--Check for EndDialog message in TargetQueue
SELECT *, CAST(message_body AS XML) AS message_body_xml
FROM PaymentProcess_TargetQueue;
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We also need to end the conversation from the Target side. We will do something similar to what we did with the InitiatorQueue, but here we’ll look for a message type of EndDialog rather than PaymentResponseMessage.&lt;br&gt;
Ending the conversation here simply cleans it up; no message is sent back to the initiator.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Receive the End Dialog and clean up
DECLARE @conversation_handle UNIQUEIDENTIFIER;
DECLARE @message_body XML;
DECLARE @message_type_name sysname;

BEGIN TRANSACTION;

WAITFOR
( RECEIVE TOP(1)
    @conversation_handle = conversation_handle,
    @message_body = CAST(message_body AS XML),
    @message_type_name = message_type_name
  FROM PaymentProcess_TargetQueue
), TIMEOUT 1000;

--check if message type is EndDialog
IF (@message_type_name = N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog')
BEGIN
     END CONVERSATION @conversation_handle;
END

COMMIT TRANSACTION;
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this we have successfully ended the &lt;code&gt;CONVERSATION&lt;/code&gt; or &lt;code&gt;DIALOG&lt;/code&gt; associated with the conversation_handle. If we look at both queues now, they should be empty unless a new PaymentRequestMessage is being processed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Check for a message in the target queue
SELECT *, CAST(message_body AS XML) AS message_body_xml
FROM PaymentProcess_TargetQueue;
GO

-- Check for a message in the initiator queue
SELECT *, CAST(message_body AS XML) AS message_body_xml
FROM PaymentProcess_InitiatorQueue;
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This completes a message exchange between two systems that has happened entirely &lt;strong&gt;asynchronously&lt;/strong&gt;. PaymentRequests can be added to the TargetQueue at any point in time, and the Payment Processing System can dequeue and process messages at any point in time and respond with the results.&lt;/p&gt;

&lt;p&gt;Service Broker can also be used for batch processing of data wherein you can queue messages during business hours and process the whole chunk of messages during off-business hours or process it periodically during the day. One of the advantages of using SQL Server Service Broker is that as the messages and queues reside in the database, whenever you backup and restore the database all of your messages and queues are retained.&lt;/p&gt;
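
&lt;p&gt;If you want messages processed automatically rather than by running the RECEIVE scripts manually, Service Broker also supports &lt;strong&gt;internal activation&lt;/strong&gt;: a stored procedure is launched whenever messages arrive on a queue. As a sketch (the procedure name &lt;code&gt;usp_ProcessPaymentRequest&lt;/code&gt; is hypothetical; it would contain RECEIVE logic similar to the scripts above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;--Attach an activation procedure to the target queue
ALTER QUEUE PaymentProcess_TargetQueue
    WITH ACTIVATION
    ( STATUS = ON,
      PROCEDURE_NAME = usp_ProcessPaymentRequest, --hypothetical procedure
      MAX_QUEUE_READERS = 5,  --up to 5 concurrent instances
      EXECUTE AS SELF );
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;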

&lt;p&gt;Thus, we have had a glimpse of both queueing and messaging technology that SQL Server Service Broker offers and also the basic components required for a simple asynchronous application. Service Broker has a lot more to offer and you can get more information &lt;a href="https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/sql-server-service-broker?view=sql-server-ver15"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I hope you found this interesting. Thank you for reading!&lt;/p&gt;

</description>
      <category>database</category>
      <category>sql</category>
      <category>servicebroker</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Using SQLite database with EntityFramework Core 3</title>
      <dc:creator>chaitanya.dev</dc:creator>
      <pubDate>Sat, 29 Aug 2020 18:59:47 +0000</pubDate>
      <link>https://dev.to/chaitanyasuvarna/using-sqlite-database-with-entityframework-core-3-10ad</link>
      <guid>https://dev.to/chaitanyasuvarna/using-sqlite-database-with-entityframework-core-3-10ad</guid>
      <description>&lt;p&gt;Recently I have been working with .NET Core web applications which used Entity Framework with a SQL Server database. I wanted to figure out how to use EF Core 3 with a portable database like SQLite. This is how I got the idea for my latest pet project: a simple .NET Core Web API that uses EF Core 3 to store and retrieve data in a SQLite database.&lt;br&gt;
With this blog post I aim to demonstrate how easy it is to create this project with Visual Studio 2019.&lt;/p&gt;

&lt;p&gt;You can find my source code &lt;a href="https://github.com/chaitanya-suvarna/EFCore-SQLite-Demo/"&gt;here&lt;/a&gt; at my GitHub.&lt;/p&gt;

&lt;p&gt;Things required for this project :&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Visual Studio 2019&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Community edition available &lt;a href="https://visualstudio.microsoft.com/downloads/"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Postman&lt;/strong&gt; 

&lt;ul&gt;
&lt;li&gt;Download available &lt;a href="https://www.postman.com/downloads/"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Microsoft Docs&lt;/strong&gt; 

&lt;ul&gt;
&lt;li&gt;There’s no better reference than the &lt;a href="https://docs.microsoft.com/en-us/ef/core/get-started/?tabs=netcore-cli"&gt;official documentation&lt;/a&gt;!&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Creating New Project&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;On Visual Studio, choose to create a new project and in the templates section select &lt;strong&gt;ASP.NET Core Web Application&lt;/strong&gt; and click Next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--10j6vKwG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chaitanyasuvarna.files.wordpress.com/2020/07/chooseproject.png%3Fw%3D725" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--10j6vKwG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chaitanyasuvarna.files.wordpress.com/2020/07/chooseproject.png%3Fw%3D725" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the next screen, give a name to your Project and click &lt;strong&gt;Create&lt;/strong&gt;.&lt;br&gt;
In the final screen, choose &lt;strong&gt;API&lt;/strong&gt; template and make sure &lt;strong&gt;ASP.NET Core 3.1&lt;/strong&gt; is selected in the drop-down menu.&lt;/p&gt;

&lt;p&gt;Once you click Create you should see your project in the Solution Explorer.&lt;br&gt;
Get rid of the sample WeatherForecast Controller and class as we will be creating our own.&lt;br&gt;
Add three new folders called &lt;strong&gt;Contexts&lt;/strong&gt;, &lt;strong&gt;Controllers&lt;/strong&gt; and &lt;strong&gt;Entities&lt;/strong&gt; to your project.&lt;br&gt;
Once done, your project structure should look like this.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Z88rK5GV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chaitanyasuvarna.files.wordpress.com/2020/07/added-folders-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Z88rK5GV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chaitanyasuvarna.files.wordpress.com/2020/07/added-folders-1.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Adding EF Core and related packages&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Now we will add the EF Core NuGet package for SQLite via the &lt;strong&gt;Package Manager Console&lt;/strong&gt; using the command below. This will also pull in the EntityFramework Core package as a dependency.&lt;br&gt;
&lt;strong&gt;Tools &amp;gt; NuGet Package Manager &amp;gt; Package Manager Console&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Install-Package Microsoft.EntityFrameworkCore.Sqlite&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Once installation is complete you should see &lt;strong&gt;Microsoft.EntityFrameworkCore.Sqlite&lt;/strong&gt; &amp;amp; &lt;strong&gt;Microsoft.EntityFrameworkCore&lt;/strong&gt; in your Project at Dependencies -&amp;gt; Packages .&lt;br&gt;
Now we are all set to start creating our entities and the DbContext.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Creating the Model&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;EF Core can read and write &lt;strong&gt;entity instances&lt;/strong&gt; from/to the database, and if you’re using a relational database, EF Core can create tables for your entities via &lt;strong&gt;migrations&lt;/strong&gt;.&lt;br&gt;
By convention, each entity type will be set up to map to a database table with the &lt;strong&gt;same name as the DbSet property&lt;/strong&gt; that exposes the entity. &lt;/p&gt;

&lt;p&gt;For our example, let’s create an Entity called &lt;strong&gt;Athlete&lt;/strong&gt; that’ll be used to store an Athlete’s information.&lt;br&gt;
Add a new class file called &lt;strong&gt;Athlete.cs&lt;/strong&gt; in the &lt;strong&gt;Entities&lt;/strong&gt; folder in our project. The class will have 3 properties &lt;strong&gt;Name&lt;/strong&gt;, &lt;strong&gt;Age&lt;/strong&gt; and &lt;strong&gt;Sport&lt;/strong&gt; along with an identifier called &lt;strong&gt;Id&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using System.ComponentModel.DataAnnotations;

namespace SQLiteDemoWebApi.Entities
{
    public class Athlete
    {
        [Key]
        public int id { get; set; }

        [Required]
        [MaxLength(100)]
        public string Name { get; set; }

        public int Age { get; set; }

        [Required]
        [MaxLength(100)]
        public string Sport { get; set; }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let’s create a &lt;strong&gt;DbContext&lt;/strong&gt; which will be used by EF Core to gather details about the application’s entity types that are exposed in DbSet properties in the context and how they map to a database schema.&lt;/p&gt;

&lt;p&gt;Add a new class file called &lt;strong&gt;AthleteSQLiteDbContext.cs&lt;/strong&gt; in the &lt;strong&gt;Contexts&lt;/strong&gt; folder in our project which will expose the Athlete entity as &lt;strong&gt;Athletes DbSet&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using Microsoft.EntityFrameworkCore;
using SQLiteDemoWebApi.Entities;

namespace SQLiteDemoWebApi.Contexts
{
    public class AthleteSQLiteDbContext : DbContext
    {
        public DbSet&amp;lt;Athlete&amp;gt; Athletes { get; set; }

        public AthleteSQLiteDbContext(DbContextOptions&amp;lt;AthleteSQLiteDbContext&amp;gt; options) : base(options)
        { }
        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            base.OnModelCreating(modelBuilder);
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Adding Migrations and Updating the Database&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;Add-Migration&lt;/strong&gt; command scaffolds a migration to create the initial set of tables for the model, i.e. the details we have specified in the DbContext and the corresponding entities.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Update-Database&lt;/strong&gt; command creates the database and applies the new migration to it.&lt;/p&gt;

&lt;p&gt;To execute these commands we need to install the &lt;a href="https://docs.microsoft.com/en-us/ef/core/miscellaneous/cli/powershell"&gt;PMC tools for EF Core&lt;/a&gt;. This can be done by executing the below command on the Package Manager Console.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Install-Package Microsoft.EntityFrameworkCore.Tools&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Once the package is installed you should see &lt;strong&gt;Microsoft.EntityFrameworkCore.Tools&lt;/strong&gt; added to your &lt;strong&gt;Dependencies&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Before we start creating migrations, we need to add a &lt;strong&gt;connection string&lt;/strong&gt; for our SQLite database. We’ll be adding this connection string to the &lt;strong&gt;appsettings.json&lt;/strong&gt; file. For the sake of this project, we’ll be creating our SQLite database at the root directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Logging": {...},
  "ConnectionStrings": {
    "SQLite": "Data Source=Athlete.db"
  },
  "AllowedHosts": "*"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We also need to make sure that &lt;strong&gt;AthleteSQLiteDbContext&lt;/strong&gt; is added to the container at startup. This can be done by adding it to the services collection in the &lt;strong&gt;ConfigureServices()&lt;/strong&gt; method in &lt;strong&gt;Startup.cs&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;UseSqlite&lt;/strong&gt; method requires the connection string which we fetch from appsettings.json using the &lt;strong&gt;Configuration&lt;/strong&gt; class.&lt;/p&gt;

&lt;p&gt;The ConfigureServices method should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();
    services.AddDbContext&amp;lt;AthleteSQLiteDbContext&amp;gt;(o =&amp;gt;
                o.UseSqlite(Configuration.GetConnectionString("SQLite")));
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can now add our initial migration to the database with the &lt;strong&gt;Add-Migration&lt;/strong&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Add-Migration InitialMigration&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This will create a &lt;strong&gt;Migrations&lt;/strong&gt; folder in your project.&lt;br&gt;
Three files are added to your project under the &lt;strong&gt;Migrations&lt;/strong&gt; directory:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;XXXXXXXXXXXXXX_InitialMigration.cs&lt;/strong&gt; – The main migrations file. Contains the operations necessary to apply the migration (in Up) and to revert it (in Down).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;XXXXXXXXXXXXXX_InitialMigration.Designer.cs&lt;/strong&gt; – The migrations metadata file. Contains information used by EF.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AthleteSQLiteDbContextModelSnapshot.cs&lt;/strong&gt; – A snapshot of your current model. Used to determine what changed when adding the next migration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The timestamp in the filename helps keep them ordered chronologically so you can see the progression of changes.&lt;/p&gt;
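
&lt;p&gt;To get a feel for what the scaffolded migration does, the &lt;code&gt;Up&lt;/code&gt; method in the main migrations file will look roughly like this (an illustrative sketch of what EF Core generates for the Athlete entity; your exact output may differ slightly):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;protected override void Up(MigrationBuilder migrationBuilder)
{
    // Creates the Athletes table mapped from the Athlete entity
    migrationBuilder.CreateTable(
        name: "Athletes",
        columns: table =&amp;gt; new
        {
            id = table.Column&amp;lt;int&amp;gt;(nullable: false)
                .Annotation("Sqlite:Autoincrement", true),
            Name = table.Column&amp;lt;string&amp;gt;(maxLength: 100, nullable: false),
            Age = table.Column&amp;lt;int&amp;gt;(nullable: false),
            Sport = table.Column&amp;lt;string&amp;gt;(maxLength: 100, nullable: false)
        },
        constraints: table =&amp;gt; table.PrimaryKey("PK_Athletes", x =&amp;gt; x.id));
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;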

&lt;p&gt;&lt;strong&gt;Update-Database&lt;/strong&gt; command can be used to apply migrations to a database. While productive for local development and testing of migrations, this approach isn’t ideal for managing production databases. For our example, we’ll be using this command.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Update-Database&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You should see a newly created &lt;strong&gt;Athlete.db&lt;/strong&gt; file in your Project directory after the update-database command is executed successfully.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Creating a Controller for the API&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Now that we have the database and its connectivity set up, all that remains is to create a controller that uses EF Core to fetch and update data in the SQLite database.&lt;br&gt;
As part of this blog post, I do not want to get into the details of creating an API controller,&lt;br&gt;
so we will use the Visual Studio scaffolding magic to generate a controller for us that uses the AthleteSQLiteDbContext to GET, PUT, POST and DELETE Athlete data.&lt;/p&gt;

&lt;p&gt;To do this, Right-Click on the &lt;strong&gt;Controllers&lt;/strong&gt; directory in your project -&amp;gt; Add -&amp;gt; Controller.&lt;br&gt;
Select &lt;strong&gt;API Controller with Actions using EntityFramework&lt;/strong&gt; and click OK.&lt;br&gt;
Select the &lt;strong&gt;Model: Athlete&lt;/strong&gt; and &lt;strong&gt;DbContext: AthleteSQLiteDbContext&lt;/strong&gt; from the drop-down and click Add.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SZCI2Wlf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chaitanyasuvarna.files.wordpress.com/2020/07/apicontroller-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SZCI2Wlf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chaitanyasuvarna.files.wordpress.com/2020/07/apicontroller-1.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1Z1_4ebt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chaitanyasuvarna.files.wordpress.com/2020/07/controller-with-dbcontext-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1Z1_4ebt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chaitanyasuvarna.files.wordpress.com/2020/07/controller-with-dbcontext-1.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You should now see an &lt;strong&gt;AthletesController.cs&lt;/strong&gt; file in your &lt;strong&gt;Controllers&lt;/strong&gt; directory. No changes need to be made in this controller class, except make sure the controller’s route is set to &lt;strong&gt;"api/Athletes"&lt;/strong&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Test the API&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To test the API we’ll use &lt;strong&gt;Postman&lt;/strong&gt; so that we can send requests to the API endpoints.&lt;br&gt;
We’ll use the &lt;strong&gt;App URL&lt;/strong&gt; that is mentioned in the &lt;strong&gt;Debug&lt;/strong&gt; section which can be found by Right-click on Project -&amp;gt; Properties -&amp;gt; Debug.&lt;/p&gt;

&lt;p&gt;Once your project is running, in Postman we’ll first create a new &lt;strong&gt;POST&lt;/strong&gt; request to the below endpoint so that we can create an Athlete record in the database.&lt;br&gt;
&lt;a href="https://localhost:43325/api/Athletes"&gt;https://localhost:43325/api/Athletes&lt;/a&gt; (Note that the port number may differ for you)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vRJiBWjD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chaitanyasuvarna.files.wordpress.com/2020/07/postman.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vRJiBWjD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chaitanyasuvarna.files.wordpress.com/2020/07/postman.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you get a 201 Created Response, we can be sure that the Athlete data is inserted into the database.&lt;/p&gt;

&lt;p&gt;This can also be confirmed by creating a new &lt;strong&gt;GET&lt;/strong&gt; request for the same endpoint URL.&lt;br&gt;
You should receive a response similar to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[
    {
        "id": 2,
        "name": "John Doe",
        "age": 35,
        "sport": "Swimming"
    }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We have now successfully configured EF Core in our Web API to connect to an SQLite database. I hope you found this helpful and learned something new.&lt;br&gt;
As stated earlier, you can find the finished project &lt;a href="https://github.com/chaitanya-suvarna/EFCore-SQLite-Demo/"&gt;here&lt;/a&gt; on my &lt;a href="https://github.com/chaitanya-suvarna/"&gt;GitHub page&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>efcore</category>
      <category>database</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Wireless lock with Arduino Uno</title>
      <dc:creator>chaitanya.dev</dc:creator>
      <pubDate>Sat, 29 Aug 2020 15:20:26 +0000</pubDate>
      <link>https://dev.to/chaitanyasuvarna/wireless-lock-with-arduino-uno-cg6</link>
      <guid>https://dev.to/chaitanyasuvarna/wireless-lock-with-arduino-uno-cg6</guid>
<description>&lt;p&gt;A few years ago, when I got to know about the &lt;strong&gt;Arduino Uno&lt;/strong&gt;, I was really curious about the capabilities of this &lt;strong&gt;open-source microcontroller board&lt;/strong&gt;. For those who might not be familiar with Arduino, it is a popular tool for IoT product development, and the Arduino Uno is one of their most popular products. It contains everything needed to support the microcontroller. Simply connect it to a computer with a USB cable, or power it with an AC-to-DC adapter or battery, to get started. You can tinker with your Uno without worrying too much about doing something wrong. It has 14 digital input/output pins (of which 6 can be used as PWM outputs), 6 analog inputs, a 16 MHz ceramic resonator (CSTCE16M0V53-R0), a USB connection, a power jack, an ICSP header and a reset button.&lt;/p&gt;

&lt;p&gt;As always, when I want to get to know a new piece of technology better, I use it for a pet project. With the Arduino Uno, I decided to make a wireless door lock, using the components below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Arduino Uno&lt;/li&gt;
&lt;li&gt;LED lights&lt;/li&gt;
&lt;li&gt;Solenoid Lock&lt;/li&gt;
&lt;li&gt;Relay Circuit&lt;/li&gt;
&lt;li&gt;Breadboard&lt;/li&gt;
&lt;li&gt;Ethernet Shield&lt;/li&gt;
&lt;li&gt;Some Android skills for developing the app to control the lock, and C skills to write the ‘Sketch’ for the Arduino Uno&lt;/li&gt;
&lt;li&gt;Computer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2n5VqW3d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nju4dqcoz07dqqi9gmaz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2n5VqW3d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nju4dqcoz07dqqi9gmaz.png" alt="Alt Text" title="How I setup the circuit for the Wireless lock"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I got all the components I needed from a local hardware store; luckily, everything was available in one place. Make sure you get an authentic Arduino Uno board, as there are many aftermarket copies out there that don’t work as well.&lt;/p&gt;

&lt;p&gt;Once I had all the components set up on my table, I had no clue where to begin. Luckily, Arduino has a very helpful getting started guide &lt;a href="https://www.arduino.cc/en/Guide/ArduinoUno"&gt;here&lt;/a&gt;. I installed all the necessary software and drivers, wrote my first sketch in C to have an LED blink every 5 seconds, connected my Arduino Uno to the PC and uploaded my first sketch. The feeling when you get something to work for the first time, even if it’s something as small as a blinking LED, is just wonderful. I was hooked after that! I wrote multiple programs to do stuff with LEDs. I had a spare motor left over from another project, and I uploaded a sketch to rotate it clockwise and anticlockwise at intervals.&lt;/p&gt;

&lt;p&gt;After some googling (yes, I use that as a verb), I found out that I could use the Arduino Uno as a web server. My mind was blown. I quickly wrote a sketch to host a simple web page with two buttons: ‘Turn Left’ and ‘Turn Right’. With my mobile on the same network as the Uno, I could open that webpage and use the two buttons to control the direction in which the motor connected to the Arduino Uno was turning. Slowly I realised how capable this small microcontroller board in my hand was.&lt;/p&gt;

&lt;p&gt;Now, coming to my actual project: I had a solenoid lock, which is very simple to work with. It has an electromagnet inside; when you pass current through the lock, it pulls the latch in, leaving the lock in an ‘unlocked’ state. When no current is flowing, the latch stays out in the ‘locked’ state. After connecting this lock to the Arduino Uno via the breadboard, all I had to do was send an on/off signal on the pin it was connected to. For those trying to work with modules like a solenoid lock, make sure you get a power adapter, as USB power from a computer &lt;strong&gt;won’t be enough&lt;/strong&gt; to drive it.&lt;/p&gt;

&lt;p&gt;To control the lock remotely, I created a web server on the Arduino Uno, which could be used by an Android app for the lock/unlock functionality, similar to the ‘Turn Left’ and ‘Turn Right’ buttons with the motor. There are tons of tutorials out there that will help you with this part; the one that pointed me in the right direction was &lt;a href="https://www.arduino.cc/en/Tutorial/WebServer"&gt;this one&lt;/a&gt;.&lt;/p&gt;
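
&lt;p&gt;To give you an idea of what such a sketch looks like, here’s a minimal version of a lock-control web server. Note that the pin number, MAC address and the &lt;strong&gt;/lock&lt;/strong&gt; / &lt;strong&gt;/unlock&lt;/strong&gt; paths are illustrative, not my exact code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#include &amp;lt;SPI.h&amp;gt;
#include &amp;lt;Ethernet.h&amp;gt;

const int LOCK_PIN = 7;           // pin driving the relay for the solenoid lock
byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };
EthernetServer server(80);

void setup() {
  pinMode(LOCK_PIN, OUTPUT);
  digitalWrite(LOCK_PIN, LOW);    // no current: latch out, door locked
  Ethernet.begin(mac);            // get an IP address via DHCP
  server.begin();
}

void loop() {
  EthernetClient client = server.available();
  if (!client) return;

  // Read the first request line, e.g. "GET /unlock HTTP/1.1"
  String requestLine = client.readStringUntil('\n');

  if (requestLine.indexOf("GET /unlock") &amp;gt;= 0) {
    digitalWrite(LOCK_PIN, HIGH); // energise the solenoid: latch pulls in
  } else if (requestLine.indexOf("GET /lock") &amp;gt;= 0) {
    digitalWrite(LOCK_PIN, LOW);  // cut the current: latch springs out
  }

  client.println("HTTP/1.1 200 OK");
  client.println("Connection: close");
  client.println();
  client.stop();
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The Android app then only needs to fire an HTTP GET at &lt;strong&gt;http://&amp;lt;arduino-ip&amp;gt;/unlock&lt;/strong&gt; or &lt;strong&gt;/lock&lt;/strong&gt;.&lt;/p&gt;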

&lt;p&gt;I really enjoyed working on this project with my friend Abhishek, as he’s as intrigued by new tech as I am. I know some of you may ask why I didn’t use a Raspberry Pi. The answer is that the Arduino Uno was good enough for the task. Even though the Raspberry Pi is a much more powerful device that can run a full operating system (Raspbian), I didn’t think it was a good decision to shell out almost three times the price to get the same thing done.&lt;/p&gt;

&lt;p&gt;I hope you enjoyed reading this, and if you have any questions please feel free to drop a comment or reach out to me. For those looking for some awesome project ideas involving Arduino, check out &lt;a href="https://create.arduino.cc/projecthub"&gt;https://create.arduino.cc/projecthub&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
