<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Tim Udoma</title>
    <description>The latest articles on DEV Community by Tim Udoma (@samtimberlan).</description>
    <link>https://dev.to/samtimberlan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F113913%2F8e75ef51-48b5-48e1-92af-e3145a1f8f35.jpeg</url>
      <title>DEV Community: Tim Udoma</title>
      <link>https://dev.to/samtimberlan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/samtimberlan"/>
    <language>en</language>
    <item>
      <title>Designing a Global Surveillance System for the NSA. A Look at Frostbite</title>
      <dc:creator>Tim Udoma</dc:creator>
      <pubDate>Fri, 26 Aug 2022 04:45:00 +0000</pubDate>
      <link>https://dev.to/samtimberlan/designing-a-global-surveillance-system-for-the-nsa-a-look-at-frostbite-n7b</link>
      <guid>https://dev.to/samtimberlan/designing-a-global-surveillance-system-for-the-nsa-a-look-at-frostbite-n7b</guid>
      <description>&lt;p&gt;On September 11, 2001, the deadliest terrorist attack in US history occurred, with over 11,000 people either killed or injured. This led to a cascade of events and proactive measures to eliminate even the slightest chances of a recurrence. One such measure was the enactment of the USA PATRIOT act which although permitting substantial incursions on privacy, further empowered the National Security Agency (NSA) and related intelligence agencies to prevent terrorism by solidifying their liberty to engage in global surveillance.&lt;/p&gt;

&lt;p&gt;Successful efforts were made in the past to achieve global surveillance. In 2008, XKeyScore, a data mining and exploitation system, was developed to gather nearly everything a user does on the internet. With over a decade of rapid developments in technology since then, there has been an enormous increase in data and changes in human-computer interaction. In fact, a whole new environment, the Metaverse, now exists due to advancements in Augmented Reality (AR) and Virtual Reality (VR), blurring the barrier between digital and physical existence. Consequently, there is an urgent need for a system that not only addresses the limitations of XKeyScore but also adopts cutting-edge technologies to fulfill the NSA’s mission of gaining a decisive advantage for the nation. This new system, code-named Frostbite, would be an intelligence hive for the NSA (its custodian), the Federal Bureau of Investigation (FBI), and the Central Intelligence Agency (CIA).&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Frostbite?
&lt;/h2&gt;

&lt;p&gt;One limitation of XKeyScore is its inability to store data for long due to the sheer influx. In some locations, as much as 20 TB of information was received per day from the internet, at a potential rate of 10 gigabits per second. To cope, the system was designed to store content data for at most five days and metadata for 30 days. Content data refers to the actual data contained in the resources gathered: for example, the text of an email or the voices in an audio recording. Metadata, on the other hand, refers to supplementary information describing a resource. For example, a call log contains metadata like who was called, how long the call lasted, and the locations (based on cell towers) of the caller and receiver.&lt;/p&gt;
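These two retention windows boil down to a simple age check. A minimal sketch (the helper name and record kinds are illustrative, not XKeyScore's actual code):

```python
from datetime import datetime, timedelta

# Hypothetical sketch of XKeyScore-style retention rules:
# content is kept for at most 5 days, metadata for 30 days.
RETENTION_DAYS = {"content": 5, "metadata": 30}

def is_expired(kind, collected_at, now):
    """Return True if a record of the given kind has outlived its retention window."""
    age = now - collected_at
    return age > timedelta(days=RETENTION_DAYS[kind])

now = datetime(2022, 8, 26)
assert is_expired("content", datetime(2022, 8, 19), now)       # 7 days old: purged
assert not is_expired("metadata", datetime(2022, 8, 19), now)  # still within 30 days
```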

&lt;p&gt;Frostbite largely addresses this limitation through modern cloud architecture. Giants like Google, Microsoft, AWS, and Oracle will provide the necessary enterprise capabilities through the Joint Warfighting Cloud Capability (JWCC) project. Migration to the cloud would allow the NSA to focus on its core objectives by offloading infrastructure concerns to professionals in the form of cloud providers, thus shifting from Capital Expenditure (CAPEX) to Operating Expenditure (OPEX), a cost-effective approach. Additionally, the cloud would enhance the agility of field agents using edge devices, as data would not have to travel across continents to a central server, but only to the nearest data center for processing and storage.&lt;/p&gt;

&lt;p&gt;As a central intelligence hive, Frostbite will consolidate data previously held in disparate systems like DISHFIRE, Boundless Informant, FAIRVIEW, and XKeyScore into a single data lake of highly voluminous, varied, and near real-time data. This consolidation will include, but not be limited to, financial transactions, text messages, compromised computer networks, telephone information, emails, phone calls, drones, satellite images, and fiber-optic cables; basically every device on earth. As a result, the NSA will have a bird’s-eye view of whatever happens in the digital world in near real-time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementation Detail
&lt;/h2&gt;

&lt;p&gt;The unprecedented scale of incoming data demands a distributed cluster architecture; one that can share workloads efficiently among commodity computers. Hadoop has been an effective tool for this job, and rightly so. In combination with Hadoop, Spark, an in-memory computing engine, would run on Kubernetes clusters to handle processing needs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Database
&lt;/h3&gt;

&lt;p&gt;In a distributed environment, relational databases are unsuitable due to the Consistency, Availability, Partition tolerance (CAP) theorem, which states that a partition-tolerant distributed system cannot guarantee both high availability and strong consistency at the same time, as shown in fig 1.0. To illustrate, imagine a business with three shops, each holding its own stock of pencils. To know the total number of pencils, the owner sends an assistant to each shop at noon to tally the counts and record the total in a ledger. However, because each shop makes sales throughout the day, the individual counts change, and the ledger will not reflect the most up-to-date totals until the assistant repeats the routine at the next noon. Conversely, if there were only one shop, there would be no need for this tedious process. The “tedious process” is what is known as eventual consistency and is fundamentally how distributed databases work. While they cannot always provide the most recent record of events, they are highly available and scalable; just as the collection of shops is more likely to have a pencil in stock than any single one.&lt;/p&gt;
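The shop analogy can be modeled in a few lines (a toy illustration of stale reads, not a real replication protocol):

```python
# Toy model of the pencil-shop analogy: three "replicas" each hold a local count,
# and the ledger (a read view) only reflects reality after a sync round.
stores = {"A": 10, "B": 7, "C": 5}
ledger = dict(stores)               # the assistant's noon snapshot

stores["B"] -= 3                    # a sale happens after the snapshot
assert sum(ledger.values()) == 22   # stale read: still the old total
assert sum(stores.values()) == 19   # ground truth has already moved on

ledger = dict(stores)               # next sync round: eventual consistency
assert sum(ledger.values()) == 19   # reads converge once replication catches up
```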

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0_IEZO87--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/39odptbno3a7174norpw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0_IEZO87--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/39odptbno3a7174norpw.png" alt="Image description" width="850" height="768"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fig 1.0 CAP theorem. &lt;/p&gt;

&lt;p&gt;NoSQL databases, unlike relational databases, are better suited to handling data at the scale Frostbite requires. Cassandra is a NoSQL database that excels at writing data at high velocity and is, therefore, the preferred database for this system. Under the hood, it appends writes sequentially to a commit log on disk, which, in addition to improving its fault tolerance (Pries &amp;amp; Dunnigan, 2015), helps prevent rapid SSD disk failures. Disk failures are expensive and are typically avoided by addressing these seemingly subtle concerns, as &lt;a href="https://www.uber.com/blog/ubers-highly-scalable-and-distributed-shuffle-as-a-service/"&gt;Uber recently experienced&lt;/a&gt;. In terms of read performance, Cassandra uses caching to improve speed: a technique where recently used or frequently accessed data is kept in high-speed temporary storage for later access.&lt;/p&gt;
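The commit-log idea can be sketched in miniature (a toy model for illustration only; Cassandra's actual write path with memtables and SSTables is far more involved):

```python
import json

# Minimal sketch of a commit-log write path: every write is appended to a
# sequential log before updating the in-memory table, so a crash can be
# recovered by replaying the log from the start.
class TinyWritePath:
    def __init__(self):
        self.commit_log = []   # stands in for the append-only file on disk
        self.memtable = {}

    def write(self, key, value):
        # Sequential append first, then the in-memory update.
        self.commit_log.append(json.dumps({"key": key, "value": value}))
        self.memtable[key] = value

    def replay(self):
        """Rebuild the in-memory table from the log after a crash."""
        table = {}
        for line in self.commit_log:
            record = json.loads(line)
            table[record["key"]] = record["value"]
        return table

db = TinyWritePath()
db.write("sensor-1", "42")
db.memtable = {}                          # simulate losing in-memory state
assert db.replay() == {"sensor-1": "42"}  # the log restores what was written
```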

&lt;h3&gt;
  
  
  Data Analysis
&lt;/h3&gt;

&lt;p&gt;A key feature of Frostbite is its ability to perform predictive modeling based on the data available. As a case study, imagine a field agent who is successfully authenticated and authorized by the system using Neuralink and a pair of inconspicuous AR/VR glasses. Through the glasses, data is sent to the system for real-time facial recognition and the identification of other potential threats, which could be dealt with before they materialize, as shown in fig 1.1.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8O4tDld6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3frbh68m4b6yygsrcjbl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8O4tDld6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3frbh68m4b6yygsrcjbl.png" alt="Image description" width="610" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fig 1.1 Edge communication with the cloud (Chan et al 2017).&lt;/p&gt;

&lt;p&gt;This facial recognition would be achieved through the use of OpenCV and other relevant algorithms related to image processing, human physiology, and pattern recognition. Furthermore, Spark ships with MLlib, a rich suite of statistical and Machine Learning (ML) tools for further descriptive, diagnostic, and predictive analytics such as summary statistics and clustering.&lt;/p&gt;

&lt;h3&gt;
  
  
  Discovery
&lt;/h3&gt;

&lt;p&gt;Discovery is a feature that powers the search and navigation of data combined from multiple sources. This is essential considering the plethora of data expected to be ingested into Frostbite. A discovery engine will, for example, enable an analyst to seek out all PDF documents created two weeks ago mentioning terrorism on the internet (Document Cloud, 2022). Elasticsearch, based on Apache Lucene, would provide discovery by indexing both structured and unstructured data and allowing for fast and flexible retrieval.&lt;/p&gt;
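As a rough illustration, that example search could be expressed in Elasticsearch's query DSL along these lines (the index schema and field names here are assumptions for the sketch, not anything Frostbite defines):

```python
# Hypothetical Elasticsearch query body: full-text match on "terrorism",
# filtered to PDF documents created in the last 14 days. Field names
# (content, file_type, created_at) are assumed for illustration.
query = {
    "query": {
        "bool": {
            "must": [{"match": {"content": "terrorism"}}],
            "filter": [
                {"term": {"file_type": "pdf"}},
                {"range": {"created_at": {"gte": "now-14d/d"}}},
            ],
        }
    }
}

# A client would POST this body to the index's _search endpoint.
assert query["query"]["bool"]["filter"][0]["term"]["file_type"] == "pdf"
```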

&lt;h3&gt;
  
  
  Visualization
&lt;/h3&gt;

&lt;p&gt;As humans, we are better able to discern patterns when we can see them. Computers, on the other hand, are very good at crunching numbers and can quickly find patterns within them. To better understand what our data reveals, it is pertinent to translate it into a visual form (histogram, map, pie chart, etc.), which helps us make faster decisions. ArcGIS, a geographic information system, will be used to visualize geospatial data on maps, as it allows for real-time data and deep image analysis.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Desperate times call for desperate measures, and the desperate times existing today require proactive monitoring. Frostbite, being at the core of national intelligence, promises far-reaching benefits.&lt;/p&gt;

&lt;p&gt;The system will utilize state-of-the-art technology to continue to enable agile decision-making. Hadoop, Spark, and Cassandra will form the backbone of the system by providing a distributed landscape for data ingestion, processing, and storage. Furthermore, a discovery engine like Elasticsearch will give the flexibility needed for finding needles in a haystack, the needles being targets of interest; some we know, some we don’t. Finally, visualization in the form of maps, statistical methods, and color variations will unlock insights, whether for executives at the top level or agents in the field, helping them make faster decisions.&lt;/p&gt;

&lt;p&gt;If you learned something new from this article, please like and share&lt;/p&gt;

</description>
      <category>dataengineering</category>
      <category>tutorial</category>
      <category>cloud</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Buffer Overflow</title>
      <dc:creator>Tim Udoma</dc:creator>
      <pubDate>Tue, 12 Jul 2022 04:25:00 +0000</pubDate>
      <link>https://dev.to/samtimberlan/buffer-overflow-3g5n</link>
      <guid>https://dev.to/samtimberlan/buffer-overflow-3g5n</guid>
      <description>&lt;p&gt;Buffer overflow or buffer overrun is a term used to describe a condition where a program writes to a buffer beyond its capacity. With an overflowing buffer, a malicious attacker can gain access to memory not originally allocated to a process, for the purpose of injecting and executing arbitrary code.&lt;/p&gt;

&lt;p&gt;Data Execution Prevention (DEP) is a solution to combat buffer overflow by enforcing memory access policies. The basic idea behind DEP is this: even if a buffer overflow occurs and control flow integrity is compromised, the newly injected code should not be able to execute. Any attempt to execute code at a non-executable location results in a STATUS_ACCESS_VIOLATION exception.&lt;/p&gt;

&lt;p&gt;As an optimization, Microsoft provides other memory protection attributes to further limit buffer overflow exploits. For example, Copy-on-Write Protection allows processes to share a physical memory space as long as it has not been written to. Though effective, DEP offers no protection against other types of attacks, particularly those involving code reuse and Return-Oriented Programming.&lt;/p&gt;

&lt;p&gt;If you learned something new from this article, please like and share&lt;/p&gt;

</description>
      <category>exploit</category>
      <category>bufferoverflow</category>
      <category>programming</category>
      <category>cpp</category>
    </item>
    <item>
      <title>Migrating Core-Banking Workloads to the Cloud</title>
      <dc:creator>Tim Udoma</dc:creator>
      <pubDate>Tue, 12 Jul 2022 04:23:13 +0000</pubDate>
      <link>https://dev.to/samtimberlan/migrating-core-banking-workloads-to-the-cloud-3pf9</link>
      <guid>https://dev.to/samtimberlan/migrating-core-banking-workloads-to-the-cloud-3pf9</guid>
      <description>&lt;p&gt;Core banking is a backend system that provides all mission-critical functionalities – backbone functionalities that are key to the running of a bank – which includes account management, transaction processing, deposit, general ledger, loan, and credit processing&lt;/p&gt;

&lt;p&gt;For half a century, mainframes have proven to be a tested and trusted solution for handling core banking workloads. This is largely because mainframes are beasts when it comes to raw processing power, with the latest z16 able to process 25 billion secure transactions per day!&lt;/p&gt;

&lt;p&gt;Traditional banks have been unable to keep up with the fast pace of fintech digitalization because most core banking solutions target mainframes that use legacy languages such as COBOL and complex monolithic architectures. Changes also require planned downtime to load even the smallest updates.&lt;/p&gt;

&lt;p&gt;There is a severe scarcity of the technology domain expertise required for business continuity in most traditional banks. Rather than work with “boring” programming languages, students and recent college graduates have opted for more “exciting” languages poised to create cutting-edge solutions. Older professionals with in-depth knowledge of how these complex mainframes work, who have been around supporting the core banking system, are now retiring or are increasingly expensive to retain. This presents a risk that cannot be ignored, as the safety reputation of any bank hinges on its risk culture.&lt;/p&gt;

&lt;p&gt;A shift to the cloud offers a greatly reduced Total Cost of Ownership. Whereas the annual cost of maintaining a mainframe is estimated to be around $5 million, a similar workload running on a public cloud provider could be nearly ten times lower, at around $550,000.&lt;/p&gt;

&lt;p&gt;The migration of core banking workloads is best done in phases to avoid disruption. Portions of mission-critical functionality are broken down into microservices and shifted to the cloud in a “Lift and Shift” process, starting from the least essential functionalities and ending with the most essential.&lt;/p&gt;

&lt;p&gt;Typically, a large organization like a traditional bank will need more than 5,000 million instructions per second (MIPS) of capacity. For such a workload, twenty 1 TB solid-state drives can be used as primary storage while twenty 1 TB hard disk drives provide a virtual tape.&lt;/p&gt;

&lt;p&gt;Azure Active Directory can be leveraged when using Microsoft Azure to provide consistent and organizational-level security. This is in addition to the inherent defense system available on the cloud against distributed attacks. Azure Backup will provide a secure backup environment that ensures data in transit and at rest is protected even against ransomware attacks.&lt;/p&gt;

&lt;p&gt;If you learned something new from this article, please like and share&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>architecture</category>
      <category>finance</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Beneath the Rug, Computer Processing</title>
      <dc:creator>Tim Udoma</dc:creator>
      <pubDate>Tue, 12 Jul 2022 04:18:50 +0000</pubDate>
      <link>https://dev.to/samtimberlan/beneath-the-rug-computer-processing-1b98</link>
      <guid>https://dev.to/samtimberlan/beneath-the-rug-computer-processing-1b98</guid>
      <description>&lt;p&gt;You have probably heard that the CPU processing is done in 0s and 1s. Have you ever wondered how this is possible?&lt;/p&gt;

&lt;p&gt;Consider the number 11. This number is represented as eleven in the decimal numbering system, which represents numbers using the digits 0 – 9; the number 11 is simply the first number after a full cycle through 0 – 9. The same number represented in the binary system is 1011. Note that more digits are needed to represent the same number (11 vs 1011), a property known as compactness. Since computers are built from two-state switches and are therefore better at handling 0s and 1s, using the binary system ensures accuracy and precision at the expense of compactness. These binary numbers are usually represented in 8-digit bit strings called bytes. For example, the number 1011 would be represented as 00001011, with extra zeros used for padding.&lt;/p&gt;
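Python's built-ins make this round trip easy to verify:

```python
# Decimal 11 in binary, padded to an 8-bit byte as described above.
n = 11
assert bin(n) == "0b1011"          # four binary digits vs two decimal digits
byte = format(n, "08b")            # zero-pad to a full 8-bit byte
assert byte == "00001011"
assert int(byte, 2) == 11          # round-trip back to decimal
```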

&lt;p&gt;At this point, you are probably wondering: what about the text on my screen? The CPU uses a text conversion process called character encoding to translate text to numbers. Think of it as a dictionary: every character is a key tied to a numeric value. Assuming A has a value of 1 in the dictionary, B would have a value of 2. This dictionary is case sensitive, hence the letter A has a different value from the letter a. Initially, computers used ASCII, a character encoding scheme modeled for English characters only; today, computers use Unicode, which encompasses well over 100,000 characters (including your favorite emoji 😀).&lt;/p&gt;
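This dictionary is directly observable in Python via `ord()` and `chr()`:

```python
# The character-to-number dictionary in action.
assert ord("A") == 65 and ord("a") == 97   # case sensitive: distinct code points
assert chr(65) == "A"                      # and back again
assert ord("😀") == 128512                 # Unicode reaches far beyond ASCII
assert "A".encode("ascii") == b"A"         # an ASCII character fits in one byte
assert len("😀".encode("utf-8")) == 4      # the emoji needs four bytes in UTF-8
```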

&lt;p&gt;Images follow a similar but slightly more complex pattern. At the smallest level, every image is made up of pixels. On a color screen, every pixel is represented using three colors: Red, Green, and Blue (RGB). Each color uses an 8-bit number to represent its intensity. For example, 255 255 255 means all three colors are at the highest intensity for a particular pixel, and 0 0 0 means all colors are set to the lowest intensity. Interestingly, since images can be described in terms of numbers, mathematical calculations can be performed on images. This is the basis of facial and character recognition systems using machine learning.&lt;/p&gt;
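Here is a tiny example of doing math on pixel values, using channel inversion as the operation:

```python
# Each pixel is three 8-bit intensities; "math on images" is just math on
# these numbers. Inverting each channel flips a color to its complement.
white = (255, 255, 255)
black = (0, 0, 0)

def invert(pixel):
    """Invert each channel's 8-bit intensity."""
    return tuple(255 - channel for channel in pixel)

assert invert(white) == black
assert invert((255, 0, 0)) == (0, 255, 255)   # red inverts to cyan
```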

&lt;p&gt;If you learned something new from this article, please like and share&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>systemarchitecture</category>
      <category>computerscience</category>
      <category>computing</category>
    </item>
    <item>
      <title>Managing Data at Scale</title>
      <dc:creator>Tim Udoma</dc:creator>
      <pubDate>Tue, 12 Jul 2022 03:58:15 +0000</pubDate>
      <link>https://dev.to/samtimberlan/managing-data-at-scale-5gcm</link>
      <guid>https://dev.to/samtimberlan/managing-data-at-scale-5gcm</guid>
      <description>&lt;p&gt;Imagine you are on a summer vacation to an exquisite resort. You constantly make sure that your phone or camera is charged because you take lots of pictures and videos to preserve the memory. Well, after a few days, you run out of disk space because of the countless high-definition pictures and videos you have taken. As a solution, you move files from your phone to your favorite cloud provider (maybe Google Drive or iCloud) and problem solved. In reality, while you have freed up space on your device, you have consumed space on another device. To get a sense of how much data you may have shared on the internet over time, open network settings on your phone and look for data usage. At the time of this writing, mine was 520Gb. Outstandingly, I am just one among five billion internet users generating over 2.5 billion gigabytes each day.&lt;/p&gt;

&lt;p&gt;Traditional computers are no longer the sole generators of data. Video game consoles, smart watches, CCTV networks, mobile phones, and weather sensors all generate data of varying sizes and types. For some devices, data must be streamed at real-time or near real-time speeds. Unfortunately, we have almost reached the limit of how many transistors can fit on a single CPU chip, so how can processing be increased to match the exponential increase in data? More importantly, how will the data be stored for future retrieval?&lt;/p&gt;

&lt;p&gt;It is a common saying that united we stand, divided we fall. Just as tiny ants are capable of consuming a dead animal many times their size by swarming together, massive data can be processed and analyzed using multiple computers (nodes) for processing. This is referred to as parallel processing. Furthermore, data storage in these systems uses a file system different from that of regular operating systems as it has a higher storage allocation unit (chunk). &lt;/p&gt;

&lt;p&gt;A popular technology for handling data at massive scale is Apache Hadoop. Hadoop enables the distributed processing and storage of data sets across a cluster of computers thus providing high availability and fault tolerance. &lt;br&gt;
It achieves this using four key components: Common, Hadoop Distributed File System (HDFS), Yet Another Resource Allocator (YARN), and MapReduce.&lt;/p&gt;

&lt;p&gt;Common is the base component on which the other three components depend. HDFS is the file system responsible for efficient distributed file storage. Recall that big data requires a larger storage allocation unit than that of a regular OS; HDFS is what provides this functionality. It divides data into chunks of at least 64 MB, each replicated three times across the cluster. Illustratively, to store a 1 GB file in a Hadoop cluster of six computers with a chunk size of 100 MB, HDFS breaks the file into 10 parts and creates three copies of each part, which are then stored across the cluster. Notice that the chunk size can be configured to suit the needs of the system. However, care should be taken not to configure a chunk size much larger than the average file to be processed, as storing many files smaller than the chunk size leads to poor I/O performance, specifically called the small file problem. In a cluster, HDFS assigns one computer the role of master (name node). The name node keeps track of what file is stored and where it is stored. Additionally, classic Hadoop used a JobTracker to assign tasks to the other nodes (data nodes); YARN now fulfills job tracking, scheduling, and resource allocation. MapReduce coordinates data processing by performing two essential duties. First, it breaks down large datasets into smaller ones to be processed by each node (Map). Next, it reassembles each output to get a final result (Reduce).&lt;/p&gt;
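The Map and Reduce duties can be sketched with the classic word-count example (a toy single-process model; real Hadoop jobs distribute the map tasks across data nodes):

```python
from collections import Counter
from itertools import chain

# Toy word count: each "node" maps its own chunk to (word, 1) pairs,
# and the reduce step merges the partial counts into a final result.
chunks = ["big data big clusters", "big data"]   # stand-ins for HDFS chunks

def map_chunk(chunk):
    return [(word, 1) for word in chunk.split()]

def reduce_pairs(pairs):
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

mapped = [map_chunk(c) for c in chunks]              # would run in parallel per node
result = reduce_pairs(chain.from_iterable(mapped))   # reassembled final result
assert result == {"big": 3, "data": 2, "clusters": 1}
```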

&lt;p&gt;Aside from the primary components of Hadoop, other open-source utilities exist to improve and extend its capabilities: Sqoop facilitates the import and export of data from relational databases; Mahout provides a library of machine learning models required for predictive analysis; Kafka offers data streaming; Hive allows querying data from different data stores, even flat files, using Hive Query Language (HQL), which is similar to SQL; and HBase serves as a database system for storing large volumes of real-time data.&lt;/p&gt;

&lt;p&gt;Other databases, like Cassandra and MongoDB, exist to store petabyte-scale data and perform Online Analytical Processing (OLAP) – a technique for processing and analyzing data. Similar to HBase, Cassandra is a column-oriented database. However, its lack of a master node rules out the single point of failure present in HBase due to its reliance on Hadoop. While MongoDB offers the best read performance of the three databases listed, it is unsuitable for large volumes of real-time data. By contrast, Cassandra outperforms the rest in write speed, making it suitable for storing logs, for example. In fact, Cassandra appears to perform better as the cluster size increases beyond 24 nodes, consequently making it the best choice for supporting a large distributed sensor system.&lt;/p&gt;

&lt;p&gt;If you learned something new from this article, please like and share&lt;/p&gt;

</description>
      <category>dataengineering</category>
      <category>systemarchitecture</category>
      <category>opensource</category>
      <category>bigdata</category>
    </item>
    <item>
      <title>How to Upload to an AWS S3 bucket using .Net Core API</title>
      <dc:creator>Tim Udoma</dc:creator>
      <pubDate>Mon, 01 Mar 2021 17:11:09 +0000</pubDate>
      <link>https://dev.to/samtimberlan/how-to-upload-to-an-aws-s3-bucket-using-net-core-api-16ap</link>
      <guid>https://dev.to/samtimberlan/how-to-upload-to-an-aws-s3-bucket-using-net-core-api-16ap</guid>
      <description>&lt;p&gt;This guide assumes you have access to an AWS account with subscription to S3 service and a bucket set up.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Create and setup project&lt;/li&gt;
&lt;li&gt; Add a service&lt;/li&gt;
&lt;li&gt; Create a bucket&lt;/li&gt;
&lt;li&gt; Add security checks&lt;/li&gt;
&lt;li&gt; Upload a file&lt;/li&gt;
&lt;li&gt; Retrieve public URL for uploaded content&lt;/li&gt;
&lt;li&gt; Inject dependencies into startup file&lt;/li&gt;
&lt;li&gt; Add service to controller&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Using an IDE of your choice, create a new web project. It could be an API or a web app. For this demo, we will be creating an API using .NET 5.0.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create the API project
&lt;/h2&gt;

&lt;p&gt;Using Visual Studio or any IDE of your choice, create an API project targeting .NET 5.0. Install the following NuGet packages: &lt;code&gt;AWSSDK.S3&lt;/code&gt; and &lt;code&gt;AWSSDK.Extensions.NETCore.Setup&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Add a service
&lt;/h2&gt;

&lt;p&gt;Create a folder or class library project (depending on your preference) named Services; this will store our AWS service, which will be called by the API controller.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a bucket
&lt;/h2&gt;

&lt;p&gt;To create a bucket, we will need to &lt;a href="https://docs.aws.amazon.com/sdk-for-net/latest/developer-guide/net-dg-config-netcore.html"&gt;connect to our AWS account with valid credentials&lt;/a&gt; using the NuGet package AWSSDK.Extensions.NETCore.Setup. The NuGet package AWSSDK.S3 provides helpful classes for interacting with our upstream bucket. These classes will enable us to perform actions such as creating and updating a bucket. Now, let us create a method that will create a bucket with a specified name. This method will check if the bucket exists and create it if it doesn’t. Using &lt;code&gt;AmazonS3Client&lt;/code&gt;, the bucket will be created with a &lt;code&gt;PutBucketRequest&lt;/code&gt; object containing the bucket information.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; public async Task&amp;lt;bool&amp;gt; CreateBucketAsync(string bucketName)
        {
            try
            {
                _logger.LogInformation("Creating Amazon S3 bucket...");
                var bucketExists = await AmazonS3Util.DoesS3BucketExistV2Async(_amazonS3Client, bucketName);
                if (bucketExists)
                {
                    _logger.LogInformation($"Amazon S3 bucket with name '{bucketName}' already exists");
                    return false;
                }

                var bucketRequest = new PutBucketRequest()
                {
                    BucketName = bucketName,
                    UseClientRegion = true
                };

                var response = await _amazonS3Client.PutBucketAsync(bucketRequest);

                if (response.HttpStatusCode != HttpStatusCode.OK)
                {
                    _logger.LogError("Something went wrong while creating AWS S3 bucket.", response);
                    return false;
                }

                _logger.LogInformation("Amazon S3 bucket created successfully");
                return true;
            }
            catch (AmazonS3Exception ex)
            {
                _logger.LogError("Something went wrong", ex);
                throw;
            }
        }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Add security checks
&lt;/h2&gt;

&lt;p&gt;As is the case with arbitrary file uploads, user data is untrusted; hence, it must be checked to ensure it is clean and conforms to business requirements. For this demo, we will require users to upload only image files (".jpg", ".jpeg", ".png", ".gif") of no more than 6 MB. Furthermore, during upload, the file will be saved not with the original file name but with a random name; the original name will be saved as part of the file metadata. This will prevent injection and related malicious attacks.&lt;/p&gt;

&lt;p&gt;Below is the code to certify that uploaded files are images:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private bool IsValidImageFile(IFormFile file)
        {

            // Reject empty or invalid file lengths
            if (file.Length &amp;lt;= 0)
            {
                return false;
            }

            // Check file extension to prevent security threats associated with unknown file types
            string[] permittedExtensions = new string[] { ".jpg", ".jpeg", ".png", ".gif" };
            var ext = Path.GetExtension(file.FileName).ToLowerInvariant();
            if (string.IsNullOrEmpty(ext) || !permittedExtensions.Contains&amp;lt;string&amp;gt;(ext))
            {
                return false;
            }

            // Check if file size is greater than permitted limit
            if (file.Length &amp;gt; _config.FileSize) // 6MB
            {
                return false;
            }

            return true;
        }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Upload a file
&lt;/h2&gt;

&lt;p&gt;To upload a file, the file must be represented as a &lt;code&gt;TransferUtilityUploadRequest&lt;/code&gt; object. This object contains several properties, notably:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;InputStream: a stream of the file content to be uploaded&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Key: the storage name for the file. This will be set to the random file name&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;BucketName: specifies the destination bucket for the upload&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CannedACL: specifies the access control policy for the uploaded file. This will be set to &lt;code&gt;S3CannedACL.PublicRead&lt;/code&gt; so that users can view the uploaded content via the generated link&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;MetaData: contains arbitrary information about the file. We will add the original file name as part of the metadata&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With the upload object constructed, we can call the &lt;code&gt;UploadAsync&lt;/code&gt; method of the &lt;code&gt;TransferUtility&lt;/code&gt; class, passing the request as a parameter. This asynchronously uploads the file to AWS S3.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public async Task&amp;lt;AWSUploadResult&amp;lt;string&amp;gt;&amp;gt; UploadImageToS3BucketAsync(UploadRequestDto requestDto)
        {
            try
            {
                var file = requestDto.File;
                string bucketName = requestDto.BucketName;

                if (!IsValidImageFile(file))
                {
                    _logger.LogInformation("Invalid file");
                    return new AWSUploadResult&amp;lt;string&amp;gt;
                    {
                        Status = false,
                        StatusCode = StatusCodes.Status400BadRequest
                    };
                }

                // Rename file to random string to prevent injection and similar security threats
                var trustedFileName = WebUtility.HtmlEncode(file.FileName);
                var ext = Path.GetExtension(file.FileName).ToLowerInvariant();
                var randomFileName = Path.GetRandomFileName();
                var trustedStorageName = "files/" + randomFileName + ext;

                // Create the image object to be uploaded in memory
                var transferUtilityRequest = new TransferUtilityUploadRequest()
                {
                    InputStream = file.OpenReadStream(),
                    Key = trustedStorageName,
                    BucketName = bucketName,
                    CannedACL = S3CannedACL.PublicRead, // Make the object publicly readable so users can view it via the generated link
                    PartSize = 6291456
                };

                // Add meta tags, which can include the original file name and other descriptions
                var metaTags = requestDto.Metatags;
                if (metaTags != null &amp;amp;&amp;amp; metaTags.Count() &amp;gt; 0)
                {
                    foreach (var tag in metaTags)
                    {
                        transferUtilityRequest.Metadata.Add(tag.Key, tag.Value);
                    }
                }

                transferUtilityRequest.Metadata.Add("originalFileName", trustedFileName);


                await _transferUtility.UploadAsync(transferUtilityRequest);

                // Retrieve the public URL
                var imageUrl = GenerateAwsFileUrl(bucketName, trustedStorageName).Data;

                _logger.LogInformation("File uploaded to Amazon S3 bucket successfully");
                return new AWSUploadResult&amp;lt;string&amp;gt;
                {
                    Status = true,
                    StatusCode = StatusCodes.Status200OK,
                    Data = imageUrl
                };
            }
            catch (NullReferenceException ex)
            {
                _logger.LogError(ex, "File data not contained in form");
                throw;
            }
            catch (AmazonS3Exception ex)
            {
                _logger.LogError(ex, "Something went wrong during file upload");
                throw;
            }

        }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Retrieve public URL for uploaded content
&lt;/h2&gt;

&lt;p&gt;Additionally, we need a way to get a shareable URL which can be saved to a database. AWS has two patterns for constructing S3 file URLs: path style, which is deprecated, and virtual-hosted style. For this demo, we will use the virtual-hosted style to retrieve the file URL. It follows one of the patterns below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  https://[bucketName].s3.[regionName].amazonaws.com/[key]&lt;/li&gt;
&lt;li&gt;  https://[bucketName].s3.amazonaws.com/[key]&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The code is shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public AWSUploadResult&amp;lt;string&amp;gt; GenerateAwsFileUrl(string bucketName, string key, bool useRegion = true)
        {
            // URL patterns: virtual-hosted style and path style
            // Virtual-hosted style
            // 1. https://[bucketName].s3.[regionName].amazonaws.com/[key]
            // 2. https://[bucketName].s3.amazonaws.com/[key]

            // Path style: DEPRECATED
            // 3. https://s3.[regionName].amazonaws.com/[bucketName]/[key]
            string publicUrl = string.Empty;
            if (useRegion)
            {
                publicUrl = $"https://{bucketName}.{_config.AwsRegion}.{_config.AwsS3BaseUrl}/{key}";
            }
            else
            {
                publicUrl = $"https://{bucketName}.{_config.AwsS3BaseUrl}/{key}";
            }
            return new AWSUploadResult&amp;lt;string&amp;gt;
            {
                Status = true,
                Data = publicUrl
            };
        }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Inject dependencies into startup file
&lt;/h2&gt;

&lt;p&gt;Next, here is the code to register the dependencies the service will need:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;            // Add app injections
            services.AddDefaultAWSOptions(Configuration.GetAWSOptions());
            services.AddAWSService&amp;lt;IAmazonS3&amp;gt;();
            services.AddTransient&amp;lt;IUploadService, UploadService&amp;gt;();
            services.AddTransient&amp;lt;TransferUtility&amp;gt;();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These should be added to the &lt;code&gt;ConfigureServices&lt;/code&gt; method.&lt;/p&gt;

&lt;h2&gt;
  
  
  Add service to controller
&lt;/h2&gt;

&lt;p&gt;Finally, we are ready to set up our controller. It will contain two endpoints: one for creating the bucket and another for uploading content.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        [HttpPost]
        public async Task&amp;lt;IActionResult&amp;gt; Post([FromForm] UploadRequestDto requestDto)
        {
            var result = await _uploadService.UploadImageToS3BucketAsync(requestDto);
            return StatusCode(result.StatusCode);
        }

        [HttpPost("create-bucket")]
        public async Task&amp;lt;IActionResult&amp;gt; CreateS3BucketAsync(string bucketName)
        {
            await _uploadService.CreateBucketAsync(bucketName);
            return StatusCode(StatusCodes.Status200OK);
        }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For the full implementation, please &lt;a href="https://github.com/samtimberlan/AWSService"&gt;visit this repo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;There you have it. Thanks for reading. If you learned something new from this article, please like and share.&lt;/p&gt;

&lt;p&gt;Did you spot a typo, an error or want to contribute? &lt;a href="https://github.com/samtimberlan/Blog-Posts/blob/drafts/How%20to%20upload%20to%20AWS%20S3%20bucket%20using%20.Net%20core.md"&gt;Here's the repo on GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>backend</category>
      <category>netcore</category>
      <category>api</category>
      <category>aws</category>
    </item>
    <item>
      <title>Five Points to Note When Writing Regular Expressions in .Net</title>
      <dc:creator>Tim Udoma</dc:creator>
      <pubDate>Thu, 28 Jan 2021 13:36:43 +0000</pubDate>
      <link>https://dev.to/samtimberlan/five-points-to-note-when-writing-regular-expressions-in-net-1o62</link>
      <guid>https://dev.to/samtimberlan/five-points-to-note-when-writing-regular-expressions-in-net-1o62</guid>
      <description>&lt;p&gt;Regular expressions are powerful patterns written to check the adherence of input to expected requirements. For example, regular expressions provide a way to validate email addresses to ensure they meet the expected requirement of a name, an &lt;code&gt;@&lt;/code&gt; sign and a domain.&lt;br&gt;
Though powerful, regular expressions (regex) can cause performance bottlenecks and security breaches through denial of service (DoS) if not used correctly. What are some important points to consider when using regex in .Net?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Input source: Input sources can be reliable or unreliable. Either way, they usually contain data that:

&lt;ul&gt;
&lt;li&gt;Matches the pattern&lt;/li&gt;
&lt;li&gt;Does not match the pattern&lt;/li&gt;
&lt;li&gt;Nearly matches the pattern.
Unreliable sources will contain more input that nearly matches or does not match a given pattern.
Input that nearly matches the regex pattern typically requires more processing time. Depending on the input length, this time could be days or even years.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt; Always use a timeout: By default, the regex engine in .Net uses an infinite timeout when matching. This should be changed according to the developer’s needs, and the resulting timeout exception handled accordingly. Using a regex timeout secures an app from denial of service.&lt;/li&gt;
&lt;li&gt; Object instantiation: Use a static pattern-matching method, such as &lt;code&gt;Regex.Match(String, String)&lt;/code&gt;, instead of instantiating the regex class (&lt;code&gt;new Regex(pattern)&lt;/code&gt;) for patterns that will be used frequently, as the static methods cache the parsed pattern&lt;/li&gt;
&lt;li&gt; Assembly compilation: When developing a regular expression library, you can significantly improve startup and execution time by compiling the regular expressions into a separate assembly.&lt;/li&gt;
&lt;li&gt; Testing: When testing, do not restrict input to strings that match. Test input that nearly matches and observe the performance, especially in scenarios where the regex will handle unreliable input&lt;/li&gt;
&lt;/ol&gt;
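
&lt;p&gt;As a quick sketch of points 2 and 3 (the pattern and the 100 ms value are only illustrative), a timeout can be passed directly to a static matching call:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;try
{
    // The static method uses the internal regex cache; the TimeSpan aborts
    // a match that runs longer than 100 ms instead of hanging the app.
    bool isMatch = Regex.IsMatch(input, @"^(a+)+$", RegexOptions.None, TimeSpan.FromMilliseconds(100));
}
catch (RegexMatchTimeoutException)
{
    // Handle the timeout, e.g. reject the input as invalid
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;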

&lt;p&gt;Additional reading:&lt;br&gt;
&lt;a href="https://docs.microsoft.com/en-us/dotnet/standard/base-types/best-practices"&gt;Microsoft doc: Best practices for regular expressions in .NET&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/dotnet/standard/base-types/backtracking-in-regular-expressions"&gt;Microsoft Doc: Backtracking in Regular Expressions&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There you have it. Thanks for reading. If you learned something new from this article, please like and share.&lt;/p&gt;

&lt;p&gt;Did you spot a typo, an error or want to contribute? &lt;a href="https://github.com/samtimberlan/Blog-Posts/blob/master/Five%20points%20to%20note%20when%20using%20Regular%20Expressions.md"&gt;Here's the repo on GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>net</category>
      <category>netcore</category>
      <category>csharp</category>
      <category>backend</category>
    </item>
    <item>
      <title>Converting Numbers To Roman Numerals Using Javascript</title>
      <dc:creator>Tim Udoma</dc:creator>
      <pubDate>Fri, 24 Jul 2020 22:32:26 +0000</pubDate>
      <link>https://dev.to/samtimberlan/converting-numbers-to-roman-numerals-using-javascript-3jlj</link>
      <guid>https://dev.to/samtimberlan/converting-numbers-to-roman-numerals-using-javascript-3jlj</guid>
      <description>&lt;h2&gt;
  
  
  The Challenge:
&lt;/h2&gt;

&lt;p&gt;When converting numbers to Roman numerals, there are certain restrictions. A particular symbol must not appear more than three times in a row. This enhances readability and reduces confusion; for example, IIII might be mistaken for III. Consequently, when a symbol appears after a larger (or equal) symbol it is added, but if the symbol appears before a larger symbol it is subtracted.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution:
&lt;/h2&gt;

&lt;p&gt;There are several approaches and algorithms to solve this challenge. However, the algorithm a friend of mine and I came up with is outlined below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Declare a nested array containing the Roman symbols. Each element of the parent array holds the symbols for one place value: units, tens, hundreds or thousands. Furthermore, include symbols for special numerals like four and nine. This will save you lines of code.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var RomanNumerals = [
  ["I", "IV", "V", "IX"],      //Units
  ["X", "XL", "L", "XC"],      //Tens
  ["C", "CD", "D", "CM"],      //Hundreds
  ["M"]                        //Thousands(In this case only M. Roman numerals don't exceed 3999)
];
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create a reusable function to perform the single-digit conversion. It accepts two parameters: the number to be converted and an array indicating the place value of the number (units, tens, hundreds or thousands). The number to be converted is checked and assigned a value from the array corresponding to its Roman equivalent. Here’s a sneak peek into the function.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function convertToDigitRoman(num, romArr) {
  var arrayToBeUsed = romArr;
  if (num &amp;lt;= 3) {
    while (num &amp;gt; 0) {
      romanNumResultArr.push(arrayToBeUsed[0]);
      num--;
    }
  }
  if (num == 4) {
    romanNumResultArr.push(arrayToBeUsed[1]);
  }

  if (num &amp;gt;= 5 &amp;amp;&amp;amp; num &amp;lt; 9) {
    romanNumResultArr.push(arrayToBeUsed[2]);
    while (num - 5 &amp;gt; 0) {
      romanNumResultArr.push(arrayToBeUsed[0]);
      num--;
    }
  }

  if (num == 9) {
    romanNumResultArr.push(arrayToBeUsed[3]);
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Now that we have a reusable function for converting single digits, it’s time to use it to convert numbers with multiple digits like 10 or 265. To achieve this, we use another function that breaks the number into an array of single digits. Hence, 265 becomes &lt;code&gt;[2,6,5]&lt;/code&gt;. Finally, we loop through each of them, calling our reusable function to assign the proper Roman numeral.
Here’s the full code:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var RomanNumerals = [
  ["I", "IV", "V", "IX"],
  ["X", "XL", "L", "XC"],
  ["C", "CD", "D", "CM"],
  ["M"]
];
var romanNumResultArr = [];

function convertToRoman(num) {
  //Reset the result array so repeated calls don't accumulate
  romanNumResultArr = [];

  //Roman numerals can't exceed 3999
  if (num &amp;gt;= 4000) {
    return "Unfortunately, numbers from 4000 and above do not have Roman numerals";
  }

//Break the number into an array of numbers
  var givenNumeral = num
    .toString()
    .split("")
    .map(Number);


//The length of the givenNumeral tells us the value of the number - 2 would mean ten and 3, hundred.
  for (var i = givenNumeral.length; i &amp;gt; 0; i--) {
    convertToRomanUnit(givenNumeral.shift(), RomanNumerals[i - 1]);
  }

  var result = romanNumResultArr.join("");

  console.log(result);
  return result;
}


//---------------Reusable Function-------------------------------

function convertToRomanUnit(num, romArr) {
  var arrayToBeUsed = romArr;
  if (num &amp;lt;= 3) {
    while (num &amp;gt; 0) {
      romanNumResultArr.push(arrayToBeUsed[0]);
      num--;
    }
  }
  if (num == 4) {
    romanNumResultArr.push(arrayToBeUsed[1]);
  }

  if (num &amp;gt;= 5 &amp;amp;&amp;amp; num &amp;lt; 9) {
    romanNumResultArr.push(arrayToBeUsed[2]);
    while (num - 5 &amp;gt; 0) {
      romanNumResultArr.push(arrayToBeUsed[0]);
      num--;
    }
  }

  if (num == 9) {
    romanNumResultArr.push(arrayToBeUsed[3]);
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can &lt;a href="https://github.com/samtimberlan/Roman-Numeral-Converter/blob/master/ConverToRomanNumeralsJS.js"&gt;view the full source code here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;There you have it. Thanks for reading. If you learned something new from this article, please like and share.&lt;/p&gt;

&lt;p&gt;Did you spot a typo, an error or want to contribute? &lt;a href="https://github.com/samtimberlan/Blog-Posts/blob/master/Converting%20Numbers%20To%20Roman%20Numerals%20Using%20Javascript.md"&gt;Here's the repo on GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>algorithms</category>
      <category>intermediate</category>
    </item>
    <item>
      <title>REST – A pragmatic Approach For Building APIs</title>
      <dc:creator>Tim Udoma</dc:creator>
      <pubDate>Fri, 24 Jul 2020 21:43:17 +0000</pubDate>
      <link>https://dev.to/samtimberlan/rest-a-pragmatic-approach-for-building-apis-4ec1</link>
      <guid>https://dev.to/samtimberlan/rest-a-pragmatic-approach-for-building-apis-4ec1</guid>
      <description>&lt;p&gt;“How can a computer thingy be RESTful?” That was the only thought that ran through my mind when I first heard of REST. Even more mind boggling was how it was always called in conjunction with APIs. How I fondly remember those days.&lt;/p&gt;

&lt;p&gt;With this article, I will describe REST in detail, along with a brief description of an API. Let’s get started.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is an API?
&lt;/h2&gt;

&lt;p&gt;An API is easily defined as an Application Programming Interface. Yeah yeah, same old tech-y definition. So, what exactly is an API? The easiest way to think of an API is as anything that delivers a service. Imagine you are getting married. As is often the case, you’re already weighed down by a lot of responsibilities: providing the best clothes for yourself and the train, and ensuring that bills for things such as the reception venue, the bride price and the like are paid. You definitely are not going to be the same person to cook for and serve the guests. These tasks could be handled by others willing to render such services.&lt;/p&gt;

&lt;p&gt;Similarly, if we are to create a small weather application that forecasts the weather conditions of hundreds of cities around the world, instead of traveling to every single one of those cities, every single day, with instruments for recording weather reports, we could just get the data from those who already have these reports in their libraries, while focusing on creating our app. That makes sense. We could pay them a token from the money which otherwise would have been used to pay for fares.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is REST?
&lt;/h2&gt;

&lt;p&gt;I have always thought the English word “REST” to mean, as one dictionary defined: “A pause for relaxation”. However, I have realized that not all words containing those four letters R-E-S-T actually mean relaxation or a state of inactivity. Some are almost opposite, as in the case of the word “Restive”.&lt;/p&gt;

&lt;p&gt;In our case, REST is an abbreviation for Representational State Transfer. It is a mindset or thought process developers follow when building an API. It is not an API.&lt;/p&gt;

&lt;h2&gt;
  
  
  A brief history:
&lt;/h2&gt;

&lt;p&gt;REST was introduced in a dissertation propounded by Roy Fielding while earning his doctorate degree in the year 2000.&lt;/p&gt;

&lt;p&gt;At that point in time, the internet was the rave of the moment. It was just five years since the birth of JavaScript, and Microsoft, Netscape and Yahoo were all having a good tussle as to who would gain web dominance.&lt;/p&gt;

&lt;p&gt;Furthermore, with the growth of the internet arose a need for distributed servers, that is, computers in different locations, to communicate. The only available solution was a protocol which required developers to write complex, error-prone, hard-to-debug XML documents. That protocol was the Simple Object Access Protocol (SOAP).&lt;/p&gt;

&lt;h2&gt;
  
  
  Constraints of REST
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Client – Server:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the most basic constraint of REST. It implies that REST can ONLY be implemented on a client-server architecture. The client is usually a browser used to make requests, whereas the server is the remote computer holding the information to be accessed. This constraint enhances separation of concerns and scalability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code On Demand:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An optional constraint, signifying that the server can send executable code to be run by the client.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Layered System:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The client cannot ordinarily tell whether it is connected directly to the end server or to an intermediary along the way; no matter how many systems process a request, the end response should be the same.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Statelessness:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The server does not store client-related information in the form of sessions; each request must carry all the state the server needs to process it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cache-ability:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The server should identify which responses are cacheable and which are not. Information that will not easily change over time should be cached, for example in the browser, so that loading time is reduced and performance increased.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Uniform Interface:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This generally ensures a standard method for communication – HTTP verbs. It is another fundamental constraint that must be implemented for an API to be RESTful. It includes four sub-constraints:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Every resource should be identified. It should have a name or URI. Let’s reconsider the weather service for our small app. Using a very popular service provider, OpenWeatherMap, we can get the weather report for London by making a call through this URI:&lt;br&gt;
api.openweathermap.org/data/2.5/weather?q=London&lt;br&gt;
This is possible because the resource we are looking for, the weather details for London, is named “London” in the URI. Therefore, finding reports for other cities can be done just by replacing “London” with the city name in the URI.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The resources can be manipulated through representations. All the data or information needed by the server to take actions are fully included in the representations transferred by the requests. Operations such as editing and deleting of resources should be carried out with the given information the client has.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Self-descriptive Messages. Every request should provide all information regarding the action it wants the server to perform.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Hypermedia As The Engine Of Application State (HATEOAS). Don’t be fooled by this big term. This constraint means that URIs for related resource representations are embedded in the representation itself. Assuming our request to the weather API was successful, we could get a simplified response of this form:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
"id":123456,
"name":"London",
"link": [{
"rel":"self", "href":"data/2.5/weather?q=London"
}]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  URL Design Principles for RESTful Web API
&lt;/h2&gt;

&lt;p&gt;Among the constraints of the REST architectural style, Uniform Interface is the central feature. It doesn’t mean all resources share the same URL; the true meaning is to assign every resource a unique URL, with the design of all URLs following the same normal form. There are many books written about the rules of RESTful API design if you want to do some research; however, here are some rules we will follow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The hierarchical relationship must be separated by a forward slash (/). For example, &lt;a href="http://localhost/products/shoes"&gt;http://localhost/products/shoes&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;All words should be lowercase and use hyphens (-) to improve readability. For example, &lt;a href="http://localhost/user-comments"&gt;http://localhost/user-comments&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;No file extensions. Please note, we are designing URLs for a RESTful API, not for websites, so you won’t see the URLs end with /.jsp /.aspx .html /.php, etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use a plural noun for collection names or store names. For example, &lt;a href="http://localhost/users"&gt;http://localhost/users&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use a singular noun or identity for specific documents or elements. For example, in the URL &lt;a href="http://localhost/offices/2/employees/tim-udoma"&gt;http://localhost/offices/2/employees/tim-udoma&lt;/a&gt;, the ID 2 helps the server identify the office, and the employee name helps the server retrieve his information.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Query parameters should be used for filtering. For example, &lt;a href="http://localhost/shop/24/products/clothes?gender=male"&gt;http://localhost/shop/24/products/clothes?gender=male&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use a verb or verb phrase for controller names. For example, &lt;a href="http://localhost/products/search"&gt;http://localhost/products/search&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create, Read, Update, Delete (CRUD) operation names should not appear in URLs. For example, you will never see the URL &lt;a href="http://localhost/get-cloth/10"&gt;http://localhost/get-cloth/10&lt;/a&gt;, because the correct design is to send the URL &lt;a href="http://localhost/products/clothes"&gt;http://localhost/products/clothes&lt;/a&gt; within an HTTP GET request to the server; the HTTP request is a self-descriptive message.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
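
&lt;p&gt;To illustrate the last rule (the localhost URLs are placeholders), the same resource URL is combined with different HTTP verbs to express CRUD operations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET    http://localhost/products/clothes      // read the collection
POST   http://localhost/products/clothes      // create a new item
GET    http://localhost/products/clothes/10   // read item 10
PUT    http://localhost/products/clothes/10   // update item 10
DELETE http://localhost/products/clothes/10   // delete item 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;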

&lt;p&gt;There you have it. Thanks for reading. If you learned something new from this article, please like and share.&lt;/p&gt;

&lt;p&gt;Did you spot a typo, an error or want to contribute? &lt;a href="https://github.com/samtimberlan/Blog-Posts/blob/master/REST%20%E2%80%93%20A%20pragmatic%20Approach%20For%20Building%20APIs.md"&gt;Here's the repo on GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>rest</category>
      <category>api</category>
    </item>
    <item>
      <title>Distributed Caching In ASP.Net Core Using Redis</title>
      <dc:creator>Tim Udoma</dc:creator>
      <pubDate>Thu, 16 Jul 2020 12:34:40 +0000</pubDate>
      <link>https://dev.to/samtimberlan/distributed-caching-in-asp-net-core-3-1-using-redis-3a2n</link>
      <guid>https://dev.to/samtimberlan/distributed-caching-in-asp-net-core-3-1-using-redis-3a2n</guid>
      <description>&lt;p&gt;Distributed caching is the concept of centralizing a cache such that it is used by multiple servers. During development, caches commonly reside in memory on the developer’s system. This presents a challenge when deploying to multiple servers in a farm. If the cache were to reside in the memory of each of these servers, there would be no way to map keys to values of previous requests, since requests are not guaranteed to end up on the same server in a load-balanced distributed environment.&lt;/p&gt;

&lt;p&gt;A distributed cache resolves this problem by acting as a central memory store for caching. It also removes the need for “sticky” sessions, where every request for a user session must be routed to the same server that handled the first request.&lt;/p&gt;

&lt;p&gt;Distributed caching in ASP.Net Core makes use of the interface &lt;code&gt;IDistributedCache&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To get started, this article makes some assumptions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You have an editor installed (Visual Studio Code or Visual Studio)&lt;/li&gt;
&lt;li&gt;You have Redis installed. You can find installation instructions &lt;a href="https://chocolatey.org/packages/redis-64/"&gt;here&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;You are running .Net Core 3.1&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With this in mind, let’s dive in.&lt;/p&gt;

&lt;p&gt;Create a new ASP.Net core API project.&lt;br&gt;
Build the project.&lt;br&gt;
Install the nuget package &lt;code&gt;Microsoft.Extensions.Caching.StackExchangeRedis&lt;/code&gt;&lt;br&gt;
Register the package in &lt;code&gt;Startup.cs&lt;/code&gt; using the code below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Add Redis distributed cache
services.AddStackExchangeRedisCache(options =&amp;gt; options.Configuration = this.Configuration.GetConnectionString("redisServerUrl"));

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This adds Redis to the application and exposes the &lt;code&gt;IDistributedCache&lt;/code&gt; interface, which can be consumed using dependency injection. Notice that it is at this point we pass our Redis server address and port. It is good practice not to hard-code this, but to set it as a configurable property via appsettings. Now head over to &lt;code&gt;appsettings.json&lt;/code&gt; and add the Redis server URL under the &lt;code&gt;ConnectionStrings&lt;/code&gt; section.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"ConnectionStrings": {
    "redisServerUrl" :  "localhost:6379"
  },
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can find your Redis port number by running the &lt;code&gt;redis-server&lt;/code&gt; command in CMD. By default, it is &lt;code&gt;localhost:6379&lt;/code&gt;. Now Redis is set up and ready to be used in your application.&lt;/p&gt;

&lt;p&gt;Head over to the &lt;code&gt;WeatherForecastController&lt;/code&gt; and inject &lt;code&gt;IDistributedCache&lt;/code&gt; via constructor injection.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public WeatherForecastController(ILogger&amp;lt;WeatherForecastController&amp;gt; logger, IDistributedCache cache)
{
    _logger = logger;
    _cache = cache;
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;IDistributedCache&lt;/code&gt; interface ensures the presence of essential methods for working with a cache, namely:&lt;br&gt;
&lt;code&gt;GetString()&lt;/code&gt; and &lt;code&gt;GetStringAsync()&lt;/code&gt; for retrieving string values, including JSON strings&lt;br&gt;
&lt;code&gt;Get()&lt;/code&gt; and &lt;code&gt;GetAsync()&lt;/code&gt; for retrieving byte array values&lt;br&gt;
&lt;code&gt;SetString()&lt;/code&gt; and &lt;code&gt;SetStringAsync()&lt;/code&gt; for setting string values, including JSON strings&lt;br&gt;
&lt;code&gt;Set()&lt;/code&gt; and &lt;code&gt;SetAsync()&lt;/code&gt; for setting byte array values.&lt;br&gt;
In the &lt;code&gt;Get&lt;/code&gt; action method of the &lt;code&gt;WeatherForecastController&lt;/code&gt;, check the cache for the specified cache key; if present, return the result from the cache rather than from the DB.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Check if content exists in cache
string cachedWeatherResult = await _cache.GetStringAsync("weatherResult");
if (cachedWeatherResult != null)
{
    return cachedWeatherResult;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Else, process the request&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var rng = new Random();
var weatherResult = Enumerable.Range(1, 5).Select(index =&amp;gt; new WeatherForecast
    {
        Date = DateTime.Now.AddDays(index),
        TemperatureC = rng.Next(-20, 55),
        Summary = Summaries[rng.Next(Summaries.Length)]
    })
    .ToArray();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and save it to the cache for subsequent requests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;string result = JsonConvert.SerializeObject(weatherResult);
await _cache.SetStringAsync("weatherResult", result);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Cache Expiration
&lt;/h2&gt;

&lt;p&gt;Given our setup, we run the risk of serving stale data from our cache. To resolve this, we set the absolute expiration and sliding expiration times whenever we add an item to the cache.&lt;br&gt;
The absolute expiration time defines how long an item can remain in the cache before it is removed, regardless of use. The sliding expiration time, on the other hand, defines how long an item can remain in the cache without being accessed before it is removed. It is usually shorter than the absolute expiration time.&lt;/p&gt;

&lt;p&gt;Now let us implement this by refactoring our code to avoid repetition. We will add extension methods to &lt;code&gt;IDistributedCache&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Create a new folder in the project called &lt;code&gt;Extensions&lt;/code&gt;, add a class named &lt;code&gt;CacheExtensions&lt;/code&gt;, and add the code below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using Microsoft.Extensions.Caching.Distributed;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
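&lt;p&gt;The extension methods below live inside a static class. A minimal skeleton, using a hypothetical namespace that you should match to your project:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;namespace WeatherApi.Extensions // hypothetical namespace
{
    public static class CacheExtensions
    {
        // GetCacheValueAsync and SetCacheValueAsync go here
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;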





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public static async Task&amp;lt;T&amp;gt; GetCacheValueAsync&amp;lt;T&amp;gt;(this IDistributedCache cache, string key) where T : class
{
    string result = await cache.GetStringAsync(key);
    if (String.IsNullOrEmpty(result))
    {
         return null;
    }
    var deserializedObj = JsonConvert.DeserializeObject&amp;lt;T&amp;gt;(result);
    return deserializedObj;
}

public static async Task SetCacheValueAsync&amp;lt;T&amp;gt;(this IDistributedCache cache, string key, T value) where T : class
{
      DistributedCacheEntryOptions cacheEntryOptions = new DistributedCacheEntryOptions();

      // Remove item from cache after duration
      cacheEntryOptions.AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(60);

      // Remove item from cache if unsued for the duration
      cacheEntryOptions.SlidingExpiration = TimeSpan.FromSeconds(30);

      string result = value.ToJsonString();

      await cache.SetStringAsync(key, result);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that &lt;code&gt;SetCacheValueAsync&lt;/code&gt; uses an instance of &lt;code&gt;DistributedCacheEntryOptions&lt;/code&gt; to set the absolute and sliding expiration times.&lt;/p&gt;

&lt;p&gt;With these extension methods in place, refactor the &lt;code&gt;WeatherForecastController&lt;/code&gt; to invoke them. The controller should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public async Task&amp;lt;IEnumerable&amp;lt;WeatherForecast&amp;gt;&amp;gt; Get()
{
   // Check if content exists in cache
   WeatherForecast[] weatherResult = await _cache.GetCacheValueAsync&amp;lt;WeatherForecast[]&amp;gt;("weatherResult");
   if (weatherResult != null)
   {
        return weatherResult;
   }
   var rng = new Random();
    weatherResult = Enumerable.Range(1, 5).Select(index =&amp;gt; new WeatherForecast
   {
        Date = DateTime.Now.AddDays(index),
        TemperatureC = rng.Next(-20, 55),
        Summary = Summaries[rng.Next(Summaries.Length)]
    })
        .ToArray();

    await _cache.SetCacheValueAsync("weatherResult", weatherResult); 
    return weatherResult;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Build and run the application. Ensure the Redis server is still running in CMD. Open Postman, navigate to the weather forecast URL, and note the time taken to service the request. This first request was not served from the cache.&lt;/p&gt;

&lt;p&gt;Send the request again; this time, the request should take less time than the first one, because the data in the response comes from the cache. You can verify this by running the command &lt;code&gt;keys *&lt;/code&gt; in the Redis CLI. If your key is in the cache, congratulations, you have successfully implemented Redis caching.&lt;/p&gt;
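&lt;p&gt;For reference, a quick inspection session in the Redis CLI might look like the sketch below. Note that the StackExchange Redis cache provider stores each entry as a Redis hash (with the payload in a &lt;code&gt;data&lt;/code&gt; field), and the key name can carry a prefix if an instance name was configured:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;keys *                  # list all keys; "weatherResult" should appear
ttl weatherResult       # seconds left before the entry expires
hgetall weatherResult   # the entry; the cached JSON is in the "data" field
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;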

&lt;p&gt;There you have it. Thanks for reading. If you learned something new from this article, please like and share.&lt;/p&gt;

&lt;p&gt;Did you spot a typo, an error or want to contribute? &lt;a href="https://github.com/samtimberlan/Blog-Posts/blob/drafts/Distributed%20Caching%20In%20ASP.Net%20Core%203.1%20Using%20Redis.md"&gt;Here's the repo on GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>distributedsystems</category>
      <category>aspnetcore</category>
      <category>redis</category>
      <category>api</category>
    </item>
    <item>
      <title>Essential Laws of User Experience (UX) every Software Developer must know</title>
      <dc:creator>Tim Udoma</dc:creator>
      <pubDate>Sat, 04 Jul 2020 21:49:47 +0000</pubDate>
      <link>https://dev.to/samtimberlan/essential-laws-of-user-experience-ux-every-web-developer-must-know-164m</link>
      <guid>https://dev.to/samtimberlan/essential-laws-of-user-experience-ux-every-web-developer-must-know-164m</guid>
      <description>&lt;p&gt;&lt;strong&gt;Doherty Threshold:&lt;/strong&gt; Propounded by Walter J. Doherty and Ahrvind J. Thadani states that “neither the user nor the computer should have to wait for each other”. The average interval between request and response should be less than 400 milliseconds. Keeping users waiting longer reduces productivity and loses attention. To achieve this both Front and Back-end engineers need to work together. For instance, back-end engineers ensure that frequently accessed resources are cached in-memory using a caching tool like Redis while Front-end engineers ensure that scripts and styles are minified and served only when needed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Aesthetic-Usability Effect:&lt;/strong&gt; Aesthetically pleasing design was studied in 1995 by Masaaki Kurosu and Kaori Kashimura of the Hitachi Design Center. They found that users are more tolerant of minor functional defects as long as an app is aesthetically appealing. Keep in mind that the front-end of an application is what users interact with; as such, it should be given the attention it deserves, reflecting the brand while allowing users to carry out tasks seamlessly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fitts's Law:&lt;/strong&gt; Size and distance. The time to acquire a target is a function of the distance to and the size of the target. The targets here are the buttons, links and calls to action (CTAs) users interact with. These should be conspicuous and strategically placed. For example, an e-commerce site should not hide the “Add to cart” button in the footer of the product page if the goal is to convert visitors to customers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Jakob’s Law of Internet User Experience:&lt;/strong&gt; “Users spend time on sites other than your site. They prefer your site to work just like other sites.” This law essentially discourages web developers from being too innovative to the extent where users are unable to interact with their products simply because it requires a steep learning curve.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Occam’s Razor:&lt;/strong&gt; “Among competing hypotheses that predict equally well, the one with the fewest assumptions should be selected”. In design, we strive to keep things simple, not reaching for complex solutions where simpler ones suffice.&lt;/p&gt;

&lt;p&gt;There you have it. Thanks for reading. If you learned something new from this article, please like and share.&lt;/p&gt;

&lt;p&gt;Did you spot a typo, an error or want to contribute? &lt;a href="https://github.com/samtimberlan/Blog-Posts/blob/drafts/Essential%20Laws%20Of%20UX%20every%20Web%20Developer%20must%20know.md"&gt;Here's the repo on GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ux</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
