<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: oluwatobi2001</title>
    <description>The latest articles on DEV Community by oluwatobi2001 (@oluwatobi2001).</description>
    <link>https://dev.to/oluwatobi2001</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F904299%2F9fbaaac4-47ed-43bb-b958-b6284f2f8c53.png</url>
      <title>DEV Community: oluwatobi2001</title>
      <link>https://dev.to/oluwatobi2001</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/oluwatobi2001"/>
    <language>en</language>
    <item>
      <title>Implementing a Visitor Counter on Azure Resume Challenge</title>
      <dc:creator>oluwatobi2001</dc:creator>
      <pubDate>Mon, 02 Sep 2024 00:53:31 +0000</pubDate>
      <link>https://dev.to/oluwatobi2001/implementing-a-visitor-counter-on-azure-resume-challenge-2i63</link>
      <guid>https://dev.to/oluwatobi2001/implementing-a-visitor-counter-on-azure-resume-challenge-2i63</guid>
      <description>&lt;p&gt;Azure cloud resume challenge provides a thrilling journey for cloud enthusiasts and helps  give you portfolio-worthy projects.  You can get more details on how to get started &lt;a href="https://cloudresumechallenge.dev/docs/the-challenge/azure/" rel="noopener noreferrer"&gt;here&lt;/a&gt;. &lt;br&gt;
One of the major steps in completing the challenge involved completing and implementing a visitor counter that records the number of visitors to the website. This article details how I went about implementing this, the obstacles I faced and how I was able to circumvent them.&lt;br&gt;
To follow along with this tutorial, here are some prerequisites.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Possession of an Azure account with an active subscription&lt;/li&gt;
&lt;li&gt;Knowledge of Azure Storage (here is an &lt;a href="https://dev.to/oluwatobi2001/step-by-step-guide-hosting-static-webapps-on-azure-19ao"&gt;article&lt;/a&gt; that discusses it in detail)&lt;/li&gt;
&lt;li&gt;Familiarity with Azure Functions and Azure Cosmos DB.&lt;/li&gt;
&lt;li&gt;JavaScript knowledge&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With that, let's get started.&lt;/p&gt;
&lt;h2&gt;Brief intro&lt;/h2&gt;

&lt;p&gt;So far, I have completed the preceding steps of the Azure cloud challenge: creating and deploying the static resume on the Azure platform. I wrote a detailed article covering how I implemented it &lt;a href="https://dev.to/oluwatobi2001/step-by-step-guide-hosting-static-webapps-on-azure-19ao"&gt;here&lt;/a&gt;.&lt;br&gt;&lt;br&gt;
The next stage of the challenge involved implementing the visitor-count API for my site. I chose Azure Cosmos DB as the database and Azure Functions to execute concise serverless functions written in JavaScript; my choice of serverless stems from the efficiency and cost-effectiveness it provides. I also configured my front-end resume site with the JavaScript that consumes the API and updates the page. The sections below highlight the tools used.&lt;/p&gt;
&lt;h3&gt;Azure Cosmos DB&lt;/h3&gt;

&lt;p&gt;Azure Cosmos DB is a fully managed cloud database service on Azure, comparable to DynamoDB on AWS or Firestore on GCP. It offers multiple database APIs, ranging from relational (PostgreSQL) to NoSQL options such as document and graph stores. This gives us the ability to store and update data in the cloud seamlessly and affordably. &lt;/p&gt;
&lt;h3&gt;Azure Functions&lt;/h3&gt;

&lt;p&gt;Azure Functions is the serverless way to run backend code on the Azure platform. It supports many programming languages, including Node.js, Python, C#/.NET, and others, and functions can be configured to be invoked by several triggers such as HTTP requests, database updates, and timers.&lt;br&gt;
Next, I will walk through setting up the visitor-count API and then integrating it with the frontend.&lt;/p&gt;
&lt;h2&gt;Setting up Azure Cosmos DB&lt;/h2&gt;

&lt;p&gt;On the Azure Portal home page, click the navigation button and select Azure Cosmos DB. You will see a page similar to the one below. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffhu130m9jkwu227lc17g.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffhu130m9jkwu227lc17g.PNG" alt="Cosmos DB home page" width="800" height="170"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thereafter, click the Create button at the top of the page. For this project, we will stick with Azure Cosmos DB for NoSQL. Also make sure to assign a resource group for the database; it is preferable to have a single resource group managing all the resources involved in this project. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flcoa6dhhpl94dshgmwra.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flcoa6dhhpl94dshgmwra.PNG" alt="Image description" width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frxwnb50ggr9nmr8c8d9d.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frxwnb50ggr9nmr8c8d9d.PNG" alt="Image description" width="800" height="294"&gt;&lt;/a&gt;&lt;br&gt;
On successful creation of the database, navigate to the Data Explorer tab, which lets us easily configure the database properties. &lt;/p&gt;

&lt;p&gt;While in Data Explorer, click on New Container. This lets us create a database ID and a container ID to store our data. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv6891v8loismmuotqtip.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv6891v8loismmuotqtip.PNG" alt="Image description" width="287" height="480"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjp0ujdp8g59nscwfjc5g.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjp0ujdp8g59nscwfjc5g.PNG" alt="Image description" width="386" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will also be required to enter a partition key, which you can name based on your preference. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp107a6314y4h3u55wfaf.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp107a6314y4h3u55wfaf.PNG" alt="Image description" width="800" height="349"&gt;&lt;/a&gt;&lt;br&gt;
Thereafter, add an item to the container that holds the visitor count field.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
"id": "item",
"count" : 0
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Setting up Azure Functions&lt;/h2&gt;

&lt;p&gt;Next, we will create the Azure Function itself. This requires the Function App service, which manages the function apps on your account. Navigate to Function App in the portal and click on &lt;code&gt;Create&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmui1uigobytiph4mre5w.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmui1uigobytiph4mre5w.PNG" alt="Image description" width="800" height="131"&gt;&lt;/a&gt;&lt;br&gt;
You will then be offered several hosting plans. For this tutorial, we will stick with the Consumption plan. &lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcblrqv6drv4ezmrlu542.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcblrqv6drv4ezmrlu542.PNG" alt="Image description" width="800" height="338"&gt;&lt;/a&gt;&lt;br&gt;
Thereafter, enter a unique name for the function app and select the appropriate Azure resource group. Also, select the runtime stack you are comfortable with; I used Node.js to build the function. &lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fossivxqli0888koyzsv4.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fossivxqli0888koyzsv4.PNG" alt="Image description" width="800" height="554"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On successful creation of the function app, you should see something similar to this. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F96hlcw799upacmz5bcvo.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F96hlcw799upacmz5bcvo.PNG" alt="Image description" width="800" height="307"&gt;&lt;/a&gt;&lt;br&gt;
Clicking the &lt;code&gt;Go to resource&lt;/code&gt; button leads to the function app dashboard. Within the dashboard, click on &lt;code&gt;Create function&lt;/code&gt;.&lt;br&gt;&lt;br&gt;
We will select the HTTP trigger template, since we intend to communicate with our resume site via browser HTTP requests; the resulting &lt;code&gt;httpTrigger1&lt;/code&gt; function serves as our trigger. &lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0mistrd4p4xm7tqnl20.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0mistrd4p4xm7tqnl20.PNG" alt="Image description" width="711" height="591"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffrkxpdc3wvtcdcort2l5.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffrkxpdc3wvtcdcort2l5.PNG" alt="Image description" width="426" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To configure our serverless Azure function to interact seamlessly with the database, navigate to the &lt;code&gt;httpTrigger1&lt;/code&gt; function and click on the Integration tab. &lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5vf3s6we6s82gr6kom89.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5vf3s6we6s82gr6kom89.PNG" alt="Image description" width="800" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on the Add input and Add output tabs and modify them to integrate the database with the function. &lt;br&gt;
In each tab, enter the database name and the container name of the database we created. Also ensure that a new database connection is created in each tab; this enables a seamless connection to the database. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F53co7hy13buhutlkxv0l.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F53co7hy13buhutlkxv0l.PNG" alt="Image description" width="472" height="507"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffs6w43b750srqmocvkth.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffs6w43b750srqmocvkth.PNG" alt="Image description" width="488" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbpyter2zdjjhlwjga5z.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbpyter2zdjjhlwjga5z.PNG" alt="Image description" width="800" height="263"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Upon successful completion, click on the function trigger tab and you will get access to the function dashboard. A default code sample is available under the &lt;code&gt;Code + Test&lt;/code&gt; tab.&lt;/p&gt;

&lt;p&gt;Now to the main issue I faced while building the application: I experienced a great deal of difficulty connecting the database to the function, due to a flaw in the default &lt;code&gt;function.json&lt;/code&gt; configuration. &lt;/p&gt;

&lt;p&gt;The &lt;code&gt;function.json&lt;/code&gt; file stores the configuration that lets the function and the database communicate. &lt;br&gt;
However, an upgrade of the Azure Functions extension over time has invalidated some fields in &lt;code&gt;function.json&lt;/code&gt;, which ultimately led to the errors I experienced.&lt;br&gt;
Here are the two fields in question:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ConnectionString&lt;/code&gt;&lt;br&gt;
&lt;code&gt;collectionName&lt;/code&gt;&lt;br&gt;
These two have been replaced with &lt;code&gt;connection&lt;/code&gt; and &lt;code&gt;containerName&lt;/code&gt; respectively.&lt;/p&gt;
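&lt;p&gt;To illustrate the rename, here is a small hypothetical helper (not an Azure tool, just a sketch) that rewrites a binding object from the old field names to the new ones:&lt;/p&gt;

```javascript
// Hypothetical helper (not part of any Azure SDK): renames the legacy
// Cosmos DB binding fields to their current replacements, passing all
// other fields through unchanged.
function upgradeCosmosBinding(binding) {
    const renames = {
        ConnectionString: 'connection',
        collectionName: 'containerName'
    };
    const upgraded = {};
    for (const key of Object.keys(binding)) {
        upgraded[renames[key] || key] = binding[key];
    }
    return upgraded;
}
```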

&lt;p&gt;The code below contains the corrected version of the &lt;code&gt;function.json&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "bindings": [
{
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [
        "get",
        "post"
]
},
{
      "type": "http",
      "direction": "out",
      "name": "res"
},
{
      "name": "inputDocument",
      "direction": "in",
      "type": "cosmosDB",
      "methods": [],
      "databaseName": "views_db",
      "containerName": "tutorial-container",
      "connection": "tobi-tuts_DOCUMENTDB"
},
{
      "name": "outputDocument",
      "direction": "out",
      "type": "cosmosDB",
      "methods": [],
      "databaseName": "views_db",
      "containerName": "tutorial-container",
      "connection": "tobi-tuts_DOCUMENTDB"
}
]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After editing, save &lt;code&gt;function.json&lt;/code&gt; and test the code. If the connection is correct, a &lt;code&gt;200&lt;/code&gt; response code is displayed. &lt;/p&gt;

&lt;p&gt;Now we will write the serverless function code itself.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
module.exports = async function (context, req , data) {
    context.log('JavaScript HTTP trigger function processed a request.');

    context.bindings.outputDocument = data[0];
    context.bindings.outputDocument.count += 1;
    context.res = {
        // status: 200, /* Defaults to 200 */
        body: data[0].count
    };
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1e0lsvjcgp7yvvhqvy01.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1e0lsvjcgp7yvvhqvy01.PNG" alt="Image description" width="800" height="321"&gt;&lt;/a&gt;&lt;br&gt;
In the code above, the counter document from the database binding is accessed, its count is incremented, and the updated document is written back through the output binding. The new visitor count is then sent as the response body to the frontend.&lt;/p&gt;
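&lt;p&gt;The heart of the function is a small, pure transformation, which can be sketched in isolation like this (the function name is illustrative; in the deployed function the document comes from the Cosmos DB input binding):&lt;/p&gt;

```javascript
// Illustrative sketch of the increment logic: take the current counter
// document, bump the count, and report the new value. In the deployed
// function, 'doc' is supplied by the Cosmos DB input binding and
// 'outputDocument' is written back through the output binding.
function incrementVisitorCount(doc) {
    const updated = Object.assign({}, doc, { count: doc.count + 1 });
    return { outputDocument: updated, responseBody: updated.count };
}
```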

&lt;p&gt;With that, the backend API is complete. Next, we will implement the frontend so that it communicates with the backend API and updates the page.&lt;br&gt;
We will create a JS file that makes the fetch request to the API and, based on the response received, modifies the HTML to display the visitor number.&lt;br&gt;
Here is the code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;window.addEventListener('DOMContentLoaded', (e) =&amp;gt; {
    getVisitorCount();
})
const myApiLink = {Your Azure function link};
const getVisitorCount =() =&amp;gt; {
    let count = 0;
fetch(myApiLink , {
    mode: 'cors'
}).then(response =&amp;gt; {
    return response.json() }
).then(res =&amp;gt; {
const count = res;
document.getElementById('visitorCount').innerHTML = count;
})
return count;
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code above is the frontend JavaScript that retrieves the number of visits and updates the visitor count on the page. Let's walk through it piece by piece.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;window.addEventListener('DOMContentLoaded', (e) =&amp;gt; {
    getVisitorCount();
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures that the &lt;code&gt;getVisitorCount&lt;/code&gt; function executes only after the web page has loaded. We also defined the &lt;code&gt;myApiLink&lt;/code&gt; variable, which is the URL of the Azure function we built in the preceding section. You can obtain this link on the function's dashboard.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const getVisitorCount =() =&amp;gt; {
    let count = 0;
fetch(myApiLink , {
    mode: 'cors'
}).then(response =&amp;gt; {
    return response.json() }
).then(res =&amp;gt; {
const count = res;
document.getElementById('visitorCount').innerHTML = count;
})
return count;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code above initializes the count to 0, uses a fetch request to call the Azure function, and dynamically updates the homepage with the result. &lt;br&gt;
With that, we have come to the end of the tutorial. The screenshot below shows the expected result.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6n76wfcohhw2jzaxszx3.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6n76wfcohhw2jzaxszx3.PNG" alt="Image description" width="800" height="245"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Additional Info.&lt;/h2&gt;

&lt;p&gt;So far, we have implemented the visitor-count API for our resume project. You can extend the challenge by tracking only unique visits to the site; this can be achieved by storing a representation (for example, a hash) of each visitor's IP, so that repeat visits by the same visitor are not counted multiple times. Additionally, you can experiment with other Azure storage services, such as Azure File or Blob Storage, in place of Cosmos DB to obtain similar results.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;You can also interact with me on my blog and check out my other articles &lt;a href="//link.ee/tobilyn77"&gt;here&lt;/a&gt;. Till next time, keep on coding!&lt;/p&gt;

</description>
      <category>azure</category>
      <category>cosmosdb</category>
      <category>serverless</category>
      <category>azurefunctions</category>
    </item>
    <item>
      <title>Step-by-Step Guide: Hosting Static Webapps on Azure</title>
      <dc:creator>oluwatobi2001</dc:creator>
      <pubDate>Tue, 20 Aug 2024 16:39:08 +0000</pubDate>
      <link>https://dev.to/oluwatobi2001/step-by-step-guide-hosting-static-webapps-on-azure-19ao</link>
      <guid>https://dev.to/oluwatobi2001/step-by-step-guide-hosting-static-webapps-on-azure-19ao</guid>
      <description>&lt;p&gt;Website hosting and deployment is a necessary aspect in  overall web application development. It has  become more popularized and simplified in the advent of new generation deployment sites.  A wide variety of cloud infrastructure services also offers these features with additional benefits such as monitoring and scalability. Here, we will be exploring how to deploy a static website  written in plain HTML &amp;amp; CSS on Microsoft Azure.&lt;/p&gt;

&lt;p&gt;This article builds on a similar &lt;a href="https://www.freecodecamp.org/news/how-to-deploy-node-js-app-on-azure/" rel="noopener noreferrer"&gt;article&lt;/a&gt; that illustrated how to deploy backend servers to the Azure cloud via the Visual Studio Code extension. &lt;br&gt;
Here are some prerequisites to follow along with this article.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An Azure Account&lt;/li&gt;
&lt;li&gt;A webpage template to be deployed. You can get sample webpage templates &lt;a href="https://www.themezy.com/all-free/resume" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With this, let's get started.&lt;/p&gt;

&lt;h2&gt;Setting up an Azure account&lt;/h2&gt;

&lt;p&gt;To begin, we need an Azure account to follow along and implement this tutorial. Azure provides free credits for your first 30 days. To create a free account, click on this &lt;a href="https://go.microsoft.com/fwlink/?linkid=2227353&amp;amp;clcid=0x409&amp;amp;l=en-us&amp;amp;srcurl=https%3A%2F%2Fazure.microsoft.com%2Ffree%2Fsearch%2F%3F%26ef_id%3D_k_cj0kcqjwq_g1bhcsarisacc7nxo2p0d9qffttai_qsmf2joncdbnecfj12xzg9hfctebr93nb0t0t6aaamrjealw_wcb_k_%26ocid%3Daidcmmfdukp5kz_sem__k_cj0kcqjwq_g1bhcsarisacc7nxo2p0d9qffttai_qsmf2joncdbnecfj12xzg9hfctebr93nb0t0t6aaamrjealw_wcb_k_%26gad_source%3D1%26gclid%3Dcj0kcqjwq_g1bhcsarisacc7nxo2p0d9qffttai_qsmf2joncdbnecfj12xzg9hfctebr93nb0t0t6aaamrjealw_wcb" rel="noopener noreferrer"&gt;link&lt;/a&gt;. &lt;/p&gt;

&lt;h2&gt;Setting up a Resource group&lt;/h2&gt;

&lt;p&gt;On completion of the sign-up process, we have access to the Azure portal, which contains the wide variety of cloud services we intend to explore. We will then create a new resource group to help manage the static website we will be building. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0as9shefnnunjdjnwx5k.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0as9shefnnunjdjnwx5k.PNG" alt="Resource group homepage" width="800" height="273"&gt;&lt;/a&gt;&lt;br&gt;
With the resource group created, we will now create an Azure storage account to hold our website files. The storage account service can be found by searching the Azure Marketplace. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fivloiyyzbuieb2742c7g.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fivloiyyzbuieb2742c7g.PNG" alt="MarketPlace" width="800" height="286"&gt;&lt;/a&gt;&lt;br&gt;
 The default storage of choice to host static website files is the &lt;strong&gt;Blob&lt;/strong&gt; storage. &lt;/p&gt;

&lt;h2&gt;Setting up a Storage Account&lt;/h2&gt;

&lt;p&gt;After locating the storage account service in the Marketplace, click on the Microsoft storage account offering and then on Create.&lt;br&gt;
Microsoft Azure provides an easy-to-use form for creating the storage account.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbd7xynm5g33t35q940i1.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbd7xynm5g33t35q940i1.PNG" alt="Azure" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the storage account name field, enter a name of your choice. It's also advisable to choose the region geographically closest to you to reduce request latency; for this illustration, I selected the US East region. To keep costs down, you can set the redundancy option to &lt;code&gt;locally redundant storage&lt;/code&gt; (LRS), which keeps copies of your data within a single region.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnevcsbnwfz5evl8vsski.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnevcsbnwfz5evl8vsski.PNG" alt="Storage account creation" width="774" height="322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can leave all other tabs at their defaults, then click on &lt;strong&gt;Review and Create&lt;/strong&gt; and voilà, your storage account is created.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcos3q6rw1t75sz3mokqc.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcos3q6rw1t75sz3mokqc.PNG" alt="creating a storage account" width="800" height="526"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuehnab4tj00rh42e01is.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuehnab4tj00rh42e01is.PNG" alt="storage account" width="270" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the scenario above, the storage account name was set to &lt;code&gt;ty6&lt;/code&gt;; any name of your choice can be used instead. &lt;br&gt;
Now that the storage account is created, let's deploy our website files. &lt;/p&gt;

&lt;p&gt;Before uploading your files to the storage account we just created, navigate to the Data management section on the left and enable the Static website feature: click the Enabled toggle and save.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fex41kr2l2lvz6vgv0yhw.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fex41kr2l2lvz6vgv0yhw.PNG" alt="enable static web app" width="800" height="313"&gt;&lt;/a&gt;.&lt;br&gt;
Now that we have enabled the static website feature, navigate back to the storage account page to upload the HTML and CSS files of the website you intend to host.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up the Static WebApp
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Futy5axm5nhg8ohftzvt5.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Futy5axm5nhg8ohftzvt5.PNG" alt="uploading web files" width="800" height="251"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On this screen, clicking the upload button opens a pane where our files can be uploaded; multiple files can be uploaded at once.&lt;/p&gt;

&lt;p&gt;On successful upload, specify the name of the home page document in the static website settings. In my case, it is &lt;code&gt;index.html&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmh16kx2iqqh2zxbijm4v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmh16kx2iqqh2zxbijm4v.png" alt="Link Page" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thereafter, a link to the primary website endpoint is generated and shown on the static website page. Navigating to the link displays your hosted static site, which can be accessed by anyone around the world. Here is a link to &lt;a href="https://tobb.z13.web.core.windows.net/" rel="noopener noreferrer"&gt;mine&lt;/a&gt;. &lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm304ohe0xrbc0yi99slt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm304ohe0xrbc0yi99slt.png" alt="Website address" width="800" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;With that, we have come to the end of the tutorial. Feel free to check out my other articles &lt;a href="//linktr.ee/tobilyn77"&gt;here&lt;/a&gt;. Till next time, keep on coding!&lt;/p&gt;

</description>
      <category>azureresumechallenge</category>
      <category>azure</category>
      <category>staticwebapps</category>
      <category>storage</category>
    </item>
    <item>
      <title>Achieving Atomicity in Mongo DB Database operations</title>
      <dc:creator>oluwatobi2001</dc:creator>
      <pubDate>Tue, 13 Aug 2024 20:30:27 +0000</pubDate>
      <link>https://dev.to/oluwatobi2001/achieving-atomicity-in-mongo-db-database-operations-15dj</link>
      <guid>https://dev.to/oluwatobi2001/achieving-atomicity-in-mongo-db-database-operations-15dj</guid>
      <description>&lt;p&gt;Databases play an integral role in the overall web architecture, and it's important to store server data in a way that meets the needs of users. Bearing this in mind, the developer needs to ensure that the database follows the best practices available.&lt;br&gt;
In this article, database atomicity will be discussed in more detail, and a demo project will be built to illustrate it, with Node.js and MongoDB serving as our backend runtime and database of choice. Here are some of the prerequisites for this tutorial.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Knowledge of MongoDB database&lt;/li&gt;
&lt;li&gt;Knowledge of Node JS.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  What is Atomicity?
&lt;/h2&gt;

&lt;p&gt;Atomicity simply means that any transaction performed on a given database executes as a single, indivisible unit. If an error occurs at any point while the transaction is executing, the entire transaction is aborted; if no error occurs, the transaction is executed in full. Contextually, transactions refer to groups of database operations. Any changes made to the database before the error are rolled back, returning it to the state it was in before the transaction started. This is one of the many important characteristics of an ideal database. More information regarding these best practices can be found in this &lt;a href="https://www.freecodecamp.org/news/database-optimization-principles/" rel="noopener noreferrer"&gt;article&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Real-life Use cases of Database Atomicity
&lt;/h2&gt;

&lt;p&gt;Atomicity helps to prevent partial, inconsistent database updates. A single database transaction may contain several operations waiting to be executed, especially when recording financial transactions that involve debiting one account, crediting another and storing the transaction details. With the atomic principle in place, no changes are applied to the database unless every operation within the transaction completes successfully.&lt;br&gt;
We will now demonstrate how to implement atomicity in a MongoDB project, using a controller function defined in our demo banking application.&lt;/p&gt;
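&lt;p&gt;To make the all-or-nothing idea concrete before touching MongoDB, here is a minimal in-memory sketch in plain JavaScript (the account names and amounts are illustrative): either both the debit and the credit are applied, or neither is.&lt;/p&gt;

```javascript
// A toy "transfer" with all-or-nothing semantics and no database.
// We mutate a draft copy and only publish it if every step succeeds,
// so a failure leaves the original state completely untouched.
function transfer(accounts, sender, receiver, amount) {
  const draft = { ...accounts }; // work on a copy, not the live state

  if (amount > accounts[sender]) {
    throw new Error("Insufficient funds"); // abort: nothing was changed
  }
  draft[sender] -= amount;   // debit
  draft[receiver] += amount; // credit

  return draft; // "commit": both changes become visible at once
}

const before = { alice: 100, bob: 50 };
const after = transfer(before, "alice", "bob", 30);
console.log(after);        // { alice: 70, bob: 80 }
console.log(before.alice); // 100: the original state was never touched

try {
  transfer(before, "alice", "bob", 500); // fails: no partial debit survives
} catch (e) {
  console.log(before.alice); // still 100
}
```

&lt;p&gt;MongoDB transactions give this same guarantee across multiple documents.&lt;/p&gt;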
&lt;h2&gt;
  
  
  Implementing Atomicity in MongoDB: Demo project
&lt;/h2&gt;

&lt;p&gt;To set up the project, create an empty folder and initialize a node project within that folder by running &lt;code&gt;npm init&lt;/code&gt; on the command line. Then, install the relevant packages. &lt;a href="https://mongoosejs.com/" rel="noopener noreferrer"&gt;Mongoose&lt;/a&gt; will serve as our node library for interacting with the MongoDB server. Thereafter, proceed to set up the project and ensure MongoDB is connected to the Node.js application. Note that MongoDB multi-document transactions, which we will use later, require a replica set or sharded cluster; a standalone server will reject them.&lt;/p&gt;

&lt;p&gt;Here is the initial code to send money across to a receiver.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;exports.sendMoney = async (req, res) =&amp;gt; {
  const { recepientID, amount, pin } = req.body;
  const { emailAddress, id } = req.user;

  try {
    // Verify if the recipient account exists
    const verifyReceiver = await Account.findOne({ acctNo: recepientID });
    if (!verifyReceiver) {
      return res.status(400).json("Wrong account credentials, please recheck details");
    }

    // Verify the sender's account and balance
    const verifyBalance = await Account.findOne({ acctOwner: emailAddress });
    if (!verifyBalance) {
      return res.status(400).json("Sender account not found");
    }

    if (verifyBalance.acctBalance &amp;lt; amount) {
      return res.status(401).json("Insufficient funds, kindly top up your account in order to proceed");
    }

    if (verifyBalance.acctBalance === 0) {
      return res.status(400).json("Sorry, your account balance is too low.");
    }

    if (verifyBalance.acctPin !== pin) {
      return res.status(400).json("Incorrect Pin. Check and try again");
    }

    // Deduct amount from sender's balance and save
    const newBalance = verifyBalance.acctBalance - amount;
    verifyBalance.acctBalance = newBalance;
    await verifyBalance.save();

    // Add amount to receiver's balance and save
    const creditedBalance = verifyReceiver.acctBalance + amount;
    verifyReceiver.acctBalance = creditedBalance;
    await verifyReceiver.save();

    // Create transaction details
    const transactionDets = {
      amount,
      receiver: recepientID,
      sender: emailAddress,
      status: "successful"
    };
    const newTransaction = await Transaction.create(transactionDets);

    // Update sender's account with the transaction
    const updatedAccount = await Account.findByIdAndUpdate(
      verifyBalance._id,
      { $push: { transactions: newTransaction._id } },
      { new: true, useFindAndModify: false }
    );

    console.log(updatedAccount);

    return res.status(200).json("Transaction successful");

  } catch (err) {
    console.error(err);
    return res.status(500).json("Sorry, this transaction cannot be completed currently. Try again later");
  }
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code above is a mini representation of how a financial transaction is executed in Node.js. Through &lt;code&gt;async&lt;/code&gt;/&lt;code&gt;await&lt;/code&gt;, it ensures the orderly, sequential execution of the operations involved. However, in cases of errors in the user's input or in the model schemas themselves, the function might not complete successfully and may return a &lt;code&gt;500&lt;/code&gt; or &lt;code&gt;400&lt;/code&gt; error as the case may be. &lt;/p&gt;

&lt;p&gt;Notwithstanding the error, some documents would already have been created or updated, giving room for data inconsistency: for instance, the sender debited but the receiver never credited. Now how do we prevent this?&lt;/p&gt;
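&lt;p&gt;Here is a tiny in-memory simulation of that failure mode (illustrative only, no MongoDB involved): the debit step succeeds, the credit step throws, and the sender's money is simply gone.&lt;/p&gt;

```javascript
// Simulates the non-atomic controller above with plain objects instead of
// collections. The credit step is forced to fail, and the earlier debit is
// NOT rolled back, leaving the two balances inconsistent.
const accounts = { sender: { balance: 100 }, receiver: { balance: 20 } };

async function sendMoneyNaive(amount) {
  accounts.sender.balance -= amount; // step 1: debit (succeeds)
  throw new Error("network error");  // step 2: credit fails before it runs
}

sendMoneyNaive(30).catch(() => {
  // The error was handled, but the data is now inconsistent:
  console.log(accounts.sender.balance);   // 70  (money was deducted)
  console.log(accounts.receiver.balance); // 20  (but never credited)
});
```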

&lt;p&gt;Thankfully, MongoDB was built with the need to achieve the best database practices in mind. Here is how it solves the problem.&lt;/p&gt;

&lt;p&gt;MongoDB fulfils database atomicity through &lt;strong&gt;MongoDB sessions&lt;/strong&gt;. A session in MongoDB groups multiple operations together, allowing them to be executed as a single transaction. We will now discuss the functions that are called to maintain the atomicity of a database transaction.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;startSession&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;startTransaction&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;commitTransaction&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;abortTransaction&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;endSession&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;startSession:&lt;/strong&gt; This function is usually called at the beginning of the function which encapsulates all operations within it as a single executable unit.&lt;br&gt;
  &lt;code&gt;const session = await mongoose.startSession();&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;endSession:&lt;/strong&gt; This is called at the end of the database operations to terminate a session opened.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;session.endSession();&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;startTransaction:&lt;/strong&gt; This function is invoked to begin execution of the transaction within the MongoDB session. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;session.startTransaction();&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;commitTransaction:&lt;/strong&gt; This function ensures that the entire database operation gets executed altogether. This function is usually called after all data operations have been called successfully without any error occurring.&lt;/p&gt;

&lt;p&gt;&lt;code&gt; session.commitTransaction();&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;abortTransaction:&lt;/strong&gt; This function immediately cancels the entire transaction whenever it is executed. To ensure consistency and atomicity, it is best invoked while handling errors that may come up while executing the database operation. Appropriate knowledge and use of error handlers will come in handy here.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;session.abortTransaction();&lt;/code&gt;&lt;/p&gt;
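&lt;p&gt;The five calls above combine into a standard skeleton: start, try the operations, commit on success, abort on error, and always end the session. The sketch below wires them together around a stubbed session object (a hypothetical stand-in for &lt;code&gt;mongoose.startSession()&lt;/code&gt;, so the control flow can be run without a database):&lt;/p&gt;

```javascript
// Stub session: records which lifecycle calls happen, in what order.
// A real session would come from `await mongoose.startSession()`.
function makeSession(log) {
  return {
    startTransaction: () => log.push("start"),
    commitTransaction: async () => log.push("commit"),
    abortTransaction: async () => log.push("abort"),
    endSession: () => log.push("end"),
  };
}

// The generic lifecycle: commit only if every operation succeeded.
async function runAtomically(session, operations) {
  session.startTransaction();
  try {
    await operations();                // all database writes go here
    await session.commitTransaction(); // all writes become visible together
    return "ok";
  } catch (err) {
    await session.abortTransaction();  // every write is rolled back
    return "rolled back";
  } finally {
    session.endSession();              // always release the session
  }
}

// Success path logs: [ 'start', 'commit', 'end' ]
const okLog = [];
runAtomically(makeSession(okLog), async () => {}).then(() => console.log(okLog));

// Failure path logs: [ 'start', 'abort', 'end' ]
const failLog = [];
runAtomically(makeSession(failLog), async () => { throw new Error("boom"); })
  .then(() => console.log(failLog));
```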

&lt;p&gt;Here is the complete code.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const mongoose = require('mongoose');

&lt;p&gt;exports.sendMoney = async (req, res) =&amp;gt; {&lt;br&gt;
  const { recepientID, amount, pin } = req.body;&lt;br&gt;
  const { emailAddress, id } = req.user;&lt;/p&gt;

&lt;p&gt;// Start a session and a transaction&lt;br&gt;
  const session = await mongoose.startSession();&lt;br&gt;
  session.startTransaction();&lt;/p&gt;

&lt;p&gt;try {&lt;br&gt;
    // Verify if the recipient account exists&lt;br&gt;
    const verifyReceiver = await Account.findOne({ acctNo: recepientID }).session(session);&lt;br&gt;
    if (!verifyReceiver) {&lt;br&gt;
      await session.abortTransaction();&lt;br&gt;
      session.endSession();&lt;br&gt;
      return res.status(400).json("Wrong account credentials, please recheck details");&lt;br&gt;
    }&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Verify the sender's account and balance
const verifyBalance = await Account.findOne({ acctOwner: emailAddress }).session(session);
if (!verifyBalance) {
  await session.abortTransaction();
  session.endSession();
  return res.status(400).json("Sender account not found");
}

if (verifyBalance.acctBalance &amp;amp;lt; amount) {
  await session.abortTransaction();
  session.endSession();
  return res.status(401).json("Insufficient funds, kindly top up your account in order to proceed");
}

if (verifyBalance.acctBalance === 0) {
  await session.abortTransaction();
  session.endSession();
  return res.status(400).json("Sorry, your account balance is too low.");
}

if (verifyBalance.acctPin !== pin) {
  await session.abortTransaction();
  session.endSession();
  return res.status(400).json("Incorrect Pin. Check and try again");
}

// Deduct amount from the sender's balance and save
const newBalance = verifyBalance.acctBalance - amount;
verifyBalance.acctBalance = newBalance;
await verifyBalance.save({ session });

// Add the amount to the receiver's balance and save
const creditedBalance = verifyReceiver.acctBalance + amount;
verifyReceiver.acctBalance = creditedBalance;
await verifyReceiver.save({ session });

// Create transaction details
const transactionDets = {
  amount,
  receiver: recepientID,
  sender: emailAddress,
  status: "successful"
};
const newTransaction = await Transaction.create([transactionDets], { session });

// Update sender's account with the transaction
const updatedAccount = await Account.findByIdAndUpdate(
  verifyBalance._id,
  { $push: { transactions: newTransaction._id } },
  { new: true, useFindAndModify: false, session }
);

console.log(updatedAccount);

// Commit the transaction
await session.commitTransaction();
session.endSession();

return res.status(200).json("Transaction successful");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;} catch (err) {&lt;br&gt;
    // If an error occurs, abort the transaction and end the session&lt;br&gt;
    await session.abortTransaction();&lt;br&gt;
    session.endSession();&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;console.error(err);
return res.status(500).json("Sorry, this transaction cannot be completed currently. Try again later");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;}&lt;br&gt;
};&lt;br&gt;
&lt;/p&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Additional information
&lt;/h3&gt;

&lt;p&gt;So far, we have come to the end of the tutorial. Achieving database efficiency isn't limited to atomicity: other key fundamentals such as database indexing, sharding and isolation are also important. &lt;/p&gt;

&lt;p&gt;Feel free to check out my other articles &lt;a href="//linktr.ee/tobilyn77"&gt;here&lt;/a&gt;. Till next time, keep on coding!&lt;/p&gt;

</description>
      <category>atomicity</category>
      <category>mongodb</category>
      <category>node</category>
      <category>database</category>
    </item>
    <item>
      <title>A Beginners guide to Building Content Scripts</title>
      <dc:creator>oluwatobi2001</dc:creator>
      <pubDate>Sat, 27 Jul 2024 06:13:42 +0000</pubDate>
      <link>https://dev.to/oluwatobi2001/a-beginners-guide-to-building-content-scripts-df</link>
      <guid>https://dev.to/oluwatobi2001/a-beginners-guide-to-building-content-scripts-df</guid>
      <description>&lt;p&gt;Browser extensions are add-ons to browsers that extend a site's functionality and provide an optimal user experience.&lt;br&gt;
The concept of content scripts in extension development is useful knowledge for any developer to acquire, as it has significantly expanded the use cases of browser extensions. &lt;/p&gt;

&lt;p&gt;This article aims to introduce what content scripts are and how they work. There will also be a demo project in which the basics of Chrome extensions will be discussed and a simple content script will be used in our extension. With that, let's get started. &lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding Content Scripts
&lt;/h2&gt;

&lt;p&gt;First of all, what is a content script? Content scripts are JavaScript files that a browser extension injects into web pages, where they execute to read and modify the page. &lt;/p&gt;

&lt;p&gt;A content script achieves this by interacting with the web page's Document Object Model (DOM), the tree structure underlying the page. The manner in which Chrome content scripts act to modify the web page is usually termed &lt;strong&gt;injection&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;Having had a brief intro to content scripts, we would then go on to implement it on our web pages. But before then, we need to set up our browser extension which will power the script.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up Your Chrome Extension
&lt;/h2&gt;

&lt;p&gt;Setting up a Chrome extension is pretty straightforward. For further reference on building extensions, see the Chrome extension documentation &lt;a href="https://developer.chrome.com/docs/extensions" rel="noopener noreferrer"&gt;page&lt;/a&gt;. &lt;br&gt;
An ideal Chrome extension must include a well-detailed &lt;code&gt;manifest.json&lt;/code&gt; file, which provides the default background information about the extension.&lt;br&gt;
The &lt;code&gt;JS&lt;/code&gt; file to be executed is also included, and other additional files &lt;code&gt;(HTML and CSS)&lt;/code&gt; help provide aesthetics to the extension.&lt;br&gt;
With that, let's go on to build our extension, incorporating our content script injection. We will illustrate the power of content scripts by creating a Chrome extension that displays a button overlaid on any active web page we navigate to. &lt;/p&gt;

&lt;h2&gt;
  
  
  Writing a Manifest file
&lt;/h2&gt;

&lt;p&gt;In this section, the parts of the manifest file will be highlighted and discussed. Here is the code to the manifest file for the project. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
    "manifest_version": 3,
    "name": "Add Button",
    "version": "1.0",
    "description": "An extension that alerts a response when clicked",
    "permissions": ["activeTab"],
    "content_scripts": [
        {
            "matches": ["&amp;lt;all_urls&amp;gt;"],
            "js": ["ContentScript.js"],
            "css": ["Button.css"]
        }
    ]
}



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Manifest version:&lt;/strong&gt; The manifest version is required. It should be set to 3, as Manifest V3 is a significant upgrade over version 2.&lt;br&gt;
&lt;strong&gt;Name:&lt;/strong&gt; The name of the extension is also typed in the manifest file. In my case, the project is named &lt;code&gt;Add Button&lt;/code&gt;. That can be tweaked to suit the user's preference. &lt;br&gt;
&lt;strong&gt;Version:&lt;/strong&gt; The version of the Chrome extension is also specified. In our case, this is the first version of the extension, hence it's named &lt;code&gt;1.0&lt;/code&gt;; subsequent improvements can prompt modifying this field to increase the version accordingly.&lt;br&gt;
&lt;strong&gt;Description:&lt;/strong&gt; A description of what the extension does helps non-technical users understand the extension. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Subsequent points raised are quite cogent in building the content scripts.&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;The &lt;code&gt;permissions&lt;/code&gt; field declares what the extension is allowed to access, which also prevents the content scripts from running in unexpected tabs and web pages. It allows us to list all the permissions our Chrome extension might require: some extensions need access to browser storage, other Chrome APIs or particular sites. In this project, we are limiting our Chrome extension to just the &lt;code&gt;active browser tab&lt;/code&gt; being used. Keeping permissions minimal reduces the risk of the extension compromising unrelated parts of the browser.&lt;/p&gt;
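&lt;p&gt;For example, an extension that also needed browser storage and access to one specific site could declare the following (the &lt;code&gt;storage&lt;/code&gt; permission and host pattern here are hypothetical, shown only to illustrate the fields):&lt;/p&gt;

```json
"permissions": ["activeTab", "storage"],
"host_permissions": ["https://api.example.com/*"]
```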

&lt;p&gt;We will then configure the &lt;code&gt;content_scripts&lt;/code&gt; field in our manifest file. &lt;br&gt;
This field specifies the various code files we intend to inject into our web pages. &lt;br&gt;
It contains the &lt;code&gt;matches&lt;/code&gt; subfield, which specifies the URL patterns of the web pages we want it to act upon. For ease of use, we included all URLs, allowing the script to act on every web page we access. You can, however, restrict it to specific URL patterns (e.g. only Google pages) and control when the script runs with the &lt;code&gt;run_at&lt;/code&gt; key: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

"matches": ["https://*.google.com/*"],
"run_at": "document_idle"


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;js&lt;/code&gt; field lists the files which contain the injection code. In our case, &lt;br&gt;
our JS file is named &lt;code&gt;ContentScript.js&lt;/code&gt;. We also specified the &lt;code&gt;CSS&lt;/code&gt; file used in styling this project. &lt;/p&gt;

&lt;p&gt;With this, we have a miniature implementation of the manifest file for our project. We will go on to write our injection code in the subsequent section.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating Content Scripts
&lt;/h2&gt;

&lt;p&gt;In the spirit of keeping things simple, we would be creating a simple button that when clicked upon, shows an alert message. This button is expected to overlay the existing webpage. &lt;br&gt;
Here is the code below  &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

// Create a button element
const button = document.createElement("button");

// Set the button's text content
button.textContent = "Click me";

// Set the button's ID
button.id = "clickMe";

// Append the button to the document body
document.body.appendChild(button);

// Add a click event listener to the button
button.addEventListener("click", () =&amp;gt; {
  // Show an alert when the button is clicked
  alert("Click event listener was added");

  // Log a message to the console
  console.log("Hello world");
});



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The styling can be changed to suit your preference however a styling template has been included in the code repository.&lt;/p&gt;

&lt;p&gt;Here is a picture of its implementation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fikwyjbbqcd7ek3uutg75.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fikwyjbbqcd7ek3uutg75.PNG" alt="chrome extension"&gt;&lt;/a&gt;&lt;br&gt;
Here is the link to the &lt;a href="//github.com/oluwatobi2001/wk2.git"&gt;source code &lt;/a&gt; containing the code styling.&lt;/p&gt;

&lt;h2&gt;
  
  
  Advanced Techniques and Use Cases
&lt;/h2&gt;

&lt;p&gt;So far we have completed the project. However to advance one's knowledge, here are some of the advanced techniques and best practices you can also implement while building content scripts. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cross-interaction with browser background scripts&lt;/li&gt;
&lt;li&gt;Implementation of data state managers  to allow for dynamic scripting&lt;/li&gt;
&lt;li&gt;Integrating other external APIs allows for data manipulation and analysis&lt;/li&gt;
&lt;li&gt;Employing caching strategies in order to optimize extension performance&lt;/li&gt;
&lt;li&gt;Integrating Content scripts with service workers&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;You can also interact with me on my blog and check out my other articles &lt;a href="//linktr.ee/tobilyn77"&gt;here&lt;/a&gt;. Till next time, keep on coding!&lt;/p&gt;

</description>
      <category>extensions</category>
      <category>javascript</category>
      <category>script</category>
      <category>node</category>
    </item>
    <item>
      <title>Mastering Linux: Easy Tips for Locating Files Folders and Text</title>
      <dc:creator>oluwatobi2001</dc:creator>
      <pubDate>Thu, 06 Jun 2024 21:50:51 +0000</pubDate>
      <link>https://dev.to/oluwatobi2001/mastering-linux-easy-tips-for-locating-files-folders-and-text-1gnc</link>
      <guid>https://dev.to/oluwatobi2001/mastering-linux-easy-tips-for-locating-files-folders-and-text-1gnc</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;I see a world where every device will utilize Linux in the near future. It's currently the driving force for open-source development globally, and it has cemented its relevance in today's world as the backbone of many applications and services. A working knowledge of this operating system, and the ability to execute programs with it, gives the developer an advantage. &lt;br&gt;
This article, which is the first of many, serves to guide the developer on how to perform basic search commands using Linux. These are needed to locate files, directories and text.&lt;br&gt;
As a popular programmer stated, the best way to learn Linux is to use it. Having Linux installed and running is a prerequisite to this tutorial, to facilitate easy comprehension and practice. With this completed, let's get started.&lt;/p&gt;

&lt;h2&gt;
  
  
  Linux search commands
&lt;/h2&gt;

&lt;p&gt;Searching for files can be cumbersome and complex for the newbie Linux user, especially as it's a sharp contrast to Windows. Thankfully, newer Linux distros include graphical user interfaces to enhance the user experience, but mastery of the Linux search commands via the command line is still relevant, hence this tutorial. We will now introduce the various Linux commands used for file and text searches. They include&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;locate&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;find&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;grep&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;find&lt;/code&gt; and &lt;code&gt;locate&lt;/code&gt; commands are specifically designed for searching for files and directories within the Linux file system, while &lt;code&gt;grep&lt;/code&gt; is suited to locating text within the text files of the system. Details regarding their use cases and relevant examples will be provided in the subsequent sections.&lt;/p&gt;

&lt;h2&gt;
  
  
  Locate command
&lt;/h2&gt;

&lt;p&gt;This command, alongside the &lt;code&gt;find&lt;/code&gt; command mentioned earlier, is used to search for files by name within the Linux file system directories. But how is this command different from &lt;code&gt;find&lt;/code&gt;, seeing that they appear quite synonymous? &lt;br&gt;
 Firstly, the &lt;code&gt;locate&lt;/code&gt; command looks the file up in a prebuilt database of filenames, which is refreshed automatically (typically by a daily &lt;code&gt;updatedb&lt;/code&gt; job) or on demand by running &lt;code&gt;updatedb&lt;/code&gt; manually. &lt;br&gt;
 This gives it a much faster search time than &lt;code&gt;find&lt;/code&gt;. However, it has a minor flaw: files saved after the database was last updated won't be shown in the search results. &lt;br&gt;
Here is a command to search for the file &lt;code&gt;"laptop.txt"&lt;/code&gt; using locate (note that the command name is lowercase; Linux commands are case-sensitive). &lt;br&gt;
&lt;code&gt;locate laptop&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmy56k6wsp4o900ybw4zu.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmy56k6wsp4o900ybw4zu.JPG" alt="Image description" width="467" height="36"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, the file name is searched for in the &lt;code&gt;plocate&lt;/code&gt; database, which does not yet have the file name indexed. &lt;br&gt;
With that, we have successfully discussed the locate command. Up next is the find command.&lt;/p&gt;

&lt;h2&gt;
  
  
  Find Command
&lt;/h2&gt;

&lt;p&gt;The find command, as highlighted briefly in the previous paragraph, can also be used to locate files within a given directory. However, unlike the locate command, which searches an indexed database of filenames, the find command walks the file system tree itself to locate the files in question. This results in a much slower response time compared to locate. In exchange, it returns every file matching the search query at the moment the command runs, irrespective of when the file was saved. &lt;br&gt;
Here is a command to search for the file "laptop.txt" beneath the current directory using find (find takes a starting path and a test such as &lt;code&gt;-name&lt;/code&gt;). &lt;br&gt;
&lt;code&gt;find . -name "laptop.txt"&lt;/code&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgu0mtj5ok82suazq6s8j.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgu0mtj5ok82suazq6s8j.JPG" alt="Image description" width="367" height="35"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Grep
&lt;/h2&gt;

&lt;p&gt;This is a command used in Linux to search for words or phrases within text files. Its name is an acronym for "global regular expression print". It goes beyond locating the files which contain the text in question: it also prints the lines where the text is found. Here is an example of how grep can be used to search recursively under the current directory. &lt;br&gt;
&lt;code&gt;grep -r laptop .&lt;/code&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxublm580w1m3kuql8ml.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxublm580w1m3kuql8ml.JPG" alt="Image description" width="452" height="73"&gt;&lt;/a&gt;&lt;br&gt;
The above command searches for the text &lt;code&gt;"laptop"&lt;/code&gt; in the current directory and outputs the matching lines along with the files where they were found. By default, grep is case-sensitive, matching the text only in the exact case given. However, it can also produce case-insensitive matches: to achieve this, the &lt;code&gt;-i&lt;/code&gt; flag is added to the command. &lt;br&gt;
Here is a command to search for the text "laptop" in &lt;code&gt;laptop.txt&lt;/code&gt;, eliminating the case sensitivity. &lt;br&gt;
&lt;code&gt;grep -i laptop laptop.txt&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7gn5dih47lqv3fsnktwf.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7gn5dih47lqv3fsnktwf.JPG" alt="Image description" width="527" height="78"&gt;&lt;/a&gt;&lt;br&gt;
This will output all matching lines, ignoring the case of the matched text.&lt;br&gt;
Also, grep provides an inverted search feature via the &lt;code&gt;-v&lt;/code&gt; option, which prints every line that does not contain the pattern and skips the lines that do. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;grep -vi laptop laptop.txt&lt;/code&gt;&lt;br&gt;
So far, these commands can be used to search through the Linux file system to locate files and text within the Linux OS. It is also essential to gain mastery of other Linux file commands such as &lt;code&gt;ls&lt;/code&gt;, &lt;code&gt;mv&lt;/code&gt;, &lt;code&gt;rm&lt;/code&gt; and &lt;code&gt;rmdir&lt;/code&gt;, which are used for file navigation and modifying the file structure. &lt;/p&gt;
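To make the three grep variants above concrete, here is a short, self-contained sketch that can be run in any shell. The file name and its contents are hypothetical, chosen only for illustration:

```shell
# Create a sample file to search (hypothetical contents)
printf 'Laptop stand\nlaptop charger\ndesktop mouse\n' > devices.txt

# Default, case-sensitive search: matches only the lowercase "laptop" line
grep laptop devices.txt

# Case-insensitive search with -i: matches both "Laptop" and "laptop"
grep -i laptop devices.txt

# Inverted search with -v: prints the lines that do NOT contain the pattern
grep -vi laptop devices.txt
```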

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;With this, we have come to the end of the tutorial. We hope you’ve learned the essentials of Linux commands, how to use them and their pros and cons. Feel free to drop any questions or comments in the comment box below. You can also reach out to me on my blog and check out my other articles &lt;a href="//linktr.ee/tobilyn77"&gt;here&lt;/a&gt;. Till next time, keep on coding!&lt;/p&gt;

</description>
      <category>linux</category>
      <category>file</category>
      <category>search</category>
      <category>grep</category>
    </item>
    <item>
      <title>Optimizing Performance Using Prometheus with Node JS for Monitoring</title>
      <dc:creator>oluwatobi2001</dc:creator>
      <pubDate>Tue, 30 Apr 2024 18:25:35 +0000</pubDate>
      <link>https://dev.to/oluwatobi2001/optimizing-performance-using-prometheus-with-node-js-for-monitoring-b90</link>
      <guid>https://dev.to/oluwatobi2001/optimizing-performance-using-prometheus-with-node-js-for-monitoring-b90</guid>
<description>&lt;p&gt;Application performance monitoring is an essential skill you as a developer should get acquainted with: it helps you detect errors early and provide a seamless experience to the end user of your service. &lt;br&gt;
To achieve this, however, you need adequate knowledge of application monitoring tools. &lt;/p&gt;

&lt;p&gt;In this article, we will be talking about the Prometheus tool, its use cases, and its relevance to backend development. Furthermore, we will look into integrating it into our backend application via several packages, and we will also test its functionality. &lt;br&gt;
Additionally, we will touch on how to generate meaningful insights from the metrics scraped by Prometheus using a tool called Grafana. Having laid out our objectives, let’s get started.&lt;br&gt;
Before moving on, here are some prerequisites needed to follow this tutorial efficiently. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Firstly, at least basic to intermediate knowledge of Node JS&lt;/li&gt;
&lt;li&gt;An understanding of, and the ability to use, npm&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is Prometheus?
&lt;/h2&gt;

&lt;p&gt;Prometheus is an open-source system monitoring and observability tool first developed in 2012. It provides monitoring, observability and alerting features for both cloud services and backend applications. It works by querying various endpoints and scraping metrics, which are then stored and can be analyzed to monitor application and cloud performance.&lt;br&gt;
It provides client libraries across several programming languages, making application integration easy. It also provides a dashboard where the data queried and scraped from applications and cloud operations are collected. Its alerting features notify the application developer whenever an anomaly occurs in the application metrics.&lt;br&gt;
Additionally, it offers PromQL (Prometheus Query Language), which allows the developer to run advanced queries over the collected data and generate measurable insights. In this article, we will integrate Prometheus into a sample backend application: a Node JS application powered by the Express framework. It is important to have the software already set up before proceeding to the subsequent sections. Here is a link to the Prometheus &lt;a href="https://prometheus.io/download/" rel="noopener noreferrer"&gt;tool&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;After completing your installation, initialize a default Node JS application and then install the necessary server dependencies.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prometheus Client packages for Node JS
&lt;/h2&gt;

&lt;p&gt;Prometheus, as mentioned earlier, provides client packages for various languages, and Node JS isn’t left out. Here are some of the most popular Prometheus client packages currently available for Node JS.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://npmjs.com/package/express-prom-bundle" rel="noopener noreferrer"&gt;Express-prom-bundle&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://npmjs.com/package/prom-client" rel="noopener noreferrer"&gt;Prom-client&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://npmjs.com/package/prometheus-api-metrics" rel="noopener noreferrer"&gt;prometheus-api-metrics&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://npmjs.com/package/express-prometheus-middleware" rel="noopener noreferrer"&gt;express-prometheus-middleware&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://npmjs.com/package/appmetrics" rel="noopener noreferrer"&gt;appmetrics&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://npmjs.com/package/metrics" rel="noopener noreferrer"&gt;metrics&lt;/a&gt;
And others. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, for this tutorial, the Prom-client package will be used as it has user-friendly documentation, a large user base and strong community support. Here is a link to its &lt;a href="https://github.com/siimon/prom-client" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; and &lt;a href="https://www.npmjs.com/package/prom-client" rel="noopener noreferrer"&gt;npm&lt;/a&gt; page. &lt;/p&gt;

&lt;h2&gt;
  
  
  Node JS integration with Prometheus
&lt;/h2&gt;

&lt;p&gt;To integrate Prometheus metrics tracking into our Node JS application, we first have to install the prom-client package:&lt;br&gt;
&lt;code&gt;npm i prom-client&lt;/code&gt;&lt;br&gt;
On successful completion of this step, we can initialize prom-client in our code base.&lt;br&gt;
In the &lt;code&gt;index.js&lt;/code&gt; file, prom-client can be initialized as follows:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

const client = require("prom-client");


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With this, we have successfully initialized Prometheus into our Node JS project. Subsequently, we would be setting it up to collect relevant metrics and also learn about some of its specific metric features.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tracking metrics using Prometheus             
&lt;/h2&gt;

&lt;p&gt;Prometheus gives the user the flexibility to collect routine backend service metrics via its &lt;code&gt;collectDefaultMetrics&lt;/code&gt; feature. It also lets the user define custom metric collectors, which help assess application performance more reliably. &lt;br&gt;
To collect the default application metrics, the code below can be used. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

const register = new client.Registry();
const collectDefaultMetrics = client.collectDefaultMetrics;

collectDefaultMetrics({
    register
});



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above code initializes a Prometheus registry, a container that holds all the metrics to be collected from the application, and enables the collection of the default Node JS runtime metrics. &lt;/p&gt;

&lt;p&gt;With this properly configured, we have successfully implemented the default metrics collection feature. To access the collected metrics, we create an endpoint which, when accessed, returns all metrics scraped so far. This can be created as shown below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

app.get("/metrics", async (req, res) =&amp;gt; {
    res.setHeader("Content-Type", client.register.contentType);
    let metrics = await register.metrics();
    res.send(metrics);
});



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
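For reference, the body returned by this endpoint uses the Prometheus plain-text exposition format. A shortened excerpt might look like this (the metric names are among prom-client's defaults; the values shown are hypothetical):

```text
# HELP process_cpu_user_seconds_total Total user CPU time spent in seconds.
# TYPE process_cpu_user_seconds_total counter
process_cpu_user_seconds_total 0.12

# HELP nodejs_heap_size_used_bytes Process heap size used from Node.js in bytes.
# TYPE nodejs_heap_size_used_bytes gauge
nodejs_heap_size_used_bytes 8123456
```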

&lt;p&gt;Accessing the &lt;code&gt;/metrics&lt;/code&gt; endpoint will reveal all the scraped metrics collected so far. Next, we will create custom, application-specific metrics to be scraped by the Prometheus library. &lt;br&gt;
Here is the code for creating a custom metric. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

const http_request_counter = new client.Counter({
    name: 'myapp_http_request_count',
    help: 'Count of HTTP requests',
    labelNames: ['method', 'route', 'statusCode']
});

register.registerMetric(http_request_counter);

app.use("/*", function(req, res, next) {
    http_request_counter.labels({
        method: req.method,
        route: req.originalUrl,
        statusCode: res.statusCode
    }).inc();
    // note: register.metrics() returns a Promise, so await it before logging it
    next();
});

 


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the code above, we created a request-counter metric. This counter automatically increases by 1 whenever any endpoint in our backend application is accessed. Also included are the name assigned to the custom metric and the labels to be collected, such as the route, the request method and the status code of the request. &lt;/p&gt;

&lt;p&gt;Here is the final code for the project.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

// Import required modules
const express = require("express");
const client = require("prom-client");
const bodyParser = require("body-parser");
const cors = require("cors");
const connectDB = require("./Config/db");

// Create an Express application
const app = express();

// Initialize Prometheus registry
const register = new client.Registry();

// Configure default Prometheus labels
register.setDefaultLabels({
    app: "blue",
});

// Define Prometheus metrics
const http_request_counter = new client.Counter({
    name: 'myapp_http_request_count',
    help: 'Count of HTTP requests',
    labelNames: ['method', 'route', 'statusCode']
});
const userCounter = new client.Counter({
    name: "user_counter",
    help: "User counter for my application"
});

// Register Prometheus metrics with the registry
register.registerMetric(http_request_counter);
register.registerMetric(userCounter);

// Middleware to count HTTP requests (registered before the routes so it runs for every request)
app.use(function(req, res, next) {
    http_request_counter.labels({
        method: req.method,
        route: req.originalUrl,
        statusCode: res.statusCode
    }).inc();
    next();
});

// Endpoint to expose the collected Prometheus metrics
app.get("/metrics", async (req, res) =&amp;gt; {
    res.setHeader("Content-Type", client.register.contentType);
    let metrics = await register.metrics();
    res.send(metrics);
});

// Catch-all test route that increments the user counter
app.get("/*", (req, res) =&amp;gt; {
    userCounter.inc();
    res.send("test");
});

// Collect default Prometheus metrics (e.g., CPU, memory)
client.collectDefaultMetrics({
    register
});

// Define a Prometheus histogram for response time
// (declared as a starting point; register it and call .observe() to record timings)
const restResponseTimeHistogram = new client.Histogram({
    name: 'rest_response_time_duration_seconds',
    help: 'REST API response time in seconds',
    labelNames: ['method', 'route', 'status_code']
});

// Enable CORS for all routes
app.use(cors({
    origin: '*',
}));

// Parse JSON requests
app.use(express.json());
app.use(bodyParser.json());

// Connect to the database
connectDB();

// Start the server
app.listen(process.env.PORT || 5000, () =&amp;gt; {
    console.log("Server is running...");
});



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here is the endpoint showing the application metrics scraped by Prom-client. &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvknv6vdcj5f5uv0wxh6i.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvknv6vdcj5f5uv0wxh6i.JPG" alt="Prometheus scraping"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With this, we have covered the basics of Prometheus integration with Node JS. More advanced prom-client concepts, such as gauges, histograms and summaries, further help the developer build richer, more relevant metric registries. &lt;/p&gt;

&lt;h2&gt;
  
  
  Additional info (Exploring Grafana)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft9fg0h4amxrt29ovp0wp.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft9fg0h4amxrt29ovp0wp.JPG" alt="Grafana dashboard"&gt;&lt;/a&gt;&lt;br&gt;
Obtaining metrics from your backend service shouldn’t be the end of the story. As powerful as Prometheus is, it is limited in several areas, data visualization among them. Here Grafana comes to the rescue. &lt;br&gt;
Grafana is a data visualization tool that turns the raw metric data queried from Prometheus into meaningful, readable insights. You can check out their documentation &lt;a href="https://grafana.com/docs/" rel="noopener noreferrer"&gt;here&lt;/a&gt; and download the tool &lt;a href="https://grafana.com/grafana/download" rel="noopener noreferrer"&gt;here&lt;/a&gt; for your respective operating system. You can also check out tutorials on how to integrate Prometheus and Grafana together. &lt;/p&gt;
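Once Prometheus is added as a data source in Grafana, panels are driven by PromQL queries. As a sketch, the request counter created earlier in this article could be visualized with queries like these (the metric name matches the one we defined; the exact panel setup is up to you):

```text
# Per-second rate of HTTP requests over the last 5 minutes
rate(myapp_http_request_count[5m])

# The same rate, broken down by route
sum by (route) (rate(myapp_http_request_count[5m]))
```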

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;With this, we have come to the end of the tutorial. We hope you’ve learned the essentials of Prometheus, its uses and how to integrate it into a backend application for efficient monitoring of application performance.&lt;br&gt;
Feel free to drop comments and questions in the box below, and also check out my other articles &lt;a href="//linktr.ee/tobilyn77"&gt;here&lt;/a&gt;. Till next time, keep on coding!&lt;/p&gt;

</description>
      <category>prometheus</category>
      <category>monitoring</category>
      <category>node</category>
      <category>grafana</category>
    </item>
    <item>
      <title>Common Security Vulnerabilities in the Blockchain World</title>
      <dc:creator>oluwatobi2001</dc:creator>
      <pubDate>Wed, 03 Apr 2024 17:59:03 +0000</pubDate>
      <link>https://dev.to/oluwatobi2001/common-security-vulnerabilities-in-the-blockchain-world-4n80</link>
      <guid>https://dev.to/oluwatobi2001/common-security-vulnerabilities-in-the-blockchain-world-4n80</guid>
<description>&lt;p&gt;The blockchain has in no small measure posed a worthy alternative to the traditional banking system, liberalizing access to wealth and enlightening the masses on asset creation and financial intelligence. This and more have made blockchain services enormously popular in recent times, with vast numbers of transactions performed on blockchains every day. Blockchain also allows assets and money to move easily across the geographical barriers faced by traditional banking, eliminating additional fees.&lt;br&gt;
However, the blockchain sector is not without its deficiencies. Due to its decentralized and liberalized structure, a great deal of fraudulent, malicious activity takes place daily, leaving little or no trace. Not too long ago, a major cryptocurrency exchange went bankrupt after a backdoor was found in its systems, resulting in a loss of user funds and leaving many families penniless. These and more are among the pressing concerns of the blockchain industry.&lt;br&gt;
With financial experts predicting that blockchain will before long rival fiat currency, and with countries like El Salvador already adopting cryptocurrency as legal tender, the future seems bright for the industry. But could these security threats hamper the stability of the blockchain and the eventual expansion of the blockchain market?&lt;/p&gt;

&lt;p&gt;This article aims to highlight common security threats and loopholes present in the blockchain industry and the principles behind such attacks. Awareness of these forms of attacks and taking pre-emptive measures to prevent these would prove beneficial in the long run and help build consumer trust. With this, let's get started. &lt;/p&gt;

&lt;h2&gt;
  
  
  Double Spending
&lt;/h2&gt;

&lt;p&gt;The concept of double spending is routinely encountered upon the creation of a new blockchain protocol or web3 dApp. Funny as it sounds, what does double spending mean? It is when a user maliciously exploits flaws in the system to spend the same unit of cryptocurrency more than once on a blockchain protocol. Tackling this can be quite cumbersome, as it requires highly secure cryptographic algorithms and other mechanisms to guard the system against it. Failure to do so can ultimately result in a loss of user trust in the blockchain and losses on the part of its operators. An example of this attack occurred in 2019/20, famously tagged the 51% attack on the Ethereum Classic network. &lt;/p&gt;

&lt;h2&gt;
  
  
  Sybil Attack
&lt;/h2&gt;

&lt;p&gt;This attack involves a group of malicious entities trying to take control of an entire blockchain service by creating multiple nodes with malicious functions. This often results in blockchain manipulation and financial losses. The presence of multiple malicious nodes could also pose a risk to user identity, as they tend to intercept user details and IP addresses, increasing user distrust in the blockchain. The goal of these attacks is to achieve what is popularly known in the blockchain security space as the 51% attack: controlling more than 50% of the blockchain network. In 2018, Verge (XVG) and Bitcoin Gold (BTG) were affected by Sybil attacks, resulting in heavy losses. &lt;/p&gt;

&lt;h2&gt;
  
  
  Distributed Denial-of-Service (DDoS) Attack
&lt;/h2&gt;

&lt;p&gt;Distributed denial of service is a common cause of concern in the web2 space, and the blockchain is no different, although unlike in web2, where DDoS usually slows a website down, the blockchain’s decentralized model gives it a degree of immunity. This form of attack involves massively flooding the blockchain protocol with spam transactions, congesting the ledger network and delaying the completion of legitimate transactions. This too builds user distrust and defeats the very purpose for which blockchain was built.&lt;br&gt;
Also, a DDoS attack may be aimed at smart contracts, creating parasitic contracts that delay the execution of other contracts. Solana and Arbitrum have both been under DDoS attacks in the past.&lt;/p&gt;

&lt;h2&gt;
  
  
  Eclipse Attack
&lt;/h2&gt;

&lt;p&gt;This form of attack entails isolating a specific node within a decentralized system, surrounding it with malicious nodes and exploiting its connections with other nodes to manipulate blockchain transactions. This can be achieved by flooding the node with requests, forcing it to connect to these malicious peers, and injecting malicious data into the node, thereby disrupting the blockchain.&lt;/p&gt;

&lt;h2&gt;
  
  
  Timestamp Manipulation
&lt;/h2&gt;

&lt;p&gt;Every blockchain transaction carries a timestamp representing the time the transaction was performed. As seemingly harmless as this is, it can serve as a point of vulnerability for hackers. The attack involves manipulating the timestamp of a block, disrupting the sequence of smart contract execution and triggering the execution of irrelevant smart contracts, which congest the system and drain the blockchain’s resources. Timestamp-dependent contracts, such as early on-chain lotteries and gambling games, have been exploited this way.&lt;br&gt;
These examples highlight the need for blockchain developers to be aware of these vulnerability mechanisms and to build secure, efficient blockchain protocols and services.&lt;/p&gt;

&lt;p&gt;With this, we have come to the end of the article. Feel free to drop any questions or comments in the comment box below.  Till next time, keep on innovating!&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>security</category>
      <category>hacking</category>
      <category>vulnerabilities</category>
    </item>
    <item>
      <title>OPTICAL CHARACTER RECOGNITION USING NODE JS AND TESSERACT OCR ENGINE</title>
      <dc:creator>oluwatobi2001</dc:creator>
      <pubDate>Mon, 11 Mar 2024 11:42:07 +0000</pubDate>
      <link>https://dev.to/oluwatobi2001/optical-character-recognition-using-node-js-and-tesseract-ocr-engine-1ab</link>
      <guid>https://dev.to/oluwatobi2001/optical-character-recognition-using-node-js-and-tesseract-ocr-engine-1ab</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;What then is optical character recognition? Optical character recognition involves converting the image of a text into machine-readable text formats. This is possible thanks to increasing technological advances, which have produced several optical character recognition tools and models.&lt;br&gt;
You might now ask, what is the benefit of this to me? Optical character recognition is a technological innovation borne out of necessity, helping solve recurring problems for individuals and businesses worldwide. Technology has caused a massive shift from the analogue way of doing things to a more digital, automated mode of operation. &lt;/p&gt;

&lt;p&gt;This means that for businesses to thrive, they must adapt to these new realities or risk being phased out. The big problem is that many business records already exist in analogue formats, and transcribing them to digital formats by hand would be costly and inefficient.&lt;/p&gt;

&lt;p&gt;Hence, optical character recognition comes to the rescue. This seamlessly extracts the texts from the scanned images, eliminating the time lost in transcribing these documents.&lt;/p&gt;

&lt;p&gt;Also, optical character recognition has its use cases in other fields of technology, such as data analysis and visualization, enabling efficient data utilization. These help and serve as an aid to business owners in ensuring high cost-effective productivity and efficiency. Not to be left out, the finance sector utilizes this technology to facilitate payments virtually and ensure seamless financial transactions. All these and many more are some of the use cases of optical character recognition.&lt;br&gt;
In this tutorial, I intend to illustrate how to set up an easy-to-use character recognition application with Node JS serving as the backend and React JS as the frontend tool.&lt;/p&gt;

&lt;p&gt;To be able to enjoy this tutorial, here are some prerequisites: &lt;br&gt;
• Intermediate knowledge of Node JS&lt;br&gt;
• Knowledge of Git and Github&lt;br&gt;
• Knowledge of React JS&lt;/p&gt;

&lt;h2&gt;
  
  
  Optical Character Recognition Engines Available
&lt;/h2&gt;

&lt;p&gt;Before diving in, here are some of the most popularly used optical recognition tools.&lt;br&gt;
• Amazon Textract&lt;br&gt;
• Google Document AI&lt;br&gt;
• IBM DataCap&lt;br&gt;
• DocParser&lt;br&gt;
• CamScanner&lt;br&gt;
• Abbyy&lt;br&gt;
• Base64.ai &lt;br&gt;
And many more. However, for this tutorial, we will be using the Tesseract OCR engine because it is open source and well documented, and it supports Node JS, among other reasons. We will now proceed to delve into Tesseract.&lt;/p&gt;

&lt;h2&gt;
  
  
  A brief intro to Tesseract
&lt;/h2&gt;

&lt;p&gt;Tesseract is an open-source optical character recognition engine and one of the earliest and most widely used OCR tools. Its development was started by HP in 1984; it was later maintained by Google, and it is currently maintained by its user community. It is available in executable formats across various operating systems. It offers character recognition for over 100 languages, including English, French, German, and Spanish. At the time of writing, the latest version is 5.3.0. &lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up and Installation
&lt;/h2&gt;

&lt;p&gt;To follow this tutorial, we need to set up the required file structure. First of all, we need to download and install the Tesseract OCR engine on our local machine. For this, I would recommend its documentation site; you can pick the build matching your operating system. For Windows users, after installing the OCR engine, which can be downloaded from this &lt;a href="https://digi.bib.uni-mannheim.de/tesseract/" rel="noopener noreferrer"&gt;link&lt;/a&gt;, the folder path must be added to the environment variables for it to run from the command prompt. Linux and macOS users can install Tesseract via this &lt;a href="https://tesseract-ocr.github.io/tessdoc/Installation.html" rel="noopener noreferrer"&gt;link&lt;/a&gt;. With this solved, let's now dive into the tutorial proper.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo Project
&lt;/h2&gt;

&lt;p&gt;We intend to build a web application that integrates the Tesseract engine to provide optical character recognition to the user. The web application will be designed with Node JS and Express serving as the backend, and React JS serving as the frontend library. To save time, we will focus on the backend functionality and only briefly discuss the frontend aspect of the project. Now let's dive in. &lt;br&gt;
First of all, create your frontend and backend code folders respectively. Navigate to the backend folder and then install the node-tesseract-ocr, express, and multer libraries: &lt;br&gt;
&lt;code&gt;npm i node-tesseract-ocr express multer&lt;/code&gt;&lt;br&gt;
node-tesseract-ocr is the Node JS wrapper for the Tesseract engine. Multer handles parsing and storage of the uploaded images used in this tutorial, while Express serves as the framework for the Node JS server.&lt;br&gt;
After completing this, in your index.js file, initialize the installed packages by importing them as follows:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const express = require("express");
const app = express()
const multer = require("multer")
const tesseract = require("node-tesseract-ocr");

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Thereafter, we provide a configuration object for Tesseract with the following code. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const config = {
    lang: 'eng',
    oem: 1,
    psm: 3
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This config object specifies the language to be recognized, in this case English. As mentioned earlier, Tesseract supports over 100 languages, so this can be tweaked to the specific language you are interested in.&lt;br&gt;
The &lt;code&gt;oem&lt;/code&gt; value selects the OCR engine mode. There are four modes: 0 for the legacy engine only, 1 for the neural-net LSTM engine only, 2 for a combination of the two, and 3 for the default (whichever is available). The page segmentation mode (&lt;code&gt;psm&lt;/code&gt;) controls how Tesseract divides the image into regions of text. There are 14 modes, numbered 0 to 13, but we stick with 3 (fully automatic page segmentation, the default) since we don’t specify a particular region of the image to be transcribed.&lt;br&gt;
 Thereafter, we would be setting up Express and Multer to handle the images uploaded from the frontend site to the server.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app.use("/uploads", express.static(path.join(__dirname, "/uploads")))

var storage = multer.diskStorage({
    destination: (req, file, cb) =&amp;gt; {
        cb(null, 'uploads/')
    },
    filename: (req, file, cb) =&amp;gt; {
        cb(null,     file.originalname )
    },

})

const upload = multer({
    storage: storage
})
;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, we would be writing a post request that invokes Tesseract to analyze the pictures obtained from the front end.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app.post("/img-upload", upload,single("file"), (req, res) =&amp;gt; {
const file = req.file.filename;
    tesseract.recognize(file, config). then((text) =&amp;gt; {
console.log("text: " + text);
        res.status(200).json(text)
}).catch((err) =&amp;gt; {
        console.log(err)
        res.status(500).json(err)
})

})
app.listen("5000", () =&amp;gt; {
console.log("Hello")
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the function above, a post request handler is created with the endpoint set to &lt;code&gt;img-upload&lt;/code&gt;, and the uploaded file name is accessed. Tesseract, which is already initialized, is invoked to recognize the image, with the configuration object passed along. The text obtained is then sent to the frontend to be displayed. Any error during this process is caught and also returned.&lt;br&gt;
Here is a result of what was expected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09hcst6enfw01byaz4zu.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09hcst6enfw01byaz4zu.JPG" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
Attached below is the final code  for the backend of this project.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const express = require("express");
const app = express();
const multer = require("multer");
const tesseract = require("node-tesseract-ocr");
const path = require("path");
const cors = require("cors");

app.use(cors({
    origin: 'http://localhost:5173',
    methods: ['GET', 'POST']
}))

app.use(express.json());
app.use("/uploads", express.static(path.join(__dirname, "/uploads")))

var storage = multer.diskStorage({
    destination: (req, file, cb) =&amp;gt; {
        cb(null, 'uploads/')
    },
    filename: (req, file, cb) =&amp;gt; {
        cb(null,     file.originalname )
    },

})

const upload = multer({
    storage: storage
})
;
const config = {
    lang: 'eng',
    oem: 1,
    psm: 3
}
app.post("/img-upload", upload.single('file'), (req, res) =&amp;gt; {
    const file  = req.file.filename;
    tesseract.recognize(`uploads/${file}`, config).then((text) =&amp;gt; {
        console.log("text: " + text);
        res.status(200).json(text)
    }).catch((err) =&amp;gt; {
        console.log(err)
        res.status(500).json(err)
    })

})
app.listen(5000, () =&amp;gt; {
    console.log("Server listening on port 5000")
});


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The frontend was minimally designed to test the functionality of the application and to upload the images to be transcribed. Here is a picture of the frontend screen. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffuvv03a2ugmb22jj5i3k.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffuvv03a2ugmb22jj5i3k.JPG" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
You can find the code for the frontend of this project &lt;a href="//github.com/oluwatobi2001/ocr-frontend.git"&gt;here&lt;/a&gt;.&lt;/p&gt;
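&lt;p&gt;For reference, the browser side only needs to send the image as multipart form data to the endpoint above. Here is a minimal sketch (illustrative only, not the exact frontend code; it assumes the Express server above is listening on &lt;code&gt;http://localhost:5000&lt;/code&gt; and receives a file input element):&lt;/p&gt;

```javascript
// Minimal sketch of the browser-side upload (illustrative, not the
// exact frontend code). Assumes the Express server above is listening
// on http://localhost:5000.
async function uploadImage(fileInput) {
    const formData = new FormData();
    // The field name must match upload.single("file") on the server.
    formData.append("file", fileInput.files[0]);

    const response = await fetch("http://localhost:5000/img-upload", {
        method: "POST",
        body: formData,
    });
    return response.json(); // the recognized text from Tesseract
}
```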

&lt;h2&gt;
  
  
  Additional Information And Improvements
&lt;/h2&gt;

&lt;p&gt;So far so good; we have come to the end of the tutorial. Utilizing cloud platform providers and tools like Docker would further help run the application seamlessly in the cloud. This OCR technology can also be harnessed and integrated with data science to aid data processing and visualization. Moreover, the extracted texts can be stored in a database of choice for further processing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I sincerely hope you’ve learnt about optical character recognition using Node JS and the Tesseract OCR engine, and how implementing it can help improve our day-to-day activities.&lt;br&gt;
Feel free to drop comments and questions, and also check out my other educational tech articles &lt;a href="//tobilyn77.hashnode.dev"&gt;here&lt;/a&gt;. Till next time, keep on coding!&lt;/p&gt;

</description>
      <category>ocr</category>
      <category>node</category>
      <category>tesseract</category>
      <category>multer</category>
    </item>
    <item>
      <title>Introduction to Cloud Computing: The Models, Benefits, Risks, Implementation and Popular Tools</title>
      <dc:creator>oluwatobi2001</dc:creator>
      <pubDate>Wed, 06 Mar 2024 21:14:34 +0000</pubDate>
      <link>https://dev.to/oluwatobi2001/introduction-to-cloud-computing-the-models-benefits-risks-implementation-and-popular-tools-2loh</link>
      <guid>https://dev.to/oluwatobi2001/introduction-to-cloud-computing-the-models-benefits-risks-implementation-and-popular-tools-2loh</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;The breakthrough in technological innovations is probably one of the major milestones recorded in the 20th century. Decades ago, large, less efficient mainframe computer systems, expensive to maintain and rare to find, were in vogue; currently, everyone seems to be talking about cloud computing. Over the years, some major industries experienced massive growth, resulting in increased users and therefore an increased need to expand their operations to cater for these users. The physical server option had quite a lot of deficiencies, prompting the popularization of cloud computing and its related operations in our world today. But what is cloud computing, and what benefits does it have in today's world?&lt;br&gt;
Cloud computing can be explained as a virtual technology which involves data storage and computation over the internet without direct, active physical management by the user. It entails the use of servers, software and database management systems over the Internet. &lt;/p&gt;

&lt;p&gt;This article aims to serve as a guide to beginners interested in the field of Cloud development and networking. Details regarding various models and implementations of cloud computing will be discussed. Also, its benefits and associated risks will be highlighted. As a bonus, tools which you can use to become a high-demand cloud developer will also be introduced here. Now, let's begin.&lt;/p&gt;

&lt;h2&gt;
  
  
  Brief Roadmap into Cloud Computing
&lt;/h2&gt;

&lt;p&gt;Just as described earlier, cloud computing works by providing easy access to cloud services and data over the internet to the user's devices. Requests flow through the chain User Device =&amp;gt; Server =&amp;gt; Database: the server coordinates the requests made from individual devices and then, through its connectivity, retrieves the relevant information from the database. This system also involves automation tools, which eliminate the need for a response from a physical staff member to generate the needed resources and efficiently reduce server request loads.&lt;/p&gt;

&lt;h2&gt;
  
  
  Models of Cloud computing services
&lt;/h2&gt;

&lt;p&gt;Cloud computing in its entirety is complex and comprehensive, cutting across several use cases. However, for ease of learning, most cloud computing platforms can be grouped into 3 major categories. These include:&lt;/p&gt;

&lt;h3&gt;
  
  
  Infrastructure-As-A-Service
&lt;/h3&gt;

&lt;p&gt;This category entails the provision of computing services ranging from, but not limited to, virtual servers, virtual operating systems, storage options and application programming interfaces. In this model, however, the user is expected to perform the heavy lifting required to get their server powered up on the cloud. &lt;code&gt;Amazon AWS&lt;/code&gt;, &lt;code&gt;Microsoft Azure&lt;/code&gt;, &lt;code&gt;Google Cloud Platform&lt;/code&gt; and &lt;code&gt;IBM Cloud&lt;/code&gt; are popular examples in this category.&lt;/p&gt;

&lt;h3&gt;
  
  
  Platform-As-A-Service
&lt;/h3&gt;

&lt;p&gt;This category provides a platform for building software using the provider's development tools and associated features. In this model of cloud computing, the user is only tasked with writing the code, while the cloud service provider is responsible for maintaining the software development platform and the underlying infrastructure. This model also provides web hosting services for the resulting software. Google App Engine, Heroku, and Salesforce are key leaders in this category.&lt;/p&gt;

&lt;h3&gt;
  
  
  Software-As-A-Service
&lt;/h3&gt;

&lt;p&gt;This category entails cloud platform providers delivering software applications over the Internet alongside the infrastructure necessary to maintain them. These are usually termed web services. Internet users can easily access this software regardless of geographical location or device used. Microsoft 365 and Google Workspace are great examples of this category.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of Cloud Computing
&lt;/h2&gt;

&lt;p&gt;Given its massive influx of users and worldwide adoption, cloud computing clearly has some benefits. What does it offer its users?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cost-effectiveness:&lt;/strong&gt; Cloud computing technology has proven to be highly cost-effective, providing easy access to software services and eliminating the cost of maintaining physical servers and other miscellaneous physical expenses. With most services offering a pay-as-you-go model, users only pay for the services they use, without being billed for services not utilized. All of this contributes to maximizing profits and minimizing losses.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enhanced storage capability:&lt;/strong&gt; Physical Database Management is quite limited and often demands continual upgrading which isn’t cost-friendly. In great contrast, Cloud Computing has in recent times been the best alternative to storing data. With so many web platforms offering cheap data storage options, problems regarding storage backup in recent times have been widely eliminated.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Mobility:&lt;/strong&gt; Compared to building physical data storage centers and physical servers for your application, the cloud helps you set this up at a lower cost and achieve an equal or even higher level of efficiency. Its comparative advantage lies in the fact that data stored in the cloud is easily mobile and can be accessed anytime, anywhere around the world, with reduced latency compared to physical servers. Also, in the case of natural or man-made disasters, cloud operations are more resilient than their physical counterparts, as data duplication features help minimize data losses.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Efficient data processing:&lt;/strong&gt; With features such as load balancing, scaling optimization and availability zones, the cloud as a whole ensures the overall efficiency of your hosted application. Scaling optimization automatically detects increased or decreased site activity and adjusts capacity in response, facilitating efficient site usage and data processing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Risks Seen in Cloud Computing
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Hacking concerns:&lt;/strong&gt; As safe as the cloud seems, there are concerns about the efficiency of the security cloud platforms provide, especially given the recent wave of data breaches and data losses. Hence, some people are still wary of entrusting the storage of sensitive data to a third party. However, most cloud platforms have an efficient cloud security system in place to mitigate this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical expertise&lt;/strong&gt;: Cloud development is a relatively new field, with a thousand different tools to learn across several niches and use cases. Not unusually, the cloud market is quite unsaturated: there are fewer certified cloud experts compared to other fields, putting them in high demand. Hence, getting a cloud developer can be quite cumbersome.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementing Cloud Computing
&lt;/h2&gt;

&lt;p&gt;The world is ever evolving, and applications that want to remain relevant have to adjust to current realities or risk being phased out.&lt;/p&gt;

&lt;p&gt;Having exhausted the pros and cons of cloud development, how do we go about implementing cloud computing in our application development? The process of implementation entails proper planning and brainstorming. Not every application needs cloud computing, but for those that do, adequate research should be carried out on platforms that best fit your company's operations, offer favorable economies of scale and are easy to use.&lt;br&gt;
 Also, the choice of the cloud model should be made depending on the features the user wants and the level of user expertise. &lt;br&gt;
 Developing a long-term plan for cloud usage is also essential to ensure sustainability and guarantee efficiency. Lastly, periodic upgrades and improvements to ensure optimal cloud security and monitor efficiency should be carried out.&lt;/p&gt;

&lt;h2&gt;
  
  
  Top Cloud Computing Providers
&lt;/h2&gt;

&lt;p&gt;Here are some of the popular cloud computing service providers available worldwide. &lt;br&gt;
• Amazon Web Services&lt;br&gt;
• Microsoft Azure&lt;br&gt;
• Google Cloud Platform&lt;br&gt;
• Alibaba Cloud&lt;br&gt;
• Oracle Cloud Infrastructure, and many more. &lt;/p&gt;

&lt;p&gt;Quite a lot of them offer certification examinations and training programs which you, as a beginner, can take advantage of on your journey into cloud development.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;With this, we have come to the end of this article. I hope you’ve learnt the nitty-gritty of cloud computing and its benefits to you as a developer. &lt;br&gt;
Feel free to drop comments and questions and also check out my other articles &lt;a href="//tobilyn77.hashnode.dev"&gt;here&lt;/a&gt;. Till next time, keep on coding!&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>computing</category>
      <category>aws</category>
      <category>paas</category>
    </item>
    <item>
      <title>Mastering PDF Creation: The Ultimate Guide to Using PDFKit and Node JS</title>
      <dc:creator>oluwatobi2001</dc:creator>
      <pubDate>Fri, 01 Mar 2024 23:41:09 +0000</pubDate>
      <link>https://dev.to/oluwatobi2001/mastering-pdf-creation-the-ultimate-guide-to-using-pdfkit-and-node-js-3he</link>
      <guid>https://dev.to/oluwatobi2001/mastering-pdf-creation-the-ultimate-guide-to-using-pdfkit-and-node-js-3he</guid>
      <description>&lt;p&gt;The world has been evolving rapidly and technology isn’t left out. From the days when slates were used, down to the invention of hard-cover books and pens and now in the area of smart electronic devices, the need to pass information from one entit...&lt;/p&gt;

</description>
      <category>pdf</category>
      <category>node</category>
      <category>pdfkit</category>
      <category>invoiceparser</category>
    </item>
    <item>
      <title>Optimizing The Performance of Web Applications With Jest</title>
      <dc:creator>oluwatobi2001</dc:creator>
      <pubDate>Wed, 21 Feb 2024 15:43:33 +0000</pubDate>
      <link>https://dev.to/oluwatobi2001/optimizing-the-performance-of-web-applications-with-jest-1me0</link>
      <guid>https://dev.to/oluwatobi2001/optimizing-the-performance-of-web-applications-with-jest-1me0</guid>
      <description>&lt;p&gt;Performance testing is an encompassing yet underrated field of software development and it’s a must-have skill as a software developer to prevent common software failure issues that occur among production applications. It is a routine software practice which is carried out to determine the stability of a system in terms of scalability, reliability and data management amongst other parameters. &lt;/p&gt;

&lt;p&gt;In this tutorial, I hope to walk you through Performance testing, what it entails, and the common tools used for backend testing and also, do a demo performance testing project together. The tutorial is simplified and suitable for beginners, mid developers and professional developers. Extensive knowledge of this skill is fundamental to growing as a backend developer and can serve as a revision for expert developers. With all said, let's dive in.&lt;/p&gt;

&lt;p&gt;First, I will briefly highlight the prerequisites needed to fully harness all that will be discussed in this tutorial.&lt;br&gt;&lt;br&gt;
• Intermediate Knowledge of Node JS&lt;br&gt;
• Basic knowledge of JavaScript operation&lt;br&gt;
• Knowledge of API development&lt;/p&gt;
&lt;h2&gt;
  
  
  What is Performance testing all about?
&lt;/h2&gt;

&lt;p&gt;It serves quite a lot of purposes, among which is to test a system's efficiency in performing and sustaining tasks. It also serves as a standard for comparing systems and builds of varying efficiency, and for identifying the most effective among them.&lt;/p&gt;

&lt;p&gt;It also helps to reveal vulnerabilities. State-of-the-art testing tools are well optimized to efficiently analyze the code lines to detect any error and are quick to highlight the areas where these occur.  &lt;/p&gt;

&lt;p&gt;The end goal of performance testing depends on the use of the application in question. It can be either concurrency-oriented or transaction-rate-oriented, depending on whether the app involves end users or not. &lt;br&gt;
Performance testing could entail load testing, which is usually carried out to evaluate the behavior of a web service under a specific expected load. Other types of testing include integration testing, spike testing, soak testing and stress testing. &lt;/p&gt;
&lt;h2&gt;
  
  
  Examples of performance testing tools
&lt;/h2&gt;

&lt;p&gt;There are quite a lot of tools commonly used today to test the efficacy and latency of web applications. In this section, we will discuss some of these tools and highlight their strengths and use cases. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.npmjs.com/package/jest"&gt;&lt;em&gt;Jest&lt;/em&gt;&lt;/a&gt;: This is a multi-platform testing tool used to assess the correctness of JavaScript-based applications. It was initially created to test the efficiency of React applications but has since been extended to assess the efficiency of Node Js apps. It also offers a code coverage feature. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.npmjs.com/package/mocha"&gt;&lt;em&gt;Mocha&lt;/em&gt;&lt;/a&gt;: Mocha is a concise asynchronous JavaScript-based testing tool for node Js applications. It is also used with assertion libraries such as Chai and should. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Pythagora-io/pythagora"&gt;&lt;em&gt;Pythagora&lt;/em&gt;&lt;/a&gt;: This tool offers a unique integrating testing feature to help test how different part of the application works together. It also has the code coverage feature.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://artillery.io"&gt;&lt;em&gt;Artillery&lt;/em&gt;&lt;/a&gt;: This is a stack agnostic testing tool i.e. it can be used for multiple web applications based on different programming languages and still produce an optimal test outcome. This tool provides efficient Load testing features which help to determine the optimal status of the application when exposed to a large load of traffic. It also checks the speed at which an app responds to a user request without crashing. &lt;br&gt;
&lt;a href="https://www.npmjs.com/package/ava"&gt;&lt;em&gt;Ava&lt;/em&gt;&lt;/a&gt;: Ava is a JavaScript-based performance unit testing tool used to test the efficacy of Node JS applications. It works asynchronously running multiple concurrent tests to determine the suitability of multiple code units. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.npmjs.com/package/loadtest"&gt;&lt;em&gt;Loadtest&lt;/em&gt;&lt;/a&gt;: This is a special Node package which is used to load test &lt;code&gt;Node JS&lt;/code&gt; applications to evaluate the ability of the application to cope with requests of varying amount and to evaluate for efficiency and concurrency.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://jmeter.apache.org"&gt;&lt;em&gt;Apache J-meter&lt;/em&gt;&lt;/a&gt;: Apache JMeter offers load-testing features for web applications. It has an in-built IDE to enable interaction with the user. It is multithreaded increasing its ability to mimic several users. &lt;/p&gt;

&lt;p&gt;There are other testing tools which are also equally useful. However, in this tutorial, we will be utilizing &lt;code&gt;Jest&lt;/code&gt; to test our back-end application. &lt;/p&gt;
&lt;h2&gt;
  
  
  Demo Project
&lt;/h2&gt;

&lt;p&gt;So right now, we will perform a unit test on our code using the &lt;code&gt;Jest&lt;/code&gt; testing tool. Let’s begin by installing the &lt;code&gt;Jest&lt;/code&gt; package in our code folder. To do this, type &lt;code&gt;npm install jest&lt;/code&gt; in the command prompt; when it is successfully installed, a success message will be displayed.&lt;/p&gt;

&lt;p&gt;In this tutorial, we intend to test the efficiency of some selected routes in our Node JS application. This will necessitate writing a unit test for each route and evaluating its correctness. Now let’s organize our file structure in order to successfully unit test our application. &lt;br&gt;
Navigate to the &lt;code&gt;package.json&lt;/code&gt; file and edit it to include this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"scripts": {
    "test": "jest",
    "start": "nodemon index.js"
  },

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code above registers the Jest package as the default recognized testing tool whenever we want to run some performance testing on this project. Jest is now triggered automatically when we enter &lt;code&gt;npm test&lt;/code&gt; in the command prompt.&lt;/p&gt;

&lt;p&gt;Thereafter, we create our test environment. Create a folder named “tests” in the &lt;code&gt;root&lt;/code&gt; directory; this helps Jest locate the files for testing. Within the tests folder, create a test file. You can name it whatever you prefer, but the suffix &lt;code&gt;.test.js&lt;/code&gt; must be added so that &lt;code&gt;Jest&lt;/code&gt; recognizes and runs it while testing.&lt;br&gt;
After completing all these, let’s go into unit testing proper. &lt;/p&gt;

&lt;p&gt;In the “test.js” file, let’s import and initialize the required packages and functions. In my code, I intend to test the routes of a book library application. It contains the &lt;code&gt;get all books&lt;/code&gt; route, &lt;code&gt;get a single book&lt;/code&gt; route, &lt;code&gt;upload a book&lt;/code&gt; route and &lt;code&gt;delete a book&lt;/code&gt; route. We intend to write unit tests for these routes. &lt;br&gt;
First, we import the book model into the test.js file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const Book = require('../models/Book')

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code above imports and initializes our book model, backed by the default MongoDB database.&lt;br&gt;
Next, we import the functions that we intend to test in each route.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const {GetAllBooks} = require("../controllers/Books");

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The import above is for the get-all-books route; this is the function to be tested. Now let's move on to the Jest test function itself.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jest.mock("../models/Book");

const req = {};
const res = {
  status: jest.fn((x) =&amp;gt; x),
  send: jest.fn((x) =&amp;gt; x),
};

it("it should return all the books in the database", async () =&amp;gt; {
  Book.find.mockImplementationOnce(() =&amp;gt; ({
    Title: "black is king",
    Author: "black",
    price: "$23",
    Summary: "redkjnsadf",
  }));
  await GetAllBooks(req, res);
  expect(res.status).toHaveBeenCalledWith(200);
  expect(res.send).toHaveBeenCalledTimes(1);
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First of all, Jest offers you the ability to create a fake database by copying the structure of the default database. This is known as mocking. It enables the unit test to run faster and eliminates the lag that comes with getting responses from large databases. Testing which involves the real database is referred to as end-to-end testing, as opposed to unit testing. &lt;br&gt;
Also attached above are sample request and response objects. The response object contains both the &lt;code&gt;status&lt;/code&gt; and &lt;code&gt;send&lt;/code&gt; mock functions, each of which returns a defined output depending on whether it was successfully run. &lt;/p&gt;

&lt;p&gt;Now, the &lt;code&gt;it&lt;/code&gt; function contains a short description of what the test should be about. Attached to it is an anonymous function containing the request we intend to test. The &lt;code&gt;expect&lt;/code&gt; statements pass if the function satisfies the requirements; if not, a failed result is returned. &lt;/p&gt;
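&lt;p&gt;Conceptually, the &lt;code&gt;jest.fn&lt;/code&gt; mocks used above are just functions that record every call so that assertions like &lt;code&gt;toHaveBeenCalledWith&lt;/code&gt; can inspect them later. A minimal hand-rolled version (purely illustrative, not Jest's actual implementation) might look like this:&lt;/p&gt;

```javascript
// Illustrative sketch: a call-recording mock in plain JavaScript,
// mimicking what jest.fn provides (not Jest's real implementation).
function makeMockFn(impl) {
    const mock = (...args) => {
        mock.calls.push(args); // record every invocation
        return impl ? impl(...args) : undefined;
    };
    mock.calls = [];
    // Rough analogue of expect(mock).toHaveBeenCalledWith(...)
    mock.calledWith = (...expected) =>
        mock.calls.some((call) =>
            call.length === expected.length &&
            call.every((arg, i) => arg === expected[i]));
    return mock;
}

// A fake response object, shaped like the one passed to GetAllBooks.
const res = {
    status: makeMockFn((x) => x),
    send: makeMockFn((x) => x),
};

res.status(200);
res.send({ ok: true });
```

&lt;p&gt;Jest's real mock functions track calls, return values and instances in this same spirit, which is what makes assertions such as &lt;code&gt;toHaveBeenCalledTimes(1)&lt;/code&gt; possible.&lt;/p&gt;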

&lt;h2&gt;
  
  
  Other Additional Information
&lt;/h2&gt;

&lt;p&gt;With that, we have delved a bit into unit testing of functions. You can also attempt testing the delete function, the upload function and the single-book function. &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;So we have been able to harness the usefulness of testing tools, and their simplicity, in optimizing our web application. The additional features mentioned earlier can also be tested using the same method discussed in this tutorial.&lt;br&gt;
I sincerely hope you learned something new and enjoyed this tutorial. Till next time, keep on coding.&lt;/p&gt;

</description>
      <category>performance</category>
      <category>jest</category>
      <category>webdev</category>
      <category>node</category>
    </item>
  </channel>
</rss>
