<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sid Bhanushali</title>
    <description>The latest articles on DEV Community by Sid Bhanushali (@sidbhanushali).</description>
    <link>https://dev.to/sidbhanushali</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F472393%2Ff2b8ca77-950f-49f3-8f7b-6a2ef2004e6f.jpeg</url>
      <title>DEV Community: Sid Bhanushali</title>
      <link>https://dev.to/sidbhanushali</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sidbhanushali"/>
    <language>en</language>
    <item>
      <title>Exploring DoRA</title>
      <dc:creator>Sid Bhanushali</dc:creator>
      <pubDate>Tue, 21 Feb 2023 09:40:47 +0000</pubDate>
      <link>https://dev.to/sidbhanushali/exploring-dora-o0k</link>
      <guid>https://dev.to/sidbhanushali/exploring-dora-o0k</guid>
      <description>&lt;p&gt;In today's fast-paced and ever-changing technology landscape, the success of software development and IT operations depends heavily on the ability to deliver high-quality software at speed. This is where DevOps comes in, providing a collaborative and agile approach to software development and IT operations. &lt;/p&gt;

&lt;p&gt;However, to truly understand the effectiveness and efficiency of DevOps practices, it's crucial to have metrics in place to track success. Metrics provide organizations with valuable insights into the performance of their processes, systems, and people, enabling them to continuously improve and optimize their DevOps practices.&lt;/p&gt;

&lt;p&gt;In this article, we will explore the importance of having metrics to track success and efficiency in DevOps and discuss four key metrics identified by DORA (DevOps Research and Assessment). By understanding these metrics and how to effectively use them, organizations can better measure the impact of their DevOps practices and make data-driven decisions to drive success.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Lead Time&lt;/li&gt;
&lt;li&gt;Deployment Frequency&lt;/li&gt;
&lt;li&gt;Mean Time to Recovery (MTTR)&lt;/li&gt;
&lt;li&gt;Change Failure Rate&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Lead Time&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Lead time is a measure of the time it takes from when code is committed to when it is successfully running in production. This metric is important because it provides a view into the speed and efficiency of the development process. A long lead time can indicate a bottleneck in the development process, while a shorter lead time suggests a more streamlined and efficient process. To track lead time, a DevOps engineer could use tools like Git, Jira, or Trello to track code commits, builds, and deployments, and calculate the time between each stage. &lt;/p&gt;
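&lt;p&gt;As a minimal sketch (the timestamps below are made up), lead time for a single change is just the difference between the commit time and the production deploy time:&lt;/p&gt;

```shell
# Hypothetical timestamps for one change (GNU date).
commit_ts=$(date -d "2023-02-20 14:00:00" +%s)
deploy_ts=$(date -d "2023-02-21 09:30:00" +%s)
# Lead time in whole hours between commit and production deploy.
lead_hours=$(( (deploy_ts - commit_ts) / 3600 ))
echo "Lead time: ${lead_hours}h"
```

&lt;p&gt;In practice, these timestamps would come from your version control and deployment tooling rather than being hard-coded.&lt;/p&gt;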

&lt;p&gt;This metric is relevant to business goals because it provides a view into the speed and efficiency of the development process, which is critical for delivering high-quality software quickly to meet the needs of customers and stay ahead of the competition. Long lead times can result in delays and missed opportunities, while short lead times can lead to increased agility and innovation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deployment Frequency&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Deployment frequency measures the number of times per day that an organization releases code into production. A high deployment frequency indicates a fast and efficient deployment process, while a low frequency suggests a slower, less efficient process. To track deployment frequency, an engineer could use deployment automation tools like Jenkins or Ansible to automate deployments and track the number of deployments per day, which could then be presented in a graphical format.&lt;/p&gt;

&lt;p&gt;This metric is relevant to business goals because it provides insight into the speed and efficiency of the deployment process, which is critical for delivering new features and improvements quickly and reliably. A high deployment frequency can lead to faster time-to-market.&lt;/p&gt;
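&lt;p&gt;As an illustrative sketch (the log file and timestamps are made up), deployment frequency can be derived from a log with one timestamp per deployment:&lt;/p&gt;

```shell
# Hypothetical deploy log: one ISO-8601 timestamp per deployment.
printf '%s\n' 2023-02-20T10:01Z 2023-02-20T15:22Z 2023-02-21T09:40Z > deploys.log
# Deployments per day: strip the time portion, then count each date.
per_day=$(cut -dT -f1 deploys.log | sort | uniq -c)
echo "$per_day"
```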

&lt;p&gt;&lt;strong&gt;Mean Time to Recovery (MTTR)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Mean time to recovery (MTTR) measures the average amount of time it takes to recover from a service disruption. This metric is critical for ensuring high availability and reliability of services, and it provides insight into the effectiveness of incident response and remediation processes. To track MTTR, one could use monitoring tools like Nagios or Datadog to track and log incidents and calculate the time it takes to resolve each incident. The data can then be presented graphically to visualize trends over time.&lt;/p&gt;
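&lt;p&gt;As a minimal sketch (the durations are made up), MTTR is simply the mean of the recorded time-to-restore values:&lt;/p&gt;

```shell
# Hypothetical incident log: time to restore service, in minutes, one incident per line.
printf '%s\n' 30 45 15 > incidents.log
# MTTR is the mean of the recorded durations.
mttr=$(awk '{ s += $1 } END { print int(s / NR) }' incidents.log)
echo "MTTR: ${mttr} min"
```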

&lt;p&gt;&lt;strong&gt;Change Failure Rate&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Change failure rate measures the percentage of changes that result in a failure and require remediation. This metric is important because it provides a view into the risk associated with changes, and it can help organizations optimize their change management processes. To track change failure rate, a DevOps engineer could use change management tools like Ansible or Puppet to log changes and track failures.&lt;/p&gt;
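&lt;p&gt;As an illustrative sketch (the log format is made up), change failure rate is the share of logged changes marked as failed:&lt;/p&gt;

```shell
# Hypothetical change log: one line per change, either "ok" or "failed".
printf '%s\n' ok ok failed ok failed > changes.log
# Percentage of changes that failed.
cfr=$(awk '$1 == "failed" { f++ } END { printf "%.0f", 100 * f / NR }' changes.log)
echo "Change failure rate: ${cfr}%"
```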

&lt;p&gt;In conclusion, these four DevOps metrics (Lead Time, Deployment Frequency, MTTR, and Change Failure Rate) provide valuable insights into the performance and efficiency of software development and IT operations processes. By tracking and analyzing these metrics, organizations can gain a better understanding of how their DevOps practices are contributing to their business goals.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>security</category>
      <category>webdev</category>
      <category>productivity</category>
    </item>
    <item>
      <title>A 7-Step Overview Of ML Model Deployment</title>
      <dc:creator>Sid Bhanushali</dc:creator>
      <pubDate>Tue, 20 Dec 2022 09:48:43 +0000</pubDate>
      <link>https://dev.to/sidbhanushali/a-7-step-overview-of-ml-model-deployment-1a04</link>
      <guid>https://dev.to/sidbhanushali/a-7-step-overview-of-ml-model-deployment-1a04</guid>
      <description>&lt;p&gt;The nuances of ML are hard enough locally, but are you tired of dealing with the headache of juggling machine learning development, deployment, and maintenance all on your own? Say goodbye to the days of being a one-man (or woman) ML show and hello to the world of MLOps. In this article, we'll explore an overview of how to deploy an ML model and where to start when you're ready to move from your local environment into the great beyond.&lt;/p&gt;

&lt;p&gt;Deploying a machine learning (ML) model typically involves the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Preprocessing the data: The first step is to prepare the data that will be used to train the model. This may involve cleaning and formatting the data, as well as splitting it into training and testing sets.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Training the model: Next, the model is trained on the prepared data using a machine learning algorithm. This typically involves adjusting the model's hyperparameters and optimizing its performance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Evaluating the model: After training, the model's performance is evaluated on the testing set to assess its accuracy and identify any areas for improvement.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Tuning the model: If the model's performance is not satisfactory, it may be necessary to fine-tune the model by adjusting its hyperparameters or trying different algorithms.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Serializing the model: Once the model is performing well, it needs to be saved in a format that can be easily loaded and used for predictions. This is typically done using a process called serialization.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploying the model: There are several ways to deploy a machine learning model, depending on the requirements of the application. Some options include deploying the model as a web service, integrating it into a mobile app, or using it to make predictions on a server.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Monitoring and maintaining the model: After the model is deployed, it is important to monitor its performance and make any necessary updates or adjustments to ensure that it continues to perform well.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
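&lt;p&gt;As a hypothetical example of step 6 (the endpoint, port, and payload shape are assumptions, not a real service), a model deployed as a web service is typically queried over HTTP:&lt;/p&gt;

```shell
# Hypothetical endpoint: send a feature vector to a model served at /predict.
curl -s -X POST http://localhost:8000/predict \
  -H 'Content-Type: application/json' \
  -d '{"features": [1.0, 2.0, 3.0]}'
```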

</description>
    </item>
    <item>
      <title>DevSecOps -Automate &amp; Secure</title>
      <dc:creator>Sid Bhanushali</dc:creator>
      <pubDate>Fri, 14 May 2021 19:36:26 +0000</pubDate>
      <link>https://dev.to/sidbhanushali/devsecops-automate-secure-5ae4</link>
      <guid>https://dev.to/sidbhanushali/devsecops-automate-secure-5ae4</guid>
      <description>&lt;p&gt;DevSecOps is the practice of integrating a security-first mindset and methodologies into traditional DevOps CI / CD environments. Here are key best practices for organizations seeking to implement DevSecOps.&lt;/p&gt;

&lt;p&gt;Being able to get code out the door quickly, securely, and efficiently is the name of the game. In a CI/CD environment, it’s important to maintain speed as the main tenet, but also to be aware of the security needed to bulk up your pipeline. Without automation, implementing security practices could be a major bottleneck in the pipeline and wouldn’t be considered a priority by many organizations that rely on speed. For security to be part of this workflow, it needs to be automated in order to be a relevant factor in an environment that prioritizes speed.&lt;/p&gt;

&lt;p&gt;Security controls and tests need to be embedded early and everywhere in the development lifecycle, and they need to happen in an automated fashion, because the culture of software deployment is changing rapidly. Some organizations push new versions of code into production almost 50 times per day for a single app. Moreover, adding automated security analysis within CI platforms can catch vulnerable code earlier in the software development lifecycle.&lt;/p&gt;

&lt;p&gt;However, trying to run automated scans on your entire application source code each day can consume a lot of time and break your ability to keep up with daily changes. One option is to run scans against recent or new code changes.&lt;/p&gt;

&lt;p&gt;A growing number of test automation tools with a range of capabilities have become available for doing security analysis and testing throughout the software development lifecycle, from source-code analysis through integration and post-deployment monitoring. For example, nmap and Metasploit, which are tools to monitor servers and networks for vulnerabilities or known exploits, can be integrated into said automation.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;em&gt;Cron Jobs&lt;/em&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;However, all this depends on the type and frequency of the task to be automated. There are certain tasks that need to run on an interval basis, such as backing up databases, updating the system, performing periodic reboots, and so on.&lt;/p&gt;

&lt;p&gt;Such tasks in Linux are referred to as cron jobs. Cron jobs are used for the automation of tasks to help in simplifying the execution of repetitive and sometimes mundane tasks. Cron is a daemon that allows you to schedule these jobs which are then carried out at specified intervals.&lt;/p&gt;

&lt;p&gt;A crontab file, also known as a cron table, is a simple text file that contains rules or commands that specify the time interval of execution of a task. It hosts a set of rules that are analyzed and performed by the cron daemon. The system crontab file is located at /etc/crontab and can only be accessed and edited by the root user. The crontab file looks like so: &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FOwtIwvG.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FOwtIwvG.png" alt="chronjobs"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The basic syntax for a crontab file comprises 5 columns represented by asterisks followed by the command to be carried out. This format can also be represented as shown below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[minute:0-59] [hour: 0 - 23] [day:0 - 31] [month:0-12] [day of week] /directory/command output
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The first five fields in the command represent numbers that define when and how often the command runs. A space separates each field, and each position represents a specific value. Let's see how to apply this on a Linux system.&lt;/p&gt;

&lt;p&gt;To create or edit a cron job as the root user, run the command&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; crontab -e
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The script that a cron job runs should begin with a shebang header, as shown:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This indicates the shell you are using, which, in this case, is the bash shell. Next, specify the interval at which you want to schedule the task using the cron job format. For example, let's say we wanted to run a backup script every month when the system isn’t actively in use:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* 2 0 * * /root/backup.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This entry runs the backup on the first of every month at 2 am. Cron jobs are a useful tool built into Linux systems that can automate specific tasks or scripts. However, Jenkins is a much more comprehensive automation build tool that is more commonly used in the delivery lifecycle. Let's see how we can implement best practices when using Jenkins.&lt;/p&gt;

&lt;h2&gt;
  
  
  Securing Jenkins
&lt;/h2&gt;

&lt;p&gt;Another all-in-one automation tool is Jenkins, an open-source automation server. It helps automate the parts of software development related to building, testing, and deploying, facilitating continuous integration and continuous delivery, and it can integrate automation into every stage of the CI/CD process. Since Jenkins is a server-based tool, it is important to secure the Jenkins instance and properly handle the users and credentials within it. Jenkins does not come preconfigured with default security checks, so when creating users in Jenkins, it's important to differentiate the access control that each user has.&lt;/p&gt;

&lt;p&gt;Another important thing is to be mindful of credentials and where they are stored. Using the Jenkins credentials provider, users can bind their credentials to variables and use them in their Jenkinsfile so as not to expose sensitive data. Here is an example of a credentials screen in Jenkins that implements credentials binding.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2F5tmGL8O.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2F5tmGL8O.png" alt="Jenkins credentials binding"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Securing Linux Servers
&lt;/h2&gt;

&lt;p&gt;The heart of any pipeline is a Linux system. Since cron jobs need a Linux system to run on, it's important to consider the security of the Linux systems themselves that will be in charge of automation. Securing the Linux system itself is a critical step in DevSecOps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Disable Root Login&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
    The first step in securing the system is securing how people log into it in the first place. Disabling root login is essential to strengthening your server security, because keeping root login enabled presents a security risk: attackers can target this well-known account to access the server and the resources hosted on it. Instead, create a new user account and assign it elevated (sudo) permissions, so that you still have a way of installing packages and performing other admin actions on the server.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;User logins through Public / Private key pairs&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
   One suggestion is good password hygiene, meaning a decent mix of numbers, letters, and special characters to resist password cracking. However, this can get messy to enforce, and passwords can ultimately be cracked given large amounts of computing power. A more secure way to grant access is through the use of public/private key pairs for users.&lt;/p&gt;

&lt;p&gt;Each user generates a key pair on their local machine using:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh-keygen -t rsa 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then they need to put the contents of their public key (id_rsa.pub) into &lt;code&gt;~/.ssh/authorized_keys&lt;/code&gt; on the server being logged into.&lt;/p&gt;
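&lt;p&gt;As a runnable sketch (the file paths here are local stand-ins; on a real server the public key goes into that user's ~/.ssh/authorized_keys), the generate-and-authorize steps look like this:&lt;/p&gt;

```shell
# Generate a 4096-bit RSA key pair non-interactively into a demo path.
ssh-keygen -t rsa -b 4096 -N '' -f ./demo_key -q
# Authorize the key: append the public half to an authorized_keys file
# (here a local file standing in for ~/.ssh/authorized_keys on the server).
cat ./demo_key.pub >> ./authorized_keys
grep -c ssh-rsa ./authorized_keys
```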

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Key Rotation and/or Configure 2 Factor Authentication&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
    It is important to rotate private/public key pairs, as well as any other passwords or credentials needed to access a machine, to limit the damage if a key or password is leaked. Two-factor authentication can be used in conjunction with SSH (Secure Shell) to require a second credential when logging into the server. To set up 2FA on a Debian server or a Debian-derived distribution, install the libpam-google-authenticator package. The package can display a QR code or produce a secret token that can be added to a software authentication device, such as Google Authenticator.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Server Side antivirus / IDS&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
   External software and programs for security and defense should always be an extra layer, not the only layer. Many routers or firewalls come with a preconfigured antivirus or IDS of some form. The disadvantage is that this puts the burden on one sole piece of hardware. If a phishing email with a malicious payload slips through the cracks, an IDS that simply monitors the external perimeter is not much help. Once someone is in, they can make as much noise as they want, since all the guards are patrolling the outside.&lt;/p&gt;

&lt;p&gt;A solution to this could be a standalone IDS that sits on the internal network as part of a layered defense, providing visibility within the network and around the important assets and internal files. It can be configured to protect sensitive data without interfering with legitimate network traffic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Disk encryption&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
   You can secure your data by configuring disk encryption for whole disks (including removable media), partitions, and individual files. There are many methods to achieve this; one universal way on Linux systems is to install the cryptsetup package. As always, make sure root login is disabled and that only trusted users hold sudo privileges.&lt;/p&gt;
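&lt;p&gt;As a sketch of the cryptsetup workflow (the device name /dev/sdb1 and the mapping name are assumptions, and luksFormat destroys existing data, so never run this against a disk you care about):&lt;/p&gt;

```shell
# Encrypt a spare partition with LUKS, open it, create a filesystem, and mount it.
cryptsetup luksFormat /dev/sdb1          # prompts for a passphrase; DESTRUCTIVE
cryptsetup open /dev/sdb1 secure_data    # maps it to /dev/mapper/secure_data
mkfs.ext4 /dev/mapper/secure_data
mount /dev/mapper/secure_data /mnt/secure
```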

&lt;p&gt;Volume level disk encryption helps protect users and customers from Data Theft or even accidental loss. Encrypted hard disks make it very hard for hackers to gain access or read any sort of data on that hard disk.&lt;/p&gt;

&lt;h2&gt;
  
  
  Securing EC2
&lt;/h2&gt;

&lt;p&gt;In most cases, the Linux instance running the automation will be a cloud compute instance, let's say EC2 for example. One benefit of using EC2 is the diversity and flexibility it offers; a tradeoff of this can be security. There are steps that can be taken to secure an EC2 instance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Security Groups&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
Security groups are the fundamental network security layer of AWS. They control how inbound and outbound traffic is allowed to reach the EC2 machine, opening and closing network ports to allow different protocols or servers to run.&lt;/p&gt;

&lt;p&gt;For example, since the Jenkins server's default port is 8080, you have to expose that port in the security group. You can run Jenkins on a different port, but that port must be exposed as well.&lt;/p&gt;
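&lt;p&gt;As a hedged AWS CLI example (the group ID and admin IP below are placeholders), the Jenkins port can be opened to a single trusted address rather than the whole internet:&lt;/p&gt;

```shell
# Allow inbound TCP 8080 (Jenkins) from one admin IP only.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 8080 \
  --cidr 203.0.113.10/32
```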

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FkIuYCay.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FkIuYCay.png" alt="enter image description here"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;VPC&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
 Controlling the network traffic to your EC2 instance is crucial to maintaining its security. Configure your VPC and use private subnets for instances that should not be accessed directly from the internet. A VPC is your own network in the cloud: a private network within an AWS region that spans all the availability zones (physical data centers) in that region.&lt;/p&gt;

&lt;p&gt;Subnets are sub-networks inside the VPC; each spans a single availability zone and is a logical subdivision of an IP network. The practice of dividing a network into two or more networks is called subnetting. AWS provides two types of subnets: public, which allows the internet to reach the machine, and private, which is hidden from the internet.&lt;/p&gt;

&lt;p&gt;Subnets could be compared to the different rooms in your apartment. They are containers within your VPC that segment off a slice of the CIDR block you define in your VPC.  CIDR notation is a compact representation of an IP address and its associated network mask. &lt;/p&gt;

&lt;p&gt;For example, 192.168.100.14/24 represents the IP address 192.168.100.14 with the network prefix 192.168.100.0 or, equivalently, the subnet mask 255.255.255.0.&lt;/p&gt;
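&lt;p&gt;As a quick sanity check on the notation, the prefix length determines how many addresses a subnet contains:&lt;/p&gt;

```shell
# A /24 such as 192.168.100.0/24 spans 2^(32-24) = 256 addresses;
# subtracting the network and broadcast addresses leaves the usable hosts.
prefix=24
usable=$(( 2 ** (32 - prefix) - 2 ))
echo "Usable hosts in a /${prefix}: ${usable}"
```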

&lt;p&gt;Subnets allow you to give different access rules and place resources in different containers where those rules should apply. You wouldn't have a big open window in your bathroom on the shower wall so people can see sensitive things, much like you wouldn't put a database with secretive information in a public subnet allowing any and all network traffic. You might put that database in a private subnet (i.e a locked closet). Anything from outside of the VPC could connect to a public subnet, but only containers inside a VPC can access a private subnet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;IAM&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
 Another surefire way to manage the security of your EC2 instance is through IAM, which is where users and their credentials are managed. By using IAM with Amazon EC2, you can control whether users in your organization can perform a task using specific Amazon EC2 instances. It's important to lock away your access keys and treat them like credit card or social security numbers. Just as there isn't one social security number shared by every person, there should not be one root credential shared by every user.&lt;/p&gt;

&lt;p&gt;It's important to create individual users and grant them the least amount of permission needed. Policy actions are classified as List, Read, Write, Permissions management, or Tagging. For example, you can choose actions from the List and Read access levels to grant read-only access to your users.&lt;/p&gt;
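&lt;p&gt;As a hedged AWS CLI example (the user name is a placeholder; ReadOnlyAccess is an AWS managed policy), a least-privilege read-only user can be set up like this:&lt;/p&gt;

```shell
# Create an individual user and attach the AWS managed read-only policy.
aws iam create-user --user-name audit-reader
aws iam attach-user-policy --user-name audit-reader \
  --policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess
```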

</description>
      <category>devops</category>
      <category>aws</category>
      <category>linux</category>
      <category>security</category>
    </item>
    <item>
      <title>A Brief Intro to MVC Architecture</title>
      <dc:creator>Sid Bhanushali</dc:creator>
      <pubDate>Thu, 24 Sep 2020 23:05:35 +0000</pubDate>
      <link>https://dev.to/sidbhanushali/a-brief-intro-to-mvc-architecture-27e4</link>
      <guid>https://dev.to/sidbhanushali/a-brief-intro-to-mvc-architecture-27e4</guid>
      <description>&lt;p&gt;"MVC" has become an increasingly popular buzzword in the web development community but what exactly does it mean? Over the last 20 years, websites have gone from simple HTML pages with a bit of CSS, to incredibly complex applications with thousands of developers working on them. To make working with these complex web applications much easier, developers use different patterns to lay out their projects to make the code less complex and easier to work with. By far the most popular of these patterns is MVC also known as Model View Controller. The goal of this pattern is to split a large application into specific sections that all have their own purpose. To illustrate each section, let's look at an example where a user is requesting a specific page from a server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fhafv10o4a3sfrqbyczw6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fhafv10o4a3sfrqbyczw6.png" alt="MVC diagram"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;MVC Diagram&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Controller&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The diagram above illustrates the server-side logic that follows MVC architecture when a request from a client is received. Based on what URL is being requested, the server will send all the request information to a specific controller. The controller is responsible for handling the entire request from the client and will tell the rest of the server what to do with it. It acts as a middleman between the other two sections, model and view, and should not contain very much code. The first thing a controller does when it receives a request is ask the model for information based on that request.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Model&lt;/strong&gt;&lt;br&gt;
The model is responsible for handling all of the data logic of a request. This means that the model interacts with the database and handles all validation, saving, updating, deleting, and any other CRUD related actions of the data. The controller should never directly interact with the data logic. It should only ever use the model to perform these interactions. This means that the controller never has to worry about how to handle the data that it sends and receives and instead, only needs to tell the model what to do and responds based on what the model returns.&lt;/p&gt;

&lt;p&gt;This also means the model never has to worry about handling user requests and what to do on failure or success. All of that is handled by the controller whereas the model only cares about interacting with the data. After the model sends its response back to the controller, the controller then needs to interact with the view to render the data to the user.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;View&lt;/strong&gt;&lt;br&gt;
The view is only concerned with how to present the information that the controller sends. This means the view will be a template file that dynamically renders HTML based on the data the controller sends it. The view does not worry about where the data came from, only how to present it. The view sends its final presentation back to the controller, and the controller handles sending that presentation back to the user. The important thing to note about this design is that the model and the view never interact with each other directly; any interaction between them is done through the controller.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Putting It All Together&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If we consider real-world applications of this, we can think of web apps we interact with every day, say any social image-sharing app. Imagine a user sends a request to a server to get their photos. The server would send that request to the controller that handles photos. That controller would then ask the model that manipulates the photos collection or table in the database to return a list of all photos. The model would query the database for a list of all photos and then return that list back to the controller.&lt;/p&gt;

&lt;p&gt;If the response back from the model was successful, then the controller would ask the view associated with photos to return a presentation of the list of photos. This view would take the list of photos from the controller and render each photo element in the list into any HTML format that could be used by the browser. This is how image galleries are rendered.&lt;/p&gt;

&lt;p&gt;The controller would then take that presentation and return it back to the user, thus ending the request. If, earlier, the model had returned an error instead of a list of photos, the controller would instead handle that error by requesting the view created to show errors, or the HTTP error code that was returned, most commonly recognized by web users as the “404 Not Found” page. That error presentation would then be returned to the user instead of the image gallery. In summary, the model handles all of the data, the view handles all of the presentation, and the controller tells the model and view what to do. This is the idea behind basic MVC architecture, and it is how many applications maintain a manageable and organized codebase.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>beginners</category>
      <category>javascript</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Cookies vs Session vs Local storage</title>
      <dc:creator>Sid Bhanushali</dc:creator>
      <pubDate>Sun, 20 Sep 2020 20:48:46 +0000</pubDate>
      <link>https://dev.to/sidbhanushali/cookies-vs-session-vs-local-storage-22ja</link>
      <guid>https://dev.to/sidbhanushali/cookies-vs-session-vs-local-storage-22ja</guid>
      <description>&lt;p&gt;Hello everyone, we will be quickly overviewing the three main ways to store data within one’s browser which are session storage, local storage, and cookies. Let’s look at the similarities and differences and when to use which ones.&lt;/p&gt;

&lt;p&gt;The first key similarity is that all three of these mechanisms are stored on the client side, on the user's browser and only on that user's browser. Cookies, local storage, and session storage set in one browser are not available in another browser on the same computer, making them browser-specific. They are meant to exchange information between the browser and the server, and the information they contain usually records previous interactions or settings specific to a user. Local storage and session storage can be considered in the same category, as they are very similar in how they behave and differ only in a few instances. Cookies behave almost completely differently from the other two and have also been around longer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fq8q3l78h8h8w3vaxy8td.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fq8q3l78h8h8w3vaxy8td.png" alt="img"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Image by FreeCodeCamp&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Capacities&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One way cookies differ from local and session storage is capacity. Cookies can store only a much smaller amount of information: the limit for a cookie is about 4 KB in most browsers, while local storage and session storage can hold roughly 10 MB and 5 MB respectively. That smaller size is fine for cookies’ use cases, though.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Browser Support&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Cookies are supported even in older browsers that only support HTML 4, because they’ve been around much longer. In practice this rarely matters, since HTML 5 is supported by virtually every browser in use today.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accessibility&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Cookies and local storage are available to every window and tab in the browser: if you have a website open in two Chrome tabs, the same cookies and local storage entries are visible in both. Session storage, by contrast, is only available in the single tab where it was set; if the user opens your website in another tab, that session storage won’t be there.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Expiration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is where local storage and session storage really differ from each other.&lt;/p&gt;

&lt;p&gt;Session storage lives for one browsing session, which is why it’s called session storage: it is removed as soon as the user closes the tab where it was set. Local storage, on the other hand, sticks around indefinitely, until the user deletes it or the website’s code is programmed to delete it after a certain action.&lt;/p&gt;
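
&lt;p&gt;Both objects expose the same Storage interface (setItem, getItem, removeItem, clear); only the lifetime and tab scope differ. Here is a tiny in-memory stand-in sketching that API shape — in a real browser you would call these same methods on window.localStorage or window.sessionStorage directly:&lt;/p&gt;

```javascript
// Minimal in-memory stand-in for the Web Storage API.
// In a browser, localStorage and sessionStorage expose this same interface;
// the difference is lifetime, not method names.
class MemoryStorage {
  constructor() { this.store = new Map(); }
  setItem(key, value) { this.store.set(key, String(value)); } // values are always strings
  getItem(key) { return this.store.has(key) ? this.store.get(key) : null; }
  removeItem(key) { this.store.delete(key); }
  clear() { this.store.clear(); }
}

const local = new MemoryStorage();
local.setItem("theme", "dark");
console.log(local.getItem("theme")); // "dark" — localStorage would keep this across sessions

// Simulating sessionStorage's lifetime: closing the tab behaves like clear().
const session = new MemoryStorage();
session.setItem("draft", "hello");
session.clear(); // the tab was closed
console.log(session.getItem("draft")); // null
```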

&lt;p&gt;As for cookies, the expiration date is set by the developer and declared on the cookie when it is sent to the client. The expiration is often set very far in the future so that the cookie effectively stays in the browser forever; a common choice is December 31, 9999, the furthest date allowed, so any cookies in your browser today could well expire on New Year’s Day in the year 10,000. A nearer expiration is useful when a cookie should only last until the user performs a certain action or only for a certain timeframe; one example is the monthly free-article limit on news websites like the Wall Street Journal.&lt;/p&gt;

&lt;p&gt;However, a cookie can also omit the expiration attribute entirely. A cookie with no expiration date expires when the browser is closed, much like session storage. These are known as session cookies because they are removed when the browser’s session ends. One main use of session cookies is to keep visitors recognized or authenticated as they move from page to page on a website. Another is the shopping-cart feature on e-commerce sites, where the cart stays populated with your items as you browse from page to page.&lt;/p&gt;
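
&lt;p&gt;To sketch how an expiration ends up on a cookie, here is a small helper that builds a cookie string in the standard attribute format (the helper name is made up; in a browser you would assign the result to document.cookie, and a server would send it in a Set-Cookie header):&lt;/p&gt;

```javascript
// Builds a cookie string. Omitting `days` produces a session cookie
// (no Expires attribute), which disappears when the browser closes.
function buildCookie(name, value, days) {
  let cookie = `${encodeURIComponent(name)}=${encodeURIComponent(value)}; Path=/`;
  if (days !== undefined) {
    const expires = new Date(Date.now() + days * 24 * 60 * 60 * 1000);
    cookie += `; Expires=${expires.toUTCString()}`;
  }
  return cookie;
}

// Persistent cookie: carries an Expires attribute a year out.
console.log(buildCookie("seen_banner", "yes", 365));
// Session cookie: no Expires attribute at all.
console.log(buildCookie("cart_id", "abc123")); // "cart_id=abc123; Path=/"
```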

&lt;p&gt;&lt;strong&gt;Storage Location&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As for storage location, local storage and session storage both stay in the browser, as mentioned earlier. Cookies, while also stored in the browser, are additionally sent to the server with every request the user makes: whether the request is for an image, an HTML file, a CSS file, or anything else, the cookies travel along with it. This is why cookies have a much smaller capacity. Because everything in the cookies is sent to the server, having many large cookies slows down both your requests and the responses that come back. And although a single cookie tops out at about 4 KB, it is easy to imagine how much data that adds up to in large-scale applications where servers handle tens of thousands of requests per second.&lt;/p&gt;
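
&lt;p&gt;A rough back-of-envelope calculation (the numbers here are purely illustrative) shows why that overhead matters at scale:&lt;/p&gt;

```javascript
// Back-of-envelope: bandwidth consumed by cookie headers alone,
// before any actual page content is transferred.
const cookieBytes = 4 * 1024;    // a worst-case 4 KB cookie
const requestsPerSecond = 10000; // a busy server

const bytesPerSecond = cookieBytes * requestsPerSecond;
const megabytesPerSecond = bytesPerSecond / (1024 * 1024);

console.log(megabytesPerSecond); // 39.0625 — roughly 39 MB/s of cookie headers
```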

&lt;p&gt;This is why best practice dictates keeping the cookies sent back and forth as small and as few as possible, so requests aren’t slowed down any more than necessary.&lt;/p&gt;

&lt;p&gt;Cookies are also really helpful for authentication-related tasks precisely because the browser sends them to the server in the HTTP headers, unlike local storage and session storage, which are only accessible to the application as client-side data stores.&lt;/p&gt;

&lt;p&gt;In summary, if you are going to store something in the user’s browser, you’ll almost always want local storage or session storage, depending on how long it should live on the client side: for a single session, use session storage; to persist after the browser closes, use local storage.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>webdev</category>
      <category>html</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
