<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ronal Niraula</title>
    <description>The latest articles on DEV Community by Ronal Niraula (@ronal).</description>
    <link>https://dev.to/ronal</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1113216%2Fb05ecddc-b09c-4457-8c3f-e0b691aa875f.jpeg</url>
      <title>DEV Community: Ronal Niraula</title>
      <link>https://dev.to/ronal</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ronal"/>
    <language>en</language>
    <item>
      <title>Workflow of Data Engineering Project on AWS</title>
      <dc:creator>Ronal Niraula</dc:creator>
      <pubDate>Wed, 05 Jul 2023 17:37:16 +0000</pubDate>
      <link>https://dev.to/ronal/workflow-of-data-engineering-project-on-aws-2k8m</link>
      <guid>https://dev.to/ronal/workflow-of-data-engineering-project-on-aws-2k8m</guid>
      <description>&lt;ol&gt;
&lt;li&gt;Architecture Diagram:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;(&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9x7uw1on6yh2uvvv6vq4.png"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9x7uw1on6yh2uvvv6vq4.png&lt;/a&gt;)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Amazon S3: &lt;br&gt;
Amazon S3 (Simple Storage Service) is a highly scalable and secure cloud storage service provided by Amazon Web Services (AWS). It allows users to store and retrieve any amount of data from anywhere on the web, with high durability, availability, and performance. S3 provides a simple web interface, as well as an API, to manage and access the data stored in it. Users can create buckets (i.e., containers for objects) in S3 and upload files (i.e., objects) to them.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Crawler (to extract all the schema &amp;amp; information): &lt;br&gt;
AWS provides a managed service called AWS Glue Crawler, a fully managed data crawler that automatically discovers, categorizes, and registers metadata about your data assets in AWS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Amazon Athena: Amazon Athena is an interactive query service provided by AWS that allows users to analyze data in Amazon S3 using standard SQL queries. It is serverless, which means that users do not need to manage any infrastructure or perform any database administration tasks. Athena automatically scales query processing based on the amount of data being queried, so users can run ad-hoc queries on large datasets with ease.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS Glue: AWS Glue is a fully-managed extract, transform, and load (ETL) service provided by AWS that allows users to move data between different data stores and data lakes. It provides a serverless environment for running ETL jobs, which means that users do not need to manage any infrastructure or perform any database administration tasks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Amazon Redshift: Amazon Redshift is a fully-managed cloud data warehouse provided by AWS that allows users to store and analyze large amounts of structured and semi-structured data. It is designed to be fast, scalable, and cost-effective, making it a popular choice for big data analytics.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;VPC: VPCs provide a flexible and secure way for users to launch and run their AWS resources in a private and isolated network environment.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Dataset:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For this project, the COVID-19 dataset provided by AWS will be used. The structure of the dataset and related information can be studied in this blog post:&lt;br&gt;
&lt;a href="https://aws.amazon.com/blogs/big-data/a-public-data-lake-for-analysis-of-covid-19-data/"&gt;Data Set&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Only selected datasets are downloaded, as shown in the figure:&lt;/p&gt;

&lt;p&gt;(&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9ya986ljmt0tk3n2d78e.png"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9ya986ljmt0tk3n2d78e.png&lt;/a&gt;)&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Storing downloaded Dataset in Amazon S3:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;(&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hrloibo2gxrmi0kiui97.png"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hrloibo2gxrmi0kiui97.png&lt;/a&gt;)&lt;/p&gt;
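&lt;p&gt;The upload step can also be scripted with boto3 instead of the console. This is a minimal sketch; the bucket name, file paths, and the raw/ key prefix are placeholder assumptions, not the project's actual names:&lt;/p&gt;

```python
def object_key(local_path):
    """Derive the S3 object key for a downloaded file (the raw/ prefix is an assumption)."""
    return "raw/" + local_path.split("/")[-1]

def upload_files(bucket, paths):
    """Upload each downloaded dataset file to the given S3 bucket."""
    import boto3  # imported here so the sketch reads without AWS configured
    s3 = boto3.client("s3")  # credentials come from the environment
    for path in paths:
        s3.upload_file(path, bucket, object_key(path))
```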

&lt;ol&gt;
&lt;li&gt;Analyzing the data (Schema Crawling and Table Building):
A crawler is created for each CSV data file; the crawlers crawl all the datasets stored in S3 and extract their schema and metadata.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;(&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/amp02k29z9xp562zh93f.png"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/amp02k29z9xp562zh93f.png&lt;/a&gt;)&lt;/p&gt;
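&lt;p&gt;The same crawlers can be created programmatically with boto3; this is a minimal sketch in which the crawler name, IAM role, and Glue database are placeholders (the screenshots here use the console instead):&lt;/p&gt;

```python
def crawler_targets(s3_path):
    """Build the Targets structure Glue expects for one S3 prefix."""
    return {"S3Targets": [{"Path": s3_path}]}

def create_and_run_crawler(name, s3_path, database, role_arn):
    """Create a Glue crawler over one dataset prefix and start it."""
    import boto3  # imported here so the sketch reads without AWS configured
    glue = boto3.client("glue")
    glue.create_crawler(
        Name=name,
        Role=role_arn,
        DatabaseName=database,
        Targets=crawler_targets(s3_path),
    )
    glue.start_crawler(Name=name)
```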

&lt;p&gt;After the crawler tasks complete, Amazon Athena is used to analyze each table (studying the datasets) registered by the crawlers.&lt;/p&gt;

&lt;p&gt;(&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/adem2vijls3d8crgj30c.png"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/adem2vijls3d8crgj30c.png&lt;/a&gt;)&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Building Data Models:
After analyzing the data, a data model is created to understand the overall flow of the data.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here, fips is the primary key in most of the tables, which can also be verified from the dataset's data source.&lt;/p&gt;

&lt;p&gt;(&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6k66jpo34ttaxtalkfta.png"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6k66jpo34ttaxtalkfta.png&lt;/a&gt;)&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Schema:
Here, the fact table is factCovid and there are three dimension tables (dimHospital, dimRegion, and dimDate). Much of the information in the data model above is redundant across tables, so a star schema is created: analyzing data in a data warehouse is simpler when there are fewer tables.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;(&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8ut4zplf6jnrkol06u81.png"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8ut4zplf6jnrkol06u81.png&lt;/a&gt;)&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;   Use of Jupyter Notebook:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Libraries:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import pandas as pd
from io import StringIO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Access Case:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bg8z5chzexk4utxyes88.png"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bg8z5chzexk4utxyes88.png&lt;/a&gt;)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connect to Athena and Query Data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qs6v7s1v58jwjelvbpf4.png"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qs6v7s1v58jwjelvbpf4.png&lt;/a&gt;)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Function:
This function takes a boto3 client object and a dictionary, runs the query in Athena, and stores the output in S3 (i.e., in staging_dir).&lt;/li&gt;
&lt;/ul&gt;
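&lt;p&gt;A sketch of what such a function can look like; the function name, the one-second polling interval, and the return value are illustrative assumptions rather than the notebook's exact code:&lt;/p&gt;

```python
import time

def run_query(athena, query, database, staging_dir):
    """Run a SQL string in Athena and block until it finishes.

    athena       a boto3 Athena client
    staging_dir  an s3:// URI where Athena writes the result CSV
    Returns the query execution id and its final state.
    """
    resp = athena.start_query_execution(
        QueryString=query,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": staging_dir},
    )
    qid = resp["QueryExecutionId"]
    while True:
        status = athena.get_query_execution(QueryExecutionId=qid)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            return qid, state
        time.sleep(1)  # poll until Athena reports a terminal state
```

&lt;p&gt;The screenshot below shows the notebook's actual version of this function.&lt;/p&gt;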

&lt;p&gt;(&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8h1x7eb0il3qj8s6bwjt.png"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8h1x7eb0il3qj8s6bwjt.png&lt;/a&gt;)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Query Response:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/76gyrpoaia0iglu7k86v.png"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/76gyrpoaia0iglu7k86v.png&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;Similarly, this process is repeated for the other nine tables created by the crawlers shown in figure 4.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fixing errors found in the table:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ua542ku9cca62e7jf3fr.png"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ua542ku9cca62e7jf3fr.png&lt;/a&gt;)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Transformation (ETL job in python):&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/038c4jwygcxcdelm87ft.png"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/038c4jwygcxcdelm87ft.png&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;Changing the Date dimension table's properties as explained in the schema figure:&lt;/p&gt;

&lt;p&gt;(&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yvo0zj97j01ms26rba1j.png"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yvo0zj97j01ms26rba1j.png&lt;/a&gt;)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Save results to S3:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hsm47fa9l7jfliv67ydj.png"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hsm47fa9l7jfliv67ydj.png&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;Here, specify your S3 bucket name in place of bucket.&lt;br&gt;
This save step is likewise repeated for the other nine tables created by the crawlers shown in figure 4.&lt;/p&gt;
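&lt;p&gt;The save step corresponds roughly to this sketch, in which the first argument is a boto3 S3 client and the bucket and key names are placeholders:&lt;/p&gt;

```python
from io import StringIO

import pandas as pd

def save_df_to_s3(s3, df, bucket, key):
    """Serialize a DataFrame to CSV in memory and upload it to S3.

    s3 is a boto3 S3 client; bucket and key are placeholders.
    """
    buf = StringIO()
    df.to_csv(buf, index=False)
    s3.put_object(Bucket=bucket, Key=key, Body=buf.getvalue())
```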

&lt;ul&gt;
&lt;li&gt;Extracting Schema out of the DataFrame:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e8i14y5p7q71wpymm6m6.png"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e8i14y5p7q71wpymm6m6.png&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;Note: These schemas will be needed when creating the tables in Amazon Redshift.&lt;/p&gt;
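&lt;p&gt;One way to extract such a schema is pandas' pd.io.sql.get_schema helper (the table name below is illustrative); the generated DDL usually needs minor type edits, e.g. TEXT to VARCHAR, before it runs on Redshift:&lt;/p&gt;

```python
import pandas as pd

def table_ddl(df, table_name):
    """Generate a CREATE TABLE statement from a DataFrame's inferred dtypes."""
    return pd.io.sql.get_schema(df, table_name)
```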

&lt;ol&gt;
&lt;li&gt;Creating a Redshift Namespace (Cluster)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;(&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9mnkndm8swrmkolga20t.png"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9mnkndm8swrmkolga20t.png&lt;/a&gt;)&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;AWS Glue Job&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In AWS, AWS Glue Jobs are used for the ETL (Extract, Transform, Load) process to move data from one location to another. AWS Glue is a fully managed ETL service that makes it easy to move data between different data stores and data lakes.&lt;/p&gt;

&lt;p&gt;AWS Glue Jobs are used to run ETL scripts and perform data transformations on data stored in various data sources such as Amazon S3, Amazon RDS, Amazon DynamoDB, Amazon Redshift, and more. AWS Glue Jobs can be written in Python or Scala, and they run on serverless infrastructure, so they scale automatically with the volume of data being processed.&lt;/p&gt;

&lt;p&gt;Tables are created in Amazon Redshift with the CREATE TABLE command, and the COPY command is used to load the dimension and fact tables from S3 into Redshift. This whole process is done by creating a job in AWS Glue.&lt;/p&gt;

&lt;p&gt;(&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wa0wkyozjfkuq6iov299.png"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wa0wkyozjfkuq6iov299.png&lt;/a&gt;)&lt;/p&gt;
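&lt;p&gt;The COPY side of this Glue job can be sketched as follows. The helper names, the redshift_connector-style connection, and the role ARN are assumptions for illustration, not the job's exact code:&lt;/p&gt;

```python
def copy_sql(table, s3_path, iam_role):
    """Build a Redshift COPY statement that loads a table from a CSV in S3."""
    return (
        f"COPY {table} FROM '{s3_path}' "
        f"IAM_ROLE '{iam_role}' "
        "DELIMITER ',' IGNOREHEADER 1"
    )

def load_table(conn, table, s3_path, iam_role):
    """Run the COPY through an open Redshift connection (e.g. redshift_connector)."""
    cur = conn.cursor()
    cur.execute(copy_sql(table, s3_path, iam_role))
    conn.commit()
```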

&lt;p&gt;Once the AWS Glue job completes, the data can be queried and viewed in the Redshift query editor, as shown in the figure below:&lt;/p&gt;

&lt;p&gt;(&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pt127p5uz9p9enr4vyed.png"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pt127p5uz9p9enr4vyed.png&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;Finally, this data can be used by data analysts and data scientists to derive insights and support decision-making.&lt;/p&gt;

&lt;p&gt;Thank You.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>dataengineering</category>
      <category>data</category>
      <category>jupyter</category>
    </item>
    <item>
      <title>How DataCamp and Code for Nepal helped me learn data engineering for free</title>
      <dc:creator>Ronal Niraula</dc:creator>
      <pubDate>Wed, 05 Jul 2023 15:52:17 +0000</pubDate>
      <link>https://dev.to/ronal/how-datacamp-and-code-for-nepal-helped-me-learn-data-engineering-for-free-3639</link>
      <guid>https://dev.to/ronal/how-datacamp-and-code-for-nepal-helped-me-learn-data-engineering-for-free-3639</guid>
      <description>&lt;p&gt;If you are considering a career in the field of data, or if you have limited experience and want to learn more about it, this blog is for you. Continue reading!&lt;/p&gt;

&lt;p&gt;What inspired you to join the Data Fellowship?&lt;/p&gt;

&lt;p&gt;I have a lot of practical data science experience and frequently do data visualization and information design work. I had a gut feeling that I needed to push myself and explore more of the topics in the data domain. Data engineers are on the front lines of data strategy so that others don’t need to be. They are the first to deal with the inflow of structured and unstructured data into a business’s systems. Any data strategy would be incomplete without them.&lt;/p&gt;

&lt;p&gt;On my LinkedIn network, I have seen people talking about data engineering and how it shaped all projects in data science. Finally, I decided I should explore being a data engineer. Then, as with anything else in this digital era, I searched for an online program where I could learn data engineering. Because I’m from Nepal, one of the criteria I use when looking for something is that it must be available for free, so I was looking for a free data engineering course. Coursera, edX, Udemy, and DataCamp are all places where I frequently engage. While investigating, I came across a few data engineering bootcamps in the United States. They met the first of my criteria but not the second: being free.&lt;/p&gt;

&lt;p&gt;And then, after a few weeks, one day I came upon a Code for Nepal post on LinkedIn. I wasn’t a stranger to Code for Nepal; back in the day, I did some volunteer work with them. My heart leapt with delight and the thirst to learn as soon as I noticed the Data Fellowship post in partnership with DataCamp.&lt;/p&gt;

&lt;p&gt;I applied for the fellowship with high hopes, and I was accepted.&lt;/p&gt;

&lt;p&gt;(&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0fulq9phisq1di91cwg1.png"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0fulq9phisq1di91cwg1.png&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;Finally, I began my three-month data fellowship with Code For Nepal and DataCamp after completing my orientation. I chose the data engineering career path and began my journey.&lt;/p&gt;

&lt;p&gt;How difficult was the fellowship journey?&lt;/p&gt;

&lt;p&gt;It was totally tough. It was filled with deadlines, and completing the course was not easy. During the first 4–6 weeks, I studied 49 hours a week. But after 3 weeks, the learning process was impeded by family responsibilities. At that time, I had two options: I could drop out and offer my spot to other students looking for similar opportunities, or I could stay and push myself without any excuses. Guess what? I chose to push myself and manage my time, and now you are reading this blog post.&lt;/p&gt;

&lt;p&gt;What lessons did you learn from The Fellowship?&lt;/p&gt;

&lt;p&gt;I discovered that everyone, no matter how confident or successful they appear, suffers from self-doubt and imposter syndrome. What matters is that you feel the fear and go ahead and do it anyhow. Simply adopt a learning attitude and keep an eye out for opportunities. When they present themselves, seize the opportunity without further delay. Also, I learned that individuals are willing to assist if I know how to ask and what I require.&lt;/p&gt;

&lt;p&gt;Would you suggest Data Fellowship to a friend?&lt;/p&gt;

&lt;p&gt;Yes, absolutely! Beyond all the amazing resources, sessions, and opportunities, The Fellowship is an experience that will make a difference in many aspects of your life. It did in mine, and it will shape many areas. For those who currently feel unfulfilled in their career and are not sure what their next move is, The Fellowship will help you identify your skills, your values, and what’s important to you in your career, and will help you push beyond your comfort zone by equipping you with skills: data engineering, data science, data analysis, machine learning, and much more. All you have to do is decide and be disciplined about the decision.&lt;/p&gt;

&lt;p&gt;Any words to future fellowship seekers?&lt;/p&gt;

&lt;p&gt;Look out for the next cohort and apply without hesitation. As Code for Nepal is offering free access to DataCamp, all you need is time and a desire to learn. Remember, Albert Einstein once said,&lt;/p&gt;

&lt;p&gt;“The only source of knowledge is experience.”&lt;/p&gt;

&lt;p&gt;He was right, but it does not have to be your own experience. You can leverage knowledge from other people’s lessons. You can stand on the shoulders of giants. Stand on the shoulders of DataCamp and Code for Nepal and pave your career the way you intend.&lt;/p&gt;

&lt;p&gt;I was so fascinated by the program that I applied again this year, 2023, and was selected for this cohort as well to learn more about data.&lt;/p&gt;

&lt;p&gt;(&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l4ey5d1d4kcxoy72sszw.jpg"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l4ey5d1d4kcxoy72sszw.jpg&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;What are you doing right now?&lt;/p&gt;

&lt;p&gt;At the moment, I’m working on database engineering and pursuing my MScIT in Data Analysis. &lt;/p&gt;

&lt;p&gt;Finally, I’d like to express my gratitude to Code for Nepal for providing a fantastic opportunity to the Nepali community, as well as DataCamp for believing in Code for Nepal.&lt;/p&gt;

&lt;p&gt;If you have any queries regarding the article or want to work together on your next data engineering and data science project, ping me on LinkedIn.&lt;/p&gt;

</description>
      <category>data</category>
      <category>fellowship</category>
      <category>datascience</category>
      <category>management</category>
    </item>
    <item>
      <title>Triggers in SQL Server: Unlocking the Power of Automation</title>
      <dc:creator>Ronal Niraula</dc:creator>
      <pubDate>Tue, 04 Jul 2023 16:49:53 +0000</pubDate>
      <link>https://dev.to/ronal/triggers-in-sql-server-unlocking-the-power-of-automation-306k</link>
      <guid>https://dev.to/ronal/triggers-in-sql-server-unlocking-the-power-of-automation-306k</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
Triggers are essential in the field of database management for automating processes and enforcing business rules. Understanding triggers is essential for effective and reliable database operations, regardless of your experience level with SQL Server development or where you are in your career. In this blog post, I'll go into the idea of triggers in SQL Server, look at their features, and show how they can improve your database management experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are Triggers?&lt;/strong&gt;&lt;br&gt;
An automatic response to certain events, such as data alterations (inserts, updates, or deletes) occurring on a particular table, is known as a trigger in SQL Server. When these events happen, triggers give you a mechanism to carry out a series of specified actions, enabling you to uphold data integrity and enforce complicated business rules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Types of Triggers&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;SQL Server supports two main types of triggers: "After" triggers and "Instead Of" triggers. I will focus on the "After" trigger, as it is the more commonly used of the two.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;After Trigger&lt;/strong&gt;&lt;br&gt;
These triggers fire after the triggering event has occurred and completed. After triggers are commonly used to audit changes, update related tables, or perform calculations based on the modified data.&lt;br&gt;
Let's dive into the example.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connect SQL Server with LocalDB&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5uyexo8gih06lupyexpm.png"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5uyexo8gih06lupyexpm.png&lt;/a&gt;)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A store table is created with store id, created date, store name, and phone columns in the selected database.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pktwymzqygrh4eyuhg2r.png"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pktwymzqygrh4eyuhg2r.png&lt;/a&gt;)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Another table, named trigger_table, is created; a row is inserted into it whenever a user inserts a new, unique row into the store table.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9i6gtiuikr8ekecs1190.png"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9i6gtiuikr8ekecs1190.png&lt;/a&gt;)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Let's write the trigger that inserts each unique row into the new trigger table after any row is inserted into the store table.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TRIGGER dbo.fetch_data
   ON  dbo.store
   AFTER INSERT
AS 
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;
    -- Note: reading from the "inserted" pseudo-table into scalar variables
    -- assumes a single-row INSERT; for multi-row inserts, join against "inserted".
    declare @store_id int
    declare @store_name varchar(max)
    declare @phone varchar(max)

    select @store_id=[store_id] from inserted;
    select @store_name = [store_name] from inserted;
    select @phone=[phone] from inserted;

   if (
    @store_name in (select store_name from trigger_table)
    and @phone in (select phone from trigger_table))
    BEGIN
        rollback
        RAISERROR ('Same store name with same date and phone already exist', 16, 1);
    END
    ELSE
    BEGIN
        insert into  dbo.trigger_table (store_id,inserted_date,store_name,phone)
        values (@store_id,GETDATE(),@store_name,@phone)
    END

END
GO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Inserting a row into the store table&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gplfoilq1atofwxy5jbe.png"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gplfoilq1atofwxy5jbe.png&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;(&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0om99lfpbrw4llmy2nrp.png"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0om99lfpbrw4llmy2nrp.png&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;(&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uviv1sfeyonrhheniqct.png"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uviv1sfeyonrhheniqct.png&lt;/a&gt;)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Now let's check the trigger table&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5lnnmo727haw3p9o602i.png"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5lnnmo727haw3p9o602i.png&lt;/a&gt;)&lt;br&gt;
The same fields are inserted into the trigger table along with the current date.&lt;br&gt;
Wow, it automatically detected the new insertion and inserted the exact same data into another table without any manual step.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Let's try to insert duplicate data in store table&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gwpmpj4m5c3fevf8yk8u.png"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gwpmpj4m5c3fevf8yk8u.png&lt;/a&gt;)&lt;br&gt;
When a duplicate store name and phone are inserted, the trigger executes, displays an error message about the duplicate entry, and rolls the transaction back. As a result, the duplicate data is inserted neither into the store table nor into the trigger table.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advantages of Triggers&lt;/strong&gt;&lt;br&gt;
Data Integrity: Triggers protect your database by making sure that only accurate and reliable data is kept inside. Triggers assist in preserving the quality and correctness of your data by verifying input, carrying out referential integrity checks, or enforcing complicated business rules.&lt;/p&gt;

&lt;p&gt;Automation: Triggers eliminate the need for manual intervention by automating repetitive operations. You can save time and effort by using them to automatically update related tables, produce derived data, send notifications, or keep audit trails.&lt;/p&gt;

&lt;p&gt;Scalability: By enclosing sophisticated logic within the database itself, triggers can improve the scalability of your database system. This lessens the requirement for recurrent code execution in various application layers and encourages the development of an architecture that is more effective and simplified.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Triggers are an essential part of SQL Server: they let you automate processes and enforce business rules inside your database. By using triggers effectively, you can improve data integrity, automate repetitive procedures, increase scalability, and guarantee consistency in your database administration processes. To get the most out of triggers, follow best practices, test them thoroughly, and document how they work. With a firm grasp of triggers, you'll unlock the power of automation and simplify your SQL Server workflows.&lt;/p&gt;

</description>
      <category>sqlserver</category>
      <category>database</category>
      <category>trigger</category>
      <category>sql</category>
    </item>
  </channel>
</rss>
