<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Vaibhav Bhutkar</title>
    <description>The latest articles on DEV Community by Vaibhav Bhutkar (@vaibhav9017).</description>
    <link>https://dev.to/vaibhav9017</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F99mvlsfu5tfj9m7ku25d.png</url>
      <title>DEV Community: Vaibhav Bhutkar</title>
      <link>https://dev.to/vaibhav9017</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/vaibhav9017"/>
    <language>en</language>
    <item>
      <title>Migrating from Pre-Fabric to Microsoft Fabric – A Comparative Guide.</title>
      <dc:creator>Vaibhav Bhutkar</dc:creator>
      <pubDate>Tue, 01 Jul 2025 11:28:46 +0000</pubDate>
      <link>https://dev.to/vaibhav9017/migrating-from-from-pre-fabric-to-microsoft-fabric-a-comparative-guide-2kne</link>
      <guid>https://dev.to/vaibhav9017/migrating-from-from-pre-fabric-to-microsoft-fabric-a-comparative-guide-2kne</guid>
      <description>&lt;p&gt;&lt;strong&gt;Why this Matters&lt;/strong&gt;: In today’s data-driven world, enterprises are flooded with tools: Azure Synapse, Data Lake, Power BI, Azure Data Factory, and many more. Managing them all in a cohesive, cost-effective, and high-performing architecture often feels like fitting together the pieces of a complex puzzle.&lt;br&gt;
Enter &lt;strong&gt;Microsoft Fabric&lt;/strong&gt; – a unified, SaaS-based data platform that aims to simplify the entire data lifecycle, from ingestion to insights. In this blog, we explore what it means to move from a pre-Fabric (traditional Azure) architecture to Fabric, and what benefits, challenges, and changes come with it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pre-Fabric Architecture&lt;/strong&gt;&lt;br&gt;
Before Fabric, a typical Microsoft data architecture looked like this: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Azure Data Factory (ADF): for data ingestion and transformation.&lt;/li&gt;
&lt;li&gt;Azure Data Lake / Blob Storage: for storing raw data.&lt;/li&gt;
&lt;li&gt;Power BI: for visualization.&lt;/li&gt;
&lt;li&gt;Multiple identities, separate billing, and disconnected services to manage.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Challenges with this approach:&lt;/strong&gt;&lt;br&gt;
Cost Optimization - Paying separately for each service, often overprovisioned.&lt;br&gt;
Integration - Too many moving parts, difficult to manage end-to-end pipelines.&lt;br&gt;
Learning Curve - Requires knowledge of multiple tools and their integration.&lt;br&gt;
Collaboration - Teams often work in silos: engineers, analysts, and business users.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fic1dmdpqq85ko1jrhj66.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fic1dmdpqq85ko1jrhj66.jpeg" alt="Image description" width="800" height="640"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Microsoft Fabric:&lt;/strong&gt; &lt;br&gt;
Microsoft Fabric is an end-to-end analytics SaaS platform that unifies six core workloads: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data Engineering&lt;/li&gt;
&lt;li&gt;Data Factory&lt;/li&gt;
&lt;li&gt;Data Science&lt;/li&gt;
&lt;li&gt;Data Warehousing&lt;/li&gt;
&lt;li&gt;Real-Time Analytics&lt;/li&gt;
&lt;li&gt;Power BI&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s built on OneLake, a single storage layer accessible by all workloads, and uses notebooks, pipelines, and datasets under one umbrella.&lt;br&gt;
The core concepts behind it:&lt;br&gt;
&lt;strong&gt;OneLake&lt;/strong&gt;: One storage for all data – governed, secure, and universally accessible.&lt;br&gt;
&lt;strong&gt;Lakehouse &amp;amp; Warehouse Models&lt;/strong&gt;: Unified for both engineering and business use.&lt;br&gt;
&lt;strong&gt;Workspaces&lt;/strong&gt;: Seamless collaboration with role-based access.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Think of &lt;a href="https://learn.microsoft.com/en-us/credentials/certifications/fabric-analytics-engineer-associate/?practice-assessment-type=certification" rel="noopener noreferrer"&gt;Fabric&lt;/a&gt; as Microsoft 365 for data. Just like Word, Excel, and Teams are tightly integrated, Fabric integrates all analytics tools in one platform.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feo3qntywkuftm0n4tvzv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feo3qntywkuftm0n4tvzv.png" alt="Image description" width="800" height="259"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Even though Microsoft provides all these features under one umbrella, there are a few points to consider before jumping directly to Fabric. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Existing Resources&lt;/strong&gt;: Are you heavily reliant on Synapse, ADF, etc.? &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Volume &amp;amp; Type&lt;/strong&gt;: Fabric is optimized for structured and semi-structured data; advanced unstructured use cases are still maturing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security &amp;amp; Compliance&lt;/strong&gt;: Map your policies into Fabric’s workspace and OneLake controls.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Benefits of Moving to Fabric&lt;/strong&gt;&lt;br&gt;
Unified Platform: no more jumping between Azure resources or services.&lt;br&gt;
Lower Cost: capacity-based pricing that is easier to control.&lt;br&gt;
Faster Time to Build: less code, more built-in actions.&lt;br&gt;
One environment for data engineers, analysts, and scientists.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Do We Really Need All the Data to Make Our Decisions?</title>
      <dc:creator>Vaibhav Bhutkar</dc:creator>
      <pubDate>Sun, 05 Jan 2025 13:41:37 +0000</pubDate>
      <link>https://dev.to/vaibhav9017/do-we-really-need-all-the-data-to-make-our-decisions-34ia</link>
      <guid>https://dev.to/vaibhav9017/do-we-really-need-all-the-data-to-make-our-decisions-34ia</guid>
      <description>&lt;p&gt;In the era of data-driven everything, we often hear the mantra, “The more data, the better the decision.” Businesses, governments, and even individuals are constantly amassing and analyzing mountains of data. But do we truly need all that data to make sound decisions? Let’s dig deeper, challenge this idea, and explore an alternative perspective.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Too Much Data, Too Many Choices&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;Having an abundance of data can sometimes create more confusion than clarity. Imagine a retailer examining customer purchase behavior. Does the business need every single click, scroll, and hover to optimize their strategy? Probably not. Often, the key insights lie in just a fraction of the data.&lt;br&gt;&lt;br&gt;
Consider a recent project I worked on involving eCommerce analytics. The client initially wanted to analyze every conceivable data point across millions of transactions. However, by focusing on just five metrics—cart abandonment rate, average order value, repeat customer rate, product views, and search-to-purchase conversion—we achieved actionable insights faster, with far less complexity. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example: Data Ingestion in IT&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;Let’s pivot to an IT-specific example—data ingestion in a cloud-based architecture. Imagine an enterprise that ingests terabytes of log data from IoT devices daily. The raw data includes timestamps, device IDs, error codes, environmental metrics, and more. Initially, the IT team attempted to process all incoming data in real-time, leading to high operational costs and system slowdowns.&lt;br&gt;&lt;br&gt;
By reassessing their strategy, the team realized that not all data points were critical for their objective—detecting device anomalies. They identified three key metrics: error codes, device IDs, and timestamps. Filtering the raw data stream to retain only these metrics reduced ingestion volume by 70%, dramatically cutting storage costs and improving processing speeds. The refined pipeline not only met performance goals but also simplified downstream analytics.&lt;br&gt;&lt;br&gt;
This example highlights how focusing on the signal—the most relevant data—rather than the noise can drive efficiency and clarity in IT systems.&lt;/p&gt;
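
&lt;p&gt;To make that concrete, here is a minimal sketch of this kind of filtering step in Node.js. The record shape and field names (timestamp, deviceId, errorCode) are assumptions for illustration, not the actual pipeline from the project.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Each raw record may carry many fields (environmental metrics, firmware info, etc.),
// but anomaly detection only needs three of them.
function extractAnomalySignals(records) {
  return records
    .filter((record) =&amp;gt; Boolean(record.errorCode))
    .map((record) =&amp;gt; ({
      timestamp: record.timestamp,
      deviceId: record.deviceId,
      errorCode: record.errorCode,
    }));
}

// Example: a wide record shrinks to the three fields we actually ingest.
const raw = [
  { timestamp: "2025-01-01T10:00:00Z", deviceId: "sensor-42", errorCode: "E101", temperature: 21.4, humidity: 0.55 },
  { timestamp: "2025-01-01T10:00:05Z", deviceId: "sensor-42", errorCode: null, temperature: 21.5, humidity: 0.54 },
];
console.log(extractAnomalySignals(raw)); // keeps only the first record, trimmed to three fields
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;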

&lt;p&gt;&lt;strong&gt;Signal vs. Noise&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;Not all data points carry equal weight. When collecting data, distinguishing between “signal” (valuable insights) and “noise” (extraneous information) is critical. A marketing campaign’s success, for example, may hinge on just a few factors: the target audience, the timing, and the message. Adding excessive layers of demographic or behavioral data can obscure rather than clarify the picture.&lt;br&gt;&lt;br&gt;
A memorable moment in my career involved helping a healthcare company predict patient no-shows. The initial model used over 100 variables, from weather patterns to patient health records. By eliminating irrelevant predictors, we reduced the model to 12 essential factors, improving accuracy and cutting down processing time.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Cost of Data Overload&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;Collecting and storing massive amounts of data isn’t just inefficient; it’s costly. Cloud storage bills skyrocket, and processing power demands increase exponentially. Moreover, the human cost of analyzing unnecessary data—time spent by analysts, decision-makers, and developers—can be immense. A leaner data strategy often translates to more focused teams and faster results.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to Decide What Data Matters&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Define the Decision First:&lt;/strong&gt; Begin with the decision you need to make, then identify the data required to inform it. This prevents the “data for data’s sake” trap.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prioritize Key Metrics:&lt;/strong&gt; Ask, “What is the minimum amount of data needed to make this decision?” Focus on high-impact metrics.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Validate Regularly:&lt;/strong&gt; Test whether adding more data improves your decisions. If not, it’s noise.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Leverage Sampling:&lt;/strong&gt; Sometimes, a well-constructed sample is all you need (see the sketch after this list). This is particularly true in scenarios like surveys, polling, or quality control.
&lt;/li&gt;
&lt;/ul&gt;
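
&lt;p&gt;As a small illustration of the sampling point above, here is a minimal Node.js sketch of simple random sampling without replacement. The 1% sample size and the variable names are placeholders; real survey or quality-control work would typically add stratification on top of this.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Draw a simple random sample of up to `size` items without replacement.
// Partial Fisher-Yates shuffle: after the loop, the first n slots hold the sample.
function sampleWithoutReplacement(items, size) {
  const copy = items.slice();
  const n = Math.min(size, copy.length);
  for (let i = 0; i !== n; i += 1) {
    const j = i + Math.floor(Math.random() * (copy.length - i));
    [copy[i], copy[j]] = [copy[j], copy[i]];
  }
  return copy.slice(0, n);
}

// Example: inspect a 1% sample instead of scanning every record.
// const sample = sampleWithoutReplacement(allRecords, Math.ceil(allRecords.length * 0.01));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;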

&lt;p&gt;&lt;strong&gt;Final View&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;The world is inundated with data, but more isn’t always better. Often, clarity comes from pruning excess information and focusing on what truly matters. By embracing a “less is more” philosophy, we not only make decisions faster but often make better ones too. So, the next time you’re faced with a mountain of data, pause and ask yourself: Do I really need all of this to decide?&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Large Dataset - Pipeline, Seamless Scale-Up and Scale-Down</title>
      <dc:creator>Vaibhav Bhutkar</dc:creator>
      <pubDate>Tue, 31 Dec 2024 14:49:32 +0000</pubDate>
      <link>https://dev.to/vaibhav9017/large-dataset-pipeline-seamless-scale-up-and-scale-down-52bj</link>
      <guid>https://dev.to/vaibhav9017/large-dataset-pipeline-seamless-scale-up-and-scale-down-52bj</guid>
      <description>&lt;p&gt;In today’s data-driven world, managing and processing large datasets efficiently is more than a technical challenge—it’s a business necessity. As organizations increasingly rely on data to make decisions, the need to adapt quickly to changing data loads has become critical. But what happens when your data processing needs spike unexpectedly, or when they drop significantly during low-usage periods? This blog explores how to handle and optimize pipelines for large datasets, focusing on scaling them up and down efficiently, with a spotlight on Azure Data Factory (ADF).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Challenge of Large Dataset Pipelines&lt;/strong&gt; &lt;br&gt;
Handling large datasets isn’t just about storage; it’s about ensuring that data can flow smoothly from one location to another without bottlenecks. The sheer volume of data can overwhelm pipelines, causing latency, failed processes, or even total system shutdowns. Traditional data pipelines often struggle to adapt to fluctuating loads, leading to inefficiencies and increased costs. Scalability isn’t just a nice-to-have feature; it’s a core requirement for any modern data processing architecture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Building Scalable Pipelines with Azure Data Factory&lt;/strong&gt;&lt;br&gt;
Azure Data Factory (ADF) is a powerful tool designed to handle data integration and processing at scale. Its flexible architecture allows for dynamic scaling, enabling businesses to manage even the most demanding data loads efficiently. ADF’s features—such as parallelism, partitioning, and data flows—provide the foundation for building pipelines that can adjust to varying workloads. With ADF, you can create pipelines that scale up during peak data loads and scale down during quieter periods, ensuring both performance and cost-efficiency. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scaling Up: When Data Grows Beyond Expectations&lt;/strong&gt;&lt;br&gt;
Scaling up is essential when your data volume increases rapidly, whether due to seasonal trends, marketing campaigns, or other unexpected events. Azure’s elasticity is a game-changer here. By leveraging ADF’s dynamic scaling capabilities, you can:     &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Increase Compute Resources&lt;/strong&gt;: Auto-scaling integration runtimes allow your pipelines to handle larger volumes of data without manual intervention. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Optimize Parallel Processing&lt;/strong&gt;: Splitting data into smaller partitions and processing them simultaneously boosts throughput (a conceptual sketch follows this list). &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Adjust Triggers Dynamically&lt;/strong&gt;: Modify pipeline triggers to accommodate increased data ingestion rates, ensuring that no data is left unprocessed.&lt;br&gt;
For instance, imagine a scenario where an eCommerce platform experiences a sudden surge in transactions during a flash sale. By dynamically scaling the ADF pipeline, you can ensure all transaction data is processed in near real-time, maintaining seamless operations and customer satisfaction. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
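
&lt;p&gt;As a conceptual illustration of the partition-and-parallelize idea above (outside ADF itself, which handles this through its copy and data flow settings), here is a minimal Node.js sketch. The partition size and the processPartition callback are placeholders.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Split a large dataset into fixed-size partitions (partitionSize must be greater than zero).
function partition(rows, partitionSize) {
  const partitions = [];
  let start = 0;
  while (start !== rows.length) {
    const end = Math.min(start + partitionSize, rows.length);
    partitions.push(rows.slice(start, end));
    start = end;
  }
  return partitions;
}

// Process every partition concurrently ("scale up"). To scale back down,
// reduce the partition count or await the partitions one at a time.
async function processAll(rows, partitionSize, processPartition) {
  const parts = partition(rows, partitionSize);
  await Promise.all(parts.map(processPartition));
  return parts.length;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;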

&lt;p&gt;&lt;strong&gt;Scaling Down: Managing Costs During Downtime&lt;/strong&gt; &lt;br&gt;
While scaling up is crucial for performance, scaling down is equally important for cost management. During periods of low data activity, unused resources can lead to unnecessary expenses. ADF provides tools to:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deallocate Resources Automatically:&lt;/strong&gt; By identifying low-priority operations, you can pause or terminate them during downtimes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Minimize Redundancies:&lt;/strong&gt; Streamline your pipelines to avoid duplicate processes and reduce resource consumption.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Monitor Usage Patterns:&lt;/strong&gt; Use Azure Monitor to analyze pipeline activity and adjust resource allocation accordingly.&lt;br&gt;
Consider a financial services firm processing customer data. During weekends or holidays, data activity might drop significantly. By scaling down resources, the firm can maintain its pipelines’ functionality while optimizing costs. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Overcoming Challenges in Scaling Pipelines&lt;/strong&gt; &lt;br&gt;
Scaling pipelines isn’t without its challenges. Common issues include latency during scale-up, resource contention, and managing pipeline dependencies. To address these:     &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Optimize Latency and Throughput:&lt;/strong&gt; Use ADF’s built-in data movement features to reduce delays.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Implement Robust Monitoring:&lt;/strong&gt; Set up alerts and dashboards to track performance and identify bottlenecks early.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Design for Resilience:&lt;/strong&gt; Build fault-tolerant pipelines that can recover quickly from failures. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Real-Life Application: ADF Pipelines in Action&lt;/strong&gt; &lt;br&gt;
In my experience working with Azure Data Factory, I encountered a scenario where incremental database backups in .bak files needed to be migrated to an existing SQL database. The challenge lay in managing large volumes of backup data efficiently. By implementing dynamic scaling, we ensured that the pipeline adapted to varying data loads, completing the migration process seamlessly without over-provisioning resources. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Future Trends in Scalable Data Pipelines&lt;/strong&gt; &lt;br&gt;
As technology evolves, the future of scalable pipelines looks even brighter. AI and machine learning are set to play a significant role in predictive scaling, allowing pipelines to anticipate and prepare for data load changes proactively. Innovations in cloud-native tools, like enhanced ADF features, will make handling large datasets even more efficient. Businesses must also prepare for hybrid and multi-cloud environments, which add another layer of complexity to scalability. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt; &lt;br&gt;
By designing adaptable pipelines with tools like Azure Data Factory, organizations can optimize costs, improve reliability, and respond faster to changing data demands. Whether you’re dealing with sudden spikes in data volume or prolonged periods of low activity, a well-designed scalable pipeline is your key to success.&lt;br&gt;&lt;br&gt;
What strategies have you implemented to scale your data pipelines? Share your experiences or reach out to discuss specific challenges and solutions in your projects. &lt;/p&gt;

</description>
    </item>
    <item>
      <title>Importance of Design in Development!</title>
      <dc:creator>Vaibhav Bhutkar</dc:creator>
      <pubDate>Mon, 30 Sep 2024 17:11:20 +0000</pubDate>
      <link>https://dev.to/vaibhav9017/importance-of-design-in-development--1jaa</link>
      <guid>https://dev.to/vaibhav9017/importance-of-design-in-development--1jaa</guid>
      <description>&lt;p&gt;When we start working on any project, it's tempting to dive straight into coding. The excitement of building something quickly, especially when we are familiar with the tools, often leads us to skip essential steps like system design. However, over time, I have learned how crucial it is to carefully design a system before jumping into development. I have been involved in multiple projects, analysing requirements and designing systems based on those needs. Skipping this process can lead to a range of problems, which I will discuss here. &lt;br&gt;
With this blog, I want to take you through the importance of system design, some of its benefits, the potential problems when it's skipped, and what different perspectives - developers, clients, and stakeholders - bring to the table. I will also share a real-life sample design to illustrate my points.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why System Design&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;System design is like the blueprint for building a house. You wouldn't build a house without a plan, so why build an application without a solid design? The design ensures that the architecture is scalable, maintainable, and meets the project requirements. Here's why system design is important. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Clear Roadmap&lt;/strong&gt;: It offers a clear path for both developers and stakeholders. It tells developers what needs to be built and gives the client and stakeholders a concrete understanding of what the final product will look like.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt;: A well-designed application or system is flexible and can scale with the growth of data, users, and functionality. In the real world, applications often expand beyond their initial scope, and good design ensures the system can handle that. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Risk Mitigation&lt;/strong&gt;: When you design a system properly, you identify potential issues early on. This proactive approach reduces the risk of expensive rework later in the project. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Collaboration and Communication&lt;/strong&gt;: Design facilitates better communication between teams. It keeps everyone on the same page - whether you're a developer, client, or project manager. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Problems when we ignore design&lt;/strong&gt;:&lt;br&gt;
Skipping or ignoring system design can lead to numerous challenges during the project lifecycle. Here are a few that I have come across.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Unclear Requirements&lt;/strong&gt;: Without a design, developers often work based on assumptions. This leads to misunderstandings, which result in functionality that doesn't align with client expectations.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rework and Delays&lt;/strong&gt;: A lack of design often means going back to fix things. This not only delays the project but can also increase the project cost. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Technical Debt&lt;/strong&gt;: When systems aren't designed properly from the beginning, the quick fixes used to compensate add to technical debt. Over time, this slows down the application and increases the maintenance burden. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Benefits of Proper System Design&lt;/strong&gt;&lt;br&gt;
On the other hand, a solid design provides numerous advantages: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Efficiency&lt;/strong&gt;: With a clear design, development becomes more efficient because everyone knows exactly what they need to build.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Predictable Outcomes&lt;/strong&gt;: With a blueprint in hand, you can predict more accurately what the final outcome will look like.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Easier Maintenance&lt;/strong&gt;: A well-thought-out design makes future updates and maintenance simpler because the structure of the code is cleaner.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Sample System Designs&lt;/strong&gt;&lt;br&gt;
Let me share a couple of system design samples that I have been working on.&lt;br&gt;
&lt;strong&gt;Data Ingestion Flow&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbhyembg56rzazg0dewp6.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbhyembg56rzazg0dewp6.jpeg" alt="Image description" width="800" height="783"&gt;&lt;/a&gt;&lt;br&gt;
This flow illustrates the design of a data ingestion pipeline. We designed this flow to fetch files from an FTP server, process them on a VM using an automated PowerShell script, and push the incremental backup data to a SQL Database.&lt;br&gt;
&lt;strong&gt;Why this design&lt;/strong&gt;: We needed an automation solution that could handle daily backups without manual intervention. The design ensures the scalability of the pipeline and handles incremental updates to avoid redundant data processing. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Perspective of System Design&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Client's Perspective&lt;/strong&gt;: Clients appreciate seeing a system design because it gives them a clear visual representation of how their application will work. It's easier for them to provide feedback before development starts, which minimizes last-minute changes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Developer's Perspective&lt;/strong&gt;: As a developer, I can say a well-documented design simplifies the coding process. It removes ambiguity, which helps in avoiding unnecessary confusion or mistakes. With a design in hand, you know what your starting point is and what the end goal looks like. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stakeholder's Perspective&lt;/strong&gt;: For stakeholders and third parties, system design ensures transparency. It provides a level of trust that the project is being developed thoughtfully and will meet their needs in terms of both functionality and scalability.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;:&lt;br&gt;
System design is the backbone of any successful project. It aligns everyone involved in the project, mitigates risks, and ensures the application is built to scale efficiently. Whether you're working on a small internal tool or a large-scale enterprise application, never underestimate the power of good system design. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recommended Reading&lt;/strong&gt;:&lt;br&gt;
&lt;strong&gt;Clean Architecture&lt;/strong&gt; by Robert C. Martin&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Accelerated Integration: How We Met Our Deadline Ahead of Schedule.</title>
      <dc:creator>Vaibhav Bhutkar</dc:creator>
      <pubDate>Fri, 28 Jun 2024 16:37:54 +0000</pubDate>
      <link>https://dev.to/vaibhav9017/accelerated-integration-how-we-met-our-deadline-ahead-of-schedule-57fl</link>
      <guid>https://dev.to/vaibhav9017/accelerated-integration-how-we-met-our-deadline-ahead-of-schedule-57fl</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;: This blog is about the journey of an integration project. I am sure everyone today is aware of integration, but let me still highlight what it is. In simple words, integration is nothing but communication, or the passing of data, between two systems through a single communication channel.&lt;br&gt;&lt;br&gt;
In line with that definition, we had a project to integrate Salesforce with WordPress. I won't go into the details of the project itself - that's a story for another time. &lt;/p&gt;

&lt;p&gt;In today's world of technology, things can change very quickly. Recently, we faced a big challenge: the deadline of our ongoing project was moved up. Work had been progressing against the original deadline, but the client suddenly changed it. With this change, we found that we didn't have much time left to complete our task, which was to integrate Salesforce with WordPress. In short, it is a two-way integration: data flows from Salesforce to WordPress and vice versa, and both systems are crucial for the client.  &lt;/p&gt;

&lt;p&gt;The project had been going smoothly, but suddenly we were in trouble, and at first I wasn't even sure how to handle the situation. It was a very challenging and tough position for us. I reached out to our manager, and after discussing the revised plan and the current state of the project, we got more resources and a target for the end goal. You might think that resolved the issue - we had more people, so we could complete the project and meet the deadline - but it's not so easy! This is actually where the real problem starts. The project was partially built, and the new resources were now joining it. Our main aim was to meet the deadline, but with new people on board the challenge grew: we had to make each team member productive from day one. Here's how we did it!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Challenge&lt;/strong&gt;&lt;br&gt;
Our project was originally planned to be completed over several months. That would have given us enough time to carefully carry out development and complete the integration between Salesforce and WordPress. However, the deadline suddenly changed, and we had to deliver the project much sooner than expected.   &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Expanding the Team&lt;/strong&gt;&lt;br&gt;
Our first step was to bring in new resources or team members. This wasn't just about having more hands on deck; it was about having the right skills and ensuring everyone could hit the ground running. Everyone stayed focused and confident, because every team member is an important and valuable asset for us. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;How we achieved that speed is described below.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Getting Everyone Up to Speed&lt;/strong&gt;&lt;br&gt;
To make sure every team member was productive from the day they joined the team, we took the following steps.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Clear Communication:&lt;/strong&gt; We made sure that everyone understood the project flow, the project goals, their roles, and, importantly, the new timeline. This helped align everyone's efforts. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Design Diagram:&lt;/strong&gt; We prepared an architectural design diagram of the project, broken down to the small-unit level as well. As soon as a member joined, we shared that diagram and explained the project over it. This helps a new team member far more than reading through project documentation. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Mentorship:&lt;/strong&gt; We paired each new team member with an experienced one, meaning someone who had been working on the project for a long time. This not only sped up learning but also built a collaborative team culture. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;DSM:&lt;/strong&gt; Everyone is familiar with the daily standup meeting (DSM). We held daily standups to track progress and address any issue immediately. This kept everyone on the same page, allowed for quick adjustments, and gave us a clear picture of each day's output.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Daily Check-in/Unit Testing:&lt;/strong&gt;&lt;br&gt;
We also implemented a unit test suite in Postman; whenever we checked code in to the test environment, we ran the suite. This gave us a clear indication that everything going to the production server was error-free.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Reporting:&lt;/strong&gt; Each day, we reported back to our stakeholders, detailing the tasks we had completed. This ensured transparency and kept everyone informed about our progress.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Outcome / Result:&lt;/strong&gt;&lt;br&gt;
By focusing on the key areas above, we turned a potential crisis into a success story. The expanded team worked efficiently and confidently, and we met the new deadline without compromising on quality. The integration of WooCommerce and Salesforce was completed smoothly. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What we learned:&lt;/strong&gt;&lt;br&gt;
Below are some key takeaway from this experience: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Be Flexible:&lt;/strong&gt; Being able to adapt quickly to changing circumstances is crucial.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Teamwork:&lt;/strong&gt; A well-coordinated team can achieve great results, even under pressure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Preparation:&lt;/strong&gt; Having a solid onboarding process can make a big difference in how quickly new team members become productive.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Communication:&lt;/strong&gt; Regular updates and clear communication are vital to keep the project on track.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Accelerated integration is challenging, but with the right approach, it's possible to turn it into an opportunity for success. By expanding our team, ensuring everyone was productive from the start, and maintaining clear communication through daily reports, we were able to meet our tight deadline and deliver a high-quality integration of Salesforce and WordPress.  &lt;/p&gt;

</description>
      <category>acceleratedintegration</category>
      <category>teamwork</category>
      <category>rapiddevelopment</category>
      <category>deadlinedriven</category>
    </item>
    <item>
      <title>CopyData from REST API Application into an Azure SQL Database with extra Column; using Azure Data Factory.</title>
      <dc:creator>Vaibhav Bhutkar</dc:creator>
      <pubDate>Mon, 24 Jun 2024 12:56:47 +0000</pubDate>
      <link>https://dev.to/vaibhav9017/copydata-from-rest-api-application-into-an-azure-sql-database-with-extra-column-using-azure-data-factory-3o5g</link>
      <guid>https://dev.to/vaibhav9017/copydata-from-rest-api-application-into-an-azure-sql-database-with-extra-column-using-azure-data-factory-3o5g</guid>
      <description>&lt;p&gt;&lt;strong&gt;Requirement&lt;/strong&gt;:&lt;br&gt;
Our requirement is to get data from a REST API and transfer it to an Azure SQL table, and along with that data we have to write one extra column value to the Azure SQL instance.&lt;br&gt;
To achieve this use case, we first have to understand the data coming from the REST API call, so try to execute the API call through Postman once if possible. In this tutorial I am going to explain how we can call a REST API through ADF and store the received data in an Azure SQL database along with an extra column.&lt;br&gt;
For this tutorial we are using a REST API call as the source and Azure SQL as the target database.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisite :&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create an Azure Data Factory instance in Azure. &lt;/li&gt;
&lt;li&gt;Create Azure SQL instance in Azure.&lt;/li&gt;
&lt;li&gt;Get an API key for the REST call, if authorization is required.&lt;/li&gt;
&lt;li&gt;Postman, to check the response of the API call.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Set up a Linked Service for the REST API:&lt;/strong&gt;&lt;br&gt;
A linked service is like a connection string; it defines the connection information needed for the service to connect to other sources. To create the linked service for the source, i.e. the REST API, go to the ADF workspace and click "Manage" in the left-side menu. Under linked services click "New", then select REST and configure the required connection details; refer to the images below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frtz2plihxw6wkjq1c4hq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frtz2plihxw6wkjq1c4hq.png" alt="Image description" width="800" height="217"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjir5bd9oi80l04bapt4n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjir5bd9oi80l04bapt4n.png" alt="Image description" width="800" height="834"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create a Dataset for the REST API&lt;/strong&gt;&lt;br&gt;
To create the dataset for the REST API, go to the Author section in the left panel, click on Datasets, and click "New Dataset." Select "REST" as the data store and click continue. Next, link the linked service created in the step above and click OK. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxpvk5dv9z4o95si7zi4l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxpvk5dv9z4o95si7zi4l.png" alt="Image description" width="800" height="213"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgrcefczgf66elursi0s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgrcefczgf66elursi0s.png" alt="Image description" width="800" height="134"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Set up a Linked Service for Azure SQL DB:&lt;/strong&gt;&lt;br&gt;
To create the linked service for Azure SQL DB, go to the ADF workspace and click "Manage" in the left-side menu. Under linked services click "New", then select Azure SQL Database and configure the connection. Give the linked service a name, then select "AutoResolveIntegrationRuntime" as the integration runtime. Next, select the Azure subscription where the database is located. Then provide all details such as the server name, username, password, and database name to connect. Lastly, click create. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftsupdqp603yqbwrb9e8h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftsupdqp603yqbwrb9e8h.png" alt="Image description" width="800" height="187"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7fry87xpbpfff6z1nla.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7fry87xpbpfff6z1nla.png" alt="Image description" width="800" height="714"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create a Dataset for Azure SQL DB:&lt;/strong&gt;&lt;br&gt;
Just as the dataset was created for the REST API service, create a dataset for Azure SQL DB in the same way. Go to the Author panel in the left menu, select the Datasets section, and click "New Dataset". Select "Azure SQL Database" as the data store and click continue. Next, link the linked service created above for the Azure database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fadkt3umfmal8j9oyaq10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fadkt3umfmal8j9oyaq10.png" alt="Image description" width="800" height="578"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create Pipeline and Add Copy Activity:&lt;/strong&gt;&lt;br&gt;
Now we are ready with the dataset for the Azure SQL Database, which is the destination, and the REST API service, which acts as the source in our use case. To create the activity, follow the steps below. &lt;br&gt;
Go to the "Author" section.&lt;br&gt;
Click on "Pipelines" and then "New Pipeline".&lt;br&gt;
Add copy activity to the pipeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F51j9ca3cna6tbl6rsz1t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F51j9ca3cna6tbl6rsz1t.png" alt="Image description" width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Drag "Copy Data" activity from Activities pane to the pipeline canvas.&lt;br&gt;
In the "Source tab", select "REST API" dataset. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffap8ud0bejnpvscv1ewf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffap8ud0bejnpvscv1ewf.png" alt="Image description" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the "Sink" tab, select the Azure SQL Database dataset.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fosqqmyz87bphandiff0s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fosqqmyz87bphandiff0s.png" alt="Image description" width="800" height="463"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This completes the Copy Data activity from the REST API to the Azure SQL Database. Next, we need to check the API response and decide on the mapping in the ADF pipeline based on it. For that, execute the API through Postman and inspect the result. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj5m94ypag8wy9jiirpf7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj5m94ypag8wy9jiirpf7.png" alt="Image description" width="800" height="257"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Add an Extra Column:&lt;/strong&gt;&lt;br&gt;
Now we need to add one extra column to the destination Azure SQL table. To achieve this, add an additional column on the source side and set its name and value: use "Custom" if the value is custom, or use it as a parameter if you are passing it from another activity. Here we are creating a custom column. Set the column name and its value (we are passing a static value here). This completes our entire setup for getting data from the REST API and pushing it to the Azure SQL Database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmtasn6gxlz62p59asbad.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmtasn6gxlz62p59asbad.png" alt="Image description" width="800" height="148"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Test and Run:&lt;/strong&gt;&lt;br&gt;
Run the pipeline in debug mode to test the data flow and ensure everything is working as expected.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Publish:&lt;/strong&gt;&lt;br&gt;
Once the pipeline is tested and validated, publish all the changes to make the pipeline available for scheduled runs or trigger-based executions. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;br&gt;
By following all the steps above, you should be able to create a robust ADF pipeline that fetches data from a REST API and stores it in an Azure SQL Database with the necessary columns.  &lt;/p&gt;

</description>
      <category>node</category>
      <category>azuredatafactory</category>
      <category>javascript</category>
      <category>data</category>
    </item>
    <item>
      <title>Generate SAS token for Azure API Management with Node.js.</title>
      <dc:creator>Vaibhav Bhutkar</dc:creator>
      <pubDate>Thu, 06 Jul 2023 07:44:36 +0000</pubDate>
      <link>https://dev.to/vaibhav9017/generate-sas-token-for-azure-api-management-using-node-js-3gl2</link>
      <guid>https://dev.to/vaibhav9017/generate-sas-token-for-azure-api-management-using-node-js-3gl2</guid>
      <description>&lt;p&gt;Azure API Management is a platform provided by Microsoft Azure that enables organizations to publish, secure, manage, and analyze their APIs. APIs allow different software applications to communicate and interact with each other. Azure API Management simplifies the process of creating, maintaining, and deploying APIs. &lt;/p&gt;

&lt;p&gt;This blog will help you generate an access token programmatically using Node.js. This token is used to make direct calls to the Azure API Management REST API. If you want sample code in C# .NET, refer to this link (&lt;a href="https://learn.microsoft.com/en-us/rest/api/apimanagement/apimanagementrest/azure-api-management-rest-api-authentication"&gt;https://learn.microsoft.com/en-us/rest/api/apimanagement/apimanagementrest/azure-api-management-rest-api-authentication&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;It is also possible to create the SAS token manually; for that, use the URL above or navigate to the Azure portal and generate the SAS token from there. &lt;br&gt;
Log in to the Portal - Azure API Management Services - Deployment + Infrastructure - Management API - Generate Token &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qDiFy4Oe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/unuvfu1sug7q5fvzq1sa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qDiFy4Oe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/unuvfu1sug7q5fvzq1sa.png" alt="Image description" width="343" height="657"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Mention the expiry date for the token in the Expiry text box. &lt;br&gt;
Generate the Token Programmatically Through Code: &lt;br&gt;
    1. Construct a sign-in string in the format below - &lt;br&gt;
        {identifier} + "\n" + {expiry}&lt;br&gt;
        Here, identifier is the Identifier field from the Management API tab of the Azure API Management instance, &lt;br&gt;
        and expiry is the desired expiry date of the SAS token. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    const expiry = new Date();
    expiry.setDate(expiry.getDate() + 10);
    const expiryString = `${expiry.toISOString().split(".")[0]}.${formatMilliseconds(expiry.getMilliseconds())}Z`;
    const encoder = crypto.createHmac("sha512",Buffer.from(AZ_APIM_KEY, "utf8"));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;(AZ_APIM_KEY is the API key stored as a constant - please use your own key from Azure.)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2. Need to generate a signature by applying HMAC-SHA512 hash function to sign in string using key. Base 64 encode returned signature key.

        const dataToSign = `integration\n${expiryString}`;
        const dataToSignBytes = encoder.update(dataToSign, "utf8").digest();
        const signature = dataToSignBytes.toString("base64");

3. Finally created access token in below format.
    a. uid= {identifier}&amp;amp;ex={expiry}&amp;amp;sn={Base64 encoded signature format}

e.g. Token generated here is with above example -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Token = &lt;code&gt;SharedAccessSignature uid=${AZ_APIM_IDENTIFIER}&amp;amp;ex=${expiryString}&amp;amp;sn=${signature}&lt;/code&gt;;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Following is the full code for token generation using Node.js -&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const createToken = async () =&amp;gt; {
  try {
    const expiry = new Date();
    expiry.setDate(expiry.getDate() + 10);

    const expiryString = `${expiry.toISOString().split(".")[0]}.${formatMilliseconds(expiry.getMilliseconds())}Z`;
    const encoder = crypto.createHmac("sha512",Buffer.from(AZ_APIM_KEY, "utf8"));

    const dataToSign = `${AZ_APIM_IDENTIFIER}\n${expiryString}`;
    const dataToSignBytes = encoder.update(dataToSign, "utf8").digest();
    const signature = dataToSignBytes.toString("base64");

    const token = `SharedAccessSignature uid=${AZ_APIM_IDENTIFIER}&amp;amp;ex=${expiryString}&amp;amp;sn=${signature}`;
    return token;

  } catch (error:any) {
    console.log(error);
    // logs is an application-specific logging helper; replace it with your own logger.
    logs.insertLog(new Date(), "Error", "Crosswalk", "Expiration Utility", "1.0", "createToken", error.exceptionType, "Error observed while creating token", error.message, "", error.source, error.stackTrace);
  }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
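
&lt;p&gt;The formatMilliseconds helper referenced above is not shown in the original snippet. A minimal sketch, assuming it simply zero-pads the millisecond part of the expiry timestamp, could look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Zero-pads the millisecond value so the expiry string keeps a fixed-width fraction,
// e.g. 7 becomes "007". Adjust the padding if your expiry format needs more digits.
const formatMilliseconds = (milliseconds) =&amp;gt; String(milliseconds).padStart(3, "0");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;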

&lt;p&gt;Use this access token as the Authorization header for further API calls to Azure API Management, for example to change the subscription status or subscriber status, depending on the requirement. Refer to the URL below for updating a user or subscription.&lt;br&gt;
&lt;a href="https://learn.microsoft.com/en-us/rest/api/apimanagement/current-ga/user/update?tabs=HTTP"&gt;https://learn.microsoft.com/en-us/rest/api/apimanagement/current-ga/user/update?tabs=HTTP&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const url = `${baseURL}/subscriptions/{Your details for url}`;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here baseURL is a constant - construct your base URL from your API Management service URL, subscription ID, resource group, and so on. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    const payload = {
      properties: {
        state: `expired`,
      },
    };
    const token = await createToken();
    const response = await axios.patch(url, payload, {
      headers: {
        Authorization: `${token}`,
        "content-type": "application/json",
      },
    });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Execute the above call to apply the change in Azure API Management. &lt;/p&gt;

</description>
    </item>
    <item>
      <title>A Guide to Embedded App Development with Procore</title>
      <dc:creator>Vaibhav Bhutkar</dc:creator>
      <pubDate>Mon, 03 Jul 2023 10:34:11 +0000</pubDate>
      <link>https://dev.to/vaibhav9017/enhancing-construction-management-a-guide-to-embedded-app-development-with-procore-4im3</link>
      <guid>https://dev.to/vaibhav9017/enhancing-construction-management-a-guide-to-embedded-app-development-with-procore-4im3</guid>
      <description>&lt;p&gt;Nowadays, the construction industry is undergoing a digital transformation, and software platforms like Procore are leading the way in revolutionizing construction project management. With Procore's open API, developers have the opportunity to extend the platform's functionality by creating embedded apps. In this blog post we are going to discuss the world of embedded app development with Procore: the development points, the benefits, and so on. &lt;/p&gt;

&lt;p&gt;Before going deeper, let's understand what an embedded application is.&lt;br&gt;
    It’s software that is designed and integrated into a larger system or device to perform specific functions. The term "embedded" implies that the software is embedded within another application or the firmware of a device, rather than being a standalone application running on a general-purpose computer or device. &lt;br&gt;
    Embedded apps have limited user interfaces, as they primarily focus on performing specific tasks behind the scenes. &lt;br&gt;
Now that we have some idea about embedded applications, and before starting actual development, let's understand more about Procore's API. I am going to walk through what a developer needs to do to build an embedded application for Procore. We will cover the technical points required for app development; we will not cover the sign-off documents and other process-related steps. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Set up a Developer Account:&lt;/strong&gt; &lt;br&gt;
    As a developer, you need to sign up at the developer portal (&lt;a href="https://developers.procore.com/signup"&gt;https://developers.procore.com/signup&lt;/a&gt;) to build an application. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--StuEXuqQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9nynlovfwduqfxbdsobu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--StuEXuqQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9nynlovfwduqfxbdsobu.png" alt="Image description" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Once done with registration at developer portal. User or developer need to create new app at Procore development environment. Once developer creates app will get Procore Client ID, Client Secret, Sandbox environment and URL etc and access to the app page. These details are required for User/Developer for implementing authentication with Procore.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Define Your App's Purpose:&lt;/strong&gt;&lt;br&gt;
    Clearly define the purpose of your embedded app. Determine the problem it aims to solve or the specific functionality it should provide within the Procore ecosystem. Consider the needs of Procore users and identify gaps or opportunities where your app can add value.&lt;br&gt;
    Our app's purpose is simple: "Our application can be used by any Procore user. To use it, the user needs to buy an API key (or license) and use that key in the application." Here, we want the details of the user who is going to use or install our application.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Plan your application:&lt;/strong&gt;&lt;br&gt;
    Define your application architecture and workflow. Define the key functionality and features you are offering with the application. Work out how your application will interact with Procore users and how it will integrate with Procore data. Draw some architectural diagrams and run all the envisaged use cases against them.&lt;br&gt;
&lt;strong&gt;Review Procore's API Documentation:&lt;/strong&gt; &lt;br&gt;
    To understand the available endpoints, data model, and authentication methods, thoroughly review Procore's API documentation. This documentation (&lt;a href="https://developers.procore.com/documentation/introduction"&gt;https://developers.procore.com/documentation/introduction&lt;/a&gt;) will guide you in making API calls to fetch and manipulate data within Procore.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build and Test the App:&lt;/strong&gt;&lt;br&gt;
    Start building your application as defined in the architectural diagram. Utilize the available API endpoints and stack to integrate the app with Procore. Try to follow the best practices provided by Procore for handling the data. Ensure your application provides all the functionality as planned. &lt;br&gt;
    While building the application, one point I noticed is that we need to embed this app inside an iframe. So test and verify whether every single piece of functionality works under an iframe. Use this URL (&lt;a href="https://iframetester.com/"&gt;https://iframetester.com/&lt;/a&gt;) to test your application as an embedded app; a small runtime check is sketched below as well. &lt;/p&gt;
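
&lt;p&gt;One quick way to exercise the iframe-only code paths during testing is to detect whether the app is actually running inside an iframe. This is a small, generic browser-side sketch, not a Procore API.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Returns true when the page is embedded in an iframe (e.g. inside Procore),
// false when it is opened as a standalone tab during local testing.
function isEmbeddedInIframe() {
  try {
    return window.self !== window.top;
  } catch (err) {
    // Cross-origin access to window.top can throw; that also means we are framed.
    return true;
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;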

&lt;p&gt;&lt;strong&gt;Handle Sign in With Procore:&lt;/strong&gt;&lt;br&gt;
    If your application needs the details of the user who is using it, or you are implementing a data connector app within the iframe, then you need to implement the Sign in with Procore functionality. Implementing this inside an iframe is tricky because the Sign in with Procore page does not load inside an iframe. I implemented it using a window popup: when the user clicks the Sign in button, a popup window loads the sign-in screen, and once sign-in completes the user is redirected back to the main screen.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cWkoGEdh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lyrj9cmue0t70gp3wl2n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cWkoGEdh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lyrj9cmue0t70gp3wl2n.png" alt="Image description" width="800" height="459"&gt;&lt;/a&gt;&lt;/p&gt;
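
&lt;p&gt;A rough sketch of the popup approach is shown below. The authorize URL, client ID, and redirect URI are placeholders; use the OAuth values from your app page on the developer portal:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Sketch of "Sign in with Procore" via a popup window.
// All values below are placeholders - take the real authorize URL, client id,
// and redirect URI from your app's configuration on the developer portal.
function signInWithProcore() {
  const params = new URLSearchParams({
    response_type: 'code',
    client_id: 'YOUR_CLIENT_ID',
    redirect_uri: 'https://yourapp.example.com/callback'
  });
  const authorizeUrl = 'https://login.procore.com/oauth/authorize?' + params.toString();

  const popup = window.open(authorizeUrl, 'procore-signin', 'width=600,height=700');

  // The callback page posts the authorization code back to this window
  // (for example with window.opener.postMessage) and then closes itself.
  window.addEventListener('message', function handleMessage(event) {
    if (event.origin !== window.location.origin) { return; }
    window.removeEventListener('message', handleMessage);
    console.log('Signed in, authorization code:', event.data.code);
    if (popup) { popup.close(); }
  });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;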

&lt;p&gt;&lt;strong&gt;Client-side storage does not work:&lt;/strong&gt;&lt;br&gt;
    Nowadays most developers use client-side storage to keep key/value data, but this does not work for a Procore embedded app because the application runs inside an iframe. Client-side storage is blocked in an iframe when the parent and child domains differ, because of the cross-origin policy. &lt;br&gt;
    I ran into this issue while building our application; I tried almost all client-side storage approaches, but none of them worked with the Procore embedded app. This is an important point every developer needs to keep in mind. &lt;/p&gt;
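
&lt;p&gt;A quick way to detect this at runtime is to probe localStorage in a try/catch and fall back to in-memory or server-side state, as in this small sketch:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Check whether client-side storage is usable inside the embedded (iframed) app.
function storageAvailable() {
  try {
    const testKey = '__storage_test__';
    window.localStorage.setItem(testKey, 'ok');
    window.localStorage.removeItem(testKey);
    return true;
  } catch (err) {
    // Blocked, for example by third-party storage / cross-origin policies.
    return false;
  }
}

if (!storageAvailable()) {
  // Fall back to in-memory state or server-side storage instead.
  console.warn('Client-side storage is not available inside this iframe.');
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;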

&lt;p&gt;&lt;strong&gt;Submit Your App for Review:&lt;/strong&gt;&lt;br&gt;
    Once your app is developed and tested, submit it to Procore for review. Procore has a review process to ensure that apps meet their quality and security standards. Provide all the necessary information about your app, including its purpose, functionality, and integration points with Procore.&lt;/p&gt;

&lt;p&gt;This blog provides a general overview of the process for developing embedded apps with Procore. Refer to Procore's official documentation and resources for detailed instructions and the latest information.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Rest API vs Graph QL</title>
      <dc:creator>Vaibhav Bhutkar</dc:creator>
      <pubDate>Fri, 31 Mar 2023 11:10:43 +0000</pubDate>
      <link>https://dev.to/vaibhav9017/rest-api-vs-graph-ql-524p</link>
      <guid>https://dev.to/vaibhav9017/rest-api-vs-graph-ql-524p</guid>
      <description>&lt;p&gt;In this blog post we will discuss about what is Rest? What is GraphQL?. We will discuss about difference between Rest and GraphQL also we will check when to use Rest and GraphQL. &lt;br&gt;
Now a days, REST APIs and GraphQL are two popular technologies used for building APIs. This allows developers to interact with a server and retrieve data. Both REST and GraphQL have their own unique strengths and weaknesses, and which one to use depends on your specific use case. So, we will explore the differences between REST and GraphQL and provide some overview on when to use which technology.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is REST?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;REST stands for Representational State Transfer. REST is an architectural style used for building web services. RESTful APIs use HTTP requests to GET, POST, PUT, and DELETE data. RESTful APIs mostly return data in JSON or XML format.&lt;/p&gt;

&lt;p&gt;REST is based on a client-server architecture, which means that the client makes a request to the server, and the server responds with the requested data. RESTful APIs use URIs (Uniform Resource Identifiers) to identify resources, and HTTP methods are used to perform actions on those resources.&lt;/p&gt;

&lt;p&gt;RESTful APIs have been in use for a long time and are well established, with a large community and plenty of resources available for developers. They are straightforward to build and are compatible with a wide range of programming languages.&lt;/p&gt;

&lt;p&gt;An HTTP method describes the type of request that is sent to the server. The main methods are:&lt;/p&gt;

&lt;p&gt;GET - reads a representation of a specified resource.&lt;br&gt;
POST - creates a new resource.&lt;br&gt;
PUT - updates/replaces a resource.&lt;br&gt;
PATCH - partially modifies a resource.&lt;br&gt;
DELETE - deletes a resource.&lt;/p&gt;
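
&lt;p&gt;As an illustration, here is how those methods map onto plain fetch calls against a hypothetical /books resource (the endpoint is made up for the example):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Illustrative REST calls with the Fetch API against a made-up /books resource.
async function demoRestCalls() {
  // GET - read a resource
  const book = await fetch('https://api.example.com/books/42')
    .then(function (res) { return res.json(); });

  // POST - create a new resource
  await fetch('https://api.example.com/books', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ title: 'Clean Code' })
  });

  // PATCH - partially modify a resource
  await fetch('https://api.example.com/books/42', {
    method: 'PATCH',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ title: 'Clean Architecture' })
  });

  // DELETE - remove a resource
  await fetch('https://api.example.com/books/42', { method: 'DELETE' });

  return book;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;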

&lt;p&gt;&lt;strong&gt;What is GraphQL?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;GraphQL is a query language for APIs, developed by Facebook in 2012. GraphQL allows developers to request exactly the data they need, and nothing more. Unlike REST, which returns fixed data structures, GraphQL lets the client specify the shape of the data it needs, and the server returns only that data.&lt;/p&gt;

&lt;p&gt;GraphQL uses a schema to define the data that can be queried, and a query language to request that data. GraphQL responses are typically returned as JSON, with the shape determined by the client's query.&lt;/p&gt;

&lt;p&gt;GraphQL has become increasingly popular in recent years because it provides a more efficient and flexible way to retrieve data from an API. It eliminates the problems of &lt;strong&gt;over-fetching&lt;/strong&gt; and &lt;strong&gt;under-fetching&lt;/strong&gt; data, which can be an issue with RESTful APIs.&lt;/p&gt;
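
&lt;p&gt;For example, a client that only needs a book's title and its author's name can ask for exactly those fields and nothing else (the endpoint and schema here are illustrative only):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// The client asks for exactly the fields it needs; nothing more is returned.
async function fetchBookWithAuthor(bookId) {
  const response = await fetch('https://api.example.com/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      query: 'query ($id: ID!) { book(id: $id) { title author { name } } }',
      variables: { id: bookId }
    })
  });
  const payload = await response.json();
  return payload.data.book;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;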

&lt;p&gt;&lt;strong&gt;Differences between REST and GraphQL&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The core difference between GraphQL and REST APIs is that GraphQL is a specification and query language, while REST is an architectural style for network-based software.&lt;/p&gt;

&lt;p&gt;Another major difference between REST and GraphQL is how data is requested. In REST, the client makes a request to the server for a specific resource, and the server responds with all the data associated with that resource.&lt;br&gt;
With GraphQL, the client specifies the data it needs, and the server responds with only that data.&lt;/p&gt;

&lt;p&gt;Another difference is the way data is represented. RESTful APIs typically use JSON or XML to represent data, while GraphQL responses are typically JSON whose shape is defined by the client's query.&lt;/p&gt;

&lt;p&gt;RESTful APIs are straightforward to build and are compatible with a wide range of programming languages. GraphQL is more complex to build, but it provides a more efficient and flexible way to retrieve data from an API.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9S0i6G5H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m33a0l8420naq7e5xyss.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9S0i6G5H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m33a0l8420naq7e5xyss.png" alt="Image description" width="880" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When to use REST and when to use GraphQL&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The choice between REST and GraphQL depends on your specific use case. Below are some general guidelines:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use REST if:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Your API has a fixed set of resources, and the data associated with those resources doesn't change frequently.&lt;br&gt;
You need to support a wide range of clients, including web browsers, mobile devices, and IoT devices.&lt;br&gt;
You need a simple API that is easy to build and maintain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use GraphQL if:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Your API has a large number of resources, and the data associated with those resources changes frequently.&lt;br&gt;
You need to support a wide range of clients, but you want to minimize the amount of data transferred over the network.&lt;br&gt;
You want a flexible API that can adapt to changing client needs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In conclusion, both REST and GraphQL are powerful technologies that can be used to build APIs. REST is well established and easy to build, while GraphQL provides a more efficient and flexible way to retrieve data from an API. The choice between REST and GraphQL depends on your specific use case.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Clean Code: What It Is and Why It Matters</title>
      <dc:creator>Vaibhav Bhutkar</dc:creator>
      <pubDate>Fri, 31 Mar 2023 07:56:01 +0000</pubDate>
      <link>https://dev.to/vaibhav9017/clean-code-what-it-is-and-why-it-matters-59g5</link>
      <guid>https://dev.to/vaibhav9017/clean-code-what-it-is-and-why-it-matters-59g5</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I am writing this blog because I was impressed by the way Uncle Bob explained clean code and why it is necessary in a developer's life while I was reading his book, Clean Code. Clean code is one of the most crucial concepts in software development. It refers to writing code that is easy to read, understand, and maintain. In this blog post, we will discuss what clean code is and why it matters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Clean Code?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Clean code is code that is easy to read, understand, and modify. It is well-organized, concise, and efficient. Clean code matters for several reasons. First, readability - it makes it easier for other developers to understand your code. Second, maintainability - it makes it easier to maintain your code. Finally, testability - it makes it easier to identify and fix bugs.&lt;/p&gt;

&lt;p&gt;Clean code follows several principles. One of these principles is the DRY principle, which stands for "Don't Repeat Yourself." This principle means that you should avoid duplicating code wherever possible. Another principle is the KISS principle, which stands for "Keep It Simple, Stupid." This principle means that you should strive for simplicity in your code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Does Clean Code Matter?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Clean code matters for several reasons. First, it makes it easier for other developers to understand your code. If your code is clean, readable, and well-organized, other developers can quickly get up to speed with your codebase. That means they can start contributing to the project sooner, which helps accelerate development.&lt;/p&gt;

&lt;p&gt;Second, clean code makes it easier to maintain your codebase. If your code is well-organized, elegant, and easy to read, it will be easier to identify and fix bugs. This means you can spend less time debugging and more time building new features and functionality.&lt;/p&gt;

&lt;p&gt;Finally, clean code is essential for the long-term success of your project. As your codebase grows, it becomes more challenging to maintain. If your code is clean, readable, and well-organized, it will be easier to scale your project and add new features over time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to Write Clean Code&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Writing clean code takes practice and discipline. Here are some tips for writing clean code:&lt;/p&gt;

&lt;p&gt;Use descriptive variable names - a name should describe what the value is for. &lt;br&gt;
Write small functions - each should fit on a screen. &lt;br&gt;
Avoid duplicating code.&lt;br&gt;
Keep functions and classes short.&lt;br&gt;
Write tests for your code.&lt;/p&gt;

&lt;p&gt;By following the tips above, you can write code that is easy to read, understand, and maintain.&lt;/p&gt;
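
&lt;p&gt;As a small, made-up illustration of descriptive names and small functions, compare these two versions of the same logic:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Before: unclear names, everything in one function
function calc(a) {
  let t = 0;
  for (const x of a) { t = t + x.p * x.q; }
  return t;
}

// After: descriptive names and one small reusable helper
function lineItemTotal(item) {
  return item.unitPrice * item.quantity;
}

function orderTotal(lineItems) {
  let total = 0;
  for (const item of lineItems) {
    total = total + lineItemTotal(item);
  }
  return total;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;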

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Clean code is essential for the success of any software project. It makes it easier for other developers to understand and maintain your codebase. It also makes it easier to identify and fix bugs. By following best practices such as the DRY and KISS principles, you can write clean code that is easy to read, understand, and maintain.&lt;/p&gt;

&lt;p&gt;Refer to the Clean Code book by Robert C. Martin for more details. &lt;/p&gt;

</description>
    </item>
    <item>
      <title>Performance optimization using React Hooks! Like useCallback and  useMemo.</title>
      <dc:creator>Vaibhav Bhutkar</dc:creator>
      <pubDate>Tue, 04 Jan 2022 05:32:21 +0000</pubDate>
      <link>https://dev.to/vaibhav9017/performance-optimization-using-react-hooks-like-usecallback-and-usememo-4dhg</link>
      <guid>https://dev.to/vaibhav9017/performance-optimization-using-react-hooks-like-usecallback-and-usememo-4dhg</guid>
      <description>&lt;p&gt;Performance is very important key and most common thing that each and every developer may faces at some point after building any application.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;useEffect:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The Effect Hook lets you perform side effects in function components.&lt;/p&gt;

&lt;p&gt;Data fetching, setting up a subscription, and manually changing the DOM in React components are all examples of side effects. Whether or not you’re used to calling these operations “side effects” (or just “effects”), you’ve likely performed them in your components before.&lt;/p&gt;

&lt;p&gt;useEffect runs after every render. By default, it runs both after the first render and after every update. Instead of thinking in terms of mounting and updating, you might find it easier to think that effects happen "after render". React guarantees the DOM has been updated by the time it runs the effects.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bCRzoxd0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nhsj7xl2nf1h5opj4hgb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bCRzoxd0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nhsj7xl2nf1h5opj4hgb.png" alt="Image description" width="819" height="291"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the example above we declare the count state variable and tell React we need to use an effect by passing a function to the useEffect Hook. Inside the effect, we log a message to the console. When React renders this component, it remembers the effect we used and runs it after updating the DOM. This happens for every render, including the first one.&lt;/p&gt;
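
&lt;p&gt;For readers who cannot see the screenshot, a minimal sketch of that counter effect looks roughly like this (JSX is replaced with React.createElement to keep the snippet simple):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import React, { useState, useEffect } from 'react';

function Counter() {
  const [count, setCount] = useState(0);

  // Runs after every render: the first render and every update.
  useEffect(function () {
    console.log('count is now', count);
  });

  function handleClick() {
    setCount(count + 1);
  }

  return React.createElement('button', { onClick: handleClick }, 'Clicked ' + count + ' times');
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;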

&lt;p&gt;Here is where the optimization comes in. To prevent the useEffect from executing each time the function reference changes, we can use useCallback. The useCallback hook memoizes the function so the same reference is returned across renders; the reference is only updated when one of the function's dependencies changes. If you never want the reference to change, you can leave the dependency array empty, just like the dependency array of the useEffect hook. Below is a code sample. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--P1lAjV6r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nfwew2v3idhg1yuhhxch.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P1lAjV6r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nfwew2v3idhg1yuhhxch.png" alt="Image description" width="880" height="650"&gt;&lt;/a&gt;&lt;/p&gt;
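
&lt;p&gt;A minimal sketch of that pattern is shown below; the /api/search endpoint is a placeholder used only to illustrate the idea:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import React, { useState, useEffect, useCallback } from 'react';

function SearchBox() {
  const [query, setQuery] = useState('');

  // fetchResults keeps the same reference until `query` changes,
  // so the effect below does not re-run on unrelated re-renders.
  const fetchResults = useCallback(function () {
    return fetch('/api/search?q=' + encodeURIComponent(query))
      .then(function (response) { return response.json(); });
  }, [query]);

  useEffect(function () {
    fetchResults().then(function (results) {
      console.log('results', results);
    });
  }, [fetchResults]);

  // ...render an input bound to setQuery here...
  return null;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;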

&lt;p&gt;When component state changes, the component re-renders, but those re-renders can be minimized. This means faster rendering, fewer computations, and fewer API calls.&lt;/p&gt;

&lt;p&gt;Even when we make API calls inside useEffect, we can use the dependency array to stop unnecessary re-runs of the effect, and we can add conditions inside the effect function itself. &lt;br&gt;
We can also use useMemo, or React.memo when exporting a component, as shown below. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--z7hcMqkh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zirjigsnv83k12stodmd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--z7hcMqkh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zirjigsnv83k12stodmd.png" alt="Image description" width="880" height="219"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2Dq6l2ac--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6loqfvmopw1dqov3il8v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2Dq6l2ac--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6loqfvmopw1dqov3il8v.png" alt="Image description" width="880" height="588"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When using React.memo in the way shown above, review all the API calls your application makes and apply the same approach where possible; it reduces unnecessary API calls. &lt;/p&gt;
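
&lt;p&gt;As a sketch of that idea, a memoized child component combined with useMemo might look like this (the component and props are made up for illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import React, { useMemo } from 'react';

// Re-renders only when its props actually change (shallow comparison),
// so a parent re-render alone does not re-render this list.
const ResultList = React.memo(function ResultList(props) {
  // Recomputed only when props.items changes, not on every render.
  const sortedItems = useMemo(function () {
    return props.items.slice().sort();
  }, [props.items]);

  return React.createElement(
    'ul',
    null,
    sortedItems.map(function (item) {
      return React.createElement('li', { key: item }, item);
    })
  );
});

export default ResultList;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;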

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; React is flexible and every team uses it in its own way, so which of these optimizations apply depends on the scenario. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;br&gt;
The examples above are just samples. There are various other ways to reduce unnecessary useEffect runs, depending on your application's requirements. So keep exploring; there is a lot more in React. Happy Learning!!!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reference:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://reactjs.org/docs/hooks-effect.html"&gt;https://reactjs.org/docs/hooks-effect.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://blog.devgenius.io/performance-optimization-with-react-hooks-usecallback-usememo-f2e527651b79"&gt;https://blog.devgenius.io/performance-optimization-with-react-hooks-usecallback-usememo-f2e527651b79&lt;/a&gt;&lt;/p&gt;

</description>
      <category>programming</category>
      <category>beginners</category>
      <category>react</category>
      <category>elastikteams</category>
    </item>
  </channel>
</rss>
