<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: PrachiBhende</title>
    <description>The latest articles on DEV Community by PrachiBhende (@prachibhende).</description>
    <link>https://dev.to/prachibhende</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1090120%2F3758d39b-739e-4e68-8097-9050e94c23b5.jpg</url>
      <title>DEV Community: PrachiBhende</title>
      <link>https://dev.to/prachibhende</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/prachibhende"/>
    <language>en</language>
    <item>
      <title>Why Data Governance Is Not Optional in a Microsoft Fabric Workflow</title>
      <dc:creator>PrachiBhende</dc:creator>
      <pubDate>Mon, 30 Mar 2026 14:32:31 +0000</pubDate>
      <link>https://dev.to/prachibhende/why-data-governance-is-not-optional-in-a-microsoft-fabric-workflow-23fh</link>
      <guid>https://dev.to/prachibhende/why-data-governance-is-not-optional-in-a-microsoft-fabric-workflow-23fh</guid>
      <description>&lt;p&gt;Imagine this.&lt;/p&gt;

&lt;p&gt;You've built a solid pipeline in Microsoft Fabric. Multiple APIs feeding into a Lakehouse, raw JSON landing in Bronze, clean data flowing through to Silver and Gold. Notebooks running on schedule, reports loading fast. Everything looks great.&lt;/p&gt;

&lt;p&gt;Then a colleague from a different team messages you:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"Hey, I can see the full API responses in your Bronze folder — including what looks like customer email addresses. Was that intentional?"&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It wasn't.&lt;/p&gt;

&lt;p&gt;This is the moment most engineers realize that &lt;strong&gt;building the pipeline was only half the job&lt;/strong&gt;. The other half — governance — had been sitting quietly in the backlog, treated as something to set up "later."&lt;/p&gt;

&lt;p&gt;Later had arrived.&lt;/p&gt;




&lt;h2&gt;
  
  
  Governance Is an Engineering Problem, Not Just a Policy Problem
&lt;/h2&gt;

&lt;p&gt;When engineers hear "data governance," it's easy to mentally file it under compliance, legal, or something the data team lead worries about. But in a Microsoft Fabric workflow, governance decisions directly affect how you architect your workspace, your storage layers, your access model, and your pipelines.&lt;/p&gt;

&lt;p&gt;Get it wrong and you're not just violating a policy — you're creating real security gaps, breaking downstream pipelines silently, and building something that becomes harder to trust over time.&lt;/p&gt;

&lt;p&gt;Let's walk through four areas where this matters most.&lt;/p&gt;




&lt;h2&gt;
  
  
  1. OneLake Is One Roof — And That's Both the Power and the Risk
&lt;/h2&gt;

&lt;p&gt;Microsoft Fabric's biggest architectural feature is &lt;strong&gt;OneLake&lt;/strong&gt; — a single, unified storage layer that sits beneath everything: your Lakehouse, your Warehouse, your Dataflows, your Notebooks. It's one lake for your entire organization.&lt;/p&gt;

&lt;p&gt;This is genuinely powerful. No data silos. No copying data between systems. One place, one copy, everything connected.&lt;/p&gt;

&lt;p&gt;But here's the risk that's easy to overlook:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;A misconfigured permission in Fabric doesn't just expose one table. It can expose an entire workspace.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In a traditional setup, your raw ingestion store and your analytics store are separate systems with separate access controls. In Fabric, they live under the same OneLake roof. If a user or service principal has workspace-level access, they may be able to navigate directly to your Bronze Lakehouse folder — including the raw, unmodified API responses sitting there.&lt;/p&gt;

&lt;p&gt;For a pipeline that ingests from multiple APIs, those raw responses can contain far more than you intended to share: system fields, internal IDs, personal data, tokens, metadata that was never meant to be visible beyond the engineering team.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What to do about it:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Treat workspace access as a privilege, not a default. Don't add colleagues to a workspace just because it's convenient.&lt;/li&gt;
&lt;li&gt;Separate your Bronze, Silver, and Gold layers into &lt;strong&gt;different workspaces&lt;/strong&gt; if your data sensitivity warrants it. This gives you independent access boundaries.&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;service principals&lt;/strong&gt; for pipeline execution instead of personal accounts — and scope their permissions as narrowly as possible.&lt;/li&gt;
&lt;li&gt;Regularly audit who has access to what. Fabric's admin portal makes this possible; make it a habit.&lt;/li&gt;
&lt;/ul&gt;
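&lt;p&gt;That last audit habit can be partially scripted. Below is a minimal Python sketch of the review step; the input shape is a hypothetical stand-in for an access export (not Fabric's actual admin API response), though the role names mirror Fabric's workspace roles:&lt;/p&gt;

```python
# Flag any principal holding a role broader than Viewer.
# The assignment dicts are a hypothetical stand-in for an
# access export, not Fabric's actual admin API schema.

BROAD_ROLES = {"Admin", "Member", "Contributor"}

def flag_broad_access(assignments):
    """Return (principal, workspace, role) for every broad grant."""
    return [(a["principal"], a["workspace"], a["role"])
            for a in assignments if a["role"] in BROAD_ROLES]

assignments = [
    {"principal": "alice@contoso.com", "workspace": "bronze-ingest",
     "role": "Admin"},
    {"principal": "pipeline-sp", "workspace": "bronze-ingest",
     "role": "Contributor"},
    {"principal": "bob@contoso.com", "workspace": "gold-reporting",
     "role": "Viewer"},
]

for principal, workspace, role in flag_broad_access(assignments):
    print(f"review: {principal} has {role} on {workspace}")
```

&lt;p&gt;In practice you would feed this from whatever access export or admin API your tenant exposes, and route the findings into a recurring review.&lt;/p&gt;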




&lt;h2&gt;
  
  
  2. Row-Level and Column-Level Security — Enforcing What Each Team Should See
&lt;/h2&gt;

&lt;p&gt;As data moves through your medallion layers, different teams need different slices of it.&lt;/p&gt;

&lt;p&gt;Your engineering team debugging a pipeline failure needs to see the raw JSON in Bronze. Your data analysts building a dashboard should only see aggregated, clean records in Gold — and certainly not columns like &lt;code&gt;customer_email&lt;/code&gt;, &lt;code&gt;internal_user_id&lt;/code&gt;, or &lt;code&gt;api_auth_token&lt;/code&gt; that might have passed through earlier layers.&lt;/p&gt;

&lt;p&gt;Without explicit security controls, the default in Fabric is broad access. Anyone with access to a Lakehouse or Warehouse can, by default, query everything in it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Row-Level Security (RLS)&lt;/strong&gt; lets you restrict which rows a user sees based on their identity. A regional analyst sees only their region's data. A team lead sees only their team's records. The filter is applied automatically at query time — the user doesn't even know rows are being hidden.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Column-Level Security (CLS)&lt;/strong&gt; restricts which columns are visible. You can expose a table for querying while masking sensitive fields entirely — the column simply doesn't appear in the result set for users without permission.&lt;/p&gt;

&lt;p&gt;In a Fabric Warehouse, both RLS and CLS can be implemented using familiar T-SQL constructs. For Lakehouse, you control access at the folder and file level, and increasingly through the SQL analytics endpoint as Fabric's security model matures.&lt;/p&gt;
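&lt;p&gt;To make the mechanics concrete, here is a Python sketch of what RLS and CLS do conceptually at query time. In a Fabric Warehouse you would declare this as T-SQL security policies rather than write it in application code; the table fields and the user-to-region mapping below are invented for illustration:&lt;/p&gt;

```python
# Conceptual model of RLS and CLS applied at query time.
# In a Fabric Warehouse this is declared as T-SQL security
# policies, not application code; names here are made up.

ROW_FILTER = {"analyst_emea": "EMEA", "analyst_apac": "APAC"}
HIDDEN_COLUMNS = {"customer_email", "internal_user_id"}

def query(rows, user):
    region = ROW_FILTER.get(user)
    visible = []
    for row in rows:
        if row["region"] != region:
            continue  # RLS: rows outside the user's region never appear
        visible.append({k: v for k, v in row.items()
                        if k not in HIDDEN_COLUMNS})  # CLS: mask columns
    return visible

rows = [
    {"region": "EMEA", "revenue": 100, "customer_email": "a@x.com"},
    {"region": "APAC", "revenue": 200, "customer_email": "b@x.com"},
]
print(query(rows, "analyst_emea"))  # [{'region': 'EMEA', 'revenue': 100}]
```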

&lt;p&gt;&lt;strong&gt;A practical approach for a multi-API pipeline:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Bronze Lakehouse  → Engineering team only (raw JSON, all fields)
Silver Lakehouse  → Data engineering + analytics engineers (cleaned, typed, some fields masked)
Gold Warehouse    → Analysts + BI tools (aggregated, RLS applied, sensitive columns removed)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This isn't just good security — it's good architecture. Each layer serves a different audience, and the access model should reflect that.&lt;/p&gt;




&lt;h2&gt;
  
  
  3. Data Lineage — Knowing What Breaks When an API Changes
&lt;/h2&gt;

&lt;p&gt;Here's a scenario that every engineer building on top of APIs will eventually face:&lt;/p&gt;

&lt;p&gt;An API you depend on quietly changes its response schema. A field gets renamed. A nested object gets flattened. A new required field appears. The raw JSON still lands in Bronze — but the Silver notebook that was parsing &lt;code&gt;response.user.email&lt;/code&gt; now fails silently because the field is now &lt;code&gt;response.contact.email_address&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The pipeline doesn't error loudly. It just starts writing nulls. Downstream reports start showing gaps. Someone notices three days later.&lt;/p&gt;

&lt;p&gt;Without &lt;strong&gt;data lineage&lt;/strong&gt;, answering the question &lt;em&gt;"what broke and what does it affect?"&lt;/em&gt; becomes a manual archaeology exercise — tracing through notebooks, pipelines, and semantic models to find every place that field was used.&lt;/p&gt;

&lt;p&gt;With lineage, you get a map.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Microsoft Fabric has native lineage built in.&lt;/strong&gt; From the workspace view, you can open the lineage view and see exactly how data flows from source to destination — which pipelines feed which Lakehouses, which notebooks transform which tables, which semantic models consume which Gold layer datasets.&lt;/p&gt;

&lt;p&gt;When an API schema changes, lineage lets you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Immediately identify every downstream artifact that depends on the affected source&lt;/li&gt;
&lt;li&gt;Prioritize which breaks are critical (feeding a live dashboard) vs. acceptable (feeding a weekly batch report)&lt;/li&gt;
&lt;li&gt;Communicate impact to stakeholders before they notice it themselves&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Schema drift&lt;/strong&gt; — the gradual or sudden change in the structure of your source data — is one of the most common causes of silent pipeline failures. Lineage doesn't prevent drift, but it dramatically reduces the blast radius when it happens.&lt;/p&gt;
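&lt;p&gt;A lightweight complement to lineage is a contract check at ingestion: compare each payload's keys against what the Silver layer expects, so drift surfaces as an explicit report instead of silent nulls. A minimal sketch, with hypothetical field names:&lt;/p&gt;

```python
# Compare an incoming payload's top-level keys against the
# contract the Silver layer parses, so schema drift fails
# loudly instead of quietly writing nulls.

def check_drift(payload: dict, expected: set) -> dict:
    present = set(payload)
    return {
        "missing": sorted(expected - present),    # will break parsing
        "unexpected": sorted(present - expected), # new fields to review
    }

expected = {"id", "user", "created_at"}
payload = {"id": 1, "contact": {"email_address": "a@x.com"},
           "created_at": "2026-03-30"}

print(check_drift(payload, expected))
# {'missing': ['user'], 'unexpected': ['contact']}
```

&lt;p&gt;Running this as the first cell of the Silver notebook turns a three-day silent gap into an immediate, named failure.&lt;/p&gt;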

&lt;p&gt;A good habit: after any API onboarding, open Fabric's lineage view and verify the dependency chain looks exactly as you expect. It takes five minutes and can save hours of debugging.&lt;/p&gt;




&lt;h2&gt;
  
  
  4. Audit Trails — Knowing Who Did What and When
&lt;/h2&gt;

&lt;p&gt;Audit trails often feel like a compliance checkbox. In practice, they're one of the most useful debugging and accountability tools an engineering team has.&lt;/p&gt;

&lt;p&gt;Consider these real situations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Gold layer table was modified and downstream reports broke. Who ran the notebook that changed it?&lt;/li&gt;
&lt;li&gt;A pipeline that was running daily suddenly stopped. When did it last succeed, and what changed in the workspace around that time?&lt;/li&gt;
&lt;li&gt;A stakeholder claims data was correct last Tuesday but wrong today. What queries were run against that dataset, and by whom?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without audit logs, these questions are very hard to answer. With them, they take minutes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Microsoft Fabric captures workspace-level activity logs&lt;/strong&gt; that record operations across pipelines, notebooks, Lakehouses, and Warehouses. These logs can be routed to a Log Analytics workspace or queried through the Fabric admin APIs.&lt;/p&gt;

&lt;p&gt;What's worth tracking:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pipeline run history&lt;/strong&gt; — when each pipeline ran, whether it succeeded or failed, and what triggered it&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Notebook execution logs&lt;/strong&gt; — who ran what, when, and against which data&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data access logs&lt;/strong&gt; — which users or service principals queried sensitive datasets&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Permission changes&lt;/strong&gt; — when workspace or item-level access was modified and by whom&lt;/li&gt;
&lt;/ul&gt;
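&lt;p&gt;As a sketch of why routed logs pay off, here is how "when did this pipeline last succeed?" becomes a one-liner once run events are queryable. The record shape is invented for illustration, not Fabric's actual activity-log schema:&lt;/p&gt;

```python
# Given routed pipeline-run events, answer "when did this
# pipeline last succeed?" The record shape is hypothetical,
# not Fabric's actual activity-log schema.

def last_success(events, pipeline):
    runs = [e for e in events
            if e["pipeline"] == pipeline and e["status"] == "Succeeded"]
    return max((e["ended_at"] for e in runs), default=None)

events = [
    {"pipeline": "ingest_orders", "status": "Succeeded",
     "ended_at": "2026-03-27T06:00"},
    {"pipeline": "ingest_orders", "status": "Failed",
     "ended_at": "2026-03-28T06:00"},
    {"pipeline": "ingest_users", "status": "Succeeded",
     "ended_at": "2026-03-29T06:00"},
]

print(last_success(events, "ingest_orders"))  # 2026-03-27T06:00
```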

&lt;p&gt;For a multi-API pipeline specifically, audit trails are also valuable for &lt;strong&gt;API usage accountability&lt;/strong&gt; — being able to demonstrate to stakeholders or vendors that data from a specific API was accessed appropriately and only by authorized processes.&lt;/p&gt;

&lt;p&gt;Setting this up early costs very little. Reconstructing a timeline of events after an incident — without logs — costs a great deal.&lt;/p&gt;




&lt;h2&gt;
  
  
  Governance Doesn't Slow You Down. Neglecting It Does.
&lt;/h2&gt;

&lt;p&gt;The instinct when building fast is to defer governance. Get the pipeline working first. Add security later. Document lineage once things stabilize.&lt;/p&gt;

&lt;p&gt;The problem is that "later" in a Fabric workflow usually means retrofitting controls onto a system that was built without them — reshaping access models, re-architecting workspace boundaries, and explaining to stakeholders why something that looked finished needs significant rework.&lt;/p&gt;

&lt;p&gt;The four areas covered here aren't advanced or time-consuming to implement at the start:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Governance Area&lt;/th&gt;
&lt;th&gt;When to Implement&lt;/th&gt;
&lt;th&gt;Effort&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;OneLake access model&lt;/td&gt;
&lt;td&gt;At workspace creation&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Row / Column level security&lt;/td&gt;
&lt;td&gt;When Silver/Gold layers are built&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Lineage review&lt;/td&gt;
&lt;td&gt;After each new source is onboarded&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Audit log routing&lt;/td&gt;
&lt;td&gt;At workspace setup&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;None of these require a dedicated governance team or a separate project. They're engineering decisions that fit naturally into the work you're already doing — if you make them at the right time.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Microsoft Fabric is a genuinely powerful platform. The unified storage model, the native integrations, the medallion architecture support — it makes building sophisticated data workflows accessible in ways that weren't possible before.&lt;/p&gt;

&lt;p&gt;But that power comes with shared responsibility. The same OneLake that makes everything connected and efficient also means that access, lineage, and auditability need to be designed deliberately — not assumed.&lt;/p&gt;

&lt;p&gt;Governance in Fabric isn't a gate that slows down engineering. It's the foundation that makes what you build trustworthy enough to actually use.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>dataengineering</category>
      <category>microsoft</category>
      <category>security</category>
    </item>
    <item>
      <title>Why I Chose a Lakehouse Over a Warehouse in Microsoft Fabric — And the Trade-offs I Weighed</title>
      <dc:creator>PrachiBhende</dc:creator>
      <pubDate>Mon, 30 Mar 2026 13:21:57 +0000</pubDate>
      <link>https://dev.to/prachibhende/why-i-chose-a-lakehouse-over-a-warehouse-in-microsoft-fabric-and-the-trade-offs-i-weighed-57k7</link>
      <guid>https://dev.to/prachibhende/why-i-chose-a-lakehouse-over-a-warehouse-in-microsoft-fabric-and-the-trade-offs-i-weighed-57k7</guid>
      <description>&lt;p&gt;When I was designing my data pipeline in Microsoft Fabric, one of the first and most important decisions I had to make was this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Should I store my data in a Lakehouse or a Warehouse?&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;On the surface, both seem like valid options. But once I looked at the specifics of what my pipeline was actually doing, the answer became clear. And the reasoning behind it taught me a lot about how these two storage paradigms are fundamentally different — and when each one makes sense.&lt;/p&gt;

&lt;p&gt;Here's the full breakdown of my thought process.&lt;/p&gt;




&lt;h2&gt;
  
  
  First, a Quick Recap — What's the Difference?
&lt;/h2&gt;

&lt;p&gt;Before diving into my decision, let me quickly set the context.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Microsoft Fabric Warehouse&lt;/strong&gt; is a fully managed, SQL-first data store. It works best when your data is structured, your schema is well-defined upfront, and your workload is primarily read-heavy analytics. Think: BI dashboards, aggregated reports, clean dimensional models.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Microsoft Fabric Lakehouse&lt;/strong&gt; is a more flexible storage layer built on top of OneLake (Delta format). It can store structured, semi-structured, and unstructured data. You can query it with SQL via the SQL analytics endpoint, but it doesn't require a rigid schema at write time. Think: raw files, JSON dumps, logs, diverse data formats all living together.&lt;/p&gt;

&lt;p&gt;Now let's talk about why Lakehouse won for my use case.&lt;/p&gt;




&lt;h2&gt;
  
  
  Factor 1: I Was Dealing with High Data Volume
&lt;/h2&gt;

&lt;p&gt;My pipeline was calling &lt;strong&gt;multiple APIs&lt;/strong&gt; — and doing so continuously. Every API call returned a response, and those responses needed to be stored. Across all the endpoints, at the frequency I was calling them, the volume of data being written was significant and growing.&lt;/p&gt;


&lt;p&gt;Warehouse in Fabric works well for structured analytical queries, but it comes with real compute costs every time you write, transform, or query data. At high volumes, that cost compounds quickly.&lt;/p&gt;

&lt;p&gt;Lakehouse, on the other hand, stores data as files in OneLake — and &lt;strong&gt;file storage is cheap&lt;/strong&gt;. Writing large volumes of raw data to a Lakehouse doesn't trigger compute the way Warehouse operations do. You write the files, they land in storage, and compute only kicks in when you actually query them.&lt;/p&gt;

&lt;p&gt;For a high-volume ingestion scenario, this was a meaningful advantage.&lt;/p&gt;




&lt;h2&gt;
  
  
  Factor 2: My Workload Was Write-Heavy
&lt;/h2&gt;

&lt;p&gt;Most discussions about data storage focus on reads — dashboards, queries, reports. But my pipeline's primary job was &lt;strong&gt;writing&lt;/strong&gt; — constantly ingesting API responses, writing logs, capturing metadata.&lt;/p&gt;

&lt;p&gt;Warehouse in Fabric is optimized for structured reads. Frequent, high-throughput writes — especially to many different tables or with variable schemas — create overhead and can become a bottleneck.&lt;/p&gt;

&lt;p&gt;Lakehouse is naturally suited for write-heavy workloads. Writing a file to a Lakehouse folder is simple, fast, and doesn't require a predefined table schema. You just drop the file and move on. No DDL changes, no schema migrations, no blocked writes waiting on locks.&lt;/p&gt;

&lt;p&gt;For a pipeline that's primarily producing data rather than consuming it, Lakehouse was the right fit.&lt;/p&gt;




&lt;h2&gt;
  
  
  Factor 3: My Source Data Was JSON — And I Needed to Store It As-Is
&lt;/h2&gt;

&lt;p&gt;This was perhaps the most decisive factor.&lt;/p&gt;

&lt;p&gt;Every API I called returned a &lt;strong&gt;JSON response&lt;/strong&gt;. And a key requirement for my bronze layer was this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Store the raw API response exactly as received — no transformation, no modification, no flattening.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This matters for two reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Future analysis&lt;/strong&gt; — Raw JSON preserves every field the API returned, even ones I didn't think I needed at ingestion time. If my downstream logic changes, the original data is still there.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Issue handling&lt;/strong&gt; — When something goes wrong, I want to go back to the exact payload the API sent me. A transformed or flattened record loses that fidelity. The raw JSON is the ground truth.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Warehouse doesn't handle this well. It's a relational store — it expects rows and columns. Storing raw JSON blobs in a Warehouse is technically possible, but it's awkward, schema-unfriendly, and goes against the grain of what Warehouse is designed for.&lt;/p&gt;

&lt;p&gt;Lakehouse handles this naturally. I simply saved each API response as a &lt;code&gt;.json&lt;/code&gt; file into the appropriate folder in the Lakehouse. No schema required. No transformation needed. The file lands exactly as the API returned it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;lakehouse/
└── bronze/
    └── api_name/
        └── 2024-03-15/
            ├── response_001.json
            ├── response_002.json
            └── response_003.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This folder-per-source, date-partitioned structure is clean, navigable, and exactly what a bronze layer should look like — raw, unmodified, organized.&lt;/p&gt;
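&lt;p&gt;Producing that layout from a notebook takes only a few lines. Here is a sketch against the local filesystem; in a Fabric notebook the root would instead be the Lakehouse's mounted Files path, so treat the path below as a stand-in:&lt;/p&gt;

```python
import json
from datetime import date
from pathlib import Path

# Land one raw API response, untouched, in the bronze layer.
# ROOT is a stand-in; in a Fabric notebook this would be the
# Lakehouse's mounted Files path.
ROOT = Path("lakehouse/bronze")

def land_raw(api_name, seq, response):
    folder = ROOT / api_name / date.today().isoformat()
    folder.mkdir(parents=True, exist_ok=True)
    target = folder / f"response_{seq:03d}.json"
    # Serialize the parsed body as-is: no flattening, no field selection.
    target.write_text(json.dumps(response))
    return target

path = land_raw("orders_api", 1, {"id": 42, "user": {"email": "a@x.com"}})
print(path.name)  # response_001.json
```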




&lt;h2&gt;
  
  
  Factor 4: I Needed Flexibility Across Multiple APIs
&lt;/h2&gt;

&lt;p&gt;I wasn't calling one API with a consistent schema. I was calling &lt;strong&gt;multiple APIs&lt;/strong&gt;, each with its own response structure, field names, nesting depth, and data types.&lt;/p&gt;

&lt;p&gt;With a Warehouse, you'd typically define a table per source — which means a schema per source — which means schema management becomes a project in itself. Every time an API changes its response format, you potentially need to alter a table, handle nulls, or deal with breaking changes.&lt;/p&gt;

&lt;p&gt;With a Lakehouse, none of that applies at write time. Each API's responses go into their own folder. The schema is inferred or defined later, when you actually need to query the data. This separation of ingestion from schema definition is one of the core strengths of the lakehouse architecture.&lt;/p&gt;

&lt;p&gt;It gave me the freedom to onboard a new API source without any infrastructure changes — just a new folder and a new pipeline run.&lt;/p&gt;




&lt;h2&gt;
  
  
  Factor 5: Compute Costs for Warehouse Didn't Justify My Use Case
&lt;/h2&gt;

&lt;p&gt;Fabric Warehouse runs on compute capacity — every query, every write operation, every transformation consumes capacity units. For analytics workloads where you're running a handful of expensive queries for a dashboard, that's a reasonable trade-off.&lt;/p&gt;

&lt;p&gt;But for my pipeline, the primary operation was &lt;strong&gt;ingestion&lt;/strong&gt; — writing raw data, writing logs, tracking metadata. The downstream analytics would come later and would be infrequent. Paying Warehouse-level compute for what was essentially a file-writing operation didn't make economic or architectural sense.&lt;/p&gt;

&lt;p&gt;Lakehouse let me ingest freely. When I did need to run queries — for validation, for debugging, for building silver-layer transforms — I used the Lakehouse's SQL analytics endpoint, which is efficient and on-demand.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Trade-offs I Accepted
&lt;/h2&gt;

&lt;p&gt;Choosing Lakehouse wasn't without compromise. Here's what I gave up:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;What I Gave Up&lt;/th&gt;
&lt;th&gt;Why It Was Acceptable&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Native SQL write semantics&lt;/td&gt;
&lt;td&gt;I was writing files, not rows — SQL writes weren't needed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Enforced schema at ingestion&lt;/td&gt;
&lt;td&gt;Schema enforcement is a silver/gold layer concern, not bronze&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Easier BI tool integration (direct)&lt;/td&gt;
&lt;td&gt;SQL endpoint still supports Power BI and Fabric notebooks&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ACID guarantees at write time&lt;/td&gt;
&lt;td&gt;Raw JSON file drops aren't transactional, but Lakehouse Delta tables still provide full ACID guarantees&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;None of these trade-offs were dealbreakers for my use case. The bronze layer is not the place for strict schemas or governance — that comes later. Bronze should be raw, faithful, and cheap to write to.&lt;/p&gt;




&lt;h2&gt;
  
  
  How This Fits the Medallion Architecture
&lt;/h2&gt;

&lt;p&gt;This decision fits neatly into the &lt;strong&gt;medallion architecture&lt;/strong&gt; pattern:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Bronze (Lakehouse):&lt;/strong&gt; Raw JSON files, exactly as received from APIs. No transformation. Partitioned by source and date.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Silver (Lakehouse):&lt;/strong&gt; Cleaned, flattened, typed data. Schema enforced here.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gold (Warehouse or semantic model):&lt;/strong&gt; Aggregated, business-ready data for reporting and analytics.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key insight is that &lt;strong&gt;different layers can use different storage types&lt;/strong&gt; depending on what they need to do. Lakehouse is ideal for bronze because of its flexibility and low write cost. Warehouse becomes more appropriate as data becomes more structured and query patterns become more defined.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;If you're building a pipeline in Microsoft Fabric and wondering which to choose, here's a simple heuristic:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;If you're writing raw, varied, or semi-structured data at high volume — start with Lakehouse.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you're serving structured, well-defined data to BI tools and analysts — Warehouse is worth the compute.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For my pipeline — multiple APIs, write-heavy ingestion, raw JSON responses, high volume, and a need for bronze-layer fidelity — Lakehouse was the clear winner.&lt;/p&gt;

&lt;p&gt;It let me store data the way it arrived, at the scale it arrived, without fighting the storage layer to do it.&lt;/p&gt;

&lt;p&gt;And when it came time to transform and query that data, Lakehouse had the tools to support that too.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How a Few Smart Engineering Choices Made My API Data Pipeline Reliable</title>
      <dc:creator>PrachiBhende</dc:creator>
      <pubDate>Mon, 30 Mar 2026 13:02:13 +0000</pubDate>
      <link>https://dev.to/prachibhende/how-a-few-smart-engineering-choices-made-my-api-data-pipeline-reliable-4cgp</link>
      <guid>https://dev.to/prachibhende/how-a-few-smart-engineering-choices-made-my-api-data-pipeline-reliable-4cgp</guid>
      <description>&lt;p&gt;This was my very first time building a data pipeline — and I had no idea what I was getting myself into.&lt;/p&gt;

&lt;p&gt;When I first started building an API-based data pipeline, it seemed straightforward:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Call the API → get the data → store it → repeat.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But very quickly, reality kicked in.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;APIs started timing out&lt;/li&gt;
&lt;li&gt;Data volumes increased&lt;/li&gt;
&lt;li&gt;Pipelines became slow&lt;/li&gt;
&lt;li&gt;Failures became frequent and unpredictable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At one point, it felt like the pipeline had a personality of its own — working perfectly one day and silently failing the next.&lt;/p&gt;

&lt;p&gt;That's when I made a few key changes: &lt;strong&gt;batch processing&lt;/strong&gt;, &lt;strong&gt;parallel processing&lt;/strong&gt;, &lt;strong&gt;incremental loading&lt;/strong&gt;, and &lt;strong&gt;exponential backoff&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;These weren't overly complex techniques — but together, they completely changed how reliable and scalable my pipeline became.&lt;/p&gt;

&lt;p&gt;Let me walk you through what changed and how each one helped.&lt;/p&gt;




&lt;h2&gt;
  
  
  1. From "One Big Call" to Batch Processing
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🚨 The Problem
&lt;/h3&gt;

&lt;p&gt;Initially, I was trying to fetch large volumes of data in a single API call (or very few calls).&lt;/p&gt;

&lt;p&gt;This led to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Timeouts on large payloads&lt;/li&gt;
&lt;li&gt;Failures that were hard to recover from&lt;/li&gt;
&lt;li&gt;Difficult retries — because everything was bundled together&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  💡 What I Changed
&lt;/h3&gt;

&lt;p&gt;I switched to &lt;strong&gt;batch processing&lt;/strong&gt; — breaking the data into smaller chunks and processing them step by step.&lt;/p&gt;

&lt;p&gt;Instead of one massive request, the pipeline now makes many small, predictable requests. If one fails, only that batch needs to be retried — not the entire load.&lt;/p&gt;
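&lt;p&gt;The chunking itself is simple; the payoff is that the unit of retry becomes one batch. A sketch, with an arbitrary batch size:&lt;/p&gt;

```python
# Split the full id list into fixed-size batches so a failure
# only forces a retry of that batch, not the whole load.

def batched(items, size):
    return [items[i:i + size] for i in range(0, len(items), size)]

ids = list(range(10))
for batch in batched(ids, 4):
    # fetch_batch(batch) would call the API here; on failure,
    # record just this batch and retry it alone later.
    print(batch)
```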

&lt;h3&gt;
  
  
  ✅ The Impact
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Fewer API failures overall&lt;/li&gt;
&lt;li&gt;Isolated, retryable failures instead of full restarts&lt;/li&gt;
&lt;li&gt;Much better control and visibility into the pipeline&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Before:&lt;/strong&gt; "Let's download everything at once."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;After:&lt;/strong&gt; "Let's not be greedy. Small bites only."&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  2. Speeding Things Up with Parallel Processing
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🚨 The Problem
&lt;/h3&gt;

&lt;p&gt;Even with batching, the pipeline was still slow — because everything was running sequentially.&lt;/p&gt;

&lt;p&gt;It felt like ordering food one item at a time instead of placing the full order at once. Each batch waited for the previous one to finish before starting.&lt;/p&gt;

&lt;h3&gt;
  
  
  💡 What I Changed
&lt;/h3&gt;

&lt;p&gt;I introduced &lt;strong&gt;parallel processing&lt;/strong&gt; — running multiple API calls at the same time.&lt;/p&gt;

&lt;h3&gt;
  
  
  ✅ The Impact
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Significant reduction in overall runtime&lt;/li&gt;
&lt;li&gt;Better utilization of system resources&lt;/li&gt;
&lt;li&gt;Improved throughput without changing the core pipeline logic&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  ⚠️ One Important Lesson
&lt;/h3&gt;

&lt;p&gt;Parallelism is powerful — but too much of it can overwhelm the API and get you rate-limited fast.&lt;/p&gt;

&lt;p&gt;The key is to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cap the number of concurrent calls&lt;/strong&gt; (use a semaphore or throttle wrapper)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Respect the API's documented rate limits&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Controlled parallelism is fast. Unconstrained parallelism is a support ticket waiting to happen.&lt;/p&gt;
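&lt;p&gt;One way to cap concurrency is to let the pool size be the cap. A Python sketch with a thread pool, where &lt;code&gt;fetch_batch&lt;/code&gt; stands in for a real API call:&lt;/p&gt;

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_batch(batch):  # stand-in for a real API call
    return sum(batch)

batches = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]

# max_workers is the concurrency cap: at most 4 batches are
# in flight at once, no matter how many batches exist.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fetch_batch, batches))

print(results)  # [3, 7, 11, 15, 19]
```

&lt;p&gt;Raising &lt;code&gt;max_workers&lt;/code&gt; buys speed only up to the API's rate limit; past that, it buys 429s.&lt;/p&gt;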




&lt;h2&gt;
  
  
  3. Moving Away from Full Loads with Incremental Processing
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🚨 The Problem
&lt;/h3&gt;

&lt;p&gt;In the early version, I was fetching &lt;strong&gt;all the data&lt;/strong&gt; every single time the pipeline ran.&lt;/p&gt;

&lt;p&gt;This worked… until it didn't.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data kept growing&lt;/li&gt;
&lt;li&gt;Load times increased linearly with dataset size&lt;/li&gt;
&lt;li&gt;Redundant processing wasted bandwidth and compute&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  💡 What I Changed
&lt;/h3&gt;

&lt;p&gt;I implemented &lt;strong&gt;incremental loading&lt;/strong&gt; — fetching only new or updated records using a timestamp or watermark field (e.g., &lt;code&gt;updated_at&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;The pipeline now remembers where it left off and only asks the API for what's changed since the last successful run.&lt;/p&gt;
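&lt;p&gt;The "remembers where it left off" part is just a stored watermark that becomes the next run's filter. A minimal sketch; the &lt;code&gt;updated_since&lt;/code&gt; parameter name is hypothetical:&lt;/p&gt;

```python
# Persist the high-water mark between runs; each run asks the
# API only for records updated after it. 'updated_since' is a
# hypothetical query parameter name.
STATE = {"watermark": "2026-03-28T00:00:00"}  # loaded from durable storage

def build_params(state):
    return {"updated_since": state["watermark"]}

def advance(state, records):
    if records:  # an empty run leaves the watermark untouched
        state["watermark"] = max(r["updated_at"] for r in records)
    return state

records = [{"id": 1, "updated_at": "2026-03-29T08:15:00"},
           {"id": 2, "updated_at": "2026-03-30T11:02:00"}]

advance(STATE, records)
print(build_params(STATE))  # {'updated_since': '2026-03-30T11:02:00'}
```

&lt;p&gt;The one rule that matters: only advance the watermark after a &lt;em&gt;successful&lt;/em&gt; write, or a failed run silently skips data.&lt;/p&gt;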

&lt;h3&gt;
  
  
  ✅ The Impact
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Consistent, fast pipeline execution regardless of total data size&lt;/li&gt;
&lt;li&gt;Reduced data transfer and API load&lt;/li&gt;
&lt;li&gt;A design that naturally scales as data grows&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Before:&lt;/strong&gt; "Let me re-read everything just in case."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;After:&lt;/strong&gt; "I trust what I already know. Just give me what's new."&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  4. Handling Failures Gracefully with Exponential Backoff
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🚨 The Problem
&lt;/h3&gt;

&lt;p&gt;APIs don't always behave nicely. I encountered:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Temporary server failures&lt;/li&gt;
&lt;li&gt;Rate limit responses (429s)&lt;/li&gt;
&lt;li&gt;Intermittent network issues&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Initially, failures either broke the pipeline entirely — or triggered immediate retries that just failed again.&lt;/p&gt;

&lt;h3&gt;
  
  
  💡 What I Changed
&lt;/h3&gt;

&lt;p&gt;I implemented &lt;strong&gt;exponential backoff&lt;/strong&gt; for retries.&lt;/p&gt;

&lt;p&gt;Instead of retrying instantly, the system waits progressively longer between each attempt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Attempt 1 → wait 1s
Attempt 2 → wait 2s
Attempt 3 → wait 4s
Attempt 4 → wait 8s
Attempt 5 → wait 16s…
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Adding a small random &lt;strong&gt;jitter&lt;/strong&gt; to each wait time also helps prevent multiple clients from retrying in lockstep and hammering the API simultaneously.&lt;/p&gt;
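&lt;p&gt;Here's a minimal Python sketch of such a retry wrapper using the "full jitter" variant (wait a random amount up to the exponential cap). The names and defaults are illustrative, not the pipeline's actual code:&lt;/p&gt;

```python
import random
import time

def call_with_backoff(call, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    # Retry transient failures, doubling the wait cap each time: 1s, 2s, 4s, 8s...
    # In production you'd catch only transient errors (429s, timeouts), not Exception.
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the real error
            # Full jitter: a random wait up to the exponential cap, so clients
            # recovering at the same time do not retry in lockstep.
            sleep(random.uniform(0, base_delay * 2 ** attempt))
```

&lt;p&gt;Injecting &lt;code&gt;sleep&lt;/code&gt; as a parameter keeps the wrapper testable; in production you'd leave the default &lt;code&gt;time.sleep&lt;/code&gt;.&lt;/p&gt;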

&lt;h3&gt;
  
  
  ✅ The Impact
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Higher success rate for transient failures&lt;/li&gt;
&lt;li&gt;Reduced load on the API during recovery&lt;/li&gt;
&lt;li&gt;A pipeline that stays calm instead of spiraling into failure&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Before:&lt;/strong&gt; "Retry NOW. Again. NOW. Again. NOW."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;After:&lt;/strong&gt; "Let's calm down… give it a second."&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Putting It All Together
&lt;/h2&gt;

&lt;p&gt;Here's a quick summary of the four techniques and what each one solves:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Technique&lt;/th&gt;
&lt;th&gt;Problem Solved&lt;/th&gt;
&lt;th&gt;Key Benefit&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Batch Processing&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Large payload failures&lt;/td&gt;
&lt;td&gt;Isolated, retryable units&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Parallel Processing&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Sequential slowness&lt;/td&gt;
&lt;td&gt;Faster runtime&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Incremental Loading&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Redundant full refreshes&lt;/td&gt;
&lt;td&gt;Scalable efficiency&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Exponential Backoff&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Brittle retry logic&lt;/td&gt;
&lt;td&gt;Graceful failure handling&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;None of these are extremely advanced on their own. But together, they turned a fragile pipeline into something genuinely reliable.&lt;/p&gt;

&lt;p&gt;What started as a system I had to babysit became one I could trust to run unattended.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;The biggest lesson from this experience:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Good data engineering is not just about getting data. It's about getting it efficiently, reliably, and repeatedly.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you're working with APIs, even small improvements like these can save you hours of debugging — and a lot of stress.&lt;/p&gt;

&lt;p&gt;And trust me — future you will be very grateful for the extra hour you spent on retry logic today. 🙂&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Why We Used a Data Gateway to Connect On-Prem SQL with Microsoft Fabric</title>
      <dc:creator>PrachiBhende</dc:creator>
      <pubDate>Tue, 30 Sep 2025 11:08:36 +0000</pubDate>
      <link>https://dev.to/prachibhende/why-we-used-a-data-gateway-to-connect-on-prem-sql-with-microsoft-fabric-4i08</link>
      <guid>https://dev.to/prachibhende/why-we-used-a-data-gateway-to-connect-on-prem-sql-with-microsoft-fabric-4i08</guid>
      <description>&lt;p&gt;As a cloud data architect, one of my recurring challenges is bridging on-premises data sources with modern cloud analytics platforms—all while keeping the customer’s security, governance, and trust intact.&lt;/p&gt;

&lt;p&gt;Recently, I worked on a project that involved Microsoft Fabric and an on-premises SQL Server database. At first glance, it looked simple: “Just connect Fabric to SQL Server.” In reality, it required some thoughtful architectural choices.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Business Context
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The source system:&lt;/strong&gt; A mission-critical SQL Server running in the customer’s data center.&lt;br&gt;
&lt;strong&gt;The requirement:&lt;/strong&gt; Enable Microsoft Fabric to analyze data from SQL Server.&lt;br&gt;
&lt;strong&gt;The constraints:&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Direct database exposure was not allowed.&lt;/li&gt;
&lt;li&gt;The customer only permitted access to six curated SQL views, not full tables.&lt;/li&gt;
&lt;li&gt;Data was considered sensitive; governance and auditability were non-negotiable.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;This was a classic scenario of balancing analytics enablement with security.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Dilemma
&lt;/h2&gt;

&lt;p&gt;We had two competing needs:&lt;br&gt;
&lt;strong&gt;Data accessibility&lt;/strong&gt; – Our Fabric environment had to query the on-prem SQL views.&lt;br&gt;
&lt;strong&gt;Data security&lt;/strong&gt; – The customer wanted zero direct exposure of their database to the outside world.&lt;br&gt;
A direct connection from the cloud wasn’t on the table. Opening ports, punching holes in firewalls, or replicating sensitive data outside their environment would have been unacceptable.&lt;/p&gt;

&lt;p&gt;That’s where the &lt;strong&gt;&lt;em&gt;data gateway&lt;/em&gt;&lt;/strong&gt; came into the picture.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why the Data Gateway?
&lt;/h2&gt;

&lt;p&gt;The On-premises Data Gateway acted as a secure bridge between Fabric and the SQL Server. Think of it as a one-way handshake:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The SQL Server never had to expose itself to the cloud.&lt;/li&gt;
&lt;li&gt;All queries from Fabric flowed securely through the gateway, running directly against those six views.&lt;/li&gt;
&lt;li&gt;The customer’s security team could monitor and control access centrally, knowing nothing left their environment without their rules.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;From an architectural standpoint, the gateway gave us the best of both worlds:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Compliance and security - The data stayed on-prem, and only the vetted views were accessible.&lt;/li&gt;
&lt;li&gt;Cloud analytics power - Fabric could leverage the data without replicating or compromising it.&lt;/li&gt;
&lt;li&gt;Flexibility - If the customer wanted to grant or revoke access to more views later, it was just a matter of updating permissions—not redesigning the pipeline.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Outcome
&lt;/h2&gt;

&lt;p&gt;By setting up the gateway, we respected the boundaries set by the business, while still enabling modern analytics in Fabric. Analysts could build reports, dashboards, and models without worrying about the physical location of the data. The customer was happy because their crown jewels—the SQL database—never had to leave their castle.&lt;/p&gt;

&lt;p&gt;And for me as a data architect, this reinforced an important lesson:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Architecture is as much about people and trust as it is about technology.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;When working with sensitive on-premises data, customers may only expose views—not full tables or databases.&lt;/li&gt;
&lt;li&gt;A data gateway provides a secure bridge for Fabric (or Power BI, or other cloud services) to query on-prem data sources.&lt;/li&gt;
&lt;li&gt;The solution respects security, governance, and compliance requirements without slowing down analytics.&lt;/li&gt;
&lt;li&gt;Sometimes, the simplest connector—a gateway—turns out to be the most powerful enabler.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can learn more about gateways from:&lt;br&gt;
&lt;a href="https://learn.microsoft.com/en-us/data-integration/gateway/service-gateway-onprem" rel="noopener noreferrer"&gt;Microsoft On Prem Data Gateway&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.azure.cn/en-us/analysis-services/analysis-services-gateway-install" rel="noopener noreferrer"&gt;Installation guide&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Happy Exploring :)
&lt;/h1&gt;

</description>
      <category>msfabric</category>
      <category>datagateway</category>
      <category>security</category>
    </item>
    <item>
      <title>Security-First Architecture in Azure Logic Apps: Patterns, Practices, and Compliance</title>
      <dc:creator>PrachiBhende</dc:creator>
      <pubDate>Thu, 05 Dec 2024 09:04:46 +0000</pubDate>
      <link>https://dev.to/prachibhende/security-first-architecture-in-azure-logic-apps-patterns-practices-and-compliance-56j5</link>
      <guid>https://dev.to/prachibhende/security-first-architecture-in-azure-logic-apps-patterns-practices-and-compliance-56j5</guid>
      <description>&lt;p&gt;In today’s digital landscape, automation isn’t just a convenience—it’s a necessity. Yet, as workflows become more interconnected and data flows across systems, security becomes the cornerstone of any architecture. For workflows that handle sensitive healthcare data, financial transactions, or personal customer information, it’s not enough for them to be functional. They must be robust and secure by design.&lt;/p&gt;

&lt;p&gt;When designing solutions with Azure Logic Apps, architects must consider how to achieve the delicate balance between functionality and security. A well-designed architecture can ensure compliance, mitigate risks, and maintain agility—all while simplifying operational management.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Balancing Act of Simplicity and Security
&lt;/h2&gt;

&lt;p&gt;Azure Logic Apps is a powerful platform, offering simplicity in creating workflows across diverse systems. However, simplicity can sometimes conceal the complexity required to address the stringent demands of security and compliance. In scenarios involving regulations like HIPAA, GDPR, or PCI-DSS, the goal should be to design workflows that are inherently secure rather than relying on after-the-fact safeguards.&lt;/p&gt;

&lt;p&gt;A layered security approach is often the most effective. By focusing on identity, encryption, monitoring, and governance, architects can create solutions that are not just functional but resilient and compliant with even the most demanding requirements.&lt;/p&gt;

&lt;h2&gt;
  
  
  Managed Identities: Eliminating the Weakest Link
&lt;/h2&gt;

&lt;p&gt;One common security pitfall in automation workflows is the mishandling of credentials. Hardcoding secrets, storing API keys in plain text, or relying on shared storage for sensitive credentials creates vulnerabilities. In such situations, Azure Managed Identities provide a secure and seamless solution.&lt;/p&gt;

&lt;p&gt;Managed Identities allow Logic Apps to authenticate with Azure services without the need for hardcoded credentials. Each app is effectively issued a unique identity that can be securely used to access resources. This approach eliminates the need for manual secret management and reduces the risk of credential exposure.&lt;/p&gt;

&lt;p&gt;For example, when connecting Logic Apps to services like Azure Key Vault or Azure Storage, Managed Identities can ensure that only the authorized Logic App has access to the data, without requiring any sensitive information to be stored in the workflow itself.&lt;/p&gt;

&lt;h2&gt;
  
  
  Encrypting Data in Transit and at Rest
&lt;/h2&gt;

&lt;p&gt;Data security extends beyond credentials to the protection of data itself. Whether in transit or at rest, data needs to be safeguarded to prevent unauthorized access.&lt;/p&gt;

&lt;p&gt;Logic Apps inherently encrypt data in transit using HTTPS. However, for scenarios involving hybrid architectures or on-premises systems, further measures may be required. Azure Virtual Network (VNet) integration is an effective solution for isolating communications within a private, secure channel.&lt;/p&gt;

&lt;p&gt;In addition, sensitive configuration values like API keys, connection strings, or access tokens can be stored in Azure Key Vault. With Key Vault, these values are encrypted and can only be accessed by authorized applications, further minimizing exposure risks.&lt;/p&gt;

&lt;p&gt;Architects designing such workflows often implement these measures to comply with data protection standards while maintaining operational simplicity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Proactive Monitoring and Threat Detection
&lt;/h2&gt;

&lt;p&gt;Even with robust security measures in place, unforeseen events can occur. A security-first architecture must anticipate and respond effectively to potential threats. Azure Monitor and Azure Sentinel are critical tools for achieving this.&lt;/p&gt;

&lt;p&gt;Azure Monitor provides detailed insights into the activities of Logic Apps, logging every action, whether successful or failed. These logs can then be fed into Azure Sentinel, a security information and event management (SIEM) tool, to enable real-time threat detection and response.&lt;/p&gt;

&lt;p&gt;For instance, if a Logic App unexpectedly accesses an unauthorized endpoint, Sentinel can flag this activity and notify the security team. This proactive approach ensures that potential threats are identified and addressed before they escalate.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementing Role-Based Governance
&lt;/h2&gt;

&lt;p&gt;Security is not solely about technology—it also involves managing how people interact with the system. Role-Based Access Control (RBAC) is a key mechanism for ensuring that users have only the permissions they need to perform their tasks.&lt;/p&gt;

&lt;p&gt;In scenarios where multiple teams or departments interact with Logic Apps, RBAC ensures that access is granted on a need-to-know basis. This reduces the risk of accidental or intentional misuse of sensitive workflows or data.&lt;/p&gt;

&lt;p&gt;Training operational teams on the principles of least privilege and access governance is equally important. Ensuring that access is audited and regularly reviewed can prevent common misconfigurations that lead to security breaches.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Foundation of Trust
&lt;/h2&gt;

&lt;p&gt;When solutions are built with security as a core principle, they do more than just function—they inspire confidence. Stakeholders, whether internal or external, can trust that the system will protect sensitive data, comply with regulations, and adapt to future challenges.&lt;/p&gt;

&lt;p&gt;In security-first architectures for Azure Logic Apps, tools like Managed Identities, Key Vault, VNet integration, and Sentinel form the backbone of a resilient solution. Each component works together to safeguard workflows, making them not only efficient but also strongly resistant to attack.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Azure Logic Apps offers unparalleled flexibility, but security must never be an afterthought. By embedding security into every layer of the architecture—identity management, encryption, monitoring, and governance—organizations can build workflows that are as secure as they are functional.&lt;/p&gt;

&lt;p&gt;A security-first approach is more than a technical achievement. It’s a commitment to protecting what matters most: the data, the users, and the trust of everyone who relies on the system.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>logicapps</category>
      <category>noco</category>
      <category>security</category>
    </item>
    <item>
      <title>Visualizing AI Prompt Responses in Chart Format Using React and Node.js</title>
      <dc:creator>PrachiBhende</dc:creator>
      <pubDate>Thu, 29 Aug 2024 10:19:54 +0000</pubDate>
      <link>https://dev.to/prachibhende/visualizing-ai-prompt-responses-in-chart-format-using-react-and-nodejs-2d9b</link>
      <guid>https://dev.to/prachibhende/visualizing-ai-prompt-responses-in-chart-format-using-react-and-nodejs-2d9b</guid>
      <description>&lt;p&gt;Artificial Intelligence (AI) has transformed how we handle data, turning raw information into clear, actionable insights. While AI excels at processing large datasets and generating responses, presenting this information in an easy-to-understand format is key. Data visualization helps bridge this gap by converting AI outputs into charts and graphs, making complex data easier to grasp. In this blog, we'll discuss the importance of visualizing AI responses, the best chart types for different data, and how to implement these visualizations in a web application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Visualize AI Responses?
&lt;/h2&gt;

&lt;p&gt;AI models, particularly those based on machine learning and natural language processing, generate insights that can be dense and hard to interpret in raw text form. Visualizations help bridge the gap between complex AI output and user comprehension by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Simplifying Data Interpretation: Charts and graphs distill complex datasets into simple visual elements, making patterns and trends easier to identify.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enhancing User Engagement: Interactive charts can make data exploration more engaging, allowing users to interact with and explore the data more deeply.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Facilitating Data-Driven Decisions: Visual representations of data help stakeholders quickly grasp critical insights, supporting faster and more informed decision-making.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Solution Overview
&lt;/h2&gt;

&lt;p&gt;To achieve the desired functionality, the application is built using the following technologies:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Server Side:&lt;/strong&gt; Node.js with Express.js for handling requests, OpenAI API for generating embeddings, Supabase as a vector store for embeddings.&lt;br&gt;
&lt;strong&gt;Client Side:&lt;/strong&gt; React.js for the UI, react-chartjs-2 for rendering charts.&lt;/p&gt;
&lt;h2&gt;
  
  
  Server-Side Implementation
&lt;/h2&gt;

&lt;p&gt;Let's start with the server-side setup. The server handles two main tasks:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Processing and Training the Model:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Accepting and reading files in various formats.&lt;/li&gt;
&lt;li&gt;Using OpenAI's API to generate embeddings for the text data.&lt;/li&gt;
&lt;li&gt;Storing the embeddings in Supabase for future retrieval and querying.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Responding to User Queries:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Accepting user queries in natural language.&lt;/li&gt;
&lt;li&gt;Fetching relevant data from the vector store based on embeddings.&lt;/li&gt;
&lt;li&gt;Returning the results as a JSON response.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step-by-Step Guide:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;Set up the Node.js and Express server&lt;/em&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install express multer supabase-js openai
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;Store the confidential values in a .env file&lt;/em&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SUPERBASE_API_KEY= YOUR_SUPERBASE_API_KEY
SUPERBASE_URL=YOUR_SUPERBASE_URL
OPEN_AI_API_KEY=YOUR_OPEN_AI_API_KEY
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;Set up a basic Express server&lt;/em&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import express from 'express';
import multer from 'multer';
import { join, dirname } from 'path';
import { unlinkSync } from 'fs';
import { train } from './train.js';
import {  processPrompt } from './prompt.js';
import cors from 'cors';  // Use default import
import bodyParser from 'body-parser';
import { fileURLToPath } from 'url';
import fs from 'fs';

import dotenv from 'dotenv';
dotenv.config();
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

const app = express();
const port = 3001;
app.use(cors());

const storage = multer.diskStorage({
  destination: function (req, file, cb) {
    cb(null, 'uploads/');
  },
  filename: function (req, file, cb) {
    cb(null, file.originalname);
  },
});

const upload = multer({ storage: storage });
const uploadDir = 'uploads';
if (!fs.existsSync(uploadDir)) {
  fs.mkdirSync(uploadDir);
}

app.use(bodyParser.json());

app.post('/train', upload.single('file'), async (req, res) =&amp;gt; {
  const filePath = join(__dirname, req.file.path);
  try {
    await train(filePath);
    unlinkSync(filePath); // Clean up the uploaded file
    res.status(200).send('Training data uploaded and processed');
  } catch (error) {
    res.status(500).send(`Error training model: ${error.message}`);
  }
});

app.post('/processPrompt', async (req, res) =&amp;gt; {
  try {
    const question = req.body.question;
    if (!question) {
      return res.status(400).send('Question is required');
    }

    const response = await processPrompt(question);

    res.status(200).send({ response });
  } catch (error) {
    res.status(500).send(`Error processing prompt: ${error.message}`);
  }
});

app.listen(port, () =&amp;gt; {
  console.log(`Server running on http://localhost:${port}`);
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;Handling File Uploads and Processing:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use multer for handling file uploads.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Read and parse the content of .txt, .xlsx, .xls, and .pdf files.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Generate embeddings using the OpenAI API.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example code snippet for processing files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { RecursiveCharacterTextSplitter } from 'langchain/text_splitter';
import fs from 'fs/promises';
import { createClient } from '@supabase/supabase-js';
import { SupabaseVectorStore } from '@langchain/community/vectorstores/supabase';
import { OpenAIEmbeddings } from '@langchain/openai';
import xlsx from 'xlsx'; 

import dotenv from 'dotenv';
dotenv.config();

async function train(filepath) {
    try {
        const superbase_api_key = process.env.SUPERBASE_API_KEY;
        const superbase_url = process.env.SUPERBASE_URL;
        const openAIApiKey = process.env.OPEN_AI_API_KEY;

        const splitter = new RecursiveCharacterTextSplitter({
            chunkSize: 500,
            separators: ['\n\n', '\n', ' ', ''],
            chunkOverlap: 50
        });
        let text = '';
        if (filepath.endsWith('.xlsx') || filepath.endsWith('.xls')) {
            const workbook = xlsx.readFile(filepath);
            const sheetNames = workbook.SheetNames;
            const firstSheet = workbook.Sheets[sheetNames[0]];
            const jsonData = xlsx.utils.sheet_to_json(firstSheet, { header: 1 });

            text = jsonData.map(row =&amp;gt; row.join(' ')).join('\n');
        } else {
            text = await fs.readFile(filepath, 'utf-8');
        }

        const output = await splitter.createDocuments([text]);
        console.log(output);

        const client = createClient(superbase_url, superbase_api_key);
        console.log(client);

        await retryAsyncOperation(async () =&amp;gt; {
            await SupabaseVectorStore.fromDocuments(
                output,
                new OpenAIEmbeddings({ openAIApiKey }),
                {
                    client,
                    tableName: 'documents',
                }
            );
        });

    } catch (err) {
        console.log(err);
    }
}

async function retryAsyncOperation(operation, retries = 5, delay = 1000) {
    for (let i = 0; i &amp;lt; retries; i++) {
      try {
        await operation();
        return;
      } catch (err) {
        console.error(`Error on attempt ${i + 1}: ${err.message}`);
        if (err.response) {
          console.error('Supabase response:', err.response.data);
        }
        if (i &amp;lt; retries - 1) {
          console.log(`Retrying... (${i + 1}/${retries})`);
          await new Promise(resolve =&amp;gt; setTimeout(resolve, delay));
        } else {
          throw err;
        }
      }
    }
  }
export { train};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;Handling User Queries:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Accept user queries from the client side.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Retrieve relevant embeddings from Supabase.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use the retrieved data to generate responses.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example code snippet for handling queries:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { ChatOpenAI } from '@langchain/openai';
import { PromptTemplate } from '@langchain/core/prompts';
import { retriever } from './utils/retriever.js';
import { combineDocuments } from './utils/combineDocuments.js';
import { StringOutputParser } from '@langchain/core/output_parsers';
import { RunnableSequence, RunnablePassthrough } from '@langchain/core/runnables';
import dotenv from 'dotenv';
dotenv.config();

async function processPrompt(question) {
    const openAIApiKey = process.env.OPEN_AI_API_KEY;

    const standaloneQuestionTemplate = 'Given a question, convert it to a standalone question. question: {question} standalone question:';

    const answerTemplate = `You are a chart generator bot that generates accurate charts based on the provided context. Use only the provided context to generate the final answer. Do not invent or assume any information not explicitly stated in the context. While generating the final answer, please follow these guidelines:
        1. Your response must always be valid JSON that can be parsed directly. Do not include backticks, newlines, or extra spaces in the JSON structure. Use the following structure for your response object and do not change the names of the keys: chartType, title, labels, values. chartType is a string, title is a string, labels is an array of strings, and values is an array of numbers. If any value is unavailable, set it to 0.
        2. If chartType isn't specified in the question, use "bar" as default.
        3. DO NOT include "chart" or any other keyword in the chartType string.
        4. If you do not find any valid information associated with the user's question in the provided Context, or if the Context does not contain sufficient data to answer the question accurately, return No data found
        5. Ensure that the response only includes data relevant to the specific query and is fully supported by the provided context; if requested, perform calculations as well.
        6. Do not make assumptions or generate data that is not explicitly provided in the Context.

        Context: {context}
        Question: {question}
        Answer:`;

    try {
        const llm = new ChatOpenAI({ openAIApiKey, verbose: true });
        const standaloneQuestionPrompt = PromptTemplate.fromTemplate(standaloneQuestionTemplate);
        const answerPrompt = PromptTemplate.fromTemplate(answerTemplate);

        const standaloneQuestionChain = RunnableSequence.from([standaloneQuestionPrompt, llm, new StringOutputParser()]);

        const retrieverChain = RunnableSequence.from([
            standaloneQuestionChain, 
            retriever, 
            combineDocuments 
        ]);

        const answerChain = RunnableSequence.from([answerPrompt, llm, new StringOutputParser()]);

        const chain = RunnableSequence.from([
            {
                context: retrieverChain, 
                question: ({ question }) =&amp;gt; question 
            },
            answerChain 
        ]);

        const response = await chain.invoke({
            question: question,
            verbose: true
        });

        let finalResponse = {
            statusCode: 200,
            body: response
        };

        return finalResponse;
    } catch (err) {
        console.error("There is an error in the code:", err);
        return {
            statusCode: 500,
            body: 'Internal server error'
        };
    }
}

export { processPrompt };
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;Actual retriever function&lt;/em&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { SupabaseVectorStore } from '@langchain/community/vectorstores/supabase'
import { OpenAIEmbeddings } from '@langchain/openai'
import { createClient } from '@supabase/supabase-js'
import dotenv from 'dotenv';
dotenv.config();
const openAIApiKey = process.env.OPEN_AI_API_KEY

const embeddings = new OpenAIEmbeddings({ openAIApiKey })
const sbApiKey = process.env.SUPERBASE_API_KEY
const sbUrl = process.env.SUPERBASE_URL
const client = createClient(sbUrl, sbApiKey)

const vectorStore = new SupabaseVectorStore(embeddings, {
    client,
    tableName: 'documents',
    queryName: 'match_documents'

})

const retriever = vectorStore.asRetriever()

export { retriever }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Client-Side Implementation
&lt;/h2&gt;

&lt;p&gt;The client side is built using React.js. It sends user queries to the server, receives the response, and dynamically visualizes the data using react-chartjs-2.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step-by-Step Guide:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;Set up the React application:&lt;/em&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install axios chart.js react-chartjs-2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;Fetching Data from the Server:&lt;/em&gt;&lt;/strong&gt;
Use axios to make API requests to the server for uploading files and querying data.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import React, { useState, useEffect, useRef } from 'react';
import axios from 'axios';
import { Chart as ChartJS, registerables } from 'chart.js';
import ChartComponent from './ChartComponent';
import './PromptInput.css';

ChartJS.register(...registerables);

const PromptInput = () =&amp;gt; {
  const [input, setInput] = useState('');
  const [chatHistory, setChatHistory] = useState([]);
  const [loading, setLoading] = useState(false);
  const [error, setError] = useState(null);
  const [isButtonDisabled, setIsButtonDisabled] = useState(true);
  const chatContainerRef = useRef(null);

  useEffect(() =&amp;gt; {
    setIsButtonDisabled(input.trim() === '');
  }, [input]);

  useEffect(() =&amp;gt; {
    if (chatContainerRef.current) {
      chatContainerRef.current.scrollTop = chatContainerRef.current.scrollHeight;
    }
  }, [chatHistory]);

  const handleSubmit = async (e) =&amp;gt; {
    e.preventDefault();
    setLoading(true);
    setError(null);

    if (!input.trim()) {
      setError('Please enter a question.');
      setLoading(false);
      return;
    }

    setChatHistory((prevHistory) =&amp;gt; [
      ...prevHistory,
      { sender: 'user', message: input },
    ]);

    try {
      const res = await axios.post('http://localhost:3001/processPrompt', { question: input });
      let result = res?.data?.response;

      if (result &amp;amp;&amp;amp; result.statusCode === 200 &amp;amp;&amp;amp; result?.body) {
        let body = result.body !== 'No data found' &amp;amp;&amp;amp; typeof result?.body === 'string' ? JSON.parse(result.body) : result.body;
        console.log('body:', body);

        const hasValidData = body.values &amp;amp;&amp;amp; body.values.length &amp;gt; 0 &amp;amp;&amp;amp; body.values.some(value =&amp;gt; value !== 0);

        if (hasValidData) {
          setChatHistory((prevHistory) =&amp;gt; [
            ...prevHistory,
            { sender: 'bot', message: `Chart generated for: ${body.title}`, chartData: body },
          ]);
        } else {
          setChatHistory((prevHistory) =&amp;gt; [
            ...prevHistory,
            { sender: 'bot', message: 'No data found.' },
          ]);
        }
      } else {
        setError('An error occurred while fetching the response. Please try again.');
      }
    } catch (error) {
      console.error('Error:', error);
      setError('An error occurred while fetching the response. Please try again.');
    } finally {
      setLoading(false);
      setInput('');
    }
  };

  return (
    &amp;lt;div className="App"&amp;gt;
      &amp;lt;div className="chat-container" ref={chatContainerRef}&amp;gt;
        {chatHistory.map((chat, index) =&amp;gt; (
          &amp;lt;div
            key={index}
            className={chat.sender === 'user' ? 'user-message' : 'bot-message'}
          &amp;gt;
            &amp;lt;p&amp;gt;{chat.message}&amp;lt;/p&amp;gt;
            {chat.sender === 'bot' &amp;amp;&amp;amp; chat.chartData &amp;amp;&amp;amp; (
              &amp;lt;ChartComponent chartData={chat.chartData} /&amp;gt;
            )}
          &amp;lt;/div&amp;gt;
        ))}
      &amp;lt;/div&amp;gt;

      {error &amp;amp;&amp;amp; &amp;lt;p className="error-message"&amp;gt;{error}&amp;lt;/p&amp;gt;}

      &amp;lt;form className="input-container" onSubmit={handleSubmit}&amp;gt;
        &amp;lt;input
          type="text"
          value={input}
          onChange={(e) =&amp;gt; setInput(e.target.value)}
          placeholder="Ask a question..."
          className="prompt-text"
        /&amp;gt;
        &amp;lt;button type="submit" disabled={isButtonDisabled || loading}&amp;gt;
          {loading ? 'Loading...' : 'Submit'}
        &amp;lt;/button&amp;gt;
      &amp;lt;/form&amp;gt;
    &amp;lt;/div&amp;gt;
  );
};

export default PromptInput;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
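&lt;p&gt;The response handling inside handleSubmit can also be factored out into a small pure helper, which makes the parsing and validation logic easy to unit test away from React. This is an optional refactoring sketch; the function name and return shape are my own, not part of the original component:&lt;/p&gt;

```javascript
// A sketch of the body-parsing logic from handleSubmit as a standalone,
// testable function (hypothetical helper, not part of the original component)
function parseChartResponse(result) {
  if (!result || result.statusCode !== 200 || !result.body) {
    return { ok: false, error: 'An error occurred while fetching the response.' };
  }

  // The server may return the body as a JSON string or as a plain object
  const body =
    result.body !== 'No data found' && typeof result.body === 'string'
      ? JSON.parse(result.body)
      : result.body;

  // A chart is only worth rendering when at least one value is non-zero
  const hasValidData =
    Array.isArray(body.values) &&
    body.values.length > 0 &&
    body.values.some((value) => value !== 0);

  return hasValidData
    ? { ok: true, chartData: body }
    : { ok: false, error: 'No data found.' };
}
```

&lt;p&gt;The component's handleSubmit could then reduce to a single call plus two setChatHistory branches.&lt;/p&gt;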



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;Visualizing Data with react-chartjs-2:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use different chart types (Bar, Line, Pie, etc.) to visualize data based on user queries.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Customize chart options for better data representation. Example chart setup:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import React, { useEffect, useRef } from 'react';
import { Chart } from 'chart.js';

const ChartComponent = ({ chartData }) =&amp;gt; {
  const chartRef = useRef(null);
  const chartInstanceRef = useRef(null);

  useEffect(() =&amp;gt; {
    if (!chartData || !chartRef.current) return;

    if (chartInstanceRef.current) {
      chartInstanceRef.current.destroy();
    }

    const generateColors = (numColors) =&amp;gt; {
      const colors = [];
      for (let i = 0; i &amp;lt; numColors; i++) {
        const r = Math.floor(Math.random() * 255);
        const g = Math.floor(Math.random() * 255);
        const b = Math.floor(Math.random() * 255);
        colors.push(`rgba(${r}, ${g}, ${b}, 0.6)`);
      }
      return colors;
    };

    const backgroundColors = generateColors(chartData.values.length);
    const borderColors = backgroundColors.map(color =&amp;gt; color.replace('0.6', '1')); 

    chartInstanceRef.current = new Chart(chartRef.current, {
      type: chartData.chartType,
      data: {
        labels: chartData.labels,
        datasets: [
          {
            label: chartData.title,
            data: chartData.values,
            backgroundColor: backgroundColors,
            borderColor: borderColors,
            borderWidth: 1,
          },
        ],
      },
      options: {
        responsive: true,
        maintainAspectRatio: false, 
        scales: {
          y: {
            beginAtZero: true
          }
        }
      },
    });

    // Destroy the chart instance on cleanup so unmounts and re-renders
    // don't leak canvases or trigger "canvas already in use" errors
    return () =&amp;gt; {
      if (chartInstanceRef.current) {
        chartInstanceRef.current.destroy();
        chartInstanceRef.current = null;
      }
    };
  }, [chartData]);

  return (
    &amp;lt;div style={{ width: '100%', height: '300px' }}&amp;gt; 
      &amp;lt;canvas ref={chartRef} style={{ width: '100%', height: '100%' }} /&amp;gt;
    &amp;lt;/div&amp;gt;
  );
};

export default ChartComponent;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
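&lt;p&gt;The random color logic is another piece worth pulling out and verifying on its own. Here is a minimal standalone sketch of the same idea (the helper name is illustrative, not taken from the component):&lt;/p&gt;

```javascript
// A standalone version of the component's color logic: one random
// semi-transparent RGBA fill per data point, with a fully opaque
// border derived from each fill (illustrative helper)
function makeChartColors(numColors) {
  const backgroundColors = [];
  for (let i = 0; i < numColors; i++) {
    const r = Math.floor(Math.random() * 256); // 0-255 inclusive
    const g = Math.floor(Math.random() * 256);
    const b = Math.floor(Math.random() * 256);
    backgroundColors.push(`rgba(${r}, ${g}, ${b}, 0.6)`);
  }
  // Reuse each fill color at full opacity for the border
  const borderColors = backgroundColors.map((c) => c.replace('0.6', '1'));
  return { backgroundColors, borderColors };
}
```

&lt;p&gt;Note the use of 256 rather than 255 as the multiplier, so that pure white channels are reachable too.&lt;/p&gt;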



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Visualizing AI responses in chart format provides a powerful way to convey complex insights more intuitively and effectively. By leveraging popular libraries like react-chartjs-2 in a React application, developers can create dynamic and interactive data visualizations that enhance user experience and support data-driven decisions.&lt;br&gt;
As AI continues to evolve, integrating visualization into AI-driven applications will become increasingly important. Start experimenting with different chart types and libraries today to make the most of your AI insights!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Overcoming Docker Installation Woes on Windows Server 2019 for Lambda Function Testing</title>
      <dc:creator>PrachiBhende</dc:creator>
      <pubDate>Wed, 08 May 2024 11:55:40 +0000</pubDate>
      <link>https://dev.to/prachibhende/overcoming-docker-installation-woes-on-windows-server-2019-for-lambda-function-testing-3omh</link>
      <guid>https://dev.to/prachibhende/overcoming-docker-installation-woes-on-windows-server-2019-for-lambda-function-testing-3omh</guid>
      <description>&lt;p&gt;In the realm of modern software development, testing and deploying applications quickly and efficiently is paramount. However, sometimes the tools we rely on can present unexpected challenges. In this blog post, I'll share my experience grappling with Docker installation issues on Windows Server 2019 and how I found a workaround using serverless-offline to facilitate Lambda function testing.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Challenge: Docker Installation on Windows Server 2019:
&lt;/h2&gt;

&lt;p&gt;Docker serves as a crucial tool for containerization, offering a streamlined approach to building, shipping, and running applications across different environments. With Docker, we can encapsulate our applications and their dependencies into lightweight, portable containers, ensuring consistency and reliability across development, testing, and production environments.&lt;/p&gt;

&lt;p&gt;However, when I attempted to install Docker in a Windows Server 2019 environment, the inability to enable Hyper-V due to compatibility issues posed a major roadblock. Hyper-V is Microsoft's virtualization platform, essential for running Docker on Windows. Without Hyper-V enabled, Docker installation was thwarted, leaving me unable to leverage this powerful tool for containerization.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enter Serverless-Offline: A Lifesaver in Testing Lambda Functions:
&lt;/h2&gt;

&lt;p&gt;With Docker out of the picture, I began exploring alternative methods to simulate AWS Lambda environments for testing. That's when I stumbled upon serverless-offline, a powerful tool that emulates AWS Lambda and API Gateway on your local machine. Intrigued, I decided to give it a try.&lt;/p&gt;

&lt;h2&gt;
  
  
  Seamless Setup and Configuration:
&lt;/h2&gt;

&lt;p&gt;Setting up serverless-offline was surprisingly straightforward. With just a few simple commands, I was able to install the necessary dependencies and configure my serverless project to utilize the offline plugin. All you need to do is,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create an IAM user for local testing from the AWS Management Console and generate access keys under the Security credentials section. You can download a .csv file containing the access key ID and secret access key.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install AWS CLI, &lt;a href="https://awscli.amazonaws.com/AWSCLIV2.msi"&gt;click here&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure AWS from the command prompt by running &lt;code&gt;aws configure&lt;/code&gt;, &lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffv40fhpu4y0t5abw2kzy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffv40fhpu4y0t5abw2kzy.png" alt="Image description" width="676" height="135"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install npm packages &lt;br&gt;
&lt;code&gt;npm install --save-dev serverless-offline serverless-dotenv-plugin&lt;/code&gt;&lt;br&gt;
This will add devDependencies to your package.json&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add the lines below to package.json for the start command&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"scripts": {
    "start-local": "serverless offline start"
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create a serverless.yml file in the root folder of your application. This file holds all the configuration.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;service: project-name
provider:
  name: aws
  runtime: nodejs 20.x
  stage: dev
  region: ap-south-1

plugins:
  - serverless-offline
  - serverless-dotenv-plugin

functions:
  demo-lambda:
    handler: demo-lambda/index.handler
    events:
      - http:
          path:  /demo
          method: ANY
          cors: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
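&lt;p&gt;For completeness, the demo-lambda/index.handler referenced in serverless.yml could be as simple as the following sketch; only the file path and export name come from the configuration above, while the response logic is made up for illustration:&lt;/p&gt;

```javascript
// demo-lambda/index.js, a minimal handler that serverless-offline can
// serve at the /demo route; the response logic here is purely illustrative
async function handler(event) {
  // API Gateway-style events carry query parameters here
  const name =
    (event.queryStringParameters && event.queryStringParameters.name) || 'world';
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
}

module.exports = { handler };
```

&lt;p&gt;With the offline server running, a request to the /demo link listed in the startup output would invoke this handler.&lt;/p&gt;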



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create .env file for storing environment variables. The serverless-dotenv-plugin is for accessing the environment variables.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Finally, run the &lt;code&gt;npm run start-local&lt;/code&gt; command in the terminal of your VS Code. This will list the links to access the Lambda functions. Copy a link and call it from any testing application, such as Postman.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To my delight, I found that it accurately emulated the AWS Lambda environment, allowing me to test my functions locally without the need for Docker or a live AWS environment. This significantly streamlined my development process, enabling rapid iteration and debugging right from my development machine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits Beyond Docker Dependency:
&lt;/h2&gt;

&lt;p&gt;While serverless-offline proved invaluable in circumventing my Docker woes, its benefits extend beyond mere convenience. By enabling local testing of Lambda functions, it promotes a more efficient and iterative development cycle, empowering developers to iterate quickly and deliver high-quality code with confidence.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;In the face of adversity, it's important to remain adaptable and resourceful. Though Docker installation issues initially threatened to derail my development efforts, serverless-offline emerged as a beacon of hope, providing a viable alternative for testing Lambda functions locally. As developers, we must embrace tools like serverless-offline that empower us to overcome obstacles and continue innovating with confidence.&lt;/p&gt;

&lt;p&gt;Stay tuned for my next blog post, where I'll delve into the intricacies of utilizing Secrets Manager and Parameter Store for local testing, further enhancing our development workflow and ensuring secure management of sensitive information. Happy Reading!!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>lambda</category>
      <category>localtesting</category>
    </item>
    <item>
      <title>Sequencing Success: Migrating from SQS Standard to SQS FIFO</title>
      <dc:creator>PrachiBhende</dc:creator>
      <pubDate>Fri, 02 Feb 2024 11:38:17 +0000</pubDate>
      <link>https://dev.to/prachibhende/sequencing-success-migrating-from-sqs-standard-to-sqs-fifo-10bi</link>
      <guid>https://dev.to/prachibhende/sequencing-success-migrating-from-sqs-standard-to-sqs-fifo-10bi</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbmlaxdrvye6dx538qrqw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbmlaxdrvye6dx538qrqw.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;br&gt;
In the ever-evolving landscape of cloud services, optimizing workflows and ensuring the reliability of message queues is paramount. As I navigated the complexities of managing messages in my AWS infrastructure, I found myself at a crossroads—stick with the familiar SQS standard or explore the promising SQS FIFO (First-In-First-Out) queue.&lt;/p&gt;

&lt;p&gt;In this blog post, I'll share my journey and the compelling reasons behind my decision to transition from SQS standard to SQS FIFO. Brace yourself for a tale of faster processing, smarter sequencing, and a safer, more robust message handling experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Unpredictable Nature of SQS Standard
&lt;/h2&gt;

&lt;p&gt;Initially, like many others in the AWS ecosystem, I gravitated towards SQS Standard due to its touted simplicity, efficiency, and flexibility in managing messages. However, as my message queue demands intensified, SQS Standard's unpredictability became a formidable hurdle. While the option to read messages in batches of 10 was a welcome feature, the erratic batch sizes proved to be a persistent issue: sometimes a poll would fetch two records, other times eight, and this lack of consistency was far from ideal. Maintaining a reliable and ordered messaging system became paramount, and the need for a more sophisticated solution to tackle these issues and uplift my messaging infrastructure became glaringly evident.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Deliberate Choice: SQS FIFO
&lt;/h2&gt;

&lt;p&gt;The turning point in my AWS journey came when I recognized that SQS FIFO (First-In-First-Out) offered the exact remedy for the challenges I had encountered. The choice to transition was not merely strategic; it arose from the imperative need for a meticulously organized and dependable message handling system. Additionally, since my message payloads fit comfortably within the 256KB SQS size limit, FIFO's constraints posed no obstacle, which solidified the decision.&lt;/p&gt;

&lt;h2&gt;
  
  
  Unravelling the Advantages of SQS FIFO
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Faster Processing&lt;br&gt;
SQS FIFO's ability to process messages in a strict order brings a new level of efficiency to my workflows. No longer do I worry about messages being processed out of sequence. This streamlined approach has significantly reduced the time it takes for critical tasks to move through the pipeline.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Smarter Sequencing&lt;br&gt;
With SQS FIFO, the sequencing of messages is not just a feature; it's a game-changer. I delve into how the smart sequencing capabilities of SQS FIFO have empowered me to design more complex workflows with confidence. The result? A messaging architecture that aligns seamlessly with my application's logic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Safer, More Robust Handling&lt;br&gt;
Reliability is non-negotiable when it comes to message queues. SQS FIFO's dedication to maintaining message order and ensuring exactly-once processing has brought a new level of robustness to my system, and this commitment to reliability has positively impacted the overall stability of my applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Predictable Batch Processing&lt;br&gt;
With SQS FIFO, the days of uncertainty in batch processing were over. The ability to consistently process messages in the order they were received brought a level of predictability and control that SQS Standard simply couldn't match. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
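&lt;p&gt;To make the producer-side change concrete: every message sent to a FIFO queue must carry a MessageGroupId (the ordering scope), and, unless content-based deduplication is enabled on the queue, a MessageDeduplicationId. A small sketch of assembling those parameters (the helper name and queue URL are made up; the resulting object is what you would pass to the SQS SendMessage API):&lt;/p&gt;

```javascript
// Build the parameters for sending one message to a FIFO queue.
// FIFO queue names always end in ".fifo"; MessageGroupId defines the
// ordering scope, and MessageDeduplicationId suppresses duplicates within
// the 5-minute deduplication window. (Illustrative helper.)
function buildFifoSendParams(queueUrl, groupId, dedupId, payload) {
  if (!queueUrl.endsWith('.fifo')) {
    throw new Error('FIFO queue URLs must end in ".fifo"');
  }
  return {
    QueueUrl: queueUrl,
    MessageBody: JSON.stringify(payload),
    MessageGroupId: groupId,          // messages in one group stay strictly ordered
    MessageDeduplicationId: dedupId,  // identical IDs within 5 minutes are dropped
  };
}
```

&lt;p&gt;The returned object is what you would hand to a SendMessage call in the AWS SDK.&lt;/p&gt;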

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Transitioning from SQS standard to SQS FIFO has revolutionized my AWS infrastructure. This strategic move addresses the unpredictability of SQS standard's batch processing, providing faster, smarter, and more robust message handling.&lt;/p&gt;

&lt;p&gt;SQS FIFO's commitment to strict order and sequencing, coupled with payload size versatility and the assurance of exactly once processing, brings a level of control and predictability that was lacking before. The advantages in speed, intelligence, and reliability make SQS FIFO a game-changer for anyone managing cloud-based message queues.&lt;/p&gt;

&lt;p&gt;As you contemplate the future of your message handling, consider exploring the transformative power of SQS FIFO. Stay tuned for more insights and a firsthand account of how this switch can supercharge your message queues. Your journey to a more efficient and reliable messaging infrastructure might be just a switch away.&lt;/p&gt;

&lt;p&gt;Happy Reading and Happy Exploring!&lt;/p&gt;

&lt;h2&gt;
  
  
  Additional Resources
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html"&gt;SQS Documentation&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues-understanding-logic.html"&gt;FIFO delivery logic&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/standard-queues.html"&gt;Standard SQS&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Happy Reading!!&lt;/p&gt;

</description>
      <category>sqs</category>
      <category>aws</category>
      <category>fifo</category>
      <category>learning</category>
    </item>
    <item>
      <title>A Guide to Durable Function Chaining with Node.js</title>
      <dc:creator>PrachiBhende</dc:creator>
      <pubDate>Fri, 25 Aug 2023 07:25:35 +0000</pubDate>
      <link>https://dev.to/prachibhende/a-guide-to-durable-function-chaining-with-nodejs-4kf2</link>
      <guid>https://dev.to/prachibhende/a-guide-to-durable-function-chaining-with-nodejs-4kf2</guid>
      <description>&lt;p&gt;I recently got the opportunity to work on a project where I found myself confronted with the task of meticulously partitioning operations into two distinct steps, each intricately linked and demanding sequential execution. Adding an extra layer of complexity, these steps needed to be initiated daily at designated times. While doing the research on which architecture to follow I stumbled upon the ingenious concept of Durable Function Chaining. In this blog I am excited to delve into the intricacies of crafting a meticulously orchestrated Function App, where the seamless chaining of functions takes center stage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the world of serverless computing, orchestrating complex workflows efficiently is crucial. Azure Durable Functions provide a powerful solution for building such orchestrations. Function orchestration refers to the process of coordinating and managing the execution of multiple functions or tasks in a specific sequence to achieve a desired outcome or complete a complex workflow. It involves defining the order of execution, managing dependencies between tasks, handling errors, and ensuring that the overall workflow is executed correctly and efficiently. This blog post will walk you through the process of creating durable function chains using Node.js.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before diving into this tutorial, ensure you have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Node.js installed on your machine.&lt;/li&gt;
&lt;li&gt;Azure Functions Core Tools for local development&lt;/li&gt;
&lt;li&gt;Basic understanding of Azure Functions and serverless concepts&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Setting Up the Development Environment
&lt;/h2&gt;

&lt;p&gt;Start by creating a new Azure Functions project. Open your terminal and run the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install -g azure-functions-core-tools@3 --unsafe-perm true
func init DurableFunctionChaining --language javascript
cd DurableFunctionChaining

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, install the Durable Functions extension:&lt;br&gt;
&lt;code&gt;npm install durable-functions&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Components involved in Durable Function Chaining
&lt;/h2&gt;

&lt;p&gt;A Durable Function chain contains three major components. Let’s dive deeper into each of these:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Durable Functions HTTP Starter-&lt;/strong&gt; The Durable Functions HTTP Starter serves as the entry point for initiating the entire durable workflow. It's an HTTP-triggered Azure Function that is typically executed only once per workflow instance. When a client sends an HTTP request to this starter function, it starts an instance of the defined orchestrator function, accepting input parameters or data from the HTTP request payload, and returns an instance ID that uniquely identifies the workflow instance and can be used to track and manage the workflow's progress.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Durable Functions Orchestrator-&lt;/strong&gt; The Durable Functions Orchestrator is responsible for controlling the flow of the entire workflow. It defines the sequence of activities to be executed, manages dependencies between tasks, and handles error scenarios and retries. The orchestrator function coordinates the logic that chains the activities together, manages state across the workflow by storing and passing data between activities, and can pause execution while waiting for certain conditions or external inputs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Durable Functions Activity-&lt;/strong&gt; Durable Functions Activities represent individual units of work that are executed as part of the workflow. These functions are designed to perform specific tasks, such as data processing, validation, integration with external services, and more. Activities are invoked by the orchestrator using the callActivity method, and their outputs can be used as inputs for subsequent activities. They can be implemented as independent functions with a single responsibility and can be reused across different orchestrations and workflows.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Creating Your First Durable Function
&lt;/h2&gt;

&lt;p&gt;Now that you understand that we need three components to implement a durable function chain, let’s go through the functions and their configurations. &lt;br&gt;
Before creating the functions, you need your environment ready. In VS Code, install the Azure Functions extension. After it is installed, you can see its icon on the left-hand side of VS Code:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1bgxvghblmgmkactkd0h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1bgxvghblmgmkactkd0h.png" alt="Image description" width="376" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next step is to sign in to Azure; your subscriptions will then appear under RESOURCES.&lt;br&gt;
Now add functions to the function app by clicking the Azure Function app button on the WORKSPACE tab. You can create a new function app or add functions to an existing one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fac6wnzctttuairltw9z5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fac6wnzctttuairltw9z5.png" alt="Image description" width="800" height="263"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's begin by creating an HTTP-triggered function. Click on the Create function option to add a Durable Functions HTTP Starter.&lt;/p&gt;

&lt;p&gt;When you create a function in your Node.js project, you will be asked to select the folder in which to create the functions and the language you are building the app in; select TypeScript, with the TypeScript programming model set to Model V4. Now select Durable Functions HTTP Starter and name it. A folder named after your function is added to your root folder, and it contains two files: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7g2bykog94s9i4zn6hyv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7g2bykog94s9i4zn6hyv.png" alt="Image description" width="247" height="86"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;HttpStart is the function name; function.json holds the configuration, whereas index.ts holds the code. The same structure applies to every function added to the project, each with its own configuration.&lt;/p&gt;

&lt;p&gt;function.json would look like&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "bindings": [
    {
      "authLevel": "function",
      "name": "req",
      "type": "httpTrigger",
      "direction": "in",
      "route": "orchestrators/{functionName}",
      "methods": [
        "post",
        "get"
      ]
    },
    {
      "name": "$return",
      "type": "http",
      "direction": "out"
    },
    {
      "name": "starter",
      "type": "orchestrationClient",
      "direction": "in"
    }
  ],
  "scriptFile": "../dist/HttpStart/index.js"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And index.ts&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import * as df from "durable-functions"
import { AzureFunction, Context, HttpRequest } from "@azure/functions"

const httpStart: AzureFunction = async function (context: Context, req: HttpRequest): Promise&amp;lt;any&amp;gt; {
    const client = df.getClient(context);
    const instanceId = await client.startNew(req.params.functionName, undefined, req.body);

    context.log(`Started orchestration with ID = '${instanceId}'.`);

    return client.createCheckStatusResponse(context.bindingData.req, instanceId);
};

export default httpStart;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The client.startNew() call creates an instance of the orchestrator, since the route we specified in function.json points to the orchestrator by name. &lt;/p&gt;

&lt;h2&gt;
  
  
  Implementing Function Chaining
&lt;/h2&gt;

&lt;p&gt;Consider a scenario where we process an order. We'll create three activity functions: validateOrder, processPayment, and sendConfirmation.&lt;br&gt;
Create an activities folder in your project directory.&lt;br&gt;
Inside the activities folder, create three files: validateOrder.js, processPayment.js, and sendConfirmation.js. The same can also be added from the Azure Function App icon in the local workspace. &lt;/p&gt;

&lt;p&gt;In each of these files, create an activity function like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// validateOrder.js
module.exports = async function (context, order) {
    // Validation logic
    return isValid;
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// processPayment.js
module.exports = async function (context, order) {
    // Payment processing logic
    return paymentStatus;
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// sendConfirmation.js
module.exports = async function (context, order, paymentStatus) {
    // Send confirmation logic
    return confirmationSent;
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, create an orchestration function to chain these activities. In your main project folder, create a file named orchestration.js:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const df = require("durable-functions");

module.exports = df.orchestrator(function* (context) {
    const order = yield context.df.callActivity("validateOrder", context.bindingData.order);
    const paymentStatus = yield context.df.callActivity("processPayment", order);
    yield context.df.callActivity("sendConfirmation", order, paymentStatus);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The callActivity method of the orchestrator takes the name of the activity function as its first parameter, so make sure it matches the name you gave your activity.&lt;/p&gt;
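&lt;p&gt;If you scaffold the activities through the Azure Functions extension, each one also gets its own function.json with an activityTrigger binding, analogous to the HttpStart configuration shown earlier. A sketch of what validateOrder's binding file might look like (the scriptFile path is illustrative and depends on your build output):&lt;/p&gt;

```json
{
  "bindings": [
    {
      "name": "order",
      "type": "activityTrigger",
      "direction": "in"
    }
  ],
  "scriptFile": "../dist/validateOrder/index.js"
}
```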

&lt;h2&gt;
  
  
  Error Handling and Retries
&lt;/h2&gt;

&lt;p&gt;In the orchestration function, you can implement error handling and retries using standard JavaScript try-catch blocks and delay logic.&lt;/p&gt;

&lt;p&gt;Error handling and retries are crucial aspects of building resilient orchestrations using Durable Functions. Durable Function chaining offers built-in mechanisms for handling errors and retrying activities or orchestrations. Let's explore these concepts with an example of an order processing workflow.&lt;/p&gt;

&lt;p&gt;We'll implement error handling and retries to ensure that the workflow can handle transient failures gracefully.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const df = require("durable-functions");

module.exports = df.orchestrator(function* (context) {
    const order = context.df.getInput();

    try {
        // Step 1: Validate Order
        const isValid = yield context.df.callActivity("validateOrder", order);

        if (!isValid) {
            throw new Error("Invalid order");
        }

        // Step 2: Process Payment
        const paymentStatus = yield context.df.callActivityWithRetry(
            "processPayment",
            new df.RetryOptions(10000, 3), // first retry after 10 seconds, at most 3 attempts
            order
        );

        // Step 3: Send Order Confirmation
        const confirmationSent = yield context.df.callActivity("sendConfirmation", { order, paymentStatus });

        return confirmationSent;
    } catch (error) {
        // Handle errors
        context.log.error(`An error occurred: ${error}`);
        throw new Error("Failed to process order");
    }
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's important to note that Durable Functions provide several options for controlling retries, such as specifying the maximum number of retries, the delay between retries, and handling specific types of exceptions. You can tailor these options to your specific requirements.&lt;/p&gt;
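
&lt;p&gt;To make these retry semantics concrete, here is a framework-free sketch in plain JavaScript of what a retry policy does conceptually — a hypothetical &lt;code&gt;retryWithBackoff&lt;/code&gt; helper for illustration, not the Durable Functions SDK itself:&lt;/p&gt;

```javascript
// Hypothetical helper illustrating what a retry policy does conceptually:
// wait, re-invoke, and grow the delay by a backoff coefficient.
async function retryWithBackoff(fn, options) {
    // options: { firstRetryIntervalMs, maxAttempts, backoffCoefficient }
    let delay = options.firstRetryIntervalMs;
    let lastError;
    for (let attempt = 1; attempt !== options.maxAttempts + 1; attempt += 1) {
        try {
            return await fn();
        } catch (err) {
            lastError = err;
            if (attempt === options.maxAttempts) {
                break; // out of attempts
            }
            await new Promise((resolve) => setTimeout(resolve, delay));
            delay = delay * (options.backoffCoefficient || 1);
        }
    }
    throw lastError;
}

// Demo: an operation that succeeds on its third attempt.
let calls = 0;
const flaky = async () => {
    calls += 1;
    if (calls !== 3) {
        throw new Error("transient failure");
    }
    return "ok";
};

retryWithBackoff(flaky, { firstRetryIntervalMs: 10, maxAttempts: 3, backoffCoefficient: 2 })
    .then((result) => console.log(result + " after " + calls + " attempts")); // prints "ok after 3 attempts"
```

&lt;p&gt;Durable Functions does the waiting durably (the orchestration is unloaded between attempts), but the policy knobs are the same ones &lt;code&gt;RetryOptions&lt;/code&gt; exposes.&lt;/p&gt;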

&lt;h2&gt;
  
  
  Monitoring and Logging
&lt;/h2&gt;

&lt;p&gt;Durable Functions offer built-in logging capabilities that you can use to monitor the progress of your orchestrations. You can also integrate with Azure Monitor for more advanced monitoring.&lt;/p&gt;
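
&lt;p&gt;One logging detail worth knowing: orchestrator code is replayed many times as the framework rebuilds state, so a log line written directly in the orchestrator can appear repeatedly. The &lt;code&gt;context.df.isReplaying&lt;/code&gt; flag lets you suppress the duplicates. A small sketch — the mock context below is hypothetical, for illustration only:&lt;/p&gt;

```javascript
// Log only on the first execution of an orchestration step, not on replays.
function logOnce(context, message) {
    if (!context.df.isReplaying) {
        context.log(message);
    }
}

// Demo with a hypothetical mock context standing in for the real one.
const messages = [];
const mockContext = {
    df: { isReplaying: false },
    log: (msg) => messages.push(msg),
};

logOnce(mockContext, "payment processed"); // recorded
mockContext.df.isReplaying = true;
logOnce(mockContext, "payment processed"); // suppressed during replay
console.log(messages.length); // prints 1
```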

&lt;h2&gt;
  
  
  Testing and Debugging
&lt;/h2&gt;

&lt;p&gt;You can test individual activity functions by invoking them directly with a mock context. For the orchestration function, run it locally with Azure Functions Core Tools and a storage emulator such as Azurite, or deploy it to Azure for end-to-end testing.&lt;/p&gt;
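
&lt;p&gt;Because an activity function is just an exported async function, a direct unit test needs nothing more than a mock context object. A minimal sketch, assuming a hypothetical &lt;code&gt;validateOrder&lt;/code&gt; implementation that checks a couple of required fields:&lt;/p&gt;

```javascript
// Hypothetical activity under test: valid when the order has an id and items.
const validateOrder = async function (context, order) {
    context.log("validating order " + String(order.id));
    if (!order.id) return false;
    if (!order.items) return false;
    if (order.items.length === 0) return false;
    return true;
};

// Invoke it directly with a mock context — no Functions host required.
const logs = [];
const mockContext = { log: (msg) => logs.push(msg) };

validateOrder(mockContext, { id: "A-100", items: ["book"] })
    .then((isValid) => console.log(isValid)); // prints true
```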

&lt;h2&gt;
  
  
  When to use Durable Function Chains
&lt;/h2&gt;

&lt;p&gt;For simpler sequential workflows, such as the order processing example above, Durable Function chaining keeps the architecture streamlined: it handles sequencing, retries, error handling, and state management out of the box. For more complex workflows with many integration points, Azure Service Bus might be a better fit, as it offers a decoupled architecture. Evaluate your application's requirements and choose the approach that aligns best with your goals.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Durable Function chaining with Node.js provides an efficient way to manage complex workflows in a serverless environment. By following the steps in this guide, you've learned how to create, implement, and manage durable function chains. Experiment with different orchestrations and explore more advanced features to build robust, scalable applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Additional Resources
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://learn.microsoft.com/en-us/azure/azure-functions/durable/" rel="noopener noreferrer"&gt;Azure Durable Functions Documentation&lt;/a&gt;&lt;/p&gt;

</description>
      <category>node</category>
      <category>azure</category>
      <category>durablefunctionchaining</category>
      <category>azurefunctions</category>
    </item>
    <item>
      <title>Custom Angular Directives: Extending HTML Functionality</title>
      <dc:creator>PrachiBhende</dc:creator>
      <pubDate>Mon, 05 Jun 2023 07:20:38 +0000</pubDate>
      <link>https://dev.to/prachibhende/custom-angular-directives-extending-html-functionality-2f01</link>
      <guid>https://dev.to/prachibhende/custom-angular-directives-extending-html-functionality-2f01</guid>
<description>&lt;p&gt;Angular is a powerful, robust JavaScript framework that enables developers to build dynamic and interactive web applications. As I delved into my Angular learning journey, I discovered an interesting feature: custom directives, which let us extend HTML functionality and create reusable components and behaviors. In this blog, we will explore the concept of custom Angular directives and learn how to create our own.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are Angular Directives?
&lt;/h2&gt;

&lt;p&gt;Angular directives are markers on DOM elements that instruct the HTML compiler to attach specific behaviors and transform those elements. They are essentially functions that get executed when the compiler encounters them. Directives in Angular are broadly classified into three types: component directives, attribute directives, and structural directives.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Component Directives:&lt;/strong&gt; Component directives are the building blocks of Angular applications. They create reusable components with their own templates, styles, and logic. They are declared using the &lt;code&gt;@Component&lt;/code&gt; decorator and can be represented as elements in HTML with child elements. Component directives can communicate with other components using inputs and outputs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Attribute Directives:&lt;/strong&gt; Attribute directives modify the behavior or appearance of an HTML element. They are typically used as attributes on HTML elements. Attribute directives are declared using the &lt;code&gt;@Directive&lt;/code&gt; decorator and can be applied as attributes on HTML elements to apply visual transformations or change behavior.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Structural Directives:&lt;/strong&gt; Structural directives allow dynamic manipulation of the DOM structure by adding, removing, or manipulating elements based on certain conditions. They are denoted by an asterisk (*) prefix in Angular syntax. Angular provides three built-in structural directives: &lt;code&gt;ngIf&lt;/code&gt;, &lt;code&gt;ngFor&lt;/code&gt;, and &lt;code&gt;ngSwitch&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating Custom Directives
&lt;/h2&gt;

&lt;p&gt;Creating custom directives in Angular is a straightforward process. Let's get started by creating our first custom attribute directive for setting the font size of HTML elements. Here's how you can implement it:&lt;/p&gt;

&lt;p&gt;Create a new directive using the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

ng generate directive fontsize


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This creates two files, &lt;code&gt;fontsize.directive.ts&lt;/code&gt; and &lt;code&gt;fontsize.directive.spec.ts&lt;/code&gt;. It also imports the directive into &lt;code&gt;app.module.ts&lt;/code&gt; and adds it to the &lt;code&gt;declarations&lt;/code&gt; array:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

import { FontsizeDirective } from './fontsize.directive';
 @NgModule({
  declarations: [
    FontsizeDirective
  ],
})
export class AppModule { }


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The font size can be set in the &lt;code&gt;fontsize.directive.ts&lt;/code&gt; file as follows:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

import { Directive, ElementRef, Renderer2 } from '@angular/core';

@Directive({
  selector: '[appFontsize]'
})
export class FontsizeDirective {

  constructor(private el: ElementRef, private renderer: Renderer2) {
    this.renderer.setStyle(this.el.nativeElement, 'fontSize', 'small');
  }
}



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;code&gt;ElementRef&lt;/code&gt; gives access to the DOM element on which the style needs to be applied, and &lt;code&gt;Renderer2&lt;/code&gt; applies the style to it safely.&lt;br&gt;
Finally, the selector &lt;code&gt;appFontsize&lt;/code&gt; is used in the .html file on any element whose font size should be set:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

&amp;lt;p appFontsize&amp;gt;This is a paragraph&amp;lt;/p&amp;gt;
&amp;lt;div appFontsize&amp;gt;This is a div&amp;lt;/div&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Why Use Custom Directives Instead of Styling with CSS?
&lt;/h2&gt;

&lt;p&gt;Looking at the example above, you might wonder why we cannot achieve the same result with CSS. Font size can indeed be set through CSS styling. However, there are situations where a custom attribute directive provides benefits over inline styles or CSS classes. Here are a few reasons why you might choose a custom directive for setting font size:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; If you need to dynamically adjust the font size based on certain conditions or data, a custom directive allows you to handle the logic in your component or template. &lt;/li&gt;
&lt;li&gt; By encapsulating the font size logic within a directive, you can easily reuse it across multiple components and templates. &lt;/li&gt;
&lt;li&gt; It promotes a cleaner and more modular code structure. Using a directive allows you to keep the font size logic separate from your component's business logic.&lt;/li&gt;
&lt;li&gt; A custom directive gives you more flexibility in programmatically manipulating the font size. &lt;/li&gt;
&lt;li&gt; Custom directives can be combined with other Angular features, such as template expressions or structural directives, to achieve more complex behavior. For example, you can conditionally apply the &lt;code&gt;appFontsize&lt;/code&gt; directive using &lt;code&gt;*ngIf&lt;/code&gt; or iterate over a list of elements using &lt;code&gt;*ngFor&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;While setting font size through CSS styles is a common approach, creating a custom directive allows you to leverage the full power of Angular to manipulate font size based on dynamic conditions and achieve greater reusability and modularity in your code. Here is an example to demonstrate the same,&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

import { Directive, ElementRef, Renderer2 } from '@angular/core';

@Directive({
  selector: '[appFontsize]'
})
export class FontsizeDirective {

  constructor(private el: ElementRef, private renderer: Renderer2) {

    const elementType = this.el.nativeElement.tagName.toLowerCase();

    if (elementType === "p") {
      this.renderer.setStyle(this.el.nativeElement, 'fontSize', 'small');
    } else {
      this.renderer.setStyle(this.el.nativeElement, 'fontSize', 'x-large');
    }
  }
}



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;So, in the example it can be observed that when the element is a paragraph the font size is set to small, and to x-large for other elements. In the browser it looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffknk4e8uc37vzngwbz9z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffknk4e8uc37vzngwbz9z.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Custom directives in Angular provide a powerful way to extend and enhance the functionality of your applications. They allow you to create reusable components, manipulate the DOM, and encapsulate complex behaviors. By leveraging custom directives, you can achieve code reusability, maintainability, and improved developer productivity.&lt;/p&gt;

&lt;p&gt;Throughout this blog, we explored the fundamentals of custom directives. We discussed their syntax, use cases, and how to create and use them in Angular applications. Custom directives offer a level of flexibility and control that enables you to create unique and tailored experiences for your users. They allow you to implement custom behaviors, apply fine-grained styling, and interact with the DOM in ways that are not possible with built-in Angular directives alone. By mastering the art of custom directives, you can unlock the full potential of Angular and take your application development to the next level. Whether you're building complex UI components, adding interactive features, or improving accessibility, custom directives empower you to create truly dynamic and customizable Angular applications.&lt;/p&gt;

&lt;p&gt;Happy exploring!&lt;/p&gt;

</description>
      <category>angular</category>
      <category>learning</category>
      <category>html</category>
      <category>development</category>
    </item>
  </channel>
</rss>
