<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Bhavya Thakkar</title>
    <description>The latest articles on DEV Community by Bhavya Thakkar (@bhavya_thakkar_203f9c2f66).</description>
    <link>https://dev.to/bhavya_thakkar_203f9c2f66</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3391739%2F283aa90d-4fd1-432f-92b5-1462c7108d37.jpg</url>
      <title>DEV Community: Bhavya Thakkar</title>
      <link>https://dev.to/bhavya_thakkar_203f9c2f66</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bhavya_thakkar_203f9c2f66"/>
    <language>en</language>
    <item>
      <title>From Recursion to Backtracking</title>
      <dc:creator>Bhavya Thakkar</dc:creator>
      <pubDate>Sun, 01 Mar 2026 11:23:11 +0000</pubDate>
      <link>https://dev.to/bhavya_thakkar_203f9c2f66/from-recursion-to-backtracking-4ei2</link>
      <guid>https://dev.to/bhavya_thakkar_203f9c2f66/from-recursion-to-backtracking-4ei2</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;You already know recursion. Here's the one idea that unlocks everything else.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  1 — The Gap Nobody Talks About
&lt;/h2&gt;

&lt;p&gt;You understand recursion. You've done tree traversals, written Fibonacci, maybe even memoized things. Recursive functions feel natural — break the problem down, solve the smaller version, build back up.&lt;/p&gt;

&lt;p&gt;But then someone says "generate all permutations" or "find all valid partitions" and something shifts. You know the shape of the answer vaguely. You sit down to write it. And the code doesn't come.&lt;/p&gt;

&lt;p&gt;It's not that you're missing a data structure or a formula. The gap is conceptual — and it's smaller than you think.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recursion computes. Backtracking explores.&lt;/strong&gt; Pure recursion commits to a path and returns a value. Backtracking explores a path, and if it doesn't work out, &lt;em&gt;undoes&lt;/em&gt; the choice and tries another. That undo step is the entire difference. Once you internalize it, the code starts writing itself.&lt;/p&gt;

&lt;p&gt;To make the idea clearer: imagine you're in a maze. The goal isn't to escape, it's to find a cat stranded somewhere inside. This subtle shift in goals probably made you think "cool, I'll just explore all the paths then." That's the crux of backtracking vs recursion. Now extend the example: what if you had to give me the exact sequence of turns for &lt;em&gt;every possible path&lt;/em&gt; to the cat?&lt;/p&gt;




&lt;h2&gt;
  
  
  2 — How to Know You're Looking at a Backtracking Problem
&lt;/h2&gt;

&lt;p&gt;Before writing a single line of code, you need to recognize what kind of problem you're facing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Backtracking signals:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The problem asks for &lt;em&gt;all&lt;/em&gt; solutions, not just one&lt;/li&gt;
&lt;li&gt;You're building something incrementally and need to abandon partial builds&lt;/li&gt;
&lt;li&gt;There's a constraint that can fail mid-way through construction&lt;/li&gt;
&lt;li&gt;Keywords: generate all, find all permutations, all valid combinations, all ways to partition&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Pure recursion signals:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You need a single answer&lt;/li&gt;
&lt;li&gt;Clear top-down decomposition — the answer is defined in terms of the same function on a smaller input&lt;/li&gt;
&lt;li&gt;Nothing is shared between branches, no exploration of alternatives&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Key question to ask yourself:&lt;/strong&gt; Am I computing a value — or am I exploring a space of possibilities? If you're exploring, you almost certainly need backtracking.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Remember the maze — if you just needed the cat's coordinates, you'd stop at the first path that works. But returning &lt;em&gt;all possible paths&lt;/em&gt; means exploring every route — and when a path leads nowhere, you backtrack to the last crossroads and try another branch.&lt;/p&gt;




&lt;h2&gt;
  
  
  3 — The Anatomy of Backtracking
&lt;/h2&gt;

&lt;p&gt;Every backtracking solution follows the same skeleton:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function backtrack(state, choices):
  // base case: we've built a complete valid state
  if state is complete:
    collect or return state

  for each choice in choices:
    make choice              // modify state
    backtrack(updated state, remaining choices)
    undo choice              // ← restore state — this is backtracking
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three steps repeat for every choice: &lt;strong&gt;make, recurse, undo&lt;/strong&gt;. The undo step is what separates backtracking from plain recursion. Without it, you aren't exploring alternatives — you're contaminating future branches with the choices of past ones.&lt;/p&gt;

&lt;p&gt;What "make" and "undo" look like concretely depends on the problem — adding and removing from a list, marking and unmarking a cell — but the pattern is always the same.&lt;/p&gt;




&lt;h2&gt;
  
  
  4 — Recursion vs Backtracking — Side by Side
&lt;/h2&gt;

&lt;p&gt;The best way to understand backtracking isn't to study it in isolation — it's to watch pure recursion &lt;em&gt;fail&lt;/em&gt; at a problem that needs it, then see exactly how backtracking fixes the breakage.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example 1 — Pure recursion succeeds: Count all subsets
&lt;/h3&gt;

&lt;p&gt;How many subsets does &lt;code&gt;[1, 2, 3]&lt;/code&gt; have? For each element, we either include it or we don't:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function countSubsets(index):
  if index == length:
    return 1

  return countSubsets(index + 1)   // exclude
       + countSubsets(index + 1)   // include
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzp39svstqucj9b35rvae.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzp39svstqucj9b35rvae.png" alt=" " width="800" height="436"&gt;&lt;/a&gt;&lt;br&gt;
Notice what we're &lt;em&gt;not&lt;/em&gt; doing. We're not building anything. No shared state between calls. Each recursive call lives in its own world and returns a number upward. Both branches are completely independent — they never interfere with each other.&lt;/p&gt;

&lt;p&gt;When recursion works cleanly, it's usually because each call is stateless — it receives inputs, does work, returns a value. Nothing is shared.&lt;/p&gt;
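&lt;p&gt;The same counting function as runnable Python (Python standing in for the pseudocode): notice that nothing is mutated anywhere, so there is nothing to undo:&lt;/p&gt;

```python
def count_subsets(nums):
    def count(index):
        if index == len(nums):   # every element has been decided
            return 1
        # exclude nums[index], then include it; the branches never interact
        return count(index + 1) + count(index + 1)
    return count(0)
```

For &lt;code&gt;[1, 2, 3]&lt;/code&gt; this returns 8, one for each of the 2^3 subsets.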
&lt;h3&gt;
  
  
  Example 2 — Pure recursion breaks: Generate all subsets
&lt;/h3&gt;

&lt;p&gt;Now change one thing. Instead of counting, collect the actual subsets:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;current = []
result  = []

function findSubsets(index):
  if index == length:
    result.add(current)       // feels right. it isn't.
    return

  current.add(nums[index])
  findSubsets(index + 1)      // "include" branch

  findSubsets(index + 1)      // "exclude" branch — but current still has nums[index]!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's trace through what actually happens for &lt;code&gt;[1, 2]&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;findSubsets(0)
  → add 1.  current = [1]
  → findSubsets(1)               // include branch
      → add 2.  current = [1, 2]
      → base case: result = [[1, 2]]          ✓
      → findSubsets(2)           // "exclude 2" branch
          → base case: current is STILL [1, 2]
            result = [[1, 2], [1, 2]]         ✗
  → findSubsets(1)               // "exclude 1" branch
      → current is STILL [1, 2] from before...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg925cufcgd87gnyx8j11.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg925cufcgd87gnyx8j11.png" alt=" " width="800" height="436"&gt;&lt;/a&gt;&lt;br&gt;
The "exclude" branch never actually excluded anything. It inherited the dirty state left behind by the "include" branch. Both branches share the same &lt;code&gt;current&lt;/code&gt;, and there's no mechanism to clean it up between them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The diagnosis:&lt;/strong&gt; The recursion structure is correct. The problem is shared mutable state. Pure recursion has no mechanism to undo that.&lt;/p&gt;

&lt;p&gt;Think of it this way — pure recursion is like each call working on its own notepad. The moment you introduce a shared structure like &lt;code&gt;current&lt;/code&gt;, you've moved to a shared whiteboard. Every call reads and writes to the same surface. Without a rule that says &lt;em&gt;"erase before the next person writes"&lt;/em&gt;, it's chaos. Backtracking is that rule.&lt;/p&gt;
&lt;h3&gt;
  
  
  The fix — backtracking enters
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function backtrack(index, current):
  if index == length:
    result.add(copy of current)   // snapshot, not reference
    return

  // include branch
  current.add(nums[index])
  backtrack(index + 1, current)
  current.remove(last)            // ← undo — clean the whiteboard

  // exclude branch — current is clean now
  backtrack(index + 1, current)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;One line — &lt;code&gt;current.remove(last)&lt;/code&gt; — is the entire difference. It erases the contamination before the exclude branch runs. Both branches now see a clean state.&lt;/p&gt;

&lt;p&gt;Notice also that we collect a &lt;em&gt;copy&lt;/em&gt; of &lt;code&gt;current&lt;/code&gt;, not &lt;code&gt;current&lt;/code&gt; itself. If you're wondering why — hold that thought. It's one of the most common bugs in the pitfalls section.&lt;/p&gt;
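&lt;p&gt;Here is the fixed version as runnable Python. The two crucial lines are the pop (the undo) and the slice copy (the snapshot):&lt;/p&gt;

```python
def find_subsets(nums):
    result = []
    current = []

    def backtrack(index):
        if index == len(nums):
            result.append(current[:])   # snapshot, not a reference
            return
        current.append(nums[index])     # include branch
        backtrack(index + 1)
        current.pop()                   # undo: clean the whiteboard
        backtrack(index + 1)            # exclude branch now sees a clean state

    backtrack(0)
    return result
```

For &lt;code&gt;[1, 2]&lt;/code&gt; this yields all four subsets: &lt;code&gt;[[1, 2], [1], [2], []]&lt;/code&gt;.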


&lt;h2&gt;
  
  
  5 — The Two Flavors
&lt;/h2&gt;

&lt;p&gt;Not all backtracking problems have the same shape. Once you've recognised that a problem needs backtracking, the next question is: what kind?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Picking from a pool&lt;/strong&gt; (permutations / combinations) — you have a set of items and you're choosing from them. Tracking what's been used is the key mechanic — visited array or a start index that prevents reuse.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Splitting a structure&lt;/strong&gt; (slicing / partitioning) — you have one string or array and you're cutting it at different points. A moving &lt;code&gt;start&lt;/code&gt; index is the key mechanic — you never "undo" the pointer, you just try different cut points from the same position.&lt;/p&gt;

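&lt;p&gt;As a sketch of the structural difference, here are the two flavors side by side in Python (standing in for the pseudocode used elsewhere): one picks items from a pool, the other cuts a single sequence:&lt;/p&gt;

```python
# Flavor 1: picking from a pool. Combinations of size k; the start
# index guarantees each item is considered at most once.
def combinations(nums, k):
    result, current = [], []

    def backtrack(start):
        if len(current) == k:
            result.append(current[:])
            return
        for i in range(start, len(nums)):
            current.append(nums[i])   # make: pick this item
            backtrack(i + 1)          # recurse past it
            current.pop()             # undo

    backtrack(0)
    return result

# Flavor 2: splitting a structure. All ways to cut a string into pieces;
# the start index only ever moves forward to the next cut point.
def partitions(s):
    result, current = [], []

    def backtrack(start):
        if start == len(s):
            result.append(current[:])
            return
        for end in range(start + 1, len(s) + 1):
            current.append(s[start:end])   # make: take this slice
            backtrack(end)                 # recurse from the cut point
            current.pop()                  # undo

    backtrack(0)
    return result
```

Both follow the same make/recurse/undo skeleton; only the bookkeeping differs.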


&lt;h2&gt;
  
  
  6 — Common Mistakes and Pitfalls
&lt;/h2&gt;

&lt;p&gt;These are the bugs that appear even when you understand backtracking conceptually. Each one is subtle enough to survive a quick read of the code.&lt;/p&gt;
&lt;h3&gt;
  
  
  1 — Not restoring state (the bleed bug)
&lt;/h3&gt;

&lt;p&gt;You add to &lt;code&gt;current&lt;/code&gt; inside the loop but forget to remove afterward. The state from one branch bleeds into the next. Every subsequent branch starts dirty.&lt;/p&gt;

&lt;p&gt;The fix: every &lt;em&gt;make&lt;/em&gt; step must have a matching &lt;em&gt;undo&lt;/em&gt; step — always, unconditionally.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for each choice:
  current.add(choice)
  backtrack(...)
  current.remove(last)   // never skip this
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2 — The copy trap
&lt;/h3&gt;

&lt;p&gt;You collect &lt;code&gt;current&lt;/code&gt; directly into &lt;code&gt;result&lt;/code&gt; instead of a snapshot. Since &lt;code&gt;current&lt;/code&gt; keeps mutating, every entry in &lt;code&gt;result&lt;/code&gt; ends up pointing to the same final state — a list full of identical, wrong results.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;result.add(current)           // all entries become identical
result.add(copy of current)   // correct — snapshot at this moment
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
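&lt;p&gt;A few lines of Python demonstrate the trap: both lists receive "the same" current, but only the snapshot survives correctly:&lt;/p&gt;

```python
result_wrong, result_right = [], []
current = []

for x in [1, 2]:
    current.append(x)
    result_wrong.append(current)      # stores a reference to the live list
    result_right.append(current[:])   # stores a snapshot taken right now

# current finished as [1, 2], so result_wrong is [[1, 2], [1, 2]],
# while result_right correctly holds [[1], [1, 2]]
```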



&lt;h3&gt;
  
  
  3 — Confusing permutation structure with partition structure
&lt;/h3&gt;

&lt;p&gt;Permutation problems track &lt;em&gt;which items have been used&lt;/em&gt; (visited array or used set). Partition problems track &lt;em&gt;where you are in the structure&lt;/em&gt; (a start index). Using the wrong mechanic gives you either wrong results or runaway recursion.&lt;/p&gt;

&lt;p&gt;Ask yourself: am I picking from a pool, or am I cutting a single sequence?&lt;/p&gt;

&lt;h3&gt;
  
  
  4 — Forgetting the base case is a collection step
&lt;/h3&gt;

&lt;p&gt;In problems that ask for all solutions, the base case isn't just a termination signal — it's where you capture the result. Returning early or returning a boolean and forgetting to collect means valid solutions silently disappear.&lt;/p&gt;

&lt;h3&gt;
  
  
  5 — Off-by-one in slicing
&lt;/h3&gt;

&lt;p&gt;In partitioning problems, the start index or slice boundary is off by one. You either process an empty prefix (producing invalid empty partitions) or skip the last valid cut point. Trace through the first and last iterations by hand before trusting the loop bounds.&lt;/p&gt;
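&lt;p&gt;A quick Python check of the correct bounds: with a start index, the cut point should run from start + 1 through len(s) inclusive, so you never take an empty prefix and never miss the final cut:&lt;/p&gt;

```python
s = "abcd"
start = 1   # suppose the recursion is currently positioned here

# range(start + 1, len(s) + 1): the prefix is never empty (end is at
# least start + 1), and the full remaining suffix is included too
prefixes = [s[start:end] for end in range(start + 1, len(s) + 1)]
# prefixes == ["b", "bc", "bcd"]
```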




&lt;h2&gt;
  
  
  7 — A Mental Checklist
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Does the problem ask for all solutions, not just one? → likely backtracking&lt;/li&gt;
&lt;li&gt;Is there an undo step? → backtracking. No undo → pure recursion.&lt;/li&gt;
&lt;li&gt;Am I picking from a pool or cutting a structure? → determines the flavor&lt;/li&gt;
&lt;li&gt;Am I collecting all results or stopping at first? → determines base case behavior&lt;/li&gt;
&lt;li&gt;Did I copy before collecting? → prevents the copy trap&lt;/li&gt;
&lt;li&gt;Does every make step have a matching undo? → prevents the bleed bug&lt;/li&gt;
&lt;/ul&gt;




</description>
      <category>algorithms</category>
      <category>tutorial</category>
      <category>beginners</category>
      <category>computerscience</category>
    </item>
    <item>
      <title>Kafka Self-Healing Cluster:</title>
      <dc:creator>Bhavya Thakkar</dc:creator>
      <pubDate>Sun, 05 Oct 2025 08:57:18 +0000</pubDate>
      <link>https://dev.to/bhavya_thakkar_203f9c2f66/kafka-self-healing-cluster-p4d</link>
      <guid>https://dev.to/bhavya_thakkar_203f9c2f66/kafka-self-healing-cluster-p4d</guid>
      <description>&lt;p&gt;Hey there, welcome back to the 2nd episode of the monthly misadventures of a regular dev. Thanks for sticking around, especially after how rough the last one was. Let’s get straight to the point, have you ever had your organization look at the Confluent or Kinesis MoM bills and think, “Why can’t we just self-host this again?” You tried explaining that scaling and meeting downtime SLAs would be tough, especially with a lean team, but those warnings fell on deaf ears. Now you’re facing the consequences, those infrequent outages with your self-hosted Kafka servers.Those unwelcomed weekend outages. Don’t worry, I’ve been there too. In this post, I’ll share how I tackled this challenge. The implementation details? They’re reserved for an upcoming blog.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;What’s the issue:&lt;/strong&gt;&lt;br&gt;
Our self-hosted Kafka cluster has experienced intermittent downtimes caused by corrupted log files. Although these failures are infrequent, they require manual developer intervention to resolve—typically SSHing into the affected node, purging corrupted data, and restarting the broker.&lt;br&gt;
&lt;strong&gt;Proposed Solution:&lt;/strong&gt;&lt;br&gt;
We plan to add a sidecar to each Kafka node that exposes an API interface on the node for health checks, purging, starting, stopping, and restarting brokers. This sidecar abstracts the complexity of the underlying infrastructure and Kafka internals, giving developers a simple way to interact with each node.&lt;br&gt;
The second component is a centralized controller, hosted separately. It polls the health status of individual nodes every 5 seconds. If a broker becomes unresponsive, the controller will trigger an automated purge and restart sequence, retrying up to three times. If the broker remains unavailable, a cluster-wide purge and restart will be attempted. Should that fail, an infrastructure operations alert will be sent, tagging the tech team to manually intervene.&lt;br&gt;
Although such failures are rare (in over 1.5 years of managing our Kafka cluster, purging and restarting has always resolved availability issues), we included these escalation steps to prepare for unexpected scenarios. Besides polling, the controller also offers an API to manage cluster-wide operations with the same commands available to the sidecars.&lt;br&gt;
&lt;strong&gt;Limitations:&lt;/strong&gt;&lt;br&gt;
This solution effectively monitors and maintains broker health but currently lacks dynamic load management. We run a fixed number of brokers and controllers and do not support automatically adding or removing nodes based on load. However, in our production environment, we remain well within available compute capacity.&lt;br&gt;
Another limitation is visibility. While the system operates reliably, it lacks monitoring tools such as connection failure metrics, utilization stats, or a dashboard; interactions are limited to the API interface, with no UI available. As they say, if you can’t validate that it’s running smoothly, it isn’t running smoothly enough.&lt;br&gt;
&lt;strong&gt;Future Scope:&lt;/strong&gt;&lt;br&gt;
Future improvements will focus on adding a visibility dashboard with metrics, a user interface for cluster interaction, and support for dynamic cluster resizing via API and UI.&lt;/p&gt;
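&lt;p&gt;To make the escalation flow concrete, here is a minimal Python sketch of the controller’s per-node logic. The function names (check_health, purge_and_restart, cluster_purge_and_restart, alert_ops) are illustrative stand-ins, not the actual sidecar or controller API:&lt;/p&gt;

```python
# Hypothetical sketch of the escalation sequence described above.
# All four callables are stand-ins injected by the caller.
def heal(node, check_health, purge_and_restart,
         cluster_purge_and_restart, alert_ops, max_retries=3):
    if check_health(node):
        return "healthy"
    for _ in range(max_retries):      # per-node purge + restart, up to 3 times
        purge_and_restart(node)
        if check_health(node):
            return "recovered"
    cluster_purge_and_restart()       # escalate: cluster-wide purge + restart
    if check_health(node):
        return "recovered-cluster-wide"
    alert_ops(node)                   # final escalation: page the tech team
    return "alerted"
```

&lt;p&gt;In the real controller this logic would run inside the 5-second polling loop.&lt;/p&gt;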

</description>
      <category>kafka</category>
      <category>devops</category>
      <category>infrastructureascode</category>
    </item>
    <item>
      <title>The Dashboard Dilemma</title>
      <dc:creator>Bhavya Thakkar</dc:creator>
      <pubDate>Fri, 05 Sep 2025 13:00:21 +0000</pubDate>
      <link>https://dev.to/bhavya_thakkar_203f9c2f66/the-dashboard-dilemma-1277</link>
      <guid>https://dev.to/bhavya_thakkar_203f9c2f66/the-dashboard-dilemma-1277</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
Sweaty palms, jittery hands, typing away at midnight to ship a crucial feature to improve team productivity, we've all been there. This could be avoided by building a reliable and easy to use dashboard that gives insights on KPIs, makes management smoother, makes any strategic pivots data-driven, and  iterations faster. The problem lies in what we want to track, how frequently, how stringently, and how do we present it in a quick yet reliable fashion. I was in the same conundrum. &lt;br&gt;
During August last year, after basic infrastructure and backend setup was done for our in-house ERP, I was tasked with any upcoming improvements and feature extensions. &lt;br&gt;
Imagine this, you are in the spotlight of the financial year-end party all thanks to one feature that quadrupled the  sales team's efficiency, I was stoked-that's an understatement; I wanted this every bit of it, even the possibility of failing miserably.&lt;/p&gt;

&lt;p&gt;But little did I know that one of those features was going to be an organization-level dashboard involving complex KPIs, like call efficiency and call times, all of it preferably real-time. So this blog is about how I went about it: the trade-offs I chose, what I could have done better, what I actually got right, and what I didn't.&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Freshness SLA &amp;amp; Its Impact&lt;/strong&gt;&lt;br&gt;
So what exactly is a freshness SLA, and why does it matter so much? Let's take two examples: if you were working at an HFT (high-frequency trading) firm and the prices you execute your buy/sell orders on are even a second old, it's probably going to cost the company millions of dollars; at the other end of the spectrum, if you were working on a government project verifying a user's documents for vehicle registration, you would be fine even if the system took, let's say, an hour.&lt;br&gt;
What changed between these two examples is the required frequency of data updates/syncs. That's the freshness SLA, and freshness is simply how old or new the data you possess is.&lt;br&gt;
The one thing about any business and any data-driven decision is, if you ask people what they want, they will most definitely answer with something along the lines of&lt;br&gt;
"The most recent accurate data, as quickly as possible"&lt;br&gt;
The problem with these statements is that I don't have anything to latch on to, nothing to map my user expectations against, and nothing to derive my KPI from. Luckily I was smart enough to ask the right question:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Is it okay if the data is 15 mins stale?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The answer to that question most definitely won't be a simple yes or no, and that's the beauty of it: it will give you insight into what you can get away with. It will be more like&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;15 mins is too old, we are willing to accept 5 mins stale data&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;or&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;15 mins is fine but no more delays than that&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;or a&lt;br&gt;
rigid real-time requirement, but then the performance expectations would be lowered. I was working with a max staleness of 15 minutes and a sub-3-second response time (any slower would be bad UX), preferably with sub-1-second responses.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fofgljnw90tzlvbruvurx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fofgljnw90tzlvbruvurx.png" alt="figure 1" width="800" height="769"&gt;&lt;/a&gt;&lt;em&gt;figure 1&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This gives me quite a bit of wiggle room, even though the data shown in figure 1 is quite complex to begin with. So let's break it down: each user is assigned a designation and a team, both of which can change with time due to promotions and internal movements. We need to extract KPIs on customer state, derived from call_history, and on whether the customer is closed, based on the invested flag in the customer table. All simple and straightforward so far, but we need to do it for each user in all the teams, over the specified time period, and aggregate under the correct hierarchy, which also includes contributions by past team members.&lt;br&gt;
(user here refers to the stakeholders of our ERP system: members of the sales team)&lt;br&gt;
Final data would look something like this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"child"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"child"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"child"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"warm_calls"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"dialled_calls"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"call_efficiency"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"closures"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"sip_setup"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"designation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"RM"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"warm_calls"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"dialled_calls"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"call_efficiency"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"closures"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"sip_setup"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"designation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"RM"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"warm_calls"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"dialled_calls"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"call_efficiency"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"closures"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"sip_setup"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"designation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"team-lead"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"warm_calls"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"dialled_calls"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"call_efficiency"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"closures"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"sip_setup"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"designation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"manager"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"warm_calls"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"dialled_calls"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"call_efficiency"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"closures"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"sip_setup"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"designation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"avp"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Sample data for one team consisting of two sales reps&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;We mainly considered three approaches. Like any lean startup on a time crunch, our first idea was caching. The problem was that caching the whole payload (as shown in the sample data) would be expensive, largely because of the recursive hierarchy we need to maintain for team-based KPIs. We would also need to invalidate the cache, and on a cache miss the DB query is still inefficient: we have neither optimized the query nor reduced the data it processes, so there is no real improvement on the first hit. This didn't sit right with me. Had I done the easy thing, set the TTL to 15 minutes, and called it a day, it would have been a band-aid fix waiting to explode in my face. Caching works for frequently accessed time frames like daily or weekly data, but the moment we switch to less frequently accessed ranges like quarterly, half-yearly, or yearly, we run into frequent cache misses.&lt;/p&gt;




&lt;p&gt;The second approach was batch processing to precompute this data. Although much better than the quasi-improvement of caching, it had its own challenges: complex ETL pipelines, separate infrastructure, and a lot of initial configuration, and it still isn't a perfect fit for a real-time requirement. If the setup is anything short of perfect it introduces data inconsistencies, and any minor schema change would force us through the whole process again, which is not ideal for development velocity.&lt;/p&gt;




&lt;p&gt;The third approach was to stream events from our mobile ERP app directly into the web dashboards: fast and reliable, but it requires initial infrastructure in the form of Apache Kafka. That can be circumvented with managed services like Confluent or Kinesis, which are close to plug and play; we would only have to worry about producing and consuming events. The problem is that the data we required was too complex, mandating heavy aggregations and joins. While the data points sent by the mobile client could be used to calculate KPIs, processing them in an event-driven architecture would mean computing on every incoming event, making refreshes much slower than the required 3 seconds.&lt;br&gt;
So what I ended up doing was combining these three to find a sweet spot: a streamlined approach similar to Redis, with the cost and accuracy of batch processing, and the ease of setup of streaming.&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;The Incremental Refresh Approach&lt;/strong&gt;&lt;br&gt;
So I lied: I didn't use all three. I held back caching, because I couldn't come to terms with a pseudo real-time dashboard. I used streaming, specifically a queue, Kafka to be exact, to ingest real-time events into the DB. These events included metrics like call times, leads closed, and leads transitioning between funnel states, so my database always has fresh data. What about the aggregation problem, you ask? The key realization was that all the data required for a real-time organizational dashboard mostly resides in the backend server. I say mostly because some transactional data arrives asynchronously, including but not limited to payment_status and fulfillment_timestamp. This doesn't affect the problem statement much, because such dashboards work under an unsaid assumption: whatever data is present within the system, you crunch the numbers and hand them to the business teams. This context setting was required to appreciate a rather simple solution: materialized views. All your aggregations can be pre-computed and stored as a materialized view.&lt;/p&gt;
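&lt;p&gt;As a minimal sketch of such a view (the call_events table, its columns, and the event types here are hypothetical, not our actual schema):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Hypothetical schema: call_events(rep_id, event_type, created_at).
-- Pre-aggregates per-rep KPIs so the dashboard reads one narrow relation.
CREATE MATERIALIZED VIEW rep_kpis AS
SELECT
    rep_id,
    COUNT(*) FILTER (WHERE event_type = 'dialled')   AS dialled_calls,
    COUNT(*) FILTER (WHERE event_type = 'warm_call') AS warm_calls,
    COUNT(*) FILTER (WHERE event_type = 'closure')   AS closures
FROM call_events
WHERE created_at &lt; CURRENT_DATE  -- historical slice only; today is computed live
GROUP BY rep_id;

-- A unique index also allows REFRESH MATERIALIZED VIEW CONCURRENTLY later.
CREATE UNIQUE INDEX rep_kpis_rep_id_idx ON rep_kpis (rep_id);
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;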

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Froonqo399b8o9kkwg01c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Froonqo399b8o9kkwg01c.png" alt=" " width="800" height="665"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The problem with a materialized view is that it doesn't refresh on updates. An intuitive fix is to refresh the view from hooks, but that only makes sense if you serve all your dashboard data through views. Let's look at the problem from a broader perspective: does historical data change? In my case, no. So can I keep everything up to yesterday in the view, refresh it through a cron, and compute only today's slice live, irrespective of the selected time frame? Yes. The aggregation then touches significantly less data, and the queries can be optimized to give sub-second results.&lt;br&gt;
Now for another downside of views: a plain REFRESH MATERIALIZED VIEW locks the view against reads while it runs (the CONCURRENTLY variant avoids this, at the cost of requiring a unique index), so we want to run the refresh from a cron during phases with lower load. I hear you: didn't I frown upon additional infra? That's where a nifty little PostgreSQL extension saves the day, pg_cron to be specific. You configure your usual cron, but inside the DB in a separate schema: you provide the refresh command as the command for the cron job, along with its schedule and whether it should be active.&lt;/p&gt;
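&lt;p&gt;The read path can then be sketched as one query (table, view, and column names here are hypothetical): the materialized view serves everything up to yesterday, and a live aggregation covers only today's rows.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Historical KPIs come from the pre-computed view; today's slice is
-- aggregated live over a small, index-friendly range of rows.
SELECT rep_id, SUM(dialled_calls) AS dialled_calls, SUM(closures) AS closures
FROM (
    SELECT rep_id, dialled_calls, closures
    FROM rep_kpis                        -- refreshed nightly via pg_cron
    UNION ALL
    SELECT
        rep_id,
        COUNT(*) FILTER (WHERE event_type = 'dialled'),
        COUNT(*) FILTER (WHERE event_type = 'closure')
    FROM call_events
    WHERE created_at &gt;= CURRENT_DATE     -- today only
    GROUP BY rep_id
) combined
GROUP BY rep_id;
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;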



&lt;p&gt;This is a clean solution: no additional infra overhead, no compromise on the real-time SLA, minimal cost increase from the views, if any, and best of all it's elegant and easy to manage. The next section goes over how to set up pg_cron for this use case. I refrained from going in-depth on creating materialized views; this blog is already running long and setting them up is straightforward. pg_cron does require some work if you don't self-host your servers and use a cloud provider instead. We use AWS RDS, so the following is in that context, but I will attach references on how to achieve the same on the other two major cloud providers; once pg_cron has been added as an extension in Postgres, the process after that is identical.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;pg_cron Setup on RDS&lt;/strong&gt;&lt;br&gt;
pg_cron is supported on RDS for PostgreSQL versions 12.5 and above.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Modify the Parameter Group&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to the AWS RDS Console.&lt;/li&gt;
&lt;li&gt;Navigate to Parameter Groups in the sidebar.&lt;/li&gt;
&lt;li&gt;If you don't have a custom parameter group, create one by copying the default group associated with your DB instance.&lt;/li&gt;
&lt;li&gt;Edit the custom parameter group and find the parameter shared_preload_libraries.&lt;/li&gt;
&lt;li&gt;Add pg_cron to this parameter's value (append it if other libraries are already listed).&lt;/li&gt;
&lt;li&gt;Save the changes.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;2. Apply the Parameter Group to Your DB Instance&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Modify your RDS instance to use the new/modified parameter group.&lt;/li&gt;
&lt;li&gt;A database restart is required for the change to take effect, so restart your instance.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;3. Create the pg_cron Extension in the Database&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Connect to your PostgreSQL instance as a user with rds_superuser privileges (default user postgres usually has these).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;EXTENSION&lt;/span&gt; &lt;span class="n"&gt;pg_cron&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This enables the pg_cron background worker and creates the necessary objects, in the postgres database by default.&lt;br&gt;
&lt;strong&gt;Schedule Jobs&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;cron&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;schedule&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'job_name'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'cron_schedule'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'command_to_run'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;cron&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;schedule&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'refresh_materialized_view'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'*/15 * * * *'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'REFRESH MATERIALIZED VIEW my_mat_view'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
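&lt;p&gt;In our case the job is closer to a nightly refresh of the historical view (view name here is hypothetical); given a unique index on the view, the CONCURRENTLY variant keeps it readable while the refresh runs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Refresh the historical slice shortly after midnight, a low-load phase.
SELECT cron.schedule(
  'refresh_rep_kpis',
  '30 0 * * *',
  'REFRESH MATERIALIZED VIEW CONCURRENTLY rep_kpis'
);
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;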



&lt;p&gt;&lt;strong&gt;Managing Jobs&lt;/strong&gt;&lt;br&gt;
To unschedule jobs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;cron&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;unschedule&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;job_id&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;  
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;cron&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;unschedule&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'job_name'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Job details and run history can be viewed in the cron.job_run_details table, like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;cron&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;job_run_details&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;jobid&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;your_job_id&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
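&lt;p&gt;Note that cron.job_run_details is not cleaned up automatically, so the pg_cron docs suggest purging it periodically; that cleanup can itself be a scheduled job (the job name and retention window below are arbitrary):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Delete week-old run history every day at noon.
SELECT cron.schedule(
  'purge_cron_history',
  '0 12 * * *',
  $$DELETE FROM cron.job_run_details WHERE end_time &lt; now() - interval '7 days'$$
);
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;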



</description>
      <category>backend</category>
      <category>database</category>
      <category>postgres</category>
    </item>
  </channel>
</rss>
