DEV Community

quietpulse

Posted on • Originally published at quietpulse.xyz

Firebase Scheduled Functions Monitoring: How to Catch Missed Runs Before They Break Production

Firebase scheduled functions monitoring matters because scheduled backend work is easy to forget until it quietly stops doing its job.

A Cloud Function might clean old records every night, sync subscription status from a payment provider, send reminder notifications, refresh search indexes, or export analytics data. When that function runs correctly, nobody thinks about it. When it stops running, the app may still look healthy from the outside.

The website is up. The API responds. Users can log in.

But the scheduled work is missing.

That is the dangerous part: Firebase scheduled functions often fail in places that normal uptime monitoring cannot see.

The problem

A Firebase scheduled function is usually created with Cloud Scheduler behind the scenes. Depending on the generation and setup, it may be triggered through Pub/Sub or the newer scheduler integration.

A typical job might look like this:

const { onSchedule } = require("firebase-functions/v2/scheduler");

// Helper with the actual cleanup logic (distinct name from the export,
// so the handler does not reference an undefined binding)
async function deleteExpiredSessions() {
  // ...query and delete expired session documents...
}

exports.cleanupExpiredSessions = onSchedule("every 24 hours", async () => {
  await deleteExpiredSessions();
});

It looks simple. Once deployed, you expect it to run forever.

But production systems are not that clean.

Scheduled functions can stop working because of deployment mistakes, billing issues, IAM changes, runtime errors, dependency failures, quota problems, region mismatches, or configuration drift. Sometimes the function does run, but exits early before doing the important work. Sometimes it starts failing every night and nobody notices because the rest of the app keeps responding normally.

The core problem is this:

Your app can be up while your scheduled work is broken.

That means uptime checks alone are not enough.

Why it happens

Firebase scheduled functions rely on several moving parts:

  • Cloud Scheduler
  • Cloud Functions
  • Pub/Sub or scheduler triggers
  • IAM permissions
  • runtime configuration
  • external APIs
  • database access
  • billing and quota limits

If any of those pieces changes, your scheduled task can fail.

Common causes include:

  • the function was renamed or removed during deployment
  • the schedule exists in one region while the function is deployed in another
  • an environment variable is missing in production
  • the service account lost permission to invoke the function
  • Firestore or Realtime Database rules changed
  • a third-party API started returning errors
  • the job times out on larger data sets
  • Firebase billing or Google Cloud quotas block execution
  • logs are noisy enough that nobody sees the failure

There is also a more subtle failure mode: partial success.

For example, a scheduled function might begin processing users, update the first 500 records, hit an exception, and stop. From a high level, you may see that the function ran. But the job did not actually complete the work it was responsible for.

That is why Firebase scheduled functions monitoring should focus on completion, not just invocation.
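The partial-success failure mode can be sketched in plain Node.js. `processUsers` and the `corrupt` flag are hypothetical stand-ins for real batch work against Firestore or an external API:

```javascript
// Sketch of partial success: the job starts, updates some records, then one
// bad record aborts the rest of the run.
let updated = 0;
let failure = null;

function processUsers(users) {
  for (const user of users) {
    if (user.corrupt) {
      throw new Error(`Failed to update user ${user.id}`);
    }
    updated += 1; // real code would write the record here
  }
}

const users = [{ id: 1 }, { id: 2 }, { id: 3, corrupt: true }, { id: 4 }];

try {
  processUsers(users);
} catch (err) {
  failure = err.message;
}

console.log(`updated ${updated} of ${users.length} users; error: ${failure}`);
// Invocation logs show that the function "ran", yet only 2 of 4 users
// were actually updated.
```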

Why it's dangerous

Missed scheduled functions can create slow, silent damage.

A broken cleanup job might leave expired sessions, temporary files, or stale documents in your database. A missed billing sync might fail to downgrade unpaid accounts. A failed notification job might leave users waiting for reminders that never arrive. A broken analytics export might create missing reports for several days before anyone notices.

These failures are dangerous because they often do not produce an obvious incident right away.

Instead, they accumulate.

Examples:

  • trial users are not converted or expired correctly
  • stale Firestore documents keep growing storage costs
  • email or push notifications stop being sent
  • cache refresh jobs leave users seeing old data
  • daily reports are missing
  • webhook retry queues are never drained
  • database maintenance tasks silently stop
  • subscription state becomes inconsistent

By the time someone notices, the fix is no longer just “restart the job.”

You may need to backfill data, repair inconsistent records, explain missing notifications, or manually replay failed work.

That is why waiting for user complaints is a bad monitoring strategy for scheduled functions.

How to detect it

The simplest reliable pattern is heartbeat monitoring.

Instead of only checking whether the app is online, you check whether the scheduled function completed when expected.

The idea is straightforward:

  1. Create a heartbeat check for the job.
  2. Give the job a deadline, such as “must complete every 24 hours.”
  3. At the end of the function, send a ping to the heartbeat URL.
  4. If the ping does not arrive on time, alert someone.

This detects the thing you actually care about: whether the scheduled function finished successfully.

For Firebase scheduled functions monitoring, completion-based pings are usually better than start-based pings. A ping at the beginning only proves the function started. It does not prove the work finished.

A good signal should happen after the important work completes.

For example:

exports.dailyBillingSync = onSchedule("every 24 hours", async () => {
  await syncBillingState();
  await pingHeartbeat();
});

If syncBillingState() fails, the heartbeat is not sent.

That means the missing heartbeat becomes a useful alert.

Simple solution with example

Here is a practical Firebase scheduled function example using a heartbeat ping.

const { onSchedule } = require("firebase-functions/v2/scheduler");

const HEARTBEAT_URL = process.env.QUIETPULSE_HEARTBEAT_URL;

async function pingHeartbeat() {
  if (!HEARTBEAT_URL) {
    throw new Error("Missing QUIETPULSE_HEARTBEAT_URL");
  }

  // Global fetch is available on the Node.js 18+ Cloud Functions runtimes
  const response = await fetch(HEARTBEAT_URL, {
    method: "GET",
  });

  if (!response.ok) {
    throw new Error(`Heartbeat ping failed: ${response.status}`);
  }
}

async function syncBillingState() {
  // Example business logic:
  // - fetch active subscriptions from your payment provider
  // - update Firestore user records
  // - expire unpaid accounts
  // - write audit logs
}

exports.dailyBillingSync = onSchedule(
  {
    schedule: "every 24 hours",
    timeZone: "UTC",
    timeoutSeconds: 300,
    memory: "512MiB",
  },
  async () => {
    await syncBillingState();

    await pingHeartbeat();
  }
);

Your environment variable would contain a heartbeat URL like:

https://quietpulse.xyz/ping/{token}

The important detail is placement.

Put the heartbeat ping after the critical work, not before it.

If the scheduled function crashes, times out, or exits before finishing, the ping will not be sent. Your monitoring system can then alert you that the expected completion signal is missing.

You can also use finally, but be careful. If you always ping inside finally, you may report success even when the job failed. For scheduled jobs, that is usually the wrong signal.

This is risky:

exports.dailyJob = onSchedule("every 24 hours", async () => {
  try {
    await doImportantWork();
  } finally {
    await pingHeartbeat();
  }
});

That sends a heartbeat even after failure.

This is usually better:

exports.dailyJob = onSchedule("every 24 hours", async () => {
  await doImportantWork();
  await pingHeartbeat();
});

Now the ping means “the job completed,” not just “the job started.”
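A variant of that pattern logs the error and rethrows, so the failed run still shows up in Cloud Logging while the heartbeat stays unsent. This is a sketch with stubbed helpers: `doImportantWork` and `pingHeartbeat` are stand-ins, with the failure hard-coded so the control flow is visible:

```javascript
let pinged = false;

async function pingHeartbeat() {
  pinged = true; // stub: the real version would fetch() the heartbeat URL
}

async function doImportantWork() {
  // Stub failure: simulates the billing provider going down mid-run
  throw new Error("billing provider returned 503");
}

async function dailyJobHandler() {
  try {
    await doImportantWork();
  } catch (err) {
    console.error("dailyJob failed:", err.message);
    throw err; // rethrow so the execution is recorded as a failure
  }
  await pingHeartbeat(); // only reached when the work succeeded
}

dailyJobHandler().catch(() => {
  console.log("pinged:", pinged); // false: the missing ping is the alert signal
});
```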

Instead of building the alerting layer yourself, you can use a simple heartbeat monitoring tool like QuietPulse. Create a check, copy the ping URL, call it after your Firebase scheduled function completes, and get alerted if the ping is late. The point is not the tool itself — the important part is having an external completion signal.

Common mistakes

1. Only checking Firebase logs

Logs are useful when you already know something is wrong.

They are not enough to tell you that a job never ran.

If a scheduled function is not invoked, there may be no application log from that function at all. You might need to inspect Cloud Scheduler logs, Pub/Sub delivery, function logs, IAM errors, and deployment history.

That is a lot to rely on during an incident.

2. Pinging before the work finishes

A heartbeat at the start of the function proves invocation, not completion.

For scheduled functions, completion is usually what matters. If the job starts and then fails halfway through, an early ping can hide the failure.

Put the ping after the important work succeeds.

3. Using one heartbeat for many jobs

It is tempting to reuse one heartbeat URL for every scheduled function.

Avoid that.

A billing sync, cleanup job, report exporter, and notification sender should each have their own check. Otherwise, one healthy job can mask another broken one.

Use separate heartbeat checks for separate responsibilities.
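One way to keep per-job checks manageable is a small wrapper that pairs each job with its own heartbeat URL. This is a hedged sketch: the URLs and job bodies are hypothetical, and `pingFn` is injected so the wiring can be exercised without a network call:

```javascript
// Wrap a job body with its own completion ping.
function withHeartbeat(heartbeatUrl, work, pingFn) {
  return async () => {
    await work();
    // One URL per job: a healthy job cannot mask a broken one
    await pingFn(heartbeatUrl);
  };
}

// Stubs standing in for real job bodies and a real fetch-based ping
const pinged = [];
const ping = async (url) => { pinged.push(url); };

const billingSync = withHeartbeat("https://example.com/ping/billing", async () => {}, ping);
const nightlyCleanup = withHeartbeat("https://example.com/ping/cleanup", async () => {}, ping);
```

In the functions file you would then register each wrapped handler separately, for example `exports.dailyBillingSync = onSchedule("every 24 hours", billingSync)`.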

4. Ignoring time zones

Firebase schedules can run with a configured time zone. Your product logic may assume local time, while your monitoring window assumes UTC.

That mismatch can create false alerts or hide real delays.

Be explicit about time zones in both the scheduled function and monitoring configuration.
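As a sketch, the v2 API lets you state the zone directly on the schedule. The cron expression, zone, and job name here are illustrative:

```javascript
const { onSchedule } = require("firebase-functions/v2/scheduler");

// Runs at 02:00 New York time; the matching heartbeat window should use the
// same zone (or be converted to UTC) so alerts line up with the schedule.
exports.nightlyReport = onSchedule(
  { schedule: "0 2 * * *", timeZone: "America/New_York" },
  async () => {
    // ...generate and store the daily report...
  }
);
```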

5. Not testing failure cases

Do not only test the happy path.

Test what happens when:

  • the function throws an error
  • an external API times out
  • the heartbeat URL is missing
  • the job takes longer than expected
  • the scheduled function is disabled
  • the deployment removes or renames the function

Monitoring that has never seen a failure is often monitoring you cannot trust.

Alternative approaches

Heartbeat monitoring is not the only option. It is just one of the clearest ways to detect missed scheduled work.

Other useful signals include:

Firebase and Google Cloud logs

Cloud Logging can show function errors, execution duration, and scheduler delivery events. This is useful for debugging.

The downside is that logs are often reactive. Someone still needs to notice the failure, query the right logs, and understand what should have happened.

Error tracking

Tools like Sentry can catch exceptions inside scheduled functions.

That helps when the function runs and throws.

But error tracking may not catch missed invocations. If the function never starts, there may be no exception inside your application code.

Cloud Scheduler monitoring

You can monitor Cloud Scheduler execution attempts and failures.

This helps detect trigger-level issues, but it may not prove business-level completion. The scheduler can successfully invoke a function that later fails internally.

Database audit records

Some teams write a job_runs document to Firestore for each scheduled task.

That can be very useful:

await db.collection("job_runs").add({
  job: "dailyBillingSync",
  status: "success",
  finishedAt: new Date().toISOString(),
});

This gives you a history of runs.

But you still need something to watch that history and alert you when a run is missing.
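That watcher can be sketched in plain Node.js. `findStaleJobs` is a hypothetical helper, and the inline `runs` array stands in for a Firestore query over `job_runs` (the field names match the example above):

```javascript
// Flag jobs whose last successful run is older than the allowed window.
function findStaleJobs(runs, maxAgeMs, now) {
  const lastSuccess = new Map();
  for (const run of runs) {
    if (run.status !== "success") continue; // failed runs don't count as completion
    const t = Date.parse(run.finishedAt);
    const prev = lastSuccess.get(run.job);
    if (prev === undefined || t > prev) lastSuccess.set(run.job, t);
  }
  const stale = [];
  for (const [job, t] of lastSuccess) {
    if (now - t > maxAgeMs) stale.push(job);
  }
  return stale;
}

const now = Date.parse("2024-06-10T00:00:00Z");
const runs = [
  { job: "dailyBillingSync", status: "success", finishedAt: "2024-06-09T02:00:00Z" },
  { job: "cleanupExpiredSessions", status: "success", finishedAt: "2024-06-05T02:00:00Z" },
  { job: "cleanupExpiredSessions", status: "error", finishedAt: "2024-06-09T02:00:00Z" },
];

// Daily schedule plus a 6-hour grace period
console.log(findStaleJobs(runs, 30 * 60 * 60 * 1000, now));
// flags cleanupExpiredSessions only: its last success was five days ago
```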

Custom dashboards

You can build your own dashboard showing the last successful run of each job.

That works well if you have time to maintain it. For small teams and indie projects, an external heartbeat check is often simpler and less fragile.

FAQ

What is Firebase scheduled functions monitoring?

Firebase scheduled functions monitoring is the process of checking whether scheduled Cloud Functions run and complete on time. It helps detect missed executions, runtime failures, delays, and silent scheduled job problems before they affect users or data.

Are Firebase logs enough to monitor scheduled functions?

Firebase logs are helpful for debugging, but they are not enough by themselves. Logs can show errors after you look for them, but they may not proactively alert you when a scheduled function never runs or never completes.

Should I ping a heartbeat at the start or end of a scheduled function?

For most production jobs, ping at the end. A start ping only proves that the function began. An end ping proves that the important work completed successfully.

Can Firebase scheduled functions fail silently?

Yes. They can fail because of permissions, deployment changes, missing environment variables, timeouts, quota issues, external API failures, or scheduler configuration problems. Some failures may not be obvious from normal uptime checks.

How often should I monitor a Firebase scheduled function?

Match the monitoring window to the schedule. If a function runs every hour, alert if it does not complete within a reasonable grace period after that hour. If it runs daily, use a daily check with enough grace time for normal delays.

Conclusion

Firebase scheduled functions are great for background work, but they can fail quietly.

The app may stay online while billing syncs, cleanup tasks, reports, notifications, or maintenance jobs stop running.

Good Firebase scheduled functions monitoring focuses on completion. Add a heartbeat ping after the important work finishes, give each job its own check, and alert when the expected signal does not arrive.

That simple pattern catches the failures that uptime checks, dashboards, and logs often miss.


Originally published at https://quietpulse.xyz/blog/firebase-scheduled-functions-monitoring
