
Vhub Systems

How to Build a Sales Signal Tracker That Alerts You When a Prospect Views Your Pricing Page

The Problem

You run a solid demo. The prospect seems interested. You follow up a couple of times — no response. Two weeks pass. Then you get a terse email: "We went with another vendor."

Here's what you didn't know: in that quiet two-week window, someone from that company visited your pricing page four times. They were still evaluating. They just weren't talking to you.

Most sales teams never find out. The signal was there — it just wasn't routed anywhere useful.

This post walks through building a lightweight tracker that watches your pricing and docs pages, matches visitors to your open opportunities, and fires a Slack alert to the right rep when a known prospect re-engages. Total cost: $0–$20/month.


Why Most CRMs Miss This Signal

CRMs are rep-activity systems. They track calls logged, emails sent, and stage changes — actions your team takes. They're not built to passively observe prospect behavior between touchpoints.

Intent data platforms do exist, but they operate at a different layer. They aggregate behavioral signals across a network of publisher sites and return scores like "company X is researching topic Y." Useful for top-of-funnel discovery, but at $500–$1,500/month, they're overkill if you just want to know when a known account is back on your own site.

What you actually want is simpler: per-account visibility into who's hitting your own pages, matched against the deals already in your pipeline.


The Signal Pattern

The core logic is straightforward:

  1. Your web server (or CDN) logs every request, including IP address and URL path
  2. You run a small process that reads those logs and checks whether any IPs belong to domains in your prospect list
  3. For IPs you don't recognize, you hit a free enrichment API to resolve them to a company
  4. If there's a match, you look up the associated rep and open opportunity
  5. You post a Slack message: "Acme Corp just visited /pricing — last contact was 9 days ago"

No reverse-proxy magic. No JavaScript tracking pixel that gets blocked by ad blockers. Just server-side log processing.
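To make step 1 concrete, here's what a typical Nginx combined-format log line looks like (sample data) and the four fields the tracker actually reads out of it:

```javascript
// A sample Nginx combined-format access log line:
const line =
  '203.0.113.42 - - [22/Mar/2026:14:03:11 +0000] "GET /pricing HTTP/1.1" 200 5321 "-" "Mozilla/5.0"';

// The tracker only needs the client IP, timestamp, method, and path.
const match = line.match(/^(\S+) \S+ \S+ \[([^\]]+)\] "(\w+) (\S+)/);

console.log(match[1]); // 203.0.113.42
console.log(match[3]); // GET
console.log(match[4]); // /pricing
```

Everything else on the line (status code, bytes, referrer, user agent) is ignored for the core use case, though the referrer can be a useful secondary signal later.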


Using Apify for Public Signal Crawling

If you want to expand beyond your own logs — for example, tracking mentions of your company or category on public forums — Apify's website-content-crawler is a practical lightweight option. You can schedule runs that pull structured data from public pages and feed it into the same enrichment pipeline.

For the core use case (your own site traffic), you don't need Apify. But if you're building a broader signal layer, it pairs well with the IP enrichment approach below.
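If you do go that route, kicking off a crawler run is a single API call. A minimal sketch, using Node 18+'s built-in fetch — the input fields here are assumptions, so check the actor's documented input schema before relying on them:

```javascript
// Build the run-trigger URL for a given Apify actor (pure helper).
function apifyRunUrl(actorId, token) {
  return `https://api.apify.com/v2/acts/${actorId}/runs?token=${token}`;
}

// Start a crawl of a public page. The startUrls input shape is an
// assumption — verify it against website-content-crawler's input schema.
async function startCrawl(startUrl) {
  const res = await fetch(
    apifyRunUrl("apify~website-content-crawler", process.env.APIFY_TOKEN),
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ startUrls: [{ url: startUrl }] }),
    }
  );
  return res.json(); // run metadata, including the run ID to poll for results
}
```

The run finishes asynchronously; you'd poll the run ID (or use a webhook) to fetch results into the enrichment pipeline.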


The Code

Here's a full working Node.js script. It reads an Nginx access log, resolves IPs to companies using ipinfo.io (free tier: 50k requests/month), matches against a local prospect list, and fires a Slack alert.

Prerequisites

npm install axios

prospects.json

Store your open opportunities as a simple JSON file:

[
  {
    "domain": "acmecorp.com",
    "company": "Acme Corp",
    "rep_slack": "@jane",
    "last_contact": "2026-03-19"
  },
  {
    "domain": "globex.io",
    "company": "Globex",
    "rep_slack": "@carlos",
    "last_contact": "2026-03-10"
  }
]

signal-tracker.js

const fs = require("fs");
const readline = require("readline");
const axios = require("axios");

const IPINFO_TOKEN = process.env.IPINFO_TOKEN || "";
const SLACK_WEBHOOK = process.env.SLACK_WEBHOOK_URL || "";
const LOG_FILE = process.env.NGINX_LOG || "/var/log/nginx/access.log";
const PROSPECTS_FILE = process.env.PROSPECTS_FILE || "./prospects.json";
const WATCH_PATHS = ["/pricing", "/docs", "/plans"];

const prospects = JSON.parse(fs.readFileSync(PROSPECTS_FILE, "utf8"));

// Parse a single Nginx combined log line
function parseLogLine(line) {
  const match = line.match(/^(\S+) \S+ \S+ \[([^\]]+)\] "(\w+) (\S+)/);
  if (!match) return null;
  return { ip: match[1], timestamp: match[2], method: match[3], path: match[4] };
}

// Resolve an IP to an org/domain via ipinfo.io
async function resolveIp(ip) {
  try {
    const url = IPINFO_TOKEN
      ? `https://ipinfo.io/${ip}?token=${IPINFO_TOKEN}`
      : `https://ipinfo.io/${ip}/json`;
    const { data } = await axios.get(url, { timeout: 3000 });
    // org field looks like "AS12345 Acme Corp" — also check hostname
    return {
      org: data.org || "",
      hostname: data.hostname || "",
    };
  } catch {
    return { org: "", hostname: "" };
  }
}

// Match resolved info against prospect list
function matchProspect(hostname, org) {
  return prospects.find((p) => {
    const domain = p.domain.toLowerCase();
    return (
      hostname.toLowerCase().includes(domain) ||
      org.toLowerCase().includes(domain.split(".")[0])
    );
  });
}

// Days since last contact
function daysSince(dateStr) {
  const last = new Date(dateStr);
  const now = new Date();
  return Math.floor((now - last) / (1000 * 60 * 60 * 24));
}

// Post a Slack alert
async function sendSlackAlert(prospect, path) {
  const days = daysSince(prospect.last_contact);
  const text =
    `*Sales Signal* — ${prospect.company} just visited \`${path}\`\n` +
    `Rep: ${prospect.rep_slack} | Last contact: ${days} day${days !== 1 ? "s" : ""} ago`;
  await axios.post(SLACK_WEBHOOK, { text }, { timeout: 5000 });
  console.log(`Alert sent for ${prospect.company}`);
}

// Main: tail the log and process new lines
async function processLog() {
  const alerted = new Set(); // avoid duplicate alerts in one run
  const rl = readline.createInterface({ input: fs.createReadStream(LOG_FILE) });

  for await (const line of rl) {
    const entry = parseLogLine(line);
    if (!entry) continue;
    if (!WATCH_PATHS.some((p) => entry.path.startsWith(p))) continue;

    const key = `${entry.ip}:${entry.path}`;
    if (alerted.has(key)) continue;

    const { org, hostname } = await resolveIp(entry.ip);
    const prospect = matchProspect(hostname, org);

    if (prospect) {
      alerted.add(key);
      await sendSlackAlert(prospect, entry.path);
    }
  }
}

processLog().catch(console.error);

Running it

IPINFO_TOKEN=your_token \
SLACK_WEBHOOK_URL=https://hooks.slack.com/services/... \
NGINX_LOG=/var/log/nginx/access.log \
node signal-tracker.js

For continuous monitoring, drop this in a cron job that runs every 15–30 minutes against a rolling log window, or use a tool like tail -f piped into a long-running process.
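A crontab entry for the 15-minute cadence might look like this (paths are illustrative — adjust for your setup, and remember the script as written re-reads the whole log, so point it at a rotated or windowed file to avoid duplicate alerts across runs):

```shell
# Run the signal tracker every 15 minutes
*/15 * * * * cd /opt/signal-tracker && /usr/bin/node signal-tracker.js >> tracker.log 2>&1
```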


Alert Routing

The Slack message structure above is intentionally minimal:

  • Company name — so the rep knows immediately who it is
  • Page visited — /pricing carries different intent than /docs/api
  • Days since last contact — the number that drives urgency

If your CRM has an API, you can extend the pipeline to pull the live opportunity stage and add it to the alert. HubSpot and Pipedrive both offer free-tier API access.
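As a sketch of what that extension looks like: a pure helper that folds a stage into the alert text, plus a hypothetical HubSpot lookup. The `deal_id` field and the CRM v3 deals endpoint are assumptions on top of the base setup — verify both against HubSpot's API docs. This uses Node 18+'s built-in fetch:

```javascript
// Pure helper: extend the alert text with a CRM stage when one is available.
function alertTextWithStage(prospect, path, days, stage) {
  const base =
    `*Sales Signal* — ${prospect.company} just visited \`${path}\`\n` +
    `Rep: ${prospect.rep_slack} | Last contact: ${days} day${days !== 1 ? "s" : ""} ago`;
  return stage ? `${base} | Stage: ${stage}` : base;
}

// Hypothetical HubSpot lookup — assumes a `deal_id` on the prospect record
// and a private-app token in HUBSPOT_TOKEN.
async function fetchDealStage(dealId) {
  const res = await fetch(
    `https://api.hubapi.com/crm/v3/objects/deals/${dealId}?properties=dealstage`,
    { headers: { Authorization: `Bearer ${process.env.HUBSPOT_TOKEN}` } }
  );
  const data = await res.json();
  return data.properties.dealstage;
}
```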


Enrichment Fallback: Clearbit and Hunter

ipinfo.io resolves IPs to ISP/org names, which works well for companies with dedicated IP ranges (common with mid-market and enterprise accounts). For smaller companies on shared hosting or consumer ISPs, you'll get noise.

Two fallback options:

  • Clearbit Reveal — reverse-IP to company, paid but accurate
  • Hunter.io — domain-to-company lookup; useful if you can extract a domain from a hostname

For most early-stage pipelines, ipinfo.io free tier is sufficient to catch the accounts that matter most.
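For the Hunter.io path, you need a lookup-ready domain out of the reverse-DNS hostname. A naive sketch — the two-label heuristic here breaks on multi-part TLDs like .co.uk, so a production version should use a public-suffix list library:

```javascript
// Naive: take the last two labels of a hostname as the registrable domain.
// Breaks on multi-part TLDs (e.g. example.co.uk) — use a public-suffix
// list for anything serious.
function extractDomain(hostname) {
  const labels = hostname.toLowerCase().split(".").filter(Boolean);
  if (labels.length < 2) return null;
  return labels.slice(-2).join(".");
}

console.log(extractDomain("mail-out.acmecorp.com")); // "acmecorp.com"
console.log(extractDomain("localhost")); // null
```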


Cost Framing

| Layer | Option | Cost |
|---|---|---|
| Log processing | Self-hosted Node.js | $0 |
| IP enrichment | ipinfo.io free tier | $0 |
| IP enrichment | ipinfo.io paid | $11–$49/month |
| Slack alerts | Incoming webhooks | $0 |
| Prospect matching | Local JSON / CRM API | $0 |
| **Total** | | **$0–$20/month** |

Compare that to enterprise intent platforms at $500–$1,500/month. Those platforms serve a different use case — broad market discovery across thousands of companies — but if your goal is monitoring a focused list of 20–200 open opportunities, a targeted setup like this is hard to beat on cost.


What This Won't Do

Be clear-eyed about the limits:

  • VPN and residential IPs — you won't match these reliably
  • Mobile traffic — carrier-grade NAT makes IP attribution unreliable
  • Privacy regulations — if you're processing traffic from EEA users, review your GDPR obligations before logging IPs server-side

The signal is probabilistic, not perfect. Treat it as a prompt to reach out, not as confirmed buying intent.
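One mitigation worth considering if GDPR applies: truncate the last octet of IPv4 addresses before storing them, the same approach common analytics tools use for IP anonymization. Company-level (org/ASN) resolution often survives this for dedicated corporate ranges; host-level precision does not — so treat this as a trade-off, not a free fix:

```javascript
// Zero the last octet of an IPv4 address before storing or enriching it.
function anonymizeIp(ip) {
  const parts = ip.split(".");
  if (parts.length !== 4) return ip; // leave IPv6 / malformed input alone
  parts[3] = "0";
  return parts.join(".");
}

console.log(anonymizeIp("203.0.113.42")); // "203.0.113.0"
```

This is a technical mitigation, not legal advice — review your obligations with counsel.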


Next Steps

Once the basic pipeline works, a few natural extensions:

  1. Track a wider set of pages — add /case-studies, /security, /integrations as secondary signals
  2. Log to a database — store matches with timestamps so you can see visit frequency over time
  3. Weight by recency — an account that visited three times in 48 hours is a stronger signal than one visit two weeks ago
  4. Integrate with your CRM — auto-create a task or update the opportunity's "last prospect activity" field
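Extension 3 can be as simple as an exponential-decay score over stored visit timestamps — a sketch, assuming you've already started logging matches with timestamps per extension 2:

```javascript
// Recency-weighted signal score: recent visits count more.
// Each visit contributes 0.5^(ageInDays / halfLifeDays), so a visit
// `halfLifeDays` old is worth half of one that just happened.
function recencyScore(visitTimestamps, halfLifeDays = 3, now = Date.now()) {
  const msPerDay = 24 * 60 * 60 * 1000;
  return visitTimestamps.reduce((score, t) => {
    const ageDays = (now - t) / msPerDay;
    return score + Math.pow(0.5, ageDays / halfLifeDays);
  }, 0);
}
```

With a 3-day half-life, three visits in the last 48 hours score far higher than a single visit two weeks ago — which matches the intuition in point 3 above. Tune the half-life to your sales cycle.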

The core pattern — log ingestion, IP enrichment, prospect matching, alert routing — is reusable across a lot of similar sales-signal problems. Build it once, then layer on the signals that matter for your pipeline.
