I want to tell you about the moment I realized I was building the wrong thing.
It was 11pm. I had just finished adding the ninth feature to my sales platform — Nopp — and I was feeling good about it. More tools. More functionality. More value, right?
Then I got a message from a developer who had tried the platform that week.
"I already use Gemini Pro with Deep Research for this. Don't see the extra value."
That one comment restructured how I thought about everything I was building.
He wasn't wrong. If you squinted at Nopp, it looked like a smarter search engine. And nobody beats Google at search. So I had two choices: keep adding features and hope something stuck, or go deeper on the one thing I had that Gemini absolutely could not replicate.
That thing was real-time hiring signals matched to a specific company's ICP.
This is the story of how I built it, what broke along the way, and the three things I wish someone had told me before I started.
What Is a Hiring Signal and Why Does It Matter?
A hiring signal isn't just a job posting. Every company posts jobs. That's noise.
A hiring signal is a job posting that means something specific to a specific seller. When a company posts "VP of Sales — Enterprise" and that company matches your ideal customer profile, that's not a job posting — that's a buying event. That company is about to rebuild their sales stack. They have budget. They have urgency. And nobody is calling them yet because most salespeople don't find out until the hire is announced three months later.
The window between "job posted" and "new VP starts and freezes all vendor decisions" is roughly 30-60 days. That's your window. Miss it and you're starting from scratch.
The problem I was trying to solve: how do you monitor thousands of job postings across multiple platforms, filter them by ICP match, enrich the contact data, and surface only the ones worth acting on — in real time, for any company that uses your platform?
That's not a feature. That's infrastructure.
The Architecture I Ended Up With
I'll spare you the three architectures I tried before this one worked.
The final system has four layers that run continuously in the background:
Layer 1: Collection
Job data comes from multiple sources — LinkedIn Jobs, Indeed, Greenhouse, Lever, and Workable. Each source has different rate limits, different data formats, and different reliability. The first mistake I made was treating them as equivalent. They're not. LinkedIn has the richest data but the strictest rate limits. Indeed has volume but messy formatting. Greenhouse and Lever are goldmines because they're ATS platforms — companies that use them are usually scaling fast and their job data is structured and clean.
I run a collection job every 4 hours for high-priority sources and every 24 hours for the rest. Real-time sounds good until you realize that job postings don't change minute-to-minute. Four hours is fast enough to catch the signal before it goes cold.
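The cadence above can be sketched as a per-source schedule. This is a minimal illustration, not the production scheduler — source names reflect the priorities described, but the exact interval values are my assumption:

```python
from datetime import datetime, timedelta

# Hypothetical per-source collection cadence, in hours.
COLLECTION_INTERVALS = {
    "linkedin": 4,    # richest data, strictest rate limits -> high priority
    "greenhouse": 4,  # clean, structured ATS data
    "lever": 4,
    "indeed": 24,     # high volume, messy formatting -> daily is enough
    "workable": 24,
}

def is_due(source: str, last_run: datetime, now: datetime) -> bool:
    """Return True if this source's collection job should run again."""
    interval = timedelta(hours=COLLECTION_INTERVALS[source])
    return now - last_run >= interval
```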
Layer 2: Classification
Raw job postings are useless without context. "Sales Manager" at a 10-person startup is completely different from "Sales Manager" at a 500-person SaaS company. The classification layer does three things:
First, it normalizes job titles into seniority levels and function categories. "Head of Revenue," "VP Sales," "Chief Revenue Officer," and "Director of Business Development" all map to the same signal type: senior sales leadership hire. This sounds trivial. It took two weeks to get right because job title creativity is apparently limitless.
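The normalization step looks something like the sketch below. The pattern table is illustrative — the real taxonomy covers far more titles than these four, which is exactly why it took two weeks:

```python
import re

# Illustrative patterns mapping title variants to one signal type.
# The real table is much larger; job title creativity is limitless.
SENIOR_SALES_PATTERNS = [
    r"\bhead of revenue\b",
    r"\bvp,?\s*(of\s+)?sales\b",
    r"\bchief revenue officer\b|\bcro\b",
    r"\bdirector of business development\b",
]

def normalize_title(title: str):
    """Map a raw job title to a signal type, or None if unrecognized."""
    t = title.lower()
    for pattern in SENIOR_SALES_PATTERNS:
        if re.search(pattern, t):
            return "senior_sales_leadership"
    return None
```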
Second, it extracts buying intent signals from the job description itself. Words like "build from scratch," "first sales hire," "architect our go-to-market," and "evaluate and implement tools" are gold. They tell you this company is in active stack-building mode, not just backfilling a seat. I built an LLM classifier that reads the full job description and scores it on tool-buying intent from 0 to 100. Anything above 65 gets flagged.
Third, it assigns a signal type from a taxonomy I developed over about 200 manually labeled examples: First Sales Hire, Leadership Change, Team Expansion, Tech Stack Build, Market Expansion, and GTM Rebuild. Each type has a different outreach strategy and a different urgency level.
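To make the scoring concrete, here's a keyword heuristic standing in for the LLM classifier — same idea, much dumber. The flagged threshold comes from the post; the phrase weights are invented for illustration:

```python
# Stand-in for the LLM intent scorer. The phrases are the ones called
# out above as "gold"; the weights are made up for this sketch.
INTENT_PHRASES = {
    "build from scratch": 30,
    "first sales hire": 30,
    "architect our go-to-market": 25,
    "evaluate and implement tools": 25,
}
FLAG_THRESHOLD = 65  # anything above this gets flagged

def intent_score(description: str) -> int:
    """Score a job description on tool-buying intent, 0-100."""
    d = description.lower()
    score = sum(w for phrase, w in INTENT_PHRASES.items() if phrase in d)
    return min(score, 100)

def is_flagged(description: str) -> bool:
    return intent_score(description) > FLAG_THRESHOLD
```

In production an LLM reads the full description, so it catches intent phrased in ways no keyword list anticipates — but the thresholding logic downstream is the same.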
Layer 3: ICP Matching
This is where it gets personal. A hiring signal that's high-value for a CRM company is worthless for a recruiting tool. The matching layer takes every classified signal and scores it against every user's ICP profile in the system.
The ICP profile itself is generated when a user enters their company URL — we scrape their site, extract their value proposition, infer their target market, and build a structured ICP object with target titles, company sizes, industries, buying triggers, and disqualifiers.
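The structured ICP object might look like this — field names here are my guess at the shape, not the actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class ICPProfile:
    """ICP object built from a scraped company site (illustrative fields)."""
    company_url: str
    value_proposition: str
    target_titles: list = field(default_factory=list)    # e.g. "VP of Sales"
    company_sizes: list = field(default_factory=list)    # e.g. "51-200"
    industries: list = field(default_factory=list)       # e.g. "B2B SaaS"
    buying_triggers: list = field(default_factory=list)  # e.g. "first sales hire"
    disqualifiers: list = field(default_factory=list)    # e.g. "agencies"
```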
The match score has three dimensions:
- Company fit: does this company's size, industry, and geography match the user's ICP?
- Signal relevance: is this the type of hire that indicates a buying event for this specific product category?
- Timing score: how recent is the signal and how urgent is the window?
The combined score determines placement in the Priority Feed. Only signals above 70 make it to the top. Anything below 50 is filtered out entirely.
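The three dimensions and the feed thresholds can be sketched as a weighted blend. The 70 and 50 cutoffs are from the post; the weights are an assumption, since the post doesn't specify how the dimensions combine:

```python
def match_score(company_fit: float, signal_relevance: float, timing: float,
                weights=(0.4, 0.4, 0.2)) -> float:
    """Blend the three dimensions (each 0-100) into one composite score.
    Weights are hypothetical; the real blend isn't documented here."""
    w_fit, w_rel, w_time = weights
    return company_fit * w_fit + signal_relevance * w_rel + timing * w_time

def feed_placement(score: float) -> str:
    """Apply the Priority Feed thresholds described above."""
    if score > 70:
        return "priority"   # top of the Priority Feed
    if score >= 50:
        return "standard"
    return "filtered"       # below 50 never reaches the user
```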
Layer 4: Enrichment
A signal without a contact is just an observation. The enrichment layer takes the company domain from the job posting and finds the right person to contact — not the person hiring for the role, but the person who would buy a tool to help that role succeed.
If a company posts for VP of Sales, I don't want to reach the recruiter. I want the current CRO or CEO who's about to onboard someone and is acutely aware of what tools they need. The enrichment layer figures out who that is, finds their verified email and LinkedIn, and passes it through to the signal card alongside the pre-generated opening line.
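The "who to contact" logic is essentially a mapping from the role being hired to the persona who buys tools for that role. A hypothetical version — the role keys and persona lists are illustrative, not the production table:

```python
# Hypothetical mapping: role being hired -> likely tool buyers for that role.
ROLE_TO_BUYER = {
    "vp_of_sales": ["CRO", "CEO"],          # whoever is onboarding the new VP
    "first_sales_hire": ["CEO", "Founder"],
    "sales_engineer": ["VP of Sales", "CRO"],
}

def buyer_personas(hired_role: str) -> list:
    """Return likely buyer titles for a hiring signal, most specific first."""
    return ROLE_TO_BUYER.get(hired_role, ["CEO"])  # fall back to the top
```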
The Three Things I Got Wrong
1. I underestimated data quality as a product feature.
Early on I was focused on signal volume. More signals = more value, right? Wrong. Ten high-quality, perfectly matched signals are worth more than 500 mediocre ones. Users don't want a feed to manage — they want a shortlist to act on.
The turning point was adding the "Verified" badge to signal cards. When I started surfacing only signals with verified contact emails and source links users could click through to confirm, engagement went up dramatically. Trust is a product feature. Never underestimate it.
2. I tried to make the system real-time before I had the data quality right.
Real-time sounds impressive. "Get hiring signals the moment they're posted" is a great marketing line. But if those real-time signals are poorly classified, badly matched, and missing enriched contacts — you've just delivered garbage faster.
I spent three weeks optimizing ingestion speed and then realized the signals I was delivering quickly were not good enough to act on. I had to slow down, fix the classification layer, improve the ICP matching, and rebuild enrichment before I touched latency again.
Ship quality before you ship speed. Every time.
3. I built for the technical user first.
My first version of the signal feed was beautiful if you were a data engineer. Filterable, sortable, exportable. Raw signal data with all the metadata exposed.
What salespeople actually want is: "here's who to email today and here's what to say." That's it. The entire UI had to be rebuilt around that insight. The technical details moved to an expandable drawer. The opening line and recommended approach moved to the top of every card.
Know who you're building for and build for them, not for yourself.
What I Learned About Building a Signal API Specifically
When I decided to expose this as an API for other developers to build on, a few things became clear very quickly.
The response schema is your product. Developers will build their entire data model around whatever you return. If your schema is inconsistent, poorly named, or changes between versions — you've broken their product. I spent more time on the JSON schema than on any other part of the API. Naming, nesting, nullability, consistent date formats. Get it right before anyone integrates it because changing it later is painful for everyone.
A real sample response is worth a thousand words of documentation. The single most effective thing in my API docs is a real signal response from a real company with real data. Not "company": "Example Corp" — an actual signal card output from a real job posting with a real intent score and a real opening line. Developers can look at that one response and immediately understand the entire API. Don't sanitize your examples into meaninglessness.
Developers will find use cases you never imagined. I built the hiring signal endpoint for sales teams. The first three external developers who tested it were building a recruiting tool, a competitive intelligence dashboard, and a VC deal sourcing platform. None of those were in my product roadmap. All of them make perfect sense. Build a good API and get out of the way.
What the System Looks Like Today
The current version of the Buying Intent Discovery feature surfaces:
- Hiring signals from 5 job board sources refreshed every 4 hours
- Funding signals updated daily
- Competitor intercept signals from social and review platforms
- An intent score broken into Economic, Growth, and Behavioral dimensions
- Verified contact enrichment on every signal scoring above 65
- A pre-generated opening line personalized to the specific signal and the user's value prop
- A recommended outreach approach based on signal type
The Priority Feed ranks everything by a composite score and shows users their top opportunities with close probability percentages. A "75% Close Ready" badge means the signal timing, ICP match, and contact quality all align — act now.
The API exposes all of this through a single GET /api/v1/signals/hiring endpoint. Pass your company URL as the ICP reference, set your minimum intent score, and get back a structured array of matched signals with enriched contacts and generated outreach — ready to drop into whatever product you're building.
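A minimal call might look like the sketch below. The endpoint path is from the post, but the host, query parameter names, and auth header are my assumptions — check the docs at nopp.us/api for the real names:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Path is documented above; host and parameter names are assumptions.
BASE = "https://nopp.us/api/v1/signals/hiring"

def build_request(icp_url: str, min_intent: int, api_key: str) -> Request:
    """Build the GET request: company URL as ICP reference, minimum
    intent score as the filter."""
    query = urlencode({"icp_url": icp_url, "min_intent": min_intent})
    return Request(f"{BASE}?{query}",
                   headers={"Authorization": f"Bearer {api_key}"})

req = build_request("https://example.com", 65, "YOUR_API_KEY")
# signals = json.load(urlopen(req))  # -> array of matched, enriched signals
```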
The Honest Part
I'm going to tell you something most founder posts skip.
I almost didn't build the API layer. The developer syndrome is real — there's always another feature that feels more important than packaging what you have and putting it in front of people. I kept thinking the product needed to be more complete before I could open it up.
It didn't. It needed users. And the API was the fastest path to getting builders — people who would stress-test it, find the edge cases, and give me the feedback I couldn't generate by myself.
The Gemini user who told me he didn't see the value? He was right about what I had at that moment. But he was the reason I went deeper on the signal layer instead of wider on features. That one comment was worth more than a month of product planning.
Build the thing. Ship the thing. Listen to the people who use it. Repeat.
Try It
If you're building anything in the sales, CRM, recruiting, or outreach space and need a signal layer — the API is live and the free tier is genuinely free. 500 calls a month, no credit card, API key in your inbox in 60 seconds.
nopp.us/api
I read every response to this post. If you've built something similar or have questions about the architecture — drop a comment. I'll answer everything.
Nopp is a sales intelligence platform that turns any company URL into a complete go-to-market engine. We're building toward the point where a founder enters their URL on Monday and books their first meeting by Wednesday.
Follow along: nopp.us | @nopp