
Jacob Cohen for Harper


I Stopped Buying SaaS for RevOps. I Built What I Needed on Harper Instead.

TL;DR: A non-engineer RevOps leader replaced a manual, error-prone reporting process by using OpenAI Codex to build and deploy a fully automated, production-grade Growth Weekly Digest application on the Harper platform.


I run commercial operations at a startup. That means Salesforce administration, deal desk, forecasting, pipeline management, contract workflows, and a dozen other things that keep our revenue engine running. I am not a software engineer. My day job is operations.

Last month, I built and deployed a production internal application that pulls live data from Salesforce and Slack, generates AI-assisted commentary, routes it through a multi-role approval workflow, and publishes a weekly growth digest for our entire company. It has eight database tables, five user roles, three external integrations, a scheduler with leader-node election, retry logic, CI/CD with automated testing, and it runs on a scaled Harper Fabric deployment. I built it in a few days using OpenAI Codex as my coding partner. I didn't write a single line of code myself.

What does "built" mean when you don't write code? It means I spent a few days having a conversation. I described what I needed, Codex planned and executed, and we iterated together until it was right. I'd explain a workflow, review its approach, push back when something didn't fit, ask it what common RevOps patterns look like, and refine until I was satisfied with the plan. Then it wrote the code. I did the thinking. It did the typing. That's the entire method, and it's why this post isn't a tutorial. There's no code to walk through. There's a conversation that produced a production application.

This post is about how I did it, why I did it on Harper instead of buying another SaaS tool, and why I think this changes the math for every business operator who's currently stuck choosing between manual processes and overpriced software subscriptions.


The RevOps Problem Nobody Talks About

If you work in RevOps, you know this situation. You have a process that's valuable, but tedious. It requires pulling data from countless, ever-changing systems, assembling it into something digestible, adding your own analysis, and distributing it. It takes 30 minutes to an hour. It's not hard, it's just manual enough that it consistently loses priority against everything else you're doing.

For me, that process was our Growth Weekly Digest.

Every Friday for the better part of a year, I produced a report that tracked our weekly sales activity: closed won and closed lost deals, new pipeline created, pipeline by rep and channel, BDR metrics, and narrative commentary about what happened and what to watch. The audience started as our executive team, expanded to team leads, and eventually the whole company. Our product management team relies on it to keep a pulse on sales activity, which is a critical input for product prioritization and roadmap decisions. People across the company found it valuable because it gave them visibility into the growth engine without having to dig through Salesforce or chase down updates.

The process to create it was entirely manual.

I'd start by pulling up Salesforce reports and dashboards. I had set up Salesforce's dashboard subscribe feature to email me at 9 AM as a trigger to get started. Then I'd extract the numbers and copy them into a Google Doc: deal counts, revenue, pipeline breakdowns; you know the drill. That copy-and-paste step alone took ten minutes minimum and was highly error-prone. One wrong cell and the whole week's narrative would be based on bad data. This happened more often than I'd like to admit. The whole process violated one of my favorite rules: never introduce human error when you don't have to. I knew that copying and pasting numbers between systems was a terrible workflow, but I did it anyway because there wasn't a better option.

Then came the commentary. The numbers alone don't tell the story. I needed to explain which deals advanced, which stalled, what happened in customer conversations, and what risks were emerging. To write that, I'd piece together what I remembered from the week, scan Slack conversations, check meeting notes, and sometimes use Slack AI to help me summarize activity. The whole thing took 30 to 45 minutes, depending on how much digging was required.

It doesn't sound like much. Here's the real cost: I stopped doing it.

When the year turned over and we were finalizing 2026 quotas, I paused the digest. We were well into Q1 before I confronted the fact that I didn't want to go back to the manual grind, even though people found the report valuable. At a startup, everyone wears multiple hats. I do far more than RevOps in a given week. The digest was always the first thing to get deprioritized. (Whatever you do, don't tell my boss, even though she's probably already forwarded this post to everyone she knows.)

So for weeks, the report just didn't get written.


Why I Didn't Buy Another SaaS Tool

The obvious move would have been to buy software for this. Here's why I didn't.

Salesforce dashboards aren't robust enough for this kind of consolidated weekly reporting, and more importantly, they require paid Salesforce licenses for every viewer. If you've ever priced Salesforce seats, you know the pain. The per-user licensing costs add up fast, and we already carry more licenses than we'd like. We are a cost-conscious startup and we absolutely do not get our money's worth out of every seat we hold. There was no world in which we were going to buy CRM licenses for the entire company just so people could read a weekly summary.

We used Databox for other executive-level dashboards, and it provided similar, though not identical, value here. It worked for a while, but we ran into issues with their Salesforce connector that broke our data pipeline, and even when it was working, it didn't do everything we needed it to. That was the end of Databox for us.

Tableau was so comically expensive for our use cases that it wasn't even worth a serious conversation, even though we could have used it for other reporting needs too.

This is the pattern every RevOps person knows. The SaaS options are either too limited, too expensive, too rigid, or some combination of all three. And every tool you add is another vendor, another login, another integration to maintain, another line item for Finance to question. The thing you actually need is always slightly different from what the tool provides, so you end up building workarounds on top of the tool, at which point you're doing custom work anyway, just on someone else's platform.

I realized I could spend a few days building exactly what I needed, with real integrations, real auth, real deployment, on Harper. And it would cost less ongoing effort than re-establishing the manual process I'd abandoned. No point solution. No per-seat reporting license. No compromises on what the report should contain or who can see it. Harper is a platform, not a tool. The weekly digest is one application on it. The next one I build runs on the same infrastructure, the same deployment model, the same data layer. That's a fundamentally different cost equation than buying a new SaaS product for every internal need.


What I Built

The Growth Weekly Digest is now a Harper application. It runs on Harper Fabric with its own database, API, web interface, and scheduled jobs, all in one deployable unit.

Note: All screenshots in this post use mock data. Our actual digest contains proprietary sales information.

The digest dashboard showing runs at various workflow stages: draft, approved, and published.

Here's what it does each week:

Data collection. On a scheduled trigger (Friday mornings), the application queries Salesforce via JWT-authenticated API calls. It pulls Opportunity data: deal names, stages, forecast categories, amounts, ARR fields, close dates, lead sources, and rep assignments. It also queries OpportunityPartner objects to classify deals by channel (partner-sourced vs. direct). Separately, it reads recent activity from configured Slack channels to capture qualitative signals about deal movement and team conversations.
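To make that pull concrete, here's a minimal sketch of what assembling the weekly SOQL query might look like. The field names follow standard Salesforce conventions; the exact fields and helper name are my illustration, not the app's actual code.

```typescript
// Hypothetical sketch: build the SOQL query for the weekly Opportunity pull.
// SOQL date literals are unquoted, so the week bounds are interpolated as-is.
function buildOpportunityQuery(weekStart: string, weekEnd: string): string {
  const fields = [
    "Id", "Name", "StageName", "ForecastCategory", "Amount",
    "CloseDate", "LeadSource", "Owner.Name",
  ];
  return (
    `SELECT ${fields.join(", ")} FROM Opportunity ` +
    `WHERE CloseDate >= ${weekStart} AND CloseDate <= ${weekEnd}`
  );
}
```

A similar query against OpportunityPartner would handle the channel classification.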

Metrics computation. The raw data is processed into structured, frozen metrics: closed won and closed lost counts and revenue broken out by channel, new pipeline created, pipeline by rep, and week-over-week comparisons. Each digest is a snapshot of that week's state.
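The week-over-week comparison is the part that used to bite me in the copy-paste era, so here's an illustrative sketch of that step. The shape of the metrics record is an assumption, not the app's actual schema.

```typescript
// Illustrative week-over-week comparison. Returns null for a percentage
// when the prior week's value is zero, rather than dividing by zero.
interface WeeklyMetrics {
  closedWonCount: number;
  closedWonRevenue: number;
  newPipeline: number;
}

function weekOverWeek(current: WeeklyMetrics, prior: WeeklyMetrics) {
  const pct = (cur: number, prev: number) =>
    prev === 0 ? null : Math.round(((cur - prev) / prev) * 1000) / 10;
  return {
    closedWonCountDelta: current.closedWonCount - prior.closedWonCount,
    closedWonRevenuePct: pct(current.closedWonRevenue, prior.closedWonRevenue),
    newPipelinePct: pct(current.newPipeline, prior.newPipeline),
  };
}
```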

Commentary generation. Metrics and Slack signals are sent to OpenAI's Responses API with a system prompt that enforces structured JSON output. The LLM produces commentary organized into wins, risks, and action items, each with citation links back to the original Slack messages so we can jump directly into a conversation if something needs follow-up. The AI commentary is never published automatically. It's a draft starting point.
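For readers unfamiliar with structured outputs: the Responses API can be given a JSON schema that the model's reply must conform to. Here's a hedged sketch of what such a schema could look like for this digest. The field names (wins/risks/actionItems, slackPermalink) are my guesses at the shape, not the app's actual contract.

```typescript
// Hypothetical JSON schema for the commentary response. Each item carries a
// Slack permalink so the published digest can cite its source conversation.
const commentarySchema = {
  type: "object",
  properties: {
    wins: { type: "array", items: { $ref: "#/$defs/item" } },
    risks: { type: "array", items: { $ref: "#/$defs/item" } },
    actionItems: { type: "array", items: { $ref: "#/$defs/item" } },
  },
  required: ["wins", "risks", "actionItems"],
  additionalProperties: false,
  $defs: {
    item: {
      type: "object",
      properties: {
        text: { type: "string" },
        slackPermalink: { type: "string" },
      },
      required: ["text", "slackPermalink"],
      additionalProperties: false,
    },
  },
};
```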

Human review and approval. This is the part that matters. The generated digest enters a review workflow. I review the data and commentary for accuracy. Kelli, our VP of Sales, does the same. Either of us can edit the AI-generated commentary before approving. The system tracks each approver independently. Both must approve before the digest can be published. This is a real workflow with real gates, not a rubber stamp.
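The gate logic itself is simple, which is the point: the publish button only unlocks when both tracked approvals exist. A sketch, using the role names from this post (the record shape is an assumption):

```typescript
// Both the RevOps approver and the growth manager must have approved
// before a digest run can be published.
interface Approval {
  role: "revops_approver" | "growth_manager";
  approved: boolean;
}

function canPublish(approvals: Approval[]): boolean {
  const ok = (role: Approval["role"]) =>
    approvals.some((a) => a.role === role && a.approved);
  return ok("revops_approver") && ok("growth_manager");
}
```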

The approval workflow: RevOps and Growth Manager approvals are both pending, and publish is blocked until both clear.

Publication and notification. Once both approvals are in, the digest is published to a company Slack channel. Published digests are also accessible through the web UI, authenticated via Google OAuth restricted to Harper email addresses.

A published digest as seen by a company-wide reader: executive snapshot, AI-generated commentary with Slack citation links, and weekly revenue breakdowns.

Access controls. Five roles: company_reader, growth_editor, revops_approver, growth_manager, and admin. Readers only see published digests. Unpublished runs return a 404 for readers. They don't exist until they're ready. Editors and approvers see drafts and can act on them. Role assignments are stored in a Harper table and enforced server-side on every request.
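The "drafts don't exist for readers" behavior comes down to returning a 404 instead of a 403, so a reader can't even confirm a draft exists. A sketch of that rule, with the role and status names from this post (the function itself is illustrative):

```typescript
// Server-side visibility rule: published runs are visible to everyone;
// unpublished runs return 404 for company readers, as if they don't exist.
type Role =
  | "company_reader" | "growth_editor" | "revops_approver"
  | "growth_manager" | "admin";
type RunStatus = "draft" | "approved" | "published";

function visibleStatusCode(role: Role, status: RunStatus): number {
  if (status === "published") return 200;
  return role === "company_reader" ? 404 : 200;
}
```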


The Architecture: One Runtime, Not a Rube Goldberg Machine

This is where Harper's value as a platform becomes concrete.

The entire application is a single Harper application. The database, the API, the web UI, the scheduler: they all ship together and deploy together. Here's what's inside:

Eight tables define the data model: DigestRun, DigestMetrics, DigestCommentary, DigestNote, DigestApproval, DigestPublication, DigestRoleMapping, and DigestCommentaryRevision. These are declared in schema files. When the application deploys, the tables exist. No external database. No migration scripts. No connection strings. No ORM. Related tables are linked via indexed fields like runId, so queries across the data model are fast and consistent. The application code interacts with these tables through Harper's Resource class, a unified API where your database tables, business logic, and HTTP endpoints are all part of the same runtime. There's no separate database driver, no connection pool, no serialization layer between your code and your data. This matters for AI-generated code because the agent doesn't have to wire together disconnected services. It writes application logic against a single integrated platform, and that code is production-ready and performant immediately.
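If you've never seen Harper's schema files, "declare it and the table exists" looks roughly like this. This is a hedged sketch of two of the eight tables in Harper's GraphQL-style schema convention; the field names are illustrative, not the app's actual definitions.

```
type DigestRun @table @export {
  id: ID @primaryKey
  weekKey: String @indexed
  status: String
}

type DigestApproval @table {
  id: ID @primaryKey
  runId: ID @indexed
  role: String
  approved: Boolean
}
```

The `@indexed` directive on fields like `runId` is what makes the cross-table queries mentioned above fast.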

A TypeScript service layer handles the digest lifecycle: generation, metrics computation, commentary requests, approval state derivation, publish-blocker validation, and data quality checks. Business rules live here. For example, you cannot generate a new digest for a week that already has a published one.
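That duplicate-week rule makes a good example of what "business rules live here" means in practice. A sketch under hypothetical names; in the real app this check runs against the DigestRun table:

```typescript
// Guard: refuse to generate a new digest for a week that already has a
// published run. Throwing here surfaces as a blocked action in the UI.
interface ExistingRun {
  weekKey: string;
  status: string;
}

function assertWeekIsOpen(weekKey: string, runs: ExistingRun[]): void {
  const published = runs.some(
    (r) => r.weekKey === weekKey && r.status === "published"
  );
  if (published) {
    throw new Error(`A published digest already exists for week ${weekKey}`);
  }
}
```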

Three external connectors: Salesforce (JWT OAuth with RS256 signing, SOQL queries against Opportunity and OpportunityPartner objects), Slack (Web API with channel discovery, bounded concurrency, and thread fetching), and OpenAI (Responses API with structured JSON schema enforcement, citation sanitization, and configurable retry budgets).
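For the curious, the Salesforce JWT bearer flow boils down to signing a small claim set with the connected app's private key. Here's a sketch of just the claim-building step (the RS256 signing and token exchange are omitted); the claim names are Salesforce's documented ones, but the helper itself is hypothetical.

```typescript
// Claim set for Salesforce's OAuth 2.0 JWT bearer flow. This object gets
// RS256-signed and exchanged for an access token.
function buildSalesforceJwtClaims(
  consumerKey: string,
  username: string,
  nowSeconds: number
) {
  return {
    iss: consumerKey, // the connected app's consumer key
    sub: username, // the integration user
    aud: "https://login.salesforce.com",
    exp: nowSeconds + 180, // Salesforce caps assertion validity at a few minutes
  };
}
```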

A scheduler with two jobs: a weekly cron trigger for Friday morning digest generation, and a periodic retry loop for any LLM commentary requests that failed due to timeouts or rate limits. The scheduler includes leader-node gating. In a replicated Fabric deployment across multiple nodes, only one instance runs the scheduler. No duplicate digests.
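The leader-gating idea is worth a sketch, because it's the one concession the app makes to running on multiple nodes. The simplest deterministic rule, shown here, is "lowest node ID among known healthy nodes fires the job." That rule is my illustration; Harper's actual election mechanism may differ.

```typescript
// Leader gate: in a replicated deployment, every node evaluates this before
// running a scheduled job, and only the elected leader proceeds.
function isSchedulerLeader(selfId: string, healthyNodeIds: string[]): boolean {
  if (healthyNodeIds.length === 0) return false;
  return [...healthyNodeIds].sort()[0] === selfId;
}
```

Every node runs the cron trigger, but only the node for which this returns true generates the digest, so there are no duplicates.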

Server-rendered HTML for the web interface. A dashboard listing all digest runs, detail views for each, and action endpoints for generate, approve, publish, and edit operations. No frontend framework. No separate build step.

A CI/CD pipeline built entirely by Codex using GitHub Actions. It runs strict typechecking, tests, a regex-based secrets scan, and a guard against raw environment variable usage in code. Every time we merge to main, it automatically deploys to our production Harper Fabric cluster and runs a post-deploy health check against the live API. I didn't configure any of this. Codex built the whole pipeline, and now I don't have to think about DevOps either.
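A regex-based secrets scan sounds fancier than it is. Here's a minimal sketch of the idea; the patterns below are common illustrative examples (AWS key IDs, Slack tokens, inline private keys), not the pipeline's actual rules.

```typescript
// Minimal secrets scan: fail the build if any source file matches a
// known credential pattern.
const secretPatterns: RegExp[] = [
  /AKIA[0-9A-Z]{16}/, // AWS access key ID
  /xox[baprs]-[0-9A-Za-z-]{10,}/, // Slack token
  /-----BEGIN [A-Z ]*PRIVATE KEY-----/, // inline private key
];

function scanForSecrets(source: string): boolean {
  return secretPatterns.some((p) => p.test(source));
}
```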

Here's how the data flows:

Data flow diagram showing the weekly digest pipeline: from scheduled trigger through Salesforce and Slack data collection, metrics computation, OpenAI commentary generation, Harper table persistence, output channels, human review, and publication.

Now consider what this would look like if I'd let an AI coding tool pick a typical stack:

A Node.js or Python backend framework. A managed PostgreSQL instance. An ORM. A frontend framework with its own build pipeline. A reverse proxy. A hosting provider for the API. A different hosting solution for the frontend. A managed cron service or sidecar process for scheduled jobs. A secrets manager. Probably Redis for session management. Maybe a container orchestration layer to tie it all together. Each one of those is a configuration surface, a potential point of failure, and something I'd need to understand and maintain.

With Harper, the application is the whole thing. Database, server, scheduler, auth foundation, deployment target. It's one runtime. That's not a marketing claim. That's what made it possible for me to build this in a few days.


How I Built It: Vibe Coding for Business Operators

I used OpenAI Codex as my implementation partner. Let me be precise about what that means.

I did: Requirements definition. System design. Architecture decisions. Data model design. Integration specification: which Salesforce objects to query, which Slack channels to read, what the approval workflow should be, what the role model should look like. Acceptance testing. Best-practice review using Harper Agent.

Codex did: All code. Every line of TypeScript, every schema, every route handler, every connector, every CI workflow, every test.

I did not: Write code. Debug code. Configure deployment. Set up CI/CD.

The working pattern was iterative. I'd switch between Codex's planning mode and coding mode constantly. In planning mode, I'd describe what I wanted, sometimes high-level ("We need a Salesforce connector that uses JWT OAuth and pulls Opportunity and Partner data"), sometimes very specific ("Add a guard that prevents generating a new digest for a week that already has a published run. No new routes. No schema changes. Redirect with structured flash data for the dashboard."). Codex would implement it.

After each implementation pass, I'd feed the output to Harper Agent for best-practice review against Harper's application conventions. Harper Agent would flag issues: patterns that didn't align with how Harper resources should be structured, configuration that could be cleaner. I'd take that feedback and send Codex back in with specific corrections. Harper is currently building out Harper Agent to handle more of this end-to-end, but even today this feedback loop worked well.

This is the part of building software I've always done well: designing systems, defining requirements, making architecture decisions, evaluating tradeoffs. The part I don't want to do, and historically couldn't do without engineering support, is the implementation labor: the typing, the integration wiring, the debugging. Codex handled that. And because it was building against Harper's unified runtime, the output was deployable from day one. No glue code. No service orchestration. No separate infrastructure to configure.

Harper's npm create harper@latest scaffold was the starting point. That gives you a project structure with schema files, resource definitions, configuration, and importantly, a skills/ directory that grounds the AI agent in Harper's architecture patterns. Codex consumed those skills files as context, which meant it wasn't guessing at how to structure a Harper application. It had the conventions built into its working memory.

To give you a sense of what that first conversation looked like, here's a prompt similar to the one I used to kick off the project. I uploaded my existing Google Doc digest as a reference so Codex could see exactly what the output should look like, then described what I wanted to build:

I want to build a Harper application that replaces this manual Google Doc "Weekly Digest" report (attached). The app should pull data from Salesforce and Slack automatically, compute the same metrics I've been assembling by hand, and generate a weekly digest. Build it as a Harper Application (https://docs.harperdb.io/docs/developers/applications), ensuring you use the Harper Resource class (https://docs.harperdb.io/docs/reference/resources). Start with the create-harper template (https://www.npmjs.com/package/create-harper). The app needs role-based auth, where company-wide readers can only see published digests. An approval workflow requiring two approvers before publish. OpenAI integration for drafting commentary from the Slack and Salesforce data. Scheduled generation on Fridays plus a manual trigger. A dashboard, digest detail view, and ops page. Strong environment validation, secure cookies, structured logs. CI pipeline with secrets scanning, strict typechecking, and tests. Once we get a version of the app working locally, we will be deploying to Harper Fabric (https://docs.harperdb.io/fabric). Let's work together and build a plan to implement this solution. We will plan to work iteratively and add features as we go.

That's it. That prompt, the attached Google Doc as a reference, and the skills files Harper provides for AI context were enough for Codex to scaffold the entire project structure and start building. From there, every feature was a conversation: "Add the Salesforce connector," "Wire up the approval gates," "Build the CI pipeline." Each one a sentence or two of intent, followed by Codex executing and me reviewing.

For context: this was my second Harper application built with Codex. The first was a personal project I created just to learn the workflow. The moment I saw how it worked, I realized I should be building our actual internal tooling this way, everything running on a single Harper backend that scales through Fabric without me thinking about infrastructure.


What Harper Made Possible

Let me be specific about what Harper provided versus what I would have had to solve myself on any other platform.

No database to provision. Schemas declared in files. Deploy the application, tables exist. Eight tables with indexes. No DBA, no managed instance, no connection pooling.

No application server to configure. Harper serves HTTP routes natively, both the web UI and the JSON API. No Express, no nginx, no port management.

No deployment pipeline to architect. I pointed Codex at the Fabric documentation and told it to handle deployment. It built a GitHub Actions pipeline that deploys to our production Harper Fabric cluster with rolling restart across replicated nodes, runs a post-deploy health check, and confirms the API is responding. Every merge to main triggers it automatically. I have never manually deployed this application.

No scaling to think about. Fabric handles replication. The only concession to multi-node was the leader-node scheduler gating to prevent duplicate scheduled jobs, and that was a straightforward pattern. I don't manage infrastructure. I don't think about infrastructure.

Auth handled by the platform. Harper has an OAuth plugin that handles OAuth 2.0 and OpenID Connect authentication out of the box, with support for Google, GitHub, Azure AD, and other providers. The plugin manages the OAuth flow, token refresh, session integration, and CSRF protection. On top of that, the application layer enforces role-based authorization checked server-side on every request. Having the database and the application in the same runtime made this simple. Role mappings are just another table, checked in the same process that serves the request.

Mobile-friendly because I asked for it. I told Codex I wanted the UI to work on phones. It built a responsive layout. That's it. No separate mobile project, no responsive framework to configure. Our executives do a lot of work on their phones, especially when traveling, and the old Google Doc was painful to read on a small screen. With Codex building on Harper, "make it mobile-friendly" was a sentence in a conversation, not a sprint.

The same digest on mobile: executive snapshot, commentary, and revenue tables all responsive.

The critical point: what I built locally is what runs in production. There is no gap. I run npm run dev locally, iterate with Codex, and when it's ready, the same application deploys to Fabric. One platform. One set of concerns. One thing to understand.

This is what makes it viable for a business operator to ship production software. Not because the coding is easier (Codex handles the coding), but because the operational surface area is small enough that a non-engineer can reason about the whole system. I don't need to understand Kubernetes, or Terraform, or how to wire a PostgreSQL connection pool. I need to understand Harper, and Harper is one thing.


What's Running Today

The application is live in production on Harper Fabric. It went live in mid-February 2026. The system now automatically produces a new digest every Friday. As of this writing, it has produced eight digests. That includes backfilled digests for the weeks I missed at the start of the year, which the system generated retroactively once it was live.

The weekly process now works like this: the app generates a digest on Friday morning. Kelli and I get a Slack notification. We review the metrics and commentary, make edits, and approve. When both approvals clear, the digest publishes to a company Slack channel and is available in the web UI for anyone to read. The review step takes a few minutes.

The time savings are real, at least 30 minutes of manual work eliminated every week, but that understates the actual impact. The real win is that the digest gets produced now. Every week. Automatically. Before this, the manual effort meant it was always the first thing I'd cut when the week got busy. That's not a time-savings story. That's a "the process actually works now" story.

I also plan to keep iterating. Our CEO has already discussed combining this with other internal tools the team is building on Harper into a single unified internal operations platform. In fact, the moment he saw this tool running, he asked me to write this blog post. (So here we are.) That convergence is natural because all these applications share the same data layer and deployment model. No integration work required. They're already on the same backend.

This is the part that I think gets missed in the "build vs. buy" conversation. When you buy a SaaS tool, you get one solution for one problem, and the next problem requires a new vendor. When you build on a platform like Harper, the first application is the hardest. The second one is easier because the platform is already there, the deployment model is already running, and your team already knows how it works. Harper is the only platform where coding agents can build and deploy enterprise-grade applications end to end, and once you have that foundation, every internal tool you need is just another application on the same infrastructure. The weekly digest was my first internal tool. It won't be my last.


The Math That Changed

Let me frame this for every RevOps operator, sales ops lead, or business systems person reading this.

You probably have a process right now that works like my old digest. Valuable output, tedious assembly, always at risk of being deprioritized. You've probably looked at SaaS tools to automate it and found them too expensive, too rigid, or too limited. You've probably thought about asking engineering to build something and decided it wasn't worth the political capital or the wait.

The math has changed. Here's why:

AI coding tools can now write production-quality code when given clear requirements and a well-structured platform to build against. The bottleneck was never the coding. It was the operational complexity of deploying and maintaining what the code produces. If your AI tool generates a great application but it requires you to manage a database, a web server, a cron service, a CDN, and a container orchestrator, you haven't saved yourself anything. You've just traded one kind of complexity for another.

Harper collapses that complexity. One runtime. One deployment target. One thing to maintain. Your database, your API, your frontend, your scheduled jobs, your auth: they're all the same application. When you deploy to Fabric, it scales. When you iterate locally, you're working against the same system that runs in production.

That's what makes it possible for someone like me, an operations person who understands systems but doesn't write code, to build and maintain production internal software. Not a prototype. Not a demo. A real application with enterprise integrations, approval workflows, role-based access control, CI/CD, and automated deployment.

The alternative is buying more SaaS. Another subscription. Another vendor. Another tool that does 70% of what you need and requires workarounds for the rest. Another integration to maintain. Another thing that breaks when the vendor pushes an update you didn't ask for.

Or you can build exactly what you need, on a platform you control, and ship it to production in days.

I know which one I'd pick. I just did.


Jake Cohen is Senior Director of Commercial Operations at Harper, where he has spent over eight years across Solutions Architecture, Product Management, Engineering Leadership, and Commercial Operations. He holds a B.S. in Computer Engineering from George Mason University. You can find him on LinkedIn.

To start building on Harper, visit harper.fast/vibe or run npm create harper@latest.
