<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jacob Cohen</title>
    <description>The latest articles on DEV Community by Jacob Cohen (@jacob_b_cohen).</description>
    <link>https://dev.to/jacob_b_cohen</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F385670%2F95679433-3d2b-4ecd-ab2e-a1a99099df46.jpg</url>
      <title>DEV Community: Jacob Cohen</title>
      <link>https://dev.to/jacob_b_cohen</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jacob_b_cohen"/>
    <language>en</language>
    <item>
      <title>I Stopped Buying SaaS for RevOps. I Built What I Needed on Harper Instead.</title>
      <dc:creator>Jacob Cohen</dc:creator>
      <pubDate>Mon, 02 Mar 2026 21:22:58 +0000</pubDate>
      <link>https://dev.to/harperfast/i-stopped-buying-saas-for-revops-i-built-what-i-needed-on-harper-instead-372e</link>
      <guid>https://dev.to/harperfast/i-stopped-buying-saas-for-revops-i-built-what-i-needed-on-harper-instead-372e</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;TL;DR: A non-engineer RevOps leader replaced a manual, error-prone reporting process by using OpenAI Codex to build and deploy a fully automated, production-grade Growth Weekly Digest application on the Harper platform.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;I run commercial operations at a startup. That means Salesforce administration, deal desk, forecasting, pipeline management, contract workflows, and a dozen other things that keep our revenue engine running. I am not a software engineer. My day job is operations.&lt;/p&gt;

&lt;p&gt;Last month, I built and deployed a production internal application that pulls live data from Salesforce and Slack, generates AI-assisted commentary, routes it through a multi-role approval workflow, and publishes a weekly growth digest for our entire company. It has eight database tables, five user roles, three external integrations, a scheduler with leader-node election, retry logic, CI/CD with automated testing, and it runs on a scaled Harper Fabric deployment. I built it in a few days using OpenAI Codex as my coding partner. I didn't write a single line of code myself.&lt;/p&gt;

&lt;p&gt;What does "built" mean when you don't write code? It means I spent a few days having a conversation. I described what I needed, Codex planned and executed, and we iterated together until it was right. I'd explain a workflow, review its approach, push back when something didn't fit, ask it what common RevOps patterns look like, and refine until I was satisfied with the plan. Then it wrote the code. I did the thinking. It did the typing. That's the entire method, and it's why this post isn't a tutorial. There's no code to walk through. There's a conversation that produced a production application.&lt;/p&gt;

&lt;p&gt;This post is about how I did it, why I did it on Harper instead of buying another SaaS tool, and why I think this changes the math for every business operator who's currently stuck choosing between manual processes and overpriced software subscriptions.&lt;/p&gt;




&lt;h2&gt;
  
  
  The RevOps Problem Nobody Talks About
&lt;/h2&gt;

&lt;p&gt;If you work in RevOps, you know this situation. You have a process that's valuable, but tedious. It requires pulling data from countless, ever-changing systems, assembling it into something digestible, adding your own analysis, and distributing it. It takes 30 minutes to an hour. It's not hard, it's just manual enough that it consistently loses priority against everything else you're doing.&lt;/p&gt;

&lt;p&gt;For me, that process was our Growth Weekly Digest.&lt;/p&gt;

&lt;p&gt;Every Friday for the better part of a year, I produced a report that tracked our weekly sales activity: closed won and closed lost deals, new pipeline created, pipeline by rep and channel, BDR metrics, and narrative commentary about what happened and what to watch. The audience started as our executive team, expanded to team leads, and eventually the whole company. Our product management team relies on it to keep a pulse on sales activity, which is a critical input for product prioritization and roadmap decisions. People across the company found it valuable because it gave them visibility into the growth engine without having to dig through Salesforce or chase down updates.&lt;/p&gt;

&lt;p&gt;The process to create it was entirely manual.&lt;/p&gt;

&lt;p&gt;I'd start by pulling up Salesforce reports and dashboards. I had a few dashboards set up with Salesforce's subscribe feature to email me at 9 AM as a trigger to get started. Then I'd extract the numbers and copy them into a Google Doc: deal counts, revenue, pipeline breakdowns. You know the drill. That copy-and-paste step alone took ten minutes minimum and was highly error-prone. One wrong cell and the whole week's narrative would be based on bad data. This happened more often than I'd like to admit. The whole process violated one of my favorite rules: never introduce human error when you don't have to. I knew that copying and pasting numbers between systems was a terrible workflow, but I did it anyway because there wasn't a better option.&lt;/p&gt;

&lt;p&gt;Then came the commentary. The numbers alone don't tell the story. I needed to explain which deals advanced, which stalled, what happened in customer conversations, and what risks were emerging. To write that, I'd piece together what I remembered from the week, scan Slack conversations, check meeting notes, and sometimes use Slack AI to help me summarize activity. The whole thing took 30 to 45 minutes, depending on how much digging was required.&lt;/p&gt;

&lt;p&gt;It doesn't sound like much. Here's the real cost: I stopped doing it.&lt;/p&gt;

&lt;p&gt;When the year turned over and we were finalizing 2026 quotas, I paused the digest. We were well into Q1 before I confronted the fact that I didn't want to go back to the manual grind, even though people found the report valuable. At a startup, everyone wears multiple hats. I do far more than RevOps in a given week. The digest was always the first thing to get deprioritized. &lt;em&gt;(Whatever you do, don't tell my boss, even though she's probably already forwarded this post to everyone she knows.)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So for weeks, the report just didn't get written.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why I Didn't Buy Another SaaS Tool
&lt;/h2&gt;

&lt;p&gt;The obvious move would have been to buy software for this. Here's why I didn't.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Salesforce dashboards&lt;/strong&gt; aren't robust enough for this kind of consolidated weekly reporting, and more importantly, they require paid Salesforce licenses for every viewer. If you've ever priced Salesforce seats, you know the pain. The per-user licensing costs add up fast, and we already carry more licenses than we'd like. We are a cost-conscious startup and we absolutely do not get our money's worth out of every seat we hold. There was no world in which we were going to buy CRM licenses for the entire company just so people could read a weekly summary.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Databox&lt;/strong&gt; was something we used for other executive-level dashboards that provided similar, though not identical, value. It worked for a while, but we ran into issues with their Salesforce connector that broke our data pipeline, and even when it was working, it didn't do everything we needed it to. That was the end of Databox for us.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tableau&lt;/strong&gt; was so comically expensive for our use cases that it wasn't even worth a serious conversation, even though we could have used it for other reporting needs too.&lt;/p&gt;

&lt;p&gt;This is the pattern every RevOps person knows. The SaaS options are either too limited, too expensive, too rigid, or some combination of all three. And every tool you add is another vendor, another login, another integration to maintain, another line item for Finance to question. The thing you actually need is always slightly different from what the tool provides, so you end up building workarounds on top of the tool, at which point you're doing custom work anyway, just on someone else's platform.&lt;/p&gt;

&lt;p&gt;I realized I could spend a few days building exactly what I needed, with real integrations, real auth, real deployment, on Harper. And it would require less ongoing effort than re-establishing the manual process I'd abandoned. No point solution. No per-seat reporting license. No compromises on what the report should contain or who can see it. Harper is a platform, not a tool. The weekly digest is one application on it. The next one I build runs on the same infrastructure, the same deployment model, the same data layer. That's a fundamentally different cost equation than buying a new SaaS product for every internal need.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;The Growth Weekly Digest is now a Harper &lt;a href="https://docs.harperdb.io/docs/developers/applications" rel="noopener noreferrer"&gt;application&lt;/a&gt;. It runs on &lt;a href="https://fabric.harper.fast/" rel="noopener noreferrer"&gt;Harper Fabric&lt;/a&gt; with its own database, API, web interface, and scheduled jobs, all in one deployable unit.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: All screenshots in this post use mock data. Our actual digest contains proprietary sales information.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fru6gdconf3kztokq5p1v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fru6gdconf3kztokq5p1v.png" alt="The digest dashboard showing runs at various workflow stages: draft, approved, and published." width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's what it does each week:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data collection.&lt;/strong&gt; On a scheduled trigger (Friday mornings), the application queries Salesforce via JWT-authenticated API calls. It pulls Opportunity data: deal names, stages, forecast categories, amounts, ARR fields, close dates, lead sources, and rep assignments. It also queries OpportunityPartner objects to classify deals by channel (partner-sourced vs. direct). Separately, it reads recent activity from configured Slack channels to capture qualitative signals about deal movement and team conversations.&lt;/p&gt;
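
&lt;p&gt;To make that concrete, here's a rough sketch of what a weekly Opportunity pull can look like. This isn't the app's actual connector (that code isn't shown in this post): the field list, API version, and query shape are illustrative, and the real implementation obtains its token through the JWT bearer flow rather than taking one as an argument.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Illustrative sketch only, not the app's real Salesforce connector.
// Assumes an access token already obtained via the JWT bearer flow.
async function fetchClosedOpportunities(instanceUrl: string, accessToken: string, weekStart: string) {
  const soql = [
    "SELECT Name, StageName, Amount, CloseDate, LeadSource, Owner.Name",
    "FROM Opportunity",
    `WHERE IsClosed = true AND CloseDate &gt;= ${weekStart}`,
  ].join(" ");
  const res = await fetch(
    `${instanceUrl}/services/data/v59.0/query?q=${encodeURIComponent(soql)}`,
    { headers: { Authorization: `Bearer ${accessToken}` } }
  );
  if (!res.ok) throw new Error(`Salesforce query failed with status ${res.status}`);
  const body: any = await res.json();
  return body.records; // one record per Opportunity row
}
&lt;/code&gt;&lt;/pre&gt;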

&lt;p&gt;&lt;strong&gt;Metrics computation.&lt;/strong&gt; The raw data is processed into structured, frozen metrics: closed won and closed lost counts and revenue broken out by channel, new pipeline created, pipeline by rep, and week-over-week comparisons. Each digest is a snapshot of that week's state.&lt;/p&gt;
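
&lt;p&gt;"Frozen" here just means the numbers are computed once and stored with the run rather than recalculated later. As a simplified illustration (the app's real metrics model is richer than this), the computation reduces the raw records to a plain snapshot object:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Hypothetical shapes and field names, for illustration only.
interface WeeklyMetrics {
  closedWonCount: number;
  closedWonRevenue: number;
  newPipelineCreated: number;
  weekOverWeekRevenueDelta: number;
}

function computeMetrics(
  deals: { amount: number; isWon: boolean; isNewPipeline: boolean }[],
  priorWeekRevenue: number
): WeeklyMetrics {
  const won = deals.filter((d) =&gt; d.isWon);
  const closedWonRevenue = won.reduce((sum, d) =&gt; sum + d.amount, 0);
  return {
    closedWonCount: won.length,
    closedWonRevenue,
    newPipelineCreated: deals
      .filter((d) =&gt; d.isNewPipeline)
      .reduce((sum, d) =&gt; sum + d.amount, 0),
    weekOverWeekRevenueDelta: closedWonRevenue - priorWeekRevenue,
  };
}
&lt;/code&gt;&lt;/pre&gt;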

&lt;p&gt;&lt;strong&gt;Commentary generation.&lt;/strong&gt; Metrics and Slack signals are sent to OpenAI's Responses API with a system prompt that enforces structured JSON output. The LLM produces commentary organized into wins, risks, and action items, each with citation links back to the original Slack messages so we can jump directly into a conversation if something needs follow-up. The AI commentary is never published automatically. It's a draft starting point.&lt;/p&gt;
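
&lt;p&gt;For readers curious what "structured JSON output" means against the Responses API, here's a hedged sketch. The prompt, schema, and model choice are invented for illustration, and the exact option shape can vary by SDK version; the real app also sanitizes citations and wraps this call in a retry budget.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import OpenAI from "openai";

// Illustrative only: not the app's actual prompt or schema.
const client = new OpenAI();

async function draftCommentary(metricsSummary: string, slackSignals: string) {
  const response = await client.responses.create({
    model: "gpt-4o",
    input: [
      { role: "system", content: "You write weekly sales commentary. Respond with JSON only." },
      { role: "user", content: `Metrics:\n${metricsSummary}\n\nSlack signals:\n${slackSignals}` },
    ],
    text: {
      format: {
        type: "json_schema",
        name: "digest_commentary",
        strict: true,
        schema: {
          type: "object",
          properties: {
            wins: { type: "array", items: { type: "string" } },
            risks: { type: "array", items: { type: "string" } },
            actionItems: { type: "array", items: { type: "string" } },
          },
          required: ["wins", "risks", "actionItems"],
          additionalProperties: false,
        },
      },
    },
  });
  return JSON.parse(response.output_text); // draft commentary, pending human review
}
&lt;/code&gt;&lt;/pre&gt;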

&lt;p&gt;&lt;strong&gt;Human review and approval.&lt;/strong&gt; This is the part that matters. The generated digest enters a review workflow. I review the data and commentary for accuracy. Kelli, our VP of Sales, does the same. Either of us can edit the AI-generated commentary before approving. The system tracks each approver independently. Both must approve before the digest can be published. This is a real workflow with real gates, not a rubber stamp.&lt;/p&gt;
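
&lt;p&gt;The gating logic itself is small. Something along these lines (a sketch, not the app's actual service code) is all it takes to decide whether publishing is unblocked:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Sketch of the two-approver gate; role names match the ones listed below.
type ApproverRole = "revops_approver" | "growth_manager";

interface Approval {
  role: ApproverRole;
  approvedAt?: string; // ISO timestamp, set once that approver signs off
}

function canPublish(approvals: Approval[]): boolean {
  const required: ApproverRole[] = ["revops_approver", "growth_manager"];
  // Every required role must have an approval on record; either approver can also edit first.
  return required.every((role) =&gt;
    approvals
      .filter((a) =&gt; a.role === role)
      .some((a) =&gt; Boolean(a.approvedAt))
  );
}
&lt;/code&gt;&lt;/pre&gt;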

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhr4ic290yfhh2vsak5ic.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhr4ic290yfhh2vsak5ic.png" alt="The approval workflow: RevOps and Growth Manager approvals are both pending, and publish is blocked until both clear." width="800" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Publication and notification.&lt;/strong&gt; Once both approvals are in, the digest is published to a company Slack channel. Published digests are also accessible through the web UI, authenticated via Google OAuth restricted to Harper email addresses.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8odx13zez1s42vr8q11t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8odx13zez1s42vr8q11t.png" alt="A published digest as seen by a company-wide reader: executive snapshot, AI-generated commentary with Slack citation links, and weekly revenue breakdowns." width="800" height="523"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Access controls.&lt;/strong&gt; Five roles: &lt;code&gt;company_reader&lt;/code&gt;, &lt;code&gt;growth_editor&lt;/code&gt;, &lt;code&gt;revops_approver&lt;/code&gt;, &lt;code&gt;growth_manager&lt;/code&gt;, and &lt;code&gt;admin&lt;/code&gt;. Readers only see published digests. Unpublished runs return a 404 for readers. They don't exist until they're ready. Editors and approvers see drafts and can act on them. Role assignments are stored in a Harper table and enforced server-side on every request.&lt;/p&gt;
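
&lt;p&gt;The "unpublished runs don't exist for readers" rule boils down to a server-side check like this (illustrative, not the actual route handler):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Sketch of the reader visibility rule; names are illustrative.
const ELEVATED_ROLES = ["growth_editor", "revops_approver", "growth_manager", "admin"];

function assertCanViewRun(userRoles: string[], run: { status: string }) {
  const isReaderOnly = userRoles.every((r) =&gt; !ELEVATED_ROLES.includes(r));
  if (isReaderOnly) {
    if (run.status !== "published") {
      // Readers get a 404 rather than a 403: drafts simply don't exist for them.
      const err: any = new Error("Not found");
      err.statusCode = 404;
      throw err;
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;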




&lt;h2&gt;
  
  
  The Architecture: One Runtime, Not a Rube Goldberg Machine
&lt;/h2&gt;

&lt;p&gt;This is where Harper's value as a platform becomes concrete.&lt;/p&gt;

&lt;p&gt;The entire application is a single Harper application. The database, the API, the web UI, the scheduler: they all ship together and deploy together. Here's what's inside:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Eight tables&lt;/strong&gt; define the data model: &lt;code&gt;DigestRun&lt;/code&gt;, &lt;code&gt;DigestMetrics&lt;/code&gt;, &lt;code&gt;DigestCommentary&lt;/code&gt;, &lt;code&gt;DigestNote&lt;/code&gt;, &lt;code&gt;DigestApproval&lt;/code&gt;, &lt;code&gt;DigestPublication&lt;/code&gt;, &lt;code&gt;DigestRoleMapping&lt;/code&gt;, and &lt;code&gt;DigestCommentaryRevision&lt;/code&gt;. These are declared in &lt;a href="https://docs.harperdb.io/docs/developers/applications/defining-schemas" rel="noopener noreferrer"&gt;schema files&lt;/a&gt;. When the application deploys, the tables exist. No external database. No migration scripts. No connection strings. No ORM. Related tables are linked via indexed fields like &lt;code&gt;runId&lt;/code&gt;, so queries across the data model are fast and consistent. The application code interacts with these tables through Harper's &lt;a href="https://docs.harperdb.io/docs/reference/resources" rel="noopener noreferrer"&gt;Resource class&lt;/a&gt;, a unified API where your database tables, business logic, and HTTP endpoints are all part of the same runtime. There's no separate database driver, no connection pool, no serialization layer between your code and your data. This matters for AI-generated code because the agent doesn't have to wire together disconnected services. It writes application logic against a single integrated platform, and that code is production-ready and performant immediately.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A TypeScript service layer&lt;/strong&gt; handles the digest lifecycle: generation, metrics computation, commentary requests, approval state derivation, publish-blocker validation, and data quality checks. Business rules live here. For example, you cannot generate a new digest for a week that already has a published one.&lt;/p&gt;
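
&lt;p&gt;To give a flavor of what a rule like that looks like against the Resource class, here's a hedged sketch. The table-as-class access pattern follows Harper's documented Resource API, but the query shape and field names are illustrative rather than copied from the app:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Sketch only: assumes a DigestRun table with indexed weekStart and status fields.
declare const tables: any; // provided by the Harper runtime in applications

async function assertWeekIsOpen(weekStart: string) {
  // tables.DigestRun is the Resource class Harper generates from the schema file.
  const existing = tables.DigestRun.search({
    conditions: [
      { attribute: "weekStart", value: weekStart },
      { attribute: "status", value: "published" },
    ],
  });
  for await (const run of existing) {
    throw new Error(`Run ${run.id} is already published for the week of ${weekStart}`);
  }
}
&lt;/code&gt;&lt;/pre&gt;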

&lt;p&gt;&lt;strong&gt;Three external connectors&lt;/strong&gt;: Salesforce (JWT OAuth with RS256 signing, SOQL queries against Opportunity and OpportunityPartner objects), Slack (Web API with channel discovery, bounded concurrency, and thread fetching), and OpenAI (Responses API with structured JSON schema enforcement, citation sanitization, and configurable retry budgets).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A scheduler&lt;/strong&gt; with two jobs: a weekly cron trigger for Friday morning digest generation, and a periodic retry loop for any LLM commentary requests that failed due to timeouts or rate limits. The scheduler includes leader-node gating. In a replicated Fabric deployment across multiple nodes, only one instance runs the scheduler. No duplicate digests.&lt;/p&gt;
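
&lt;p&gt;I don't have the scheduler code to show, but the gating idea is simple enough to sketch. Everything below is hypothetical, including the leader-election rule; the real app derives leadership from the cluster state so that exactly one node runs scheduled work:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Hypothetical sketch of scheduler gating; the election rule here is a placeholder.
function isLeaderNode(): boolean {
  return process.env.NODE_NAME === "node-1"; // stand-in for a real deterministic election
}

function retryFailedCommentary(): void {
  // re-issue LLM commentary calls that previously timed out or were rate limited
}

// Replicas in the Fabric cluster hit the gate and stay idle; only the leader does work.
// The Friday-morning generation job sits behind the same check on a weekly cron.
setInterval(() =&gt; {
  if (!isLeaderNode()) return;
  retryFailedCommentary();
}, 15 * 60 * 1000); // periodic retry loop, every 15 minutes
&lt;/code&gt;&lt;/pre&gt;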

&lt;p&gt;&lt;strong&gt;Server-rendered HTML&lt;/strong&gt; for the web interface. A dashboard listing all digest runs, detail views for each, and action endpoints for generate, approve, publish, and edit operations. No frontend framework. No separate build step.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A CI/CD pipeline&lt;/strong&gt; built entirely by Codex using GitHub Actions. It runs strict typechecking, tests, a regex-based secrets scan, and a guard against raw environment variable usage in code. Every time we merge to main, it automatically deploys to our production Harper Fabric cluster and runs a post-deploy health check against the live API. I didn't configure any of this. Codex built the whole pipeline, and now I don't have to think about DevOps either.&lt;/p&gt;

&lt;p&gt;Here's how the data flows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7rc0l49j6979v8a0syfj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7rc0l49j6979v8a0syfj.png" alt="Data flow diagram showing the weekly digest pipeline: from scheduled trigger through Salesforce and Slack data collection, metrics computation, OpenAI commentary generation, Harper table persistence, output channels, human review, and publication." width="800" height="697"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now consider what this would look like if I'd let an AI coding tool pick a typical stack:&lt;/p&gt;

&lt;p&gt;A Node.js or Python backend framework. A managed PostgreSQL instance. An ORM. A frontend framework with its own build pipeline. A reverse proxy. A hosting provider for the API. A different hosting solution for the frontend. A managed cron service or sidecar process for scheduled jobs. A secrets manager. Probably Redis for session management. Maybe a container orchestration layer to tie it all together. Each one of those is a configuration surface, a potential point of failure, and something I'd need to understand and maintain.&lt;/p&gt;

&lt;p&gt;With Harper, the application is the whole thing. Database, server, scheduler, auth foundation, deployment target. It's one runtime. That's not a marketing claim. That's what made it possible for me to build this in a few days.&lt;/p&gt;




&lt;h2&gt;
  
  
  How I Built It: Vibe Coding for Business Operators
&lt;/h2&gt;

&lt;p&gt;I used OpenAI Codex as my implementation partner. Let me be precise about what that means.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I did:&lt;/strong&gt; Requirements definition. System design. Architecture decisions. Data model design. Integration specification: which Salesforce objects to query, which Slack channels to read, what the approval workflow should be, what the role model should look like. Acceptance testing. Best-practice review using &lt;a href="https://github.com/HarperFast/harper-agent" rel="noopener noreferrer"&gt;Harper Agent&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Codex did:&lt;/strong&gt; All code. Every line of TypeScript, every schema, every route handler, every connector, every CI workflow, every test.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I did not:&lt;/strong&gt; Write code. Debug code. Configure deployment. Set up CI/CD.&lt;/p&gt;

&lt;p&gt;The working pattern was iterative. I'd switch between Codex's planning mode and coding mode constantly. In planning mode, I'd describe what I wanted, sometimes high-level ("We need a Salesforce connector that uses JWT OAuth and pulls Opportunity and Partner data"), sometimes very specific ("Add a guard that prevents generating a new digest for a week that already has a published run. No new routes. No schema changes. Redirect with structured flash data for the dashboard."). Codex would implement it.&lt;/p&gt;

&lt;p&gt;After each implementation pass, I'd feed the output to &lt;a href="https://github.com/HarperFast/harper-agent" rel="noopener noreferrer"&gt;Harper Agent&lt;/a&gt; for best-practice review against Harper's application conventions. Harper Agent would flag issues: patterns that didn't align with how Harper resources should be structured, configuration that could be cleaner. I'd take that feedback and send Codex back in with specific corrections. Harper is currently building out Harper Agent to handle more of this end-to-end, but even today this feedback loop worked well.&lt;/p&gt;

&lt;p&gt;This is the part of building software I've always done well: designing systems, defining requirements, making architecture decisions, evaluating tradeoffs. The part I don't want to do, and historically couldn't do without engineering support, is the implementation labor: the typing, the integration wiring, the debugging. Codex handled that. And because it was building against Harper's unified runtime, the output was deployable from day one. No glue code. No service orchestration. No separate infrastructure to configure.&lt;/p&gt;

&lt;p&gt;Harper's &lt;a href="https://www.npmjs.com/package/create-harper" rel="noopener noreferrer"&gt;&lt;code&gt;npm create harper@latest&lt;/code&gt;&lt;/a&gt; scaffold was the starting point. That gives you a project structure with schema files, resource definitions, configuration, and importantly, a &lt;code&gt;skills/&lt;/code&gt; directory that grounds the AI agent in Harper's architecture patterns. Codex consumed those skills files as context, which meant it wasn't guessing at how to structure a Harper application. It had the conventions built into its working memory.&lt;/p&gt;

&lt;p&gt;To give you a sense of what that first conversation looked like, here's a prompt similar to the one I used to kick off the project. I uploaded my existing Google Doc digest as a reference so Codex could see exactly what the output should look like, then described what I wanted to build:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I want to build a Harper application that replaces this manual Google Doc "Weekly Digest" report (attached). The app should pull data from Salesforce and Slack automatically, compute the same metrics I've been assembling by hand, and generate a weekly digest. Build it as a Harper Application (&lt;a href="https://docs.harperdb.io/docs/developers/applications" rel="noopener noreferrer"&gt;https://docs.harperdb.io/docs/developers/applications&lt;/a&gt;), ensuring you use the Harper Resource class (&lt;a href="https://docs.harperdb.io/docs/reference/resources" rel="noopener noreferrer"&gt;https://docs.harperdb.io/docs/reference/resources&lt;/a&gt;). Start with the create-harper template (&lt;a href="https://www.npmjs.com/package/create-harper" rel="noopener noreferrer"&gt;https://www.npmjs.com/package/create-harper&lt;/a&gt;). The app needs role-based auth, where company-wide readers can only see published digests. An approval workflow requiring two approvers before publish. OpenAI integration for drafting commentary from the Slack and Salesforce data. Scheduled generation on Fridays plus a manual trigger. A dashboard, digest detail view, and ops page. Strong environment validation, secure cookies, structured logs. CI pipeline with secrets scanning, strict typechecking, and tests. Once we get a version of the app working locally, we will be deploying to Harper Fabric (&lt;a href="https://docs.harperdb.io/fabric" rel="noopener noreferrer"&gt;https://docs.harperdb.io/fabric&lt;/a&gt;). Let's work together and build a plan to implement this solution. We will plan to work iteratively and add features as we go.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That's it. That prompt, the attached Google Doc as a reference, and the skills files Harper provides for AI context were enough for Codex to scaffold the entire project structure and start building. From there, every feature was a conversation: "Add the Salesforce connector," "Wire up the approval gates," "Build the CI pipeline." Each one a sentence or two of intent, followed by Codex executing and me reviewing.&lt;/p&gt;

&lt;p&gt;For context: this was my second Harper application built with Codex. The first was a personal project I created just to learn the workflow. The moment I saw how it worked, I realized I should be building our actual internal tooling this way, everything running on a single Harper backend that scales through Fabric without me thinking about infrastructure.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Harper Made Possible
&lt;/h2&gt;

&lt;p&gt;Let me be specific about what Harper provided versus what I would have had to solve myself on any other platform.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No database to provision.&lt;/strong&gt; Schemas declared in files. Deploy the application, tables exist. Eight tables with indexes. No DBA, no managed instance, no connection pooling.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No application server to configure.&lt;/strong&gt; Harper serves HTTP routes natively, both the web UI and the JSON API. No Express, no nginx, no port management.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No deployment pipeline to architect.&lt;/strong&gt; I pointed Codex at the &lt;a href="https://docs.harperdb.io/fabric" rel="noopener noreferrer"&gt;Fabric documentation&lt;/a&gt; and told it to handle deployment. It built a GitHub Actions pipeline that deploys to our production Harper Fabric cluster with rolling restart across replicated nodes, runs a post-deploy health check, and confirms the API is responding. Every merge to main triggers it automatically. I have never manually deployed this application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No scaling to think about.&lt;/strong&gt; Fabric handles replication. The only concession to multi-node was the leader-node scheduler gating to prevent duplicate scheduled jobs, and that was a straightforward pattern. I don't manage infrastructure. I don't think about infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Auth handled by the platform.&lt;/strong&gt; Harper has an &lt;a href="https://github.com/HarperFast/oauth" rel="noopener noreferrer"&gt;OAuth plugin&lt;/a&gt; that handles OAuth 2.0 and OpenID Connect authentication out of the box, with support for Google, GitHub, Azure AD, and other providers. The plugin manages the OAuth flow, token refresh, session integration, and CSRF protection. On top of that, the application layer enforces role-based authorization checked server-side on every request. Having the database and the application in the same runtime made this simple. Role mappings are just another table, checked in the same process that serves the request.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mobile-friendly because I asked for it.&lt;/strong&gt; I told Codex I wanted the UI to work on phones. It built a responsive layout. That's it. No separate mobile project, no responsive framework to configure. Our executives do a lot of work on their phones, especially when traveling, and the old Google Doc was painful to read on a small screen. With Codex building on Harper, "make it mobile-friendly" was a sentence in a conversation, not a sprint.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flcqikxqn8nojy3th8co6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flcqikxqn8nojy3th8co6.png" alt="The same digest on mobile: executive snapshot, commentary, and revenue tables all responsive." width="239" height="550"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The critical point: what I built locally is what runs in production.&lt;/strong&gt; There is no gap. I run &lt;code&gt;npm run dev&lt;/code&gt; locally, iterate with Codex, and when it's ready, the same application deploys to Fabric. One platform. One set of concerns. One thing to understand.&lt;/p&gt;

&lt;p&gt;This is what makes it viable for a business operator to ship production software. Not because the coding is easier (Codex handles the coding), but because the operational surface area is small enough that a non-engineer can reason about the whole system. I don't need to understand Kubernetes, or Terraform, or how to wire a PostgreSQL connection pool. I need to understand Harper, and Harper is one thing.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's Running Today
&lt;/h2&gt;

&lt;p&gt;The application is live in production on Harper Fabric. It went live in mid-February 2026. The system now automatically produces a new digest every Friday. As of this writing, it has produced eight digests. That includes backfilled digests for the weeks I missed at the start of the year, which the system generated retroactively once it was live.&lt;/p&gt;

&lt;p&gt;The weekly process now works like this: the app generates a digest on Friday morning. Kelli and I get a Slack notification. We review the metrics and commentary, make edits, and approve. When both approvals clear, the digest publishes to a company Slack channel and is available in the web UI for anyone to read. The review step takes a few minutes.&lt;/p&gt;

&lt;p&gt;The time savings are real (at least 30 minutes of manual work eliminated every week), but that understates the actual impact. The real win is that the digest gets produced now. Every week. Automatically. Before this, the manual effort meant it was always the first thing I'd cut when the week got busy. That's not a time-savings story. That's a "the process actually works now" story.&lt;/p&gt;

&lt;p&gt;I also plan to keep iterating. Our CEO has already discussed combining this with other internal tools the team is building on Harper into a single unified internal operations platform. In fact, the moment he saw this tool running, he asked me to write this blog post. &lt;em&gt;(So here we are.)&lt;/em&gt; That convergence is natural because all these applications share the same data layer and deployment model. No integration work required. They're already on the same backend.&lt;/p&gt;

&lt;p&gt;This is the part that I think gets missed in the "build vs. buy" conversation. When you buy a SaaS tool, you get one solution for one problem, and the next problem requires a new vendor. When you build on a platform like Harper, the first application is the hardest. The second one is easier because the platform is already there, the deployment model is already running, and your team already knows how it works. &lt;a href="https://www.harper.fast/vibe" rel="noopener noreferrer"&gt;Harper&lt;/a&gt; is the only platform where coding agents can build and deploy enterprise-grade applications end to end, and once you have that foundation, every internal tool you need is just another application on the same infrastructure. The weekly digest was my first internal tool. It won't be my last.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Math That Changed
&lt;/h2&gt;

&lt;p&gt;Let me frame this for every RevOps operator, sales ops lead, or business systems person reading this.&lt;/p&gt;

&lt;p&gt;You probably have a process right now that works like my old digest. Valuable output, tedious assembly, always at risk of being deprioritized. You've probably looked at SaaS tools to automate it and found them too expensive, too rigid, or too limited. You've probably thought about asking engineering to build something and decided it wasn't worth the political capital or the wait.&lt;/p&gt;

&lt;p&gt;The math has changed. Here's why:&lt;/p&gt;

&lt;p&gt;AI coding tools can now write production-quality code when given clear requirements and a well-structured platform to build against. The bottleneck was never the coding. It was the operational complexity of deploying and maintaining what the code produces. If your AI tool generates a great application but it requires you to manage a database, a web server, a cron service, a CDN, and a container orchestrator, you haven't saved yourself anything. You've just traded one kind of complexity for another.&lt;/p&gt;

&lt;p&gt;Harper collapses that complexity. One runtime. One deployment target. One thing to maintain. Your database, your API, your frontend, your scheduled jobs, your auth: they're all the same application. When you deploy to &lt;a href="https://fabric.harper.fast/" rel="noopener noreferrer"&gt;Fabric&lt;/a&gt;, it scales. When you iterate locally, you're working against the same system that runs in production.&lt;/p&gt;

&lt;p&gt;That's what makes it possible for someone like me, an operations person who understands systems but doesn't write code, to build and maintain production internal software. Not a prototype. Not a demo. A real application with enterprise integrations, approval workflows, role-based access control, CI/CD, and automated deployment.&lt;/p&gt;

&lt;p&gt;The alternative is buying more SaaS. Another subscription. Another vendor. Another tool that does 70% of what you need and requires workarounds for the rest. Another integration to maintain. Another thing that breaks when the vendor pushes an update you didn't ask for.&lt;/p&gt;

&lt;p&gt;Or you can build exactly what you need, on a platform you control, and ship it to production in days.&lt;/p&gt;

&lt;p&gt;I know which one I'd pick. I just did.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Jake Cohen is Senior Director of Commercial Operations at &lt;a href="https://harper.fast/" rel="noopener noreferrer"&gt;Harper&lt;/a&gt;, where he has spent over eight years across Solutions Architecture, Product Management, Engineering Leadership, and Commercial Operations. He holds a B.S. in Computer Engineering from George Mason University. You can find him on &lt;a href="https://www.linkedin.com/in/jacobbcohen" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;To start building on Harper, visit &lt;a href="https://www.harper.fast/vibe" rel="noopener noreferrer"&gt;harper.fast/vibe&lt;/a&gt; or run &lt;code&gt;npm create harper@latest&lt;/code&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>vibecoding</category>
      <category>ai</category>
      <category>devops</category>
      <category>webdev</category>
    </item>
    <item>
      <title>HarperDB is Collapsing the Stack: Introducing Custom Functions</title>
      <dc:creator>Jacob Cohen</dc:creator>
      <pubDate>Thu, 09 Sep 2021 15:35:17 +0000</pubDate>
      <link>https://dev.to/harperfast/harperdb-is-collapsing-the-stack-introducing-custom-functions-3j4k</link>
      <guid>https://dev.to/harperfast/harperdb-is-collapsing-the-stack-introducing-custom-functions-3j4k</guid>
      <description>&lt;p&gt;Introducing the newest innovation from HarperDB: &lt;a href="https://harperdb.io/docs/custom-functions/" rel="noopener noreferrer"&gt;HarperDB Custom Functions&lt;/a&gt;. With the release of HarperDB 3.1 users are able to define their own API endpoints within HarperDB. What does that mean for you? &lt;strong&gt;&lt;em&gt;HarperDB grows from a distributed database to a distributed application development platform with integrated persistence - one that can serve as a single solution for all of your backend needs&lt;/em&gt;&lt;/strong&gt;. We’re collapsing the stack!&lt;/p&gt;

&lt;p&gt;Alright, alright, what’s the big deal? Up until 3.1, in order to power an application you would need to deploy and host your backend API code on additional servers, then have them call out to HarperDB for database needs. This is a pretty typical software stack, but at HarperDB, we’re far from typical. We’re constantly innovating and changing the game. Custom Functions enable developers to build their entire application backend in one place. Oh yeah, and it’s faster, significantly faster! Traditional architectures naturally introduce latency as data moves across multiple servers through a local network or potentially even the Internet. HarperDB is collapsing the stack onto a single server, which eliminates the network latency between the API layer and the database. This frees up headroom for achieving higher throughput from a single server. Capitalizing on &lt;a href="https://harperdb.io/blog/reduce-latency-geo-distributed-databases/" rel="noopener noreferrer"&gt;HarperDB’s already powerful horizontal scalability&lt;/a&gt;, this means you can now distribute &lt;strong&gt;both&lt;/strong&gt; your APIs and your database to the edge.&lt;/p&gt;

&lt;p&gt;For those familiar with modern cloud architectures, Custom Functions are just like AWS Lambda functions. For those familiar with relational databases, they’re like Stored Procedures. You define your logic and choose when to execute it. At a high level it’s as simple as that! They’re low maintenance and easy to develop. You can develop HarperDB Custom Functions in the Studio or in your own IDE and version control system. HarperDB Custom Functions can be maintained like any other development project; in fact, the sample Custom Functions provided in the Studio are generated from our public GitHub repository. That means you can develop, maintain, and deploy your Custom Functions code without deviating from your existing development practices. That’s great news!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6vte11j2ulbpvs7c2x28.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6vte11j2ulbpvs7c2x28.png" alt="HarperDB Studio Functions Editor" width="800" height="528"&gt;&lt;/a&gt;&lt;em&gt;HarperDB Studio Functions Editor&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;What makes Custom Functions so powerful? They leverage the full power of &lt;a href="https://nodejs.org/" rel="noopener noreferrer"&gt;Node.js&lt;/a&gt; and &lt;a href="https://www.fastify.io/" rel="noopener noreferrer"&gt;Fastify&lt;/a&gt;. HarperDB Custom Functions projects are effectively just Node.js projects, which means you can leverage the npm ecosystem, opening the doors to fast and efficient development. Fastify serves as the basis for the webserver, which means you can define and build a fully functional REST API with all the bells and whistles you’d expect. The key differentiator is that these Fastify routes have direct access to HarperDB core methods, bypassing the HarperDB API and interacting directly with HarperDB on the same machine.&lt;/p&gt;
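
&lt;p&gt;To make that concrete, here's roughly what a Custom Functions route file looks like, following the pattern from HarperDB’s published examples (a route file exports an async function that registers Fastify routes and hands operations to hdbCore). The schema and table names here are made up:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// routes/pets.js - sketch in the style of the sample Custom Functions project.
// hdbCore and logger are injected by HarperDB; validate inputs properly in real code.
module.exports = async (server, { hdbCore, logger }) =&gt; {
  server.route({
    url: "/pets/:id",
    method: "GET",
    handler: (request) =&gt; {
      // Build a standard HarperDB operation and pass it straight to the core,
      // skipping the extra network hop to the HarperDB API.
      request.body = {
        operation: "sql",
        sql: `SELECT * FROM dev.pets WHERE id = '${request.params.id}'`,
      };
      return hdbCore.requestWithoutAuthentication(request);
    },
  });
};
&lt;/code&gt;&lt;/pre&gt;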

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx279wa5z0ke9ka9hnf27.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx279wa5z0ke9ka9hnf27.png" alt="WebStorm IDE with HarperDB Functions Project" width="800" height="516"&gt;&lt;/a&gt;&lt;em&gt;WebStorm IDE with HarperDB Functions Project&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;By collapsing the stack, we deliver unparalleled performance and efficiency out of the box. Let’s take a look at some different ways Custom Functions can be used. I’m not going to cover everything here; in fact, I’m sure there are plenty of options that I haven’t even thought of.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Integrate with third-party apps and APIs&lt;/strong&gt;: Seamlessly connect third-party/external data with data stored in HarperDB within a single function.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Utilize third-party authentication&lt;/strong&gt;: Tightly integrate with third-party application providers to validate user requests within your API.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Define your own database functionality&lt;/strong&gt;: HarperDB is always adding features, but let’s say there’s a feature you need that’s missing. Build a HarperDB Custom Function to solve the problem. For example, if you need to enforce row-level security based on a user account, write a function! &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Serve a website&lt;/strong&gt;: Custom Functions can serve static content and serve as backend APIs, which means you can fully power a website or web app all with HarperDB.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are just some of the ideas we’re kicking around with HarperDB Custom Functions. We’ll be hosting a &lt;a href="https://harperdb.io/harperdb-custom-functions-event/" rel="noopener noreferrer"&gt;livestream event next week (September 14th at 6pm MT)&lt;/a&gt; where you can watch a live product tour! We look forward to hearing what the HarperDB community can build. Please share any ideas you have in the comments; I’m eager to hear what the community has to say!&lt;/p&gt;




&lt;p&gt;Since this will be the initial release of HarperDB Custom Functions, please let us know what else you’d like to see in future releases. You can submit your ideas to our feedback board here: &lt;a href="https://feedback.harperdb.io/" rel="noopener noreferrer"&gt;feedback.harperdb.io&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>database</category>
      <category>showdev</category>
      <category>node</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Reducing Data Latency with Geographically Distributed Databases</title>
      <dc:creator>Jacob Cohen</dc:creator>
      <pubDate>Tue, 02 Mar 2021 15:53:02 +0000</pubDate>
      <link>https://dev.to/harperfast/reducing-data-latency-with-geographically-distributed-databases-41oa</link>
      <guid>https://dev.to/harperfast/reducing-data-latency-with-geographically-distributed-databases-41oa</guid>
      <description>&lt;p&gt;Do you ever have those moments where you know you’re thinking faster than the app you’re using? You click something and have time to think “what’s taking so long?” It’s frustrating to say the least, but it’s an all-too-common problem in modern applications. A driving factor of this delay is latency, caused by offloading processing from the app to an external server. More often than not, that external server is a monolithic database residing in a single cloud region. This article will dig into some of the existing architectures that cause this issue and provide solutions on how to resolve them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Latency Defined
&lt;/h3&gt;

&lt;p&gt;Before we get ahead of ourselves, let’s define “latency.” In a general sense, latency measures the duration between an action and a response. In user-facing applications, that can be narrowed down to the delay between when a user makes a request and when the application responds. As a user, I don’t really care what is causing the delay; I just want it to go away. In a typical cloud application architecture, latency is caused by the Internet and the time it takes to make requests back and forth between the user’s device and the cloud, referred to as Internet latency. There is also processing time to consider, the time it takes to actually execute the request, which is referred to as operational latency. This article will focus on Internet latency with a hint of operational latency. If you’re interested in other types of latency, &lt;a href="https://whatis.techtarget.com/definition/latency" rel="noopener noreferrer"&gt;TechTarget has a good deep dive into specifics of the term&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Modern applications have reached a point where end-user performance is critical. However, in practice, most application architectures have not fully caught up to support globally consumed applications. I’ve personally run into cases over and over again where I hear that the application has been distributed around the world, but the &lt;a href="https://dev.to/margo_hdb/enhance-your-hybrid-cloud-strategy-with-a-new-edge-36c4"&gt;database is stuck back in a single cloud geography&lt;/a&gt;. We’ve reached a point in technology where static assets are easy to distribute, but persistent data storage is not. &lt;a href="https://en.wikipedia.org/wiki/Content_delivery_network" rel="noopener noreferrer"&gt;Content delivery networks (CDN)&lt;/a&gt; have effectively solved the problem of latency in delivering static content. Movies can be streamed across the globe effortlessly because they are static, they don’t change. I can stream &lt;a href="https://www.imdb.com/title/tt0058150/" rel="noopener noreferrer"&gt;Goldfinger&lt;/a&gt; (the best Bond movie, by the way), all over the world because it’s hosted across the globe via a CDN. That’s an incredible feat, but what about the metadata associated with that streaming app? What happens when the app needs to remember where I paused the film, when I rate the film (5/5 of course), or if I want to add it to my list of favorites? That data needs to be recorded in a database somewhere for future access. Based on my experience with modern application architecture, that database is most likely centralized in a single cloud region. Depending on where I am in the world, this can result in excess latency for simple application features like clicking around the streaming app. Metadata has to be queried and returned through the pipes of the Internet, potentially across the world, creating a poor user experience on basic application features even when my movie is streaming crystal clear.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Problem: A Centralized Database
&lt;/h3&gt;

&lt;p&gt;Why is data storage such a bottleneck? Databases are complex; they need to process large volumes of transactions and stay responsive at all times. The most popular and flexible databases on the market, both SQL and NoSQL, have proven to be incredibly difficult to distribute. As such, application architects typically choose to leave them in a single geographic region and scale vertically to handle increasing usage requirements. This works for a while, but as the application grows, so does the demand on the database, eventually causing ballooning costs and increased latency for users who aren’t physically near the database’s region. I’ve seen standard Internet latency range anywhere from 200 milliseconds to a few seconds.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw5w5zoy1vvgpmt6f9xyb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw5w5zoy1vvgpmt6f9xyb.png" alt="Alt Text" width="800" height="408"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Internet Latency with a Centralized Database&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Real World Examples
&lt;/h3&gt;

&lt;p&gt;Before jumping into the solution, let’s take a quick look at some latency-sensitive applications where high latency quickly leads to poor user experience.&lt;/p&gt;

&lt;h4&gt;
  
  
  Gaming
&lt;/h4&gt;

&lt;p&gt;I don’t pretend to be an expert gamer, but I’ve dabbled in the past. Nothing was more infuriating than my character getting crashed/killed/destroyed because of lag. In my case it didn’t really matter: I’d yell and scream at my TV for a few seconds, like the mature high schooler that I was, and go about my day hoping for redemption (and no lag) in the next match. That said, gaming has turned into a major business with huge competitions, each with millions of dollars in prize money on the line. Imagine losing a million-dollar prize because of a latency issue. Talk about a poor user experience.&lt;/p&gt;

&lt;h4&gt;
  
  
  Home Internet of Things (IoT)
&lt;/h4&gt;

&lt;p&gt;As soon as I moved into my new house I went a little crazy with the smart home things. I’ve got cameras, alarm sensors, smart plugs, smart speakers, a connected thermostat, and some more gadgets I’m forgetting. They’re pretty cool, but the one thing that drives me absolutely crazy is just how long it takes for the apps to respond. My smart alarm system is not quite as smart as I’d hoped when I installed it. Someone could break into my house and get to the second floor before the alarm detects the break-in. Why? Because the processing takes place in the cloud, somewhere that’s far away from my house. Now, I understand that tradeoff when it comes to speech processing for my smart speakers, but you’re telling me that a contact sensor needs to go all the way to a major data center just to tell me that the door is open? That’s a blatant latency issue. Keep in mind, the cloud is still an important factor here because that activity needs to be logged and recorded. However, reducing that latency from multiple seconds down to a few milliseconds would certainly give me more peace of mind.&lt;/p&gt;

&lt;h4&gt;
  
  
  Autonomous Vehicles
&lt;/h4&gt;

&lt;p&gt;It wouldn’t be a proper blog by yours truly if I didn’t sneak in some sort of car reference. Autonomous vehicles are the future. Yes, I’m going to cling to the steering wheel as long as I can, but that’s because I enjoy driving. That said, if the car next to me is autonomous, I want it to be able to detect any mistakes I make as quickly as possible. This is why most of the processing is done on board the vehicle itself. Personally, I see a later phase of vehicles being connected and communicating with each other, which will require some sort of data orchestration between vehicles. This is a place where latency is not an option. Imagine a connected car needing a few seconds to alert another connected semi-truck behind it that there’s a traffic jam and it should exit first. Those few seconds may be the difference between the truck rerouting or being stuck in traffic. Another case where latency is the difference between success and failure. &lt;/p&gt;

&lt;h4&gt;
  
  
  Warehouse Robotics
&lt;/h4&gt;

&lt;p&gt;Beyond cars, I’m also a big robot guy, so this is another fun example for me. There’s all sorts of innovation going on within warehouse robotics and distribution facilities. In many of these cases, control decisions are still made in the cloud, which makes sense, because you need serious computing power to control a swarm of robots. Latency can seriously affect production, or in this case, output. If a robot has to stop what it’s doing while waiting for a response from the cloud, even just for a second, that can lead to hours of lost productivity across the swarm of robots over the course of a day. Sure, it’s automated, but you might as well have your robots operating as efficiently as possible.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Solution: Decentralization
&lt;/h3&gt;

&lt;p&gt;If you’ve made it this far, I’d imagine this issue resonates with you, so let’s get into the solution. To solve these latency challenges, you need two very important things: distributed data centers and a database technology that can be distributed. Effectively, decentralization. Some of you may have seen this coming based on the title… Let’s dig into each of them separately.&lt;/p&gt;

&lt;h4&gt;
  
  
  Distributed Data Centers
&lt;/h4&gt;

&lt;p&gt;You can’t geographically distribute any software without the physical hardware to deploy it on. There are all sorts of options for geographically distributing software; I prefer edge data centers because they bring the computing power physically closer to the end user. That means a shorter distance for network traffic to travel and typically means fewer &lt;a href="https://en.wikipedia.org/wiki/Hop_(networking)" rel="noopener noreferrer"&gt;hops&lt;/a&gt;, resulting in faster response time based purely on physics. Alternatively, you could choose a multi-cloud approach using a combination of cloud providers and/or private data centers to achieve a more distributed solution free from single cloud provider lock-in (something that will certainly make your CFO happy). Realistically, I see a hybrid of both as the solution, which allows you to capitalize on the best of both worlds.&lt;/p&gt;

&lt;h4&gt;
  
  
  Distributed Database
&lt;/h4&gt;

&lt;p&gt;This is the tricky part. Like I said earlier, CDN technology is established, but distributing data effectively is a whole different beast. Enter HarperDB, a distributed database solution that can be installed anywhere while presenting a single interface across a range of clouds, with the backend ability to keep data synchronized everywhere. What makes HarperDB different from other, more traditional database solutions? Critically, it’s not a cloud-exclusive product like DynamoDB (exclusive to AWS), Cosmos DB (exclusive to Azure), or Firebase (exclusive to GCP). These are all strong products, but they are exclusive to a single cloud provider and as a result can only be distributed in those respective data center locations. In practice, they’re also difficult to distribute in general, as I discussed earlier. That’s not the case with HarperDB. Instead, HarperDB is cloud-agnostic, meaning it can run anywhere, whether you’re installing from &lt;a href="https://www.npmjs.com/package/harperdb" rel="noopener noreferrer"&gt;npm&lt;/a&gt;, running a &lt;a href="https://hub.docker.com/r/harperdb/hdb" rel="noopener noreferrer"&gt;Docker&lt;/a&gt; container, or choosing a &lt;a href="https://harperdb.io/product/harperdb-cloud/" rel="noopener noreferrer"&gt;managed service&lt;/a&gt;. It’s so flexible that I have a HarperDB instance running on my laptop. Once installed, HarperDB’s &lt;a href="https://harperdb.io/product/#pubsub" rel="noopener noreferrer"&gt;clustering and replication&lt;/a&gt; can be configured to automatically synchronize data between nodes, regardless of where they’re installed, oftentimes faster than an external request would execute. I should mention that HarperDB is &lt;a href="https://harperdb.io/product/benchmarks/" rel="noopener noreferrer"&gt;fast on its own&lt;/a&gt;, which helps to reduce the operational latency that I explained above. Finally, HarperDB provides a simple and elegant &lt;a href="http://docs.harperdb.io/" rel="noopener noreferrer"&gt;single endpoint API&lt;/a&gt;, which provides a consistent interface for consumers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feyyrxofuqxyq21gcm5aw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feyyrxofuqxyq21gcm5aw.png" alt="Alt Text" width="800" height="407"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Reduced Internet Latency with Geographically Distributed HarperDB&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;With the power of HarperDB, we can extend the concept of CDN to geographically distributed data centers, providing end users with a fast and consistent solution and an improved user experience. Gamers can stop throwing their controllers at the TV, autonomous vehicles can route more effectively, home security can be more responsive, and warehouse robots can be more time-efficient. Many organizations have come to accept cloud latency as a given, but it doesn’t have to be. With the power of a geographically distributed database, we can empower innovators to create faster and smoother applications by reducing latency caused by long-distance routing. Thus, latency is solved!&lt;/p&gt;

</description>
      <category>database</category>
      <category>cloud</category>
      <category>architecture</category>
      <category>distributedsystems</category>
    </item>
    <item>
      <title>The Unbiased Guide to Choosing the Right BI Tool</title>
      <dc:creator>Jacob Cohen</dc:creator>
      <pubDate>Mon, 08 Feb 2021 19:54:21 +0000</pubDate>
      <link>https://dev.to/harperfast/the-unbiased-guide-to-choosing-the-right-bi-tool-3oe6</link>
      <guid>https://dev.to/harperfast/the-unbiased-guide-to-choosing-the-right-bi-tool-3oe6</guid>
      <description>&lt;p&gt;Business Intelligence (BI) is the ultimate end goal of digital transformation. The ability to make better, smarter, more efficient decisions based on data collected across the business is what drives technical investment. This makes choosing the right tool paramount for your organization. As a database instructor and mentor of mine used to say: data is worthless until it becomes information. Information is what happens when you use data to glean insights and conclusions. What better way to do that than with a tool designed for the job? Great news! There are a ton of options out there for you, potentially too many. &lt;strong&gt;In this blog, I’ll discuss strategies for selecting the right BI tool as well as some important things to keep in mind throughout the process.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s start by defining what exactly makes up a BI tool. Don’t tell my English teachers, but I’m going to cite Wikipedia. According to them, “Business intelligence software is a type of application software designed to retrieve, analyze, transform and report data for business intelligence. The applications generally read data that has been previously stored…” I think that does a pretty good job of defining the broad terms, but that still leaves us with a wide variety of software choices. Spreadsheets, database management studios, dashboarding tools, and data mining software all fall under this umbrella. For the purposes of this blog, I’ll be operating under the assumption that we are referring to the more colloquial BI tools that focus on providing visualization capabilities to end users of any skill level. &lt;em&gt;Please note that I’m agnostic here and neither I nor HarperDB are promoting certain tools over others. This blog is intended to share my experience and provide general guidelines for making the decision that’s right for you.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;There are plenty of minor factors that come into play when choosing the right BI tool, but let’s take a look at some important ones. Here are what I consider to be effective criteria to determine the right BI tool for your organization: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical Compatibility&lt;/strong&gt;&lt;br&gt;
Does the tool connect to the existing datastores in your organization? If the tool can’t connect to your datastores or would require a large data integration project, it’s probably best to move on and find something that works a little more out-of-the-box. I’ve worked with quite a few organizations that have legacy data stored in legacy database technologies. Just because the tools are old doesn’t make the data any less valuable, and it may be critical for true business intelligence understanding. If this is the case for your organization, you may have to skip some of the more modern, web-based solutions. For example, neither Looker nor Google Data Studio supports old school JDBC or ODBC connections, which can be a limiting factor when you’re dealing with legacy technology.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;User Base/Ease of Use&lt;/strong&gt;&lt;br&gt;
What tools are your end-users already familiar with? Providing them with BI tools that have a similar user experience to tools they already use helps ease the transition to the new tool and drives adoption. If end users are intimidated by the complexity of the new solution, they are far less likely to adopt it. For example, if most of the end-users are already Excel power users, selecting Microsoft PowerBI is going to have a far greater chance of success because PowerBI is basically Excel for analytics. Another example: if your organization is powered by G Suite, going with Google Data Studio might make the most sense because it fits into the existing ecosystem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reporting Capabilities&lt;/strong&gt;&lt;br&gt;
Does the tool provide the proper reporting and analytics? At the end of the day, if you need to run specific analytics and produce distinctive charts, you need the right tool that can do it. The big players in the BI tool world most likely have everything you need, but it’s still important to verify ahead of time. I have seen cases where the leading tool is disqualified for not being able to produce a chart that a key stakeholder considered critical.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Collaborative Functionality&lt;/strong&gt;&lt;br&gt;
Does the tool feature collaborative functionality to build and share charts and dashboards across teams? We should be past the days of accessing an Excel file or, even worse, an Access database file from a shared drive. (Yet I keep seeing them, so don’t feel too bad if you have some left, but please digitally transform already). This means we should be choosing tools that are collaborative at their core. The ability to build a chart with your team, share, and distribute is critical in modern business. Especially now that I haven’t left my house in months.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pricing&lt;/strong&gt;&lt;br&gt;
Is the functionality worth the cost? I’m not going to dig too much into this one since it’s pretty obvious. However, if Fred, our COO, were writing this blog, this section would be the largest one, but that’s his job. That’s the point: the technical team may want the biggest and baddest tool, but if the business stakeholders don’t see the value in it, then you’re not getting your fancy tool. Pricing for BI tools is wide-ranging; for example, Tableau is over $800 per user per year and Google Data Studio is free. Sure, Tableau has more features, but ask your COO which one they prefer once they hear that tidbit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Optional: Mobile Friendly&lt;/strong&gt;&lt;br&gt;
Does the tool offer mobile-friendly dashboarding and reporting? Now, I say this is optional because for some organizations this just isn’t a priority or a necessity. Others could find immense value in giving their users a real-time dashboard in the palm of their hands. For example, anyone managing physical assets could benefit from having status and management of everything with them at all times instead of relying on a laptop.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ancillary Requirements&lt;/strong&gt;&lt;br&gt;
What else do we need? Here’s my catchall. Each organization operates in its own way, so I’m sure there are things I’ve missed here that are important to you. Maybe you only run Microsoft OS, so the tool needs to be able to run there as well. Perhaps email alerting is a requirement.&lt;/p&gt;

&lt;p&gt;There are plenty of criteria and requirements that could sway the decision of a BI tool. Hopefully, this blog helps provide some unbiased opinions and thoughts on what to look for when getting started with your BI tool search. &lt;/p&gt;

&lt;p&gt;Now for a bit of HarperDB bias. Our database technology was built to be powerful and easy to use for all aspects of data collection and analysis. In fact, HarperDB can easily serve as a data warehouse to coalesce data from disparate data stores, so you only need to hook your BI tool up to a single data source. We offer plenty of &lt;a href="https://studio.harperdb.io/support/drivers" rel="noopener noreferrer"&gt;drivers, like ODBC and JDBC, out-of-the-box&lt;/a&gt; that work natively with client-based tools like Tableau and PowerBI. We also have an &lt;a href="https://studio.harperdb.io/support/drivers" rel="noopener noreferrer"&gt;Excel Add-In&lt;/a&gt; that can be used for makeshift BI purposes. In fact, I’ve seen some of our partners build some incredibly powerful tools for users who just want to stay in Excel for everything. However, as a newer database, there are times when we run into BI tools that don’t support HarperDB out of the box. Fortunately, many of these tools allow the community to build their own connectors and integrations. We currently have someone working on a &lt;a href="https://feedback.harperdb.io/suggestions/107119/create-a-data-studio-connector-bounty-2000-usd" rel="noopener noreferrer"&gt;Google Data Studio connector&lt;/a&gt; through our &lt;a href="https://harperdb.io/developers/bounty-program/" rel="noopener noreferrer"&gt;bounty program&lt;/a&gt; which should be ready to go in the near future. I’m pretty excited for that one! Additionally, the &lt;a href="http://studio.harperdb.io/" rel="noopener noreferrer"&gt;HarperDB Studio&lt;/a&gt; has built in charting functionality, making it a free and powerful BI tool of its own. Did I convince you to give it a try? &lt;a href="https://harperdb.io/harperdb-cloud-get-started-today/" rel="noopener noreferrer"&gt;Try HarperDB Cloud for free today!&lt;/a&gt;&lt;/p&gt;
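
&lt;p&gt;As a rough illustration of that data warehouse idea, here’s a sketch of the kind of single query a BI tool could issue once data from two different source systems has been coalesced into HarperDB. The &lt;code&gt;crm&lt;/code&gt; and &lt;code&gt;web&lt;/code&gt; schemas and their columns below are hypothetical placeholders, not real sample data:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- One query spanning data loaded from two separate source systems
SELECT c.account_name, SUM(o.amount) AS total_order_value
FROM crm.accounts AS c
  INNER JOIN web.orders AS o
    ON c.account_id = o.account_id
GROUP BY c.account_name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;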

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fmsyxnk1a7sqoeq8y0fd4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fmsyxnk1a7sqoeq8y0fd4.png" alt="charting" width="800" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;HarperDB Studio Charting&lt;/em&gt;&lt;/p&gt;

</description>
      <category>database</category>
      <category>datascience</category>
      <category>webdev</category>
      <category>bigdata</category>
    </item>
    <item>
      <title>5 Ways to Use HarperDB in Your Next Project</title>
      <dc:creator>Jacob Cohen</dc:creator>
      <pubDate>Tue, 05 Jan 2021 17:49:04 +0000</pubDate>
      <link>https://dev.to/harperfast/5-ways-to-use-harperdb-in-your-next-project-2fo7</link>
      <guid>https://dev.to/harperfast/5-ways-to-use-harperdb-in-your-next-project-2fo7</guid>
      <description>&lt;p&gt;HarperDB strives to provide the simplest and most streamlined database solution for developers everywhere. That said, just because it’s simple doesn’t mean that it’s not powerful. HarperDB provides plenty of powerful tools to use in a diverse set of projects. In a year that has been unhinged, HarperDB has only added stability. Features and functionality have been hardened, and we’ve added some exciting new features like &lt;a href="https://docs.harperdb.io/#e655acb2-3c75-4155-939c-b68e0b9f87d6" rel="noopener noreferrer"&gt;Upsert&lt;/a&gt;, &lt;a href="https://harperdb.io/developers/documentation/sql-json-search/" rel="noopener noreferrer"&gt;JSON Search&lt;/a&gt;, and &lt;a href="https://harperdb.io/developers/documentation/security/jwt-authentication/" rel="noopener noreferrer"&gt;Token Authentication&lt;/a&gt;. As we look forward to next year, I wanted to hit on some of the things that I consider to be emerging and ongoing trends into 2021. Let’s take a look at some examples of how you can use HarperDB in your next project! &lt;/p&gt;

&lt;h3&gt;
  
  
  Metadata Management
&lt;/h3&gt;

&lt;p&gt;Let’s start simple with metadata. As our ability to collect data improves, our data footprint also grows to the point where we are storing metadata, which literally means data about data. It’s data that informs us about other data, and that right there is how you know you need a powerful database solution. &lt;/p&gt;

&lt;p&gt;Let’s dig into an example. I’m going to take one of the easier, more obvious examples: photos. Particularly photos from your phone. Both Android and iOS (and most modern digital cameras) store EXIF data, which is a standardized format for photo metadata that includes typical things like timestamp, dimensions, and resolution. It also includes more detailed data like device make and model, aperture value, focal length, ISO speed, F-number, and longitude and latitude. That means there are plenty of different ways to search for photos in an application. &lt;/p&gt;

&lt;p&gt;Why is HarperDB better at this than the average NoSQL database? Enter my old friend &lt;a href="https://harperdb.io/developers/documentation/sql-overview/" rel="noopener noreferrer"&gt;SQL&lt;/a&gt;! Data can be ingested into HarperDB however you want, but the easiest way to build a crazy conditional query is with SQL. For example, I can find all photos that were taken on an Apple product with ISO speed between 25 and 200 with the following query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;photos&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;metadata&lt;/span&gt; 
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;device_make&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'Apple'&lt;/span&gt; 
  &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;iso_speed&lt;/span&gt; &lt;span class="k"&gt;BETWEEN&lt;/span&gt; &lt;span class="mi"&gt;25&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Yeah, I get it, that’s a weird thing to be looking for, but maybe that’s exactly what I need. &lt;/p&gt;

&lt;h3&gt;
  
  
  Geospatial Data Analysis
&lt;/h3&gt;

&lt;p&gt;Heavily related to metadata, but more specific, geospatial data deals with the added complication of maps, and wow is it complex! So complex, in fact, that there are a bunch of competing geospatial data standards. I’m a big fan of JSON, so I tend to use &lt;a href="https://geojson.org/" rel="noopener noreferrer"&gt;GeoJSON&lt;/a&gt; when dealing with geospatial data. This certainly falls in the metadata category, but geospatial data is considered a subset because it requires specially tailored functions to effectively gain insights into the data.&lt;/p&gt;

&lt;p&gt;I’m going to keep going with my photo example from above. We know every photo already comes with latitude and longitude coordinates, so I can easily write queries in HarperDB using the built-in &lt;a href="https://harperdb.io/developers/documentation/geospatial-functions/" rel="noopener noreferrer"&gt;geospatial functions&lt;/a&gt;. I grew up near Washington, DC, so I can easily run a query to count how many photos I’ve taken within a mile radius of the &lt;a href="https://www.google.com/maps/place/38%C2%B053'22.5%22N+77%C2%B002'06.9%22W/@38.8891887,-77.0363483,19.27z" rel="noopener noreferrer"&gt;Washington Monument&lt;/a&gt; with this query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="k"&gt;COUNT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;photos&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;metadata&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;geoNear&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;77&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;035257&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;38&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;889571&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;geojson_point&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'miles'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Because we’re using SQL, we can run all sorts of different filters and aggregations to narrow down to the exact data we need.&lt;/p&gt;
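
&lt;p&gt;As a quick sketch of what that looks like, we can reuse the &lt;code&gt;photos.metadata&lt;/code&gt; table and the same &lt;code&gt;geoNear()&lt;/code&gt; call from above and layer an aggregation on top of it, for example counting photos per device make within that same one-mile radius (the column names are the ones assumed in the earlier examples):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Count photos per device make within a mile of the Washington Monument
SELECT device_make, COUNT(*) AS photo_count
FROM photos.metadata
WHERE geoNear([-77.035257, 38.889571], geojson_point, 1, 'miles')
GROUP BY device_make
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;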

&lt;h3&gt;
  
  
  IoT Sensor Data Collection
&lt;/h3&gt;

&lt;p&gt;Changing it up a little bit on this one. The Internet of Things (IoT) has taken off in the last few years and only has more room to grow. The thing about IoT (no pun intended) is that the data is incredibly unstructured. Sensors can return data in all sorts of different formats and you never really know what you’re going to get, especially when trying to configure a hodgepodge of different sensors from different manufacturers. You could use a NoSQL database to collect all of that data, but then you have to ask yourself: what happens once I have that data? If you’re going to do anything beyond archive it, you’re going to need to be able to query it effectively. That’s where HarperDB thrives: as I mentioned above, you can execute SQL on that data immediately. That is made possible by the HarperDB &lt;a href="https://harperdb.io/blog/dynamic-schema-the-harperdb-way/" rel="noopener noreferrer"&gt;Dynamic Schema&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;You might say: What’s a Dynamic Schema and why do I care? A dynamic schema adjusts to the data as it’s ingested. In HarperDB’s case, this means attributes are automatically added to the schema as they come into the database. For example, if I add a new sensor with additional attributes and dump them into a &lt;code&gt;sensor&lt;/code&gt; table, any new attributes just show up. This means you now automatically have metadata (Aha!) on your schema that you would not have in a standard NoSQL database. &lt;/p&gt;
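
&lt;p&gt;Here’s a rough sketch of that in action, using a hypothetical &lt;code&gt;iot.sensor&lt;/code&gt; table (the schema, table, and attribute names are placeholders for illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- An existing sensor reports the attributes we already know about
INSERT INTO iot.sensor (sensor_id, temperature) VALUES (1, 72.4)

-- A newer sensor also reports humidity; no ALTER TABLE needed, the
-- humidity attribute simply appears on the table once it's ingested
INSERT INTO iot.sensor (sensor_id, temperature, humidity) VALUES (2, 70.1, 0.43)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Older records that never reported humidity simply come back without a value for that attribute when queried.&lt;/p&gt;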

&lt;p&gt;There are a couple of other fancy HarperDB features you might appreciate when working with sensor data. Take, for example, the &lt;a href="https://dev.to/ethanarrowood/harperdb-and-websockets-3p6k"&gt;HarperDB WebSocket SDK&lt;/a&gt;, which lets you create a publish/subscribe client to listen to data as it’s ingested and take immediate action. Additionally, you can use Clustering and Replication features to move data between instances, but I’m getting ahead of myself…&lt;/p&gt;

&lt;h3&gt;
  
  
  Distributed Data Systems
&lt;/h3&gt;

&lt;p&gt;This is what I consider to be the most promising future trend! Truly distributed data, not simply a few data centers across the world, but points of presence physically near users. Sort of like 5G towers everywhere, but for your data. Distributing data on or near the edge is the most effective way to reduce latency for your users and reduce load on your servers, ultimately improving overall customer experience. Of course, this is not something that happens overnight, and most likely not on the initial launch of your project, but it’s important to consider the scalability of a project. &lt;/p&gt;

&lt;p&gt;At HarperDB, this is always top of mind for us. We call this &lt;a href="https://harperdb.io/developers/documentation/clustering/" rel="noopener noreferrer"&gt;clustering and replication&lt;/a&gt;, and we provide users with the granularity to define exactly what data is moving and where it’s going by configuring data to publish/subscribe at a table level. This means, in the IoT example above, we can configure our devices to publish their data to a primary server, but not receive (subscribe to) any data from anywhere else. In a fully distributed example where we want exact replicas across the globe, we would configure all tables to both publish and subscribe. This flexibility enables you to define exactly how your data moves. We have some exciting distributed features on our roadmap for 2021, so be sure to keep an eye out for them!&lt;/p&gt;

&lt;h3&gt;
  
  
  Data Science
&lt;/h3&gt;

&lt;p&gt;This is a broad category that involves the analysis of data, both structured and unstructured, to extract knowledge. Two of the most popular data science techniques are Machine Learning (ML) and Artificial Intelligence (AI). To call ML/AI a trend would be an understatement. Sometimes it seems like they’re all people can talk about, but for good reason: they’re powerful. I’ve always been a proponent of using a database to power these models, as it provides a great foundation of tools to aggregate and query data. My colleague &lt;a href="https://dev.to/margo_hdb"&gt;Margo&lt;/a&gt; put together a &lt;a href="https://harperdb.io/blog/machine-learning-in-a-database/" rel="noopener noreferrer"&gt;great blog on using a database for Machine Learning&lt;/a&gt;, which you should absolutely check out if you’re interested in this sort of thing.&lt;/p&gt;

&lt;h3&gt;
  
  
  Closing Thoughts
&lt;/h3&gt;

&lt;p&gt;This is a small subset of projects where HarperDB will provide solid underpinnings and is by no means a complete list. You can try out &lt;a href="https://harperdb.io/developers/harperdb-cloud/link/devtojake/content" rel="noopener noreferrer"&gt;HarperDB for free with HarperDB Cloud&lt;/a&gt;. Give it a shot and I think you’ll find that it’s incredibly easy to use and great for rapid and effective development. Have you used HarperDB in other types of projects? What other types of projects do you think HarperDB would be good for? Drop your thoughts in the comments below!&lt;/p&gt;

</description>
      <category>database</category>
      <category>datascience</category>
      <category>iot</category>
      <category>distributedsystems</category>
    </item>
    <item>
      <title>Hey CodeLand! We're HarperDB and we make data management easy throughout your coding journey!</title>
      <dc:creator>Jacob Cohen</dc:creator>
      <pubDate>Thu, 23 Jul 2020 12:00:31 +0000</pubDate>
      <link>https://dev.to/harperfast/hey-codeland-we-re-harperdb-and-we-make-data-management-easy-throughout-your-coding-journey-1f7b</link>
      <guid>https://dev.to/harperfast/hey-codeland-we-re-harperdb-and-we-make-data-management-easy-throughout-your-coding-journey-1f7b</guid>
      <description>&lt;p&gt;Hello! We're excited to be here and meet all of you during CodeLand and beyond. Collaborative developer communities like DEV and CodeNewbies have full support from the HarperDB team, and we love being a resource for people across the world on their coding journeys. &lt;/p&gt;

&lt;p&gt;HarperDB was founded with the goal of delivering a simplified solution for developers without sacrificing scale or performance. HarperDB is a distributed database focused on making data management easy. With an intuitive REST API, and NoSQL &amp;amp; SQL operations including joins, users can be up and running in minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Our Latest News
&lt;/h2&gt;

&lt;p&gt;We're excited to announce that we recently released &lt;a href="https://harperdb.io/harperdb-cloud-get-started-today/?code=CODELAND" rel="noopener noreferrer"&gt;HarperDB Cloud&lt;/a&gt;, our fully managed and hosted Database-as-a-Service powered by AWS, with the same code base and a single, easy-to-use API endpoint. It supports NoSQL &amp;amp; SQL including joins, and has an intuitive Management Studio enabling you to install, design, cluster, and manage your databases without writing a line of code. &lt;/p&gt;

&lt;p&gt;We would love for you to check it out. You can learn more &lt;a href="https://harperdb.io/harperdb-cloud-get-started-today/?code=CODELAND" rel="noopener noreferrer"&gt;here&lt;/a&gt; or watch our launch tour video:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/fAKZxK-XamM"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Got Questions? Want to walk through a demo together? Let's Chat!
&lt;/h3&gt;

&lt;p&gt;We'll be hanging out at our &lt;a href="https://dev.to/join_channel_invitation/harperdb-36f6?invitation_slug=invitation-link-9839aa"&gt;DEV Connect channel&lt;/a&gt; from 10:00am to 9:00pm ET today. We'd love to answer any questions you have about HarperDB, so please swing by!&lt;/p&gt;

&lt;p&gt;If today doesn't work, you can schedule a call with us anytime: &lt;a href="https://app.hubspot.com/meetings/margo8" rel="noopener noreferrer"&gt;https://app.hubspot.com/meetings/margo8&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;or simply leave a comment down below :)&lt;/p&gt;

&lt;h3&gt;
  
  
  A special discount just for CodeLand attendees ✨
&lt;/h3&gt;

&lt;p&gt;Try HarperDB Cloud for free, &lt;strong&gt;or use coupon code “CODELAND” for $250 off any larger instance to unlock features like clustering and full support&lt;/strong&gt;. &lt;/p&gt;

</description>
      <category>codeland</category>
    </item>
    <item>
      <title>SQL Queries to Complex JSON Objects</title>
      <dc:creator>Jacob Cohen</dc:creator>
      <pubDate>Tue, 02 Jun 2020 18:44:53 +0000</pubDate>
      <link>https://dev.to/harperfast/sql-queries-to-complex-objects-with-array-function-4p6</link>
      <guid>https://dev.to/harperfast/sql-queries-to-complex-objects-with-array-function-4p6</guid>
      <description>&lt;p&gt;How many times have you run into a situation where you wish you could do a SQL join without getting duplicate rows back? What if we could get a list "column" returned instead? &lt;a href="https://harperdb.io/" rel="noopener noreferrer"&gt;HarperDB&lt;/a&gt;’s ARRAY() function enables just that. In this post we’re going to take a look at a basic example of people with addresses and phone numbers. &lt;/p&gt;

&lt;h2&gt;
  
  
  Use Cases for the ARRAY() Function
&lt;/h2&gt;

&lt;p&gt;Most existing systems have trouble transforming relational data into hierarchical data. Typically large batch processes or ETL jobs exist to perform these data transformations. HarperDB can perform these transformations out-of-the-box with a single SQL query. This query effectively performs the job of an &lt;a href="https://harperdb.io/blog/what-is-an-orm-and-why-the-madness/" rel="noopener noreferrer"&gt;ORM&lt;/a&gt; without the need for bloated software. Don’t think this is possible? Keep reading.  &lt;/p&gt;

&lt;h2&gt;
  
  
  How the ARRAY() Function Works
&lt;/h2&gt;

&lt;p&gt;The HarperDB ARRAY() function, forthcoming in a future release, is an aggregate function, similar to COUNT, SUM, and AVG. The difference is that while standard aggregate functions return computation results, ARRAY() returns a list of data as a field. While this may not be intuitive to those, like myself, who have been using SQL for years, it does enable the developer to create complex JSON objects with a single query. Let’s take a look at an example use case… &lt;/p&gt;

&lt;h2&gt;
  
  
  Example Data
&lt;/h2&gt;

&lt;p&gt;We’ll be working with People, Phone Numbers, and Addresses. Each Address and/or Phone Number links back to a single Person. We have 10 person records, each with one or more phone numbers and addresses for a total of 20 addresses and 24 phone numbers.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fr231fsof7wrjdwmmerto.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fr231fsof7wrjdwmmerto.png" title="Array Example ERD" alt="Alt Text" width="561" height="238"&gt;&lt;/a&gt;&lt;em&gt;&lt;p&gt;Array Example ERD&lt;/p&gt;&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Connecting Person and Phone Number
&lt;/h2&gt;

&lt;p&gt;Let’s say I want to get all of the phone numbers for a person with ID 1. That’s fairly simple: I just query the phone number table for that person. But what happens if I also want to get the person data? I have to execute two queries and connect the data in my application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;person&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;person_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;phone&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;person_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now what happens if I want to get all people and all of their phone numbers? While I’d like to do a simple join, I can’t, because I’d end up with duplicate person data.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;person&lt;/span&gt; &lt;span class="k"&gt;LEFT&lt;/span&gt; &lt;span class="k"&gt;JOIN&lt;/span&gt; &lt;span class="n"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;phone&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;person_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;phone&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;person_id&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, again, I have to run two queries and aggregate the data together in my application.&lt;/p&gt;

&lt;p&gt;In HarperDB, we have the ARRAY() aggregate function which allows us to return this data, with no duplicates, in a single query (the example below joins to the address table, but the same pattern works for phone numbers). Remember, because ARRAY() is an aggregate function, we need to have a GROUP BY clause specified. In this case, since we are selecting multiple person fields, we need to specify all of them in our GROUP BY clause. Since we included our hash attribute, person_id, we will safely retrieve each person record.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;person_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
  &lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;first_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;middle_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;last_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;saluation&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;dob&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
  &lt;span class="n"&gt;ARRAY&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="k"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;addr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;address_type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;addressLine1&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;addr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;address_line_1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;addressLine2&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;addr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;address_line_2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;city&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;addr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;city&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;state&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;addr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;state&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;zip&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;addr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;zip_code&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;address&lt;/span&gt; 
&lt;span class="k"&gt;FROM&lt;/span&gt; 
  &lt;span class="n"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;person&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;person&lt;/span&gt; 
    &lt;span class="k"&gt;LEFT&lt;/span&gt; &lt;span class="k"&gt;JOIN&lt;/span&gt; &lt;span class="n"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;address&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;addr&lt;/span&gt; 
      &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;person_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;addr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;person_id&lt;/span&gt;
&lt;span class="k"&gt;GROUP&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; 
  &lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;person_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;first_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;middle_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;last_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;saluation&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;dob&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This returns a list of complex JSON objects where each Person object contains a list of Address objects. For example, the complex object for person ID 1 would look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"person_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"first_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Doug"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"middle_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"James"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"last_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Henley"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"saluation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Mr."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"dob"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"8/15/57"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"address"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"MAILING"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"addressLine1"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"94317 Roxbury Court"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"addressLine2"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Apt 102"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"city"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Tampa"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"state"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"FL"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"zip"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;33625&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"MAILING"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"addressLine1"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"35 Elgar Court"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"city"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Arvada"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"state"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"CO"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"zip"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;80005&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Connecting Person, Phone Number, and Address
&lt;/h2&gt;

&lt;p&gt;Now that we’ve shown how to aggregate list data from a single table, let’s take a look at how we can retrieve multiple lists within our complex JSON objects. Ordinarily, if I wanted to pull data for person, phone, and address, I would need three SQL queries.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;person&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;person_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;phone&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;person_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;address&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;person_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, if I were to put all three of those tables into a JOIN statement, I would receive a lot of duplicate data across all three tables: every person row would be repeated for each combination of that person’s phone numbers and addresses returned by the SQL statement below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; 
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;person&lt;/span&gt; 
  &lt;span class="k"&gt;LEFT&lt;/span&gt; &lt;span class="k"&gt;JOIN&lt;/span&gt; &lt;span class="n"&gt;phone&lt;/span&gt; 
    &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;person_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;phone&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;person_id&lt;/span&gt; 
  &lt;span class="k"&gt;LEFT&lt;/span&gt; &lt;span class="k"&gt;JOIN&lt;/span&gt; &lt;span class="n"&gt;address&lt;/span&gt;
    &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;person_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;address&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;person_id&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Moving back to HarperDB, we can query with the ARRAY() function to help us out with this. However, because we are joining across multiple tables, we may still see some duplicate data in the phone and address lists. This is the inherent nature of SQL JOINs. In order to solve this problem, HarperDB created the DISTINCT_ARRAY() wrapper function. This function can be placed around a standard ARRAY() function call to ensure a distinct (deduplicated) result set is returned. Now, to create our complex Person object with lists of both Phone and Address, we can write a SQL statement like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; 
  &lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;person_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;first_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;middle_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;last_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;saluation&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;dob&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;DISTINCT_ARRAY&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ARRAY&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="k"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;addr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;address_type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;addressLine1&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;addr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;address_line_1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;addressLine2&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;addr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;address_line_2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;city&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;addr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;city&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;state&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;addr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;state&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;zip&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;addr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;zip_code&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;}))&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;address&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;DISTINCT_ARRAY&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ARRAY&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="k"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;phone&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;phone_type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;num&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;phone&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;number&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;primaryFlag&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;phone&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;primary_flag&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;}))&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;phone&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;person&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;person&lt;/span&gt; 
  &lt;span class="k"&gt;LEFT&lt;/span&gt; &lt;span class="k"&gt;JOIN&lt;/span&gt; &lt;span class="n"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;address&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;addr&lt;/span&gt;
    &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;person_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;addr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;person_id&lt;/span&gt;
  &lt;span class="k"&gt;LEFT&lt;/span&gt; &lt;span class="k"&gt;JOIN&lt;/span&gt; &lt;span class="n"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;phone&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;phone&lt;/span&gt;
    &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;person_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;phone&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;person_id&lt;/span&gt;
&lt;span class="k"&gt;GROUP&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt;
  &lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;person_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;first_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;middle_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;last_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;saluation&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;person&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;dob&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The complex object for Person ID 1 returned from the above query looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"person_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"first_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Doug"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"middle_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"James"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"last_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Henley"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"saluation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Mr."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"dob"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"8/15/57"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"address"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"MAILING"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"line1"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"94317 Roxbury Court"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"line2"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Apt 102"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"city"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Tampa"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"state"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"FL"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"zip"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;33625&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"MAILING"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"line1"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"35 Elgar Court"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"city"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Arvada"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"state"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"CO"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"zip"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;80005&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"phone"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"REFERENCE"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"num"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"926-647-6907"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"primaryFlag"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"HOME"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"num"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"737-377-6038"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"primaryFlag"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With a single query in &lt;a href="https://harperdb.io/" rel="noopener noreferrer"&gt;HarperDB&lt;/a&gt;, we were able to transform flat relational SQL data into a complex, nested JSON object that's ready to use in a modern application!&lt;/p&gt;

&lt;h2&gt;
  
  
  Sample Data
&lt;/h2&gt;

&lt;p&gt;Here are links to CSVs of each table used in the above example. You can also view the data below.&lt;br&gt;
&lt;a href="https://array-function-sample-data.s3.us-east-2.amazonaws.com/person.csv" rel="noopener noreferrer"&gt;person.csv&lt;/a&gt;&lt;br&gt;
&lt;a href="https://array-function-sample-data.s3.us-east-2.amazonaws.com/address.csv" rel="noopener noreferrer"&gt;address.csv&lt;/a&gt;&lt;br&gt;
&lt;a href="https://array-function-sample-data.s3.us-east-2.amazonaws.com/phone.csv" rel="noopener noreferrer"&gt;phone.csv&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Person Table
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;person_id&lt;/th&gt;
&lt;th&gt;first_name&lt;/th&gt;
&lt;th&gt;middle_name&lt;/th&gt;
&lt;th&gt;last_name&lt;/th&gt;
&lt;th&gt;salutation&lt;/th&gt;
&lt;th&gt;dob&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Doug&lt;/td&gt;
&lt;td&gt;James&lt;/td&gt;
&lt;td&gt;Henley&lt;/td&gt;
&lt;td&gt;Mr.&lt;/td&gt;
&lt;td&gt;8/15/57&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Megan&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Creech&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;6/29/66&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;Michael&lt;/td&gt;
&lt;td&gt;Samuel&lt;/td&gt;
&lt;td&gt;Lang&lt;/td&gt;
&lt;td&gt;Mr.&lt;/td&gt;
&lt;td&gt;9/18/68&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;Charles&lt;/td&gt;
&lt;td&gt;Jay&lt;/td&gt;
&lt;td&gt;Cohen&lt;/td&gt;
&lt;td&gt;Mr.&lt;/td&gt;
&lt;td&gt;1/12/76&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;Gabby&lt;/td&gt;
&lt;td&gt;Sarah&lt;/td&gt;
&lt;td&gt;Hughes&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;9/30/82&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;Emily&lt;/td&gt;
&lt;td&gt;Alexandra&lt;/td&gt;
&lt;td&gt;Wood&lt;/td&gt;
&lt;td&gt;Mrs.&lt;/td&gt;
&lt;td&gt;1/18/64&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;Samantha&lt;/td&gt;
&lt;td&gt;Grace&lt;/td&gt;
&lt;td&gt;Choi&lt;/td&gt;
&lt;td&gt;Mrs.&lt;/td&gt;
&lt;td&gt;5/25/64&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;Hana&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Smith&lt;/td&gt;
&lt;td&gt;Ms.&lt;/td&gt;
&lt;td&gt;3/12/72&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;td&gt;Kent&lt;/td&gt;
&lt;td&gt;Richard&lt;/td&gt;
&lt;td&gt;Garrett&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;9/24/79&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;Kara&lt;/td&gt;
&lt;td&gt;Caitlin&lt;/td&gt;
&lt;td&gt;May&lt;/td&gt;
&lt;td&gt;Ms.&lt;/td&gt;
&lt;td&gt;9/17/90&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Address Table
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;address_id&lt;/th&gt;
&lt;th&gt;person_id&lt;/th&gt;
&lt;th&gt;address_type&lt;/th&gt;
&lt;th&gt;address_line_1&lt;/th&gt;
&lt;th&gt;address_line_2&lt;/th&gt;
&lt;th&gt;city&lt;/th&gt;
&lt;th&gt;state&lt;/th&gt;
&lt;th&gt;zip_code&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;MAILING&lt;/td&gt;
&lt;td&gt;94317 Roxbury Court&lt;/td&gt;
&lt;td&gt;Apt 102&lt;/td&gt;
&lt;td&gt;Tampa&lt;/td&gt;
&lt;td&gt;FL&lt;/td&gt;
&lt;td&gt;33625&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;BILLING&lt;/td&gt;
&lt;td&gt;9 Mayer Plaza&lt;/td&gt;
&lt;td&gt;#277&lt;/td&gt;
&lt;td&gt;Washington&lt;/td&gt;
&lt;td&gt;DC&lt;/td&gt;
&lt;td&gt;20430&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;MAILING&lt;/td&gt;
&lt;td&gt;99 Cascade Crossing&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Hartford&lt;/td&gt;
&lt;td&gt;CT&lt;/td&gt;
&lt;td&gt;6152&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;MAILING&lt;/td&gt;
&lt;td&gt;39094 Hoard Center&lt;/td&gt;
&lt;td&gt;#418&lt;/td&gt;
&lt;td&gt;Flushing&lt;/td&gt;
&lt;td&gt;NY&lt;/td&gt;
&lt;td&gt;11388&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;MAILING&lt;/td&gt;
&lt;td&gt;6 Waubesa Point&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Aurora&lt;/td&gt;
&lt;td&gt;CO&lt;/td&gt;
&lt;td&gt;80045&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;BILLING&lt;/td&gt;
&lt;td&gt;94209 Kinsman Place&lt;/td&gt;
&lt;td&gt;#135&lt;/td&gt;
&lt;td&gt;Atlanta&lt;/td&gt;
&lt;td&gt;GA&lt;/td&gt;
&lt;td&gt;30311&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;MAILING&lt;/td&gt;
&lt;td&gt;526 Barnett Hill&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Waco&lt;/td&gt;
&lt;td&gt;TX&lt;/td&gt;
&lt;td&gt;76711&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;BILLING&lt;/td&gt;
&lt;td&gt;9 Luster Trail&lt;/td&gt;
&lt;td&gt;#348&lt;/td&gt;
&lt;td&gt;Nashville&lt;/td&gt;
&lt;td&gt;TN&lt;/td&gt;
&lt;td&gt;37240&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;td&gt;BILLING&lt;/td&gt;
&lt;td&gt;33553 Talmadge Hill&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Bakersfield&lt;/td&gt;
&lt;td&gt;CA&lt;/td&gt;
&lt;td&gt;93386&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;MAILING&lt;/td&gt;
&lt;td&gt;21900 Rusk Drive&lt;/td&gt;
&lt;td&gt;Apt 8&lt;/td&gt;
&lt;td&gt;Harrisburg&lt;/td&gt;
&lt;td&gt;PA&lt;/td&gt;
&lt;td&gt;17121&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;MAILING&lt;/td&gt;
&lt;td&gt;35 Elgar Court&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Arvada&lt;/td&gt;
&lt;td&gt;CO&lt;/td&gt;
&lt;td&gt;80005&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;BILLING&lt;/td&gt;
&lt;td&gt;9 Tennessee Street&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Trenton&lt;/td&gt;
&lt;td&gt;NJ&lt;/td&gt;
&lt;td&gt;8619&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;13&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;MAILING&lt;/td&gt;
&lt;td&gt;0 Old Gate Alley&lt;/td&gt;
&lt;td&gt;Apt 439&lt;/td&gt;
&lt;td&gt;Wilkes Barre&lt;/td&gt;
&lt;td&gt;PA&lt;/td&gt;
&lt;td&gt;18768&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;BILLING&lt;/td&gt;
&lt;td&gt;3918 Messerschmidt Way&lt;/td&gt;
&lt;td&gt;Apt 234&lt;/td&gt;
&lt;td&gt;Oklahoma City&lt;/td&gt;
&lt;td&gt;OK&lt;/td&gt;
&lt;td&gt;73173&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;td&gt;BILLING&lt;/td&gt;
&lt;td&gt;41778 Stephen Circle&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Salt Lake City&lt;/td&gt;
&lt;td&gt;UT&lt;/td&gt;
&lt;td&gt;84145&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;16&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;BILLING&lt;/td&gt;
&lt;td&gt;50 Tony Terrace&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Sioux Falls&lt;/td&gt;
&lt;td&gt;SD&lt;/td&gt;
&lt;td&gt;57198&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;17&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;MAILING&lt;/td&gt;
&lt;td&gt;6 Hanson Trail&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Nashville&lt;/td&gt;
&lt;td&gt;TN&lt;/td&gt;
&lt;td&gt;37240&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;18&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;BILLING&lt;/td&gt;
&lt;td&gt;0 Darwin Terrace&lt;/td&gt;
&lt;td&gt;#144&lt;/td&gt;
&lt;td&gt;Montpelier&lt;/td&gt;
&lt;td&gt;VT&lt;/td&gt;
&lt;td&gt;5609&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;19&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;MAILING&lt;/td&gt;
&lt;td&gt;59265 Dakota Center&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Pittsburgh&lt;/td&gt;
&lt;td&gt;PA&lt;/td&gt;
&lt;td&gt;15279&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;20&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;td&gt;MAILING&lt;/td&gt;
&lt;td&gt;369 Badeau Road&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Miami&lt;/td&gt;
&lt;td&gt;FL&lt;/td&gt;
&lt;td&gt;33283&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Phone Table
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;phone_id&lt;/th&gt;
&lt;th&gt;person_id&lt;/th&gt;
&lt;th&gt;phone_type&lt;/th&gt;
&lt;th&gt;number&lt;/th&gt;
&lt;th&gt;primary_flag&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;REFERENCE&lt;/td&gt;
&lt;td&gt;926-647-6907&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;REFERENCE&lt;/td&gt;
&lt;td&gt;864-324-2292&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;WORK&lt;/td&gt;
&lt;td&gt;540-908-1691&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;HOME&lt;/td&gt;
&lt;td&gt;253-590-9734&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;CELL&lt;/td&gt;
&lt;td&gt;302-785-7313&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;REFERENCE&lt;/td&gt;
&lt;td&gt;670-198-4073&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;CELL&lt;/td&gt;
&lt;td&gt;923-662-5491&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;REFERENCE&lt;/td&gt;
&lt;td&gt;176-225-5902&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;td&gt;WORK&lt;/td&gt;
&lt;td&gt;228-536-6858&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;REFERENCE&lt;/td&gt;
&lt;td&gt;175-549-9915&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;HOME&lt;/td&gt;
&lt;td&gt;737-377-6038&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;WORK&lt;/td&gt;
&lt;td&gt;603-492-5375&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;13&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;WORK&lt;/td&gt;
&lt;td&gt;192-656-9676&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;REFERENCE&lt;/td&gt;
&lt;td&gt;537-446-7971&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;WORK&lt;/td&gt;
&lt;td&gt;627-936-7236&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;16&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;WORK&lt;/td&gt;
&lt;td&gt;762-324-7571&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;17&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;WORK&lt;/td&gt;
&lt;td&gt;521-906-6326&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;18&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;REFERENCE&lt;/td&gt;
&lt;td&gt;390-785-1962&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;19&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;CELL&lt;/td&gt;
&lt;td&gt;787-954-6675&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;20&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;WORK&lt;/td&gt;
&lt;td&gt;168-382-4627&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;21&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;WORK&lt;/td&gt;
&lt;td&gt;199-264-7443&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;22&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;REFERENCE&lt;/td&gt;
&lt;td&gt;212-508-4836&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;23&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;WORK&lt;/td&gt;
&lt;td&gt;493-724-1771&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;24&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;CELL&lt;/td&gt;
&lt;td&gt;156-617-7276&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

</description>
      <category>sql</category>
      <category>database</category>
      <category>json</category>
      <category>harperdb</category>
    </item>
  </channel>
</rss>
