Harish Kotra (he/him)
Building Parallel You: A Chrome Extension + Cloud Run AI Layer for Social Writing

TL;DR

Parallel You is a production-deployed AI layer that augments social conversations on X, LinkedIn, and Reddit. It combines a Chrome extension overlay with a Cloud Run backend to generate persona-specific replies, predict post outcomes, rewrite drafts with Truth Mode, and support async reply workflows.

  • Stack: Chrome MV3 + Node/Express + OpenAI + Cloud Run

Problem

Most creators and professionals face the same social friction:

  • They know what they want to say, but not how to say it for a specific audience.
  • They post quickly without evaluating tone/risk.
  • They lose momentum when they can’t reply right away.

Existing tools are usually detached from the actual platform context.

Parallel You solves that by embedding directly in the reply composer.


Product Principles

  1. Assistive, not autonomous
     • No auto-posting.
     • User always approves final text.

  2. Context-native UX
     • Overlay appears where users already write.

  3. Observable communication quality
     • Predict tone/engagement/risk before posting.

  4. Safety-first defaults
     • Shadow tracking is opt-in.
     • Server-side API key handling.


High-Level Architecture

Why this split?

  • contentScript.js handles DOM/platform interaction.
  • background.js centralizes API calls and extension config.
  • Cloud Run backend keeps model prompts/secrets server-side.
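
As a rough sketch of that split, background.js can act as the single relay between content scripts and the Cloud Run backend. The endpoint URL and message `type` names below are illustrative, not the project's actual values:

```javascript
// background.js (MV3 service worker) — hypothetical message relay.
// BACKEND_URL and the message "type" values are assumptions for this sketch.
const BACKEND_URL = "https://parallel-you-backend-example.a.run.app";

// Map an extension message type to a backend route.
function routeFor(type) {
  const routes = {
    generateReply: "/generate-reply",
    predict: "/predict",
    truth: "/truth",
  };
  return routes[type] || null;
}

async function callBackend(type, payload) {
  const route = routeFor(type);
  if (!route) throw new Error(`Unknown message type: ${type}`);
  const res = await fetch(BACKEND_URL + route, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`Backend error ${res.status}`);
  return res.json();
}

// Register the listener only when running inside the extension runtime.
if (typeof chrome !== "undefined" && chrome.runtime?.onMessage) {
  chrome.runtime.onMessage.addListener((msg, _sender, sendResponse) => {
    callBackend(msg.type, msg.payload)
      .then((data) => sendResponse({ ok: true, data }))
      .catch((err) => sendResponse({ ok: false, error: err.message }));
    return true; // keep the message channel open for the async response
  });
}
```

Because contentScript.js never holds credentials or endpoints, swapping the backend or adding a route is a background.js-only change.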

Core Workflows

1) Persona Reply Generation

User action: click Generate Reply.

Backend endpoint:

  • POST /generate-reply

Service behavior:

  • Validate input
  • Normalize persona
  • Build system prompt template
  • Call OpenAI
  • Return one concise reply
```javascript
const result = await generatePersonaReply(postContent, persona);
return res.json(result);
```
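
One plausible shape for that service layer is sketched below. The persona names, `normalizePersona`, and `buildSystemPrompt` are illustrative, and the model call is injected so the logic stays testable without hitting OpenAI:

```javascript
// Hypothetical /generate-reply service layer. Persona names and prompt
// wording are assumptions, not the project's actual values.
const KNOWN_PERSONAS = new Set(["professional", "casual", "witty"]);

function normalizePersona(persona) {
  const p = String(persona ?? "").trim().toLowerCase();
  return KNOWN_PERSONAS.has(p) ? p : "professional"; // safe fallback
}

function buildSystemPrompt(persona) {
  return `You write one concise social reply in a ${persona} voice. ` +
         `Return only the reply text, no preamble.`;
}

async function generatePersonaReply(postContent, persona, callModel) {
  // Validate input before spending a model call.
  if (!postContent || !postContent.trim()) {
    const err = new Error("postContent is required");
    err.status = 400;
    throw err;
  }
  const normalized = normalizePersona(persona);
  const reply = await callModel(buildSystemPrompt(normalized), postContent);
  return { persona: normalized, reply: reply.trim() };
}
```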

2) Prediction Engine

User action: click Predict while drafting.

Backend endpoint:

  • POST /predict

Prompt requires strict JSON shape:

  • tone
  • engagement_prediction
  • risk_flags

This enables deterministic UI rendering and safer contract handling.
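
A minimal validator for that contract might look like this (field names match the list above; everything else is an assumption):

```javascript
// Parse and validate the model's /predict output against the strict
// JSON contract: { tone, engagement_prediction, risk_flags }.
function parsePrediction(modelOutput) {
  let parsed;
  try {
    parsed = JSON.parse(modelOutput);
  } catch {
    throw new Error("Model did not return valid JSON");
  }
  const { tone, engagement_prediction, risk_flags } = parsed;
  if (typeof tone !== "string" ||
      typeof engagement_prediction !== "string" ||
      !Array.isArray(risk_flags)) {
    throw new Error("Prediction JSON missing required fields");
  }
  // Return only the known fields so extra model chatter never reaches the UI.
  return { tone, engagement_prediction, risk_flags };
}
```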

3) Truth Mode

User action: click Truth Mode.

Backend endpoint:

  • POST /truth

Prompt reframes user intent into direct but bounded language.
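
The post doesn't show the actual prompt, but a bounded reframing prompt could be built along these lines (wording entirely illustrative):

```javascript
// Hypothetical Truth Mode prompt builder — the project's real prompt
// wording is not shown in this post.
function buildTruthPrompt(draft) {
  return [
    "Rewrite the user's draft to say what they actually mean:",
    "- Be direct and specific; remove hedging and filler.",
    "- Stay respectful: no insults, no claims you cannot support.",
    "- Keep roughly the original length.",
    "",
    `Draft: ${draft}`,
  ].join("\n");
}
```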

4) Reply Later Mode

User action: click Reply Later.

Backend endpoint:

  • POST /schedule-reply
  • Later retrieval via GET /scheduled-replies

Current demo implementation uses delayed in-process scheduling. In production, this should shift to Cloud Tasks + Pub/Sub.
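
The demo scheduler can be as small as this sketch (IDs and the callback are illustrative). State lives in memory, which is exactly why production should move to Cloud Tasks + Pub/Sub:

```javascript
// Demo-grade in-process scheduler backing /schedule-reply and
// GET /scheduled-replies. All state is lost on restart.
const scheduled = new Map();
let nextId = 1;

function scheduleReply(reply, delayMs, onDue = () => {}) {
  const id = String(nextId++);
  const entry = { id, reply, dueAt: Date.now() + delayMs, status: "pending" };
  scheduled.set(id, entry);
  const timer = setTimeout(() => {
    entry.status = "due";
    onDue(entry); // e.g. notify the extension to resurface the draft
  }, delayMs);
  timer.unref?.(); // don't keep the process alive just for timers
  return entry;
}

function listScheduledReplies() {
  return [...scheduled.values()];
}
```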


LinkedIn-Specific Challenges (and Fix)

LinkedIn comment editors often use nested contenteditable layers and selection-driven state.

Initial bug symptoms:

  • Predict button click could steal focus.
  • Composer text was sometimes empty at read time.

Fixes implemented:

  • Focus-safe panel interactions (mousedown.preventDefault() on controls)
  • Selection tracking for LinkedIn editors
  • Composer resolution from active element + selection anchor + platform-specific selectors
  • Last-known composer text caching for transient re-render cases
```javascript
document.addEventListener("selectionchange", () => {
  if (platform !== "linkedin") return;
  const composer = resolveComposerFromNode(document.getSelection?.()?.anchorNode);
  if (composer) activeComposer = composer;
});
```
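
The two supporting pieces can be sketched as below. The selector list is an assumption (LinkedIn's comment box is a Quill-style contenteditable); the extension's real selectors may differ:

```javascript
// Selector list is an assumption for this sketch.
const COMPOSER_SELECTOR = '[contenteditable="true"], .ql-editor';

// Walk up from a selection anchor (often a text node) to the composer.
function resolveComposerFromNode(node) {
  let el = node && node.nodeType === 3 ? node.parentElement : node;
  while (el) {
    if (typeof el.matches === "function" && el.matches(COMPOSER_SELECTOR)) {
      return el;
    }
    el = el.parentElement;
  }
  return null;
}

// Focus-safe panel controls: preventDefault on mousedown stops the
// button press from stealing focus (and the selection) from the editor,
// so the click handler still sees the composer's text.
function makeFocusSafe(button) {
  button.addEventListener("mousedown", (e) => e.preventDefault());
}
```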

API Design

Routes

  • POST /observe
  • POST /generate-reply
  • POST /predict
  • POST /truth
  • POST /schedule-reply
  • GET /scheduled-replies
  • GET /health
  • GET /privacy

Validation and Error Strategy

  • Input checks at route layer
  • Centralized error-to-HTTP translation
  • Model call failures surface as explicit API errors
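
Centralized error-to-HTTP translation can live in one Express-style error middleware. It's a plain function here so it runs without Express installed; the status-code mapping is an assumption:

```javascript
// Express-style error middleware: (err, req, res, next).
// Routes throw errors carrying err.status (e.g. 400 for validation);
// anything without a status is treated as an internal failure.
function errorToHttp(err, _req, res, _next) {
  const status = Number.isInteger(err.status) ? err.status : 500;
  // Surface explicit API errors; hide internals on unexpected failures.
  const message = status < 500 ? err.message : "Internal error";
  res.status(status).json({ error: message });
}
```

Registered last with `app.use(errorToHttp)`, it keeps every route's catch path consistent.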

Deployment Model

Container

Dockerfile uses node:20-alpine, installs production deps, copies server code + public assets, exposes 8080.
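
A minimal Dockerfile matching that description might look like the following (file paths and the entrypoint name are illustrative):

```dockerfile
FROM node:20-alpine
WORKDIR /app

# Install production dependencies only.
COPY package*.json ./
RUN npm ci --omit=dev

# Copy server code and public assets (paths are assumptions).
COPY server ./server
COPY public ./public

# Cloud Run injects PORT; 8080 is its default.
ENV PORT=8080
EXPOSE 8080
CMD ["node", "server/index.js"]
```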

Cloud Run

  • Build with Cloud Build
  • Deploy public endpoint
  • Inject runtime env vars
```shell
gcloud run deploy parallel-you-backend \
  --image gcr.io/$PROJECT_ID/parallel-you-backend \
  --region us-central1 \
  --allow-unauthenticated \
  --set-env-vars OPENAI_API_KEY="...",OPENAI_MODEL="gpt-4.1-mini"
```

Security and Privacy Posture

  • API key remains server-side only.
  • Extension uses backend endpoint, not direct OpenAI calls.
  • Privacy policy hosted publicly for Web Store compliance.
  • Shadow tracking is explicit opt-in and reversible.

Contribution Model

If you fork this project:

  1. Keep extension and backend contracts stable.
  2. Add tests around selector regressions for each platform surface.
  3. Prefer pluggable adapter modules for new platforms.
  4. Move demo in-memory storage to external persistence before scale.

Future Work

  1. Replace setTimeout scheduler with Cloud Tasks + Pub/Sub.
  2. Add JWT-based user identity and per-user memory partitions.
  3. Introduce policy filters for risky outputs.
  4. Add analytics for reply acceptance/edit rates.
  5. Add persona training from approved historical posts.

Parallel You demonstrates a practical pattern for AI products:

  • keep the UX inside existing workflows,
  • keep the model integration server-side,
  • and keep humans in control of publication.

That pattern is reusable across many domains beyond social writing.

GitHub: https://github.com/harishkotra/parallel-you

Chrome Extension: https://chromewebstore.google.com/detail/parallel-you/pdjlagonmnpdopnfgogmdapmgamblife
