TL;DR
Parallel You is a production-deployed AI layer that augments social conversations on X, LinkedIn, and Reddit. It combines a Chrome extension overlay with a Cloud Run backend to generate persona-specific replies, predict post outcomes, rewrite drafts with Truth Mode, and support async reply workflows.
- Stack: Chrome MV3 + Node/Express + OpenAI + Cloud Run
Problem
Most creators and professionals face the same social friction:
- They know what they want to say, but not how to say it for a specific audience.
- They post quickly without evaluating tone/risk.
- They lose momentum when they can't reply right away.
Existing tools are usually detached from the actual platform context.
Parallel You solves that by embedding directly in the reply composer.
Product Principles
- Assistive, not autonomous: no auto-posting; the user always approves the final text.
- Context-native UX: the overlay appears where users already write.
- Observable communication quality: predict tone/engagement/risk before posting.
- Safety-first defaults: shadow tracking is opt-in, and API keys are handled server-side.
High-Level Architecture
Why this split?
- contentScript.js handles DOM/platform interaction.
- background.js centralizes API calls and extension config.
- The Cloud Run backend keeps model prompts and secrets server-side.
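This split can be sketched as a standard MV3 message relay. The backend URL, message types, and payload shape below are illustrative assumptions, not the project's actual contract:

```javascript
// Hypothetical message shape: { type: "GENERATE_REPLY", payload: {...} }.
// buildBackendRequest is a pure helper the background script might use to
// translate an extension message into a backend fetch request.
const BACKEND_URL = "https://parallel-you-backend.example.run.app"; // assumed URL

function buildBackendRequest(message) {
  const routes = {
    GENERATE_REPLY: "/generate-reply",
    PREDICT: "/predict",
    TRUTH: "/truth",
  };
  const path = routes[message.type];
  if (!path) throw new Error(`Unknown message type: ${message.type}`);
  return {
    url: BACKEND_URL + path,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(message.payload),
    },
  };
}

// In the MV3 service worker, the relay itself would look roughly like this
// (guarded so the helper above can also run outside the extension runtime):
if (typeof chrome !== "undefined" && chrome.runtime?.onMessage) {
  chrome.runtime.onMessage.addListener((message, _sender, sendResponse) => {
    const { url, options } = buildBackendRequest(message);
    fetch(url, options)
      .then((r) => r.json())
      .then(sendResponse);
    return true; // keep the message channel open for the async response
  });
}
```

Keeping the route table in one place means the content script never learns the backend's URL scheme, which is what lets prompts and keys stay server-side.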
Core Workflows
1) Persona Reply Generation
User action: click Generate Reply.
Backend endpoint:
POST /generate-reply
Service behavior:
- Validate input
- Normalize persona
- Build system prompt template
- Call OpenAI
- Return one concise reply
const result = await generatePersonaReply(postContent, persona);
return res.json(result);
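The steps above can be sketched as a handler around that call. The validation rules, persona list, and response shape are assumptions here; the model call is injected so the real prompt template can stay server-side:

```javascript
// Hypothetical shape of the /generate-reply handler.
const KNOWN_PERSONAS = ["professional", "casual", "contrarian"]; // assumed list

function normalizePersona(persona) {
  const p = String(persona || "").trim().toLowerCase();
  return KNOWN_PERSONAS.includes(p) ? p : "professional"; // fall back to a default
}

async function handleGenerateReply(req, res, callModel) {
  const { postContent, persona } = req.body || {};
  // Validate input before spending a model call.
  if (!postContent || typeof postContent !== "string") {
    return res.status(400).json({ error: "postContent is required" });
  }
  const normalized = normalizePersona(persona);
  // callModel wraps the OpenAI call; the system prompt template lives with it.
  const reply = await callModel(postContent, normalized);
  return res.json({ reply, persona: normalized });
}
```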
2) Prediction Engine
User action: click Predict while drafting.
Backend endpoint:
POST /predict
Prompt requires strict JSON shape:
- tone
- engagement_prediction
- risk_flags
This enables deterministic UI rendering and safer contract handling.
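A defensive parser for that contract might look like the following sketch; the exact field types are assumptions inferred from the field names:

```javascript
// Parse and validate the model's prediction output. Models sometimes wrap
// JSON in prose or code fences, so slice out the object before parsing.
function parsePrediction(raw) {
  const start = raw.indexOf("{");
  const end = raw.lastIndexOf("}");
  if (start === -1 || end === -1) throw new Error("No JSON object in model output");
  let data;
  try {
    data = JSON.parse(raw.slice(start, end + 1));
  } catch {
    throw new Error("Model did not return valid JSON");
  }
  const { tone, engagement_prediction, risk_flags } = data;
  if (typeof tone !== "string" ||
      typeof engagement_prediction !== "string" ||
      !Array.isArray(risk_flags)) {
    throw new Error("Prediction JSON missing required fields");
  }
  // Return only the contracted fields so UI rendering stays deterministic.
  return { tone, engagement_prediction, risk_flags };
}
```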
3) Truth Mode
User action: click Truth Mode.
Backend endpoint:
POST /truth
Prompt reframes user intent into direct but bounded language.
4) Reply Later Mode
User action: click Reply Later.
Backend endpoint:
POST /schedule-reply
Later retrieval via:
GET /scheduled-replies
Current demo implementation uses delayed in-process scheduling. In production, this should shift to Cloud Tasks + Pub/Sub.
LinkedIn-Specific Challenges (and Fix)
LinkedIn comment editors often use nested contenteditable layers and selection-driven state.
Initial bug symptoms:
- Predict button click could steal focus.
- Composer text was sometimes empty at read time.
Fixes implemented:
- Focus-safe panel interactions (mousedown.preventDefault() on controls)
- Selection tracking for LinkedIn editors
- Composer resolution from active element + selection anchor + platform-specific selectors
- Last-known composer text caching for transient re-render cases
document.addEventListener("selectionchange", () => {
if (platform !== "linkedin") return;
const composer = resolveComposerFromNode(document.getSelection?.()?.anchorNode);
if (composer) activeComposer = composer;
});
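The focus-safe control fix can be sketched as a small helper; `makeFocusSafe` is a hypothetical name for what the panel wiring might do:

```javascript
// Prevent panel buttons from stealing focus (and thus the selection) from
// the LinkedIn composer. mousedown fires before focus moves, so cancelling
// it keeps the caret where the user left it; the click event still fires.
function makeFocusSafe(control) {
  control.addEventListener("mousedown", (event) => {
    event.preventDefault();
  });
}
```

In the content script this would be applied to every control in the overlay panel before the user can interact with it.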
API Design
Routes
POST /observe
POST /generate-reply
POST /predict
POST /truth
POST /schedule-reply
GET /scheduled-replies
GET /health
GET /privacy
Validation and Error Strategy
- Input checks at route layer
- Centralized error-to-HTTP translation
- Model call failures surface as explicit API errors
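Centralized error-to-HTTP translation might look like the sketch below; the error class names are illustrative assumptions:

```javascript
// Hypothetical error classes thrown by the service layer.
class ValidationError extends Error {}
class UpstreamModelError extends Error {}

// One place decides how a thrown error maps onto an HTTP response.
function errorToHttp(err) {
  if (err instanceof ValidationError) return { status: 400, error: err.message };
  if (err instanceof UpstreamModelError) return { status: 502, error: "Model call failed" };
  return { status: 500, error: "Internal server error" };
}

// Express-style error middleware wrapping the translation above.
function errorMiddleware(err, _req, res, _next) {
  const { status, error } = errorToHttp(err);
  res.status(status).json({ error });
}
```

Routes then just `throw` or `next(err)`, so no handler hand-rolls its own status codes.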
Deployment Model
Container
Dockerfile uses node:20-alpine, installs production deps, copies server code + public assets, exposes 8080.
Cloud Run
- Build with Cloud Build
- Deploy public endpoint
- Inject runtime env vars
gcloud run deploy parallel-you-backend \
--image gcr.io/$PROJECT_ID/parallel-you-backend \
--region us-central1 \
--allow-unauthenticated \
--set-env-vars OPENAI_API_KEY="...",OPENAI_MODEL="gpt-4.1-mini"
Security and Privacy Posture
- API key remains server-side only.
- Extension uses backend endpoint, not direct OpenAI calls.
- Privacy policy hosted publicly for Web Store compliance.
- Shadow tracking is explicit opt-in and reversible.
Contribution Model
If you fork this project:
- Keep extension and backend contracts stable.
- Add tests around selector regressions for each platform surface.
- Prefer pluggable adapter modules for new platforms.
- Move demo in-memory storage to external persistence before scale.
Future Work
- Replace setTimeout scheduler with Cloud Tasks + Pub/Sub.
- Add JWT-based user identity and per-user memory partitions.
- Introduce policy filters for risky outputs.
- Add analytics for reply acceptance/edit rates.
- Add persona training from approved historical posts.
Parallel You demonstrates a practical pattern for AI products:
- keep the UX inside existing workflows,
- keep the model integration server-side,
- and keep humans in control of publication.
That pattern is reusable across many domains beyond social writing.
Github: https://github.com/harishkotra/parallel-you
Chrome Extension: https://chromewebstore.google.com/detail/parallel-you/pdjlagonmnpdopnfgogmdapmgamblife
