DEV Community

CIZO
Architecting a Zero-Touch AI Delivery System with Make.com, GPT-4 & Google Workspace

TL;DR — We built a 5-workflow automation pipeline that triggers on Stripe payment, pulls user context from a data store, runs 15 sequential GPT-4 completions, assembles a branded Google Doc, and delivers it to the buyer — all in under 5 minutes, with zero manual intervention.


System Overview

This is a production architecture breakdown of an AI-powered delivery system built for a personal-brand products business. Two products, two independent pipelines, one shared data layer.

[Voice Form] ──► [3 x Make.com Workflows] ──► [Make.com Data Store + Airtable]
                                                          │
                        ┌─────────────────────────────────┤
                        │                                 │
              [Stripe: Playbook]               [Stripe: Content Machine]
                        │                                 │
              [Playbook Workflow]           [Content Machine Workflow]
                        │                                 │
              [15 x GPT-4 Completions]      [Whisper → GPT-4 → Leonardo AI]
                        │                                 │
              [Google Doc Assembly]         [Google Doc + Slides + Drive]
                        │                                 │
                  [Email Delivery]               [Email Delivery]
                        │                                 │
                  [Airtable Log]                 [Airtable Log]

Entry Point 1 — The Voice Form (Top of Funnel)

Before any purchase, users complete a Tally voice form. This is the data collection layer — it captures:

  • Name, email, phone
  • Niche and area of expertise
  • Goals and target audience
  • Tone of voice preferences
  • Platform focus (LinkedIn, Instagram, YouTube, etc.)

On submission, three Make.com workflows fire simultaneously:

Workflow 1 — Data Store Write

Trigger: Tally form webhook
Action:  Write all form fields to Make.com Data Store
Key:     user_email (used as lookup key at purchase time)
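Make.com handles this step visually, but the logic is easy to picture in code. A minimal sketch, with a plain dict standing in for the Data Store and illustrative field names (the real Tally payload schema differs):

```python
# A dict stands in for the Make.com Data Store; field names are illustrative.
data_store = {}

def handle_tally_webhook(payload: dict) -> None:
    """Write all form fields to the store, keyed by user_email."""
    email = payload["email"].strip().lower()  # normalise so purchase-time lookups match
    data_store[email] = {
        "name": payload.get("name"),
        "niche": payload.get("niche"),
        "goals": payload.get("goals"),
        "tone_preferences": payload.get("tone"),
        "platforms": payload.get("platforms", []),
    }
```

Normalising the email at write time matters: the Stripe checkout email is the lookup key later, and a case mismatch would silently miss the record.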

Workflow 2 — Airtable CRM Upsert

Trigger: Tally form webhook
Action:  Search Airtable for existing record by email
         → If found: UPDATE record with new form data
         → If not found: CREATE new record
Status:  Set to "New User" or "User Updated"
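The search-then-branch upsert can be sketched as a pure function over an in-memory stand-in for the Users table (in production this is a Make.com search module feeding a router):

```python
users = []  # stands in for the Airtable Users table

def upsert_user(email: str, fields: dict) -> dict:
    """Search by email; update the record if found, otherwise create one."""
    for record in users:
        if record["email"] == email:
            record.update(fields)              # found -> UPDATE with new form data
            record["status"] = "User Updated"
            return record
    record = {"email": email, **fields, "status": "New User"}  # not found -> CREATE
    users.append(record)
    return record
```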

Workflow 3 — Additional Data Processing

Trigger: Tally form webhook
Action:  Supplementary processing and storage logic

Why split into 3 workflows?
Separation of concerns. Each workflow has a single responsibility. If Airtable goes down, the data store write still succeeds. Easier to debug, easier to extend.


Entry Point 2 — Stripe Payment Webhooks

Both main pipelines are payment-triggered. Stripe fires a checkout.session.completed webhook into Make.com when a purchase completes.

Each product has its own dedicated webhook endpoint → its own Make.com scenario. This keeps the pipelines fully independent — a failure in one never affects the other.
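Make.com's webhook module receives these events directly, but if you ever front the pipeline with your own endpoint (as suggested in the closing section), you need to verify Stripe's signature yourself. A sketch of Stripe's documented `t=…,v1=…` HMAC-SHA256 scheme, standard library only:

```python
import hashlib
import hmac
import time

def verify_stripe_signature(payload: bytes, sig_header: str, secret: str,
                            tolerance: int = 300) -> bool:
    """Validate a Stripe-Signature header: v1 = HMAC-SHA256(secret, f"{t}.{body}")."""
    parts = dict(p.split("=", 1) for p in sig_header.split(","))
    ts, given = parts["t"], parts["v1"]
    signed = ts.encode() + b"." + payload
    expected = hmac.new(secret.encode(), signed, hashlib.sha256).hexdigest()
    fresh = abs(time.time() - int(ts)) <= tolerance  # reject replayed events
    return fresh and hmac.compare_digest(expected, given)
```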


Pipeline 1 — The Playbook Workflow

Step 1: User Validation & Context Retrieval

1. Search Airtable by email → validate user exists
2. GET from Make.com Data Store using email as key
   → Retrieves all voice form answers stored at top of funnel
3. GET brand archetype reference doc from Google Docs
   → Used as a reference document in AI prompts

Step 2: 15 Sequential GPT-4 Completions

This is the core of the Playbook pipeline. Each completion generates one section of the brand strategy document.

For each completion:
  1. Build prompt (user context + archetype reference + section instructions)
  2. POST to OpenAI /v1/chat/completions
  3. Parse and format response
  4. Sleep buffer (avoid rate limits)
  5. Store output variable for Google Doc assembly
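In Python terms the loop looks roughly like this; `complete` is a stand-in for the `/v1/chat/completions` call, and the sleep length and prompt wording are illustrative:

```python
import time

SECTIONS = ["Tone of Voice", "Niche of Genius", "Claim to Fame"]  # ...15 in total

def generate_playbook(user_context: str, archetype_ref: str, complete,
                      delay: float = 2.0) -> dict:
    """Run sections in order, feeding earlier outputs into later prompts."""
    outputs = {}
    for section in SECTIONS:
        prior = "\n".join(f"{k}: {v}" for k, v in outputs.items())
        prompt = (f"User context:\n{user_context}\n\n"
                  f"Archetype reference:\n{archetype_ref}\n\n"
                  f"Previously generated sections:\n{prior}\n\n"
                  f"Write the '{section}' section.")
        outputs[section] = complete(prompt)       # POST to /v1/chat/completions
        time.sleep(delay)                         # sleep buffer to avoid 429s
    return outputs
```

Note how `prior` grows each iteration — that is the cumulative-context pattern described in the next subsection.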

The 15 sections generated:

#    Section            Description
1    Tone of Voice      How the user communicates
2    Niche of Genius    Their specific expertise area
3    Claim to Fame      Unique credibility statement
4    Tagline            One-line brand statement
5    Buyer Persona      Ideal client profile
6    Buyer Journey      Client journey stages
7    Sales Navigator    Strategic sales positioning
8    Keywords           SEO & content keywords
9    About Section      3-part bio combining all sections
10   LinkedIn Bio       Platform-optimised profile copy
11   YouTube Bio        Channel description
12   Instagram Bio      150-character profile copy
13   Facebook Bio       Page description copy
14   Brand Archetype    Personality archetype classification
15   Content Pillars    Core content themes

Key prompt engineering consideration:
Each completion receives the outputs of previous completions as context. By completion 9 (About Section), the prompt includes tone of voice, niche, claim to fame, tagline, and buyer persona — creating a coherent, internally consistent document.

Step 3: Google Doc Assembly

1. Copy branded Google Doc template (via Drive API)
2. Use Docs API to replace placeholder tokens with AI outputs
   e.g. {{TONE_OF_VOICE}} → generated content
3. Set document sharing permissions
4. Store Doc URL in Airtable record
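The token replacement maps directly onto the Docs API's `documents.batchUpdate` with `replaceAllText` requests. A sketch that builds the request body from the stored outputs (service setup omitted):

```python
def build_replace_requests(outputs: dict) -> list:
    """One replaceAllText request per {{TOKEN}} placeholder in the template."""
    return [
        {"replaceAllText": {
            "containsText": {"text": "{{" + token + "}}", "matchCase": True},
            "replaceText": text,
        }}
        for token, text in outputs.items()
    ]

# Usage (googleapiclient Docs service assumed):
# docs.documents().batchUpdate(documentId=doc_id,
#     body={"requests": build_replace_requests(outputs)}).execute()
```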

Step 4: Delivery & CRM Update

1. Send delivery email with Google Doc link
2. Update Airtable record:
   - status: "Playbook Delivered"
   - playbook_url: [doc link]
   - delivered_at: [timestamp]

Pipeline 2 — The Content Machine Workflow

This pipeline is more complex than the Playbook: it involves file handling, transcription, multi-modal AI, and Drive folder management.

Step 1: Email Validation

Trigger: Tally form submission (video upload + contact details)
Action:  Search Airtable by email → validate and link to existing record

Step 2: File Routing Logic

The system handles three input types:

IF file_type == "video/*":
   CloudConvert: video → MP3
   OpenAI Whisper: MP3 → transcript text

ELSE IF file_type == "audio/*":
   OpenAI Whisper: audio → transcript text (skip conversion)

ELSE IF transcript_provided == true:
   Use provided transcript directly (skip conversion + transcription)
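The same routing, sketched as a pure function (step names are illustrative, not Make.com module names):

```python
from typing import Optional

def route_input(file_type: Optional[str], transcript_provided: bool) -> list:
    """Decide which processing steps a submission needs."""
    if file_type and file_type.startswith("video/"):
        return ["cloudconvert_to_mp3", "whisper_transcribe"]
    if file_type and file_type.startswith("audio/"):
        return ["whisper_transcribe"]              # skip conversion
    if transcript_provided:
        return ["use_provided_transcript"]         # skip conversion + transcription
    raise ValueError("No usable input: need a video, audio file, or transcript")
```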

Why this matters architecturally: Different buyers submit different file types. The routing logic means the pipeline handles all cases gracefully without requiring users to pre-convert anything.

Step 3: Playbook PDF Processing (Optional)

IF user has Playbook PDF:
  1. Upload PDF to OpenAI Files API
  2. Extract content via file retrieval
  3. Delete file from OpenAI (cleanup)
  4. Use extracted content as brand reference in content prompts

This is the cross-product integration point — The Content Machine uses The Playbook's content to generate brand-consistent output.

Step 4: GPT-4 Content Generation

Six content outputs, each with a dedicated generation pass and an emoji-cleaning pass:

Pass 1:  AI Content Strategy Overview    → Formatter → Emoji clean
Pass 2:  Newsletter Article              → Formatter → Emoji clean
Pass 3:  Blog Post                       → Formatter → Emoji clean
Pass 4:  Hashtag Set                     → Formatter → Emoji clean
Pass 5:  LinkedIn Carousel Copy          → Formatter → Emoji clean
Pass 6:  [Additional output]             → Formatter → Emoji clean

Why the emoji cleaning pass?
GPT-4 frequently inserts emojis in content outputs by default. For professional brand copy destined for a Google Doc, this needs stripping. A dedicated cleaning completion is cleaner than prompt engineering alone.

Step 5: Google Drive Folder Management

1. Search Airtable for existing Drive folder ID for this user
   IF exists: use existing folder IDs
   IF not exists:
     → Create main folder: "[User Name] - CIZO Content"
     → Create subfolder: "Outputs"
     → Store folder IDs in Airtable for future runs
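The get-or-create pattern, sketched with a dict standing in for the Airtable record and a placeholder `drive.create_folder` call (the real module is Drive `files.create` with the folder MIME type):

```python
def get_or_create_folders(email, user_name, crm, drive):
    """Return (main_id, output_id), creating and persisting IDs on first run."""
    record = crm.setdefault(email, {})
    if record.get("drive_folder_id"):                   # subsequent run: reuse
        return record["drive_folder_id"], record["drive_output_folder_id"]
    main_id = drive.create_folder(f"{user_name} - CIZO Content", parent=None)
    out_id = drive.create_folder("Outputs", parent=main_id)
    record["drive_folder_id"] = main_id                 # persist for future runs
    record["drive_output_folder_id"] = out_id
    return main_id, out_id
```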

The folder ID persistence pattern is important. Users can resubmit videos for new content packages. On second and subsequent runs, the system finds the existing folder and adds to it rather than creating duplicates.

Step 6: Asset Generation & Upload

1. Leonardo AI: Generate custom image from content theme prompt
2. Download image from Leonardo CDN
3. Upload image to user's Google Drive output folder
4. Upload MP3 audio to Google Drive output folder
5. Create LinkedIn carousel in Google Slides:
   a. Copy branded Slides template
   b. Apply custom brand colours via Slides API
   c. Populate slide content with carousel copy

Step 7: Google Doc Assembly & Delivery

1. Copy branded Google Doc template
2. Populate with all generated content sections
3. Insert carousel link + image reference
4. Insert headshot if uploaded
5. Create Airtable record logging all output URLs and folder IDs
6. Generate short URL via Short.cm API
7. Send delivery email with short URL

Data Layer — Airtable CRM Schema

Every user interaction updates a central Airtable record. The key fields:

Users Table:
├── email (primary key)
├── name, phone
├── niche, goals, tone_preferences
├── status [New User | Updated | Playbook Delivered | Content Delivered]
├── playbook_url
├── drive_folder_id
├── drive_output_folder_id
├── content_doc_url
├── created_at, updated_at, delivered_at

The drive_folder_id field is what enables the repeatable content system — once set, it persists across all future Content Machine runs for that user.


Key Architecture Decisions

1. Make.com Data Store as session cache
Rather than re-querying Airtable for form data at purchase time, the data store acts as a fast key-value cache keyed by email. Lower latency, simpler lookup, independent of CRM availability.

2. Sleep buffers between GPT-4 completions
With 15 sequential completions in the Playbook workflow, rate limit management is critical. Sleep modules between completions prevent 429 errors without requiring retry logic.

3. File cleanup after OpenAI Files API use
PDFs uploaded to OpenAI's Files API are deleted immediately after content extraction. This keeps the account clean, avoids storage costs, and is better practice from a data minimisation perspective.
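The upload-extract-delete sequence is worth wrapping in a try/finally so the file is removed even when extraction fails. A sketch against the openai-python v1 client surface (`files.create` / `files.delete`); `extract` stands in for whatever pulls the brand content out:

```python
def with_temp_openai_file(client, path, extract):
    """Upload a PDF, run `extract` on its file ID, and always delete it after."""
    with open(path, "rb") as f:
        uploaded = client.files.create(file=f, purpose="assistants")
    try:
        return extract(uploaded.id)
    finally:
        client.files.delete(uploaded.id)   # cleanup runs even on failure
```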

4. Independent webhook endpoints per product
Stripe webhooks route to separate Make.com scenarios per product. This means product-specific logic changes never risk breaking the other pipeline, and each can be tested and deployed independently.

5. Folder ID persistence in Airtable
Storing Drive folder IDs after first creation turns a stateless workflow into a stateful one — without a database. The CRM becomes the state store.


Failure Modes & Considerations

Scenario                                   Handling
User buys without completing voice form    Airtable record missing → system creates one with payment data only; AI outputs will be less personalised
OpenAI rate limit hit                      Sleep buffers reduce likelihood; Make.com retry logic handles transient failures
CloudConvert job fails                     Workflow errors out; Make.com error handler can notify admin
Google Drive API quota                     Unlikely at this scale; monitor via Google Cloud Console
Duplicate purchase (same email)            Airtable upsert handles gracefully; new doc created and linked

Estimated Build Scope

Phase                    Hours
─────────────────────────────
Architecture design      6–8
Prompt engineering       8–10
Make.com workflow build  12–16
CRM schema & logic       4–6
Google Workspace APIs    6–8
Testing & QA             8–12
─────────────────────────────
Total                    44–60 hrs

What Would You Do Differently?

A few things worth considering if rebuilding this today:

  • Replace Make.com with a custom Node.js service for the 15-completion Playbook workflow — more control over retry logic, error handling, and execution time
  • Add a webhook queue (e.g. via Inngest or Quirrel) between Stripe and Make.com to handle burst traffic gracefully
  • Stream GPT-4 outputs rather than waiting for full completion on each pass — would reduce total pipeline latency significantly
  • Abstract the Google Doc templating into a reusable service — currently tightly coupled to specific template IDs

Wrapping Up

The core insight here isn't the tools — it's the architecture pattern: capture context early, trigger on payment, personalise at generation time, deliver automatically.

That pattern is reusable across a wide range of product businesses. Anywhere personalised document delivery is the bottleneck, this approach applies.


Built by the engineering team at CIZO — we build AI-powered mobile apps and automation systems. Open to questions in the comments.
