First time presenting a startup at a major tech exhibition. 3-minute pitches, hundreds of conversations, and one question I heard 50+ times: "Wait, so it's NOT a camera?"
The Setup
One day I'm in my home office in Germany, debugging a React component. The next — I'm standing at a booth at 4YFN (4 Years From Now) during MWC 2026 in Barcelona, wearing a Scople prototype on my chest and explaining to VCs, developers, and curious attendees why our AI wearable doesn't store a single photo.
I'm Nazar, CTO at Scople — a startup building a wearable AI device that analyzes your life quality using computer vision. Think of it as a Fitbit, but instead of tracking steps, it tracks how much time you spend with family, whether you're eating healthy, if your partner is comfortable around you, or how engaged your audience is during a presentation.
The device gives you insights in real-time ("You've been sitting for 2 hours"), daily reports ("You spent 3 hours with family today"), weekly summaries ("You smiled 20% more this week"), or monthly analytics — depending on what you're tracking.
The catch? We don't record anything. We don't save photos. We don't use ChatGPT.
And that confused the hell out of people.
The First Pitch (and the Nerves)
I've done plenty of technical presentations — conference talks, team demos, client pitches. But standing at an exhibition booth with a wearable device on your chest, waiting for strangers to approach you? That's different.
The first visitor was a middle-aged man in a suit. He glanced at the Scople logo, looked at the device on my chest, and asked:
"Is that a body camera?"
I smiled. I'd practiced this.
"No, it's not a camera. Think of it like your own eye and brain. When you see something, you can't share the image itself — only the idea, the meaning. That's how Scople works. It captures the world around you, processes it using computer vision directly on the device, and sends you insights as push notifications. No recordings. No saved photos."
He frowned. "But... it has a lens, right?"
"Yes, it captures images — but only to analyze them. Like your brain processes what your eyes see, but doesn't 'save' every frame. Scople processes and instantly deletes the data."
He nodded slowly. "So you're saying I can't go back and watch a recording of my day?"
"Exactly. You can't. Because it doesn't exist. You get insights — like 'You spent 3 hours with your family today' or 'You smiled 20% more this week' — but the raw images are gone."
He paused. Then smiled. "That's... actually smart. Privacy-first, huh?"
That was the first of 50+ times I'd have that exact conversation.
The Privacy Question Everyone Asked
If I had to summarize the exhibition in one question, it would be:
"Wait, so it's NOT a camera?"
Followed immediately by:
"But how do you process images without storing them?"
And then:
"Where does the data go?"
Here's how I learned to answer it over the course of 3 days:
Version 1 (Too Technical)
"The device runs simple algorithms locally. For heavier tasks like face recognition or emotion analysis, it syncs with a portable dock station — slightly bigger than an AirPods case — that you carry with you. The dock handles the complex processing while you're on the go. Everything stays with you: device on your chest, dock in your pocket. No cloud. The data pipeline is: device captures → runs simple algorithms → syncs to dock → dock runs heavy algorithms → sends insights to your phone app."
Result: Blank stares. "So... it doesn't use the cloud?"
Version 2 (Too Vague)
"It's like your brain. You see things, you process them, but you don't 'record' every moment of your life. Scople does the same — it sees, analyzes, and forgets."
Result: "But my brain does remember things..."
Version 3 (The One That Worked)
"Think of a security camera vs. a motion detector. A security camera records everything and stores it. A motion detector only tells you 'someone walked by' — no video, no photos, just the insight. Scople is the motion detector, not the camera."
Result: "Oh! So it's like... privacy by design?"
Bingo.
Here's how I started visualizing it for people:
| Aspect | Cloud AI (OpenAI) | Scople Edge AI |
|---|---|---|
| Privacy | Data to servers | Local only |
| Latency | 200-500ms | <50ms |
| Battery | High drain | Optimized |
| Cost | $$$$ at scale | One-time hardware |
| Control | Vendor lock-in | Full ownership |
| Updates | Automatic (risky) | User-controlled |
The Questions I Didn't Expect
1. "Do you use ChatGPT?"
This came up a lot. Especially from tech people.
"No. We built our own proprietary AI. We don't use ChatGPT, Gemini, or any third-party models."
"Why not? Wouldn't that be easier?"
"Two reasons. First: security. If we used OpenAI's API, your data would go to their servers. Even if they promise privacy, we'd rather not take the risk. Second: optimization. Our device is small. GPT-4 is massive. We needed something lightweight, optimized specifically for Scople's hardware."
Some developers loved this answer. Others were skeptical.
One guy asked: "So you trained your own models from scratch?"
"For certain tasks, yes. For others, we fine-tuned smaller open-source models and optimized them for edge inference."
He nodded. "That's... actually impressive. Most startups just slap an OpenAI wrapper on something and call it AI."
That felt good.
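For readers unfamiliar with edge optimization: quantization is one of the standard tricks for squeezing a model onto small hardware. I can't share our actual pipeline, so here's a generic, hand-rolled sketch of symmetric int8 quantization — the helper names and numbers are illustrative, not Scople's real code:

```python
# Generic sketch of symmetric int8 quantization, a common edge-inference
# optimization. Illustrative only — not Scople's actual pipeline.

def quantize_int8(weights):
    """Map float weights to int8 values plus one scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Approximate reconstruction of the original floats."""
    return [v * scale for v in q]

weights = [0.82, -1.27, 0.03, 0.5]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Per-weight reconstruction error is bounded by scale / 2.
```

Storing int8 instead of float32 cuts weight storage roughly 4x, which is the kind of tradeoff that makes inference feasible on an ARM M7-class chip.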
For those curious — here's what the architecture actually looks like:
┌──────────────┐
│ DEVICE │ Wearable on chest
│ (Eye) │ • Simple CV models
│ │ • 2h battery
│ ARM M7 │ • Captures & processes
└──────┬───────┘
│ Bluetooth sync
▼
┌──────────────┐
│ DOCK STATION │ Portable (pocket)
│ (Brain) │ • Heavy inference
│ │ • Face recognition
│ ARM A + TPU │ • 8h total battery
└──────┬───────┘
│ WiFi sync
▼
┌──────────────┐
│ PHONE APP │ Insights only
│ │ • Real-time alerts
│ React Native │ • Daily/weekly reports
│ │ • No raw images
└──────────────┘
┌──────────────┐
│ CLOUD │ Optional, metadata only
│ (Opt-in) │ • User consent required
└──────────────┘
The key insight: raw images never leave the device-dock pair. The phone only receives processed insights. The cloud (if you opt in) only sees aggregated metadata.
2. "What if someone hacks it?"
A security researcher asked this. Legitimate concern.
"Great question. Here's the thing: even if someone breaks into the device, there's nothing to steal. We don't store images, videos, or raw data. The only thing they'd find is metadata — aggregated insights like 'user spent 3 hours in front of a screen today.'"
"But what about the processing pipeline? Could they intercept the data before it's deleted?"
"Theoretically, yes — if they have physical access to the device. But the data exists in volatile memory for milliseconds. By the time an attacker could extract it, it's already gone. And we're working on hardware-level encryption for the processing pipeline."
He smiled. "Good answer. Most founders don't think that far ahead."
Another win.
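The "milliseconds in volatile memory" claim maps to a process-and-discard loop: the frame only exists inside one loop iteration, and the only thing that survives is an aggregated counter. A minimal sketch — `capture_frame` and `count_faces` are stand-in names for whatever the real firmware does:

```python
# Sketch of a process-and-discard loop. The raw frame lives only inside
# one iteration; only aggregated stats persist. `capture_frame` and
# `count_faces` are hypothetical stand-ins for the real firmware.

def capture_frame():
    # Stand-in for the camera driver: returns a raw pixel buffer.
    return bytearray(640 * 480)

def count_faces(frame) -> int:
    # Stand-in for an on-device CV model.
    return 1

def run_once(stats: dict) -> None:
    frame = capture_frame()  # raw pixels, volatile memory only
    stats["faces_seen"] = stats.get("faces_seen", 0) + count_faces(frame)
    del frame                # drop the only reference; nothing is persisted

stats = {}
for _ in range(10):
    run_once(stats)
# stats == {"faces_seen": 10}; no frame was ever written to disk.
```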
3. "Who would actually use this?"
This one stung a bit. A VC asked it.
"You're asking people to wear a camera-like device on their chest all day, and you're saying it's for... tracking family time? Who's the target market?"
I took a breath.
"Four main groups. First: parents. 70% of relationships fail due to conflict. Scople can detect early signs of stress, emotional distance, or even domestic violence — before it becomes critical. Second: wellness enthusiasts. People who track their steps, sleep, calories — but no one tracks quality of life. Third: professionals. Developers who sit 9 hours a day and don't realize it. Managers who want to know if their team is burning out. And fourth: businesses. Retail stores want to know what customers actually look at, not just what they buy."
He leaned back. "Okay. That's a broader market than I thought."
Not a deal, but not a rejection either.
4. "This feels dystopian."
A younger developer said this. I respected the honesty.
"I get it. A wearable device with a camera lens sounds like Black Mirror. But here's the difference: you control it. You decide when to wear it, what to track, what insights to receive. And most importantly — no one else has access to your data. Not us, not the government, not advertisers. It's your device, your data, your insights."
"But what if the company gets acquired? Or changes its privacy policy?"
"We're designing the architecture so that even we can't access your data. All processing happens on the device or in the portable dock station you carry with you — no central database to hack, no cloud storage to subpoena. Your data stays in your pocket, literally. If we wanted to betray users, we'd have to redesign the entire system."
He paused. "Fair. I'd want to see the code, though."
"We're planning to open-source parts of the processing pipeline once we're out of stealth."
He nodded. "Okay. I'll keep an eye on you."
That felt like a win, too.
The "Wait, That's Brilliant" Moments
Not all conversations were defensive. Some people got it immediately.
A wellness coach:
"Oh my God, this is what I've been trying to explain to clients for years. You can't improve what you don't measure. But no one measures quality time or emotional health. This could change everything."
A developer:
"I've been trying to cut down on screen time, but I have no idea how much I actually sit in front of the computer. My smartwatch says I'm 'active' because I type a lot. This would tell me the truth."
A retail manager:
"We spend thousands on A/B testing product placement, but we're just guessing. If we could see what customers actually look at before they buy... that's gold."
These conversations reminded me why we're building this.
What I Learned About Pitching
1. Start with the metaphor, not the tech stack
Early on, I tried leading with "edge computing" and "computer vision pipeline." People tuned out.
When I started with "Think of your eye and brain — you see, you process, but you don't record," people leaned in.
Lesson: Give people a mental model first. Then explain the details.
2. Address privacy immediately
I learned to bring up privacy before they asked.
Instead of:
"Scople is a wearable AI device that captures the world around you..."
I started saying:
"Scople is a wearable AI device — but it's not a camera. No recordings, no saved photos. It processes data on-device and deletes it instantly."
Lesson: If you know the objection is coming, handle it upfront.
3. Have a one-sentence answer for everything
People don't want a dissertation. They want clarity.
- "What is it?" → "A wearable AI that tracks quality of life, not just physical health."
- "Is it a camera?" → "No. It's like a motion detector — it gives you insights, not recordings."
- "Do you use ChatGPT?" → "No. We built our own proprietary AI for privacy and optimization."
- "Who's it for?" → "Parents, wellness enthusiasts, professionals, and businesses."
Lesson: If you can't explain it in one sentence, you don't understand it well enough.
4. People remember stories, not specs
No one remembered that we use "edge computing with a dock station for heavy inference."
But they did remember:
"The guy who said his device can tell if your partner is comfortable around you."
"The startup that doesn't use ChatGPT because they care about privacy."
"The wearable that's NOT a camera."
Lesson: Be memorable. Specs are forgettable.
The Conversations That Changed My Perspective
A privacy advocate told me:
"You're building something powerful. But power can be misused. What happens when someone uses Scople to spy on others? Or when a company forces employees to wear it?"
I didn't have a great answer for that. We've been focused on technical privacy (edge computing, no cloud storage), but not social privacy (misuse cases).
That's something we need to think about.
An investor told me:
"Privacy-first is great for early adopters. But most people don't care. They use Facebook, TikTok, Alexa. How do you convince them to care about privacy?"
My answer:
"We're not competing with Facebook. We're competing with Fitbit. And people do care about privacy when it comes to their body, their home, their family. They don't want Amazon recording their bedroom. They don't want a startup storing videos of their kids."
He smiled. "Good answer. But you'll still need to prove it."
Fair point.
A developer asked:
"Why not just make it open-source from day one? If you're serious about privacy, let the community verify it."
I hesitated.
"We want to. But we're also trying to build a business. If we open-source everything now, we lose our competitive edge."
He shrugged. "Then you're not really privacy-first. You're privacy-first with caveats."
Ouch. But he's right.
We're planning to open-source the processing pipeline eventually. But he made me realize we need to do it sooner, not "when we're ready."
The Stats
After 3 days at 4YFN, here's what we got:
- ~200 conversations (ranging from 2 minutes to 20 minutes)
- ~50 "Wait, it's NOT a camera?" questions
- ~30 business cards exchanged
- 12 serious investor/partner follow-ups
- 6 demo requests (people wanted to see the live dashboard)
- 3 offers to pilot with retail/corporate wellness programs
And one guy who asked if we could make a version for his dog.
(We politely declined.)
What's Next
We're taking all the feedback and refining the product. The big questions we're tackling:
- How do we communicate privacy better? (It's our biggest strength, but also the hardest to explain.)
- How do we prevent misuse? (What happens if someone uses Scople to surveil others?)
- How do we balance open-source with business goals? (We want transparency, but we also need to protect our IP.)
And we're working on the next prototype — smaller form factor (the device currently runs ~2 hours on complex algorithms, ~8 hours total with the portable dock), better battery optimization, and a cleaner UI for the phone app that syncs with the dock.
Update: Since 4YFN, The Gadgeteer published a piece calling Scople "a tiny wearable that reads the room and forgets it." Their take: "Moxiebyte built its entire pitch around the idea that the device sees everything but keeps nothing." — which is exactly the message we were going for.
We're launching on Kickstarter in mid-April 2026. If you want to be among the first to get Scople, follow us at scople.ai for launch updates.
Final Thoughts
If you'd told me a year ago that I'd be standing at a tech exhibition, wearing an AI device on my chest, and explaining to VCs why we don't use ChatGPT — I'd have laughed.
But here we are.
Building Scople has been one of the hardest, most rewarding things I've done. And presenting it at 4YFN forced me to get better at explaining why it matters.
Because here's the thing: we're not building a camera. We're not building a surveillance tool.
We're building a way for people to understand their own lives — their time, their emotions, their relationships, their habits — without sacrificing their privacy.
And if that sounds impossible, well... that's why we're building it.
TL;DR
- I pitched Scople (a privacy-first wearable AI) at 4YFN as CTO
- The #1 question: "Wait, it's NOT a camera?"
- The #1 learning: Start with metaphors, not tech specs
- The #1 challenge: Explaining edge computing to non-technical people
- The #1 surprise: How many people actually care about privacy
If you're building something in the AI/wearables/privacy space, feel free to reach out. I'm happy to share more behind-the-scenes learnings.
And if you think Scople sounds interesting — we're launching on Kickstarter in mid-April. Check out scople.ai or drop a comment below.
What would you ask if you saw Scople at an exhibition? Drop your questions in the comments — I'll answer them all.