DEV Community

Tracy Atteberry

JoyConf: a live emoji reaction app for presentations

I have a talk coming up in a few weeks. It got rescheduled once, which gave me extra time to prepare. The presentation itself is finished, so I used that time to build an application that I think will make the talk more fun and engaging for the audience.

I wanted to add some live audience interaction. The usual options (Slido, Mentimeter, Poll Everywhere) are fine, but they're designed around Q&A and polls. What I actually wanted was simpler and more visual: live emoji reactions that float up as an overlay on my slides while I'm presenting. None of the existing tools seem to do that. So I built my own, and I called it JoyConf.

It also gave me a reason to finally learn some Elixir.


What it does

The flow is simple:

  1. You create a talk in the admin panel and get a QR code
  2. You put the QR code on your title slide
  3. Attendees scan it and land on a page with a pre-defined set of emoji buttons: ❤️ 😂 🙋🏻 👏 🤯, etc.
  4. They tap a button, and the emoji floats up on their screen in real time
  5. A Chrome extension running on your laptop picks up the same broadcast and overlays the emoji directly in the lower right corner of your Google Slides presentation

That's it. No accounts for attendees, no app to install, no configuration to fiddle with. Scan, tap, react.


Why Elixir and Phoenix

I write Ruby day to day, so the syntax was familiar enough. But Elixir runs on the BEAM, the Erlang virtual machine, which was built for soft real-time systems with lots of concurrent connections. Phoenix LiveView lets you build interactive, server-rendered UIs over WebSockets without writing much JavaScript. And Phoenix PubSub gives you a message bus that lets any process broadcast to any other, regardless of what type of process it is.

All three of those things matter for a system where dozens of phones send events to a server that has to fan them out, both to the slide presentation and back to every phone's own live view, in under a second.

LiveView is pretty magical, by the way. You define your UI as a function of state, and Phoenix handles keeping the browser in sync. You get WebSocket-backed interactivity without writing a single-page app.


How it works under the hood

There are three clients talking to one Phoenix server.

Attendee phones connect via Phoenix LiveView, which manages the WebSocket lifecycle automatically. When an attendee taps an emoji, it fires a phx-click event to the server. The server checks a rate limiter (one reaction per session every 3 seconds) and, if allowed, broadcasts the event to a PubSub topic keyed by the talk slug: "reactions:my-talk".

The Chrome extension connects via a Phoenix Channel, a lower-level WebSocket primitive. The extension can't use LiveView because it's not a web page; it just needs to receive events. The ReactionChannel is subscribed to the same PubSub topic, so when the attendee's broadcast lands, it gets forwarded to the extension automatically.
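The relay side can be tiny. Here's a sketch of what ReactionChannel might look like (the module body is my reconstruction of the minimal version, not the app's actual source). Phoenix subscribes a joined Channel to the PubSub topic matching its name and pushes broadcasts on that topic to the client automatically:

```elixir
defmodule JoyconfWeb.ReactionChannel do
  use Phoenix.Channel

  # Joining "reactions:<slug>" subscribes this connection to the PubSub
  # topic of the same name. Any Endpoint.broadcast!/3 on that topic is
  # then serialized and pushed down the WebSocket without extra code here.
  def join("reactions:" <> _slug, _params, socket) do
    {:ok, socket}
  end
end
```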

The admin browser is a standard LiveView protected by HTTP Basic Auth. You create a talk, the server generates a slug from the title, and you get a QR code pointing at the attendee URL.
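Both of those steps are small. A sketch of the slug and QR pieces (the slugify helper is my own illustration, and the base URL is a placeholder; the EQRCode calls follow the eqrcode package's documented encode/svg pipeline):

```elixir
defmodule Joyconf.Slug do
  # Derive a URL-safe slug: downcase, strip punctuation, hyphenate spaces.
  def slugify(title) do
    title
    |> String.downcase()
    |> String.replace(~r/[^a-z0-9\s-]/, "")
    |> String.trim()
    |> String.replace(~r/\s+/, "-")
  end

  # Requires the :eqrcode dependency. Renders an SVG QR code pointing
  # at the attendee URL.
  def qr_svg(slug) do
    "https://example.com/talks/#{slug}"
    |> EQRCode.encode()
    |> EQRCode.svg(width: 264)
  end
end
```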

The PubSub layer is what makes the architecture clean. TalkLive doesn't need to know the Chrome extension exists. It just broadcasts on the topic, and PubSub delivers it to everyone subscribed, whether that's a LiveView process, a Channel process, or both.
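On the receiving side, a LiveView process that subscribed with Phoenix.PubSub.subscribe/2 gets each broadcast as a plain message. A sketch of how TalkLive might handle it (assign and module names are illustrative, not necessarily the app's):

```elixir
defmodule JoyconfWeb.TalkLive do
  use Phoenix.LiveView

  def mount(%{"slug" => slug}, _session, socket) do
    # Subscribe only once the WebSocket is connected,
    # not on the initial static render.
    if connected?(socket) do
      Phoenix.PubSub.subscribe(Joyconf.PubSub, "reactions:#{slug}")
    end

    {:ok, assign(socket, slug: slug, reactions: [])}
  end

  # Endpoint.broadcast!/3 delivers a %Phoenix.Socket.Broadcast{} struct.
  def handle_info(%Phoenix.Socket.Broadcast{event: "new_reaction", payload: %{emoji: emoji}}, socket) do
    {:noreply, update(socket, :reactions, &[emoji | &1])}
  end
end
```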

Here's the entire broadcast path in code:

def handle_event("react", %{"emoji" => emoji}, socket) do
  if RateLimiter.allow?(socket.id) do
    Endpoint.broadcast!("reactions:#{socket.assigns.talk.slug}", "new_reaction", %{emoji: emoji})
  end
  {:noreply, socket}
end

That's the whole thing. Six lines. PubSub does the fan-out.

Rate limiting with ETS

The rate limiter is a GenServer that owns an ETS table. ETS is an in-memory key/value store built into the BEAM runtime. It's extremely fast and, crucially, you can configure it for concurrent reads without going through the GenServer process. This matters because with many attendees tapping at once, you don't want all those requests queuing up behind a single process.

The table stores {session_id, last_reaction_at}. The allow?/1 function looks up the session, checks if enough time has passed, and updates the timestamp atomically. No database round-trip, no lock contention.
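Here's a sketch of that pattern (the internals are my reconstruction, not the actual source): the GenServer exists only to own the table, while allow?/1 runs in the caller's process and touches ETS directly.

```elixir
defmodule Joyconf.RateLimiter do
  use GenServer

  @table :joyconf_rate_limiter
  @interval_ms 3_000

  def start_link(_opts), do: GenServer.start_link(__MODULE__, :ok, name: __MODULE__)

  @impl true
  def init(:ok) do
    # :public + read_concurrency lets callers hit the table directly
    # instead of queuing behind this process.
    :ets.new(@table, [:named_table, :public, :set, read_concurrency: true])
    {:ok, nil}
  end

  # Runs in the caller's process: look up the last reaction time and
  # either reject the tap or record a new timestamp.
  def allow?(session_id) do
    now = System.monotonic_time(:millisecond)

    case :ets.lookup(@table, session_id) do
      [{^session_id, last}] when now - last < @interval_ms ->
        false

      _ ->
        :ets.insert(@table, {session_id, now})
        true
    end
  end
end
```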

There's also a client-side rate limit in JavaScript: buttons are disabled for 3 seconds with a visible countdown timer. That's just UX; the real enforcement is on the server.

The Chrome extension

The extension has two parts: a popup where you enter the talk slug once, and a content script injected into Google Slides pages that handles the actual connection and overlay.

The content script connects a Phoenix WebSocket client to the server, joins the reactions:${slug} channel, and listens for new_reaction messages. When one arrives, it spawns a floating <span> element with a CSS animation that drifts up and fades out.


It never works on the first try

Double emojis on slides. Early on, every reaction appeared twice on the speaker's screen. The bug was that the Chrome extension was subscribing to PubSub directly as well as receiving the Channel push. Two subscriptions, two deliveries. The fix was removing the redundant subscription and letting the Channel handle delivery exclusively.

Fullscreen mode swallows the overlay. When you go fullscreen in Google Slides, the browser creates a new stacking context. Any position: fixed element on <body> becomes invisible. The fix was listening for fullscreenchange events and re-parenting the overlay <div> into document.fullscreenElement when it fires:

document.addEventListener("fullscreenchange", () => {
  const overlay = document.getElementById("joyconf-overlay");
  if (document.fullscreenElement) {
    document.fullscreenElement.appendChild(overlay);
  } else {
    document.body.appendChild(overlay);
  }
});

Not obvious, but straightforward once you know it.

Button clicks were being swallowed. The initial implementation disabled the emoji buttons immediately on click. That blocked the phx-click handler from firing, so the server never received the event. The fix was deferring the button disable to a setTimeout(..., 0), which lets the click event propagate before the buttons get disabled.

The Chrome extension's origin. Chrome extensions run from a chrome-extension:// origin, which Phoenix's check_origin protection rejects by default. A small change in endpoint.ex fixes it (this disables the origin check entirely for that socket, which is acceptable for a personal deployment):

socket "/socket", JoyconfWeb.UserSocket,
  websocket: [check_origin: false]

The tech stack, briefly

Concern               Choice
-------               ------
Language / framework  Elixir / Phoenix LiveView
Real-time             Phoenix PubSub + Channels
Database              PostgreSQL (Fly.io managed)
Rate limiting         ETS-backed GenServer
QR codes              eqrcode hex package
Deployment            Fly.io
Extension             Chrome Manifest V3

The database has one table: talks. Reactions are ephemeral and never persisted. If the server restarts, in-flight reactions are lost, which is fine.


What's next

JoyConf is an MVP. It does one thing and it's ready for me to use at my talk. It's not productized, it's not super-polished, and it's likely not easy for the average non-technical user to deploy.

Things that might be nice to add in the future:

  • Ease of deployment for less technical users
  • Reaction analytics tied to slides so you can see which parts of your talk landed
  • Support for other presentation tools beyond Google Slides

For now, I'm just going to go use it.


The source code is on GitHub. If you want to run your own instance, the README should have everything you need.
