DEV Community

Yaamin Mohamed


State Machines + Motion Tokens: Building a Localised Chatbot with dotLottie

Live demo: Localised Chat Bot on CodePen


Two things make chatbot animations annoying to build.

The first is state management. You want the character to idle, react when clicked, loop while the bot is "thinking", then settle back. So you end up writing something like:

if (state === 'idle') {
  player.stop();
  player.playSegments([0, 23], true);
} else if (state === 'typing') {
  player.stop();
  player.playSegments([309, 393], true);
}

And then you add transitions. And then typing fires while to-active is still mid-play. And then you've got three places in the codebase setting animation state and they're fighting each other.

The second is text. The character needs to say something — a greeting, a response, a prompt. So you either bake the text into the animation (and re-export every time it changes), or you float HTML text over the top and fake it.

I wanted to build a POC that solves both. The result is a localised chatbot widget where:

  • A named segment system encapsulates all animation logic. You call playSegment('typing', true, 'forward') and the right frames play, loop or not, in the right direction — no raw frame numbers scattered across your codebase.
  • Motion Tokens (via the setTextSlot API) control the text labels that live on the character itself — "Click Here!", "Hello!", "Let's Start!" — at runtime, per locale, without touching the file.
  • A live translation API (MyMemory) fetches and caches locale strings on first use — so there's no translation file to maintain and new languages just work.

The chat bubble UI is regular HTML. But the character's own text? That's where Motion Tokens come in.

Here's how it's wired together.


What the demo actually does

When the page loads, the character sits in an idle loop — a gentle animation with a "Click here" prompt. Click anywhere on the character and the widget opens into a full chat UI: the character scales up and animates in an active loop, a greeting message appears ("Hi! I'm your AI assistant. How can I help you today?"), and a chat input appears at the bottom.

Type anything and the character switches to the typing segment — a different animation playing while you compose your message. Hit send, the animation returns to active. Hit the X button and the character plays the to-active segment in reverse, collapsing back to idle.

Throughout all of this, the state badge in the header (idle → active → typing) reflects exactly which animation state is live. It's a useful debug view during development and a reasonable status signal in production.

The language switcher (EN / ES / DE / FR / PT) updates two things in parallel: the HTML chat content via regular DOM updates, and the text labels on the character itself via Motion Tokens — without resetting the animation state or reloading the file.


The animation segments

The .lottie file has four named segments corresponding to distinct character behaviours:

| Segment | Frames | Behaviour |
| --- | --- | --- |
| idle | 0 – 23 | Loops. Shows "Click here" prompt. |
| to-active | 23 – 36 | Plays once. Transition into the chat UI. |
| active | 36 – 287 | Loops. Character is engaged and expressive. |
| typing | 309 – 432 | Loops. Character animates while composing. |

Rather than scattering [0, 23], [36, 287] etc. across the codebase, these are centralised in a SEG map at the top of the file:

const SEG = {
  idle:     [0,   23],
  toActive: [23,  36],
  active:   [36,  287],
  typing:   [309, 432],
};

All animation transitions go through a single playSegment helper:

function playSegment(segName, loop, mode) {
  if (!dotLottie || !isReady) return;
  const [start, end] = SEG[segName];
  dotLottie.setMode(mode);      // 'forward' or 'reverse'
  dotLottie.setSegment(start, end);
  dotLottie.setLoop(loop);
  dotLottie.play();
}

That's the entire animation API surface. Everywhere else in the code, you just call playSegment('typing', true, 'forward'). No frame numbers, no repeated setSegment blocks, no race conditions from multiple callers.


The app state machine (in JS, not in the file)

There are five app states: idle, activating, active, typing, closing. The state drives which segment plays and what UI is visible:

idle → activating → active ⇄ typing
                       ↓ (close btn)
                    closing → idle
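The diagram can also live in code. As a sketch (not part of the demo): express the allowed transitions as data, so a guard can reject anything the diagram doesn't permit — the kind of check that stops three callers from fighting over state.

```javascript
// Sketch only: the state diagram as data, plus a guard. Any transition
// not listed here is refused instead of silently applied.
const ALLOWED = {
  idle:       ['activating'],
  activating: ['active'],
  active:     ['typing', 'closing'],
  typing:     ['active', 'closing'],
  closing:    ['idle'],
};

function canTransition(from, to) {
  return (ALLOWED[from] || []).includes(to); // unknown states allow nothing
}
```

`setState` could then warn and bail when `canTransition(appState, s)` is false, rather than accepting whatever any caller asks for.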

A setState helper updates both the JS variable and the debug badge:

let appState = 'idle';

function setState(s) {
  appState = s;
  const LABELS = {
    idle:       '● idle',
    activating: '▶ to-active',
    active:     '◉ active',
    typing:     '⌨ typing',
    closing:    '◀ to-idle',
  };
  stateBadge.textContent = LABELS[s] || s;
}

The two chained transitions (activating → active, closing → idle) are driven by the complete event, which fires when a non-looping segment finishes:

dotLottie.addEventListener('complete', function onSegmentComplete() {
  if (appState === 'activating') {
    setState('active');
    playSegment('active', true, 'forward');
  } else if (appState === 'closing') {
    setState('idle');
    playSegment('idle', true, 'forward');
    collapseChat();
  }
});

The click is handled in JS

Clicking the character opens the chat. That's a regular JS click listener on the container element — not a baked-in interaction inside the animation file:

animSection.addEventListener('click', function (e) {
  if (e.target.closest('#closeBtn')) return; // ignore close button clicks
  if (appState !== 'idle') return;            // only fire from idle
  activate();
});

function activate() {
  setState('activating');
  playSegment('toActive', false, 'forward');  // plays once, then 'complete' fires
  expandChat();
  setTimeout(() => addBotMessage(WELCOME[currentLang], 'welcome'), 650);
}

The close button reverses the same segment — playing to-active backwards brings the character back to the idle frames, where the idle loop resumes naturally:

closeBtn.addEventListener('click', function () {
  if (appState !== 'active' && appState !== 'typing') return;
  setState('closing');
  playSegment('toActive', false, 'reverse');  // same segment, reversed
  // safety timeout in case 'complete' doesn't fire
  reverseTimer = setTimeout(() => {
    if (appState === 'closing') {
      setState('idle');
      playSegment('idle', true, 'forward');
      collapseChat();
    }
  }, 1800);
});

Wiring the typing state to user input

The typing segment plays while the user is composing a message. It switches on each input event and reverts to active after 1.5 s of inactivity:

const TYPING_IDLE_DELAY = 1500;

chatInput.addEventListener('input', function () {
  if (appState !== 'active' && appState !== 'typing') return;
  clearTimeout(typingTimer);

  if (appState !== 'typing') {
    setState('typing');
    playSegment('typing', true, 'forward');
  }

  typingTimer = setTimeout(() => {
    if (appState === 'typing') {
      setState('active');
      playSegment('active', true, 'forward');
    }
  }, TYPING_IDLE_DELAY);
});

Submitting the message clears input and returns to active immediately:

function sendMessage() {
  const text = chatInput.value.trim();
  if (!text) return;
  chatInput.value = '';
  clearTimeout(typingTimer);

  if (appState === 'typing') {
    setState('active');
    playSegment('active', true, 'forward');
  }

  addUserMessage(text);
  // show typing indicator, then bot reply after short delay...
}

In a real product you'd fire typing when an LLM call starts and return to active once the response is ready. The animation layer doesn't care about the trigger source — it just plays whatever segment you tell it to.
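Under that assumption, the wiring might look like this — `askBot` and `callLLM` are hypothetical stand-ins for your request flow, and `anim` bundles the article's `setState`/`playSegment` pair so the sketch stays self-contained:

```javascript
// Hypothetical LLM wiring (not in the demo): play 'typing' while the
// request is in flight, return to 'active' when the reply lands.
async function askBot(prompt, callLLM, anim) {
  anim.setState('typing');
  anim.playSegment('typing', true, 'forward');
  try {
    return await callLLM(prompt);          // e.g. a fetch to your backend
  } finally {
    anim.setState('active');               // runs on success AND failure
    anim.playSegment('active', true, 'forward');
  }
}
```

The `finally` matters: even if the call throws, the character doesn't get stuck in the typing loop.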


Motion Tokens: text that lives on the character

The segment control handles which animation plays. Motion Tokens handle what the character says on its own body.

The .lottie file has three text slots defined in Lottie Creator:

| Slot | Default value |
| --- | --- |
| Text-Click | "Click here" |
| Text-Hello | "Hello!" |
| Text-Start | "Let's Start" |

These are the labels that appear on and around the box character at different points in the animation — not the chat bubble, which is plain HTML. These three are what you'd normally bake into the file at export time and re-export every time a label changed or a new locale needed supporting.

With Motion Tokens, you update them at runtime via setTextSlot instead:

dotLottie.setTextSlot('Text-Hello', { t: 'Hello!'       });
dotLottie.setTextSlot('Text-Click', { t: 'Click here'   });
dotLottie.setTextSlot('Text-Start', { t: "Let's Start"  });

One call per slot, instant update, mid-animation, no re-export.

Pro tip: Motion Tokens aren't limited to text. You can tokenise colours, gradients, and transforms too — same idea. If you wanted the character's colour scheme to shift per brand theme, it's the same pattern.
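As a sketch of that idea — everything here is hypothetical: the theme names, the colour values, and whatever colour-slot call the runtime actually exposes would need checking against the dotLottie docs and your file:

```javascript
// Hypothetical theme map, for illustration only. The real update call
// would mirror setTextSlot, one call per tokenised property.
const THEMES = {
  light: { body: '#f2f2f7', accent: '#3374ff' },
  dark:  { body: '#1f1f26', accent: '#8cb3ff' },
};

function themeTokens(name) {
  return THEMES[name] || THEMES.light;   // unknown themes fall back to light
}
```

`themeTokens('dark')` would then feed each value into its slot the same way the text labels are pushed — one call per token, mid-animation.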


Live translation via API

Rather than shipping a hardcoded translations file, locale strings for the character's slots are fetched on demand from the MyMemory API — free tier, no API key required, good for POCs.

The TRANSLATIONS object starts with English only. All other locales are populated on first use and cached so subsequent switches are instant:

const TRANSLATIONS = {
  en: {
    hello:       'Hello!',
    click:       'Click here',
    start:       "Let's Start",
    placeholder: 'Type a message…',
  },
  // es, de, fr, pt — fetched and cached on first switch
};

A single translate helper handles the API call:

async function translate(text, lang) {
  const url = `https://api.mymemory.translated.net/get?q=${encodeURIComponent(text)}&langpair=en|${lang}`;
  const res = await fetch(url);
  if (!res.ok) return text;                         // fall back to source on HTTP errors
  const data = await res.json();
  return data.responseData.translatedText || text;  // …or on an empty translation
}

applyTextSlots orchestrates it all: fetch all slot strings in parallel if uncached, store them, then push into the animation:

async function applyTextSlots(lang) {
  if (!dotLottie || !isReady) return;

  // Fetch + cache if this locale hasn't been seen yet
  if (lang !== 'en' && !TRANSLATIONS[lang]) {
    const src = TRANSLATIONS.en;
    try {
      const [hello, click, start, placeholder] = await Promise.all([
        translate(src.hello,       lang),
        translate(src.click,       lang),
        translate(src.start,       lang),
        translate(src.placeholder, lang),
      ]);
      TRANSLATIONS[lang] = { hello, click, start, placeholder };
    } catch (err) {
      console.warn(`[translate] API failed for "${lang}", falling back to EN:`, err);
    }
  }

  const t = TRANSLATIONS[lang] || TRANSLATIONS.en;

  try {
    dotLottie.setTextSlot('Text-Hello', { t: t.hello });
    dotLottie.setTextSlot('Text-Click', { t: t.click });
    dotLottie.setTextSlot('Text-Start', { t: t.start });
  } catch (err) {
    console.warn('[dotLottie] setTextSlot failed:', err);
  }
}

Promise.all means all four strings hit the API in parallel — the first switch to a new language takes about one round-trip. Every switch after that is synchronous from the cache.

The full language switch function also retranslates existing bot bubbles in the chat thread and updates the input placeholder:

async function setLanguage(lang) {
  currentLang = lang;

  // Update button highlights
  document.querySelectorAll('.lang-btn').forEach(btn => {
    btn.classList.toggle('active', btn.dataset.lang === lang);
  });

  // Fetch if needed, then push into animation slots
  await applyTextSlots(lang);

  // Retranslate chat bubbles already on screen
  chatMessages.querySelectorAll('.message.bot[data-msg-type]').forEach(el => {
    if (el.dataset.msgType === 'welcome') {
      el.textContent = WELCOME[lang] || WELCOME.en;
    } else if (el.dataset.msgType === 'keyword') {
      const topic = KEYWORD_REPLIES[parseInt(el.dataset.keywordIndex, 10)];
      if (topic) el.textContent = topic.response[lang] || topic.response.en;
    }
  });

  chatInput.placeholder = (TRANSLATIONS[lang] || TRANSLATIONS.en).placeholder;
}

Five languages, one file, no re-exports — and switching locale doesn't interrupt whatever state the character is in.


What this POC doesn't have

Worth being explicit about where this stops:

No error or timeout state. A real chatbot needs an animation for failed API calls or unrecognised inputs. That's one more segment in the file and one more playSegment call.

No real bot reply. Sending a message triggers a keyword-matched response or a "transfer to human" fallback. Wiring an actual LLM means firing playSegment('typing', true, 'forward') when the API call starts and playSegment('active', true, 'forward') when the response is ready. The animation side is two lines.

No prefers-reduced-motion handling. If you ship this, detect window.matchMedia('(prefers-reduced-motion: reduce)') and skip the to-active segment or go straight to a static frame.
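A minimal sketch of that check — `activateRespectingMotion` is a hypothetical helper that takes the player and the `SEG` map, using the same player calls `playSegment` makes:

```javascript
// Sketch: skip the transition segment when the user prefers reduced
// motion, landing the player on a static frame instead.
function activateRespectingMotion(player, seg) {
  const reduce = window.matchMedia('(prefers-reduced-motion: reduce)').matches;
  if (reduce) {
    player.setFrame(seg.active[0]);   // jump straight to the active pose
    player.pause();
    return 'static';
  }
  player.setMode('forward');          // otherwise, same calls as playSegment
  player.setSegment(...seg.toActive);
  player.setLoop(false);
  player.play();
  return 'animated';
}
```

You'd also want to listen for changes on the media query, since users can toggle the OS setting while the page is open.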

MyMemory is a POC API. For production, swap the translate function URL for DeepL or Google Translate. The rest of the architecture stays the same.
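A sketch of what the DeepL swap might look like, assuming their v2 REST endpoint — check the DeepL docs for the current request shape, and `DEEPL_KEY` stands in for your own API key. The function signature stays the same, so `applyTextSlots` and `setLanguage` don't change:

```javascript
// Sketch only: same translate(text, lang) contract, different backend.
async function translate(text, lang) {
  const res = await fetch('https://api-free.deepl.com/v2/translate', {
    method: 'POST',
    headers: {
      'Authorization': `DeepL-Auth-Key ${DEEPL_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ text: [text], target_lang: lang.toUpperCase() }),
  });
  if (!res.ok) return text;                      // fall back to source text
  const data = await res.json();
  return data.translations?.[0]?.text || text;
}
```

Because the caching lives in `applyTextSlots`, the paid API is still only hit once per locale per session.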


Why the combination works

Segment control on its own only handles animation logic. Motion Tokens on their own only update property values. A live translation API on its own only handles strings. Together, they give you a .lottie file with both its own behaviour and its own runtime data surface — driven by an external source.

For this chatbot: the character knows how to transition between states (playSegment), its own text labels update at runtime without re-exporting (Motion Tokens via setTextSlot), and a translation API means you never maintain a locale file at all. The HTML chat thread handles dynamic content the normal way. Each layer does what it's good at.

Try the live demo and fork it — curious what states people would add next.
