<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mike Chung</title>
    <description>The latest articles on DEV Community by Mike Chung (@mjyc).</description>
    <link>https://dev.to/mjyc</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F112755%2Fe964717c-d1db-4918-8c6e-d45e8fe262bd.jpg</url>
      <title>DEV Community: Mike Chung</title>
      <link>https://dev.to/mjyc</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mjyc"/>
    <language>en</language>
    <item>
      <title>Functional Listening and its Generalization: Towards Having Control Over Your Productivity</title>
      <dc:creator>Mike Chung</dc:creator>
      <pubDate>Thu, 10 Aug 2023 23:10:42 +0000</pubDate>
      <link>https://dev.to/mjyc/functional-listening-and-its-generalization-towards-having-control-over-your-productivity-13em</link>
      <guid>https://dev.to/mjyc/functional-listening-and-its-generalization-towards-having-control-over-your-productivity-13em</guid>
      <description>&lt;p&gt;Originally published at &lt;a href="https://mjyc.github.io/2021/11/24/functional.html"&gt;https://mjyc.github.io&lt;/a&gt; on November 24, 2021.&lt;/p&gt;

&lt;p&gt;At some point in my life, I realized that I wasn't listening to music &lt;del&gt;to join the revolution and change the world!&lt;/del&gt; to enjoy myself but to put my kids to sleep or to focus while I was working. This sad realization made me write this post.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sounds
&lt;/h2&gt;

&lt;p&gt;Like many other new parents, I was introduced to &lt;a href="https://open.spotify.com/playlist/37i9dQZF1DWUZ5bk6qqDSy?si=3494c78a527640a0"&gt;white noise&lt;/a&gt; sounds when I was struggling to put my son to bed. One day after a sleepless night, I found myself listening to the white noise sounds even when I was working. I felt good and focused.&lt;/p&gt;

&lt;p&gt;I started exploring more noise sounds to see if any of them could boost my productivity because, at some point, I felt white noise was too harsh for my ears. Then I found &lt;a href="https://open.spotify.com/playlist/37i9dQZF1DX4hpot8sYudB?si=60a31bdb113441e1"&gt;brown noise&lt;/a&gt;, which was much warmer and easier on my ears than white noise, and I enjoyed listening to it for a while. However, after many days of listening to it for long stretches, it became so soothing that it made me feel sleepy, especially on winter days with long nights. So I moved on to listening to &lt;a href="https://open.spotify.com/playlist/37i9dQZF1DX5NgkFTxJ4Wv?si=7029f4a7ead24b4a"&gt;pink noise&lt;/a&gt;, which felt like a great compromise between white noise and brown noise.&lt;/p&gt;

&lt;p&gt;Around the same time, my friend, who is also a dad, suggested I check out &lt;a href="https://open.spotify.com/playlist/37i9dQZF1DWUm4vT7WQxcD?si=e76c28f0c6574460"&gt;fan noise&lt;/a&gt;. I didn't initially like it, but it grew on me over time. There was something about listening to slightly more familiar sounds that kept me listening to it despite the initial dislike. This observation led me to explore non-synthetic noise sounds like &lt;a href="https://open.spotify.com/playlist/37i9dQZF1DX4PP3DA4J0N8?si=27f759d1d5064e9d"&gt;nature sounds&lt;/a&gt;, more specifically, &lt;a href="https://open.spotify.com/playlist/37i9dQZF1DWV90ZWj21ygB?si=2db0493068e7492f"&gt;ocean sounds&lt;/a&gt; and &lt;a href="https://open.spotify.com/playlist/37i9dQZF1DX8ymr6UES7vc?si=f00d14203b384061"&gt;rain sounds&lt;/a&gt;. What I really liked about listening to the nature sounds was that not only were they more comfortable to listen to than synthetic noise sounds like white noise, but they also put me into certain mental states that had specific effects on me. For example, listening to the ocean sounds tricked my brain into putting me into vacation mode, helping me relax, and the heavy rain sounds took me back to the hot and humid monsoon season in South Korea that I weirdly enjoyed as a kid.&lt;/p&gt;

&lt;p&gt;Understanding that listening to certain sounds (or music) can have certain effects on me made me want to leverage such effects. For example, I intentionally listened to my favorite finals week music from my college years, like &lt;a href="https://open.spotify.com/artist/3Rq3YOF9YG9YfCWD4D56RZ?si=BDrveOO-SRigUDmRDtaeDg"&gt;Nujabes&lt;/a&gt;-like lo-fi hip-hop music, &lt;a href="https://open.spotify.com/artist/4LEiUm1SRbFMgfqnQTwUbQ?si=mDOsZUMbTQWJe_3lD0DDvw"&gt;Bon Iver&lt;/a&gt;-like indie folk music, and &lt;a href="https://open.spotify.com/artist/6UUrUCIZtQeOf8tC0WuzRy?si=nm0FY61iTNiNWB3im-vgQA"&gt;Sigur Ros&lt;/a&gt;-like post-rock music, when I needed to learn new techniques or tools. When I needed to pump out code, I listened to my favorite coding music from my bachelor years, like &lt;a href="https://open.spotify.com/artist/4tZwfgrHOc3mvqYlEYSvVi?si=AnprvdiGRRKh7DL3-Na_MA"&gt;Daft Punk&lt;/a&gt;'s French house music or &lt;a href="https://open.spotify.com/artist/1GhPHrq36VKCY3ucVaZCfo?si=HhSfDdriSBKYXEYT5e9zNg"&gt;Chemical Brothers&lt;/a&gt;' big beats music. I turned to nature sounds like ocean sounds to help me calm down and detach from problems, especially when I'm responding to an incident under time pressure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lights
&lt;/h2&gt;

&lt;p&gt;During the pandemic—specifically in the winter of 2020—I learned how sensitive I was to the lighting conditions in my room. At the time, I was using a single not-so-bright yellow light bulb in my room, and I felt like I couldn't focus because my room wasn't bright enough. So I ordered a &lt;a href="https://a.co/d/8wQJqPb"&gt;3-in-1 light socket splitter&lt;/a&gt; and three &lt;a href="https://www.wyze.com/products/wyze-bulb-white"&gt;smart light bulbs&lt;/a&gt; to get my focus back by making my room brighter.&lt;/p&gt;

&lt;p&gt;After trying out my new smart light bulbs without any customizations for a few days, I realized that while I really enjoyed their default bright white light during the daytime--especially on cloudy days--I didn't like the same bright white light in the evening hours. It didn't mix well with the other yellow light bulbs we already had, and I felt like the new white light was taking away the cozy, warm feeling created by the existing yellow lights. I also felt too awake when working under bright white light in the evening and at night; that was useful when I needed to work long hours, but it left me tired and unable to concentrate the next day.&lt;/p&gt;

&lt;p&gt;So I used the &lt;a href="https://ifttt.com/"&gt;IFTTT&lt;/a&gt;-like &lt;a href="https://support.wyze.com/hc/en-us/articles/360032409032-Using-Rules-in-the-Wyze-app"&gt;feature&lt;/a&gt; in the smart light app to adjust the light bulbs' brightness and color temperature throughout the day. Here is a set of rules I created:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;6 am: set brightness to 80%, set color temperature to 50%&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;7 am: set brightness to 80%, set color temperature to 50%&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;10 am: set brightness to 100%, set color temperature to 100%&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;11 am: set brightness to 100%, set color temperature to 100%&lt;/li&gt;
&lt;li&gt;1 pm: set brightness to 100%, set color temperature to 100%&lt;/li&gt;
&lt;li&gt;2 pm: set brightness to 100%, set color temperature to 100%&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;4 pm: set brightness to 100%, set color temperature to 75%&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;5 pm: set brightness to 100%, set color temperature to 75%&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;8 pm: set brightness to 100%, set color temperature to 50%&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;9 pm: set brightness to 100%, set color temperature to 50%&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;12 am: set brightness to 10%, set color temperature to 1%&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;My smart light bulbs' brightness ranges from 0 (0%) to 800 lumens (100%), and the color temperature ranges from 2700K (0%) to 6500K (100%)[^1]. There are redundant rules (in the non-bold font) because there was no easy way to &lt;em&gt;keep&lt;/em&gt; a certain condition (e.g., brightness at 100%) &lt;em&gt;throughout&lt;/em&gt; a certain time period (e.g., between 10 am and 4 pm) using the IFTTT-like feature that came with the bulbs[^2]. For example, if I power off my lights at 10 pm and power them back on at 9 am the next day, the brightness and color temperature stay at what they were at 10 pm the night before and don't change until the next rule kicks in, e.g., at 10 am. One way to get around this issue was the "&lt;a href="https://www.digitaltrends.com/home/wyze-bulb-white-sun-match-mode-changes-based-on-sun/"&gt;sun match&lt;/a&gt;" feature, which continuously adjusts the brightness and color temperature to match the sun throughout the day. However, I liked having a set of rules that changes brightness and color temperature in a more discrete manner.&lt;/p&gt;

&lt;p&gt;A funny thing about having these rules was that they had a reminder-like effect on me. For example, the slight color temperature change at 4 pm was a reminder that I should wrap up and get ready to pick up my son at 5:30 pm, whereas the 5 pm change was like a "final warning." The dramatic reduction of brightness at 12 am was a reminder that I should go to bed even when there was one thing I really wanted to finish. It was really effective, especially because I like to hide the menu bar, where the clock is located, to maximize screen space and minimize distractions while I'm working.&lt;/p&gt;

&lt;h3&gt;
  
  
  Screen Color Temperature, Dynamic Wallpaper
&lt;/h3&gt;

&lt;p&gt;Once I understood my room lights' reminder-like effect on me, I looked for other things I could control in order to better control myself. The first one was the screen display color temperature.&lt;/p&gt;

&lt;p&gt;I'm not a fan of dark mode for my workstation desktop. I know I'm in the minority among fellow developers, but I just find a white background (e.g., in the IDE and terminal) easier to read. However, one problem with the white background is that it hurts my eyes towards the end of the day, i.e., in the evening hours. I became a big fan of the "night &lt;a href="https://support.apple.com/en-us/HT207513"&gt;shift&lt;/a&gt;/&lt;a href="https://help.ubuntu.com/stable/ubuntu-help/display-night-light.html.en"&gt;light&lt;/a&gt;" feature, which shifts the screen to a warmer color after sunset, because it made using a white background in the evening much more comfortable.&lt;/p&gt;

&lt;p&gt;To go beyond the default "start night shift at sunset and stop it at sunrise", I set up the following cron jobs on my work desktop running Ubuntu:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Run `sudo crontab -e` to open this file&lt;/span&gt;

0 0 &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; /usr/bin/gsettings &lt;span class="nb"&gt;set &lt;/span&gt;org.gnome.settings-daemon.plugins.color night-light-temperature 1000
0 6 &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; /usr/bin/gsettings &lt;span class="nb"&gt;set &lt;/span&gt;org.gnome.settings-daemon.plugins.color night-light-temperature 5000
0 10 &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; /usr/bin/gsettings &lt;span class="nb"&gt;set &lt;/span&gt;org.gnome.settings-daemon.plugins.color night-light-temperature 6000
0 16 &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; /usr/bin/gsettings &lt;span class="nb"&gt;set &lt;/span&gt;org.gnome.settings-daemon.plugins.color night-light-temperature 5000
0 17 &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; /usr/bin/gsettings &lt;span class="nb"&gt;set &lt;/span&gt;org.gnome.settings-daemon.plugins.color night-light-temperature 4000
0 20 &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; /usr/bin/gsettings &lt;span class="nb"&gt;set &lt;/span&gt;org.gnome.settings-daemon.plugins.color night-light-temperature 3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By the way, here I didn't need redundant rules because my laptop was always on. I even applied the same idea to control my &lt;a href="https://github.com/adi1090x/dynamic-wallpaper"&gt;dynamic wallpaper&lt;/a&gt; after noticing that the wallpaper is another thing I implicitly interact with.&lt;/p&gt;
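
&lt;p&gt;For illustration, here is a minimal sketch of the same cron-based idea applied to the wallpaper, using plain gsettings on GNOME rather than the dynamic-wallpaper tool linked above; the image paths are placeholders.&lt;/p&gt;

&lt;div class="highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Swap the GNOME wallpaper a few times a day (image paths are placeholders)
0 6 * * * /usr/bin/gsettings set org.gnome.desktop.background picture-uri 'file:///home/me/wallpapers/morning.jpg'
0 16 * * * /usr/bin/gsettings set org.gnome.desktop.background picture-uri 'file:///home/me/wallpapers/afternoon.jpg'
0 20 * * * /usr/bin/gsettings set org.gnome.desktop.background picture-uri 'file:///home/me/wallpapers/evening.jpg'
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;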

&lt;p&gt;For some time, I played with synchronizing such changes across the room lights, screen display, and wallpaper, but the screen color temperature and wallpaper rules didn't stick. I ended up just using the default night shift mode (i.e., on/off at sunset/sunrise) because the scheduled screen color temperature changes were too subtle to give me that reminder kick. As for the dynamic wallpaper, I often missed the changes because I usually maximize my IDE or terminal window.&lt;/p&gt;

&lt;h2&gt;
  
  
  Productivity
&lt;/h2&gt;

&lt;p&gt;My exploration of trying to understand the effects of various kinds of lights on myself reminded me of human-computer interaction (HCI) research papers investigating the possibility of behavior change via technologies (disclosure: I only read their abstracts):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.researchgate.net/publication/221515000_The_design_of_eco-feedback_technology"&gt;The design of eco-feedback technology&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.researchgate.net/publication/221514889_Theory-driven_design_strategies_for_technologies_that_support_behavior_change_in_everyday_life"&gt;Theory-Driven Design Strategies for Technologies that Support Behavior Change in Everyday Life&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.researchgate.net/profile/Julie-Kientz/publication/220962690_Personality_and_Persuasive_Technology_An_Exploratory_Study_on_Health-Promoting_Mobile_Applications/links/0deec5191b0d334f93000000/Personality-and-Persuasive-Technology-An-Exploratory-Study-on-Health-Promoting-Mobile-Applications.pdf"&gt;Personality and Persuasive Technology: An Exploratory Study on Health-Promoting Mobile Applications&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://cs4760.csl.mtu.edu/2022/resources/HE2.pdf"&gt;Heuristic evaluation of ambient displays&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then I thought, can we take a computational approach to figure out the optimal environment (e.g., sounds/lights sequence/placements) for productivity? For example, if we have an accurate model of how certain environments like sounds or lights affect our productivity, we could find an optimal sequence and placement of sounds and lights that maximizes our productivity! Building such an accurate model would be difficult, but maybe we can take a &lt;a href="https://quantifiedself.com/"&gt;quantified-self&lt;/a&gt; approach to collect data and augment it with data from smart home devices?&lt;/p&gt;

&lt;p&gt;But then I asked myself: Do I really want to go this route? Would it truly boost my productivity?&lt;/p&gt;

&lt;h3&gt;
  
  
  Self-Control via Environment Control
&lt;/h3&gt;

&lt;p&gt;The goal of this entire journey was to improve my productivity, and what I discovered was not just about sounds and lights but rather about self-control. This realization stemmed from the following observations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Certain things put me into certain moods ("Sounds").&lt;/li&gt;
&lt;li&gt;Environmental cues act as triggers ("Lights").&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Much of the journey, then, was about figuring out how to leverage these observations. Expanding to different modalities ("Screen Color Temperature, Dynamic Wallpaper") taught me how to avoid distractions and stay focused on my goals. One thing I've learned from daydreaming about a computational approach to boosting my productivity ("Productivity") is that, as a first step, I could try to internally learn my productivity model by paying more attention to how my mind responds to various environments (let's meditate, people!). Only then can I leverage such a model (understanding) by changing my environment.&lt;/p&gt;



&lt;h4&gt;
  
  
  Footnotes
&lt;/h4&gt;

&lt;p&gt;[^1] For more information on the color temperature unit Kelvin (K), see &lt;a href="https://www.ledlightexpert.com/understanding_led_light_color_temperatures_ep_79"&gt;this article&lt;/a&gt;.&lt;br&gt;
[^2] My grad school labmate published a &lt;a href="https://hcrlab.cs.washington.edu/assets/pdfs/2015/huang2015ubicomp.pdf"&gt;research paper&lt;/a&gt; about this.&lt;/p&gt;

</description>
      <category>productivity</category>
    </item>
    <item>
      <title>Robo-Observability</title>
      <dc:creator>Mike Chung</dc:creator>
      <pubDate>Sat, 15 Jul 2023 00:29:37 +0000</pubDate>
      <link>https://dev.to/mjyc/robo-observability-7j2</link>
      <guid>https://dev.to/mjyc/robo-observability-7j2</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://mjyc.github.io/2023/04/21/observability.html"&gt;https://mjyc.github.io&lt;/a&gt; on April 21, 2023.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I care about observability in the context of debugging and monitoring robotics systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Logs
&lt;/h2&gt;

&lt;p&gt;Debugging (e.g., ROS-based) robotics systems by digging through logs hasn't been fun for me. To investigate a bug report, I first SSH into a robot, look through multiple log files from multiple teams' software, and, once I find the relevant logs and data (e.g., camera images), start downloading them to my local dev machine and wait 10~20 minutes (at best). Only after that can I dive into them to understand the reported issue. That's close to my worst experience, but debugging (and monitoring, too) robotics systems has always been painful for one or more of the reasons I just mentioned.&lt;/p&gt;

&lt;p&gt;When I started using log management tools from the distributed systems community, I was pleasantly surprised by their amazing developer experience (DX). Three aspects in particular stood out to me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Structured logs&lt;/em&gt; made logs machine-parsable and version controllable, eliminating the need for regex gymnastics. This greatly facilitated the development of user-facing tools like interactive data visualizers.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Centralized logs&lt;/em&gt; helped bring together all relevant information for users.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Log visualization tools&lt;/em&gt; allowed users to effortlessly navigate and process large amounts of log data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While adopting these practices, I have seen organizations face the following challenges:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Large and complex codebases&lt;/em&gt; made it difficult and laborious to structure logs consistently across diverse subsystems.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Large data volumes&lt;/em&gt; posed challenges in centralizing data. Even in robotics companies outside the autonomous-vehicle space, data generation can reach petabyte scale, which makes the data incredibly challenging to work with.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Non-textual logs&lt;/em&gt; made the utilization of existing log management and visualization tools more difficult.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here are my suggestions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Start structuring logs by adding metadata&lt;/strong&gt; such as robot ID and customer ID to the logs of multiple teams (see the sketch after this list). Doing so should spark discussions about standardizing the data structure and tooling for logs. Nudge stakeholders to think in terms of logs generated by fleets instead of individual robots, and to manage the lifecycle of logs independently from, for example, the software that generated them.&lt;/li&gt;
&lt;li&gt;While standardizing the metadata structure and tooling to simplify consumption, &lt;strong&gt;different log data types and the data channels&lt;/strong&gt; carrying them &lt;strong&gt;should be treated separately to optimize performance&lt;/strong&gt; in terms of transportation, visualization, and so on.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Invest in&lt;/strong&gt; adopting or even building &lt;strong&gt;data visualization tools&lt;/strong&gt;. "A picture is worth a thousand words." Non-textual data is essential when it comes to debugging, and each organization may have bespoke needs.&lt;/li&gt;
&lt;/ul&gt;
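
&lt;p&gt;To make the first suggestion a bit more concrete, here is a minimal, hypothetical sketch of wrapping a plain log message into a structured (JSON) record that carries robot and customer metadata. The field names, IDs, and the jq dependency are illustrative assumptions, not a standard.&lt;/p&gt;

&lt;div class="highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;#!/usr/bin/env bash
# Hypothetical sketch: emit one JSON object per log line with fleet metadata.
# ROBOT_ID and CUSTOMER_ID would come from the robot's provisioning config.
ROBOT_ID="robot-042"        # illustrative value
CUSTOMER_ID="customer-007"  # illustrative value

log_json() {
  # $1 = level, $2 = message (requires jq)
  jq -cn \
    --arg ts "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
    --arg level "$1" \
    --arg msg "$2" \
    --arg robot_id "$ROBOT_ID" \
    --arg customer_id "$CUSTOMER_ID" \
    '{ts: $ts, level: $level, msg: $msg, robot_id: $robot_id, customer_id: $customer_id}'
}

log_json "error" "planner timed out after 5s"
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;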

&lt;h2&gt;
  
  
  Metrics
&lt;/h2&gt;

&lt;p&gt;Typical metric categories I've seen are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Customer and robot-specific metrics&lt;/em&gt;, such as the total number of completed deliveries and the total distance traveled for an indoor delivery robot company.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Resource utilization and health-related metrics&lt;/em&gt;, such as CPU, memory, and disk usage, network traffic of onboard and cloud machines.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Service health and availability-related metrics&lt;/em&gt;, such as request rate/error/duration, service uptime/response time of onboard and cloud services.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These metrics aren't specific to robotics companies and are standardized (e.g., across services) for ease of consumption and operational scalability. However, I have found that specializing metrics for core robotics engineers (who, in smaller organizations, often also do operations work) is helpful for monitoring purposes. Here are examples of such specialized metrics for motion control and planning:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Motion control:&lt;/em&gt; control frequency, number of staged controllers, response time of dependent hardware devices.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Motion planning:&lt;/em&gt; planning request rate, errors, and duration, distance and duration of planned motion (trajectory).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Notice that these metrics are still high-level, i.e., general across different kinds of motion planning or control algorithms. There are also &lt;strong&gt;robotics algorithm-specific metrics&lt;/strong&gt; (e.g., the number of nodes explored for a sampling-based planner) that can be computed and tracked. While I do like to collect such robotics-specific metrics to gain a deeper understanding of algorithm performance, doing so requires caution, e.g., I like to ask questions such as: What's the overhead of computing algorithm-specific metrics? How can we extract meaningful information from these metrics and avoid adding noise to the dashboard? How much maintenance work do we anticipate?&lt;/p&gt;
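
&lt;p&gt;As a rough illustration of how high-level metrics like the planning ones above could be surfaced without much machinery, here is a hedged sketch that writes them out for Prometheus' node_exporter textfile collector. The metric names, sample values, and collector directory are illustrative assumptions, not a recommendation of specific tooling.&lt;/p&gt;

&lt;div class="highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;#!/usr/bin/env bash
# Hypothetical sketch: publish planning metrics via node_exporter's textfile collector.
# Metric names, sample values, and the collector directory below are illustrative.
TEXTFILE_DIR=/var/lib/node_exporter/textfile_collector

cat &gt; "${TEXTFILE_DIR}/motion_planning.prom.$$" &lt;&lt;EOF
# HELP motion_planning_requests_total Total number of planning requests.
# TYPE motion_planning_requests_total counter
motion_planning_requests_total 1842
# HELP motion_planning_errors_total Total number of failed planning requests.
# TYPE motion_planning_errors_total counter
motion_planning_errors_total 7
# HELP motion_planning_duration_seconds Duration of the most recent planning request.
# TYPE motion_planning_duration_seconds gauge
motion_planning_duration_seconds 0.42
EOF

# Rename atomically so the collector never reads a half-written file.
mv "${TEXTFILE_DIR}/motion_planning.prom.$$" "${TEXTFILE_DIR}/motion_planning.prom"
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;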

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Observability is crucial for debugging and monitoring robotics systems at every stage of an organization's growth. In small organizations, observability is a must for quickly detecting and resolving issues. For larger organizations, the ability to effectively collect and process large volumes of log data from a fleet of robots, or to monitor the health and utilization of such a fleet, is a must. Ensuring observability of a robotics system at scale requires not only careful design and nontrivial implementation work on tooling but also the establishment of conventions and practices that are agreed upon and adhered to by multiple teams.&lt;/p&gt;



&lt;h4&gt;
  
  
  Acknowledgements
&lt;/h4&gt;

&lt;p&gt;I thank Chris Palmer and Rastislav Komara for sharing their experiences and insights.&lt;/p&gt;

</description>
      <category>robotics</category>
      <category>observability</category>
    </item>
    <item>
      <title>Getting started with programming, kind of</title>
      <dc:creator>Mike Chung</dc:creator>
      <pubDate>Sat, 11 Sep 2021 05:06:48 +0000</pubDate>
      <link>https://dev.to/mjyc/getting-started-with-programming-kind-of-455n</link>
      <guid>https://dev.to/mjyc/getting-started-with-programming-kind-of-455n</guid>
      <description>&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://leetcode.com/problemset/all/"&gt;https://leetcode.com/problemset/all/&lt;/a&gt; for coding interviews

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://youtu.be/xAxgzrj8zgU"&gt;https://youtu.be/xAxgzrj8zgU&lt;/a&gt; this guy shows how to meta-study for coding interviews&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://refactoring.guru/"&gt;&lt;/a&gt;&lt;a href="https://refactoring.guru/"&gt;https://refactoring.guru/&lt;/a&gt; for refactoring &amp;amp; reviewing design patterns

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.martinfowler.com/architecture/"&gt;https://www.martinfowler.com/architecture/&lt;/a&gt; if you want more&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/donnemartin/system-design-primer"&gt;&lt;/a&gt;&lt;a href="https://github.com/donnemartin/system-design-primer"&gt;https://github.com/donnemartin/system-design-primer&lt;/a&gt; for system design&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dev.to/"&gt;&lt;/a&gt;&lt;a href="https://dev.to/"&gt;https://dev.to/&lt;/a&gt; explained stuff to me like I'm 4 years old

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://dev.to/mjyc"&gt;https://dev.to/mjyc&lt;/a&gt; I used to post some too!&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://homes.cs.washington.edu/%7Ejstn/#/teaching"&gt;&lt;/a&gt;&lt;a href="https://homes.cs.washington.edu/%7Ejstn/#/teaching"&gt;https://homes.cs.washington.edu/~jstn/#/teaching&lt;/a&gt; my grad school buddy who has amazing Git &amp;amp; C++ beginner’s tutorials

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://girliemac.com/blog/2017/12/26/git-purr/"&gt;https://girliemac.com/blog/2017/12/26/git-purr/&lt;/a&gt; git purr&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/alebcay/awesome-shell"&gt;&lt;/a&gt;&lt;a href="https://github.com/alebcay/awesome-shell"&gt;https://github.com/alebcay/awesome-shell&lt;/a&gt; shell is awesome&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/Mayccoll/Gogh"&gt;&lt;/a&gt;&lt;a href="https://github.com/Mayccoll/Gogh"&gt;https://github.com/Mayccoll/Gogh&lt;/a&gt; I like dracula, nord, github, night owl, dissonance/brogrammer, and broadcast

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.reddit.com/r/terminal_porn/"&gt;https://www.reddit.com/r/terminal_porn/&lt;/a&gt; oh yeah&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://carbon.now.sh/"&gt;&lt;/a&gt;&lt;a href="https://carbon.now.sh/"&gt;https://carbon.now.sh/&lt;/a&gt; if you need to share your amazing command line script&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://roadmap.sh/"&gt;&lt;/a&gt;&lt;a href="https://roadmap.sh/"&gt;https://roadmap.sh/&lt;/a&gt; cause developers need a roadmap too&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>beginners</category>
      <category>programming</category>
    </item>
    <item>
      <title>Testing robotics systems in fast-paced startups</title>
      <dc:creator>Mike Chung</dc:creator>
      <pubDate>Sun, 03 Jan 2021 00:06:03 +0000</pubDate>
      <link>https://dev.to/mjyc/testing-robotics-systems-2062</link>
      <guid>https://dev.to/mjyc/testing-robotics-systems-2062</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://mjyc.github.io/2020/12/16/testing.html"&gt;https://mjyc.github.io&lt;/a&gt; on December 16, 2020.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rl88dN27--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://live.staticflickr.com/195/506281600_a68f821d33_c.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rl88dN27--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://live.staticflickr.com/195/506281600_a68f821d33_c.jpg" width="780" height="495"&gt;&lt;/a&gt;&lt;br&gt;Starcraft II, Photo by &lt;a href="https://www.flickr.com/photos/tirrell/"&gt;Zach Tirrell&lt;/a&gt; on &lt;a href="https://www.flickr.com/"&gt;flickr&lt;/a&gt;
  &lt;/p&gt;

&lt;p&gt;Testing robotics systems is hard. Based on my experience working at startups with fewer than 200 employees and fewer than 100 robots, providing RaaS with fleets of indoor mobile robots or lines of robot manipulators, the main reasons for the difficulty were as follows[^1]:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Edge cases and corner cases in production environments.&lt;/li&gt;
&lt;li&gt;The difficulty of using simulation.&lt;/li&gt;
&lt;li&gt;Challenges with adopting automation.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why is Testing Robotics Systems Hard?
&lt;/h2&gt;

&lt;p&gt;In &lt;strong&gt;production&lt;/strong&gt;, I've encountered various &lt;strong&gt;edge cases and corner cases&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Edge cases for robotics algorithms.&lt;/em&gt; Input spaces for robotics algorithms, such as perception, control, and motion planning, are vast and challenging to effectively cover for edge cases; there is always a specific layout that causes navigation failures or a particular scene with specific objects that leads to grasping failures. Characterizing such instances is difficult and algorithm-dependent, which complicates the testing setup.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Rare hardware issues.&lt;/em&gt; Rare hardware issues that are not (directly) detectable are the worst, such as minor damage to the robot cell structure that requires adjusting the collision map. Anticipating such issues requires input from domain experts (e.g., mechanical or firmware engineers), who may not be easily accessible and who speak different jargon, and reproducing the issues often requires changing interfaces, which can be expensive (e.g., it becomes yet another layer to maintain).&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Subtle regressions.&lt;/em&gt; The complexity of robotics systems makes it challenging to establish a robust &lt;a href="https://katalon.com/resources-center/blog/regression-testing"&gt;regression testing&lt;/a&gt; pipeline. For example, handling low-frequency &lt;a href="https://docs.gitlab.com/ee/development/testing_guide/flaky_tests.html"&gt;flaky tests&lt;/a&gt; and implementing robust &lt;a href="https://damorimrg.github.io/practical_testing_book/testregression/selectionprio.html"&gt;test selection and prioritization&lt;/a&gt;[^2] are difficult, so elusive bugs slip back into the production code. Performance regressions are particularly challenging--especially ones caused by low-level concurrency issues[^3]--as they are subtle and require expensive measures such as repeated end-to-end tests and delicate statistical methods to detect.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Unexpected peak load conditions/usage patterns.&lt;/em&gt;
It is common for multiple (custom) software components, such as core robotics, monitoring, and infra-related software, to run in parallel.
Unexpectedly high demand can adversely impact your program, e.g., by consuming all of the available resources.
Anticipating and recreating such situations is challenging, especially when dealing with (custom) software at all levels, including firmware and system software.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Using simulation&lt;/strong&gt; for testing robotics systems effectively &lt;strong&gt;is not as easy&lt;/strong&gt; as it seems.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Inadequate usage.&lt;/em&gt; I find that simulation testing is most useful for end-to-end testing of robot applications. However, I often encounter test cases that would benefit from using other tools and techniques (e.g., for efficiency). All too frequently, I come across test cases for robot behaviors (e.g., implemented as finite state machines or behavior trees) that use simulation, making the test cases much more expensive than they need to be. In such cases, using alternatives like &lt;a href="https://martinfowler.com/bliki/TestDouble.html"&gt;fakes&lt;/a&gt; or &lt;a href="https://www.educative.io/answers/what-is-model-based-testing"&gt;model-based testing&lt;/a&gt;[^4] would be much more efficient, as they can "simulate" only the directly relevant modules[^5]. This often stems from organizational issues, such as unclear boundaries between teams that result in poorly defined interfaces and testing strategies[^6], or, more typically, insufficient time allocated to testing and addressing technical debt (e.g., in favor of prioritizing other deliverables).&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Generating test cases.&lt;/em&gt; Even with simulation libraries that provide high-level interfaces for building scenarios, creating effective test scenarios is challenging. Creating a single simulated environment for end-to-end testing alone is laborious enough, so diversifying the test scenarios (e.g., to cover extreme cases) becomes a nice-to-have[^7]. There are commercial products that address this issue (e.g., &lt;a href="https://aws.amazon.com/blogs/aws/aws-announces-worldforge-in-aws-robomaker/"&gt;AWS RoboMaker WorldForge&lt;/a&gt;), but they are not easy for smaller organizations (i.e., startups) to integrate due to reasons such as integration cost, vendor lock-in, etc.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Expressing specifications.&lt;/em&gt; Specifications for many robotics programs, e.g., those involving perception, motion planning, and behaviors, are difficult to express due to their spatiotemporal nature. This leads to verbose and unorganized (e.g., containing duplicates) test code, which makes it difficult to maintain and scale.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Managing infrastructure.&lt;/em&gt; I haven't met a single person who loves managing simulation testing infrastructure, e.g., for continuous integration. Simulation test code is expensive to run, requires special hardware such as GPUs, and is difficult to optimize and move around (e.g., in cloud environments). This leads to a poor developer experience and can even result in the disabling of simulation testing.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Automating&lt;/strong&gt; robotics software &lt;strong&gt;testing is still hard&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Challenges with automating build and deployment.&lt;/em&gt;
Here are tech talks and a blog post that shed light on this topic:

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://picknik.ai/ros/debian/packaging/2023/02/27/packaging-ros-with-github-actions.html"&gt;Packaging ROS with GitHub Actions&lt;/a&gt; from PICKNIK Blog, 2023.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://youtu.be/fjfFe98LTm8"&gt;Building Self Driving Cars with Bazel&lt;/a&gt; from Cruise, BazelCon 2019 - shares Cruise's experiences with building and testing robotics software at scale&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.airbotics.io/blog/software-deployment-landscape"&gt;The landscape of software deployment in robotics&lt;/a&gt; from Airbotics - summarizes the typical challenges with deploying robotics software&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://youtu.be/JNV9CkARh_g"&gt;Physical continuous integration on real robots&lt;/a&gt; from Fetch, ROSCon 2016 - shares Fetch's experience with setting up and using a physical continuous integration pipeline&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;No standard.&lt;/em&gt; Automating software testing requires agreement among engineering teams on build, deployment, and test models. Given how robotics brings multiple communities together, such as research (e.g., computer vision, robotics), web development (e.g., frontend, backend), DevOps, embedded, etc., reaching such an agreement, or even just discussing ideas (e.g., due to different backgrounds), is difficult. While the Robot Operating System (ROS) and the communities around it have made significant progress in this regard, the lack of standards still seems to be a significant problem in organizations.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Where to Start with Testing: General Techniques
&lt;/h2&gt;

&lt;p&gt;To figure out where to start with testing a robotics system, I use the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prioritization framework for creating tests.&lt;/li&gt;
&lt;li&gt;Systematic procedure for identifying what to test.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The techniques discussed in this section are general and not especially technical, and they are mostly aimed at addressing the aforementioned challenges around "edge and corner cases in production environments".&lt;/p&gt;

&lt;h3&gt;
  
  
  Eisenhower Matrix for Test Prioritization
&lt;/h3&gt;

&lt;p&gt;In robotics startups that build complex systems, creating comprehensive test suites is impossible. To produce high-impact tests within the time budget, I use my adapted &lt;a href="https://www.eisenhower.me/eisenhower-matrix/"&gt;Eisenhower Matrix&lt;/a&gt; to prioritize a list of failure scenarios by first categorizing (potential) failure scenarios according to their (expected) frequency and risk.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--48mQsx03--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://mjyc.github.io/assets/imgs/mcmatrix.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--48mQsx03--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://mjyc.github.io/assets/imgs/mcmatrix.png" width="800" height="584"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;First Quadrant (upper left): frequent and high-risk.&lt;/strong&gt; In Quadrant 1 (Q1), I place failure scenarios that need to be covered immediately, e.g., that I hear all the time from internal communication channels, such as introducing breaking changes to APIs and dependencies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Second Quadrant (upper right): frequent and medium-risk.&lt;/strong&gt; In Quadrant 2 (Q2), I place failure scenarios that occur frequently but allow continued operations with short downtime like unreliable hardware or unresponsive user interface issues with well-established alerts and recovery procedures.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Third Quadrant (lower left): infrequent and high-risk.&lt;/strong&gt; In Quadrant 3 (Q3), I place failure scenarios that occur rarely but cause significant disruption to operations, such as a core robotics component failure or an unexpected peak usage pattern.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fourth Quadrant (lower right): infrequent and medium-risk.&lt;/strong&gt; In Quadrant 4 (Q4), I place failure scenarios that occur relatively infrequently and allow continued operations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The placement of example failure scenarios in quadrants will differ across companies. For instance, depending on the maturity of the robot product/prototype or the amount of time invested by the engineering team in designing the system, an unreliable hardware failure scenario may belong in Q1 (e.g., if it is causing multiple issues) or a robotics algorithms failure scenario may belong in Q2 (e.g., if the failure is not catastrophic or easily recoverable). In general, I create or improve tests for one quadrant at a time, in increasing order. After working on tests for Q1, I move on to tests for Q2 before addressing those for Q3. This is because creating tests for Q3 requires a significant time investment, for example, to ensure the reproducibility of the failure. Usually, there is no time available to work on tests for Q4. But adjustments should be made to meet organization-specific requirements and constraints.&lt;/p&gt;

&lt;h3&gt;
  
  
  Test Scenario Identification Procedure
&lt;/h3&gt;

&lt;p&gt;So far, I have assumed that the failure scenarios to test are known; however, this is usually not the case.&lt;br&gt;
To determine what to test, I follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Gain access to internal alerts, dashboards, and logs.&lt;/strong&gt; Investigating recently reported problems or analyzing the latest trends using monitoring tools[^8] is the easiest way to identify high-risk failure scenarios. If monitoring tools are not set up (e.g., in smaller companies), I get involved in operations work, which is another way to uncover potential high-value tests to create.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Identify interface and service boundaries.&lt;/strong&gt; Understanding how software components interact with each other provides insights into potential integration failures and their impact. I start by looking for internal documentation with system diagrams (or examining the codebase and creating them if such diagrams don't exist) and ask questions such as: Which interactions must not fail? Which interactions change frequently? Such exercises reveal missing must-have contract tests or high-impact opportunities to improve integration tests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Identify implicit dependencies.&lt;/strong&gt; I consider edge cases such as low resources, unexpected hardware states, or unseen inputs to robotics algorithms (e.g., those that crash applications) as unmet runtime dependencies. Taking this view nudges me to specify these not-well-understood requirements for keeping the system (or "implicit dependencies") well-behaving as explicitly and clearly as possible. Once defined, such requirements can be used to create extreme failure scenarios to test.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Ensuring Testability as a Startup Grows
&lt;/h2&gt;

&lt;p&gt;Below, I share my insights on key practices to employ at each growth stage/funding round of robotics startups. The real motivation behind testing is reliability (e.g., of the provided service), so the key practices below also cover related areas such as debugging and observability[^9].&lt;/p&gt;

&lt;p&gt;If you have experience with SaaS/web service startups, you might notice that the RaaS/robotics company size (i.e., the number of employees) at each growth stage is larger, and some key practices occur later in robotics startups. This is because RaaS/robotics products not only consist of a more diverse set of software components but also require additional teams like electrical/firmware engineering, mechanical engineering, and manufacturing operations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Series A: 5-20 Employees
&lt;/h3&gt;

&lt;p&gt;At this stage, startups are likely to have fewer than 5 customers/design partners and a handful of developers who are relentlessly building (and fixing) major components of the company's first product. The goal of the startups is to prove the value of their product to their (rather forgiving) customers by succeeding in basic tasks performed by robots as much as possible. For example, a delivery robot should navigate without colliding with obstacles, and a robot manipulator should pick and place objects without dropping them in customers' (production) environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Practices:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Setting up &lt;strong&gt;continuous integration or nightly tests&lt;/strong&gt; (e.g., using Jenkins, GitLab CI/CD, GitHub Actions, or a &lt;a href="https://picknik.ai/ros/debian/packaging/2023/02/27/packaging-ros-with-github-actions.html"&gt;Debian build farm&lt;/a&gt;) with &lt;strong&gt;end-to-end tests involving a high-fidelity simulator&lt;/strong&gt; (e.g., Gazebo) to quickly smoke-test rapidly changing codebases (see the sketch after this list).&lt;/li&gt;
&lt;li&gt;Collaborating on &lt;strong&gt;internal communication channels&lt;/strong&gt; (e.g., Slack) and utilizing a (custom) &lt;strong&gt;teleop solution&lt;/strong&gt; (e.g., built on MQTT, WebRTC) to quickly respond to critical incidents.&lt;/li&gt;
&lt;li&gt;Creating &lt;strong&gt;metrics and dashboards to track business-critical measures&lt;/strong&gt; (e.g., using Grafana/Prometheus) such as the number of deliveries/distance traveled or throughput/objects kitted, to guide all developers/employees at a high level.&lt;/li&gt;
&lt;/ul&gt;
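
&lt;p&gt;As a rough sketch of what the simulation smoke test entry point might look like, here is a minimal wrapper a CI job could call. The package and launch file names are hypothetical; only the overall shape (hard timeout, non-zero exit fails the job) is the point.&lt;/p&gt;

&lt;div class="highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;#!/usr/bin/env bash
# Hypothetical CI smoke test wrapper; the package and launch file names are placeholders.
set -euo pipefail

# Fail the job if the simulated end-to-end run doesn't finish cleanly within 20 minutes.
timeout 20m ros2 launch delivery_sim e2e_smoke_test.launch.py headless:=true
echo "smoke test passed"
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;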

&lt;h3&gt;
  
  
  Series B: 21-200 Employees
&lt;/h3&gt;

&lt;p&gt;Startups at this stage start expanding their customer base and aim to scale their operations to deploy and handle, for example, more than 100 robots. The companies now have (small) teams of developers working on enhancing the robustness of core robotics software components to handle the diverse environments of new (and unforgiving) customers, providing a user experience acceptable to non-beta users (e.g., by building proper onboard and/or desktop UIs), and/or building infrastructure to scale operations. The robotics system powering the product becomes much more complex, and to manage that complexity its architecture becomes more modular, composable, and distributed, and team boundaries/ownership emerge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Practices:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Employing &lt;strong&gt;&lt;a href="https://martinfowler.com/bliki/IntegrationTest.html"&gt;narrow integration tests and contract tests&lt;/a&gt;&lt;/strong&gt; using &lt;a href="https://martinfowler.com/bliki/TestDouble.html"&gt;test doubles&lt;/a&gt; like fakes and spies, sometimes going as far as implementing low-fidelity simulators with them (e.g., using pytest, GoogleTest), to efficiently test each team's evolving software component in isolation as well as the complex interactions between such components.&lt;/li&gt;
&lt;li&gt;Establishing a &lt;strong&gt;deployment strategy with rollback support&lt;/strong&gt; (e.g., using tools that enable infrastructure-as-code/GitOps like Ansible or Terraform, support over-the-air updates, etc.) to avoid manually applying untrackable hotfixes for fear of losing customers. By the way, this practice is nontrivial to achieve for technical and cultural reasons; for more information, see &lt;a href="https://www.airbotics.io/blog/software-deployment-landscape"&gt;"The landscape of software deployment in robotics"&lt;/a&gt; from Airbotics. Typically, adopting deployment tooling sparks discussions like whether to adopt cloud-native tooling &lt;a href="https://ubuntu.com/blog/ros-docker"&gt;or not&lt;/a&gt;, which then sparks discussions on &lt;a href="https://discourse.ros.org/t/how-do-you-launch-your-systems/23383/16"&gt;system launching mechanisms&lt;/a&gt;, and so on.&lt;/li&gt;
&lt;li&gt;Implementing &lt;strong&gt;nightly tests on real robots&lt;/strong&gt; in a mirror warehouse or manufacturing line, or &lt;strong&gt;&lt;a href="https://youtu.be/JNV9CkARh_g"&gt;physical continuous integration&lt;/a&gt;&lt;/strong&gt; in a production-like environment, to prevent edge cases, corner cases, and performance regressions from reaching production environments as much as possible.&lt;/li&gt;
&lt;li&gt;Utilizing a (custom) &lt;strong&gt;fleet operations solution&lt;/strong&gt; to scale teleop-based incident recovery (e.g., more robots monitored/rescued per person), open up such operations to non-developers, etc.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Structuring and centralizing logs&lt;/strong&gt; (e.g., structlog, spdlog, and Loki, ELK, Splunk) to track performance across growing fleets and enable dynamically querying information for fast debugging or monitoring, e.g., with &lt;a href="https://mjyc.github.io/2023/04/21/observability.html"&gt;robotics-specific metrics&lt;/a&gt;. In addition, centralizing data would be ideal; however, given the large size of robotics data, doing so requires more care and resources, which usually becomes affordable in the next stage.&lt;/li&gt;
&lt;li&gt;Implementing a &lt;strong&gt;data recorder, replayer, and visualizer&lt;/strong&gt; (e.g., rosbag and RViz, Foxglove Studio, or custom-built ones) to enable debugging or optimizing robotics algorithms running in distributed systems (see the sketch after this list). By the way, building a performant data recorder and replayer, e.g., one that works well with both real robots and simulators, is no joke; there are issues like not being able to record or replay data fast enough, poor behavior with simulators that run slower or faster than real time, etc.&lt;/li&gt;
&lt;/ul&gt;
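
&lt;p&gt;For a sense of the record/replay workflow, here is a minimal sketch using the stock rosbag CLI (ROS 1); the bag file name and topic are placeholders, and a custom or ROS 2 tool would differ in the details.&lt;/p&gt;

&lt;div class="highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Hypothetical record/replay workflow; the bag name and topic are placeholders.

# On the robot: record everything while reproducing the incident.
rosbag record -a -O incident_1234.bag

# Later, on a dev machine: inspect and replay the recording.
rosbag info incident_1234.bag
rosbag play --clock incident_1234.bag   # publish /clock so nodes follow recorded time
rostopic echo /planner/status           # e.g., watch one (illustrative) topic during replay
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;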

&lt;h3&gt;
  
  
  Series C: 201-2000 Employees
&lt;/h3&gt;

&lt;p&gt;Robotics companies at this stage have experienced significant growth and have a large customer base. Their goal is to further scale their operations and efficiently handle a substantial number of robots, potentially exceeding 1000 units, without compromising performance. The company now consists of multiple teams working on various aspects of the robotics system, including core software development, user experience enhancement, infrastructure scalability, and customer support. Additionally, the focus expands beyond purely technical challenges to encompass broader organizational considerations, such as team structure, talent management, strategic partnerships, and market expansion.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Practices:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Implementing an &lt;strong&gt;advanced test automation framework&lt;/strong&gt; that encompasses a wide range of tests, including unit, integration, performance regression, and security tests. To improve testing efficiency, the company utilizes techniques such as &lt;a href="https://martinfowler.com/articles/rise-test-impact-analysis.html"&gt;test impact analysis&lt;/a&gt; and test selection and prioritization, which identify the most critical tests and prioritize their execution. Furthermore, they leverage cloud resources to run a large number of simulation-based end-to-end tests in parallel, accelerating the testing process.&lt;/li&gt;
&lt;li&gt;Establishing a &lt;strong&gt;formalized release management&lt;/strong&gt; process that includes thorough testing, staging environments, and controlled deployments. This involves leveraging continuous integration and continuous deployment (CI/CD) practices, utilizing version control systems (VCS), maintaining release documentation, and employing release dashboards. The company also invests in &lt;strong&gt;advanced deployment and orchestration&lt;/strong&gt; tools such as Terraform, Docker, and Kubernetes to facilitate large-scale, cross-team deployment and operation of a diverse fleet, encompassing robots, peripheral devices (e.g., elevator controller, manufacturing line controller), and (web) servers.&lt;/li&gt;
&lt;li&gt;Enhancing &lt;strong&gt;developer productivity&lt;/strong&gt; through faster iteration and improved debugging and testing tooling, ultimately aiming to enhance system reliability. At this stage, companies have the resources to invest in infrastructure and pipeline improvements. This includes setting up &lt;strong&gt;cloud development environments&lt;/strong&gt; utilizing platforms like GitPod, Coder, and DevZero. Additionally, efforts are made to &lt;strong&gt;speed up build times&lt;/strong&gt; through techniques such as &lt;a href="https://bazel.build/remote/caching"&gt;remote caching&lt;/a&gt; (see the sketch after this list). &lt;strong&gt;Streamlining the code review and merge process&lt;/strong&gt; is achieved using tools like a merge queue. Furthermore, &lt;strong&gt;specialized tooling&lt;/strong&gt; is developed to cater to the specific needs of the company. For instance, enhancing the data recorder and replayer to support &lt;strong&gt;streaming robotics data&lt;/strong&gt; enables faster debugging by bypassing the need to download large datasets onto local development machines. The data recorder and replayer are also optimized to handle high throughput and enable navigation between critical timestamps, and their integration with high-fidelity simulators is improved to facilitate seamless interaction. Another example of specialized tooling is a &lt;strong&gt;test scenario generator&lt;/strong&gt;, which automates the generation of realistic test scenarios, enhancing test coverage and enabling the identification of edge cases and potential failure scenarios.&lt;/li&gt;
&lt;li&gt;Implementing &lt;strong&gt;incident response and resolution workflows&lt;/strong&gt; to minimize downtime and efficiently handle critical incidents. The fleet operations tool should leverage automation (e.g., incident detection) and require minimal human input for recovery, and the workflow or procedure should allow each team to be responsible for the software they write.&lt;/li&gt;
&lt;li&gt;Establishing a &lt;strong&gt;security and compliance framework&lt;/strong&gt; to ensure the protection of sensitive data, intellectual property, and customer privacy. This includes implementing secure coding practices, conducting regular security audits, and complying with relevant industry standards and regulations.&lt;/li&gt;
&lt;/ul&gt;
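
&lt;p&gt;As a tiny, hypothetical illustration of the remote caching point, this is roughly what opting a Bazel-based build into a shared cache looks like; the cache endpoint is a placeholder.&lt;/p&gt;

&lt;div class="highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Hypothetical example: run tests against a shared remote cache (endpoint is a placeholder).
bazel test //... --remote_cache=grpcs://bazel-cache.internal.example:443 --remote_timeout=60

# Teams usually pin this in a shared .bazelrc instead of typing flags by hand:
# build --remote_cache=grpcs://bazel-cache.internal.example:443
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;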

&lt;h3&gt;
  
  
  Beyond 2000 Employees
&lt;/h3&gt;

&lt;p&gt;I don't have any firsthand experience with companies at this size, and in fact, I only know a handful of robotics companies at this stage, such as large autonomous vehicle companies like Waymo, Cruise, Aurora, and Zoox. I will update here after gaining more experience with companies at this stage, one day.&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing notes
&lt;/h2&gt;

&lt;p&gt;In this post, I have listed the challenges of testing robotics systems in fast-paced startups, shared my techniques for getting started with testing work, and provided insights on key practices for ensuring testability as an organization grows.&lt;/p&gt;

&lt;p&gt;Let me know what you think by leaving comments below or messaging me on &lt;a href="https://www.linkedin.com/in/michaeljaeyoonchung/"&gt;LinkedIn&lt;/a&gt; or &lt;a href="https://twitter.com/mjycio"&gt;Twitter&lt;/a&gt;!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Do my experiences/approaches resonate/align (or not resonate/align) with yours?&lt;/li&gt;
&lt;li&gt;Do you have any test-related war stories or effective testing strategies you'd like to share?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'd love to hear your thoughts.&lt;/p&gt;



&lt;h4&gt;
  
  
  Acknowledgements
&lt;/h4&gt;

&lt;p&gt;I thank all my colleagues who have engaged in conversations with me on related topics.&lt;/p&gt;

&lt;h4&gt;
  
  
  Significant Revisions
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;2023/06/29&lt;/em&gt;: Added the "Ensuring reliability as an organization grows" section&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;2023/05/28&lt;/em&gt;: Rewrote the whole post&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Footnotes
&lt;/h4&gt;

&lt;p&gt;[^1] The identified challenges and my approaches may not generalize to other settings, such as testing in robotics companies that are much smaller (i.e., &amp;lt; 10 employees) or much bigger (i.e., &amp;gt; 1000 employees), or involving a different product, such as an autonomous vehicle-based ride-hailing service or an autonomous inspection service. For example, I don't have much experience with testing robotics systems that make heavy use of &lt;a href="https://getcruise.com/news/blog/2020/cruises-continuous-learning-machine-predicts-the-unpredictable-on-san/"&gt;machine learning&lt;/a&gt; or &lt;a href="https://docs.ros.org/en/iron/index.html"&gt;real-time programming&lt;/a&gt;.&lt;br&gt;
[^2] See also &lt;a href="https://martinfowler.com/articles/rise-test-impact-analysis.html"&gt;The Rise of Test Impact Analysis&lt;/a&gt; by Martin Fowler.&lt;br&gt;
[^3] E.g., see &lt;a href="https://youtu.be/II8yCw5tPE0"&gt;ROSCon 2017 Vancouver Day 2 Determinism in ROS&lt;/a&gt; and &lt;a href="https://vimeopro.com/osrfoundation/roscon-2019/video/379127709"&gt;ROSCON 2019 MACAU: CONCURRENCY IN ROS 1 AND 2: FROM ASYNCSPINNER TO MULTITHREADEDEXECUTOR&lt;/a&gt;.&lt;br&gt;
[^4] For code examples, see &lt;a href="https://hypothesis.readthedocs.io/en/latest/stateful.html"&gt;Stateful testing&lt;/a&gt; (Python) or &lt;a href="https://medium.com/criteo-engineering/detecting-the-unexpected-in-web-ui-fuzzing-1f3822c8a3a5"&gt;Detecting the unexpected in (Web) UI&lt;/a&gt; (JavaScript).&lt;br&gt;
[^5] Check out &lt;a href="https://martinfowler.com/bliki/IntegrationTest.html"&gt;IntegrationTest&lt;/a&gt; by Martin Fowler for related discussion, e.g., about narrow and broad integration tests.&lt;br&gt;
[^6] Check out &lt;a href="https://martinfowler.com/articles/2021-test-shapes.html"&gt;On the Diverse And Fantastical Shapes of Testing&lt;/a&gt; (at least the last paragraph starting with "If you're paying my careful prose ...") by Martin Fowler for related discussion.&lt;br&gt;
[^7] I enjoy following research papers in this space, such as those taking a grammar-based approach like &lt;a href="https://dl.acm.org/doi/abs/10.1145/3314221.3314633"&gt;Scenic: a language for scenario specification and scene generation&lt;/a&gt;.&lt;br&gt;
[^8] See also &lt;a href="https://twitter.com/GergelyOrosz/status/1665340939529773057"&gt;this Twitter thread&lt;/a&gt; from Gergely Orosz.&lt;br&gt;
[^9] See also my other post, &lt;a href="https://mjyc.github.io/2023/04/21/observability.html"&gt;Robo-Observability&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>robotics</category>
      <category>testing</category>
    </item>
    <item>
      <title>Getting started with robotics: Just do it!</title>
      <dc:creator>Mike Chung</dc:creator>
      <pubDate>Sun, 24 May 2020 21:27:43 +0000</pubDate>
      <link>https://dev.to/mjyc/getting-started-with-robotics-1a32</link>
      <guid>https://dev.to/mjyc/getting-started-with-robotics-1a32</guid>
      <description>&lt;p&gt;Getting started with robotics is confusing.&lt;br&gt;
Robotics is an interdisciplinary field, and people think of many different things when they are trying to learn about it. For example, Googling "getting started with robotics" gives me the following top three results:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=uw-4K9joFL8"&gt;How To Start With Robotics? - YouTube&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://robotsforroboticists.com/getting-started-kids-adults/"&gt;Robotics for Kids (and Adults) – Getting Started and How to Progress&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://robots.ieee.org/learn/getting-started/"&gt;Getting Started in Robotics - ROBOTS: Your Guide to the World of Robotics&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They talk about learning skills related to the fields of mechanical engineering, electrical engineering, and computer science. At first, it just felt overwhelming. But reading each of them slowly again, I found they were great tutorials, especially because they all shared one great message--"learn by doing projects" (&lt;a href="https://www.amazon.com/Robotics-Project-Based-Approach-Lakshmi-Prayaga-ebook/dp/B00PG922M4"&gt;there is even a book named in a similar spirit&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;I 100% agree with the message: I think people should learn robotics by doing projects. In fact, I recently shared &lt;a href="https://github.com/mjyc/awesome-robotics-projects"&gt;my curated list of opensource (and other) robotics projects&lt;/a&gt; for those who are interested in building robots. Because I'm a programmer by training, one additional suggestion I'd like to add is "start by working with a simulator". Working with hardware is fun, but it can be extremely time-consuming, so by working with a simulator first you can get a feel for the robot and identify potential problems early (see the small sketch below). Projects like &lt;a href="https://mushr.io/"&gt;MuSHR&lt;/a&gt; and &lt;a href="https://hackaday.io/project/164992-bobble-bot"&gt;bobble-bot&lt;/a&gt; are great because they provide robot simulators as well as detailed instructions for building robots. &lt;a href="https://atsushisakai.github.io/PythonRobotics/"&gt;PythonRobotics&lt;/a&gt; is another great entry point for learning about robotics algorithms. The repository provides tiny, simple environments for testing the algorithms, which are great for learning purposes. Here is a list of &lt;a href="https://www.ros.org/"&gt;ROS&lt;/a&gt;-based simulators that I've curated in &lt;a href="https://rds.theconstructsim.com/r/mchung/"&gt;ROS Development Studio&lt;/a&gt;, &lt;a href="https://www.theconstructsim.com/rds-ros-development-studio/"&gt;a cloud service&lt;/a&gt; that allows you to work on ROS projects in a browser. In a similar spirit, I encourage using a single-board computer such as a &lt;a href="https://www.raspberrypi.org/"&gt;Raspberry Pi&lt;/a&gt; or &lt;a href="https://developer.nvidia.com/embedded/learn/tutorials"&gt;NVIDIA Jetson product&lt;/a&gt; instead of a microcontroller like an &lt;a href="https://www.arduino.cc/"&gt;Arduino&lt;/a&gt;. Programming a microcontroller can be fun and can allow you to develop a solution that is highly tailored to your use case, but for learning purposes, it can become a rabbit hole that prevents you from completing the project you started. However, if your goal is learning mechanical or electrical engineering, my advice (or rather, opinions) may not be for you.&lt;/p&gt;
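&lt;p&gt;To make the "start with a simulator" advice concrete, here is a minimal sketch (mine, not from any of the projects above) of the tiniest possible "simulator": a hand-rolled unicycle-model robot driven toward a goal by a proportional controller, in plain Python with no hardware or ROS required. All names and numbers are made up for illustration.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import math


def step(state, v, w, dt=0.1):
    """Integrate the unicycle model (x, y, heading) one time step."""
    x, y, theta = state
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + w * dt)


def go_to_goal(state, goal):
    """Simple proportional controller: turn toward the goal, drive forward."""
    x, y, theta = state
    gx, gy = goal
    heading_error = math.atan2(gy - y, gx - x) - theta
    # Wrap the error to [-pi, pi] so the robot turns the short way.
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
    return 0.5, 2.0 * heading_error  # forward velocity, angular velocity


state, goal = (0.0, 0.0, 0.0), (2.0, 1.0)
for t in range(500):
    v, w = go_to_goal(state, goal)
    state = step(state, v, w)
    if math.hypot(goal[0] - state[0], goal[1] - state[1]) &amp;lt; 0.1:
        print(f"reached the goal at step {t}: x={state[0]:.2f}, y={state[1]:.2f}")
        break
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Even a toy loop like this lets you play with controller gains and failure cases long before any wiring or 3D printing.&lt;/p&gt;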

&lt;p&gt;Finally, I believe getting involved with robotics communities is an effective way to learn. The list below could be a good entry point for learning about software-focused robotics&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/topics/robotics"&gt;github repos with #robotics tag&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://discourse.ros.org/"&gt;ROS discourse&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://foxglove.dev/blog"&gt;Foxglove blog&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://picknik.ai/blog/"&gt;PICKNIK blog&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.nvidia.com/blog/tag/isaac-sim/"&gt;Isaac Sim Technical Blog&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.duckietown.org/research/ai-driving-olympics"&gt;The AI Driving Olympics (AI-DO)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.balena.io/blog"&gt;Balena blog&lt;/a&gt; - they provide less robotics and more IoT-centric contents&lt;/li&gt;
&lt;li&gt;&lt;a href="https://getcruise.com/news"&gt;Cruise news&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/mjyc/awesome-robotics-system-design"&gt;Awesome Robotics System Design&lt;/a&gt; - where I keep interesting software-focused robotics stuff&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/mjyc/awesome-robotics-problems-design"&gt;Awesome Robotics Problems&lt;/a&gt; - where I keep interesting robotics problems, datasets, and algorithm implementations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;the list below for learning about electronics-focused robotics&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.sparkfun.com/news"&gt;sparkfun news&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.adafruit.com/"&gt;adafruit blog posts&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;and the list below for learning about hardware-focused robotics&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.instructables.com/"&gt;instructables&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://hackaday.com/"&gt;hackaday&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.hackster.io/"&gt;hackster.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.onshape.com/en/blog/"&gt;onshape blog&lt;/a&gt; - &lt;a href="https://hackaday.com/2021/02/28/onshape-to-robot-models-made-easier/"&gt;roboticsts love it&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This may be a bit off topic, but since people associate "robotics" with AI/ML computer science research, it might be fun to skim robotics-related papers on open paper review and curated paper list websites:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/"&gt;https://arxiv.org/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://openreview.net/"&gt;https://openreview.net/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://paperswithcode.com/"&gt;https://paperswithcode.com/&lt;/a&gt; - one note: not all researchers are great coders/documenters.&lt;/li&gt;
&lt;li&gt;&lt;a href="http://bohg.cs.stanford.edu/list/"&gt;http://bohg.cs.stanford.edu/list/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Speaking of skimming, it might be inspiring to skim the class materials from &lt;a href="https://courses.cs.washington.edu/courses/cse478/20wi/"&gt;CSE 478: Autonomous Robotics&lt;/a&gt;. Unlike many other class materials, their slides provide application examples of the introduced concepts on an open-source autonomous mobile robot platform, &lt;a href="https://mushr.io/"&gt;MuSHR&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WARNING&lt;/strong&gt; Reading papers and studying class materials can become yet another rabbit hole. There are endless interesting papers (on the surface) and concepts (from class slides), and they can distract you from finishing your project. What happens is that you feel a sense of achievement and growth, so you get tempted to keep learning. Being able to stay on track and learn only the necessary skills--taking the project to the finish line, and defining that finish line in the first place--is a huge challenge and probably the most important skill to learn.&lt;/p&gt;

&lt;p&gt;With that said, go explore project ideas, check out robotics communities, and start your project! I believe now is the time to learn about robotics, and I hope this blurb can be helpful to aspiring roboticists.&lt;/p&gt;

</description>
      <category>robotics</category>
      <category>beginners</category>
      <category>programming</category>
      <category>motivation</category>
    </item>
    <item>
      <title>Installed ROS Lunar on macOS Mojave! But...</title>
      <dc:creator>Mike Chung</dc:creator>
      <pubDate>Mon, 18 May 2020 04:17:10 +0000</pubDate>
      <link>https://dev.to/mjyc/installed-ros-lunar-on-macosx-mojave-but-51f2</link>
      <guid>https://dev.to/mjyc/installed-ros-lunar-on-macosx-mojave-but-51f2</guid>
      <description>&lt;h3&gt;
  
  
  Motivation
&lt;/h3&gt;

&lt;p&gt;I've been playing with &lt;a href="http://cozmosdk.anki.com/docs/"&gt;Cozmo&lt;/a&gt; lately and tried to &lt;a href="https://github.com/mjyc/ros_cozmo"&gt;integrate its SDK with ROS&lt;/a&gt;. Initially, I used an Ubuntu desktop for running ROS, but I wanted to use my MacBook Pro laptop instead, so I tried installing ROS on macOS. I found the following instructions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="http://wiki.ros.org/melodic/Installation/macOS/Homebrew/Source"&gt;http://wiki.ros.org/melodic/Installation/macOS/Homebrew/Source&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/mikepurvis/ros-install-osx"&gt;https://github.com/mikepurvis/ros-install-osx&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;and the first one seemed like the official one, so I tried it. There was a big yellow warning:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This is a work in progress! At present, the instructions cover only the installation of ROS-Comm (Bare Bones) variant and tested on the following configuration:&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- macOS Mojave + native (Apple) Python 2.7.10 + XCode 11.2.1
&lt;/code&gt;&lt;/pre&gt;
&lt;/blockquote&gt;

&lt;p&gt;but my setup met the described requirements and I only needed the ROS-Comm (Bare Bones) packages, so I just went ahead with it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Initial Installation
&lt;/h3&gt;

&lt;p&gt;First, I ran all the commands in the &lt;a href="http://wiki.ros.org/melodic/Installation/macOS/Homebrew/Source#Setup"&gt;Setup&lt;/a&gt; section and they went through smoothly. I felt great, so I additionally ran the ominous &lt;code&gt;brew upgrade&lt;/code&gt; since it had been a while since I last did.&lt;/p&gt;

&lt;p&gt;In the &lt;a href="http://wiki.ros.org/melodic/Installation/macOS/Homebrew/Source#melodic.2FInstallation.2FSource.Installation"&gt;Installation&lt;/a&gt; section, I chose the ROS-Comm option in &lt;a href="http://wiki.ros.org/melodic/Installation/macOS/Homebrew/Source#melodic.2FInstallation.2FSource.Create_a_catkin_Workspace"&gt;2.1&lt;/a&gt; as I had planned earlier. Everything went through until running &lt;code&gt;catkin_make_isolated&lt;/code&gt; in subsection 2.5.2, "Building the catkin Workspace". It output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CMake Error at test/CMakeLists.txt:108 (set_target_properties):
  set_target_properties Can not find target to add properties to: test_socket
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pfft! Who needs tests? I added the &lt;code&gt;-DCATKIN_ENABLE_TESTING=0&lt;/code&gt; flag and ran the install command below again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release -DCMAKE_MACOSX_RPATH=ON -DCMAKE_INSTALL_RPATH=$HOME/ros_catkin_ws/install_isolated/lib -DCATKIN_ENABLE_TESTING=0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then it output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
[ 81%] Building CXX object CMakeFiles/roscpp.dir/src/libros/transport_subscriber_link.cp
p.o
/Users/mjyc/ros_catkin_ws/src/ros_comm/roscpp/src/libros/transport/transport_udp.cpp:68:
18: error:
      expected '('
, server_address_{}
                 ^
...
/Users/mjyc/ros_catkin_ws/src/ros_comm/roscpp/src/libros/transport/transport_udp.cpp:69:
1: error:
      expected unqualified-id
, local_address_{}
^
3 warnings and 2 errors generated.
make[2]: *** [CMakeFiles/roscpp.dir/src/libros/transport/transport_udp.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This output didn't look too good. I googled the error messages, and it sounded like I was trying to compile source code that is not compatible with the C++ compiler I have on my Mac. Arr... I didn't want to upgrade/downgrade my system/brew libraries, so I searched for different solutions on the internet but didn't like any of them. Then I thought about giving an older version of ROS a shot to see if that works, so I decided to &lt;a href="http://wiki.ros.org/lunar/Installation"&gt;install ROS Lunar&lt;/a&gt;. I ran this command to download the ROS Lunar packages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rosinstall_generator ros_comm --rosdistro lunar --deps --tar &amp;gt; lunar-ros_comm.rosinstall
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and ran the same &lt;code&gt;catkin_make_isolated&lt;/code&gt; command above to start building. It now complained about missing &lt;code&gt;boost_signals&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CMake Error at /usr/local/lib/cmake/Boost-1.72.0/BoostConfig.cmake:119 (find_package):
  Could not find a package configuration file provided by "boost_signals"
  (requested version 1.72.0) with any of the following names:

    boost_signalsConfig.cmake
    boost_signals-config.cmake

  Add the installation prefix of "boost_signals" to CMAKE_PREFIX_PATH or set
  "boost_signals_DIR" to a directory containing one of the above files.  If
  "boost_signals" provides a separate development package or SDK, be sure it
  has been installed.
Call Stack (most recent call first):
  /usr/local/lib/cmake/Boost-1.72.0/BoostConfig.cmake:184 (boost_find_component)
  /usr/local/Cellar/cmake/3.17.2/share/cmake/Modules/FindBoost.cmake:444 (find_package)
  CMakeLists.txt:25 (find_package)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It turned out &lt;a href="http://wiki.ros.org/melodic/Installation/macOS/Homebrew/Source#Modify_Some_CMake_Config_Files"&gt;the solution was mentioned in the ROS Melodic installation instructions&lt;/a&gt;. After modifying some CMake config files as suggested, the build completed successfully!&lt;/p&gt;

&lt;h3&gt;
  
  
  Testing and More Installation
&lt;/h3&gt;

&lt;p&gt;After sourcing the &lt;code&gt;setup.bash&lt;/code&gt; file, i.e., running &lt;code&gt;source ~/ros_catkin_ws/install_isolated/setup.bash&lt;/code&gt;, I ran &lt;code&gt;roscore&lt;/code&gt; and it worked!!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ roscore
... logging to /Users/mjyc/.ros/log/a15e85b8-98a1-11ea-8b7b-f45c898e9ff7/roslaunch-mjmbp
r.local-92159.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt

started roslaunch server http://mjmbpr.local:63709/
ros_comm version 1.13.7


SUMMARY
========

PARAMETERS
 * /rosdistro: lunar
 * /rosversion: 1.13.7

NODES

auto-starting new master
process[master]: started with pid [92166]
ROS_MASTER_URI=http://mjmbpr.local:11311/

setting /run_id to a15e85b8-98a1-11ea-8b7b-f45c898e9ff7
process[rosout-1]: started with pid [92169]
started core service [/rosout]
[ERROR] [1589763055.896770000]: [registerService] Failed to contact master at [mjmbpr.lo
cal:11311].  Retrying...
[ERROR] [1589763055.960174000]: [registerPublisher] Failed to contact master at [mjmbpr.
local:11311].  Retrying...
[ERROR] [1589763056.025336000]: [registerSubscriber] Failed to contact master at [mjmbpr
.local:11311].  Retrying...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There were some error messages, but that didn't matter. I tried &lt;code&gt;rostopic list&lt;/code&gt; and got:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Fatal Python error: PyThreadState_Get: no current thread
Abort trap: 6
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It seemed like &lt;a href="https://answers.ros.org/question/198432/python-segfaults-on-os-x10-with-indigo-and-brew/"&gt;I was using the brew-installed Python when I should have used the system-installed one&lt;/a&gt;. For example, the command below worked:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/usr/bin/python /Users/mjyc/ros_catkin_ws/install_isolated/bin/rostopic list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Urgh, I didn't change my Python settings because doing so could mess up my other Python projects (I don't use virtualenv, conda, pyenv, etc.). I also noticed that adding &lt;code&gt;source ~/ros_catkin_ws/devel/setup.bash&lt;/code&gt; to &lt;code&gt;~/.bash_profile&lt;/code&gt; slowed down opening every new terminal window, which was super annoying. To make these problems go away, I added the following bash script to &lt;code&gt;~/.bash_profile&lt;/code&gt;, which sources &lt;code&gt;setup.bash&lt;/code&gt; on demand and creates aliases for basic ROS commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt; &lt;span class="nt"&gt;-t&lt;/span&gt; __init_ros&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;declare&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="nv"&gt;__ros_commands&lt;/span&gt;&lt;span class="o"&gt;=(&lt;/span&gt;&lt;span class="s1"&gt;'roscd'&lt;/span&gt; &lt;span class="s1"&gt;'roscore'&lt;/span&gt; &lt;span class="s1"&gt;'rosrun'&lt;/span&gt; &lt;span class="s1"&gt;'catkin_make'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
  &lt;span class="nb"&gt;declare&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="nv"&gt;__ros_commandspy&lt;/span&gt;&lt;span class="o"&gt;=(&lt;/span&gt;&lt;span class="s1"&gt;'rostopic'&lt;/span&gt; &lt;span class="s1"&gt;'rosmsg'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;function &lt;/span&gt;__init_ros&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;for &lt;/span&gt;i &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;__ros_commands&lt;/span&gt;&lt;span class="p"&gt;[@]&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do &lt;/span&gt;&lt;span class="nb"&gt;unalias&lt;/span&gt; &lt;span class="nv"&gt;$i&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;done
    for &lt;/span&gt;i &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;__ros_commandspy&lt;/span&gt;&lt;span class="p"&gt;[@]&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do &lt;/span&gt;&lt;span class="nb"&gt;alias&lt;/span&gt; &lt;span class="nv"&gt;$i&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'/usr/bin/python /Users/mjyc/ros_catkin_ws/install_isolated/bin/'&lt;/span&gt;&lt;span class="nv"&gt;$i&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;done
    &lt;/span&gt;&lt;span class="nb"&gt;source&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/ros_catkin_ws/install_isolated/setup.bash
    &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/catkin_ws/devel/setup.bash &lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;source&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/catkin_ws/devel/setup.bash
    &lt;span class="nb"&gt;unset &lt;/span&gt;__ros_commands
    &lt;span class="nb"&gt;unset &lt;/span&gt;__ros_commandspy
    &lt;span class="nb"&gt;unset&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; __init_ros
  &lt;span class="o"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;for &lt;/span&gt;i &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;__ros_commands&lt;/span&gt;&lt;span class="p"&gt;[@]&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do &lt;/span&gt;&lt;span class="nb"&gt;alias&lt;/span&gt; &lt;span class="nv"&gt;$i&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'__init_ros &amp;amp;&amp;amp; '&lt;/span&gt;&lt;span class="nv"&gt;$i&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;done
  for &lt;/span&gt;i &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;__ros_commandspy&lt;/span&gt;&lt;span class="p"&gt;[@]&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do &lt;/span&gt;&lt;span class="nb"&gt;alias&lt;/span&gt; &lt;span class="nv"&gt;$i&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'__init_ros &amp;amp;&amp;amp; /usr/bin/python /Users/mjyc/ros_catkin_ws/install_isolated/bin/'&lt;/span&gt;&lt;span class="nv"&gt;$i&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;done
fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now the commands below run without errors, and there is no more delay when opening a new terminal!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rostopic pub -r 1 /chatter std_msgs/String hello
rostopic echo /chatter
rostopic type /chatter | rosmsg show
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Adding More ROS Packages
&lt;/h3&gt;

&lt;p&gt;My goal in installing ROS on macOS was to use &lt;code&gt;ros_cozmo&lt;/code&gt;, which depends on &lt;code&gt;actionlib&lt;/code&gt; and its dependent packages. So I added them by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mv lunar-ros_comm_plus.rosinstall lunar-ros_comm_plus.rosinstall.old
rosinstall_generator ros_comm actionlib --rosdistro lunar --deps --tar &amp;gt; lunar-ros_comm_plus.rosinstall
rm src/.rosinstall
wstool merge -t src lunar-ros_comm_plus.rosinstall
wstool update -t src
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and rebuilt the workspace. It built without any errors.&lt;/p&gt;

&lt;p&gt;Then I became interested in teleoperating Cozmo via a web app with &lt;a href="http://wiki.ros.org/roslibjs/Tutorials/BasicRosFunctionality"&gt;roslib&lt;/a&gt;. I updated the &lt;code&gt;.rosinstall&lt;/code&gt; file via:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rosinstall_generator ros_comm actionlib --rosdistro lunar --deps --tar &amp;gt; lunar-ros_comm_plus.rosinstall
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and tried rebuilding the workspace. This time I faced error messages like&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
/Users/mjyc/ros_catkin_ws/src/rosauth/src/ros_mac_authentication.cpp:15:10: fatal error:

      'openssl/sha.h' file not found
#include &amp;lt;openssl/sha.h&amp;gt;
         ^~~~~~~~~~~~~~~
1 error generated.
make[2]: *** [CMakeFiles/ros_mac_authentication.dir/src/ros_mac_authentication.cpp.o] Er
ror 1
make[1]: *** [CMakeFiles/ros_mac_authentication.dir/all] Error 2
make: *** [all] Error 2
&amp;lt;== Failed to process package 'rosauth':
  Command '['/Users/mjyc/ros_catkin_ws/install_isolated/env.sh', 'make', '-j8', '-l8']'
returned non-zero exit status 2

...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There were some suggestions on the internet for solving this problem, but at this point, I felt like installing packages this way is not a scalable approach.&lt;/p&gt;

&lt;h3&gt;
  
  
  Afterthoughts
&lt;/h3&gt;

&lt;p&gt;I felt like what I tried wasn't the right approach for using ROS on a Mac in 2020. I guess there are&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="http://wiki.ros.org/docker"&gt;ROS docker&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://rds.theconstructsim.com/"&gt;ROS development studio&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;ROS on cloud like &lt;a href="https://aws.amazon.com/robomaker"&gt;AWS robomaker&lt;/a&gt;, &lt;a href="https://www.rapyuta-robotics.com/"&gt;Rapyuta Robotics&lt;/a&gt;,
and &lt;a href="https://rds.theconstructsim.com/"&gt;ROS development studio&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;and I think ROS Docker and ROS Development Studio are reasonable, but they aren't amazing. Maybe I just need to wait for &lt;a href="https://index.ros.org/doc/ros2/"&gt;ROS 2&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>robotics</category>
      <category>osx</category>
      <category>ros</category>
      <category>opensource</category>
    </item>
    <item>
      <title>git-issue: an offline-friendly project management tool with potential</title>
      <dc:creator>Mike Chung</dc:creator>
      <pubDate>Sat, 04 Apr 2020 19:38:31 +0000</pubDate>
      <link>https://dev.to/mjyc/git-issue-an-offline-friendly-project-management-tool-with-potential-4hmi</link>
      <guid>https://dev.to/mjyc/git-issue-an-offline-friendly-project-management-tool-with-potential-4hmi</guid>
      <description>&lt;h4&gt;
  
  
  Background
&lt;/h4&gt;

&lt;p&gt;Before COVID-19, I used to review issues on the bus to work. My team used GitLab, so without a stable internet connection on the bus, I wasn't able to concentrate. I started looking for an offline-friendly issue management tool and narrowed the contenders down to (i) &lt;a href="https://github.com/dspinellis/git-issue"&gt;git-issue&lt;/a&gt; and (ii) &lt;a href="https://github.com/MichaelMure/git-bug"&gt;git-bug&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I decided to give &lt;a href="https://github.com/dspinellis/git-issue"&gt;git-issue&lt;/a&gt; a shot because I liked its simplicity. My first impression was that git-issue is basically an issue manager powered by git. It provides command-line tools for basic issue management, like creating, editing, removing, and listing issues and comments, and it also has additional features like logging time estimates/spent and setting milestones. It is written in shell script.&lt;/p&gt;

&lt;h4&gt;
  
  
  The Pros and Cons List
&lt;/h4&gt;

&lt;p&gt;After two weeks of using it, here is my pros and cons list.&lt;/p&gt;

&lt;p&gt;Pros:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;lightweight&lt;/li&gt;
&lt;li&gt;based on git&lt;/li&gt;
&lt;li&gt;supports interacting with github/gitlab&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;github/gitlab integration is buggy&lt;/li&gt;
&lt;li&gt;written in shell script so hard to read &amp;amp; debug the src code&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  My Honest Opinion
&lt;/h4&gt;

&lt;p&gt;My initial motivation was doing project management when I don't have access to the internet, e.g., on a bus. I couldn't replace my existing workflow--I used to use &lt;a href="https://about.gitlab.com/stages-devops-lifecycle/issueboard/"&gt;GitLab's issue board&lt;/a&gt;--because there were &lt;a href="https://docs.gitlab.com/ee/user/project/issues/"&gt;tons of project management features GitLab provided&lt;/a&gt; that I didn't realize I was dependent on. For example, I wanted&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;customizable board view&lt;/li&gt;
&lt;li&gt;markdown preview&lt;/li&gt;
&lt;li&gt;issue number auto-completion in the editing view&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Also, I realized that (i) connecting to the internet on a bus isn't that hard via tethering, and (ii) there are no severe consequences to editing issues concurrently (as one would with a cloud tool). For code, (ii) is not true, since concurrent edits will likely break your program.&lt;/p&gt;

&lt;p&gt;I ended up mostly using git-issue when I know exactly what to do, e.g., commenting on a specific issue. While I still think it has a lot of potential, for now it has not replaced my go-to tool: the GitLab issues/board pages in the browser.&lt;/p&gt;

&lt;h4&gt;
  
  
  Misc.
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;git-issue &lt;a href="https://gist.github.com/mjyc/b33ea80309161328716e59f665dc595f"&gt;helper scripts for fuzzy searching issues (and editing)&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>programming</category>
      <category>git</category>
      <category>productivity</category>
      <category>offlinefirst</category>
    </item>
    <item>
      <title>My failed attempt to set up an old MacbookPro as a deep learning workstation</title>
      <dc:creator>Mike Chung</dc:creator>
      <pubDate>Sat, 21 Mar 2020 05:36:42 +0000</pubDate>
      <link>https://dev.to/mjyc/my-failed-attempt-to-set-up-an-old-macbookpro-as-a-deep-learning-workstation-4278</link>
      <guid>https://dev.to/mjyc/my-failed-attempt-to-set-up-an-old-macbookpro-as-a-deep-learning-workstation-4278</guid>
      <description>&lt;p&gt;I recently got a new laptop but didn't want to say goodbye to my &lt;a href="https://everymac.com/systems/apple/macbook_pro/specs/macbook-pro-core-i7-2.6-15-dual-graphics-late-2013-retina-display-specs.html"&gt;2014 MacBookPro (10,1)&lt;/a&gt; yet. Then I remembered it has an NVIDIA graphics card, so I thought maybe I'll use it for training toy deep learning models. I never train a deep learning model before.&lt;/p&gt;

&lt;p&gt;My goal was to train a &lt;a href="https://github.com/NVIDIA/tacotron2"&gt;tacotron2&lt;/a&gt; model. Tacotron is Google's deep learning-based speech synthesis system. I was not able to set up my old laptop to train a tacotron2 model, but I found a different way to achieve my goal, i.e., using &lt;a href="https://colab.research.google.com/"&gt;Google Colab&lt;/a&gt;. Still, I'm sharing my journey in case someone can learn from the mistakes I made.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;I installed Ubuntu 18.04. I searched for related documentation from &lt;a href="https://help.ubuntu.com/community/MacBookPro"&gt;help.ubuntu.com&lt;/a&gt; and &lt;a href="https://wiki.ubuntu.com/MactelSupportTeam/CommunityHelpPages"&gt;wiki.ubuntu.com&lt;/a&gt;, but they seemed outdated, so I just googled "install ubuntu 18.04 MacBookPro" and found a &lt;a href="https://medium.com/@vincentedwardcastro/installing-ubuntu-18-04-01-lts-on-late-2013-mac-book-pro-61d20e5e6230"&gt;Medium post&lt;/a&gt;. Following the instructions in the post just worked. I also installed an NVIDIA driver via &lt;a href="https://itsfoss.com/install-additional-drivers-ubuntu/"&gt;Software Settings&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/NVIDIA/tacotron2#setup"&gt;Training tacotron2 requires PyTorch 1.0&lt;/a&gt;. Although &lt;a href="https://pytorch.org/get-started/locally/"&gt;it seemed like I could install PyTorch via pip&lt;/a&gt;, I wanted to try installing it from source to gain better control over it. So I decided to install CUDA &amp;amp; cuDNN myself. It seemed like CUDA 10.1 and cuDNN 7.6.x were the latest PyTorch-compatible versions of CUDA and cuDNN as of March 2020.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;NVIDIA's instructions for installing &lt;a href="https://developer.nvidia.com/cuda-10.1-download-archive-base?target_os=Linux&amp;amp;target_arch=x86_64&amp;amp;target_distro=Ubuntu&amp;amp;target_version=1804&amp;amp;target_type=debnetwork"&gt;CUDA&lt;/a&gt; and &lt;a href="https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#installlinux-deb"&gt;cuDNN&lt;/a&gt; weren't as straightforward as I hoped they would be. After running the main CUDA install command &lt;code&gt;sudo apt-get install cuda-10-1&lt;/code&gt; (not &lt;code&gt;sudo apt-get install cuda-10&lt;/code&gt;, because I wanted to control the CUDA version), I saw errors like:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
  found 'diversion of /usr/lib/x86_64-linux-gnu/libGL.so.1 to /usr/lib/x86_64-linux-gnu/libGL.so.1.distrib by nvidia-340'
...
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;I found &lt;a href="https://askubuntu.com/questions/1035409/installing-nvidia-drivers-on-18-04"&gt;a solution on Ask Ubuntu&lt;/a&gt; and shamelessly applied it without understanding how the commands fixed the problem. I suspected the error was due to installing an NVIDIA driver in step 1 but never confirmed it. I also did not understand why the solution worked, but in the interest of time, I marched forward. The cuDNN installation was smooth.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Now it was time to build PyTorch from source. Although I probably should have used conda, I just used &lt;code&gt;pip3&lt;/code&gt; in the interest of time and started following &lt;a href="https://github.com/pytorch/pytorch#from-source"&gt;the instructions&lt;/a&gt;. After running the grand &lt;code&gt;python setup.py install&lt;/code&gt;, I got stuck:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
    /home/mjyc/.local/lib/python3.6/site-packages/torch/cuda/__init__.py:134: UserWarning:
        Found GPU0 GeForce GT 750M which is of cuda capability 3.0.
        PyTorch no longer supports this GPU because it is too old.
        The minimum cuda capability that we support is 3.5.
...
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;It turns out the MacBookPro 10,1's GPU, a GeForce GT 750M, was &lt;a href="https://discuss.pytorch.org/t/pytorch-no-longer-supports-this-gpu-because-it-is-too-old/13803"&gt;too old for PyTorch 1.4&lt;/a&gt; (&lt;a href="https://discuss.pytorch.org/t/pytorch-no-longer-supports-this-gpu-because-it-is-too-old/13803/11"&gt;the latest compatible PyTorch version seems to be 0.3.1&lt;/a&gt;); a quick way to check this up front is sketched right after this list. My first reaction to this error message was to just buy an external GPU (eGPU). However, &lt;a href="https://egpu.io/best-egpu-buyers-guide/"&gt;quick Google search results&lt;/a&gt; showed that the eGPU case alone costs ~$300.00! And I learned that just choosing which GPU to buy &lt;a href="https://towardsdatascience.com/maximize-your-gpu-dollars-a9133f4e546a"&gt;requires some research work&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;At this point, I realized I had spent much more time and effort than I originally budgeted, so I gave up on turning my old MacBookPro into a workstation and started looking for an alternative solution. I read &lt;a href="https://www.reddit.com/r/deeplearning/comments/96pftg/is_nvidia_gtx_1050_good_enough_for_deep_learning/"&gt;a Reddit thread suggesting a cloud solution&lt;/a&gt; and &lt;a href="https://towardsdatascience.com/maximize-your-gpu-dollars-a9133f4e546a"&gt;looked into which service is a good starting point&lt;/a&gt;. It seems like Google Colab is a good place to start, so I stopped my exploration here.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
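&lt;p&gt;In hindsight, a quick sanity check like the sketch below (mine, not from the tacotron2 instructions; it assumes a CUDA-enabled PyTorch build is already installed, e.g., via pip) would have told me up front whether the GPU clears PyTorch's minimum compute capability before I sank time into a from-source build.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import torch

if not torch.cuda.is_available():
    # Either there is no NVIDIA GPU or this PyTorch build has no CUDA support.
    print("No usable CUDA device detected by this PyTorch build.")
else:
    name = torch.cuda.get_device_name(0)
    major, minor = torch.cuda.get_device_capability(0)
    print(f"{name}: compute capability {major}.{minor}")
    # The error above says PyTorch's minimum is compute capability 3.5,
    # which is how my GeForce GT 750M (3.0) got rejected.
    if (major, minor) &amp;lt; (3, 5):
        print("This GPU is likely too old for recent PyTorch releases.")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;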

&lt;p&gt;To me, the lesson learned from this journey was to &lt;em&gt;always focus on the end goal instead of the means&lt;/em&gt;. Given that my goal was to train a deep learning model, the means of achieving it--a laptop workstation or the cloud--should not have mattered.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>linux</category>
      <category>osx</category>
      <category>gpu</category>
    </item>
    <item>
      <title>Understanding challenges with large robotics system development</title>
      <dc:creator>Mike Chung</dc:creator>
      <pubDate>Sat, 07 Mar 2020 23:41:21 +0000</pubDate>
      <link>https://dev.to/mjyc/understanding-challenges-with-large-robotics-system-development-35b7</link>
      <guid>https://dev.to/mjyc/understanding-challenges-with-large-robotics-system-development-35b7</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally posted on &lt;a href="https://gitlab.com/mjyc/robosysdev-notes/-/blob/master/post.md"&gt;GitLab&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Robotics system development is hard. To understand causes for the robotics system development challenges, I interviewed a few robotics engineers who have been involved in large robotics projects and identified the following themes.&lt;/p&gt;

&lt;h2&gt;
  
  
There aren't many performant off-the-shelf tools
&lt;/h2&gt;

&lt;p&gt;As the field of robotics has not matured yet, it is not easy to find performant libraries for perception, manipulation, or human-robot interaction that fit your needs. Much of the existing off-the-shelf code is research code and hence requires expert knowledge, e.g., a user needs to see through undocumented assumptions and limitations. Essentially, identifying whether a library will be useful for your problem is an art in itself.&lt;/p&gt;

&lt;h2&gt;
  
  
There aren't many generalist robotics systems engineers
&lt;/h2&gt;

&lt;p&gt;Although more robotics educational materials are becoming available, there are not many engineers who can design and implement large robotics systems. Many robotics engineers focus on one subfield of robotics engineering, such as computer vision or control, but do not have much experience working with the whole system. On the other hand, good systems engineers often lack robotics knowledge and treat robotics libraries as black boxes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Gathering system requirements or software specifications is not trivial
&lt;/h2&gt;

&lt;p&gt;A robotic system that interacts with the physical world is complicated, and the consequences of using such a system in the real world are hard to predict. This makes gathering system requirements or software specifications challenging. Therefore, system specifications are often underspecified, which yields brittle or over-prepared systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Maintenance and testing are challenging
&lt;/h2&gt;

&lt;p&gt;Existing dev-ops tools are often unfit for robotics system development purposes. For example, robotics data collection, analysis, and visualization are different from those of web services. Testing is especially challenging since setting up a real-world testing environment is not trivial, e.g., a clean "reset" of a real robot testing environment is near impossible or time-consuming. Also, the simulators that are supposed to help with testing often do not serve their purpose because of the gap between simulation and reality.&lt;/p&gt;

&lt;p&gt;Although the list above is based on a small number of interviews and my personal experience, I hope it can be used as a starting point for brainstorming solutions. Please let me know if you see missing themes or have any comments!&lt;/p&gt;

&lt;h4&gt;
  
  
  Miscellaneous
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Sep, 2021. &lt;a href="https://www.csc.gov.sg/articles/how-to-build-good-software"&gt;"How to Build Good Software"&lt;/a&gt; - "Why Bad Software Happens to Good People" section felt relevant.&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Apr, 2021. Found more related papers!&lt;/em&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;&lt;a href="https://arxiv.org/ftp/arxiv/papers/2010/2010.14537.pdf"&gt;"State of the Practice and Guidelines for ROS-based System"&lt;/a&gt;&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;&lt;a href="https://arxiv.org/pdf/2004.07368.pdf"&gt;"A Study on the Challenges of Using Robotics Simulators for Testing"&lt;/a&gt;&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;&lt;em&gt;While I was writing this post, I learned about this excellent paper &lt;a href="https://github.com/S2-group/icse-seip-2020-replication-package/blob/master/ICSE_SEIP_2020.pdf"&gt;"State of the Practice and Guidelines for ROS-based System"&lt;/a&gt; and discussions about the paper in &lt;a href="https://discourse.ros.org/t/guidelines-on-how-to-architect-ros-based-systems/12641"&gt;the ROS Discourse&lt;/a&gt;. The paper is focused on &lt;a href="https://www.ros.org/"&gt;ROS&lt;/a&gt; yet the high-level goals of it seem similar.&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;The study notes this article is based on are available in &lt;a href="https://github.com/mjyc/robosysdev-notes"&gt;github&lt;/a&gt; and &lt;a href="https://gitlab.com/mjyc/robosysdev-notes"&gt;gitlab&lt;/a&gt; repos&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Thank you! to all those who participated in my interview studies&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>robotics</category>
      <category>devops</category>
      <category>testing</category>
      <category>developer</category>
    </item>
    <item>
      <title>Please help me build a cloud visual SLAM system for cellphones</title>
      <dc:creator>Mike Chung</dc:creator>
      <pubDate>Mon, 10 Jun 2019 00:25:16 +0000</pubDate>
      <link>https://dev.to/mjyc/please-help-me-building-a-cloud-visual-slam-system-for-cellphones-ine</link>
      <guid>https://dev.to/mjyc/please-help-me-building-a-cloud-visual-slam-system-for-cellphones-ine</guid>
      <description>&lt;p&gt;Hello hackers, tinkers, webdevs, sysdevs, roboticists, and all coders! I've been excited about &lt;a href="https://en.wikipedia.org/wiki/Cloud_robotics"&gt;cloud robotics&lt;/a&gt;, a field of robotics that utilizes the power of cloud computing, and want to share the excitement with you and suggest a project we can potentially work together. The project that I'm thinking of is "cellphone visual SLAMing". The idea is to run a visual SLAM system on cloud so mobile devices like a cellphone can build 3D maps by simply uploading camera data to the cloud.&lt;/p&gt;

&lt;p&gt;Here are the steps I'm thinking of:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Try creating a 3D map using &lt;a href="https://github.com/raulmur/ORB_SLAM2"&gt;ORB_SLAM2&lt;/a&gt; and desktop camera images.
The main goal of this step is to get comfortable with a visual SLAM library and feel out the limitations.&lt;/li&gt;
&lt;li&gt;Try creating 3D maps using ORB_SLAM2 running on a desktop and cellphone camera images.
ORB_SLAM2 supports &lt;a href="https://www.ros.org/"&gt;ROS&lt;/a&gt;. So one can easily capture device camera images using &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia"&gt;HTML5's &lt;code&gt;MediaDevices.getUserMedia()&lt;/code&gt;&lt;/a&gt;, turn them into ROS image messages, and publish them using &lt;a href="https://github.com/RobotWebTools/roslibjs"&gt;roslibjs&lt;/a&gt; so ORB_SLAM2 can use the images collected from a remote device (a small ROS-side check for this step is sketched after this list).&lt;/li&gt;
&lt;li&gt;Run ORB_SLAM2 in the cloud.
I have not tried it, but it seems like it is fairly easy to &lt;a href="https://docs.docker.com/samples/library/ros/"&gt;containerize a ROS package and deploy it in the cloud&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;
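&lt;p&gt;For step 2, before pointing ORB_SLAM2 at the stream, it helps to confirm that the phone's images are actually arriving on the ROS side. Here is a minimal sketch of such a check (my own, not part of ORB_SLAM2; the topic name &lt;code&gt;/camera/image_raw&lt;/code&gt; is an assumption and should match whatever topic roslibjs publishes to):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import rospy
from sensor_msgs.msg import Image


def on_image(msg):
    # Log the size, encoding, and timestamp of each incoming frame.
    rospy.loginfo("got %dx%d image (%s) at t=%s",
                  msg.width, msg.height, msg.encoding, msg.header.stamp)


if __name__ == "__main__":
    rospy.init_node("camera_stream_checker")
    rospy.Subscriber("/camera/image_raw", Image, on_image)
    rospy.spin()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once frames show up here, the same topic can be remapped to whatever image topic the ORB_SLAM2 ROS node expects.&lt;/p&gt;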

&lt;p&gt;That's it! Are you interested in trying this idea out? Do you have experience with visual SLAM and suggestions? Let me know--I'd love to hear your thoughts.&lt;/p&gt;



&lt;h3&gt;
  
  
  Updates
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;2021/01/02&lt;/em&gt; I have moved on as I don't get to spend time on tinkering but still think this is a fun project to try one day.&lt;br&gt;
&lt;em&gt;2020/11/23&lt;/em&gt; &lt;a href="https://fyusion.com/"&gt;Fyusion&lt;/a&gt; and &lt;a href="https://canvas.io/"&gt;CANVAS&lt;/a&gt; seem to provide products with related technologies.&lt;br&gt;
&lt;em&gt;2020/05/02&lt;/em&gt; It seems like &lt;a href="//github.com/izhengfan/se2lam"&gt;se2lam&lt;/a&gt; could be used instead of ORB_SLAM2.&lt;/p&gt;

</description>
      <category>robotics</category>
      <category>discuss</category>
      <category>help</category>
      <category>docker</category>
    </item>
    <item>
      <title>Implementing a finite state machine in Cycle.js</title>
      <dc:creator>Mike Chung</dc:creator>
      <pubDate>Fri, 09 Nov 2018 06:24:07 +0000</pubDate>
      <link>https://dev.to/mjyc/implementing-a-finite-state-machine-in-cyclejs-1e63</link>
      <guid>https://dev.to/mjyc/implementing-a-finite-state-machine-in-cyclejs-1e63</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://medium.com/@chungjy9/implementing-a-finite-state-machine-in-cycle-js-c498b6cfb231"&gt;Medium&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note:&lt;/strong&gt; Check out other posts on programming a social robot using Cycle.js too:&lt;/em&gt;&lt;br&gt;
&lt;em&gt;1. &lt;a href="https://dev.to/mjyc/programming-a-social-robot-using-cyclejs-23jl"&gt;Programming a social robot using Cycle.js&lt;/a&gt;&lt;/em&gt;&lt;br&gt;
&lt;em&gt;2. &lt;a href="https://dev.to/mjyc/implementing-a-finite-state-machine-in-cyclejs-1e63"&gt;Implementing a finite state machine in Cycle.js&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In this post, I'll show you how to implement a reactive social robot program as a &lt;a href="https://en.wikipedia.org/wiki/Finite-state_machine"&gt;finite state machine&lt;/a&gt;. We'll continue from where we left off in the previous post &lt;a href="https://dev.to/mjyc/programming-a-social-robot-using-cyclejs-23jl"&gt;Programming a social robot using Cycle.js&lt;/a&gt;--so check it out if you haven't already! If you are in a hurry, here is the &lt;a href="https://stackblitz.com/edit/cycle-robot-drivers-tutorials-02-fsm"&gt;demo and complete code&lt;/a&gt; of what we are building in this post.&lt;/p&gt;

&lt;h2&gt;
  
  
  Making existing "travel personality quiz" program more complex
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://dev.to/mjyc/programming-a-social-robot-using-cyclejs-23jl"&gt;Previously&lt;/a&gt;, we programmed a &lt;a href="https://github.com/mjyc/tablet-robot-face"&gt;tablet-face robot&lt;/a&gt; to test your travel personality. Concretely, we implemented a tablet-face robot program that&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;looks at a person when it sees one and&lt;/li&gt;
&lt;li&gt;asks travel personality quiz questions as shown in &lt;a href="http://www.nomadwallet.com/afford-travel-quiz-personality/"&gt;this flowchart&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;as a &lt;a href="https://cycle.js.org/"&gt;Cycle.js&lt;/a&gt; application. Here are the &lt;a href="https://stackblitz.com/edit/cycle-robot-drivers-tutorials-01-personality-quiz"&gt;demo&lt;/a&gt; at Stackbliz and &lt;a href="https://github.com/mjyc/cycle-robot-drivers/tree/master/examples/tutorials/01_personality_quiz"&gt;complete code&lt;/a&gt; in GitHub from the previous post.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IMPORTANT!!&lt;/strong&gt; The main package we use in the demo and in this post, &lt;a href="https://github.com/mjyc/cycle-robot-drivers/tree/master/run"&gt;cycle-robot-drivers/run&lt;/a&gt;, only works on Chrome browsers  (&amp;gt;= 65.0.3325.181) for now.&lt;/p&gt;

&lt;p&gt;Now, what if we want the robot to&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;look at a person only when the robot is waiting for a person's response,&lt;/li&gt;
&lt;li&gt;stop asking a question if the robot cannot see a person and resume asking the question if it sees a person again, and&lt;/li&gt;
&lt;li&gt;stop asking questions completely if a person abandons the robot, i.e., the robot does not see a person for more than 10 seconds.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;How difficult would it be to update the existing program to have these additional behaviors? Try implementing the new behaviors on top of the &lt;a href="https://github.com/mjyc/cycle-robot-drivers/tree/master/examples/tutorials/01_personality_quiz/index.js"&gt;travel personality quiz program&lt;/a&gt;.&lt;br&gt;
What kind of challenges do you face?&lt;/p&gt;

&lt;p&gt;From my experience, it was difficult to implement, or even just express the "stateful" behaviors in reactive programming. For example, to implement 1., I needed to know whether the robot is in the "waiting for a person's response" state but it wasn't clear how to represent such state in a scalable manner; I tried keeping all states in drivers (e.g., &lt;code&gt;SpeechRecognitionAction&lt;/code&gt; emitting &lt;code&gt;status&lt;/code&gt; events), as proxies (e.g., &lt;code&gt;$lastQuestion&lt;/code&gt; in &lt;a href="https://github.com/mjyc/cycle-robot-drivers/tree/master/examples/tutorials/01_personality_quiz/index.js#L58"&gt;the previous code&lt;/a&gt;), or in higher-order streams, but none of them felt simple nor scalable. This was very concerning since &lt;a href="http://wiki.ros.org/smach/Tutorials/Getting%20Started#Why_learn_Smach.3F"&gt;many&lt;/a&gt; &lt;a href="https://www.researchgate.net/figure/A-behavioral-state-machine-for-robot-soccer_fig10_238086654"&gt;robot&lt;/a&gt; &lt;a href="https://www.youtube.com/watch?v=4XEK7OU2gIw"&gt;behaviors&lt;/a&gt; are expressed and implemented as stateful behaviors.&lt;/p&gt;

&lt;p&gt;To address this problem, I propose using finite state machines to clearly express the desired robot behaviors. In the following, I first present a pattern for implementing a finite state machine in a reactive programming framework (Cycle.js) without sacrificing maintainability. Then I demonstrate a use case of the FSM pattern by implementing the first additional behavior.&lt;/p&gt;
&lt;h2&gt;
  
  
  What is a finite state machine?
&lt;/h2&gt;

&lt;p&gt;A &lt;a href="https://en.wikipedia.org/wiki/Finite-state_machine"&gt;finite state machine (FSM)&lt;/a&gt; is a computational model that can be used to represent and control execution flow. Due to their simplicity, FSMs have been frequently used by &lt;a href="http://wiki.ros.org/smach"&gt;roboticists&lt;/a&gt;, &lt;a href="https://sketch.systems/"&gt;UI developers&lt;/a&gt; and many others for a &lt;a href="https://www.mtholyoke.edu/courses/pdobosh/cs100/handouts/genghis.pdf"&gt;long&lt;/a&gt; &lt;a href="http://www.inf.ed.ac.uk/teaching/courses/seoc/2005_2006/resources/statecharts.pdf"&gt;time&lt;/a&gt;. An FSM we are using in this post is comprised of five parts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A set of states, e.g., &lt;code&gt;'SAY_SENTENCE'&lt;/code&gt;, &lt;code&gt;'WAIT_FOR_RESPONSE'&lt;/code&gt;, etc.&lt;/li&gt;
&lt;li&gt;A set of variables, e.g., &lt;code&gt;currentSentence = 'Can you see yourself working online?'&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;A set of inputs: e.g., &lt;code&gt;VALID_RESPONSE&lt;/code&gt;, &lt;code&gt;INVALID_RESPONSE&lt;/code&gt;, etc.&lt;/li&gt;
&lt;li&gt;A set of outputs: e.g., &lt;code&gt;speechSynthesisAction = 'Can you see yourself working online?'&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;A transition function that takes a state, variable, and input and returns a state, variable, and output.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you are familiar with FSMs, the FSM we are using is a &lt;a href="https://en.wikipedia.org/wiki/Mealy_machine"&gt;Mealy machine&lt;/a&gt; extended with "variables".&lt;br&gt;
Like a Mealy machine, it has the following constraints:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the state set is a &lt;a href="https://en.wikipedia.org/wiki/Finite_set"&gt;finite set&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;the FSM can only be in one state at a time in the state set&lt;/li&gt;
&lt;li&gt;the transition function is deterministic; given a state, variable, and input the function always returns the same new state, new variable, and new output.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Representing the "travel personality quiz" program as an FSM
&lt;/h2&gt;

&lt;p&gt;We'll start from representing the &lt;a href="https://github.com/mjyc/cycle-robot-drivers/tree/master/examples/tutorials/01_personality_quiz/index.js"&gt;"travel personality test" program&lt;/a&gt; we implemented in the previous post as an FSM:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_WLs5m9d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/et73xk1bvd20kbyrt69c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_WLs5m9d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/et73xk1bvd20kbyrt69c.png" alt="travel_personality_quiz_fsm" width="356" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here we have three states, &lt;code&gt;PEND&lt;/code&gt;, &lt;code&gt;SAY&lt;/code&gt;, &lt;code&gt;LISTEN&lt;/code&gt;, and five input types, &lt;code&gt;START&lt;/code&gt;, &lt;code&gt;SAY_DONE&lt;/code&gt;, &lt;code&gt;VALID_RESPONSE&lt;/code&gt;, &lt;code&gt;INVALID_RESPONSE&lt;/code&gt;, and &lt;code&gt;DETECTED_FACE&lt;/code&gt;. We omitted variables associated with each state and outputs associated with each transition for visual clarity.&lt;/p&gt;

&lt;p&gt;Notice that we use verbs as state names (as a popular robotics FSM library &lt;a href="http://wiki.ros.org/smach"&gt;SMACH&lt;/a&gt; does). This is because we define the states based on distinct actions each state is performing, where the distinct actions are triggered by outputs emitted from transitions. You may have wondered why we did not create each state in the &lt;a href="http://www.nomadwallet.com/afford-travel-quiz-personality/"&gt;travel quiz flowchart&lt;/a&gt; as an individual state, e.g., &lt;code&gt;ASK_CAREER_QUESTION&lt;/code&gt;, &lt;code&gt;ASK_WORKING_ABROAD_QUESTION&lt;/code&gt;, &lt;code&gt;ASK_FAMILY_QUESTION&lt;/code&gt;, etc. This is because representing the states that behave the same except for the sentence the robot says as a single &lt;code&gt;SAY&lt;/code&gt; state with a variable &lt;code&gt;currentSentence&lt;/code&gt; (not shown in the diagram) yields a simpler, more maintainable FSM.&lt;/p&gt;

&lt;p&gt;The inputs can be considered the events that could occur in each state; they originate from actions, e.g., &lt;code&gt;SAY_DONE&lt;/code&gt;, sensors, e.g., &lt;code&gt;DETECTED_FACE&lt;/code&gt;, or external systems, e.g., &lt;code&gt;START&lt;/code&gt;. We represent an input as a type-value pair. For example, the &lt;code&gt;VALID_RESPONSE&lt;/code&gt; type input is paired with a value of "yes" or "no", which is used to determine the transition from &lt;code&gt;LISTEN&lt;/code&gt; to &lt;code&gt;SAY&lt;/code&gt; (input values are not shown in the graph).&lt;/p&gt;

&lt;p&gt;Now, let's update the FSM to express the first additional behavior mentioned above: looking at a person only when the robot is waiting for a person's response.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MNnu4g2X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/7z09n6fz5z1s8zjlb9ii.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MNnu4g2X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/7z09n6fz5z1s8zjlb9ii.png" alt="travel_personality_quiz_fsm_updated" width="356" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All we did here is remove the two self-loop transitions from the &lt;code&gt;PEND&lt;/code&gt; and &lt;code&gt;SAY&lt;/code&gt; states to stop the robot from looking at a person while the FSM is in those states.&lt;/p&gt;
&lt;h2&gt;
  
  
  Implementing the "travel personality test" FSM using Cycle.js
&lt;/h2&gt;

&lt;p&gt;Let's now implement the "travel personality test" FSM we defined above using Cycle.js.&lt;/p&gt;

&lt;p&gt;First, we'll define the FSM in JavaScript as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;State&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;PEND&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;PEND&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;SAY&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SAY&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;//_SENTENCE&lt;/span&gt;
  &lt;span class="na"&gt;LISTEN&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;LISTEN&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;//_FOR_RESPONSE&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;InputType&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;START&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`START`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;SAY_DONE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`SAY_DONE`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="c1"&gt;// QUIZ_DONE: is not an input type but a transition&lt;/span&gt;
  &lt;span class="na"&gt;VALID_RESPONSE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`VALID_RESPONSE`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;INVALID_RESPONSE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`INVALID_RESPONSE`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;DETECTED_FACE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`DETECTED_FACE`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;transition&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;variables&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;input&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;  &lt;span class="c1"&gt;// a dummy transition function&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;newState&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;newVariables&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;variables&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;newOutputs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;newState&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;variables&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;newVariables&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;outputs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;newOutputs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="cm"&gt;/**
 * // Example state, variables, input, and outputs
 * const state = State.PEND;
 * const variables = {
 *   sentence: 'You are a vacationer!',
 * };
 * const input = {
 *   type: InputType.START,
 *   value: null,
 * };
 * const outputs = {
 *   SpeechSynthesisAction: {
 *     goal: 'You are a vacationer!'
 *   },
 *   SpeechRecognitionAction: {
 *     goal: {}
 *   },
 *   TabletFace: {
 *     goal: {
 *       type: 'SET_STATE',
 *       value: {
 *         leftEye: {x: 0.5, y: 0.5},
 *         rightEye: {x: 0.5, y: 0.5},
 *       },
 *     },
 *   },
 * }
 */&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we define the set of states &lt;code&gt;State&lt;/code&gt;, the set of input types &lt;code&gt;InputType&lt;/code&gt;, and the transition function &lt;code&gt;transition&lt;/code&gt;. The sets of possible variables and outputs of the FSM are not explicitly defined, but the comment above shows example values they can take.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting up FSM in Cycle.js
&lt;/h3&gt;

&lt;p&gt;We'll now set up the FSM as a Cycle.js application. You can fork &lt;a href="https://stackblitz.com/edit/cycle-robot-drivers-tutorials-02-fsm"&gt;the Stackblitz demo code&lt;/a&gt; and start coding there, or set up a Cycle.js application locally.&lt;br&gt;
For the latter, create a folder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir my-second-robot-program
cd my-second-robot-program
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Download &lt;a href="https://github.com/mjyc/cycle-robot-drivers/tree/master/examples/tutorials/02_fsm/package.json"&gt;&lt;code&gt;package.json&lt;/code&gt;&lt;/a&gt;, &lt;a href="https://github.com/mjyc/cycle-robot-drivers/tree/master/examples/tutorials/02_fsm/.babelrc"&gt;&lt;code&gt;.babelrc&lt;/code&gt;&lt;/a&gt;, and &lt;a href="https://github.com/mjyc/cycle-robot-drivers/tree/master/examples/tutorials/02_fsm/index.html"&gt;&lt;code&gt;index.html&lt;/code&gt;&lt;/a&gt;, create an empty &lt;code&gt;index.js&lt;/code&gt; file in the folder, and run &lt;code&gt;npm install&lt;/code&gt; to install the required npm packages. After installing, you can run &lt;code&gt;npm start&lt;/code&gt; to build and start the web application, which does nothing at this point.&lt;/p&gt;
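
&lt;p&gt;In other words, once the three files and the empty &lt;code&gt;index.js&lt;/code&gt; are in place, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install
npm start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;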

&lt;p&gt;Now add the following code in &lt;code&gt;index.js&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;xs&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;xstream&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;runRobotProgram&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@cycle-robot-drivers/run&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;State&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;InputType&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;transition&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;variables&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;input&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;  &lt;span class="c1"&gt;// a dummy transition function&lt;/span&gt;
&lt;span class="c1"&gt;// ...&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;input&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;  &lt;span class="c1"&gt;// a dummy input function&lt;/span&gt;
  &lt;span class="nx"&gt;start$&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;speechRecognitionActionResult$&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;speechSynthesisActionResult$&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;poses$&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;xs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;never&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;output&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;machine$&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;  &lt;span class="c1"&gt;// a dummy output function&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;SpeechSynthesisAction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;xs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;never&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="na"&gt;SpeechRecognitionAction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;xs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;never&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="na"&gt;TabletFace&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;xs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;never&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;input$&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;input&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;TabletFace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;load&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SpeechSynthesisAction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SpeechRecognitionAction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;PoseDetection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;poses&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;defaultMachine&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;State&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;PEND&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;variables&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;sentence&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;outputs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;machine$&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;input$&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fold&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;machine&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;input&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;transition&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nx"&gt;machine&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;machine&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;variables&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;input&lt;/span&gt;
  &lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="nx"&gt;defaultMachine&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sinks&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;output&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;machine$&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;sinks&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nf"&gt;runRobotProgram&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;main&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you run the application, it should load a robot face in your browser that still does nothing.&lt;/p&gt;

&lt;p&gt;The most important thing to notice here is that we divide the &lt;code&gt;main&lt;/code&gt; function into three functions: &lt;code&gt;input&lt;/code&gt;, &lt;code&gt;transition&lt;/code&gt;, and &lt;code&gt;output&lt;/code&gt;. The &lt;code&gt;input&lt;/code&gt; function takes incoming streams in &lt;code&gt;sources&lt;/code&gt; and returns a stream that emits the FSM's input values. We then use the &lt;a href="https://github.com/staltz/xstream#fold"&gt;&lt;code&gt;fold&lt;/code&gt;&lt;/a&gt; xstream operator on the returned stream (&lt;code&gt;input$&lt;/code&gt;) to trigger the FSM's &lt;code&gt;transition&lt;/code&gt; function. Note that the &lt;code&gt;fold&lt;/code&gt; operator is like &lt;code&gt;Array.prototype.reduce&lt;/code&gt; for streams; it takes&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;an accumulator function, which takes a previous output of the accumulator function (e.g., the latest FSM status, &lt;code&gt;machine&lt;/code&gt;) or the seed value along with an emitted value (e.g., an FSM input value, &lt;code&gt;input&lt;/code&gt;), and&lt;/li&gt;
&lt;li&gt;a seed value, i.e., the initial output of the accumulator function (e.g., the initial FSM status, &lt;code&gt;defaultMachine&lt;/code&gt;), as illustrated in the sketch below.&lt;/li&gt;
&lt;/ol&gt;
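
&lt;p&gt;If &lt;code&gt;fold&lt;/code&gt; is new to you, here is a tiny sketch (independent of the robot code) of its reduce-like behavior; it first emits the seed and then one accumulated value per emission of the source stream:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;import xs from 'xstream';

// counts the emissions of a periodic stream; logs 0, then 1, 2, 3, ...
const count$ = xs.periodic(1000).fold((acc, i) =&amp;gt; acc + 1, 0);
count$.addListener({next: value =&amp;gt; console.log(value)});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;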

&lt;p&gt;Finally, the &lt;code&gt;output&lt;/code&gt; function takes the stream that emits the FSM status (&lt;code&gt;machine$&lt;/code&gt;) and returns outgoing streams.&lt;/p&gt;

&lt;h3&gt;
  
  
  Input, transition, and output
&lt;/h3&gt;

&lt;p&gt;Let's implement the three functions.&lt;br&gt;
First, update the dummy &lt;code&gt;input&lt;/code&gt; function to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;Response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;YES&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;yes&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;NO&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;no&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;input&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nx"&gt;start$&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;speechRecognitionActionResult$&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;speechSynthesisActionResult$&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;poses$&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;xs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;merge&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nx"&gt;start$&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mapTo&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;InputType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;START&lt;/span&gt;&lt;span class="p"&gt;}),&lt;/span&gt;
    &lt;span class="nx"&gt;speechRecognitionActionResult$&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
        &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SUCCEEDED&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
        &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;YES&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NO&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;InputType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;VALID_RESPONSE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;})),&lt;/span&gt;
    &lt;span class="nx"&gt;speechSynthesisActionResult$&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SUCCEEDED&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mapTo&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;InputType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SAY_DONE&lt;/span&gt;&lt;span class="p"&gt;}),&lt;/span&gt;
    &lt;span class="nx"&gt;speechRecognitionActionResult$&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
        &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SUCCEEDED&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
        &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;YES&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NO&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;mapTo&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;InputType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;INVALID_RESPONSE&lt;/span&gt;&lt;span class="p"&gt;}),&lt;/span&gt;
    &lt;span class="nx"&gt;poses$&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;poses&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
        &lt;span class="nx"&gt;poses&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
        &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;poses&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;keypoints&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;kpt&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;kpt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;part&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;nose&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
      &lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;poses&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;nose&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;poses&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;keypoints&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;kpt&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;kpt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;part&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;nose&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;InputType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DETECTED_FACE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;x&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;nose&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;position&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;x&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;640&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;// max value of position.x is 640&lt;/span&gt;
            &lt;span class="na"&gt;y&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;nose&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;position&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;y&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;480&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;// max value of position.y is 480&lt;/span&gt;
          &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;};&lt;/span&gt;
      &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Try testing whether the &lt;code&gt;input&lt;/code&gt; function behaves properly. For example, you can attach a listener to the returned &lt;code&gt;input$&lt;/code&gt; stream with xstream's &lt;a href="https://github.com/staltz/xstream#addListener"&gt;&lt;code&gt;addListener&lt;/code&gt;&lt;/a&gt; and return some outgoing streams from the &lt;code&gt;output&lt;/code&gt; function.&lt;br&gt;
Like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;delay&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;xstream/extra/delay&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;output&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;machine$&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;SpeechSynthesisAction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;xs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;of&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Hello world!&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;compose&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;delay&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;
    &lt;span class="na"&gt;SpeechRecognitionAction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;xs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;of&lt;/span&gt;&lt;span class="p"&gt;({}).&lt;/span&gt;&lt;span class="nf"&gt;compose&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;delay&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;
    &lt;span class="na"&gt;TabletFace&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;xs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;never&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;input$&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;input&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;TabletFace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;load&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SpeechSynthesisAction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SpeechRecognitionAction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;PoseDetection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;poses&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;input$&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addListener&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="na"&gt;next&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;input&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;)})&lt;/span&gt;
&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Do you see the expected outputs in your browser console? You should see many inputs with the &lt;code&gt;DETECTED_FACE&lt;/code&gt; type when the robot is detecting a person.&lt;/p&gt;

&lt;p&gt;Let's now remove the dummy &lt;code&gt;transition&lt;/code&gt; function and create a new one:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;State&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;InputType&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="c1"&gt;// // Remove the dummy transition function&lt;/span&gt;
&lt;span class="c1"&gt;// function transition(state, variables, input) {  // a dummy transition function&lt;/span&gt;
&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;Response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;input&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
&lt;span class="c1"&gt;// ...&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;createTransition&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;Sentence&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;CAREER&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Is it important that you reach your full career potential?&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;ONLINE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Can you see yourself working online?&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;FAMILY&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Do you have to be near my family/friends/pets?&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;TRIPS&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Do you think short trips are awesome?&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;HOME&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Do you want to have a home and nice things?&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;ROUTINE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Do you think a routine gives your life structure?&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;JOB&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Do you need a secure job and a stable income?&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;VACATIONER&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;You are a vacationer!&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;EXPAT&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;You are an expat!&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;NOMAD&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;You are a nomad!&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;flowchart&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Sentence&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;CAREER&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;YES&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="nx"&gt;Sentence&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ONLINE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NO&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="nx"&gt;Sentence&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;FAMILY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Sentence&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ONLINE&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;YES&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="nx"&gt;Sentence&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NOMAD&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NO&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="nx"&gt;Sentence&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;VACATIONER&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Sentence&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;FAMILY&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;YES&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="nx"&gt;Sentence&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;VACATIONER&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NO&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="nx"&gt;Sentence&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;TRIPS&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Sentence&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;TRIPS&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;YES&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="nx"&gt;Sentence&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;VACATIONER&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NO&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="nx"&gt;Sentence&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;HOME&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Sentence&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;HOME&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;YES&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="nx"&gt;Sentence&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;EXPAT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NO&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="nx"&gt;Sentence&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ROUTINE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Sentence&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ROUTINE&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;YES&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="nx"&gt;Sentence&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;EXPAT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NO&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="nx"&gt;Sentence&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;JOB&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Sentence&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;JOB&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;YES&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="nx"&gt;Sentence&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ONLINE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NO&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="nx"&gt;Sentence&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NOMAD&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="c1"&gt;// this transitionTable is a dictionary of dictionaries and returns a function&lt;/span&gt;
  &lt;span class="c1"&gt;//   that takes previous "variables" and "inputValue" and returns a current&lt;/span&gt;
  &lt;span class="c1"&gt;//   FSM status; {state, variable, outputs}&lt;/span&gt;
  &lt;span class="c1"&gt;// this transitionTable is a dictionary of dictionaries and returns a function&lt;/span&gt;
  &lt;span class="c1"&gt;//   that takes previous "variables" and "inputValue" and returns a current&lt;/span&gt;
  &lt;span class="c1"&gt;//   FSM status; {state, variable, outputs}&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;transitionTable&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;State&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;PEND&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;InputType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;START&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;prevVariables&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;prevInputValue&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;State&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SAY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;variables&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;sentence&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Sentence&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;CAREER&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="na"&gt;outputs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;SpeechSynthesisAction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;goal&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Sentence&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;CAREER&lt;/span&gt;&lt;span class="p"&gt;}},&lt;/span&gt;
      &lt;span class="p"&gt;}),&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;State&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SAY&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;InputType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SAY_DONE&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;prevVariables&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;prevInputValue&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
          &lt;span class="nx"&gt;prevVariables&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sentence&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="nx"&gt;Sentence&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;VACATIONER&lt;/span&gt;
          &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;prevVariables&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sentence&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="nx"&gt;Sentence&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;EXPAT&lt;/span&gt;
          &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;prevVariables&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sentence&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="nx"&gt;Sentence&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NOMAD&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;  &lt;span class="c1"&gt;// SAY_DONE&lt;/span&gt;
          &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;State&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;LISTEN&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;variables&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;prevVariables&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;outputs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;SpeechRecognitionAction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;goal&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{}}},&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;  &lt;span class="c1"&gt;// QUIZ_DONE&lt;/span&gt;
          &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;State&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;PEND&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;variables&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;prevVariables&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;outputs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;done&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;State&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;LISTEN&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;InputType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;VALID_RESPONSE&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;prevVariables&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;prevInputValue&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;State&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SAY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;variables&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;sentence&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;flowchart&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;prevVariables&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sentence&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="nx"&gt;prevInputValue&lt;/span&gt;&lt;span class="p"&gt;]},&lt;/span&gt;
        &lt;span class="na"&gt;outputs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="na"&gt;SpeechSynthesisAction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;goal&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;flowchart&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;prevVariables&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sentence&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="nx"&gt;prevInputValue&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
          &lt;span class="p"&gt;},&lt;/span&gt;
          &lt;span class="na"&gt;TabletFace&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;goal&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SET_STATE&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
              &lt;span class="na"&gt;leftEye&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;x&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;y&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
              &lt;span class="na"&gt;rightEye&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;x&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;y&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="p"&gt;},&lt;/span&gt;
          &lt;span class="p"&gt;}},&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;}),&lt;/span&gt;
      &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;InputType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;INVALID_RESPONSE&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;prevVariables&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;prevInputValue&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;State&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;LISTEN&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;variables&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;prevVariables&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;outputs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;SpeechRecognitionAction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;goal&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{}}},&lt;/span&gt;
      &lt;span class="p"&gt;}),&lt;/span&gt;
      &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;InputType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DETECTED_FACE&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;prevVariables&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;prevInputValue&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;State&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;LISTEN&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;variables&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;prevVariables&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;outputs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="na"&gt;TabletFace&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;goal&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SET_STATE&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
              &lt;span class="na"&gt;leftEye&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;prevInputValue&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
              &lt;span class="na"&gt;rightEye&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;prevInputValue&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;},&lt;/span&gt;
          &lt;span class="p"&gt;}},&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;}),&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;prevState&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;prevVariables&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;prevInput&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;prevState&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;prevVariables&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;prevInput&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="c1"&gt;// excuse me for abusing ternary&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;transitionTable&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;prevState&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
      &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;prevState&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;variables&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;prevVariables&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;outputs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;transitionTable&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;prevState&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="nx"&gt;prevInput&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;type&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;prevState&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;variables&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;prevVariables&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;outputs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;transitionTable&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;prevState&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="nx"&gt;prevInput&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;type&lt;/span&gt;&lt;span class="p"&gt;](&lt;/span&gt;&lt;span class="nx"&gt;prevVariables&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;prevInput&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;transition&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createTransition&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;output&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;machine$&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;  &lt;span class="c1"&gt;// a dummy output function&lt;/span&gt;
&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we define and return the FSM's transition function inside the &lt;code&gt;createTransition&lt;/code&gt; function.&lt;/p&gt;

&lt;p&gt;Finally, update the dummy &lt;code&gt;output&lt;/code&gt; function to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;transition&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createTransition&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;output&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;machine$&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;outputs$&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;machine$&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;machine&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;!!&lt;/span&gt;&lt;span class="nx"&gt;machine&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;outputs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;machine&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;machine&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;outputs&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;SpeechSynthesisAction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;outputs$&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;outputs&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;!!&lt;/span&gt;&lt;span class="nx"&gt;outputs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SpeechSynthesisAction&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;output&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;output&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SpeechSynthesisAction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;goal&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="na"&gt;SpeechRecognitionAction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;outputs$&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;outputs&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;!!&lt;/span&gt;&lt;span class="nx"&gt;outputs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SpeechRecognitionAction&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;output&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;output&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SpeechRecognitionAction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;goal&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="na"&gt;TabletFace&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;outputs$&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;outputs&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;!!&lt;/span&gt;&lt;span class="nx"&gt;outputs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;TabletFace&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;output&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;output&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;TabletFace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;goal&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Try running the application and testing whether it behaves as defined in the FSM.&lt;/p&gt;

&lt;p&gt;You just implemented a social robot program as an FSM!&lt;/p&gt;

&lt;h4&gt;
  
  
  Relation to the Model-View-Intent pattern
&lt;/h4&gt;

&lt;p&gt;The FSM pattern is an application of the &lt;a href="https://cycle.js.org/model-view-intent.html"&gt;Model-View-Intent (MVI) pattern&lt;/a&gt;, an &lt;a href="https://cycle.js.org/model-view-intent.html#model-view-intent-what-mvc-is-really-about"&gt;adaptation of Model-View-Controller in reactive programming&lt;/a&gt;, where "intent" is &lt;code&gt;input&lt;/code&gt;, "model" is &lt;code&gt;FSM status&lt;/code&gt;, and "view" is &lt;code&gt;output&lt;/code&gt;. On top of the MVI pattern, the FSM pattern additionally requires a specific structure for the "model"/&lt;code&gt;FSM status&lt;/code&gt; and the "update"/&lt;code&gt;transition&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Updating the "travel personality quiz" FSM
&lt;/h2&gt;

&lt;p&gt;The true power of the FSM pattern is its maintainability. The crux of the pattern is dividing the &lt;code&gt;main&lt;/code&gt; function into three functions with separate concerns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the &lt;code&gt;input&lt;/code&gt; function, which turns incoming streams into "inputs" that the FSM can work with,&lt;/li&gt;
&lt;li&gt;the &lt;code&gt;transition&lt;/code&gt; function, which implements the FSM's transition function, and&lt;/li&gt;
&lt;li&gt;the &lt;code&gt;output&lt;/code&gt; function, which maps the outputs returned from &lt;code&gt;transition&lt;/code&gt; into the outgoing streams (&lt;code&gt;sinks&lt;/code&gt; in Cycle.js) that produce side effects, e.g., trigger actions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This separation allows programmers to update only the relevant functions, typically &lt;code&gt;input&lt;/code&gt; and &lt;code&gt;transition&lt;/code&gt;, when they need to make the program more complex.&lt;/p&gt;
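
&lt;p&gt;As a rough sketch (not the exact code from this post), the wiring inside &lt;code&gt;main&lt;/code&gt; looks something like the following; &lt;code&gt;initialState&lt;/code&gt; is a placeholder for whichever state your FSM starts in, and the exact arguments of &lt;code&gt;input&lt;/code&gt; depend on your drivers:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// a minimal sketch of the FSM pattern's wiring inside "main"
function main(sources) {
  // "input": turn incoming streams into FSM inputs
  const input$ = input(sources);
  // the FSM status is a {state, variables, outputs} object
  const defaultMachine = {state: initialState, variables: {}, outputs: null};
  // "transition": fold each input into the next FSM status
  const machine$ = input$.fold(function (machine, prevInput) {
    return transition(machine.state, machine.variables, prevInput);
  }, defaultMachine);
  // "output": map FSM outputs to outgoing streams (sinks)
  return output(machine$);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;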

&lt;p&gt;For example, if we were to implement the rest of the additional behaviors mentioned in the &lt;a href="https://dev.tomaking-travel-personality-quiz-program-more-complex"&gt;Making "travel personality quiz" program more complex&lt;/a&gt; section, we would first need to update the FSM to reflect the new desired behavior, e.g.:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--om_me9ag--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/st5hzvob4hq22pbrsb78.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--om_me9ag--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/st5hzvob4hq22pbrsb78.png" alt="travel_personality_quiz_fsm_final" width="800" height="309"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;and update the &lt;code&gt;input&lt;/code&gt; and &lt;code&gt;transition&lt;/code&gt; functions accordingly. Check out the &lt;a href="https://stackblitz.com/edit/cycle-robot-drivers-tutorials-02-fsm"&gt;complete code&lt;/a&gt; to see how I updated the &lt;code&gt;input&lt;/code&gt; and &lt;code&gt;transition&lt;/code&gt; functions to implement the remaining additional behaviors.&lt;/p&gt;

&lt;p&gt;The biggest challenge in using an FSM is defining the FSM itself. If you are using the FSM pattern and running into problems, double-check the current definition of your state machine. For example, look for redundant states or input types that make updating the transition function cumbersome (merge them into one state with variables), or for a state or input type that is not being used as intended (add the necessary new states or input types). Another point to check is whether your FSM follows the reactive programming approach, e.g., whether the three functions (&lt;code&gt;input&lt;/code&gt;, &lt;code&gt;transition&lt;/code&gt;, &lt;code&gt;output&lt;/code&gt;) are as pure as possible. Defining an effective FSM is an art, but I believe using FSMs in reactive programming greatly helps programmers organize their programs.&lt;/p&gt;
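
&lt;p&gt;For example, here is an illustrative before/after of merging near-duplicate states into one state plus a variable (the state names here are hypothetical, not from the code above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// before: one state per question makes the transition table repetitive
//   State.ASK_CAREER, State.ASK_ONLINE, State.ASK_FAMILY, ...
// after: a single ASK state that carries the current question in "variables"
const machine = {
  state: State.ASK,
  variables: {question: 'Can you see yourself working online?'},
  outputs: {SpeechSynthesisAction: {goal: 'Can you see yourself working online?'}},
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;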

&lt;p&gt;Thank you for reading! I hope I got you interested in using FSMs in Cycle.js. Let me know if something isn’t clear, and I’d be happy to chat.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;My name is Mike Chung. I'm a &lt;a href="https://homes.cs.washington.edu/~mjyc/"&gt;graduate student&lt;/a&gt; interested in the field of human-robot interaction and machine learning. You can reach me on &lt;a href="https://twitter.com/mjycio"&gt;Twitter&lt;/a&gt; and on &lt;a href="https://github.com/mjyc"&gt;GitHub&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>cyclejs</category>
      <category>reactive</category>
      <category>robotics</category>
    </item>
    <item>
      <title>Programming a social robot using Cycle.js</title>
      <dc:creator>Mike Chung</dc:creator>
      <pubDate>Tue, 06 Nov 2018 20:23:59 +0000</pubDate>
      <link>https://dev.to/mjyc/programming-a-social-robot-using-cyclejs-23jl</link>
      <guid>https://dev.to/mjyc/programming-a-social-robot-using-cyclejs-23jl</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://medium.com/@chungjy9/programming-a-social-robot-using-cycle-js-95f30a0128ce" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note:&lt;/strong&gt; Check out other posts on programming a social robot using Cycle.js too:&lt;/em&gt; &lt;br&gt;
&lt;em&gt;1. &lt;a href="https://dev.to/mjyc/programming-a-social-robot-using-cyclejs-23jl"&gt;Programming a social robot using Cycle.js&lt;/a&gt;&lt;/em&gt;&lt;br&gt;
&lt;em&gt;2. &lt;a href="https://dev.to/mjyc/implementing-a-finite-state-machine-in-cyclejs-1e63"&gt;Implementing a finite state machine in Cycle.js&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In this post, I'll show you how to program a social robot using &lt;a href="https://cycle.js.org/" rel="noopener noreferrer"&gt;Cycle.js&lt;/a&gt;. I assume you are familiar with reactive programming. If you are not, check out &lt;a href="https://gist.github.com/staltz/868e7e9bc2a7b8c1f754" rel="noopener noreferrer"&gt;The introduction to Reactive Programming you've been missing&lt;/a&gt;. If you are eager to get your hands dirty, jump to the Implementing "travel personality test" section.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a social robot?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Social_robot" rel="noopener noreferrer"&gt;Wikipedia&lt;/a&gt; introduces it as:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A social robot is an autonomous robot that interacts and communicates with humans or other autonomous physical agents by following social behaviors and rules attached to its role.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=402dquhxSTQC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=cynthia+breazeal&amp;amp;ots=oAToxSv8Cf&amp;amp;sig=KAnbgcrcT56kMQVSFobJho7WN8E#v=onepage&amp;amp;q&amp;amp;f=false" rel="noopener noreferrer"&gt;Cynthia Breazel&lt;/a&gt;, the mother of social robots, once said:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In short, a sociable robot is socially intelligent in a human-like way, and interacting with it is like interacting with another person. At the pinnacle of achievement, they could befriend us, as we could them.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I see social robots as embodied agents whose main task is to communicate with and help humans. So, interactive robots for &lt;a href="http://robotic.media.mit.edu/portfolio/storytelling-companion/" rel="noopener noreferrer"&gt;education&lt;/a&gt; or &lt;a href="http://www.cataliahealth.com/" rel="noopener noreferrer"&gt;eldercare&lt;/a&gt; fit my definition the best.&lt;/p&gt;

&lt;p&gt;Programming social robots is similar to programming web applications. In both cases, programmers write code for handling inputs, e.g., a button click or a sensor reading, and for outputting data accordingly, e.g., displaying information on screen or sending control signals to motors. The major difference is that programming social robots involves working with multi-modal inputs and outputs, e.g., speech and motion, to interact with humans instead of solely using a screen interface.&lt;/p&gt;

&lt;p&gt;In this post, I'll use a &lt;a href="https://github.com/mjyc/tablet-robot-face" rel="noopener noreferrer"&gt;tablet-face robot&lt;/a&gt; for demonstration purposes. The tablet-face robot is just a web application running on a tablet, but we'll make it speak, listen, and see you to make it more like a "social robot".&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Cycle.js?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="http://cycle.js.org" rel="noopener noreferrer"&gt;Cycle.js&lt;/a&gt; is a functional and reactive JavaScript framework. It is an abstraction that separates all &lt;a href="https://en.wikipedia.org/wiki/Side_effect_(computer_science)" rel="noopener noreferrer"&gt;side effect&lt;/a&gt; producing code into &lt;a href="https://cycle.js.org/drivers.html" rel="noopener noreferrer"&gt;drivers&lt;/a&gt; so the core application logic code remains &lt;a href="https://en.wikipedia.org/wiki/Pure_function" rel="noopener noreferrer"&gt;pure&lt;/a&gt; in one "main" function. The author of Cycle.js describes a web application as a &lt;a href="https://cycle.js.org/dialogue.html#dialogue-abstraction" rel="noopener noreferrer"&gt;dialogue between a human and a computer&lt;/a&gt;. If we assume both are functions, the human as &lt;code&gt;y = driver(x)&lt;/code&gt; and the computer as &lt;code&gt;x = main(y)&lt;/code&gt; where &lt;code&gt;x&lt;/code&gt; and &lt;code&gt;y&lt;/code&gt; are streams in the context of &lt;a href="https://cycle.js.org/streams.html#streams-reactive-programming" rel="noopener noreferrer"&gt;reactive programming&lt;/a&gt;, then the dialogue is simply two functions that react to each other via their input stream, which is an output of the another function.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Cycle.js for social robots?
&lt;/h2&gt;

&lt;p&gt;To me, Cycle.js essentially enforces functional reactive programming, e.g., using streams, and &lt;a href="http://wiki.c2.com/?PortsAndAdaptersArchitecture" rel="noopener noreferrer"&gt;ports and adapters architecture&lt;/a&gt;, e.g., separating side effects, to make it easy to create and understand complex and concurrent interactive programs--beyond web applications. This is why I chose Cycle.js for programming a social robot. I believe the patterns enforced by Cycle.js will help programmers battle the concurrency problems that originate from supporting multi-modal interactions and stay in control as the complexity of the desired robot behavior grows. In fact, you don't need to use Cycle.js if you can enforce the patterns yourself. For example, you could use &lt;a href="https://wiki.haskell.org/Yampa/reactimate" rel="noopener noreferrer"&gt;Yampa with reactimate&lt;/a&gt;, &lt;a href="http://www.flapjax-lang.org/" rel="noopener noreferrer"&gt;Flapjax&lt;/a&gt;, or one of the &lt;a href="http://reactivex.io/" rel="noopener noreferrer"&gt;ReactiveX&lt;/a&gt; stream libraries to do this in a language in which your robot's API is available.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementing "travel personality test"
&lt;/h2&gt;

&lt;p&gt;Enough background; we'll now create a robot program that tests your travel personality. Specifically, we'll make the robot&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;look at you while you are interacting with the robot and&lt;/li&gt;
&lt;li&gt;ask questions as shown in &lt;a href="http://www.nomadwallet.com/afford-travel-quiz-personality/" rel="noopener noreferrer"&gt;this flowchart&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you are curious, check out &lt;a href="https://stackblitz.com/edit/cycle-robot-drivers-tutorials-01-personality-quiz" rel="noopener noreferrer"&gt;the complete code and the demo&lt;/a&gt; at Stackblitz.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IMPORTANT!!&lt;/strong&gt; For now, the &lt;a href="https://github.com/mjyc/cycle-robot-drivers/blob/master/run" rel="noopener noreferrer"&gt;cycle-robot-drivers/run&lt;/a&gt; package we use in this post and in the Stackblitz demo only works on Chrome browsers (&amp;gt;= 65.0.3325.181).&lt;/p&gt;

&lt;p&gt;The code examples in this post assume you are familiar with &lt;a href="https://medium.freecodecamp.org/write-less-do-more-with-javascript-es6-5fd4a8e50ee2" rel="noopener noreferrer"&gt;JavaScript ES6&lt;/a&gt;. To build code, I use &lt;a href="http://browserify.org/" rel="noopener noreferrer"&gt;browserify&lt;/a&gt; and &lt;a href="https://babeljs.io/" rel="noopener noreferrer"&gt;Babel&lt;/a&gt; here, but feel free to use a build tool and a transpiler you prefer. If you are not familiar with them, just fork &lt;a href="https://stackblitz.com/edit/cycle-robot-drivers-tutorials-01-personality-quiz" rel="noopener noreferrer"&gt;the Stackblitz demo code&lt;/a&gt; and start coding!&lt;/p&gt;

&lt;p&gt;Let's set up a Cycle.js application. Create a folder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir my-robot-program
cd my-robot-program
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then download &lt;a href="https://github.com/mjyc/cycle-robot-drivers/blob/master/examples/tutorials/01_personality_quiz/package.json" rel="noopener noreferrer"&gt;&lt;code&gt;package.json&lt;/code&gt;&lt;/a&gt;, &lt;a href="https://github.com/mjyc/cycle-robot-drivers/blob/master/examples/tutorials/01_personality_quiz/.babelrc" rel="noopener noreferrer"&gt;&lt;code&gt;.babelrc&lt;/code&gt;&lt;/a&gt;, &lt;a href="https://github.com/mjyc/cycle-robot-drivers/blob/master/examples/tutorials/01_personality_quiz/index.html" rel="noopener noreferrer"&gt;&lt;code&gt;index.html&lt;/code&gt;&lt;/a&gt; and create an empty &lt;code&gt;index.js&lt;/code&gt; file in the folder. Run &lt;code&gt;npm install&lt;/code&gt; to install the required npm packages. After installing, you can run &lt;code&gt;npm start&lt;/code&gt; to build and start the web application that does nothing.&lt;/p&gt;
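
&lt;p&gt;That is, from inside the folder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install
npm start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;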

&lt;p&gt;Now add the following code in &lt;code&gt;index.js&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;xs&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;xstream&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;runRobotProgram&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@cycle-robot-drivers/run&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nf"&gt;runRobotProgram&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;main&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then run this application, e.g., by running &lt;code&gt;npm start&lt;/code&gt;. It should load a robot face in your browser.&lt;/p&gt;

&lt;p&gt;We just successfully set up and ran a Cycle.js application!&lt;/p&gt;

&lt;h3&gt;
  
  
  Robot, look at a face!
&lt;/h3&gt;

&lt;p&gt;We'll now focus on implementing the first feature--looking at a face.&lt;/p&gt;

&lt;p&gt;Let's make the robot just move its eyes by adding the following code in &lt;code&gt;main&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ...&lt;/span&gt;

&lt;span class="c1"&gt;// "sources" is a Cycle.js term for the input of "main" / the output of "drivers"&lt;/span&gt;
&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// "const" (and "let") is a javascript ES6 feature&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sinks&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;TabletFace&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;xs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;periodic&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;x&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;// horizontal left or right&lt;/span&gt;
        &lt;span class="na"&gt;y&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.5&lt;/span&gt;  &lt;span class="c1"&gt;// vertical center&lt;/span&gt;
      &lt;span class="p"&gt;})).&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;position&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SET_STATE&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="na"&gt;leftEye&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;position&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;rightEye&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;position&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;}))&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="c1"&gt;// "sinks" is a Cycle.js term for the output of "main" / the input of "drivers"&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;sinks&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we are sending commands to the &lt;code&gt;TabletFace&lt;/code&gt; driver by returning the &lt;code&gt;sinks.TabletFace&lt;/code&gt; stream from &lt;code&gt;main&lt;/code&gt;. The &lt;a href="https://github.com/staltz/xstream#periodic" rel="noopener noreferrer"&gt;&lt;code&gt;periodic&lt;/code&gt;&lt;/a&gt; xstream factory creates a stream that emits an incrementing number every second, and the two &lt;a href="https://github.com/staltz/xstream#map" rel="noopener noreferrer"&gt;&lt;code&gt;map&lt;/code&gt;&lt;/a&gt; xstream operators create a new stream that turns the emitted numbers into positions and another new stream that turns those positions into control commands. If you run the updated application, the robot should look left and right repeatedly.&lt;/p&gt;
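
&lt;p&gt;As a rough marble-diagram view of that chain (the values shown are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// xs.periodic(1000):   0---------1---------2---------3--------- ...
// map to positions:    {x:0,y:0.5}---{x:1,y:0.5}---{x:0,y:0.5}--- ...
// map to commands:     {type: 'SET_STATE', value: {leftEye: {x:0,y:0.5}, rightEye: {x:0,y:0.5}}}--- ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;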

&lt;p&gt;Let's now work on detecting a face by adding more code in &lt;code&gt;main&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ...&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;PoseDetection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;poses&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addListener&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;next&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;poses&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;poses&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;poses&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we use the &lt;a href="https://github.com/staltz/xstream#addListener" rel="noopener noreferrer"&gt;addListener&lt;/a&gt; xstream operator to add a callback function to the &lt;code&gt;poses&lt;/code&gt; stream, the stream returned from the &lt;code&gt;PoseDetection&lt;/code&gt; driver, that prints the detected pose data.&lt;/p&gt;

&lt;p&gt;When you run the application, you should see arrays of objects printed to your browser's console. If you don't see them, make sure you are visible to the camera and being detected via the pose visualizer located below the robot face (try scrolling down). Each array represents the poses detected at the current moment and has the following format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;poses&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="c1"&gt;// the first detected person&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;score&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.32371445304906&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;keypoints&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;part&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;nose&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;position&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;x&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;253.36747741699&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;y&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;76.291801452637&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;score&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.99539834260941&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;part&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;leftEye&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;position&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;x&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;253.54365539551&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;y&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;71.10383605957&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;score&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.98781454563141&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="c1"&gt;// ...&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="c1"&gt;// the second detected person if there is one&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;score&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.22838506316132706&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;keypoints&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;part&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;nose&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;position&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;x&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;236.58547523373466&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;y&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;360.03672892252604&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;score&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.9979155659675598&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="c1"&gt;// ...&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While the application is running, try disappearing from the camera.&lt;br&gt;
You should see one less object in the &lt;code&gt;poses&lt;/code&gt; array. Also try hiding one of your ears by turning your head left or right. You should not see an object that has a string &lt;code&gt;nose&lt;/code&gt; for its &lt;code&gt;part&lt;/code&gt; field in the &lt;code&gt;keypoints&lt;/code&gt; array.&lt;/p&gt;

&lt;p&gt;Now that we know how to move the robot's eyes and retrieve detected face data, let's put them together to make the robot look at a face. Concretely, we'll make the robot's eyes follow a detected person's nose. Update &lt;code&gt;main&lt;/code&gt; as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ...&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sinks&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;TabletFace&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;PoseDetection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;poses&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;poses&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
        &lt;span class="c1"&gt;// must see one person&lt;/span&gt;
        &lt;span class="nx"&gt;poses&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
        &lt;span class="c1"&gt;// must see the nose&lt;/span&gt;
        &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;poses&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;keypoints&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;kpt&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;kpt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;part&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;nose&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
      &lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;poses&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;nose&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;poses&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;keypoints&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;kpt&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;kpt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;part&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;nose&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="na"&gt;x&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;nose&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;position&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;x&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;640&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;// max value of position.x is 640&lt;/span&gt;
          &lt;span class="na"&gt;y&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;nose&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;position&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;y&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;480&lt;/span&gt;  &lt;span class="c1"&gt;// max value of position.y is 480&lt;/span&gt;
        &lt;span class="p"&gt;};&lt;/span&gt;
      &lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;position&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SET_STATE&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="na"&gt;leftEye&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;position&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;rightEye&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;position&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;}))&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;sinks&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we are sending commands to the &lt;code&gt;TabletFace&lt;/code&gt; driver by using the stream created from the output stream of the &lt;code&gt;PoseDetection&lt;/code&gt; driver (&lt;code&gt;sources.PoseDetection.poses&lt;/code&gt;).&lt;br&gt;
To convert pose data into control commands, we use the &lt;a href="https://github.com/staltz/xstream#filter" rel="noopener noreferrer"&gt;&lt;code&gt;filter&lt;/code&gt;&lt;/a&gt; xstream operator to filter pose data to the ones containing only one person whose nose is visible. Then we use the &lt;a href="https://github.com/staltz/xstream#map" rel="noopener noreferrer"&gt;&lt;code&gt;map&lt;/code&gt;&lt;/a&gt; xstream operator twice to convert the detected nose positions into eye positions and turn the eye positions into control commands.&lt;/p&gt;
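
&lt;p&gt;As an illustrative walk-through of that chain (the numbers are made up):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// one detected person whose nose is at (320, 240) in the 640x480 camera frame
// passes the filter; the two maps then produce
//   {x: 320 / 640, y: 240 / 480}   // i.e., {x: 0.5, y: 0.5}
//   {type: 'SET_STATE', value: {leftEye: {x: 0.5, y: 0.5}, rightEye: {x: 0.5, y: 0.5}}}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;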

&lt;p&gt;We have made the robot look at a face!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Exercise ideas:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Make the robot look at one of &lt;a href="https://vignette.wikia.nocookie.net/juveniles-roleplay/images/e/e9/Louis4.gif/revision/latest?cb=20130825225246" rel="noopener noreferrer"&gt;your hands&lt;/a&gt; instead of your nose?&lt;/li&gt;
&lt;li&gt;Make the robot smile (&lt;a href="https://github.com/mjyc/cycle-robot-drivers/blob/master/screen" rel="noopener noreferrer"&gt;&lt;code&gt;happy&lt;/code&gt; expression&lt;/a&gt;) when you are looking away from the camera?&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  Taking a closer look at &lt;code&gt;runRobotProgram&lt;/code&gt;
&lt;/h4&gt;

&lt;p&gt;While following the code examples above, you may have wondered:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;when and where the &lt;code&gt;TabletFace&lt;/code&gt; driver is created, and&lt;/li&gt;
&lt;li&gt;how and when a driver produces side effects&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here is the answer to the first question: the two drivers we used in the example code, &lt;code&gt;TabletFace&lt;/code&gt; and &lt;code&gt;PoseDetection&lt;/code&gt;, are created in &lt;code&gt;runRobotProgram&lt;/code&gt;.&lt;br&gt;
Normally, when you program a Cycle.js app, you need to &lt;a href="https://cycle.js.org/getting-started.html#getting-started-coding-create-main-and-drivers" rel="noopener noreferrer"&gt;create drivers explicitly&lt;/a&gt; and pass them to the &lt;a href="https://cycle.js.org/api/run.html" rel="noopener noreferrer"&gt;Cycle.js &lt;code&gt;run&lt;/code&gt;&lt;/a&gt; function. We skipped this step because we used &lt;code&gt;runRobotProgram&lt;/code&gt;, which creates the required drivers for programming a tablet-face robot and calls Cycle.js &lt;code&gt;run&lt;/code&gt; for us. The &lt;code&gt;runRobotProgram&lt;/code&gt; function is &lt;a href="https://github.com/mjyc/cycle-robot-drivers/blob/master/run/src/index.tsx" rel="noopener noreferrer"&gt;a wrapper function for Cycle.js &lt;code&gt;run&lt;/code&gt;&lt;/a&gt; that&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;creates five drivers, &lt;code&gt;AudioPlayer&lt;/code&gt;, &lt;code&gt;SpeechSynthesis&lt;/code&gt;, &lt;code&gt;SpeechRecognition&lt;/code&gt;, &lt;code&gt;TabletFace&lt;/code&gt;, &lt;code&gt;PoseDetection&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;creates and sets up five action components &lt;code&gt;FacialExpressionAction&lt;/code&gt;, &lt;code&gt;AudioPlayerAction&lt;/code&gt;, &lt;code&gt;TwoSpeechbubblesAction&lt;/code&gt;, &lt;code&gt;SpeechSynthesisAction&lt;/code&gt;, &lt;code&gt;SpeechRecognitionAction&lt;/code&gt; to allow programmers to use them as drivers, and&lt;/li&gt;
&lt;li&gt;calls Cycle.js run with the created drivers and actions.&lt;/li&gt;
&lt;/ol&gt;



&lt;p&gt;In fact, if you are comfortable with Cycle.js, you could use Cycle.js &lt;code&gt;run&lt;/code&gt; instead of &lt;code&gt;runRobotProgram&lt;/code&gt; to have more control over drivers and actions. You could also create a new &lt;code&gt;runRobotProgram&lt;/code&gt; function that provides drivers for your own robot that is not a tablet-face robot!&lt;/p&gt;
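
&lt;p&gt;For reference, here is what the explicit setup looks like in a plain Cycle.js web app (using the standard &lt;code&gt;@cycle/dom&lt;/code&gt; driver as an example); &lt;code&gt;runRobotProgram&lt;/code&gt; does the equivalent with the robot drivers and action components listed above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;import {run} from '@cycle/run';
import {makeDOMDriver} from '@cycle/dom';

// create drivers explicitly and pass them to Cycle.js run; for a robot
// program you would pass robot drivers (e.g., TabletFace, PoseDetection)
// and action components here instead of a DOM driver
run(main, {
  DOM: makeDOMDriver('#app'),
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;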

&lt;p&gt;Regarding the second question, check out &lt;a href="https://cycle.js.org/drivers.html" rel="noopener noreferrer"&gt;this page&lt;/a&gt; from the Cycle.js website.&lt;/p&gt;
&lt;h3&gt;
  
  
  Robot, ask questions!
&lt;/h3&gt;

&lt;p&gt;We'll now focus on implementing the second feature--asking the travel personality quiz questions.&lt;/p&gt;

&lt;p&gt;First, we'll represent &lt;a href="http://www.nomadwallet.com/wp-content/uploads/2014/03/travel-quiz-flowchart.jpg" rel="noopener noreferrer"&gt;the quiz flowchart&lt;/a&gt; as a dictionary of dictionaries for convenience. Add the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;runRobotProgram&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@cycle-robot-drivers/run&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;Question&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;CAREER&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Is reaching your full career potential important to you?&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;ONLINE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Can you see yourself working online?&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;FAMILY&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Do you have to be near my family/friends/pets?&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;TRIPS&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Do you think short trips are awesome?&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;HOME&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Do you want to have a home and nice things?&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;ROUTINE&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Do you think a routine gives your life structure?&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;JOB&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Do you need a secure job and a stable income?&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;VACATIONER&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;You are a vacationer!&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;EXPAT&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;You are an expat!&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;NOMAD&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;You are a nomad!&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;Response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;YES&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;yes&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;NO&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;no&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;transitionTable&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Question&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;CAREER&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;YES&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="nx"&gt;Question&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ONLINE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NO&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="nx"&gt;Question&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;FAMILY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Question&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ONLINE&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;YES&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="nx"&gt;Question&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NOMAD&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NO&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="nx"&gt;Question&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;VACATIONER&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Question&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;FAMILY&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;YES&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="nx"&gt;Question&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;VACATIONER&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NO&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="nx"&gt;Question&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;TRIPS&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Question&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;TRIPS&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;YES&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="nx"&gt;Question&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;VACATIONER&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NO&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="nx"&gt;Question&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;HOME&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Question&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;HOME&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;YES&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="nx"&gt;Question&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;EXPAT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NO&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="nx"&gt;Question&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ROUTINE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Question&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ROUTINE&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;YES&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="nx"&gt;Question&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;EXPAT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NO&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="nx"&gt;Question&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;JOB&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Question&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;JOB&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;YES&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="nx"&gt;Question&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ONLINE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NO&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="nx"&gt;Question&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NOMAD&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that I modified the quiz questions to change all response choices to "yes" and "no".&lt;/p&gt;
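
&lt;p&gt;As a quick sanity check (a minimal sketch; &lt;code&gt;getNextQuestion&lt;/code&gt; is a hypothetical helper, not part of the tutorial code), the table lets us look up the next question from the current question and a response:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// getNextQuestion is a hypothetical helper, shown only to illustrate the table lookup
const getNextQuestion = (question, response) =&amp;gt; transitionTable[question][response];

console.log(getNextQuestion(Question.CAREER, Response.YES));  // Question.ONLINE
console.log(getNextQuestion(Question.CAREER, Response.NO));   // Question.FAMILY
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;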

&lt;p&gt;Let's now make the robot ask questions and take your verbal responses.&lt;br&gt;
First, we'll make the robot say just the first question on start, i.e., on loading the robot's face, and start listening after it finishes speaking:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SpeechRecognitionAction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addListener&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;next&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;result&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="c1"&gt;// ...&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sinks&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;TabletFace&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;PoseDetection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;poses&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;poses&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
      &lt;span class="c1"&gt;// ...&lt;/span&gt;
    &lt;span class="na"&gt;SpeechSynthesisAction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;TabletFace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;load&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mapTo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;Question&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;CAREER&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="na"&gt;SpeechRecognitionAction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SpeechSynthesisAction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mapTo&lt;/span&gt;&lt;span class="p"&gt;({})&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;sinks&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we are sending commands to the &lt;code&gt;SpeechSynthesisAction&lt;/code&gt; driver and the &lt;code&gt;SpeechRecognitionAction&lt;/code&gt; driver by returning the created streams via &lt;code&gt;sinks.SpeechSynthesisAction&lt;/code&gt; and &lt;code&gt;sinks.SpeechRecognitionAction&lt;/code&gt; from &lt;code&gt;main&lt;/code&gt;.&lt;br&gt;
The input stream for the &lt;code&gt;SpeechSynthesisAction&lt;/code&gt; driver emits &lt;code&gt;Question.CAREER&lt;/code&gt; when the tablet-face-loaded event is emitted on the &lt;code&gt;sources.TabletFace.load&lt;/code&gt; stream.&lt;br&gt;
The input stream for the &lt;code&gt;SpeechRecognitionAction&lt;/code&gt; driver emits an empty object (&lt;code&gt;{}&lt;/code&gt;) whenever a speech synthesis action finishes, i.e., whenever an event is emitted on the &lt;code&gt;sources.SpeechSynthesisAction.result&lt;/code&gt; stream.&lt;br&gt;
Both streams are created using the &lt;a href="https://github.com/staltz/xstream#mapTo" rel="noopener noreferrer"&gt;&lt;code&gt;mapTo&lt;/code&gt;&lt;/a&gt; xstream operator.&lt;br&gt;
We also print out events emitted on the &lt;code&gt;sources.SpeechRecognitionAction.result&lt;/code&gt; stream using the &lt;a href="https://github.com/staltz/xstream#addListener" rel="noopener noreferrer"&gt;&lt;code&gt;addListener&lt;/code&gt;&lt;/a&gt; xstream method.&lt;/p&gt;
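
&lt;p&gt;If &lt;code&gt;mapTo&lt;/code&gt; and &lt;code&gt;addListener&lt;/code&gt; are new to you, here is a minimal standalone sketch of what they do, with &lt;code&gt;xs.periodic&lt;/code&gt; standing in for the driver streams (for illustration only, not part of the application code):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;import xs from 'xstream';

// xs.periodic(1000) emits 0, 1, 2, ... every second;
// mapTo replaces every event with the same fixed value
const question$ = xs.periodic(1000)
  .mapTo('Is reaching your full career potential important to you?');

// addListener subscribes to the stream and calls "next" on every event
question$.addListener({
  next: (question) =&amp;gt; console.log('emitted:', question),
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;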

&lt;p&gt;When you run the application, you should hear the robot saying "Is reaching your full career potential important to you?" and see the output of the &lt;code&gt;SpeechRecognitionAction&lt;/code&gt; printed to your browser's console. The output has the following format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;result&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;yes&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;// transcribed texts&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;status&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;goal_id&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;  &lt;span class="c1"&gt;// a unique id for the executed action&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;stamp&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Mon Oct 01 2018 21:49:00 GMT-0700 (PDT)&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;// "Date" object&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;id&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;h0fogq2x0zo-1538455335646&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;status&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;SUCCEEDED&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;  &lt;span class="c1"&gt;// "SUCCEEDED", "PREEMPTED", or "ABORTED"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Try saying something and see how well it hears you.&lt;/p&gt;

&lt;p&gt;Now we want to improve the program to make the robot ask more than one question. For example, we can try to send questions as commands to the &lt;code&gt;SpeechSynthesisAction&lt;/code&gt; driver whenever the robot hears an appropriate answer, i.e., "yes" or "no". Let's try to express this by updating the code above as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// ...&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sinks&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;TabletFace&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;PoseDetection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;poses&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;poses&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
      &lt;span class="c1"&gt;// ...&lt;/span&gt;
    &lt;span class="na"&gt;SpeechSynthesisAction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;xs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;merge&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;TabletFace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;load&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mapTo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;Question&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;CAREER&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
      &lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SpeechRecognitionAction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
        &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SUCCEEDED&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;  &lt;span class="c1"&gt;// must succeed&lt;/span&gt;
        &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;yes&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;no&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;// only yes or no&lt;/span&gt;
      &lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// Hmm...&lt;/span&gt;
      &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="na"&gt;SpeechRecognitionAction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SpeechSynthesisAction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mapTo&lt;/span&gt;&lt;span class="p"&gt;({})&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;sinks&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we are merging the commands from the stream that emits the first question (&lt;code&gt;sources.TabletFace.load.mapTo(Question.CAREER)&lt;/code&gt;) and the commands from the stream that emits a subsequent question on hearing "yes" or "no" (&lt;code&gt;sources.SpeechRecognitionAction.result.filter(// ...&lt;/code&gt;) using the &lt;a href="https://github.com/staltz/xstream#merge" rel="noopener noreferrer"&gt;&lt;code&gt;merge&lt;/code&gt;&lt;/a&gt; xstream factory.&lt;/p&gt;
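
&lt;p&gt;If you have not used &lt;code&gt;merge&lt;/code&gt; before, here is a minimal standalone sketch of its behavior, with &lt;code&gt;xs.of&lt;/code&gt; and &lt;code&gt;xs.periodic&lt;/code&gt; standing in for the two driver streams (for illustration only, not part of the application code):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;import xs from 'xstream';

const first$ = xs.of('first question');                  // emits once, like the load event
const rest$ = xs.periodic(1000).mapTo('next question');  // stands in for recognized answers

// the merged stream emits events from both inputs as they arrive:
// 'first question' right away, then 'next question' every second
xs.merge(first$, rest$).addListener({
  next: (question) =&amp;gt; console.log(question),
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;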

&lt;p&gt;There is one problem with this approach. We cannot figure out which question to emit in the second stream since the next question depends on the last question the robot asked, which in turn depends on the question before it, and so on. In other words, we need the previous output of the stream we are creating as an input to that same stream.&lt;/p&gt;

&lt;p&gt;To solve this circular dependency problem, we adopt the proxy pattern by updating the &lt;code&gt;main&lt;/code&gt; function as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// ...&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;lastQuestion$&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;xs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;question$&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;xs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;merge&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;TabletFace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;load&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mapTo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;Question&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;CAREER&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SpeechRecognitionAction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
      &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SUCCEEDED&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;  &lt;span class="c1"&gt;// must succeed&lt;/span&gt;
      &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;yes&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;no&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;// only yes or no&lt;/span&gt;
    &lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;startWith&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;compose&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;sampleCombine&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="nx"&gt;lastQuestion$&lt;/span&gt;
    &lt;span class="p"&gt;)).&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(([&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;question&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;transitionTable&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;question&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;lastQuestion$&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;imitate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;question$&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sinks&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;TabletFace&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;PoseDetection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;poses&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;poses&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
      &lt;span class="c1"&gt;// ...&lt;/span&gt;
    &lt;span class="na"&gt;SpeechSynthesisAction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;question$&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;SpeechRecognitionAction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SpeechSynthesisAction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mapTo&lt;/span&gt;&lt;span class="p"&gt;({})&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;sinks&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we have moved the code that creates the stream for &lt;code&gt;sinks.SpeechSynthesisAction&lt;/code&gt; outside of the &lt;code&gt;sinks&lt;/code&gt; object definition. We create an empty proxy stream &lt;code&gt;lastQuestion$&lt;/code&gt; using the &lt;a href="https://github.com/staltz/xstream#create" rel="noopener noreferrer"&gt;&lt;code&gt;create&lt;/code&gt;&lt;/a&gt; xstream factory and use it when creating the &lt;code&gt;question$&lt;/code&gt; stream.&lt;br&gt;
We then use the &lt;a href="https://github.com/staltz/xstream#imitate" rel="noopener noreferrer"&gt;&lt;code&gt;imitate&lt;/code&gt;&lt;/a&gt; xstream operator to connect the proxy stream, &lt;code&gt;lastQuestion$&lt;/code&gt;, to its source stream, &lt;code&gt;question$&lt;/code&gt;. We also use the &lt;a href="https://github.com/staltz/xstream#compose" rel="noopener noreferrer"&gt;&lt;code&gt;compose&lt;/code&gt;&lt;/a&gt; and &lt;a href="https://github.com/staltz/xstream/blob/master/EXTRA_DOCS.md#sampleCombine" rel="noopener noreferrer"&gt;&lt;code&gt;sampleCombine&lt;/code&gt;&lt;/a&gt; xstream operators to combine events from the stream originating from &lt;code&gt;sources.SpeechRecognitionAction.result&lt;/code&gt; with the latest event from the &lt;code&gt;lastQuestion$&lt;/code&gt; stream. Note that I add &lt;code&gt;$&lt;/code&gt; at the end of stream variable names to distinguish them from other variables, as the Cycle.js authors do. Try the updated application and check that the robot asks more than one question when you respond with "yes" or "no".&lt;/p&gt;
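
&lt;p&gt;If the proxy pattern is new to you, here is a minimal standalone sketch of &lt;code&gt;create&lt;/code&gt;, &lt;code&gt;imitate&lt;/code&gt;, and &lt;code&gt;sampleCombine&lt;/code&gt;, reduced to a counter whose next value depends on its own previous output (for illustration only, not part of the application code; note that &lt;code&gt;imitate&lt;/code&gt; does not support &lt;code&gt;MemoryStream&lt;/code&gt;s, so &lt;code&gt;startWith&lt;/code&gt; is applied to the sampled stream rather than to the stream being imitated):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;import xs from 'xstream';
import sampleCombine from 'xstream/extra/sampleCombine';

// an empty proxy stream, connected to count$ below via imitate
const lastCount$ = xs.create();
const count$ = xs.periodic(1000)                    // stands in for the "yes"/"no" responses
  .compose(sampleCombine(lastCount$.startWith(0)))  // attach the latest previous output
  .map(([, lastCount]) =&amp;gt; lastCount + 1);       // the next value depends on the previous one
lastCount$.imitate(count$);

count$.addListener({ next: (count) =&amp;gt; console.log(count) });  // 1, 2, 3, ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;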

&lt;p&gt;You may have wondered when we updated the code to send the "start listening" command (&lt;code&gt;{}&lt;/code&gt;) after &lt;em&gt;all&lt;/em&gt; questions. We didn't need to; the code we already had works as desired since the &lt;code&gt;sources.SpeechSynthesisAction.result&lt;/code&gt; stream emits an event when &lt;em&gt;every&lt;/em&gt; synthesized speech finishes.&lt;/p&gt;

&lt;p&gt;One problem you may have run into is that the robot fails to ask the next question when it hears an answer that is not "yes" or "no", e.g., a misheard word. In such a case, the robot should start listening again to give the person a chance to correct their answer. Let's update the code to fix this problem:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ...&lt;/span&gt;
    &lt;span class="nx"&gt;SpeechSynthesisAction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;question$&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;SpeechRecognitionAction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;xs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;merge&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SpeechSynthesisAction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="nx"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SpeechRecognitionAction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
        &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SUCCEEDED&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
        &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;yes&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;no&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;mapTo&lt;/span&gt;&lt;span class="p"&gt;({})&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;sinks&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the updated application. You should see that the robot keeps listening and printing whatever it hears to the console until it hears "yes" or "no", and only then asks the next question.&lt;/p&gt;

&lt;p&gt;We are done at this point. Take the quiz to find out your travel personality, and enjoy!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Exercise ideas:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Implement one of &lt;a href="https://www.buzzfeed.com/lukelewis/the-most-important-flowcharts-of-all-time" rel="noopener noreferrer"&gt;"The 24 Most Important Flowcharts Of All Time"&lt;/a&gt; to make the robot answer one of the biggest questions in life?&lt;/li&gt;
&lt;li&gt;Make your robot read tweets from a certain Twitter user whenever that user posts a tweet, e.g., using &lt;a href="https://developer.twitter.com/en/docs/tweets/filter-realtime/overview" rel="noopener noreferrer"&gt;a Twitter API&lt;/a&gt;?&lt;/li&gt;
&lt;li&gt;Make your robot alert you whenever a &lt;a href="https://www.youtube.com/watch?v=uS1KcjkWdoU" rel="noopener noreferrer"&gt;stock's price goes below or above a certain threshold&lt;/a&gt;?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Please let me know if something isn’t clear, and I’d be happy to chat about your concerns. Thank you for reading!&lt;/p&gt;

&lt;h4&gt;
  
  
  Miscellaneous
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Fun fact: &lt;a href="https://spectrum.ieee.org/automaton/robotics/humanoids/what-people-see-in-157-robot-faces" rel="noopener noreferrer"&gt;many social robots today use a screen as a face&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Check out &lt;a href="http://rxmarbles.com/#mergeMap" rel="noopener noreferrer"&gt;RxJS Marbles&lt;/a&gt; for visualizing stream operators with marble diagrams, e.g., &lt;a href="http://rxmarbles.com/#interval" rel="noopener noreferrer"&gt;interval&lt;/a&gt; (periodic in xstream), &lt;a href="http://rxmarbles.com/#map" rel="noopener noreferrer"&gt;map&lt;/a&gt;, &lt;a href="http://rxmarbles.com/#filter" rel="noopener noreferrer"&gt;filter&lt;/a&gt;, &lt;a href="http://rxmarbles.com/#mapTo" rel="noopener noreferrer"&gt;mapTo&lt;/a&gt;, and &lt;a href="http://rxmarbles.com/#merge" rel="noopener noreferrer"&gt;merge&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;If you are a &lt;a href="http://www.ros.org/" rel="noopener noreferrer"&gt;ROS&lt;/a&gt; user, check out my &lt;a href="https://github.com/mjyc/cycle-ros-example" rel="noopener noreferrer"&gt;experimental Cycle.js driver&lt;/a&gt; for communicating with ROS using &lt;a href="https://github.com/RobotWebTools/roslibjs" rel="noopener noreferrer"&gt;roslibjs&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Help me improve &lt;a href="//./"&gt;cycle-robot-drivers&lt;/a&gt; library by participating in &lt;a href="https://goo.gl/forms/rdnvgk8rWrUmbtrt1" rel="noopener noreferrer"&gt;this brief survey&lt;/a&gt;!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;My name is Mike Chung. I'm a &lt;a href="https://homes.cs.washington.edu/~mjyc/" rel="noopener noreferrer"&gt;graduate student&lt;/a&gt; interested in the field of human-robot interaction and machine learning. You can reach me on &lt;a href="https://twitter.com/mjycio" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; and on &lt;a href="https://github.com/mjyc" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>cyclejs</category>
      <category>reactive</category>
      <category>robotics</category>
    </item>
  </channel>
</rss>
