TL;DR:
- Built with Supabase, React, WebGazer.js, Motion One, anime.js, Stable Audio
- Leverages Supabase Realtime Presence & Broadcast (no database tables used at all!)
- GitHub repo
- Website
- Demo video
Yet another Supabase Launch Week Hackathon and yet another experimental project, called Gaze into the Abyss. This ended up being both one of the simplest and one of the most complex projects at the same time. Luckily I've been enjoying Cursor quite a bit recently, so I had some helping hands to make it through! I also wanted to validate a question in my mind: is it possible to use just the realtime features from Supabase, without any database tables? The (maybe somewhat obvious) answer is: yes, yes it is (love you, Realtime team ♥️). So let's dive a bit deeper into the implementation.
The idea
One day I was randomly thinking about Nietzsche's quote about the abyss and how it would be nice (and cool) to actually visualize it somehow: you stare into a dark screen and something stares back at you. Nothing much more to it!
Building the project
Initially I had the idea of using Three.js for this project, however I realized it would mean creating or finding some free assets for the 3D eye(s). That felt like a bit too much, especially since I didn't have much time to work on the project itself, so I decided to do it in 2D with SVGs instead.
I also did not want it to be only visual: it would be a better experience with some audio too. So I had an idea that it would be awesome if the participants could talk into a microphone and others could hear it as unintelligible whispers or winds passing by. This, however, turned out to be very challenging, and I decided to drop it completely as I wasn't able to hook up Web Audio and WebRTC together well. I do have a leftover component in the codebase which listens to the local microphone and triggers "wind sounds" for the current user, if you want to take a look. Maybe something to add in the future?
Realtime rooms
Before working on any visual stuff, I wanted to test out the realtime setup I had in mind. Since there are some limitations in the realtime feature, I wanted it to work so that:
- There are max. 10 participants in one channel at a time
- meaning you'd need to join a new channel if one is full
- You should only see other participants' eyes
For this I came up with a useEffect setup where the client recursively joins a realtime channel until it finds one with a free slot.
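A minimal sketch of that join logic, assuming a supabase client created with createClient, a userId ref, a setCurrentChannel setter and a MAX_PARTICIPANTS constant (the names here are illustrative, not the exact code from the repo):

```typescript
const MAX_PARTICIPANTS = 10 // hypothetical constant for the room size limit

const joinRoom = (roomIndex: number) => {
  const room = supabase.channel(`room-${roomIndex}`)

  room
    .on('presence', { event: 'join' }, () => {
      // currentPresences was empty for me here, so count participants manually
      const participantCount = Object.keys(room.presenceState()).length

      if (participantCount > MAX_PARTICIPANTS) {
        // Room is full: leave it and recursively try the next one
        room.unsubscribe()
        joinRoom(roomIndex + 1)
      } else {
        setCurrentChannel(room)
      }
    })
    .subscribe(async (status) => {
      if (status === 'SUBSCRIBED') {
        // Announce ourselves so presence join events fire for everyone
        await room.track({ user_id: userId.current })
      }
    })
}
```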
This joinRoom lives inside the useEffect hook and is called when the room component is mounted. One caveat I found while working on this feature was that the currentPresences param does not contain any values in the join event, even though it should be available. I'm not sure if that's a bug in the implementation or working as intended. Hence the manual room.presenceState fetch to get the number of participants in the room whenever the user joins.
We check the participant count and either unsubscribe from the current room and try to join another one, or proceed with the current room. We do this in the join event, as sync would be too late (it triggers after the join and leave events).
I tested this implementation by opening a whole lot of tabs in my browser and all seemed swell!
After that I wanted to debug the solution with mouse position updates and quickly ran into issues with sending too many messages in the channel! The solution: throttle the calls.
/**
* Creates a throttled version of a function that can only be called at most once
* in the specified time period.
*/
function createThrottledFunction<T extends (...args: unknown[]) => unknown>(
functionToThrottle: T,
waitTimeMs: number
): (...args: Parameters<T>) => void {
let isWaitingToExecute = false
return function throttledFunction(...args: Parameters<T>) {
if (!isWaitingToExecute) {
functionToThrottle.apply(this, args)
isWaitingToExecute = true
setTimeout(() => {
isWaitingToExecute = false
}, waitTimeMs)
}
}
}
Cursor came up with this little throttle function creator and I used it with the eye tracking broadcasts like this:
const throttledBroadcast = createThrottledFunction((data: EyeTrackingData) => {
if (currentChannel) {
currentChannel.send({
type: 'broadcast',
event: 'eye_tracking',
payload: data
})
}
}, THROTTLE_MS)
throttledBroadcast({
userId: userId.current,
isBlinking: isCurrentlyBlinking,
gazeX,
gazeY
})
This helped a lot! Also, in the initial versions I had the eye tracking messages sent with presence; however, broadcast allows more messages per second, so I switched the implementation to that instead. This is especially crucial in eye tracking, since the camera records everything all the time.
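On the receiving side, each client listens for those broadcasts on the same channel and stores the latest values per user. A minimal sketch, reusing the EyeTrackingData type from above and assuming the room channel from the join step and a setEyeTrackingState state setter (both names are mine, not necessarily the repo's):

```typescript
// Update the per-user eye tracking state whenever a broadcast arrives
room.on('broadcast', { event: 'eye_tracking' }, ({ payload }) => {
  const data = payload as EyeTrackingData
  setEyeTrackingState(previous => ({
    ...previous,
    [data.userId]: {
      isBlinking: data.isBlinking,
      gazeX: data.gazeX,
      gazeY: data.gazeY
    }
  }))
})
```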
Eye tracking
I had run into WebGazer.js a while back when I first had the idea for this project. It's a very interesting project and works surprisingly well!
The whole eye tracking capability is handled in one function inside a useEffect hook:
window.webgazer
.setGazeListener(async (data: any) => {
if (data == null || !currentChannel || !ctxRef.current) return
try {
// Get normalized gaze coordinates
const gazeX = data.x / windowSize.width
const gazeY = data.y / windowSize.height
// Get video element
const videoElement = document.getElementById('webgazerVideoFeed') as HTMLVideoElement
if (!videoElement) {
console.error('WebGazer video element not found')
return
}
// Set canvas size to match video
imageCanvasRef.current.width = videoElement.videoWidth
imageCanvasRef.current.height = videoElement.videoHeight
// Draw current frame to canvas
ctxRef.current?.drawImage(videoElement, 0, 0)
// Get eye patches
const tracker = window.webgazer.getTracker()
const patches = await tracker.getEyePatches(
videoElement,
imageCanvasRef.current,
videoElement.videoWidth,
videoElement.videoHeight
)
if (!patches?.right?.patch?.data || !patches?.left?.patch?.data) {
console.error('No eye patches detected')
return
}
// Calculate brightness for each eye
const calculateBrightness = (imageData: ImageData) => {
let total = 0
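// Sample every 4th pixel (16 RGBA bytes per step); the divisor below accounts for this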
for (let i = 0; i < imageData.data.length; i += 16) {
// Convert RGB to grayscale
const r = imageData.data[i]
const g = imageData.data[i + 1]
const b = imageData.data[i + 2]
total += (r + g + b) / 3
}
return total / (imageData.width * imageData.height / 4)
}
const rightEyeBrightness = calculateBrightness(patches.right.patch)
const leftEyeBrightness = calculateBrightness(patches.left.patch)
const avgBrightness = (rightEyeBrightness + leftEyeBrightness) / 2
// Update rolling average
if (brightnessSamples.current.length >= SAMPLES_SIZE) {
brightnessSamples.current.shift() // Remove oldest sample
}
brightnessSamples.current.push(avgBrightness)
// Calculate dynamic threshold from rolling average
const rollingAverage = brightnessSamples.current.reduce((a, b) => a + b, 0) / brightnessSamples.current.length
const dynamicThreshold = rollingAverage * THRESHOLD_MULTIPLIER
// Detect blink using dynamic threshold
const blinkDetected = avgBrightness > dynamicThreshold
// Debounce blink detection to avoid rapid changes
if (blinkDetected !== isCurrentlyBlinking) {
const now = Date.now()
if (now - lastBlinkTime > 100) { // Minimum time between blink state changes
isCurrentlyBlinking = blinkDetected
lastBlinkTime = now
}
}
// Use throttled broadcast instead of direct send
throttledBroadcast({
userId: userId.current,
isBlinking: isCurrentlyBlinking,
gazeX,
gazeY
})
} catch (error) {
console.error('Error processing gaze data:', error)
}
})
Getting the information about where the user is looking is simple and works much like getting mouse positions on the screen. However, I also wanted to add blink detection as a (cool) feature, which required jumping through some hoops.
When you google information about WebGazer and blink detection, you can see some remnants of an initial implementation; there is even commented-out code in the source. Unfortunately these kinds of capabilities do not exist in the library, so you'll need to do it manually.
After a lot of trial and error, Cursor and I were able to come up with a solution that calculates pixel brightness levels from the eye patch data to determine when the user is blinking. It also has some dynamic lighting adjustment, as I noticed that (at least for me) the webcam doesn't always recognize a blink, depending on your lighting. For me it worked worse the brighter my picture/room was, and better in darker lighting (go figure).
While debugging the eye tracking capabilities (WebGazer has a very nice .setPredictionPoints call that displays a red dot on the screen to visualize where you are looking), I noticed that the tracking is not very accurate unless you calibrate it, which is what the project asks you to do before joining any rooms.
const startCalibration = useCallback(() => {
const points: CalibrationPoint[] = [
{ x: 0.1, y: 0.1 },
{ x: 0.9, y: 0.1 },
{ x: 0.5, y: 0.5 },
{ x: 0.1, y: 0.9 },
{ x: 0.9, y: 0.9 },
]
setCalibrationPoints(points)
setCurrentPoint(0)
setIsCalibrating(true)
window.webgazer.clearData()
}, [])
const handleCalibrationClick = useCallback((event: React.MouseEvent) => {
if (!isCalibrating) return
// Record click location for calibration
const x = event.clientX
const y = event.clientY
window.webgazer.recordScreenPosition(x, y, 'click')
if (currentPoint < calibrationPoints.length - 1) {
setCurrentPoint(prev => prev + 1)
} else {
setIsCalibrating(false)
setHasCalibrated(true)
}
}, [isCalibrating, currentPoint, calibrationPoints.length])
<div
className="fixed inset-0 bg-black/50 flex items-center justify-center z-50"
>
{calibrationPoints.map((point, index) => (
<div
key={index}
onClick={handleCalibrationClick}
className={`absolute w-6 h-6 rounded-full transform -translate-x-1/2 -translate-y-1/2 ${
index === currentPoint ? 'bg-red-500 cursor-pointer pulsate' : 'bg-gray-500'
} z-50`}
style={{
left: `${point.x * 100}%`,
top: `${point.y * 100}%`,
}}
>
</div>
))}
<div className="text-white text-center z-50 font-serif text-md mb-24">
Click the red dot to calibrate ({currentPoint + 1}/{calibrationPoints.length})
</div>
</div>
Basically we're rendering 5 dots on the screen: one in each corner and one in the center. Clicking them records the screen position in WebGazer so it can adjust its model to better predict where you are looking. You might wonder what this clicking actually does: the funny part is that you are naturally looking at the spot you are clicking. As you do that, WebGazer can match your eye movements to known screen positions and provide more accurate results. Very cool!
The Eye
I had already added a simple SVG implementation for the eye and hooked it up to the tracking, but it needed to be stylized a bit more. Below is roughly how it ended up looking. The inspiration was Alucard Eyes by MIKELopez.
This is an earlier version of the eye, but it's pretty much 95% of the way there. I sent the video to a couple of my friends and they thought it was very cool, especially knowing that it was actually following your eye movements! You can also see WebGazer's prediction dot moving on the screen.
The eye component itself is an SVG with some path animations via Motion.
<svg
className={`w-full h-full self-${alignment} max-w-[350px] max-h-[235px]`}
viewBox="-50 0 350 235"
preserveAspectRatio="xMidYMid meet"
>
{/* Definitions for gradients and filters */}
<defs>
<filter id="pupil-blur">
<feGaussianBlur stdDeviation="0.75" />
</filter>
<radialGradient id="eyeball-gradient">
<stop offset="60%" stopColor="#dcdae0" />
<stop offset="100%" stopColor="#a8a7ad" />
</radialGradient>
<radialGradient
id="pupil-gradient"
cx="0.35"
cy="0.35"
r="0.65"
>
<stop offset="0%" stopColor="#444" />
<stop offset="75%" stopColor="#000" />
<stop offset="100%" stopColor="#000" />
</radialGradient>
<radialGradient
id="corner-gradient-left"
cx="0.3"
cy="0.5"
r="0.25"
gradientUnits="objectBoundingBox"
>
<stop offset="0%" stopColor="rgba(0,0,0,0.75)" />
<stop offset="100%" stopColor="rgba(0,0,0,0)" />
</radialGradient>
<radialGradient
id="corner-gradient-right"
cx="0.7"
cy="0.5"
r="0.25"
gradientUnits="objectBoundingBox"
>
<stop offset="0%" stopColor="rgba(0,0,0,0.75)" />
<stop offset="100%" stopColor="rgba(0,0,0,0)" />
</radialGradient>
<filter id="filter0_f_302_14" x="-25" y="0" width="320" height="150" filterUnits="userSpaceOnUse" colorInterpolationFilters="sRGB">
<feGaussianBlur stdDeviation="4.1" result="effect1_foregroundBlur_302_14"/>
</filter>
<filter id="filter1_f_302_14" x="-25" y="85" width="320" height="150" filterUnits="userSpaceOnUse" colorInterpolationFilters="sRGB">
<feGaussianBlur stdDeviation="4.1" result="effect1_foregroundBlur_302_14"/>
</filter>
<filter id="filter2_f_302_14" x="-50" y="-30" width="400" height="170" filterUnits="userSpaceOnUse" colorInterpolationFilters="sRGB">
<feGaussianBlur stdDeviation="7.6" result="effect1_foregroundBlur_302_14"/>
</filter>
<filter id="filter3_f_302_14" x="-50" y="95" width="400" height="170" filterUnits="userSpaceOnUse" colorInterpolationFilters="sRGB">
<feGaussianBlur stdDeviation="7.6" result="effect1_foregroundBlur_302_14"/>
</filter>
<filter id="filter4_f_302_14" x="0" y="-20" width="260" height="150" filterUnits="userSpaceOnUse" colorInterpolationFilters="sRGB">
<feGaussianBlur stdDeviation="3.35" result="effect1_foregroundBlur_302_14"/>
</filter>
<filter id="filter5_f_302_14" x="0" y="105" width="260" height="150" filterUnits="userSpaceOnUse" colorInterpolationFilters="sRGB">
<feGaussianBlur stdDeviation="3.35" result="effect1_foregroundBlur_302_14"/>
</filter>
</defs>
{/* Eyeball */}
<ellipse
cx="131"
cy="117.5"
rx="100"
ry="65"
fill="url(#eyeball-gradient)"
/>
{/* After the main eyeball ellipse but before the eyelids, add the corner shadows */}
<ellipse
cx="50"
cy="117.5"
rx="50"
ry="90"
fill="url(#corner-gradient-left)"
/>
<ellipse
cx="205"
cy="117.5"
rx="50"
ry="90"
fill="url(#corner-gradient-right)"
/>
{/* Corner reflections - repositioned diagonally */}
<circle
cx={45}
cy={135}
r="1.5"
fill="white"
className="opacity-60"
/>
<circle
cx={215}
cy={100}
r="2"
fill="white"
className="opacity-60"
/>
{/* Smaller companion reflections - repositioned diagonally */}
<circle
cx={35}
cy={120}
r="1"
fill="white"
className="opacity-40"
/>
<circle
cx={222}
cy={110}
r="1.5"
fill="white"
className="opacity-40"
/>
{/* Pupil group with animations */}
<motion.g
variants={pupilVariants}
animate={isBlinking ? "hidden" : "visible"}
>
{/* Pupil */}
<motion.ellipse
cx={131}
cy={117.5}
rx="50"
ry="50"
fill="url(#pupil-gradient)"
filter="url(#pupil-blur)"
animate={{
cx: 131 + pupilOffsetX,
cy: 117.5 + pupilOffsetY
}}
transition={{
type: "spring",
stiffness: 400,
damping: 30
}}
/>
{/* Light reflections */}
<motion.circle
cx={111}
cy={102.5}
r="5"
fill="white"
animate={{
cx: 111 + pupilOffsetX,
cy: 102.5 + pupilOffsetY
}}
transition={{
type: "spring",
stiffness: 400,
damping: 30
}}
/>
<motion.circle
cx={124}
cy={102.5}
r="3"
fill="white"
animate={{
cx: 124 + pupilOffsetX,
cy: 102.5 + pupilOffsetY
}}
transition={{
type: "spring",
stiffness: 400,
damping: 30
}}
/>
</motion.g>
{/* Upper eyelid */}
<motion.path
custom={true}
variants={eyelidVariants}
animate={isBlinking ? "closed" : "open"}
fill="#000"
/>
{/* Lower eyelid */}
<motion.path
custom={false}
variants={eyelidVariants}
animate={isBlinking ? "closed" : "open"}
fill="#000"
/>
{/* Top blurred lines */}
<g filter="url(#filter0_f_302_14)">
<motion.path
custom={true}
variants={blurredLineVariants}
animate={isBlinking ? "closed" : "open"}
stroke="#2A2A2A"
strokeWidth="5"
strokeLinecap="round"
/>
</g>
<g filter="url(#filter2_f_302_14)">
<motion.path
custom={true}
variants={outerBlurredLineVariants}
animate={isBlinking ? "closed" : "open"}
stroke="#777777"
strokeWidth="5"
strokeLinecap="round"
/>
</g>
<g filter="url(#filter4_f_302_14)">
<motion.path
custom={true}
variants={arcLineVariants}
animate={isBlinking ? "closed" : "open"}
stroke="#838383"
strokeWidth="5"
strokeLinecap="round"
/>
</g>
{/* Bottom blurred lines */}
<g filter="url(#filter1_f_302_14)">
<motion.path
variants={bottomBlurredLineVariants}
animate={isBlinking ? "closed" : "open"}
stroke="#2A2A2A"
strokeWidth="5"
strokeLinecap="round"
/>
</g>
<g filter="url(#filter3_f_302_14)">
<motion.path
variants={bottomOuterBlurredLineVariants}
animate={isBlinking ? "closed" : "open"}
stroke="#777777"
strokeWidth="5"
strokeLinecap="round"
/>
</g>
<g filter="url(#filter5_f_302_14)">
<motion.path
variants={bottomArcLineVariants}
animate={isBlinking ? "closed" : "open"}
stroke="#838383"
strokeWidth="5"
strokeLinecap="round"
/>
</g>
</svg>
Cursor works surprisingly well with SVG paths. For example, the eyelid-closing animation is basically done by straightening out a curved path. I just highlighted the path in the editor, pasted it into the Composer and asked it to add an animation that straightens out the points so that the eye looks like it's closing/blinking.
// Define the open and closed states for both eyelids
const upperLidOpen = "M128.5 53.5C59.3 55.5 33 99.6667 28.5 121.5H0V0L261.5 0V121.5H227.5C214.7 65.1 156.167 52.6667 128.5 53.5Z"
const upperLidClosed = "M128.5 117.5C59.3 117.5 33 117.5 28.5 117.5H0V0L261.5 0V117.5H227.5C214.7 117.5 156.167 117.5 128.5 117.5Z"
const lowerLidOpen = "M128.5 181C59.3 179 33 134.833 28.5 113H0V234.5H261.5V113H227.5C214.7 169.4 156.167 181.833 128.5 181Z"
const lowerLidClosed = "M128.5 117.5C59.3 117.5 33 117.5 28.5 117.5H0V234.5H261.5V117.5H227.5C214.7 117.5 156.167 117.5 128.5 117.5Z"
// Animation variants for the eyelids
const eyelidVariants = {
open: (isUpper: boolean) => ({
d: isUpper ? upperLidOpen : lowerLidOpen,
transition: {
duration: 0.4,
ease: "easeOut"
}
}),
closed: (isUpper: boolean) => ({
d: isUpper ? upperLidClosed : lowerLidClosed,
transition: {
duration: 0.15,
ease: "easeIn"
}
})
}
It was a very cool experience to see this in action! I applied the same approach to the surrounding lines and instructed Cursor to "collapse" them towards the center, which it did pretty much in one go!
The eyes would then be rendered inside a simple CSS grid with cells aligned so that a full room looks like a big eye.
<div className="fixed inset-0 grid grid-cols-3 grid-rows-3 gap-4 p-8 md:gap-2 md:p-4 lg:max-w-6xl lg:mx-auto">
{Object.entries(roomState.participants).map(([key, presences]) => {
const participant = presences[0]
const eyeData = eyeTrackingState[key]
if (key === userId.current) return null
return (
<div
key={key}
className={`flex items-center justify-center ${getGridClass(participant.position)}`}
>
<Eyes
isBlinking={eyeData?.isBlinking ?? false}
gazeX={eyeData?.gazeX ?? 0.5}
gazeY={eyeData?.gazeY ?? 0.5}
alignment={getEyeAlignment(participant.position)}
/>
</div>
)
})}
</div>
// Helper function to convert position to Tailwind grid classes
function getGridClass(position: string): string {
switch (position) {
case 'center': return 'col-start-2 row-start-2'
case 'middleLeft': return 'col-start-1 row-start-2'
case 'middleRight': return 'col-start-3 row-start-2'
case 'topCenter': return 'col-start-2 row-start-1'
case 'bottomCenter': return 'col-start-2 row-start-3'
case 'topLeft': return 'col-start-1 row-start-1'
case 'topRight': return 'col-start-3 row-start-1'
case 'bottomLeft': return 'col-start-1 row-start-3'
case 'bottomRight': return 'col-start-3 row-start-3'
default: return 'col-start-2 row-start-2'
}
}
function getEyeAlignment(position: string): 'start' | 'center' | 'end' {
switch (position) {
case 'topLeft':
case 'topRight':
return 'end'
case 'bottomLeft':
case 'bottomRight':
return 'start'
default:
return 'center'
}
}
Final touches
Then slap some nice intro screen and background music and the project is good to go!
Audio always improves the experience when you are working on things like this, so I used Stable Audio to generate background music for when the user "enters the abyss." The prompt I used for the music was the following:
Ambient, creepy, background music, whispering sounds, winds, slow tempo, eerie, abyss
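Hooking the generated track up to the app is then just a looping HTML audio element; here is a minimal sketch (the hook name and file path are hypothetical, not the project's actual code):

```typescript
import { useEffect } from 'react'

// Loops the generated Stable Audio track once the user has entered the abyss
function useBackgroundAudio(hasEntered: boolean) {
  useEffect(() => {
    if (!hasEntered) return

    const audio = new Audio('/abyss-ambience.mp3') // hypothetical file name
    audio.loop = true
    audio.volume = 0.5
    // Autoplay may be blocked until the user has interacted with the page
    audio.play().catch(() => {})

    return () => {
      audio.pause()
    }
  }, [hasEntered])
}
```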
I also thought that a plain black screen is a bit boring, so I added some animated SVG filter stuff to the background. Additionally, I added a dark, blurred circle in the center of the screen for a nice fading effect. I probably could've done that with SVG filters too, but I didn't want to spend too much time on it. Then, to add some more movement, I made the background rotate on its axis; animating SVG filters can be a bit wonky, so I decided to do it this way instead.
<div style={{ width: '100vw', height: '100vh' }}>
{/* Background Elements */}
<svg className="fixed inset-0 w-full h-full -z-10">
<defs>
<filter id="noise">
<feTurbulence
id="turbFreq"
type="fractalNoise"
baseFrequency="0.01"
seed="5"
numOctaves="1"
>
</feTurbulence>
<feGaussianBlur stdDeviation="10">
<animate
attributeName="stdDeviation"
values="10;50;10"
dur="20s"
repeatCount="indefinite"
/>
</feGaussianBlur>
<feColorMatrix
type="matrix"
values="1 0 0 0 1
0 1 0 0 1
0 0 1 0 1
0 0 0 25 -13"
/>
</filter>
</defs>
<rect width="200%" height="200%" filter="url(#noise)" className="rotation-animation" />
</svg>
<div className="fixed inset-0 w-[95vw] h-[95vh] bg-black rounded-full blur-[128px] m-auto" />
Conclusion
So there you have it: a fairly straightforward look at how to implement stylized eye tracking with Supabase's realtime capabilities. Personally, I found this a very interesting experiment and didn't run into too many hiccups while working on it. And surprisingly, I did not have to pull an all-nighter the night before submitting the project!
Feel free to check out the project or the demo video to see how it turned out. There might be some issues if a bunch of people are using it at the same time (very hard to test, as it requires multiple devices and webcams to do properly), but I guess that's in the fashion of hackathon projects? And if you do test it out, remember that if you see an eye, it's someone else watching you somewhere over the internet!