Lee Martin

Wishing Upon A Star with Web AR for Disney’s Wish

Reposted from my dev blog: leemartin.com/wish-star-finder

I’ve had the opportunity to work with Disney a couple of times in the past few years on a number of Instagram filters, but I’ve never been hired to build a web app for the company. So, imagine my delight when Dani Ratliff from Disney Music contacted me about their new film Wish, wondering if I’d be interested in taking the celestial learnings I’ve applied to other client projects for Jack White, Eddie Vedder, Shinedown, and others, and attempting to build something unique for Wish on… the web. Finally, my wish came true.

Wish is the culmination of 100 years of Disney, a company that has been a creative inspiration to me since I was a child. I’ve always admired Disney’s ability to weave R&D, engineering, and artistry to tell stories in new ways. From them, I learned that technology should fade into the background and allow the magic of the interaction to shine through in simple and accessible ways. So how does one take these learnings and apply them to marketing a film like Wish?

In the film, our main character Asha makes a wish on a star, a scene we’ve seen countless times in Disney films. However, in this instance, the star answers and appears to her as a celestial being aptly called “Star.”

What if we developed an AR experience in the browser that plots a field of wishing stars in the user’s sky? Then, when the user points their device at one of these stars, it unlocks a piece of content from the upcoming film. Users then have the option to wish on that star and watch as the star listens to their request. Once the user finishes wishing, the wishing star bursts in response and reveals the user themselves, only to be shortly joined by the character “Star.” This entire magical experience is recorded and then offered back to the user as a shareable video.

Make a wish on a star today at wish.disneymusic.co and read on to learn how it all came together.

Wireframes

Once a scope is agreed upon and a proposal is approved, I provide my clients with a series of designless wireframes which illustrate the entire user journey. This allows everyone to understand how the application will function without getting sidetracked by the distraction of visual design. Here’s a look at some of those wireframes.

Wish Star Finder Wireframes

Once these wireframes are approved, I will begin developing a prototype version of the application which also does not include design. This allows me to get buy-in on the UX before we delve into the opinionated world of design.

Permissions

Since our application is a web AR app, we’ll need the user to agree to a few permissions in order to gain access to the technology required to augment their reality. Access to their camera is required so the application may stream an image of their reality. We also require access to their device’s motion and orientation so that they may use their device like a controller and point it at stars in the sky. Finally, we’ll need access to their microphone to receive and record their spoken wish.

Camera & Microphone

We use WebRTC to gain access to a user’s camera and microphone using the getUserMedia method. Typically, I would gain access to both of these from the same call. However, our experience requires the camera to flip from facing the environment to facing the user, and I noticed that the brief window during which the flip occurred (and the microphone wasn’t available) contributed to a bit of audio lag in the final recorded video. This was one of the nastier bugs I faced in development. So, we’ll access each of these on its own media stream so that the camera can flip independently from the microphone.

I decided to write simple composables for both the camera and the microphone for maximum reusability. Here’s what the microphone looks like.

export const useMicrophone = () => {
  // State
  const microphoneStream = useState('microphoneStream')

  // Start microphone
  // ----------
  const start = async () => {
    // Promise
    return new Promise(async (resolve, revoke) => {
      try {
        // Get microphone
        microphoneStream.value = await navigator.mediaDevices.getUserMedia({
          audio: true
        })

        // Resolve
        resolve()

      } catch (e) {
        // Microphone error
        switch (e.name) {
          case "NotFoundError":
            revoke("Please enable the microphone on your device.")
            break
          case "NotAllowedError":
            revoke("Please allow access to your microphone.")
            break
          case "NotReadableError":
            revoke("Please close all other tabs which are using your microphone.")
            break
          default:
            revoke("This experience requires access to your microphone.")
        }

      }

    })

  }

  // Stop microphone
  // ----------
  const stop = () => {
    // If stream exists
    if (microphoneStream.value) {
      // Stop all streams
      microphoneStream.value.getTracks().forEach(track => track.stop())

      // Set stream to null
      microphoneStream.value = null

    }    

  }

  // Return
  return {
    start,
    stop
  }

}

I can then import the start and stop methods from this composable on any required page or component.

const { start, stop } = useMicrophone()

I use a state variable to keep track of the microphone stream because it is used in a few places throughout the app.

The camera composable works in a similar way with a few differences. First, the view state is passed to the camera start method so the camera is facing the appropriate direction. Next, I manually update a video tag with the camera stream and wait for its metadata to load since we’ll be rendering the visual from this video tag in our AR scene.
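
Here’s a rough sketch of what a camera composable along those lines could look like. This is a minimal version, not the production code: the cameraVideo element id is my own placeholder and error handling is omitted.

export const useCamera = () => {
  // State
  const cameraStream = useState('cameraStream')

  // Start camera
  // ----------
  const start = async (view) => {
    // Get a camera stream facing the requested direction
    cameraStream.value = await navigator.mediaDevices.getUserMedia({
      video: {
        facingMode: view == 'user' ? 'user' : 'environment'
      }
    })

    // Pipe the stream into the video tag rendered in the AR scene
    const video = document.getElementById('cameraVideo')
    video.srcObject = cameraStream.value

    // Wait for metadata so the video has dimensions before we draw it
    await new Promise(resolve => {
      video.onloadedmetadata = () => {
        video.play()
        resolve()
      }
    })

  }

  // Stop camera
  // ----------
  const stop = () => {
    // If stream exists
    if (cameraStream.value) {
      // Stop all tracks
      cameraStream.value.getTracks().forEach(track => track.stop())

      // Set stream to null
      cameraStream.value = null

    }

  }

  // Return
  return {
    start,
    stop
  }

}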

Motion

The API for gaining access to a user’s device motion and orientation is a bit different from WebRTC but not too complicated once you get the hang of it. To keep my code concise, I created a composable for this permission also. So, even though the permission APIs between media devices and device motion are different, at least they can look similar in my code. The trick to accessing motion is making sure DeviceMotionEvent and DeviceOrientationEvent exist on the window and then using the requestPermission method to gain access to them. If everything is successful, the promise resolves with “granted.” Here’s what my composable looks like.

export const useOrientation = () => {
  // Start orientation
  // ----------
  const start = async () => {
    // Promise
    return new Promise((resolve, revoke) => {
      // If request permission exists
      if (window.DeviceMotionEvent && window.DeviceOrientationEvent && typeof window.DeviceOrientationEvent.requestPermission === 'function') {
        // Request permission
        window.DeviceOrientationEvent.requestPermission()
        .then(response => {
          // Log
          console.log('orientation permission: ', response)

          // If granted
          if (response == "granted") {
            // Resolve
            resolve()

          } else {
            // Revoke
            revoke()

          }

        })
        .catch(e => {
          // Log
          console.log('orientation error: ', e)

          // Revoke
          revoke("This experience requires access to your device's motion.")

        })

      } else {
        // Log
        console.log('orientation does not exist')

        // Resolve
        resolve()

      }

    })

  }

  // Return
  return {
    start
  }
}

Access Granted

I keep track of the status of all of these permission requests on the same screen and compute whether or not they are successful. Once the user has enabled their camera, microphone, and motion, they may continue the experience.

const grantedAccess = computed(() => {
  // All access granted
  return cameraStream.value && microphoneStream.value && orientationEnabled.value
})

Now, let’s create a sky of wishing stars.

Wishing Stars

Star Types

Our application augments the user’s sky with a Three.js powered scene of wishing stars. The user may then point their device at one of these stars to reveal the content associated with that particular star. Contentful is used by the client to manage these stars and a new one is added every Wednesday at the wishing time of 11:11. Let’s talk a little bit about plotting these stars and targeting them with the user’s device.

Managing Stars

As I mentioned, Contentful is used by the client to manage the wishing stars in the sky and their associated content. Content can be an image, video, or audio. Once the content is added to Contentful, we can use their Publishing API to fetch them for plotting. To do this, I first set up a Contentful plugin in Nuxt and set up the associated environment variables in Netlify.

// Imports
import * as contentful from 'contentful'

// Define plugin
export default defineNuxtPlugin(nuxtApp => {
  // Use config
  const config = useRuntimeConfig().public

  // Provide contentful client as helper
  return {
    provide: {
      contentful: contentful.createClient({
        space: config.contentfulSpaceId,
        accessToken: config.contentfulAccessToken,
        host: config.contentfulHost
      })
    }
  }

})
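
For reference, the public runtime config values used above come from Nuxt’s runtime config, which reads the environment variables defined in Netlify. Here’s a minimal sketch of what that mapping might look like in nuxt.config; the exact environment variable names are my assumption.

export default defineNuxtConfig({
  runtimeConfig: {
    public: {
      // Populated from environment variables set in Netlify (assumed names)
      contentfulSpaceId: process.env.CONTENTFUL_SPACE_ID,
      contentfulAccessToken: process.env.CONTENTFUL_ACCESS_TOKEN,
      contentfulHost: process.env.CONTENTFUL_HOST
    }
  }
})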

I can then use this client to fetch all the star entries.

const { items } = await $contentful.getEntries()

Now that we have our wishing stars, let’s plot them in the sky.

Plotting Stars

As mentioned, we’re using Three.js to power the AR visual of our web app. I’ll save you the basics of building with Three.js since they have wonderful docs and instead focus on the specifics of our application. A wishing star in our sky is one that the user can interact with. So, even though there are many stars visible in the app, most of these are what I call “idle stars” and they cannot be interacted with. A wishing star is actually a Three.js Group which includes several Sprites (planes that always face the camera), layered as described in the list below, with a rough sketch of such a group right after it.

Wishing Star Layers

  • The body of the star is the graphic of the iconic 8-point wishing star we’ve seen in many Disney films.
  • The glow of the star is the glow of light surrounding the 8-point star. This will animate as the user speaks their wish.
  • The flare of the star is the star burst graphic that appears when the wishing star begins to answer a user’s wish.
  • The flash is a flat white Sprite that is used to envelop the user’s screen in bright white light after the star bursts.
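
Here’s a rough sketch of how a group like this might be assembled. The textures, opacities, and helper name are placeholders of mine rather than the production values.

// Assemble a wishing star group from its sprite layers (sketch)
const buildWishStar = (bodyTexture, glowTexture, flareTexture) => {
  // Group
  const star = new THREE.Group()

  // Body: the iconic 8-point wishing star
  const body = new THREE.Sprite(new THREE.SpriteMaterial({ map: bodyTexture, transparent: true }))

  // Glow: animated while the user speaks their wish
  const glow = new THREE.Sprite(new THREE.SpriteMaterial({ map: glowTexture, transparent: true, opacity: 0.5 }))

  // Flare: the burst graphic, hidden until the star answers
  const flare = new THREE.Sprite(new THREE.SpriteMaterial({ map: flareTexture, transparent: true, opacity: 0 }))

  // Flash: a flat white sprite used to whiten the screen
  const flash = new THREE.Sprite(new THREE.SpriteMaterial({ color: 0xffffff, transparent: true, opacity: 0 }))

  // Layer the sprites into the group
  star.add(body, glow, flare, flash)

  // Return the group
  return star
}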

Both the idle and wishing stars need to be plotted in the upper hemisphere of the user’s reality, aka their sky. In order to do this, we compute a random spherical position within 60 degrees of the point directly above the user and then place the star there.

// Random position
let position = new THREE.Vector3().setFromSphericalCoords(
  100, 
  THREE.MathUtils.degToRad(Math.random() * 60),
  THREE.MathUtils.degToRad(Math.random() * 360)
)

// Position star
star.position.copy(position)

All of these stars are added to another wishStars group so they can be easily targeted later.

In order to make the sky itself a bit more interesting, we plot a Sphere in the scene and give it a Disney Wish texture. I use a gradient fade so the texture is more noticeable the higher the user points up in the sky. This helps differentiate the user’s reality from the Wish reality.
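
In case it helps picture it, here’s roughly what that sky sphere could look like. The texture path and radius are placeholders, and I’m assuming the gradient fade is baked into the texture’s alpha channel.

// Sky texture with a baked-in alpha gradient (assumed asset path)
const skyTexture = new THREE.TextureLoader().load('/textures/wish-sky.png')

// Sphere rendered on its inside faces so the camera sees it from within
const sky = new THREE.Mesh(
  new THREE.SphereGeometry(200, 32, 32),
  new THREE.MeshBasicMaterial({
    map: skyTexture,
    side: THREE.BackSide,
    transparent: true
  })
)

// Add sky to scene
scene.add(sky)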

Targeting Stars

With these wishing star groups and their associated sprite layers plotted in our textured sky, we can add DeviceOrientationControls and a Raycaster to determine which star the user is currently pointing at. Check out the Eddie Vedder “Earthling” dev blog for more context on resurrecting the defunct DeviceOrientationControls using patch-package. Once the user’s device orientation is controlling the camera, we can use a Raycaster to determine if they are intersecting with a wish star. First, let’s set up our Raycaster and a 2d Pointer.

// Raycaster
raycaster = new THREE.Raycaster()

// Pointer
pointer = new THREE.Vector2()

Now, in our render loop, we can update the raycaster and check for any intersections with wish stars.

// Update ray
raycaster.setFromCamera(pointer, camera)

// Check for intersections
const intersects = raycaster.intersectObjects(wishStars.children)

This combination of DeviceOrientationControls and Raycaster works really well. However, I didn’t want the associated content to unlock immediately but rather when the user paused over the wishing star for a moment.

Unlocking Stars

When a user intersects with a wishing star, I use Greensock to tween a progress variable from 0.0 to 1.0 over 3 seconds. If the user stops pointing at that wishing star before the tween completes, the tween is killed and the wishing star content is not revealed. However, if the animation completes, the star content will reveal itself as a <dialog> element (more on that soon.) Here’s a look at that Greensock tween.

starUnlock = gsap.to(starProgress, {
  duration: 3,
  ease: "linear",
  value: 1,
  onComplete: () => {
    // Star visible
    starVisible.value = true
  }
})

A tween can be killed with the kill method.

starUnlock.kill()

In order for the user to know this is happening, we need a clear visual of the unlock progressing. To achieve this, I designed a custom SVG scope in the Wish aesthetic which receives a starProgress prop and animates a glowing white circular progress ring. Here’s a CodePen of that component.
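
The CodePen has the full ornamental design, but the core trick is just an SVG circle whose stroke-dashoffset is driven by the starProgress prop. Here’s a stripped-down sketch of that idea; the real scope adds the Wish styling and glow on top.

<script setup>
// Progress between 0.0 and 1.0
const props = defineProps({ starProgress: Number })

// Circle radius and resulting circumference
const radius = 45
const circumference = 2 * Math.PI * radius

// Offset the dash so the ring fills as progress increases
const dashOffset = computed(() => circumference * (1 - props.starProgress))
</script>

<template>
  <svg viewBox="0 0 100 100">
    <!-- White progress ring (the glow is styling on the real component) -->
    <circle
      cx="50" cy="50" :r="radius"
      fill="none" stroke="white" stroke-width="3"
      :stroke-dasharray="circumference"
      :stroke-dashoffset="dashOffset"
      transform="rotate(-90 50 50)" />
  </svg>
</template>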

Revealing Content

When the user successfully unlocks a wishing star’s content, it is revealed in a <dialog> element. I love this standardized method of bringing up a modal dialog box that must be interacted with before it can be closed. It can also be nicely customized with CSS, and I did my best through the use of pseudo elements to give it some Disney Wish flair.

We can show a dialog by calling the showModal method and hide it by using the close method.

// Show modal
document.querySelector("dialog").showModal()

// Close modal
document.querySelector("dialog").close()

In the case of our modal, users have the ability to close it and return to exploring the sky or close it and begin wishing on the currently targeted star. Before we talk about wishing, let’s discuss our setup for recording the wishing experience as a video.

Wish Recorder

Before we allow the user to begin their wish, we need to set up the infrastructure that will record it. Rather than directly record the Three.js renderer canvas, I set up a separate recording canvas with the story-friendly dimensions of 1080px by 1920px. We can then overlay additional images and videos which are not visible in the user experience but should be present in the final shareable video. All of this visual content is then recorded with MediaRecorder, but we also need to manage and record the experience’s audio, namely the audio coming from the microphone and the “Star” character videos.

Recording Canvas

The recording canvas itself is simply a blank 1080 x 1920 canvas.

// Recording canvas
recordingCanvas = document.createElement("canvas")

// Size
recordingCanvas.height = 1920
recordingCanvas.width = 1080

// Recording context
recordingContext = recordingCanvas.getContext("2d")

We can scale, position, and draw the Three.js renderer canvas onto this recording canvas by calculating a cover size using the intrinsic-scale helper library.

// Get cover
let { x, y, height, width } = cover(recordingCanvas.width, recordingCanvas.height, threeCanvas.width, threeCanvas.height)

// Draw three
recordingContext.drawImage(threeCanvas, 0, 0, threeCanvas.width, threeCanvas.height, x, y, width, height)


In addition to drawing the Three.js canvas, we can draw image and video layers to add a bit of a story and more visual elements to the final recorded video.

For example, I could draw the Wish logo and a little quote from the film over the Three.js scene. What about the moment the user’s screen goes white? Instead of that simply being a blank white screen in the final video, we’ve made it a short video clip which shows the “Star” character zipping past the white screen before appearing to the user. To manage this, I use a shot variable to determine which additional layer to draw when rendering the recording canvas.

if (shot.value == "wish" || shot.value == "star") {
  // Draw overlay image
  recordingContext.drawImage(overlayImage, 0, 0)

} else if (shot.value == "flyby") {
  // Draw video
  recordingContext.drawImage(document.getElementById("flybyVideo"), 0, 0)

} else if (shot.value == "end") {
  // Draw end card
  recordingContext.drawImage(endImage, 0, 0)

}

We can then adjust the current “shot” as the user progresses through the interaction. Now that we’re rendering the canvas we’d like to record, let’s set up our MediaRecorder to record it and all of the audio happening in our experience.

MediaRecorder

The MediaRecorder interface is a fascinating piece of standard browser technology that allows us to record a group of visual and audio streams as new video or audio files. In the case of our app, we’re interested in recording the “recording canvas” we’re rendering with all of our visual content. In addition, we’re interested in recording the user’s microphone when they are wishing and the “Star” character’s sound effects when it is on screen.

Video

To begin with, capturing a visual stream and associated video track of the recording canvas couldn’t be simpler.

// Get canvas stream
const canvasStream = recordingCanvas.captureStream(30)

// Get video tracks
const [videoTrack] = canvasStream.getVideoTracks()

Audio

As for the audio… Here’s what I settled on after a lot of engineering work. The audio in our experience comes from three different places. First, from the microphone stream when the user is wishing. Second, from the “Star” flyby video which is secretly rendered onto the recording canvas while the user’s screen is white. And lastly, from the “Star” appearance video which is part of the Three.js scene featuring the user’s video. One of these is an existing media stream and the other two are existing media elements (video tags in the HTML). In order to manage this variety of sources and their associated volumes, we can use the Web Audio API.

First, we’ll set up a new media stream destination to receive all of these sources.

// Create audio context
const context = new AudioContext()

// Create destination
const destination = context.createMediaStreamDestination()

For our microphone, we’ll create a new stream source and connect it to a gain node so we may dynamically control its volume when recording. We then connect the gain node to our destination stream.

// Microphone source
const microphoneSource = context.createMediaStreamSource(microphoneStream.value)

// Microphone volume
microphoneVolume = context.createGain()

// Adjust volume
microphoneVolume.gain.value = 1.0

// Connect to volume
microphoneSource.connect(microphoneVolume)

// Connect to destination
microphoneVolume.connect(destination)

For our “Star” videos, we’ll create a pair of media element sources and also connect them to individual gain nodes so we can adjust their volume as needed. Unlike the microphone, we’ll want the user to hear these videos through the app so we’ll connect their gain nodes to the primary audio context destination in addition to the destination stream. Here’s an example.

// Star video source
const starSource = context.createMediaElementSource(document.getElementById("starVideo"))

// Star volume
starVolume = context.createGain()

// Adjust volume
starVolume.gain.value = 0.0

// Connect to volume
starSource.connect(starVolume)

// Connect to destination
starVolume.connect(destination)

// Connect star volume to context destination
starVolume.connect(context.destination)

With all of these sources now connected to the destination stream, we can access the stream and get its associated audio track.

// Get audio stream
const audioStream = destination.stream

// Get audio track
const [audioTrack] = audioStream.getAudioTracks()

We can then combine this audio track with the recording canvas video track to create our final video stream which is what will be used to finally initialize our MediaRecorder.

// Create video stream
const videoStream = new MediaStream([videoTrack, audioTrack])

// Recorder
recorder = new MediaRecorder(videoStream)

Since this is already starting to feel like a MediaRecorder dev blog, I’m going to stop short of how to use MediaRecorder and instead direct you to the excellent documentation on the subject. The most important bits are storing video chunks as they are available and creating the video blob once recording is complete.

// On data available
recorder.ondataavailable = e => {
  // If size
  if (e.data.size > 0) {
    // Push new chunks
    recorderChunks.push(e.data)

  }
}

// On stop
recorder.onstop = e => {
  // Create blob
  let blob = new Blob(recorderChunks, {
    type: recorder.mimeType
  })

  // Store blob
  videoBlob.value = blob

}
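
For completeness, starting and stopping the recorder is just a couple of calls. Recording in one second chunks here is my own choice rather than necessarily what the app uses.

// Start recording, emitting data in one second chunks
recorder.start(1000)

// ...and later, once the "Star" video has ended
recorder.stop()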

Analyzer

In addition to using Web Audio to wrangle our audio sources for recording, we’ll use it to set up an AnalyserNode to receive volume levels from a user’s microphone as they speak. Then, we’ll use these levels to adjust the glow of the wishing star so it seems like the star is glowing in response. Here’s how you set up a very simple analyser and connect it to the microphone source.

// Create analyser
analyser = context.createAnalyser()

// Set fft size
analyser.fftSize = 64

// Connect microphone to analyser
microphoneSource.connect(analyser)

// Initialize analyser data
analyserData = new Float32Array(analyser.fftSize)

We’ll discuss how to use this analyser next.

Wishing On A Star

With the recorder set up and ready to receive wishes, it is time to let the user make their wish. As the user speaks into their microphone, our Web Audio analyser will adjust the glow of the wishing star so it seems to be listening. Once the user has finished wishing or the wishing time elapses, the wishing star will react by bursting and flashing the screen white. This is very similar to what Asha experiences in the film when she makes her own wish.

Listening Star

Within our render loop, we’ll analyze the microphone to get an average volume level while the user is wishing and then use this level to adjust the wishing star’s glow. First, we’ll use getFloatTimeDomainData to get the current waveform data. Then we’ll compute an RMS average of this data, scale it, and clamp it with Math.min and Math.max to a value between 0.5 and 1.0. Finally, we’ll adjust the opacity of the glow.

// Get time domain data
analyser.getFloatTimeDomainData(analyserData)

// Sum
let sum = 0.0

// Loop through data
analyserData.forEach(d => sum += d * d)

// Get average
let avg = Math.sqrt(sum / analyserData.length) / 0.1

// Get level
let level = Math.min(1.0, Math.max(0.5, avg))

// Adjust glow opacity
glow.material.opacity = level


Now the wishing star will glow as the user speaks.

Star Reaction

Once the user finishes their wish, the wishing star will burst until the entire screen goes white. This effect is achieved by using a Greensock timeline to tween the scale and opacity of the flare and flash sprites within the wishing star group. Here’s that setup.

// Timeline
let tl = gsap.timeline()

// Fade in flare
tl.to(flare.material, {
  duration: 1,
  ease: "power4.in",
  opacity: 1
})

// Scale flare
tl.to(flare.scale, {
  duration: 1,
  ease: "power4.in",
  x: 200,
  y: 200, 
}, "<")

// Fade in flash
tl.to(flash.material, {
  duration: .5,
  opacity: 1
})

As the flare fades in, it is also scaled up to 200 on both x and y. Then, the flash appears and the screen goes white.

Wishing Time

In an attempt to keep the final recorded video on the shorter side, users are given a maximum of 15 seconds to make their wish. The solution here is similar to the Greensock-powered progress scope we used to unlock wishing stars. However, instead of an elaborate SVG scope, I’m just using a simple progress bar in the header of the experience. Here’s the Greensock tween.

wishTimeout = gsap.to(wishProgress, {
  duration: 15,
  ease: "linear",
  value: 1.0,
  onComplete: () => {
    // Stop wishing
  }
})

Users also have the ability to click a “Finish Wish” button if they finish wishing before the 15 seconds are up, at which point the wishTimeout tween is killed.

Star Answers

With the screen white and our wisher waiting patiently, we use this opportunity to flip their device camera so it is now facing them. In addition, we no longer record their microphone audio and instead focus on recording the “Star” character audio. When the user finally sees themselves, they are not alone. They are accompanied by “Star” who has heard and answered their wish.

Camera Flip

Since we put together that nice composable for accessing the user’s camera, in order to flip it, we simply need to stop the camera, adjust the view state, and start the camera again. Again, we do this independently from the microphone to help fight audio lag on the final recorded video.

// Stop camera
stopCamera()

// View user
view.value = "user"

// Start camera
startCamera()

Now let’s adjust the audio.

Audio Flip

While the user is wishing, we’re interested in recording their microphone audio but when they’re done, we’re interested in recording the “Star” character SFX. Remember those Web Audio gain nodes we set up earlier? Here’s where they come into play. By adjusting the gains, we can mute the microphone audio and unmute the “Star” audio for recording.

// Mute microphone
microphoneVolume.gain.value = 0.0

// Unmute flyby
flybyVolume.gain.value = 1.0

// Unmute star
starVolume.gain.value = 1.0

I love to see Web Audio and MediaRecorder work together in this manner.

Star Video Sprite

The final magical element of our experience is when the “Star” character appears to the user to sprinkle some stardust on them. We’re all stars after all. The star video is also a Three.js Sprite, which is resized to cover the full dimensions of the Three.js scene. Transparency is achieved by creating a star video source which has both color and alpha channels and using both separately as maps for the sprite material. Let’s start with initializing the video textures.

Video Textures

Here’s an example video showing the stacking of both color and alpha channels. Luckily, Disney provided excellent video toolkit assets which had these channels available to me. What we want to do is use the top for our full color texture map and the bottom for our alpha map. That way, areas which are black on the alpha map texture will be transparent. This involves creating new VideoTextures from the starVideo video tag and using offset and repeat to isolate each specific channel/area.

// Star texture
starTexture = new THREE.VideoTexture(document.getElementById("starVideo"))

// Wrap
starTexture.wrapS = THREE.RepeatWrapping
starTexture.wrapT = THREE.RepeatWrapping

// Offset
starTexture.offset.set(0.0, 0.5)

// Repeat
starTexture.repeat.set(1.0, 0.5)

// Star alpha
starAlpha = new THREE.VideoTexture(document.getElementById("starVideo"))

// Wrap
starAlpha.wrapS = THREE.RepeatWrapping
starAlpha.wrapT = THREE.RepeatWrapping

// Offset
starAlpha.offset.set(0.0, 0.0)

// Repeat
starAlpha.repeat.set(1.0, 0.5)

We can then use these when initializing our sprite material.

const material = new THREE.SpriteMaterial({
  alphaMap: starAlpha,
  depthTest: false,
  map: starTexture,
  side: THREE.DoubleSide,
  transparent: true
})

Resizing Sprite

In order to have the video sprite fill the screen, we’ll calculate the vertical field of view and then the screen height based on how far the camera is from our sprite. It’s two lines of fancy math that effectively resizes the sprite so it fills the entire vertical space of the screen.

// Vertical field of view
let vFOV = THREE.MathUtils.degToRad(camera.fov)

// Height
let height = 2 * Math.tan(vFOV / 2) * (camera.position.z - starVideo.position.z)

// Scale
starVideo.scale.set(height, height, 1)

Playing Sprite

With our star video sprite resized to fit the view and properly textured, we simply need to play the starVideo tag to have the “Star” character appear in front of our user. I’ll also listen for the “ended” event so I know when to conclude the recording experience.

// Play star video
document.getElementById("starVideo").play()

// Wait until ended
document.getElementById("starVideo").addEventListener("ended", () => {
  // Stop recording

}, {
  once: true
})

Sharing

Once the video is created with MediaRecorder, it can be shared directly to any social app using the Web Share API. This involves creating a File out of the video blob and passing it to navigator.share. I like using the mime library to determine the precise extension and type from the video blob.

// Extension
let extension = mime.getExtension(videoBlob.value.type)

// Type
let type = mime.getType(extension)

// File name
let fileName = `wish.${extension}`

// File
let file = new File([videoBlob.value], fileName, {
  type: type
})

// Share file
navigator.share({
  files: [file]
})
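
One caveat: not every browser can share files, so it’s worth guarding the call with navigator.canShare before offering the button. Here’s a hedged sketch of that guard with a simple download fallback of my own invention.

// Only offer native sharing if the browser can share files
if (navigator.canShare && navigator.canShare({ files: [file] })) {
  // Share file
  navigator.share({
    files: [file]
  })

} else {
  // Fallback: offer the video as a download instead (assumed behavior)
  const link = document.createElement('a')
  link.href = URL.createObjectURL(videoBlob.value)
  link.download = fileName
  link.click()

}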

Design

The Art of Wish

While I was provided a ton of Disney Wish toolkit assets, there wasn’t a whole lot of web-friendly graphic design associated with this project. For example, Disney Wish does not have a custom built website, just a standardized page on the Disney movies domain. So, in order to create a responsive design system for this project, I had to look further. I found inspiration by exploring the full range of merchandise associated with the film, from activity books to backpacks and gel pens to UNO cards. I even purchased The Art of Wish off Amazon. This allowed me to get a bird’s-eye view of how a bunch of different stakeholders were handling design and translate some of their choices into a responsive design system of colors, typography, and buttons. Again, I try not to go overboard here. The design should feel familiar but also work well on a simple web UI. One highlight is the primary button, which uses before and after pseudo elements to create ornamental edges in the style of the architecture seen in Rosas. Here’s a CodePen of that button construction.

Acknowledgements

Thanks so much to Dani Ratliff, Weston Lyon, Natalia Castillo, and their entire team at Disney Music for this opportunity. Special thanks to the privacy, legal, and technology teams at Disney who helped me pull off my first web app for the company. I truly hope we can build more magical things together.
