j1ngzoue
Bridging 8th Wall AR and React Three Fiber: How Pose Data Flows into Three.js

What I Built

I created a React Three Fiber (R3F) wrapper library for the 8th Wall open-source AR engine, called @j1ngzoue/8thwall-react-three-fiber.

It lets you add image-tracking AR to a React app with minimal boilerplate:

<EighthwallCanvas xrSrc="/xr.js">
  <EighthwallCamera />
  <ImageTracker targetImage="/targets/marker.json">
    <mesh>
      <boxGeometry />
      <meshStandardMaterial color="hotpink" />
    </mesh>
  </ImageTracker>
</EighthwallCanvas>

Point your phone camera at the target image, and the 3D object appears anchored to it.


Two Canvases, One Screen

The trickiest part of the architecture is layering XR8's camera feed with R3F's 3D scene. They run on separate canvases stacked on top of each other:

┌─────────────────────────────┐
│  R3F Canvas (alpha=true)    │  ← 3D objects, transparent background
├─────────────────────────────┤
│  XR8 Canvas                 │  ← camera feed
└─────────────────────────────┘

Both canvases are position: absolute and fill the container. R3F renders with alpha: true so the camera feed shows through.

<div style={{ position: 'relative', width: '100%', height: '100%' }}>
  {/* XR8 renders camera feed here */}
  <canvas ref={xrCanvasRef} style={fillStyle} />

  {/* R3F renders 3D scene on top, transparent */}
  <Canvas style={fillStyle} gl={{ alpha: true }}>
    {children}
  </Canvas>
</div>
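The `fillStyle` object referenced in the snippet isn't shown. A minimal sketch of what it might look like, assuming both canvases should stretch to cover the relatively-positioned container:

```typescript
// Hypothetical definition of `fillStyle` (not from the library's source):
// absolutely position each canvas and make it fill the parent <div>.
const fillStyle = {
  position: 'absolute' as const,
  top: 0,
  left: 0,
  width: '100%',
  height: '100%',
}

export { fillStyle }
```

Because both canvases share the same absolute-fill style, stacking order is determined purely by DOM order: the XR8 canvas comes first, so the transparent R3F canvas paints on top of it.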

XR8 is initialized with the back canvas:

XR8.run({ canvas: xrCanvasRef.current })

How XR8 Pose Data Flows into Three.js

XR8 uses a camera pipeline module system. You register a module with named hooks, and XR8 calls them each frame.

Step 1: Read pose data from XR8

Every frame, XR8 calls onUpdate with detection results. We extract the pose for our target image:

XR8.addCameraPipelineModule({
  name: 'image-tracker-marker',
  onUpdate: ({ processCpuResult }) => {
    const detectedImages = processCpuResult?.reality?.detectedImages
    // [{ name, position: {x,y,z}, rotation: {x,y,z,w}, scale }]

    const pose = detectedImages?.find((img) => img.name === 'marker')
    latestPoseRef.current = pose ?? null
  },
})
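For readers wiring this up in TypeScript, the detected-image shape from the comment above could be typed roughly as follows (a sketch transcribed from the article's inline comment, not an official XR8 type):

```typescript
// Shape of one entry in processCpuResult.reality.detectedImages,
// as described in the pipeline-module comment above.
type ImagePose = {
  name: string
  position: { x: number; y: number; z: number }
  rotation: { x: number; y: number; z: number; w: number } // quaternion
  scale: number
}

// Example value for the 'marker' target used throughout the article.
const example: ImagePose = {
  name: 'marker',
  position: { x: 0, y: 0, z: -1 },
  rotation: { x: 0, y: 0, z: 0, w: 1 }, // identity rotation
  scale: 1,
}
```

Typing `latestPoseRef` as `ImagePose | null` then makes the "no pose yet / target lost" case explicit at compile time.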

We store the latest pose in a ref — not state — because we don't want a re-render every frame.

Step 2: Apply pose to a Three.js group in useFrame

R3F's useFrame runs once per render frame. We read the latest pose and apply it directly to the <group> that wraps the AR content:

useFrame(() => {
  const pose = latestPoseRef.current
  if (!pose || !groupRef.current) return

  groupRef.current.position.set(
    pose.position.x,
    pose.position.y,
    pose.position.z,
  )
  groupRef.current.quaternion.set(
    pose.rotation.x,
    pose.rotation.y,
    pose.rotation.z,
    pose.rotation.w,
  )
  groupRef.current.scale.setScalar(pose.scale)
})

Step 3: Show/hide on detection events

XR8 fires events when a target is found or lost. We use these to toggle visibility:

listeners: [
  {
    event: 'reality.imagefound',
    process: ({ detail }) => {
      if (detail.name !== targetName) return
      setVisible(true)
    },
  },
  {
    event: 'reality.imagelost',
    process: ({ detail }) => {
      if (detail.name !== targetName) return
      setVisible(false)
      latestPoseRef.current = null
    },
  },
],
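The found/lost toggle is simple enough to express as a pure function, which makes it easy to unit-test outside React. A sketch of the same logic (the function name and signature are mine, not the library's):

```typescript
// Pure version of the listener logic above: given an XR8 event and its
// detail.name, compute the next visibility. Events for other targets
// leave the current visibility unchanged.
function nextVisibility(
  event: 'reality.imagefound' | 'reality.imagelost',
  detailName: string,
  targetName: string,
  current: boolean,
): boolean {
  if (detailName !== targetName) return current
  return event === 'reality.imagefound'
}

export { nextVisibility }
```

Inside the listeners, `setVisible(nextVisibility(event, detail.name, targetName, visible))` would then replace the inline branching.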

The final JSX renders a <group> whose visibility and transform are driven entirely by XR8:

return (
  <group ref={groupRef} visible={visible}>
    {children}
  </group>
)

Syncing the Camera Matrix

The 3D scene also needs to match the physical camera's field of view. At startup, XR8 provides videoWidth and videoHeight from the device camera, which we use to estimate an initial FOV; on each frame, XR8 also supplies a camera projection matrix that we copy into the Three.js camera:

XR8.addCameraPipelineModule({
  name: 'camera-sync',
  onStart: ({ videoWidth, videoHeight }) => {
    // Estimate FOV from video aspect ratio
    activeFov = estimateFovFromVideo(videoWidth, videoHeight)
  },
  onUpdate: ({ processCpuResult }) => {
    const cameraProjectionMatrix =
      processCpuResult?.reality?.cameraProjectionMatrix
    if (cameraProjectionMatrix) {
      // Store for use in useFrame
      latestMatrixRef.current = cameraProjectionMatrix
    }
  },
})

useFrame(({ camera }) => {
  if (latestMatrixRef.current) {
    camera.projectionMatrix.fromArray(latestMatrixRef.current)
    camera.projectionMatrixInverse
      .copy(camera.projectionMatrix)
      .invert()
    camera.matrixAutoUpdate = false
  }
})
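The `estimateFovFromVideo` helper isn't shown above. One plausible sketch: assume a fixed horizontal FOV for the device camera (the 60° below is an assumption, not a measured value) and derive the vertical FOV that Three.js's PerspectiveCamera expects from the video's aspect ratio:

```typescript
// Hypothetical FOV estimate: assume the device camera's horizontal FOV,
// then convert to a vertical FOV in degrees using the video aspect ratio.
// This is only a fallback until XR8's real projection matrix arrives.
const ASSUMED_HORIZONTAL_FOV_DEG = 60

function estimateFovFromVideo(videoWidth: number, videoHeight: number): number {
  const hFovRad = (ASSUMED_HORIZONTAL_FOV_DEG * Math.PI) / 180
  // tan(vFov / 2) = tan(hFov / 2) * (height / width)
  const vFovRad =
    2 * Math.atan(Math.tan(hFovRad / 2) * (videoHeight / videoWidth))
  return (vFovRad * 180) / Math.PI
}

export { estimateFovFromVideo }
```

A square video returns the assumed 60°, while a portrait feed (height greater than width) yields a larger vertical FOV, matching the intuition that a taller frame sees more vertically.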

Without this, 3D objects would appear at the wrong depth and scale relative to the real world.


Key Takeaways

  • Use two stacked canvases to combine XR8's camera feed with R3F's transparent 3D scene
  • XR8 pose data flows through a camera pipeline module → ref → useFrame → Three.js group transform
  • Store pose in a ref, not state, to avoid unnecessary re-renders every frame
  • Sync the Three.js camera projection matrix with XR8's data so depth and scale match the real world

The library is on npm:

npm install @j1ngzoue/8thwall-react-three-fiber

GitHub: https://github.com/activeguild/8thwall-react-three-fiber
