While contributing to Sharkord — a self-hosted, Discord-like communication platform built with TypeScript — I implemented a feature where a sound plays when a remote user starts or stops screensharing in a voice channel.
In this article I'll walk through how I explored the codebase, matched the existing sound patterns, and wired everything together.
## The Problem
Sharkord already had sounds for many voice channel events:
- User joins/leaves a voice channel
- Muting/unmuting mic
- Enabling/disabling webcam
- Own user starting/stopping screenshare
But there was no sound when a remote user (someone else) started or stopped screensharing. You'd have no audio feedback that someone else just started sharing their screen.
## Exploring the Codebase

Before writing any code I searched for how the existing sounds worked. I found the sound system in `apps/client/src/features/server/sounds/actions.ts`.

The project uses the Web Audio API — no external sound files needed! All sounds are generated programmatically using oscillators.

Here's the pattern every sound follows:
```ts
const createOsc = (type: OscillatorType, freq: number) => {
  const osc = audioCtx.createOscillator()
  osc.type = type
  osc.frequency.setValueAtTime(freq, now())
  return osc
}

const createGain = (value = 1) => {
  const gain = audioCtx.createGain()
  gain.gain.setValueAtTime(value * SOUNDS_VOLUME, now())
  return gain
}
```
Every sound creates oscillators, connects them to gain nodes, and fades them out using `exponentialRampToValueAtTime`.
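That exponential ramp is a geometric interpolation: per the Web Audio spec, the value moves from the previous scheduled value to the target along `v(t) = v0 * (v1 / v0)^((t - t0) / (t1 - t0))`. The helper below is my own illustration of that curve, not Sharkord code:

```typescript
// Value of an exponential ramp at time t, following the Web Audio spec formula.
// Both endpoints must be non-zero — which is why the sounds ramp to 0.0001
// rather than 0.
const rampValue = (v0: number, v1: number, t0: number, t1: number, t: number): number =>
  v0 * Math.pow(v1 / v0, (t - t0) / (t1 - t0))

// Fading a 0.08 gain down to 0.0001 over 0.1 s:
console.log(rampValue(0.08, 0.0001, 0, 0.1, 0))    // 0.08 (start)
console.log(rampValue(0.08, 0.0001, 0, 0.1, 0.1))  // 0.0001 (end)
console.log(rampValue(0.08, 0.0001, 0, 0.1, 0.05)) // geometric midpoint, ≈ 0.00283
```

The geometric shape is why these sounds decay so quickly to the ear: loudness perception is roughly logarithmic, so an exponential fade sounds like a smooth, linear fade-out.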
## The Existing Screenshare Sounds

The own-user screenshare sounds already existed:
```ts
// STARTED_SCREENSHARE — richer activation sequence
const sfxOwnUserStartedScreenshare = () => {
  const pulses = [
    { freq: 600, delay: 0 },
    { freq: 800, delay: 0.06 },
    { freq: 1000, delay: 0.12 }
  ]

  pulses.forEach(({ freq, delay }) => {
    const t = now() + delay
    const osc = createOsc('sine', freq)
    const gain = createGain(0.08)
    gain.gain.exponentialRampToValueAtTime(0.0001, t + 0.1)
    osc.connect(gain).connect(audioCtx.destination)
    osc.start(t)
    osc.stop(t + 0.1)
  })
}
```
The maintainer specifically asked that the new sound be consistent with existing ones. So I based the remote user sounds on the own user versions but made them slightly softer (gain 0.06 instead of 0.08).
## What I Changed

### 1. Added new `SoundType` entries (`types.ts`)
```ts
export enum SoundType {
  // ... existing entries ...
  REMOTE_USER_STARTED_SCREENSHARE = 'remote_user_started_screenshare',
  REMOTE_USER_STOPPED_SCREENSHARE = 'remote_user_stopped_screenshare'
}
```
### 2. Added new sound functions (`sounds/actions.ts`)
```ts
// REMOTE STARTED SCREENSHARE — similar to own user but slightly softer
const sfxRemoteUserStartedScreenshare = () => {
  const pulses = [
    { freq: 600, delay: 0 },
    { freq: 800, delay: 0.06 },
    { freq: 1000, delay: 0.12 }
  ]

  pulses.forEach(({ freq, delay }) => {
    const t = now() + delay
    const osc = createOsc('sine', freq)
    const gain = createGain(0.06) // slightly softer than own user (0.08)
    gain.gain.exponentialRampToValueAtTime(0.0001, t + 0.1)
    osc.connect(gain).connect(audioCtx.destination)
    osc.start(t)
    osc.stop(t + 0.1)
  })

  const osc2 = createOsc('triangle', 1200)
  const gain2 = createGain(0.02)
  gain2.gain.exponentialRampToValueAtTime(0.0001, now() + 0.2)
  osc2.connect(gain2).connect(audioCtx.destination)
  osc2.start(now() + 0.08)
  osc2.stop(now() + 0.22)
}
```
Then I added the cases to the `playSound` switch:
```ts
case SoundType.REMOTE_USER_STARTED_SCREENSHARE:
  return sfxRemoteUserStartedScreenshare()
case SoundType.REMOTE_USER_STOPPED_SCREENSHARE:
  return sfxRemoteUserStoppedScreenshare()
```
### 3. Triggered the sound in voice actions (`voice/actions.ts`)
This was the trickiest part. I needed to detect when a remote user's `sharingScreen` state changed from `false` to `true` (or `true` to `false`).
The key was reading the current state before dispatching the update:
```ts
export const updateVoiceUserState = (
  userId: number,
  channelId: number,
  newState: Partial<VoiceUserState>
): void => {
  const state = store.getState()
  const ownUserId = ownUserIdSelector(state)
  const currentChannelId = currentVoiceChannelIdSelector(state)

  if (userId !== ownUserId && channelId === currentChannelId) {
    const currentUserState = state.server.voiceMap[channelId]?.users[userId]

    if (newState.sharingScreen === true && !currentUserState?.sharingScreen) {
      playSound(SoundType.REMOTE_USER_STARTED_SCREENSHARE)
    } else if (newState.sharingScreen === false && currentUserState?.sharingScreen) {
      playSound(SoundType.REMOTE_USER_STOPPED_SCREENSHARE)
    }
  }

  store.dispatch(
    serverSliceActions.updateVoiceUserState({ userId, channelId, newState })
  )
}
```
### Why read state before dispatching?

If I dispatched first and then checked, the state would already have been updated and there would be nothing to compare against. By reading state before dispatching I can compare both sides of the transition:

- `currentUserState.sharingScreen` — what it was
- `newState.sharingScreen` — what it's becoming

This gives me clean transition detection.
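The comparison itself is small enough to factor into a pure function, which also makes it unit-testable in isolation. This helper is my own sketch, not code from the actual change:

```typescript
type Transition = 'started' | 'stopped' | null

// Detect a false→true or true→false transition of the sharingScreen flag.
// `oldValue` may be undefined when the user isn't in the local voice map yet;
// `newValue` may be undefined when the update doesn't touch sharingScreen.
const detectScreenshareTransition = (
  oldValue: boolean | undefined,
  newValue: boolean | undefined
): Transition => {
  if (newValue === true && !oldValue) return 'started'
  if (newValue === false && oldValue) return 'stopped'
  return null // no change
}

console.log(detectScreenshareTransition(false, true))     // 'started'
console.log(detectScreenshareTransition(true, false))     // 'stopped'
console.log(detectScreenshareTransition(true, true))      // null
console.log(detectScreenshareTransition(undefined, true)) // 'started'
```

Note the `undefined → true` case: treating "not in the map yet" as "wasn't sharing" means a user who joins mid-share still triggers the started sound.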
## The Key Lesson

**Read state before dispatching when you need to detect transitions.**
This pattern is useful whenever you need to:
- Play a sound when something changes
- Show a notification when a value crosses a threshold
- Log when a state transitions from one value to another
```ts
// ✅ Read current state BEFORE dispatch
const currentState = store.getState()
const oldValue = currentState.something.value

if (newValue === true && !oldValue) {
  // transitioning from false → true
  doSomething()
}

store.dispatch(updateSomething(newValue))
```
## Summary

| File | Change |
|---|---|
| `types.ts` | Added 2 new `SoundType` entries |
| `sounds/actions.ts` | Added 2 new sound functions + switch cases |
| `voice/actions.ts` | Added transition detection logic |
If you found this helpful, check out the Sharkord repo and my GitHub profile.
Have questions or spotted something I missed? Drop a comment below!