In the previous post, I showed you how to set up a Chromium extension project so that it supports TypeScript, offers autocompletion wherever possible, and works nicely as a starter. Now, I'll briefly walk through the implementation of my simple Page Audio extension.
Intro
Idea
What I wanted from my extension was very simple - when I go to a specific website, it should start playing predefined audio. Hard-coded website name and audio are completely fine.
In a bit more detail, the audio should start playing when I open www.example.com, stop when I switch to a different tab, and resume when I go back to www.example.com. Also, if I have two (or more) tabs with www.example.com opened and I switch between them, the audio should keep playing without restarting. In other words, the audio should be played at the whole-extension level, not in individual tabs.
General technical approach
In short, we need to create an `HTMLAudioElement` somewhere and play/pause it depending on the website in the current tab.
It is doable with a service worker and content scripts - we could have a content script create an `HTMLAudioElement` on every page and use a service worker to coordinate the playback. When a tab loses focus, it passes the current media timestamp to the service worker, and when another tab with a matching URL gains focus, it asks the service worker for the timestamp and resumes the playback from there.
However, I think this approach is a bit convoluted and might be prone to errors. It would be much nicer if we could have only one `HTMLAudioElement` and play/pause it globally, not from individual tabs. Luckily, there's an interesting API that will greatly help us - the offscreen API.
The offscreen API lets the extension create one invisible HTML document. Using it, we'll have a place to keep our `HTMLAudioElement` and just play/pause it when needed. Bear in mind that the service worker still can't do any DOM operations, so we'll need a helper script in our offscreen document to receive service worker messages and control the player accordingly.
Implementation
Needed permissions in manifest.json
My extension needs two entries in the `permissions` array:
- `tabs` - it needs to know when the user is switching and/or updating tabs
- `offscreen` - it needs the ability to create an offscreen document to play the audio from there
If you open extension details in the browser, you'll see permissions described as:
Read your browsing history
It might look a bit scary, but that's what adding the `tabs` permission causes. Unfortunately, I wasn't able to figure out a different approach with less concerning permissions. The other ideas I had resulted in even scarier permission sets 😅 In this thread you can read why the `tabs` permission causes that entry.
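For reference, the relevant part of `manifest.json` might look roughly like this (the name, version, and script path below are illustrative placeholders - adjust them to match your project from the previous post):

```json
{
  "manifest_version": 3,
  "name": "Page Audio",
  "version": "1.0",
  "background": {
    "service_worker": "dist/background.js",
    "type": "module"
  },
  "permissions": ["tabs", "offscreen"]
}
```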
Managing offscreen documents
As I've mentioned, I would like to have only one `HTMLAudioElement` and play the audio from it. To make it tab-independent, I'll use the offscreen API to create a document where the element will be kept and controlled by messages from the service worker.
I feel like doing some object-oriented programming, so here's an `OffscreenDoc` class helping with offscreen document management. In essence, it just creates the offscreen document if it's not created yet.
```typescript
// ts/offscreen-doc.ts

/**
 * Static class to manage the offscreen document
 */
export class OffscreenDoc {
  private static isCreating: Promise<boolean | void> | null;

  private constructor() {
    // private constructor to prevent instantiation
  }

  /**
   * Sets up the offscreen document if it doesn't exist
   * @param path - path to the offscreen document
   */
  static async setup(path: string) {
    if (!(await this.isDocumentCreated(path))) {
      await this.createOffscreenDocument(path);
    }
  }

  private static async createOffscreenDocument(path: string) {
    if (OffscreenDoc.isCreating) {
      await OffscreenDoc.isCreating;
    } else {
      OffscreenDoc.isCreating = chrome.offscreen.createDocument({
        url: path,
        reasons: ['AUDIO_PLAYBACK'],
        justification:
          'Used to play audio independently from the opened tabs',
      });
      await OffscreenDoc.isCreating;
      OffscreenDoc.isCreating = null;
    }
  }

  private static async isDocumentCreated(path: string) {
    // Check all windows controlled by the service worker to see if one
    // of them is the offscreen document with the given path
    const offscreenUrl = chrome.runtime.getURL(path);
    const existingContexts = await chrome.runtime.getContexts({
      contextTypes: ['OFFSCREEN_DOCUMENT'],
      documentUrls: [offscreenUrl],
    });
    return existingContexts.length > 0;
  }
}
```
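The `isCreating` promise guards against two overlapping `setup` calls creating the document twice. The same guard pattern can be sketched independently of the chrome APIs - `createResource` below is a hypothetical stand-in for `chrome.offscreen.createDocument`:

```typescript
// Hypothetical stand-in for an expensive one-time setup call,
// e.g. chrome.offscreen.createDocument
let creations = 0;
async function createResource(): Promise<void> {
  creations++;
  await new Promise((resolve) => setTimeout(resolve, 10));
}

let isCreating: Promise<void> | null = null;

// Mirrors OffscreenDoc.createOffscreenDocument: concurrent callers
// await the same in-flight promise instead of creating again
async function setupOnce(): Promise<void> {
  if (isCreating) {
    await isCreating;
  } else {
    isCreating = createResource();
    await isCreating;
    isCreating = null;
  }
}

// Three overlapping calls should result in a single creation
async function demo(): Promise<number> {
  await Promise.all([setupOnce(), setupOnce(), setupOnce()]);
  return creations;
}
```

Running `demo()` resolves to `1` - without the guard, each of the three concurrent calls would have triggered its own creation.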
As you can see, the only `public` method is `setup` and it needs some `path` when called. That's a path to an HTML document template that will be used to create our offscreen document. It's gonna be super simple in our case:
```html
<!-- offscreen.html -->
<script src="dist/offscreen.js" type="module"></script>
```
Literally, just one script tag. This script will be used to receive service worker messages, create the `HTMLAudioElement`, and play/pause the music. It also has `type="module"` as I will `import` something there.
But to receive messages, we should probably send them first.
Message interface
There isn't any strict interface for messages. We just need to make sure they are JSON-serializable. However, I would like to be as type-safe as possible, so I defined a simple interface for messages passed in my extension:
```typescript
// ts/audio-message.ts

export interface AudioMessage {
  /**
   * Command to be executed on the audio element.
   */
  command: 'play' | 'pause';
  /**
   * Source of the audio file.
   */
  source?: string;
}
```
You'll see in a moment that the `sendMessage` method isn't that great a fit for typing, but there's an easy workaround to still benefit from type safety there.
Sending messages from the service worker
The service worker is the "brain" of our extension - it knows what is happening and when, and should send appropriate messages as needed. But when exactly is that?
We should change the playback state in three situations:
- when a new tab is activated, so the user simply changes from tab A to tab B,
- when the current tab is updated, so its URL has changed, or
- when a tab is closed - that's a bit of a tricky case, as it might happen without either of the two above events firing, e.g. when the user closes the last incognito window while the audio is playing.
All situations mean we might be on the website where we want the audio to play or that we've just closed/left it.
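In all three cases, the decision reduces to a pure check on the URL of the tab that currently has focus. A minimal sketch (the `affectedPage` value matches the constant used in the full script):

```typescript
const affectedPage = 'https://example.com/';

// Decide the playback command for the URL of the tab that just gained focus.
// `undefined` (e.g. after closing a tab, or for a page whose URL we can't
// read) means we should pause.
function commandForUrl(tabUrl?: string): 'play' | 'pause' {
  return tabUrl?.includes(affectedPage) ? 'play' : 'pause';
}

console.log(commandForUrl('https://example.com/some/page')); // play
console.log(commandForUrl('https://other.site/')); // pause
console.log(commandForUrl(undefined)); // pause
```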
Without further ado, here's the updated `ts/background.ts` script reacting to these three events:
```typescript
// ts/background.ts

import { AudioMessage } from './audio-message.js';
import { OffscreenDoc } from './offscreen-doc.js';

const affectedPage = 'https://example.com/';
const defaultAudio = 'assets/audio.mp3';

// Play audio when the tab with affectedPage is active
chrome.tabs.onActivated.addListener((activeInfo) => {
  chrome.tabs.get(activeInfo.tabId, async (tab) => {
    await toggleAudio(tab.url);
  });
});

chrome.tabs.onUpdated.addListener(async (tabId, changeInfo, tab) => {
  await toggleAudio(tab.url);
});

chrome.tabs.onRemoved.addListener(async (tabId) => {
  const tabs = await chrome.tabs.query({});
  const activeTab = tabs.find((tab) => tab.active);
  await toggleAudio(activeTab?.url);
});

async function toggleAudio(tabUrl?: string) {
  await OffscreenDoc.setup('offscreen.html');
  const command = tabUrl?.includes(affectedPage) ? 'play' : 'pause';
  chrome.runtime.sendMessage({
    command,
    source: defaultAudio,
  } satisfies AudioMessage);
}
```
As you can see, the `toggleAudio` function is the most important piece here. First of all, it sets up the offscreen document. It's safe to call it multiple times, as it does nothing if the document is already created. Then it decides whether it should send the `"play"` or `"pause"` command, depending on the URL of the current tab. Finally, it sends the message. As I've mentioned, `sendMessage` doesn't have a generic variant (`sendMessage<T>`), so it's non-trivial to specify the message type, but the TS `satisfies` operator helps with making sure that the object we are sending is of the `AudioMessage` type.
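To see the trick in isolation, here's a small sketch - `sendMessage` below is a simplified stand-in for `chrome.runtime.sendMessage`, which accepts any JSON-serializable value:

```typescript
interface AudioMessage {
  command: 'play' | 'pause';
  source?: string;
}

// Simplified stand-in for chrome.runtime.sendMessage: no type parameter,
// accepts anything JSON-serializable
function sendMessage(message: unknown): string {
  return JSON.stringify(message);
}

// `satisfies` checks the literal against AudioMessage at compile time
// without widening its type, so a typo like `command: 'paly'` or an extra
// property would be a compile-time error
const payload = sendMessage({
  command: 'play',
  source: 'assets/audio.mp3',
} satisfies AudioMessage);

console.log(payload); // {"command":"play","source":"assets/audio.mp3"}
```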
Notice also the two constants at the top - here you specify what audio you want to play and at which website.
Receiving the messages in the offscreen document
Finally, we are sending the messages, so now it's time to receive them and play some music 🎶
To do this, we need to implement the script used by `offscreen.html`. It's `dist/offscreen.js`, so here's how the corresponding `ts/offscreen.ts` looks:
```typescript
// ts/offscreen.ts

import { AudioMessage } from './audio-message.js';

let audio: HTMLAudioElement | null = null;

// Listen for messages from the extension
chrome.runtime.onMessage.addListener((msg: AudioMessage) => {
  audio ??= new Audio(msg.source);
  audio?.[msg.command]();
  return undefined;
});
```
In short, if we haven't created the `HTMLAudioElement` yet, we create it using the provided source, and then we play/pause it. Returning `undefined` is needed for typing purposes. If you're interested in the meaning of the different return values, check the docs.
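The create-once-then-dispatch pattern in that listener can be exercised outside the browser - `FakeAudio` below is a hypothetical stand-in for `HTMLAudioElement`, which isn't available in a plain Node environment:

```typescript
type Command = 'play' | 'pause';

// Hypothetical stand-in for HTMLAudioElement that records the
// commands it receives instead of actually playing sound
class FakeAudio {
  log: Command[] = [];
  constructor(public src?: string) {}
  play() { this.log.push('play'); }
  pause() { this.log.push('pause'); }
}

let audio: FakeAudio | null = null;

// Mirrors the onMessage listener from ts/offscreen.ts
function onMessage(msg: { command: Command; source?: string }) {
  audio ??= new FakeAudio(msg.source); // create once, reuse afterwards
  audio[msg.command]();                // dispatch 'play'/'pause' by name
}

onMessage({ command: 'play', source: 'assets/audio.mp3' });
onMessage({ command: 'pause' });
console.log(audio!.log.join(', ')); // play, pause
```

Both messages end up on the same `FakeAudio` instance - the `??=` assignment only creates it on the first message, which is exactly why the playback position survives tab switches.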
Summary
Try it out! Go to www.example.com (or whatever website you've set) and see if the audio is playing. Try switching tabs back and forth and verify that it correctly stops and resumes.
Take into account that if you pause the music for more than 30 seconds, it will restart from the beginning, as the service worker will be terminated by the browser! Here are some docs about that.
To summarize what we did:
- we updated our manifest.json with the required permissions to create an offscreen document and monitor activity on tabs
- we made the service worker observe activity on tabs and send adequate commands to the script living in the offscreen document
- we started playing audio via a script that receives messages from the service worker and controls the DOM of the offscreen document
I hope it was clear and easy to follow! There's quite a natural progression of this extension - letting the user specify different websites and assign different audio to each of them. Hopefully, I'll add that when I have some time and write another post describing my approach.
For now, thanks for reading!