There's a weird inertia in JavaScript development. We spend years getting comfortable with features like async/await or destructuring, and then we just... stop. The language keeps evolving, TC39 keeps shipping proposals, and most of us keep writing the same patterns we learned in 2018.
I'm guilty of it too. I was writing Array.prototype.reduce() boilerplate for grouping operations until last month, completely unaware that Object.groupBy() shipped in March 2024 across all major browsers. I was manually managing promise resolvers with that awkward outer-scope pattern when Promise.withResolvers() has been standardized since ES2024.
This isn't another "here's what's new in ES2025" listicle. This is about the features that are already here, already shipping in browsers and Node.js, that you're probably not using because nobody told you about them, or because they slipped through while you were busy shipping features.
Let's fix that.
Promise.withResolvers(): Stop Fighting Promise Constructors
If you've been writing JavaScript for any length of time, you've written this pattern:
let resolve, reject;
const promise = new Promise((res, rej) => {
resolve = res;
reject = rej;
});
// Later, somewhere else in your code:
if (someCondition) {
resolve(data);
} else {
reject(error);
}
This works, but it feels wrong. You're declaring variables in an outer scope just to capture them inside the Promise constructor, creating this weird dependency dance. The Promise constructor API—the "revealing constructor pattern"—was designed to keep resolve and reject private to the code constructing the promise. But sometimes you need to control promise resolution from outside.
Enter Promise.withResolvers(), standardized in ES2024:
const { promise, resolve, reject } = Promise.withResolvers();
// Now you can use these anywhere
if (someCondition) {
resolve(data);
} else {
reject(error);
}
That's it. One line instead of five, no outer scope pollution, no let declarations. The method returns an object with three properties: the promise itself, and its resolve and reject functions.
Where This Actually Matters
This isn't just syntactic sugar. It fundamentally changes how you structure asynchronous code that doesn't fit the constructor pattern.
Event-driven flows: Consider a dialog component where user actions determine promise settlement:
class ApprovalDialog {
show() {
const { promise, resolve, reject } = Promise.withResolvers();
this.dialog.showModal();
this.approveButton.onclick = () => {
this.dialog.close();
resolve('approved');
};
this.rejectButton.onclick = () => {
this.dialog.close();
reject('rejected');
};
return promise;
}
}
// Usage
const dialog = new ApprovalDialog();
try {
const result = await dialog.show();
console.log('User approved:', result);
} catch (e) {
console.log('User rejected');
}
The handlers for approve and reject aren't nested inside the promise constructor. They're separate event listeners, but they still settle the same promise. This was always possible with the outer-scope pattern, but withResolvers() makes it explicit and clean.
Debouncing with promises: Here's a pattern I use constantly—debouncing expensive operations but returning a promise for each call:
function createDebouncedFetch(delay) {
let timeout;
let currentPromise = null;
return function debouncedFetch(url) {
clearTimeout(timeout);
if (!currentPromise) {
const { promise, resolve, reject } = Promise.withResolvers();
currentPromise = { promise, resolve, reject };
}
timeout = setTimeout(async () => {
// Capture the shared resolvers and clear them before awaiting, so a call
// that arrives mid-fetch starts a fresh promise instead of finding
// currentPromise already nulled out
const pending = currentPromise;
currentPromise = null;
try {
const response = await fetch(url);
const data = await response.json();
pending.resolve(data);
} catch (error) {
pending.reject(error);
}
}, delay);
return currentPromise.promise;
};
}
const debouncedSearch = createDebouncedFetch(300);
// All these calls return the same promise, but only the last triggers the fetch
searchInput.addEventListener('input', async (e) => {
const results = await debouncedSearch(`/api/search?q=${e.target.value}`);
displayResults(results);
});
Queue implementations: Building a simple async queue becomes cleaner:
class AsyncQueue {
constructor() {
this.queue = [];
this.processing = false;
}
async enqueue(task) {
const { promise, resolve, reject } = Promise.withResolvers();
this.queue.push({ task, resolve, reject });
this.process();
return promise;
}
async process() {
if (this.processing || this.queue.length === 0) return;
this.processing = true;
const { task, resolve, reject } = this.queue.shift();
try {
const result = await task();
resolve(result);
} catch (error) {
reject(error);
} finally {
this.processing = false;
this.process();
}
}
}
The Nuance Nobody Mentions
Calling resolve() doesn't always settle the promise. If you pass another promise to resolve(), the original promise locks onto that promise's state:
const { promise, resolve } = Promise.withResolvers();
resolve(new Promise(() => {})); // Promise that never settles
// `promise` is now forever pending
This is standard promise behavior, but it catches people off guard. The term "resolve" is slightly misleading—it means "determine the fate," not "settle immediately." Understanding this matters when you're using withResolvers() in complex async flows.
Set Methods: Finally, Native Set Operations
For nearly a decade, JavaScript had Sets but no way to perform set operations. Want the intersection of two Sets? Write your own function or import Lodash. Want the union? Same thing.
That ended in June 2024 when all major browsers shipped the Set methods proposal. Now you have:
- intersection()
- union()
- difference()
- symmetricDifference()
- isSubsetOf()
- isSupersetOf()
- isDisjointFrom()
These are methods on Set instances, and they return new Sets (no mutation).
Why This Matters More Than You Think
Set operations aren't some academic exercise. They're fundamental to real application logic that we've just been implementing badly.
Permissions and roles:
const userPermissions = new Set(['read', 'write', 'comment']);
const requiredPermissions = new Set(['write', 'delete']);
// Can the user perform this action?
const hasPermissions = requiredPermissions.isSubsetOf(userPermissions);
// false - user lacks 'delete'
// What's missing?
const missing = requiredPermissions.difference(userPermissions);
// Set { 'delete' }
Tag-based filtering:
const postTags = new Set(['javascript', 'react', 'hooks']);
const filterTags = new Set(['react', 'vue', 'angular']);
// Posts matching any filter tag
const hasAnyTag = !postTags.isDisjointFrom(filterTags);
// true - they share 'react'
// Which tags match?
const matchingTags = postTags.intersection(filterTags);
// Set { 'react' }
Deduplicating across sources:
const cacheLiterals = new Set(['user:123', 'post:456']);
const apiData = new Set(['user:123', 'comment:789']);
// What do we already have cached?
const alreadyCached = apiData.intersection(cacheLiterals);
// Set { 'user:123' }
// What's new from the API?
const newData = apiData.difference(cacheLiterals);
// Set { 'comment:789' }
// Everything we know about
const allData = cacheLiterals.union(apiData);
// Set { 'user:123', 'post:456', 'comment:789' }
Performance Characteristics
These aren't just convenience wrappers. The implementations are optimized: for intersection() and difference(), the spec iterates over the smaller of the two sets, giving O(min(n, m)) work instead of always paying for the larger one.
Compare with the hand-rolled version:
// Always iterates all of setA, even if setB has a single element
function manualIntersection(setA, setB) {
return new Set([...setA].filter(x => setB.has(x)));
}
// Native method iterates whichever set is smaller
const fast = setA.intersection(setB);
The SQL Connection
If you've worked with SQL, these operations map directly to joins:
- union() → FULL OUTER JOIN
- intersection() → INNER JOIN
- difference() → LEFT JOIN (excluding matches)
- symmetricDifference() → FULL OUTER JOIN (excluding matches)
That mental model helps. If you're building data pipelines or transforming query results, thinking in terms of set operations often clarifies the logic.
Immutable Array Methods: toSorted, toReversed, toSpliced
ES2023 shipped four methods that mirror existing array methods but return new arrays instead of mutating:
- toSorted() (instead of sort())
- toReversed() (instead of reverse())
- toSpliced() (instead of splice())
- with() (instead of bracket assignment)
If you're rolling your eyes because "I just use the spread operator," hold on.
The Problem With Mutation
Array mutation has bitten all of us:
function displaySortedUsers(users) {
return users.sort((a, b) => a.name.localeCompare(b.name));
}
const myUsers = [{ name: 'Bob' }, { name: 'Alice' }];
const sorted = displaySortedUsers(myUsers);
console.log(myUsers);
// [{ name: 'Alice' }, { name: 'Bob' }]
// Wait, what? We just mutated the original array.
The spread workaround:
function displaySortedUsers(users) {
return [...users].sort((a, b) => a.name.localeCompare(b.name));
}
This works but requires remembering to spread before sorting. The immutable methods eliminate that cognitive load:
function displaySortedUsers(users) {
return users.toSorted((a, b) => a.name.localeCompare(b.name));
}
Same result, no mutation, no spread operator to remember.
React and Immutability
This is particularly valuable in React:
// Bad - doesn't trigger re-render
const handleSort = () => {
state.items.sort((a, b) => a.value - b.value);
setState({ items: state.items }); // Same reference!
};
// Old fix - verbose
const handleSort = () => {
setState({
items: [...state.items].sort((a, b) => a.value - b.value)
});
};
// New way - clean
const handleSort = () => {
setState({
items: state.items.toSorted((a, b) => a.value - b.value)
});
};
The immutable methods return new arrays, which React's reconciliation picks up immediately.
toSpliced: The Underappreciated One
splice() is notoriously confusing because it mutates, returns the removed items, and takes weird parameters. toSpliced() fixes this:
const items = ['a', 'b', 'c', 'd'];
// Remove 2 items starting at index 1
const remaining = items.toSpliced(1, 2);
// ['a', 'd'] - a new array; note it returns the result, not the removed items
// items is still ['a', 'b', 'c', 'd']
// Insert without removing
const inserted = items.toSpliced(2, 0, 'X', 'Y');
// ['a', 'b', 'X', 'Y', 'c', 'd']
// Replace
const replaced = items.toSpliced(1, 2, 'Z');
// ['a', 'Z', 'd']
Common use case—removing an item by index:
// Old way
const newItems = [...items.slice(0, index), ...items.slice(index + 1)];
// New way
const newItems = items.toSpliced(index, 1);
with(): Immutable Index Updates
Updating a single element immutably used to require spreading or mapping:
const items = ['a', 'b', 'c'];
// Old
const updated = [...items];
updated[1] = 'X';
// Or
const updated = items.map((item, i) => i === 1 ? 'X' : item);
// New
const updated = items.with(1, 'X');
// ['a', 'X', 'c']
Clean, clear, no unnecessary array iteration.
Object.groupBy and Map.groupBy: Stop Writing Reduce Boilerplate
Grouping array elements is one of the most common operations in JavaScript, yet until ES2024, there was no native way to do it. We all wrote variations of this:
const groupedByCategory = products.reduce((acc, product) => {
const key = product.category;
if (!acc[key]) {
acc[key] = [];
}
acc[key].push(product);
return acc;
}, {});
This works, but it's verbose, error-prone (forgot to initialize the array?), and not immediately readable. The same logic appears in codebases hundreds of times, slightly different each time.
Now we have Object.groupBy():
const groupedByCategory = Object.groupBy(products, p => p.category);
One line. That's the entire operation.
How It Works
Object.groupBy(array, callbackFn) takes an array and a callback that returns the grouping key for each element. It returns an object where keys are the group names and values are arrays of elements:
const transactions = [
{ amount: 100, type: 'credit' },
{ amount: 50, type: 'debit' },
{ amount: 200, type: 'credit' },
{ amount: 75, type: 'debit' }
];
const byType = Object.groupBy(transactions, t => t.type);
// {
// credit: [{ amount: 100, type: 'credit' }, { amount: 200, type: 'credit' }],
// debit: [{ amount: 50, type: 'debit' }, { amount: 75, type: 'debit' }]
// }
The callback can return anything—strings, numbers, booleans, whatever. Non-string values get coerced to strings:
const byAmount = Object.groupBy(transactions, t => t.amount > 100);
// {
// 'false': [...],
// 'true': [...]
// }
Important Gotcha: Null Prototype
The returned object has no prototype:
const grouped = Object.groupBy(items, keyFn);
grouped.hasOwnProperty('someKey'); // TypeError!
This prevents prototype pollution but means you can't use methods like hasOwnProperty() directly. Use the static Object.hasOwn() instead:
Object.hasOwn(grouped, 'someKey'); // OK
Object.keys(grouped); // OK
Map.groupBy: When Keys Aren't Strings
Map.groupBy() does the same thing but returns a Map, which allows any type as keys:
const users = [
{ name: 'Alice', dept: { id: 1, name: 'Engineering' } },
{ name: 'Bob', dept: { id: 2, name: 'Sales' } },
{ name: 'Charlie', dept: { id: 1, name: 'Engineering' } }
];
const byDept = Map.groupBy(users, u => u.dept);
// Map {
// { id: 1, name: 'Engineering' } => [Alice, Charlie],
// { id: 2, name: 'Sales' } => [Bob]
// }
Notice we're using object references as keys. With Object.groupBy(), those would get stringified to "[object Object]", which would group everything together. Map.groupBy() preserves the object references.
Real-World Usage
Time-series data:
const events = [
{ timestamp: '2024-01-01T10:00:00Z', event: 'login' },
{ timestamp: '2024-01-01T11:00:00Z', event: 'purchase' },
{ timestamp: '2024-01-02T09:00:00Z', event: 'login' }
];
const byDay = Object.groupBy(events, e => e.timestamp.split('T')[0]);
// {
// '2024-01-01': [...],
// '2024-01-02': [...]
// }
Multi-level grouping:
const items = [
{ category: 'electronics', brand: 'Apple', price: 999 },
{ category: 'electronics', brand: 'Samsung', price: 799 },
{ category: 'clothing', brand: 'Nike', price: 120 }
];
// Group by category, then by brand
const grouped = Object.groupBy(items, i => i.category);
const nested = Object.fromEntries(
Object.entries(grouped).map(([category, items]) => [
category,
Object.groupBy(items, i => i.brand)
])
);
// {
// electronics: {
// Apple: [...],
// Samsung: [...]
// },
// clothing: {
// Nike: [...]
// }
// }
Counting occurrences (though you might prefer Map for this):
const words = ['apple', 'banana', 'apple', 'cherry', 'banana', 'apple'];
const counts = Object.entries(Object.groupBy(words, w => w))
.map(([word, arr]) => [word, arr.length])
.reduce((acc, [word, count]) => ({ ...acc, [word]: count }), {});
// { apple: 3, banana: 2, cherry: 1 }
RegExp /v Flag: Unicode That Actually Works
Regular expressions in JavaScript have had Unicode support via the /u flag since ES6, but it's always been limited. The /v flag, shipping in all major browsers since late 2023, fixes longstanding issues and adds powerful new features.
The Problem With Unicode in JavaScript
JavaScript strings are UTF-16, which means some characters—emoji, non-BMP characters—are represented by two code units (surrogate pairs). The /u flag helped:
'😀'.length; // 2 (WTF)
/^.$/u.test('😀'); // true (OK, u flag works)
But many emoji are actually sequences of multiple code points:
'👨👩👧👦'.length; // 11 UTF-16 code units (4 emoji code points joined by 3 invisible ZWJs = 7 code points)
/^\p{Emoji}$/u.test('👨👩👧👦'); // false (u flag fails)
The /u flag treats each code point separately. For multi-code-point emoji, it doesn't work.
Enter the /v Flag
The /v flag introduces "properties of strings"—Unicode properties that match entire sequences of code points:
/^\p{RGI_Emoji}$/v.test('👨👩👧👦'); // true!
/^\p{RGI_Emoji}$/v.test('👍🏾'); // true (emoji with skin tone modifier)
/^\p{RGI_Emoji}$/v.test('🧑💻'); // true (technologist emoji)
RGI_Emoji means "Recommended for General Interchange"—it matches all valid emoji that Unicode recommends using, regardless of how many code points they contain.
Set Operations in Character Classes
This is the real power move. The /v flag enables set notation in character classes:
Intersection (&&): Match characters that belong to multiple sets:
// Match Greek lowercase letters
/[\p{Lowercase}&&\p{Script=Greek}]/v.test('α'); // true
/[\p{Lowercase}&&\p{Script=Greek}]/v.test('Α'); // false (uppercase)
/[\p{Lowercase}&&\p{Script=Greek}]/v.test('a'); // false (not Greek)
Subtraction (--): Match characters in one set but not another:
// Match letters except vowels
/^[[a-z]--[aeiou]]+$/v.test('rhythm'); // true
/^[[a-z]--[aeiou]]+$/v.test('hello'); // false (has vowels)
// Remove punctuation except sentence enders
const text = '"Hello!" said Alice, smiling.';
const cleaned = text.replace(/[\p{Punctuation}--[.!?]]/gv, '');
// 'Hello! said Alice smiling.'
Nested sets: Combine operations:
// Match ASCII characters that are letters or digits, but not vowels
/^[[A-Za-z0-9]--[aeiouAEIOU]]+$/v.test('pr1v4t3'); // true
Practical Applications
Validating international names:
// Allow letters from any script, combining marks, common punctuation, spaces
// (note the escaped hyphen - /v requires it inside character classes)
function isValidName(name) {
return /^[\p{Letter}\p{Mark}\s'.\-]+$/v.test(name);
}
isValidName('José García'); // true
isValidName('Σωκράτης'); // true (Greek)
isValidName('मोहन दास'); // true (Hindi)
isValidName('李明'); // true (Chinese)
isValidName('O\'Brien'); // true
isValidName('user@example.com'); // false
Stripping accents while preserving base letters:
// Decompose first (NFD splits 'é' into 'e' + combining accent),
// then remove the combining diacritical marks
const stripAccents = str => str.normalize('NFD').replace(/\p{Mark}/gv, '');
stripAccents('café'); // 'cafe'
stripAccents('naïve'); // 'naive'
stripAccents('Åsa'); // 'Asa'
Script-specific validation:
// Ensure text uses only Latin script (plus script-neutral characters)
/^[\p{Script=Latin}\p{Script=Common}\s]+$/v.test('Hello World'); // true
/^[\p{Script=Latin}\p{Script=Common}\s]+$/v.test('Hello 世界'); // false
// Block homograph attacks (mixing scripts)
// Simplified - checks only a few scripts; a full implementation
// would cover every Unicode Script value
function isSingleScript(text) {
const scripts = new Set();
for (const char of text) {
if (/\p{Script=Latin}/v.test(char)) scripts.add('Latin');
else if (/\p{Script=Cyrillic}/v.test(char)) scripts.add('Cyrillic');
else if (/\p{Script=Greek}/v.test(char)) scripts.add('Greek');
}
return scripts.size <= 1;
}
Caveats and Gotchas
The /v and /u flags are mutually exclusive—using both throws SyntaxError. The /v flag is essentially a superset of /u, so just use /v.
Character class escaping rules are stricter with /v. Some characters that didn't need escaping before now do:
// Without /v - these work
/[()]/u.test('('); // true
/[{}]/u.test('{'); // true
// With /v - need to escape
/[\(\)]/v.test('('); // true
/[\{\}]/v.test('{'); // true
Atomics.waitAsync: Shared Memory Without Blocking
If you're doing any serious work with Web Workers and SharedArrayBuffer, you've probably run into the limitation of Atomics.wait(): it blocks the thread. That's fine in a worker, but it's forbidden on the main thread (calling it throws TypeError).
Atomics.waitAsync(), standardized in ES2024, is the non-blocking version. It returns immediately with a Promise instead of blocking.
Why Shared Memory and Atomics Matter
JavaScript is single-threaded, but with Web Workers, you can run code in parallel. The problem is communication. postMessage() works, but it serializes data, which is expensive for large structures.
SharedArrayBuffer lets workers share memory directly. No serialization, no copying. But with shared memory comes race conditions, which is where Atomics comes in.
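The "no copying" part is easy to see even without spinning up a worker: two typed-array views over the same SharedArrayBuffer observe each other's writes. A worker receives exactly such a view after you postMessage the buffer to it:

```javascript
const sab = new SharedArrayBuffer(8);

// In a real app, viewB would live in a worker that received `sab` via postMessage
const viewA = new Int32Array(sab);
const viewB = new Int32Array(sab);

Atomics.store(viewA, 0, 42);
console.log(Atomics.load(viewB, 0)); // 42 - same memory, nothing serialized or copied
```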
The Old Way: Atomics.wait
// In worker
const buffer = new SharedArrayBuffer(4);
const view = new Int32Array(buffer);
// Block until view[0] changes from 0 to something else
const result = Atomics.wait(view, 0, 0);
// Thread is blocked until another thread calls Atomics.notify()
This is fine for workers, but you can't do this on the main thread without freezing the UI.
The New Way: Atomics.waitAsync
const buffer = new SharedArrayBuffer(4);
const view = new Int32Array(buffer);
const result = Atomics.waitAsync(view, 0, 0);
// Returns immediately
if (result.async) {
result.value.then(state => {
console.log(state); // 'ok' or 'timed-out'
});
} else {
console.log(result.value); // 'not-equal'
}
If the value doesn't match the expected value, it returns { async: false, value: 'not-equal' } immediately.
If it matches, it returns { async: true, value: Promise }. The promise resolves when Atomics.notify() is called or the timeout expires.
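Because of the two return shapes, it's common to normalize them behind a small helper. This waitUntilChanged wrapper is a hypothetical convenience, not part of the spec:

```javascript
// Hypothetical helper: always hand the caller a Promise that
// resolves to 'ok', 'timed-out', or 'not-equal'
function waitUntilChanged(view, index, expected, timeout) {
  const result = Atomics.waitAsync(view, index, expected, timeout);
  return result.async ? result.value : Promise.resolve(result.value);
}

// Usage
const sab = new SharedArrayBuffer(4);
const view = new Int32Array(sab);
waitUntilChanged(view, 0, 1).then(outcome => {
  console.log(outcome); // 'not-equal' - the value is 0, not 1
});
```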
Building an Async Lock
Here's a practical example—implementing a mutex that works both synchronously (in workers) and asynchronously (on the main thread):
class AsyncLock {
static UNLOCKED = 0;
static LOCKED = 1;
constructor(sab) {
this.buffer = new Int32Array(sab);
}
async lock() {
while (true) {
// Try to acquire the lock
const oldValue = Atomics.compareExchange(
this.buffer, 0,
AsyncLock.UNLOCKED,
AsyncLock.LOCKED
);
if (oldValue === AsyncLock.UNLOCKED) {
// We got the lock
return;
}
// Lock is held by someone else, wait for notification
const result = Atomics.waitAsync(
this.buffer, 0, AsyncLock.LOCKED
);
if (result.async) {
await result.value;
}
// If not async, the value changed between our check and wait - try again
}
}
unlock() {
Atomics.store(this.buffer, 0, AsyncLock.UNLOCKED);
Atomics.notify(this.buffer, 0, 1);
}
async executeLocked(fn) {
await this.lock();
try {
return await fn();
} finally {
this.unlock();
}
}
}
// Usage
const buffer = new SharedArrayBuffer(4);
const lock = new AsyncLock(buffer);
// In main thread or worker
await lock.executeLocked(async () => {
// Critical section - only one thread executes this at a time
await expensiveOperation();
});
Real-World Use Cases
Coordinating workers: Wait for all workers to reach a synchronization point:
class Barrier {
constructor(count, sab) {
this.count = count;
this.buffer = new Int32Array(sab);
Atomics.store(this.buffer, 0, 0); // Current count
}
async wait() {
const current = Atomics.add(this.buffer, 0, 1) + 1;
if (current === this.count) {
// Last one in - notify everyone
Atomics.notify(this.buffer, 0);
return;
}
// Wait for the last one
while (Atomics.load(this.buffer, 0) < this.count) {
const result = Atomics.waitAsync(this.buffer, 0, current);
if (result.async) {
await result.value;
}
}
}
}
Producer-consumer queue: Implement async message passing between workers:
class SharedQueue {
constructor(size, sab) {
this.size = size;
this.buffer = new Int32Array(sab);
// buffer[0] = read index
// buffer[1] = write index
// buffer[2..size+1] = data
}
async enqueue(value) {
while (true) {
const writeIdx = Atomics.load(this.buffer, 1);
const readIdx = Atomics.load(this.buffer, 0);
if ((writeIdx + 1) % this.size === readIdx) {
// Queue full, wait for space
const result = Atomics.waitAsync(this.buffer, 0, readIdx);
if (result.async) await result.value;
continue;
}
// Write value
Atomics.store(this.buffer, 2 + writeIdx, value);
Atomics.store(this.buffer, 1, (writeIdx + 1) % this.size);
Atomics.notify(this.buffer, 1, 1);
return;
}
}
async dequeue() {
while (true) {
const readIdx = Atomics.load(this.buffer, 0);
const writeIdx = Atomics.load(this.buffer, 1);
if (readIdx === writeIdx) {
// Queue empty, wait for data
const result = Atomics.waitAsync(this.buffer, 1, writeIdx);
if (result.async) await result.value;
continue;
}
// Read value
const value = Atomics.load(this.buffer, 2 + readIdx);
Atomics.store(this.buffer, 0, (readIdx + 1) % this.size);
Atomics.notify(this.buffer, 0, 1);
return value;
}
}
}
Performance Considerations
SharedArrayBuffer and Atomics are low-level primitives. Using them effectively requires understanding:
Cache coherence: Modern CPUs cache memory. Atomic operations force cache synchronization, which is expensive. Minimize atomic operations in hot loops.
False sharing: If two threads frequently access different indices in the same cache line (typically 64 bytes), they'll thrash the cache. Pad your data structures:
// Bad - false sharing likely
const buffer = new Int32Array(sharedArrayBuffer);
// Thread 1 writes buffer[0]
// Thread 2 writes buffer[1]
// These are in the same cache line, causing thrashing
// Better - pad to different cache lines
const buffer = new Int32Array(sharedArrayBuffer);
// Thread 1 writes buffer[0]
// Thread 2 writes buffer[16] // 64 bytes apart
Busy waiting: Atomics.waitAsync() is more efficient than a polling loop, but it's still waiting. Design your algorithms to minimize waiting.
Array.prototype.findLast and findLastIndex: Searching Backwards
These shipped in ES2023 and are exactly what they sound like. Instead of find() and findIndex(), which search from the start, these search from the end.
const items = [
{ id: 1, status: 'active' },
{ id: 2, status: 'inactive' },
{ id: 3, status: 'active' }
];
// Old way - find first active item
const first = items.find(item => item.status === 'active');
// { id: 1, status: 'active' }
// New way - find last active item
const last = items.findLast(item => item.status === 'active');
// { id: 3, status: 'active' }
When You Actually Need This
Log files and event streams: Finding the most recent entry matching criteria:
const logs = [
{ timestamp: '2024-01-01T10:00:00Z', level: 'info', message: 'Started' },
{ timestamp: '2024-01-01T10:05:00Z', level: 'error', message: 'Failed' },
{ timestamp: '2024-01-01T10:10:00Z', level: 'info', message: 'Retrying' },
{ timestamp: '2024-01-01T10:15:00Z', level: 'error', message: 'Failed again' }
];
const lastError = logs.findLast(log => log.level === 'error');
// { timestamp: '2024-01-01T10:15:00Z', level: 'error', message: 'Failed again' }
Undo/redo stacks: Finding the last undoable action:
const history = [
{ action: 'insert', char: 'h', undoable: true },
{ action: 'insert', char: 'e', undoable: true },
{ action: 'save', undoable: false },
{ action: 'insert', char: 'l', undoable: true }
];
const lastUndoable = history.findLast(h => h.undoable);
const indexToUndo = history.findLastIndex(h => h.undoable);
Reverse iteration with a predicate: Before findLast, you'd write:
// Reverse the array, find, then reverse back
const reversed = [...items].reverse();
const found = reversed.find(predicate);
// Or slice and reverse
const found = items.slice().reverse().find(predicate);
Both allocate a new reversed array. findLast() avoids that allocation entirely.
String.prototype.isWellFormed and toWellFormed: Unicode Hygiene
JavaScript strings can contain malformed Unicode—lone surrogates that don't form valid characters. This happens when you manipulate strings at the code unit level:
const str = 'Hello \uD800 World'; // Lone high surrogate
Most of the time, this doesn't matter. But when you need to encode strings (URL encoding, base64, sending to an API), lone surrogates cause problems:
encodeURIComponent('Test \uD800'); // URIError: URI malformed
isWellFormed and toWellFormed
ES2024 added two methods:
'Hello World'.isWellFormed(); // true
'Hello \uD800 World'.isWellFormed(); // false
'Hello \uD800 World'.toWellFormed();
// 'Hello � World' (replaces lone surrogate with U+FFFD)
toWellFormed() replaces lone surrogates with the replacement character (�), ensuring the string can be safely encoded.
When This Matters
API communication: Before sending user-generated content to an API:
function safelyEncode(str) {
return encodeURIComponent(str.toWellFormed());
}
Database storage: Some databases (MySQL's utf8mb4, PostgreSQL) reject malformed Unicode. Sanitizing input:
async function saveUserInput(text) {
const wellFormed = text.toWellFormed();
await db.query('INSERT INTO messages (text) VALUES (?)', [wellFormed]);
}
Cross-origin communication: postMessage can fail with malformed strings in some implementations:
worker.postMessage({ text: userInput.toWellFormed() });
Why Aren't We Using These Features?
The adoption lag isn't purely ignorance. There are legitimate reasons:
Browser support: If you're supporting IE11 or older mobile browsers, many of these features aren't available. But if you're targeting modern environments—which, in 2026, you probably are—the support is there.
Build tool friction: Some features require Babel plugins or TypeScript updates. Teams running older build configs don't get new features automatically.
Documentation gaps: MDN is great, but it doesn't explain when to use features, only how. This article is trying to fill that gap.
Mental inertia: We learn patterns that work and stick with them. It takes active effort to update your mental toolkit.
Code review norms: If your team doesn't know these features exist, they'll flag them in reviews as "too clever" or "non-standard."
How to Start Using This Stuff
Here's my recommendation:
1. Pick one feature. Don't try to adopt everything at once. Choose the one that solves a pain point you have right now. If you're constantly writing reduce() to group arrays, start with Object.groupBy().
2. Use it in a low-risk context. New feature flag? Isolated utility function? Somewhere that won't blow up production if something goes wrong.
3. Share it with your team. Write a Slack message, do a short demo. "Hey, I learned about this new array method, check it out." Normalize using new features.
4. Update your linter config. If you're running ESLint with ecmaVersion: 2020, bump it to 2024. New features will stop being flagged as errors.
5. Check browser support. Use caniuse.com or MDN's browser compatibility tables. For Node.js, check the node.green compatibility table.
6. Polyfills exist. If you need to support older environments, core-js polyfills most of these features.
The Bigger Picture
JavaScript is in a strange place. The language evolves steadily—TC39 ships new features every year, browsers implement them surprisingly fast, and yet the median JavaScript developer is using ES2018 patterns.
Part of this is inevitable. Languages accumulate features faster than developers can learn them. But part of it is a failure of communication. The spec process is public, but it's not accessible. Blog posts focus on hype ("ES2025 is here!") rather than utility ("here's how this solves your actual problems").
These features matter. Promise.withResolvers() makes event-driven async code cleaner. Set methods eliminate entire classes of bugs. Immutable array methods reduce React footguns. Grouping methods cut through reduce boilerplate.
You don't need to memorize every ECMAScript proposal. But you should be aware of the tools the language gives you. The language is better than it was five years ago. Your code can be too.
Stop writing 2018 JavaScript in 2026. The language has moved on. You should too.


