If you've ever built a web application, especially one with a backend (like a server written in Node.js), you've almost certainly used JSON.stringify. It's the go-to tool for turning structured data (like a list of users or products) into a plain text format that can be sent over the internet as part of an API response, or saved in your browser's local storage.
Well, the V8 JavaScript engine (which powers Chrome and Node.js) has made JSON.stringify more than twice as fast! This is a huge deal: it means quicker page interactions and more responsive applications for all of us.
This was not an optimization in search of a problem. For years, developers in the Node.js community have identified JSON.stringify as a critical performance chokepoint. In community discussions, it has been described as one of the "biggest impediments to just about everything around performant node services". Given that Node.js operates on a single-threaded event loop, a CPU-intensive, blocking operation like serializing a large JSON object can halt the entire server, preventing it from handling other requests. This makes the function's performance a direct factor in a server's scalability and throughput.
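To make that cost concrete, here is an illustrative Node.js sketch (the payload size and field names are invented) showing how a large synchronous stringify delays everything else queued on the event loop:

```javascript
// Illustrative only: a large, synchronous JSON.stringify call blocks
// the event loop, so even a 0 ms timer has to wait for it to finish.
const payload = Array.from({ length: 50_000 }, (_, i) => ({
  id: i,
  name: `user_${i}`,
  active: i % 2 === 0,
}));

const scheduled = Date.now();
setTimeout(() => {
  // This callback can only run once the synchronous work below is done.
  console.log(`timer fired ~${Date.now() - scheduled} ms late`);
}, 0);

const json = JSON.stringify(payload); // CPU-bound, blocking work
console.log(`serialized ${json.length} characters`);
```

On a busy server this same blocking happens on every large response, which is why the serializer's raw speed matters so much.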
The V8 team's work, therefore, addresses a well-known and systemic pain point, directly enabling developers to build more efficient and scalable backend systems. Let's dive into how they pulled this off, with some simple examples.
What Does JSON.stringify Actually Do?
Imagine you have some information organized neatly in your JavaScript code, like a profile for a person.
const userProfile = {
id: 123,
name: "Jane Doe",
isActive: true,
roles: ["admin", "editor"]
};
JSON.stringify takes this structured data and converts it into a plain string of text that can be easily sent over a network or stored:
// This is the output of JSON.stringify(userProfile)
'{"id":123,"name":"Jane Doe","isActive":true,"roles":["admin","editor"]}'
You'll often see this when an API sends back a list of items, like an array of these kinds of objects with the same structure, which is a very common scenario.
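For example, a typical API payload is an array of objects that all share the same shape (the product data below is invented for illustration):

```javascript
// An array of same-shaped objects - the pattern API responses use most.
const products = [
  { id: 1, name: "Keyboard", price: 49.99 },
  { id: 2, name: "Mouse", price: 19.99 },
  { id: 3, name: "Monitor", price: 199.0 },
];

const body = JSON.stringify(products);
console.log(body);
// '[{"id":1,"name":"Keyboard","price":49.99},...]'
```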
The Big Improvements and How They Work
The engineers at V8 re-thought JSON.stringify from the ground up, focusing on core operations like memory management and character handling.
1. The "Super Highway" for Clean Data (Side-Effect-Free Fast Path)
Think of it like shipping a package. If your package is simple, contains only standard items, and doesn't require any custom processing or special instructions, it can go on a super-fast, automated conveyor belt.
V8 now has a special fast path for data that's plain and clean. This means the data doesn't involve any hidden code running or complex internal operations that could cause side effects when it's being converted to JSON.
Most typical data we send, especially in API responses, falls into this "clean" category. This allows V8 to bypass many expensive checks and defensive logic that the older, more general serializer had to perform, leading to a significant speedup for common JavaScript objects.
To stay on the fast path, the data being serialized must be "plain". This means it cannot contain features that would force the engine to execute arbitrary code during the serialization process.
Consider the following examples:
Clean Object (Eligible for the Fast Path):
This object is a simple container for data. It has no hidden logic.
const userProfile = {
id: 42,
username: "dev_user",
isActive: true,
tags: ["javascript", "performance"]
};
// V8 can safely assume serializing this object has no side effects.
// It will use the ultra-fast "Super Highway."
JSON.stringify(userProfile);
Object with Side Effects (Forced onto the Slower, General Path):
This object contains custom logic that gets triggered during serialization, creating side effects.
const userProfileWithSideEffects = {
id: 42,
username: "dev_user",
// A custom .toJSON method is user-defined code. V8 must execute it.
toJSON: function() {
console.log("Custom .toJSON() method was called!"); // A clear side effect (I/O)
return {
userId: this.id,
user: this.username
};
}
};
// When V8 sees the .toJSON() method, it cannot make assumptions.
// It must fall back to the slower, general-purpose serializer to execute the function safely.
JSON.stringify(userProfileWithSideEffects);
Other features that cause a fallback include using a replacer function or encountering a Proxy object, as both involve executing user code that V8 cannot predict.
The replacer function
A replacer is a function passed as the second argument to JSON.stringify. It gets called for every key-value pair in the object being serialized, giving you a chance to modify or skip values on the fly. Since this involves running arbitrary user-defined code for each property, V8 can't predict the outcome and must opt out of its "fast path".
Consider the following example:
Imagine you have user data, but you don't want to expose the user's email and you want to format their lastLogin date for the output.
// The data object
const user = {
id: 42,
username: "jdoe",
email: "john.doe@example.com",
lastLogin: new Date("2025-08-17T08:30:00Z"),
roles: ["editor", "viewer"]
};
// The replacer function
const replacer = (key, value) => {
// 1. Skip the 'email' property
if (key === "email") {
return undefined; // Returning undefined removes the property
}
// 2. Format the 'lastLogin' date
if (key === "lastLogin") {
// V8 can't know this logic ahead of time
return value.toLocaleString('en-US', { timeZone: 'UTC' });
}
return value; // Return the original value for all other keys
};
// Using JSON.stringify with the replacer
const userJson = JSON.stringify(user, replacer, 2); // The '2' is for pretty-printing
console.log(userJson);
Output:
{
"id": 42,
"username": "jdoe",
"lastLogin": "8/17/2025, 8:30:00 AM",
"roles": [
"editor",
"viewer"
]
}
V8 cannot optimize this process because it has to pause and execute the replacer function for every single property (id, username, email, etc.). It's impossible to know in advance that you'll be filtering the email or reformatting the lastLogin date. This unpredictability forces a retreat to the slower, more cautious serialization method.
The Proxy Object
A Proxy is an object that wraps another object and intercepts fundamental operations, such as getting or setting a property. When JSON.stringify tries to access the properties of a Proxy, it's not directly reading the data; it's triggering user-defined "trap" functions (get, has, etc.). This is another form of unpredictable user code that V8's fast path cannot handle.
Consider the following example:
Let's create a proxy that logs every time a property is accessed and hides any property that starts with an underscore (_).
// The original data object, with a "private" property
const userSecret = {
id: 101,
username: "secret_agent",
_missionCode: "Alpha-7" // This should not be serialized
};
// The handler with the trap logic for the Proxy
const handler = {
// This 'get' trap is called whenever a property is read
get: (target, property) => {
console.log(`Intercepted: Getting property "${property}"`);
// Hide properties starting with '_'
if (property.toString().startsWith('_')) {
return undefined;
}
return target[property];
},
// This 'ownKeys' trap is needed for JSON.stringify to know which keys exist
ownKeys: (target) => {
console.log("Intercepted: Getting all keys");
return Object.keys(target).filter(key => !key.startsWith('_'));
}
};
// Create the proxy
const proxyUser = new Proxy(userSecret, handler);
// Try to stringify the proxy
// JSON.stringify will trigger the 'get' and 'ownKeys' traps
const proxyJson = JSON.stringify(proxyUser, null, 2);
console.log("\nFinal JSON output:");
console.log(proxyJson);
Output:
Intercepted: Getting all keys
Intercepted: Getting property "id"
Intercepted: Getting property "username"
Final JSON output:
{
"id": 101,
"username": "secret_agent"
}
When JSON.stringify processes proxyUser, it doesn't just read { id: 101, ... }. Instead, for every property it wants to read, it must execute the logic inside the handler.get and handler.ownKeys traps. The behavior of these traps is completely unpredictable: they could return anything, perform calculations, or even throw errors. V8 cannot make any safe assumptions and must use the "slow path" that correctly invokes the proxy's logic.
Iteration over Recursion
A subtle but critical architectural improvement within the new "fast path" is that it is iterative, not recursive. A traditional approach to traversing a nested object structure often uses recursion, where a function calls itself to process each level of nesting. However, this can lead to a "Maximum call stack size exceeded" error if the object is too deep.
The new iterative design completely sidesteps this problem. It "eliminates the need for stack overflow checks" and allows for the serialization of "significantly deeper nested object graphs". This means the optimization is not just about raw speed; it also enhances the robustness and capability of JSON.stringify, allowing it to handle larger and more complex data structures without crashing.
Consider the following example:
The JavaScript code examples below demonstrate the conceptual difference between a recursive and an iterative approach to traversing an object. The actual high-performance implementation of this iterative logic is handled deep inside the V8 engine's C++ source code. The examples are provided for illustrative purposes to help developers visualize the robust, non-blocking strategy that V8 now uses internally.
Traversing a Nested Object
Let's use a common task for JSON.stringify: traversing a nested object to read all its values.
1. The Recursive Way
In this approach, the traverseRecursively function calls itself for every nested object it finds.
const userProfile = {
name: "Alex",
settings: {
theme: "dark",
notifications: {
email: true,
push: false
}
}
};
function traverseRecursively(obj) {
for (const key in obj) {
console.log(`Found key: ${key}, value: ${obj[key]}`);
// If the value is another object, call the same function on it
if (typeof obj[key] === 'object' && obj[key] !== null) {
traverseRecursively(obj[key]); // <<< RECURSIVE CALL
}
}
}
console.log("--- Recursive Traversal ---");
traverseRecursively(userProfile);
How it works?
1. traverseRecursively is called on userProfile.
2. It processes name and then settings.
3. When it sees settings is an object, it pauses and calls traverseRecursively on the settings object.
4. That second call processes theme and notifications.
5. When it sees notifications is an object, it pauses and calls traverseRecursively a third time.
What is the Danger?
Each function call is added to the call stack. If the object were 20,000 levels deep, you would have 20,000 function calls waiting on the stack, leading to a crash.
// A deeply nested object that will cause a stack overflow
let deepObject = {};
let current = deepObject;
for (let i = 0; i < 20000; i++) {
current.next = {};
current = current.next;
}
// This will likely throw: "Maximum call stack size exceeded"
try {
traverseRecursively(deepObject);
} catch (e) {
console.error(e.message);
}
2. The Iterative Way
This version uses a simple while loop and an array (nodesToVisit) that acts as our to-do list, completely avoiding deep function calls.
const userProfile = {
name: "Alex",
settings: {
theme: "dark",
notifications: {
email: true,
push: false
}
}
};
function traverseIteratively(obj) {
// Our "to-do list" starts with the main object
const nodesToVisit = [obj];
// Loop as long as there are objects to process
while (nodesToVisit.length > 0) {
const currentNode = nodesToVisit.pop(); // Get the next object to work on
for (const key in currentNode) {
console.log(`Found key: ${key}, value: ${currentNode[key]}`);
// If a value is another object, add it to our to-do list
if (typeof currentNode[key] === 'object' && currentNode[key] !== null) {
nodesToVisit.push(currentNode[key]); // <<< NO RECURSIVE CALL
}
}
}
}
console.log("\n--- Iterative Traversal ---");
traverseIteratively(userProfile);
How it works?
1. The nodesToVisit array starts with just [userProfile].
2. The while loop starts. It takes userProfile out of the array.
3. It processes name and settings.
4. When it finds the settings object, it just adds it to the nodesToVisit array. The array is now [settings object].
5. On the next loop, it takes settings out, processes its keys, and adds the notifications object to the array.
What is the Benefit?
This version can handle the deeply nested object without any problems.
let deepObject = {};
let current = deepObject;
for (let i = 0; i < 20000; i++) {
current.next = {};
current = current.next;
}
// This works perfectly, no crash!
console.log("\n--- Iterative on Deep Object ---");
// We won't print every line, but we can confirm it runs.
try {
traverseIteratively(deepObject);
console.log("Successfully traversed deep object iteratively!");
} catch(e) {
console.error("This should not happen:", e.message);
}
Conclusion:
By switching JSON.stringify from a recursive to an iterative internal design, V8 made it more robust. It's not just faster for common cases; it's now capable of handling extremely deep, complex objects without the risk of crashing from a "Maximum call stack size exceeded" error. This makes it a more reliable tool for heavy-duty data serialization.
2. The "Express Lane" for Similar Data (A huge win for API responses!)
Within the main "Super Highway," V8 introduced an even faster "Express Lane." This optimization provides a massive speed boost for one of the most common use cases on the web: serializing an array of objects that all have the same structure, such as a list of products or users returned from an API. To understand how this works, one must first understand a core concept of V8's internal design: Hidden Classes.
A Primer on V8's Hidden Classes
JavaScript is a dynamically typed language, meaning you can add or remove properties from an object at any time. While flexible, this poses a performance challenge. If objects are treated like simple dictionaries, accessing a property like user.email would require a slow string-based lookup to find the memory location of that property's value.
To solve this, V8 uses a mechanism called Hidden Classes (also known internally as Maps). When an object is created, V8 creates a hidden class that acts as an internal blueprint, describing the object's "shape" - its properties and the order they were added in. When another object is created with the exact same shape, V8 can reuse the same hidden class. This allows the engine to treat the object's properties as if they had a fixed layout, similar to statically typed languages like Java. Instead of a slow dictionary lookup, V8 knows the exact memory offset for each property, enabling incredibly fast access.
What might have happened without this (Hidden Classes) optimization?
At its core, a basic JavaScript object is like a dictionary or a hash map. It's a collection of key-value pairs. When you write user.email, you're telling the engine to find the value associated with the key "email" inside the user object.
Without optimizations, the engine would have to perform a string-based lookup:
1. Get the key: The engine takes the string "email".
2. Hash the key: It might run the string through a hashing function to get a unique identifier, like 48291.
3. Find the key in memory: It then has to search through the object's internal list of properties to find a match for the string "email" (or its hash).
4. Retrieve the value's location: Once it finds the matching key, it gets the memory address where the actual value ("jane.doe@example.com") is stored.
5. Return the value: Finally, it fetches the data from that memory address.
This process, especially the "find the key" step, is computationally expensive. It has to be repeated every single time you access user.email, even inside a tight loop running thousands of times. For a dynamic language, this is the default, safe way to do things, but it's not fast.
How do Hidden Classes (Shapes) solve this problem?
The engineers behind V8 realized that in most real-world applications, developers create many objects with the exact same structure (same properties in the same order). V8 leverages this pattern to avoid the slow string lookups.
It does this using a concept called Hidden Classes (also known as Shapes or Maps).
Here’s how it works:
Step 1: Creating a Hidden Class
When you create an object, V8 creates a Hidden Class behind the scenes. This acts as a blueprint for that object's shape.
| JS Code | V8's action |
|---|---|
| const user1 = {}; | Creates a blank Hidden Class, let's call it C0. |
| user1.name = "Jane"; | Creates a new Hidden Class C1 based on C0. C1 describes an object that has a single property, name. V8 also records the memory offset for name: it knows name is stored right at the beginning of the object's memory block (offset 0). |
| user1.email = "jane.doe@example.com"; | Creates another Hidden Class C2 based on C1. C2 describes an object with properties name and email. V8 records that email is stored at the next memory location (offset 1). |
The user1 object is now tagged with Hidden Class C2.
Step 2: Reusing the Hidden Class
Now, here comes the magic. When you create a second object with the same properties in the same order:
const user2 = {};
user2.name = "John";
user2.email = "john.smith@example.com";
V8 follows the exact same steps. It sees you're creating an object with the same "shape" as user1. Instead of creating new Hidden Classes, it simply tags user2 with the existing Hidden Class C2.
Step 3: The High-Speed Access
Now, when your code runs console.log(user1.email) or console.log(user2.email), V8 performs a much faster, direct memory access:
1. Check Hidden Class: V8 sees that both user1 and user2 are using Hidden Class C2.
2. Look up offset in the blueprint: It consults the C2 blueprint and sees that the email property is always at a fixed memory offset (e.g., offset 1).
3. Direct Memory Access: It jumps directly to that memory address for the given object and retrieves the value.
There is no string comparison and no searching. It's as fast as accessing a property in a low-level, compiled language like C++. This optimization transforms the slow, dynamic lookup into a simple calculation: object_memory_address + property_offset.
What Breaks the Optimization?
This speed boost only works if objects share a Hidden Class. If you add or delete properties or change their order, V8 has to create a new blueprint and can't use the same optimized path.
const user1 = { name: "Jane", email: "jane@example.com" }; // Uses Hidden Class C2
const user2 = { email: "john@example.com", name: "John" }; // Different order -> New Hidden Class C3!
user1.age = 30; // This forces user1 to transition to yet another Hidden Class.
This is why a common performance tip for JavaScript is to always initialize your objects with the same structure and avoid adding or removing properties after creation.
The Express Lane in Action
The "Express Lane" optimization leverages this hidden class mechanism brilliantly. The process works as follows:
1. JSON.stringify is called on an array of user objects, all sharing the same hidden class (e.g., C2 from the example above).
2. When processing the first object in the array, the fast path performs its standard series of checks for each property key: confirm it's not a Symbol, ensure it's enumerable, and scan the key string for characters that require escaping.
3. If all keys pass these checks, V8 does something clever: it attaches a special flag to the object's hidden class (C2), marking it as "fast-json-iterable". Think of this as putting a "fast pass" sticker on the blueprint for this type of object.
4. When V8 moves to the second object, it first checks its hidden class. It sees that it's also C2 and that C2 has the "fast-json-iterable" flag.
5. This is the magic moment. Because of that flag, V8 can now completely skip all the property key checks. It already knows the keys "id" and "name" are safe and require no special handling. It can copy them directly to the output string buffer at maximum speed.
6. This process repeats for every subsequent object in the array that shares the same flagged hidden class.
This optimization creates a powerful feedback loop that rewards a performance-friendly coding pattern known as monomorphism: creating objects with consistent, predictable shapes. By structuring data consistently (e.g., always returning {id, name, email} from an API endpoint), developers automatically unlock this massive performance boost, not just in JSON.stringify but across many other parts of the V8 engine that rely on hidden classes for optimization.
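As a sketch of what this means in practice (the data below is invented), the first array is monomorphic and is the kind of input that can benefit from the Express Lane, while the second mixes key orders and cannot:

```javascript
// Same keys, same order: one shared hidden class for every element.
const fastUsers = [
  { id: 1, name: "Ada", email: "ada@example.com" },
  { id: 2, name: "Lin", email: "lin@example.com" },
];

// Different key order: a second hidden class, so V8 must re-check
// property keys instead of reusing the "fast-json-iterable" flag.
const slowUsers = [
  { id: 1, name: "Ada", email: "ada@example.com" },
  { email: "lin@example.com", id: 2, name: "Lin" },
];

const fastJson = JSON.stringify(fastUsers); // Express Lane candidate
const slowJson = JSON.stringify(slowUsers); // still correct, just slower
```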
3. Hardware-Level Acceleration
The V8 team did not stop at high-level architectural and algorithmic improvements; they pushed all the way down to the hardware level, leveraging special features of modern CPUs to accelerate the most fundamental operations: handling strings and numbers.
Parallel Processing in a Single Core: SIMD and SWAR
One of the most powerful features of modern processors is SIMD (Single Instruction, Multiple Data). In essence, SIMD allows a single CPU instruction to perform the same operation on multiple pieces of data simultaneously. A helpful analogy is applying a brightness filter in a photo editor. A traditional (scalar) approach would adjust the brightness of each pixel one by one. A SIMD-capable processor can adjust a whole block of pixels (say, 8 or 16 of them) in a single operation, dramatically speeding up the process.
V8 applies this same principle to string serialization. When stringifying a value, the engine must scan the string for any characters that require special escaping, such as a double quote (") or a backslash (\). Instead of checking one character at a time, V8 uses SIMD to load a large chunk of the string into a wide register and check multiple bytes at once with just a few instructions.
This strategy is further refined with a two-tiered approach:
- For longer strings, V8 uses dedicated hardware SIMD instructions (like NEON on ARM CPUs or AVX2 on x86 CPUs), which are highly efficient for large blocks of data.
- For shorter strings, where the overhead of setting up hardware SIMD would be too high, V8 uses a technique called SWAR (SIMD Within A Register). This approach uses clever bitwise logic on standard CPU registers to process multiple characters at once with very low overhead.
If a chunk is scanned and found to contain no special characters (which is the most common case), the entire chunk can be copied directly to the output buffer, making the process incredibly efficient.
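To give a feel for the idea, here is a conceptual, ASCII-only sketch in JavaScript (not V8's actual C++ code). It uses the classic SWAR "has-byte" bit trick to test four bytes per iteration for a quote or backslash; real escaping also covers control characters, which are omitted here for brevity:

```javascript
// SWAR trick: detect whether any byte of a 32-bit word equals `byte`.
// A matching byte leaves its high bit set in the final expression.
function hasByte(word, byte) {
  const x = (word ^ (byte * 0x01010101)) >>> 0;
  return (((x - 0x01010101) & ~x & 0x80808080) >>> 0) !== 0;
}

function needsEscaping(str) {
  const bytes = new TextEncoder().encode(str);
  let i = 0;
  // Process 4 bytes per iteration instead of 1.
  for (; i + 4 <= bytes.length; i += 4) {
    const word =
      (bytes[i] |
        (bytes[i + 1] << 8) |
        (bytes[i + 2] << 16) |
        (bytes[i + 3] << 24)) >>> 0;
    if (hasByte(word, 0x22) || hasByte(word, 0x5c)) {
      // 0x22 = '"', 0x5c = '\'
      return true;
    }
  }
  // Scalar tail for the last few bytes.
  for (; i < bytes.length; i++) {
    if (bytes[i] === 0x22 || bytes[i] === 0x5c) return true;
  }
  return false;
}
```

Hardware SIMD applies the same idea to 16 or 32 bytes per instruction instead of 4 per arithmetic trick.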
Here's a summary of where you can find CPUs that support the relevant SIMD instructions (AVX2 or NEON) across AWS, GCP, and laptops:
| Platform | SIMD Support | Example Instances / Notes |
|---|---|---|
| AWS (x86 EC2) | AVX2, AVX-512 | C5/C5d, M5, T3 (Intel Skylake/Cascade Lake) |
| AWS (ARM EC2) | NEON | Graviton2 (2×128-bit NEON): M6g, C6g, etc.; Graviton3 (4×128-bit NEON + SVE): C7g, M7g, etc. |
| GCP (Compute Engine) | AVX2 / AVX-512 | Use "min CPU platform" to ensure Cascade Lake/Ice Lake (AVX2, possibly AVX-512) |
| Laptops (Intel) | AVX2 / AVX-512 | Recent-gen Intel Core i5/i7/i9 mostly supports AVX2; newer Ice Lake+ may include AVX-512 |
| Laptops (ARM) | NEON | Apple M-series and other ARM-based laptops include NEON SIMD |
A Faster Number-to-String Algorithm: Dragonbox
Converting a floating-point number into its shortest, most accurate string representation is a notoriously complex computer science problem. For this task, V8 upgraded its core DoubleToString algorithm, replacing the long-serving Grisu3 algorithm with a state-of-the-art implementation called Dragonbox.
While this change was driven by profiling JSON.stringify, its benefits extend far beyond it. This is a "rising tide lifts all boats" improvement (general growth helps everyone). The new Dragonbox implementation benefits all calls to Number.prototype.toString() throughout the entire V8 engine. This means any JavaScript code that converts numbers to strings, not just JSON serialization, automatically receives this performance boost for free. This is a hallmark of efficient engineering: solving a specific problem by improving a shared, foundational component, thereby creating widespread, positive ripple effects across the platform.
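You can see the problem Dragonbox solves in any JavaScript console: the engine must emit the shortest string that still round-trips to the exact same double.

```javascript
// Shortest round-trip representations, all produced by the engine's
// internal double-to-string machinery.
console.log((0.1).toString());          // "0.1", not "0.1000000000000000055..."
console.log((1 / 3).toString());        // "0.3333333333333333"
console.log(JSON.stringify(0.1 + 0.2)); // "0.30000000000000004"

// Parsing the shortest string recovers the exact same bits.
const x = 0.1 + 0.2;
console.log(Number(x.toString()) === x); // true
```

Finding that shortest string quickly is the hard part; Dragonbox does it faster than the previous Grisu3 algorithm.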
4. Smart Memory Management
When building a very large JSON string, which can sometimes be hundreds of megabytes in size, memory management becomes a critical performance factor. The previous implementation of JSON.stringify faced a significant bottleneck related to how it handled its temporary output buffer.
🐌 The Old Way...
Previously, the stringifier built its output in a single, contiguous block of memory on the C++ heap. Imagine writing a very long document in a single notebook. As the JSON string grew, this memory block would eventually run out of space. When that happened, V8 had to perform a costly sequence of operations:
- Allocate a new, larger block of memory (like getting a bigger notebook).
- Copy the entire existing content from the old, full buffer to the new, larger one.
- Deallocate the old buffer.
For very large JSON objects, this cycle of re-allocation and copying became a major performance bottleneck. Copying hundreds of megabytes of data is a slow, resource-intensive operation.
🚀 The New Way...
The V8 engineers had a crucial insight: the temporary buffer only needs to be a single, contiguous string at the very end of the process. While the string is being built, it can exist in pieces.
With this in mind, they replaced the old system with a Segmented Buffer. Instead of one large, growing block, the new stringifier uses a list of smaller, fixed-size buffers, or "segments" — much like using a series of small sticky notes instead of one giant page. When one segment is full, V8 simply allocates a new one and continues writing there. This completely eliminates the expensive copy operations.
This is vastly more efficient because you have eliminated all the intermediate copying steps. The expensive "joining" operation is only performed once, right at the end.
| Old Way (Contiguous Buffer) | New Way (Segmented Buffer) |
|---|---|
| Analogy: One single, growing scroll. | Analogy: A stack of sticky notes. |
| Process: Write, run out of space, copy everything to a bigger scroll, repeat. | Process: Write on a note, grab a new one when full, repeat. |
| Bottleneck: The slow, repeated copying of large amounts of data. | Benefit: No intermediate copying. The expensive "join" happens only once at the end. |
These segments are allocated in V8's Zone memory, a special memory pool optimized for very fast, short-lived allocations, which further reduces management overhead. At the very end, when the entire object has been processed, all these smaller segments are efficiently joined together to form the final, complete JSON string. This change in the intermediate data structure is a classic example of a performance engineering trade-off: the logic to manage a list of buffers is slightly more complex, but it eliminates a major performance cliff, making it a worthwhile investment for a critical function like JSON.stringify.
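The segmented idea can be sketched in a few lines of JavaScript (purely illustrative; V8's real implementation lives in C++ and works on raw bytes, and the tiny segment size here is just to make the mechanism visible):

```javascript
// Illustrative sketch: append into fixed-size segments and join once.
class SegmentedBuffer {
  constructor(segmentSize = 8) {
    this.segmentSize = segmentSize;
    this.segments = [];   // completed "sticky notes"
    this.current = "";    // the note we're writing on now
  }

  append(text) {
    this.current += text;
    // When the current segment fills up, start a new one.
    // Previously written segments are never copied again.
    while (this.current.length >= this.segmentSize) {
      this.segments.push(this.current.slice(0, this.segmentSize));
      this.current = this.current.slice(this.segmentSize);
    }
  }

  // The single, final join - the only "expensive" concatenation.
  finish() {
    return this.segments.join("") + this.current;
  }
}

const buf = new SegmentedBuffer();
buf.append('{"id":42,');
buf.append('"name":"Jane"}');
console.log(buf.finish()); // '{"id":42,"name":"Jane"}'
```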
How to Stay on the Fast Path (and When You Can't)
These incredible performance improvements are available automatically in V8 version 13.8 (found in Chrome 138) and later.
Node.js
As of Node.js 24, the bundled V8 engine is version 13.6, not 13.8. Thus, no stable or current Node.js release was using V8 13.8 as of mid-August 2025.
Google Chrome
Performance improvements (like faster JSON.stringify) are confirmed to be available starting in Chrome 138, which uses V8 13.8.
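If you want to check which V8 your own runtime ships, Node.js exposes it directly:

```javascript
// Prints the bundled V8 version string; compare its major.minor prefix
// (e.g. "13.6") against 13.8 to know whether the new fast paths are present.
console.log(process.versions.v8);
```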
Summary
The following table provides a quick-reference guide for developers aiming to maximize JSON.stringify performance.
| Feature / Condition | Consequence | Rationale for Fallback |
|---|---|---|
| replacer function provided | Falls back to general serializer. | V8 cannot make assumptions about arbitrary user code. The replacer could introduce side effects or complex logic that the fast path is not designed to handle. |
| space argument provided | Falls back to general serializer. | The logic for pretty-printing (inserting spaces, newlines) is separate from the core task of fast serialization and is handled by the more versatile general path. |
| Object has a .toJSON() method | Falls back to general serializer. | This invokes a user-defined method during serialization, which is considered a potential side effect. V8 must execute this code, so it uses the general path. |
| Array of objects with inconsistent shapes | Stays on fast path, but disables the "Express Lane." | The core fast path can still be used, but V8 cannot reuse the hidden class information. It must individually check the properties of each differently-shaped object, losing the major speedup of the "Express Lane." |
| Object with array-like indexed properties | Falls back to general serializer. | Objects with numeric keys (e.g., '0', '1') are handled differently internally than objects with string keys, particularly regarding property ordering. This requires the more complex logic of the general path. |
| Object contains Symbol-keyed properties | Property is ignored (standard behavior). | The fast path must still check for Symbols on the first object of a given shape. The "Express Lane" then assumes no Symbols exist for subsequent objects of the same shape. |
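As a quick recap in code, the conditions above can be exercised directly. Every call below is correct JavaScript, but only the first is a fast-path candidate (which path V8 actually takes is internal and not observable from script):

```javascript
const data = { id: 1, name: "Ada" };

JSON.stringify(data);                           // plain object: fast-path candidate
JSON.stringify(data, null, 2);                  // space argument: general path
JSON.stringify(data, (key, value) => value);    // replacer: general path
JSON.stringify({ ...data, toJSON: () => "x" }); // .toJSON(): general path
JSON.stringify({ 0: "a", 1: "b" });             // indexed keys: general path
```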
Reference: https://v8.dev/blog/json-stringify