jackma
πŸ”₯ JavaScript Interview Series(11): Deep vs Shallow Copy β€” Hidden Traps & Best Practices

Welcome to another installment of our JavaScript Interview Series! Today, we're diving deep (and shallow) into one of the most fundamental yet tricky concepts for many developers: copying objects. Understanding the difference between a deep and shallow copy isn't just academic; it's a practical necessity to avoid frustrating bugs and write predictable, solid code. Let's unravel the hidden traps and best practices you need to ace your next interview.


## 1. What is the fundamental difference between a shallow copy and a deep copy in JavaScript?

**Assessment Point:** This question tests your foundational knowledge of how JavaScript handles object references and memory.

**Standard Answer:** The core difference lies in how they handle nested objects. A shallow copy creates a new object, but it only copies the top-level properties. If a property's value is a reference to another object (like a nested object or an array), the shallow copy duplicates the reference, not the object itself. This means both the original and the copied object will point to the same nested object in memory.

A deep copy, on the other hand, creates a completely independent duplicate of the original object. It recursively copies not just the top-level properties but also all nested objects and their properties. As a result, the original and the deep-copied object share no references, and modifications to one will not affect the other.
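The distinction can be shown in a few lines. This is a minimal sketch, using the spread syntax for the shallow copy and the built-in `structuredClone()` (covered later in this article) for the deep copy:

```javascript
// A shallow copy shares nested references with the original.
const original = { name: 'Ada', address: { city: 'London' } };
const shallow = { ...original };        // copies top-level properties only
shallow.address.city = 'Paris';         // mutates the shared nested object
console.log(original.address.city);     // 'Paris' — the original is affected

// A deep copy duplicates every level.
const source = { name: 'Ada', address: { city: 'London' } };
const deep = structuredClone(source);
deep.address.city = 'Paris';
console.log(source.address.city);       // 'London' — the original is untouched
```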

**Possible 3 Follow-up Questions:** 👉 ([Want to test your skills? Try a Mock Interview — each question comes with real-time voice insights](https://offereasy.ai))

  1. Can you provide a simple code example to illustrate this difference?
  2. When would a shallow copy be sufficient or even preferable to a deep copy?
  3. How does JavaScript's handling of primitive versus reference types relate to this concept?

## 2. How can you create a shallow copy of an object in JavaScript? Name at least two common methods.

**Assessment Point:** This question assesses your familiarity with common JavaScript idioms and built-in methods for object manipulation.

**Standard Answer:** There are several ways to create a shallow copy in JavaScript. Two of the most common and idiomatic methods are:

1. **Spread syntax (`...`):** This is a modern and concise way to create a shallow copy.

```javascript
const original = { a: 1, b: { c: 2 } };
const shallowCopy = { ...original };
```

2. **`Object.assign()`:** This method copies all enumerable own properties from one or more source objects to a target object.

```javascript
const original = { a: 1, b: { c: 2 } };
const shallowCopy = Object.assign({}, original);
```

In both cases, changing `shallowCopy.a` will not affect `original.a`, but modifying `shallowCopy.b.c` will also change `original.b.c`, because the nested object `b` is shared by reference.
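A short demonstration of that behavior:

```javascript
const original = { a: 1, b: { c: 2 } };
const shallowCopy = { ...original };

shallowCopy.a = 99;    // top-level: the copy has its own slot
shallowCopy.b.c = 99;  // nested: the object b is shared by reference

console.log(original.a);   // 1 — unaffected
console.log(original.b.c); // 99 — mutated through the copy
```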

**Possible 3 Follow-up Questions:** 👉 ([Want to test your skills? Try a Mock Interview — each question comes with real-time voice insights](https://offereasy.ai))

  1. Between the spread syntax and Object.assign(), are there any subtle differences in their behavior?
  2. What happens if you use the spread syntax on an array with nested objects? Does it still perform a shallow copy?
  3. Can you think of any other, perhaps less common, ways to create a shallow copy?

## 3. What is the most common "hack" for creating a deep copy, and what are its major limitations?

**Assessment Point:** This tests your awareness of a popular but flawed technique and your understanding of its edge cases.

**Standard Answer:** The most common quick-and-dirty method for creating a deep copy is `JSON.stringify()` followed by `JSON.parse()`.

```javascript
const original = { a: 1, b: { c: 2 } };
const deepCopy = JSON.parse(JSON.stringify(original));
```

While this works for simple, JSON-safe objects, it comes with significant limitations:

*   **Unsupported data types:** It will discard functions, `undefined`, and `Symbol` properties.
*   **Data type coercion:** It converts `Date` objects into ISO 8601 date strings. Special values like `NaN` and `Infinity` are converted to `null`.
*   **Circular references:** It cannot handle objects with circular references (where an object refers back to itself) and will throw a `TypeError`.

Because of these drawbacks, this method is generally not recommended for production code where data integrity is critical.
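The limitations above can each be demonstrated in a few lines:

```javascript
const original = {
  when: new Date('2024-01-01'),  // will become an ISO string
  greet() { return 'hi'; },      // will be dropped
  missing: undefined,            // will be dropped
  notANumber: NaN,               // will become null
};

const copy = JSON.parse(JSON.stringify(original));

console.log(typeof copy.when);  // 'string'
console.log('greet' in copy);   // false
console.log('missing' in copy); // false
console.log(copy.notANumber);   // null

// Circular references throw:
const circular = {};
circular.self = circular;
// JSON.stringify(circular); // TypeError: Converting circular structure to JSON
```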

**Possible 3 Follow-up Questions:** 👉 ([Want to test your skills? Try a Mock Interview — each question comes with real-time voice insights](https://offereasy.ai))

  1. Why does this JSON-based method fail with circular references?
  2. Can you give an example of an object that would not be correctly copied using this method?
  3. In what specific, controlled scenarios might this JSON "hack" be an acceptable solution?

## 4. What is `structuredClone()`, and how does it improve upon the `JSON.parse(JSON.stringify())` method?

**Assessment Point:** This question evaluates your knowledge of modern JavaScript APIs and best practices for deep cloning.

**Standard Answer:** `structuredClone()` is a built-in global function in modern browsers and Node.js that provides a robust mechanism for creating deep copies of objects. It is designed specifically for this purpose and overcomes many of the limitations of the `JSON.parse(JSON.stringify())` approach.

Key improvements include:

*   **Support for more data types:** It can correctly clone a wider range of data types, including `Date`, `RegExp`, `Map`, `Set`, and complex nested structures.
*   **Handles circular references:** Unlike the JSON method, `structuredClone()` can handle objects with circular references without throwing an error.
*   **Performance:** For large and complex objects, `structuredClone()` is generally more performant than the JSON-based approach because it avoids the overhead of serializing to and from a string format.

However, `structuredClone()` still has limitations. For instance, it cannot clone functions or DOM nodes, and it does not preserve an object's prototype chain.
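A short sketch of the improvements in action:

```javascript
const original = {
  created: new Date('2024-01-01'),
  tags: new Set(['a', 'b']),
  lookup: new Map([['key', 1]]),
};
original.self = original; // circular reference

const copy = structuredClone(original);

console.log(copy.created instanceof Date); // true — type preserved
console.log(copy.tags instanceof Set);     // true
console.log(copy.self === copy);           // true — the cycle now points at the copy
console.log(copy === original);            // false — fully independent

// Functions still fail:
// structuredClone({ fn() {} }); // throws DataCloneError
```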

**Possible 3 Follow-up Questions:** 👉 ([Want to test your skills? Try a Mock Interview — each question comes with real-time voice insights](https://offereasy.ai))

  1. If structuredClone() can't copy functions, what happens if you try to clone an object with a method?
  2. When might you still need to use a library like Lodash's _.cloneDeep() instead of structuredClone()?
  3. How does structuredClone()'s algorithm differ conceptually from the JSON serialization method?

## 5. You are working with a Redux store and need to update a nested property in the state. Why is it crucial to avoid direct mutation, and how does this relate to shallow and deep copying?

**Assessment Point:** This question connects the theoretical concepts of copying to a practical application in state management, a common scenario in modern frontend development.

**Standard Answer:** In state management libraries like Redux, immutability is a core principle. You should never directly mutate the state object; instead, you must always return a new state object. This is crucial for several reasons:

*   **Change detection:** Redux performs a shallow equality check (`===`) on the state object to determine whether it has changed. If you mutate the object directly, the reference remains the same, Redux won't detect the change, and components won't re-render.
*   **Predictability and debugging:** Immutability ensures a clear and predictable state flow. It enables powerful developer tools like time-travel debugging, because you have a snapshot of each state change.

This is where copying comes in. When updating a nested property, you need to create new copies of all affected levels of the state tree, which usually means a series of shallow copies. For example, to update `state.user.profile.name`, you would create a new `profile` object, a new `user` object, and a new top-level state object.

```javascript
// Incorrect: direct mutation
state.user.profile.name = 'New Name'; // This is a huge anti-pattern!

// Correct: shallow copies at each level
const newState = {
  ...state,
  user: {
    ...state.user,
    profile: {
      ...state.user.profile,
      name: 'New Name'
    }
  }
};
```



Using deep copy for the entire state on every update is usually inefficient and unnecessary. A targeted approach with shallow copies is the standard and most performant practice.
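This targeted update replaces only the objects along the changed path; untouched branches keep their original references (structural sharing), which is exactly what Redux's shallow equality check relies on. A small sketch:

```javascript
const state = {
  user: { profile: { name: 'Old' }, settings: { theme: 'dark' } },
  posts: [],
};

const newState = {
  ...state,
  user: {
    ...state.user,
    profile: { ...state.user.profile, name: 'New Name' },
  },
};

console.log(newState === state);                              // false — Redux sees a change
console.log(newState.user.settings === state.user.settings);  // true — untouched branch is shared
console.log(newState.posts === state.posts);                  // true — no needless copying
```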

**Possible 3 Follow-up Questions:** πŸ‘‰ ([Want to test your skills? Try a Mock Interview β€” each question comes with real-time voice insights](https://offereasy.ai))
1. Could using a deep copy on the entire state object solve the mutation problem? What would be the performance implications?
2. How do libraries like Immer simplify this process of updating nested state?
3. Can you explain the concept of "structural sharing" and its relevance here?

---

## 6. Imagine you have a large object with many nested levels. What are the performance implications of performing a deep copy versus a shallow copy?
**Assessment Point:** This question tests your understanding of the performance trade-offs and computational costs associated with different copying strategies.

**Standard Answer:** There's a significant performance difference between the two, especially with large, complex objects.

*   **Shallow Copy:** This is a very fast and lightweight operation. It only needs to create a new top-level object and copy the properties. The cost is minimal and doesn't depend on the depth or complexity of the nested objects, as it's only copying references.
*   **Deep Copy:** This is a much more expensive operation. It has to traverse the entire object graph, recursively creating new objects and copying properties at every level. The computational cost and memory consumption grow with the size and depth of the object. For very large or deeply nested objects, frequent deep copying can become a performance bottleneck in an application.

Therefore, the best practice is to **use a shallow copy whenever possible** and only resort to a deep copy when you are certain you need a completely independent object and are aware of the potential performance impact.
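A rough benchmark sketch illustrates the trade-off; absolute timings will vary by engine and object shape, and the `time` helper here is a hypothetical convenience, not a rigorous benchmarking tool:

```javascript
// Build a moderately large nested object.
const big = {
  items: Array.from({ length: 10000 }, (_, i) => ({ id: i, meta: { tag: `t${i}` } })),
};

// Crude timing helper (for a real benchmark, run many iterations and warm up first).
function time(label, fn) {
  const start = performance.now();
  const result = fn();
  console.log(`${label}: ${(performance.now() - start).toFixed(2)}ms`);
  return result;
}

const shallow = time('shallow spread', () => ({ ...big }));
const json = time('JSON round-trip', () => JSON.parse(JSON.stringify(big)));
const structured = time('structuredClone', () => structuredClone(big));

console.log(shallow.items === big.items);    // true — only the reference was copied
console.log(json.items === big.items);       // false — new array and objects
console.log(structured.items === big.items); // false
```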

**Possible 3 Follow-up Questions:** πŸ‘‰ ([Want to test your skills? Try a Mock Interview β€” each question comes with real-time voice insights](https://offereasy.ai))
1. How could you benchmark the performance difference between a shallow copy, `JSON.parse(JSON.stringify())`, and `structuredClone()` for a given object?
2. In what kind of application (e.g., a real-time data visualization tool, a simple form) would this performance difference be most noticeable?
3. Does the presence of many primitive values versus many nested object references affect the performance of a deep copy?

---

## 7. How would you write a custom function to perform a deep copy of an object without using `structuredClone()` or external libraries?
**Assessment Point:** This is an advanced question that tests your ability to think algorithmically and handle recursion, a core computer science concept.

**Standard Answer:** Writing a custom deep copy function requires a recursive approach. The basic idea is to check the type of the value you're copying. If it's a primitive, you return it directly. If it's an object or an array, you create a new empty object or array and then iterate over the keys/elements of the original, calling the deep copy function recursively for each value.

Here's a basic implementation:


```javascript
function deepClone(value) {
  if (value === null || typeof value !== 'object') {
    return value;
  }

  // Handle Date
  if (value instanceof Date) {
    return new Date(value.getTime());
  }

  // Handle Array
  if (Array.isArray(value)) {
    const newArray = [];
    for (let i = 0; i < value.length; i++) {
      newArray[i] = deepClone(value[i]);
    }
    return newArray;
  }

  // Handle Object
  const newObject = {};
  for (const key in value) {
    if (Object.prototype.hasOwnProperty.call(value, key)) {
      newObject[key] = deepClone(value[key]);
    }
  }
  return newObject;
}
```

This is a simplified version. A production-ready function would also need to handle circular references (by keeping track of visited nodes), `Map`, `Set`, and other complex types.
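One way to add the circular-reference support mentioned above is to memoize visited objects in a `WeakMap`. This is a sketch extending the basic function (renamed `deepCloneSafe` here to distinguish it):

```javascript
function deepCloneSafe(value, seen = new WeakMap()) {
  if (value === null || typeof value !== 'object') {
    return value;
  }
  if (seen.has(value)) {
    return seen.get(value); // reuse the clone already in progress
  }

  if (value instanceof Date) {
    return new Date(value.getTime());
  }

  if (Array.isArray(value)) {
    const arr = [];
    seen.set(value, arr); // register before recursing so cycles resolve
    value.forEach((item, i) => { arr[i] = deepCloneSafe(item, seen); });
    return arr;
  }

  const obj = {};
  seen.set(value, obj);
  for (const key of Object.keys(value)) {
    obj[key] = deepCloneSafe(value[key], seen);
  }
  return obj;
}

// A self-referencing object no longer causes infinite recursion:
const node = { name: 'root' };
node.self = node;
const clone = deepCloneSafe(node);
console.log(clone.self === clone); // true — the cycle is preserved in the clone
console.log(clone === node);       // false
```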

**Possible 3 Follow-up Questions:** 👉 ([Want to test your skills? Try a Mock Interview — each question comes with real-time voice insights](https://offereasy.ai))

  1. How would you modify this function to handle circular references to prevent an infinite loop?
  2. What are the limitations of this specific implementation? (e.g., what data types does it not handle?)
  3. Why is the `Object.prototype.hasOwnProperty.call(value, key)` check important in the `for...in` loop?

## 8. Can the spread syntax (`...`) be used for deep copying? Explain why or why not.

**Assessment Point:** This is a common point of confusion for junior developers. This question directly addresses that misconception.

**Standard Answer:** No, the spread syntax cannot be used for deep copying. It only performs a shallow copy.

The spread syntax works by iterating over the own enumerable properties of an object and assigning them to a new object. It operates at only one level deep. When it encounters a property that holds a reference to another object, it copies that reference, not the object itself.

Consider this example:

```javascript
const original = {
  name: 'Alice',
  details: { age: 30, city: 'New York' }
};

const copy = { ...original };

copy.details.age = 31;

console.log(original.details.age); // Output: 31
```

As you can see, modifying the nested `details` object in the copy also mutates the original object. This demonstrates that the spread syntax only created a shallow copy. It is a common hidden trap for developers who mistakenly believe it creates a deep clone.
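The same trap applies to arrays: spreading an array of objects copies the element references, not the objects themselves. A short sketch:

```javascript
const users = [{ name: 'Alice' }, { name: 'Bob' }];
const copied = [...users];

copied[0].name = 'Changed';
console.log(users[0].name);    // 'Changed' — elements are shared by reference

console.log(copied === users); // false — the array itself is new
copied.push({ name: 'Carol' });
console.log(users.length);     // 2 — structural changes to the copy don't propagate
```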

**Possible 3 Follow-up Questions:** 👉 ([Want to test your skills? Try a Mock Interview — each question comes with real-time voice insights](https://offereasy.ai))

  1. How could you use the spread syntax to manually perform a "deeper" copy for a known, two-level nested object?
  2. Does the same shallow copy limitation apply when using the spread syntax with arrays?
  3. What is the difference between `const newObj = obj` and `const newObj = { ...obj }`?

## 9. How does object immutability relate to the concepts of deep and shallow copies?

**Assessment Point:** This question links copying to the broader software design principle of immutability, gauging your understanding of why these concepts are important for writing robust applications.

**Standard Answer:** Object immutability is the principle that once an object is created, it cannot be changed. If you need to modify an immutable object, you must create a new object with the desired changes rather than altering the original.

This relates directly to deep and shallow copies:

*   **Shallow copies** are a tool often used to achieve partial immutability. When you modify a top-level property of a shallow copy, you are creating a new object and leaving the original untouched at that level, which aligns with the principle of immutability. The trap is with nested objects: modifying them breaks immutability because the references are shared.
*   **Deep copies** are a way to achieve true immutability when you need a completely separate and mutable version of a complex data structure. By creating a deep copy, you ensure that any modifications you make are confined to the new object, leaving the original data structure completely unchanged.

In essence, copying mechanisms are the "how" behind implementing immutability in languages like JavaScript where objects are mutable by default.
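`Object.freeze()` (raised in the follow-ups below) enforces this at runtime and pairs naturally with copy-on-write updates. A minimal sketch:

```javascript
// Object.freeze() makes the top level of an object read-only.
const config = Object.freeze({ retries: 3, timeouts: { connect: 5000 } });

// Writes to frozen properties fail silently (or throw a TypeError in strict mode).
try { config.retries = 10; } catch (e) { /* strict mode */ }
console.log(config.retries); // 3

// freeze() is shallow: nested objects remain mutable unless frozen too.
config.timeouts.connect = 1;
console.log(config.timeouts.connect); // 1

// To "change" a frozen object, create a new copy with the update applied:
const updated = Object.freeze({ ...config, retries: 10 });
console.log(updated.retries); // 10
```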

**Possible 3 Follow-up Questions:** 👉 ([Want to test your skills? Try a Mock Interview — each question comes with real-time voice insights](https://offereasy.ai))

  1. Why is immutability particularly important in functional programming paradigms?
  2. Can you name a JavaScript library that helps enforce or work with immutable data structures?
  3. How can `Object.freeze()` be used in conjunction with copying to promote immutability?

## 10. When debugging, you find that changing a copied object is unexpectedly modifying the original object. What is the most likely cause, and what is your first step to fix it?

**Assessment Point:** This is a practical, problem-solving question that simulates a real-world debugging scenario.

**Standard Answer:** The most likely cause is that a shallow copy was performed on an object with nested data structures when a deep copy was actually needed. The developer likely used a method like `Object.assign()` or the spread syntax (`...`), assuming it would create a completely independent copy. Because these methods only perform a shallow copy, the nested objects are still shared by reference between the original and the copy.

My first step to fix this would be to:

  1. Identify the line of code where the copy is being made.
  2. Analyze the data structure of the object being copied. If it contains nested objects or arrays, a shallow copy is the culprit.
  3. Replace the shallow copy mechanism with a proper deep copy. The best modern solution would be to use `structuredClone()`. If `structuredClone()` is not suitable (e.g., the object contains functions) or not available, I would consider a well-tested library function like Lodash's `_.cloneDeep()` before attempting to write a custom recursive solution.

This approach ensures that the copied object is truly independent and prevents the unintended side effects of shared references.
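A quick way to confirm the diagnosis, in code or in the console, is a reference-identity check on the suspect nested property. A minimal sketch:

```javascript
const original = { user: { name: 'Alice' } };
const copy = { ...original };

// If this logs true, the nested object is shared — a shallow copy was made.
console.log(copy.user === original.user); // true

// After switching to a deep copy, the references differ:
const fixed = structuredClone(original);
console.log(fixed.user === original.user); // false
```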

**Possible 3 Follow-up Questions:** 👉 ([Want to test your skills? Try a Mock Interview — each question comes with real-time voice insights](https://offereasy.ai))

  1. How would you use browser developer tools to confirm that two variables are referencing the same nested object in memory?
  2. What kind of automated tests (e.g., unit tests) could you write to prevent this type of bug from occurring in the future?
  3. If structuredClone() is not an option, what are the pros and cons of using a library like Lodash versus rolling your own deep clone function?
