Mutable Key Problem in HashMap
HashMap depends on hashCode() remaining stable
If the key changes after insertion:
- hashCode changes
- bucket location changes
- HashMap cannot find the object anymore
First Understand One Critical Rule
HashMap Stores Entry Based On: hashCode() + equals()
Simple Mutable Key Example
Employee Class
class Employee {
String name;
Employee(String name) {
this.name = name;
}
@Override
public int hashCode() {
return name.hashCode();
}
@Override
public boolean equals(Object obj) {
if (!(obj instanceof Employee)) return false;
Employee e = (Employee) obj;
return this.name.equals(e.name);
}
}
Step 1 — Create Object
Employee emp = new Employee("John");
Current value:
name = "John"
Step 2 — Put into HashMap
Map<Employee, String> map = new HashMap<>();
map.put(emp, "Developer");
What Happens Internally?
A. HashMap Calls hashCode()
emp.hashCode()
Suppose:
"John".hashCode() = 2314539
B. Bucket Index Calculated
Formula: index = (n - 1) & hash
Suppose result: bucket = 5
Internal Structure
Bucket 5
↓
(Employee{name="John"}, "Developer")
Everything works correctly.
Step 3 — Retrieve Object
map.get(emp);
HashMap again calculates:
"John".hashCode()
Gets same bucket: bucket 5
Finds object successfully.
Output: Developer
NOW THE DANGEROUS PART
Step 4 — Modify Key Object
emp.name = "David";
Now object becomes: Employee{name="David"}
Important: the SAME object reference still exists.
But: its hashCode has changed.
Step 5 — Try Retrieval Again
map.get(emp);
What Happens Internally Now?
A. HashMap Calls hashCode()
Now: "David".hashCode() = 65805908
Different hashCode.
B. New Bucket Calculated
Now: bucket = 12
C. HashMap Searches the Wrong Bucket
HashMap now checks: Bucket 12
BUT the object is actually stored in: Bucket 5
BEFORE Mutation
Bucket 5
↓
(Employee{name="John"}, "Developer")
hashCode based on:John
AFTER Mutation
Object becomes: Employee{name="David"}
Now HashMap calculates: bucket 12
So retrieval searches: Bucket 12 → EMPTY
Final Result
System.out.println(map.get(emp));
Output: null
This is called: hash-based collection corruption
Even Worse Scenario
Now try: map.containsKey(emp);
Output: false, even though the object still exists inside the map.
Another Dangerous Problem
Suppose: map.put(emp, "Manager");
Now HashMap inserts a NEW entry.
Internal State Becomes
Bucket 5
↓
(Employee{name="David"}, "Developer")
Bucket 12
↓
(Employee{name="David"}, "Manager")
The same object reference now appears as two logically distinct entries.
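Everything above can be reproduced in a few lines. Below is a minimal, runnable sketch (class and value names are illustrative, mirroring the Employee example):

```java
import java.util.HashMap;
import java.util.Map;

public class MutableKeyDemo {
    // Same shape as the Employee class above: hashCode/equals based on a mutable field
    static class Employee {
        String name;
        Employee(String name) { this.name = name; }
        @Override public int hashCode() { return name.hashCode(); }
        @Override public boolean equals(Object obj) {
            return obj instanceof Employee && name.equals(((Employee) obj).name);
        }
    }

    public static void main(String[] args) {
        Map<Employee, String> map = new HashMap<>();
        Employee emp = new Employee("John");
        map.put(emp, "Developer");
        System.out.println(map.get(emp));          // Developer

        emp.name = "David";                        // mutate the key while it sits inside the map
        System.out.println(map.get(emp));          // null, lookup uses the NEW hash
        System.out.println(map.containsKey(emp));  // false, same reference but unreachable entry

        map.put(emp, "Manager");                   // does not replace; inserts a second entry
        System.out.println(map.size());            // 2, one object reference, two entries
    }
}
```

Note that the failure does not depend on which bucket the new hash points at: the stored node also remembers the old hash value, so even an accidental bucket collision would not make the entry findable again.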
Why Immutable Objects are Safe
Example: String key = "John";
Strings are immutable, so the value never changes:
- hashCode stable
- bucket stable
So retrieval always works.
Best Practice
Always Use Immutable Keys
Recommended:
- String
- Integer
- UUID
- Immutable custom objects
For a custom class, make the instance variables used in equals()/hashCode() final.
How to Create an Immutable Key Object
final class Employee {
private final String name;
Employee(String name) {
this.name = name;
}
@Override
public int hashCode() {
return name.hashCode();
}
@Override
public boolean equals(Object obj) {
if (!(obj instanceof Employee)) return false;
Employee e = (Employee) obj;
return this.name.equals(e.name);
}
}
Now the object cannot change after insertion. Safe for HashMap.
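A quick sketch of why this works (names illustrative): because hash and equals are stable and value-based, even a brand-new equal instance finds the entry.

```java
import java.util.HashMap;
import java.util.Map;

public class ImmutableKeyDemo {
    // Immutable key: final class, final field, equals/hashCode based on that field only
    static final class Employee {
        private final String name;
        Employee(String name) { this.name = name; }
        @Override public int hashCode() { return name.hashCode(); }
        @Override public boolean equals(Object obj) {
            return obj instanceof Employee && name.equals(((Employee) obj).name);
        }
    }

    public static void main(String[] args) {
        Map<Employee, String> map = new HashMap<>();
        map.put(new Employee("John"), "Developer");

        // A different but equal instance locates the same bucket and matches via equals()
        System.out.println(map.get(new Employee("John"))); // Developer
    }
}
```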
Real Production Problems Caused
Mutable keys can cause:
Memory leaks
Duplicate entries
Data inconsistency
Cache corruption
Hard-to-debug production issues
Especially dangerous in:
caching systems
distributed systems
Hibernate
microservices
concurrent systems
Q1: Why does retrieval fail even though same object reference is used?
Answer: Because HashMap first locates the bucket using hashCode(). Reference equality is irrelevant until the bucket is found.
Q2: Can equals() alone solve this?
Answer: No.
Because equals() is checked only AFTER the correct bucket is located.
Q3: Can mutable keys ever be safe?
Answer: Only if the fields used in hashCode()/equals() never change.
Q4: Are there any advantages to using mutable keys?
Answer: Mutable keys offer no advantage for hashing itself. They mostly appear in:
- object graph traversal
- serialization frameworks
- proxy systems
- JVM internals

"Mutable keys are only safe if the mutable state is excluded from equals() and hashCode(). In practice, immutable or effectively immutable keys are preferred because HashMap bucket placement depends on stable hash codes."
Using StringBuilder as a HashMap Key — Why It Is Dangerous
"StringBuilder is a poor HashMap key because it uses reference-based equals/hashCode from Object and is mutable. Even though mutating it does not change the bucket location, the logical identity of the key changes, leading to unpredictable behavior, failed lookups, duplicate logical keys, and corrupted business semantics."
Q. If I have to use a custom class as a key, what precautions do I need to take?
Using a Custom Class as a HashMap Key — Precautions You MUST Take
This is a very common experienced-level Java interview question.
If you use a custom object as a HashMap key, the MOST important rule is:
equals() and hashCode() must be implemented correctly and consistently.
Otherwise you get:
- failed retrievals
- duplicate keys
- memory leaks
- corrupted collections
- unpredictable behavior
Golden Rules for Custom Keys
Rule 1 — Override BOTH equals() and hashCode()
Never override only one.
WRONG Example
class Employee {
int id;
// hashCode() is NOT overridden, so it inherits Object's identity hash
@Override
public boolean equals(Object obj) {
return true; // claims everything is equal: violates the contract
}
}
This breaks HashMap contract.
Correct Rule
If:
a.equals(b) == true
then:
a.hashCode() MUST equal b.hashCode()
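A short sketch of what violating this contract looks like at runtime (the class name BadId is hypothetical): overriding equals() without hashCode() leaves Object's identity hash in place, so two "equal" keys almost always hash differently and land in different buckets.

```java
import java.util.HashMap;
import java.util.Map;

public class BrokenContractDemo {
    // Overrides equals() but NOT hashCode(): inherits Object's identity hash
    static class BadId {
        final int id;
        BadId(int id) { this.id = id; }
        @Override public boolean equals(Object obj) {
            return obj instanceof BadId && ((BadId) obj).id == id;
        }
        // missing hashCode(): equal BadId objects almost never share a hash
    }

    public static void main(String[] args) {
        Map<BadId, String> map = new HashMap<>();
        map.put(new BadId(1), "one");

        // equals() says these are the same key, but the identity hashes differ,
        // so HashMap searches the wrong bucket:
        System.out.println(map.containsKey(new BadId(1))); // almost certainly false
        System.out.println(map.get(new BadId(1)));         // almost certainly null
    }
}
```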
Rule 2 — Use Immutable Fields for Equality
Fields used in:
- equals()
- hashCode()
should NEVER change after insertion.
Best Practice Example
final class Employee {
private final int id;
private final String name;
Employee(int id, String name) {
this.id = id;
this.name = name;
}
@Override
public boolean equals(Object obj) {
if(this == obj)
return true;
if(obj == null || getClass() != obj.getClass())
return false;
Employee e = (Employee) obj;
return id == e.id;
}
@Override
public int hashCode() {
return Integer.hashCode(id);
}
}
Why This Is Safe
Because:
- id never changes
- hashCode stable
- bucket stable
HashMap works correctly forever.
Be Careful with Lombok
Dangerous Lombok Example
@Data
class Employee {
int id;
String name;
}
@Data includes ALL fields.
If name changes:
- hashCode changes
- HashMap breaks
Better
@EqualsAndHashCode(of = "id")
Q. Why do we need to increase the size of a HashMap, and who decides when to increase it? If hash collisions are already handled, why do we still need to resize?
Collisions can be handled,
but too many collisions destroy performance.
Resizing exists to maintain:
near O(1) lookup/insertion performance
First Understand the Core Structure
A HashMap internally has an array of buckets.
Example: table[16]
Each bucket may contain:
- empty
- one node
- linked list
- tree
Visual Example
Suppose capacity: 16
Good Distribution
Bucket 0 → A
Bucket 1 → empty
Bucket 2 → B
Bucket 3 → C
Bucket 4 → empty
Bucket 5 → D
Very few collisions.
Operations close to O(1). Fast.
What Happens If Map Keeps Growing Without Resize?
Suppose:
still only 16 buckets
but now 10,000 entries inserted
Internal Structure Becomes
Bucket 0 → A → B → C → D → E → F
Bucket 1 → G → H → I → J
Bucket 2 → K → L → M → N → O
...
Huge collision chains.
Now Lookup Becomes Slow
Lookup now degrades toward: O(n)
instead of: O(1)
Important Point
Collision handling does NOT eliminate performance degradation.
It only prevents data loss.
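This distinction, correctness preserved but speed lost, can be demonstrated by deliberately forcing every key into one bucket. A sketch (class name and counts are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {
    // Every key hashes to the same bucket on purpose
    static final class ConstantHashKey {
        final int id;
        ConstantHashKey(int id) { this.id = id; }
        @Override public int hashCode() { return 42; } // all keys collide
        @Override public boolean equals(Object obj) {
            return obj instanceof ConstantHashKey && ((ConstantHashKey) obj).id == id;
        }
    }

    public static void main(String[] args) {
        Map<ConstantHashKey, Integer> map = new HashMap<>();
        for (int i = 0; i < 1_000; i++) {
            map.put(new ConstantHashKey(i), i);
        }
        // Correctness survives: every entry is still retrievable via equals()...
        System.out.println(map.get(new ConstantHashKey(777))); // 777
        System.out.println(map.size());                        // 1000
        // ...but every lookup must walk one overgrown bucket instead of being O(1)
    }
}
```

(On Java 8+ this single bucket is eventually treeified, which softens but does not remove the penalty; resizing cannot help here at all, since the hash never changes.)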
Why Resize Helps
Suppose capacity increases: 16 → 32
Now more buckets are available.
Entries redistribute.
BEFORE Resize
Bucket 5
↓
A → B → C → D → E
AFTER Resize
Bucket 5 → A → C
Bucket 21 → B → D → E
Collision chains smaller.
Performance improves.
Main Goal of Resizing
Reduce: collision density
This keeps:
lookup fast
insertion fast
deletion fast
Who Decides When Resize Happens?
HashMap uses:
load factor
Default Load Factor
0.75
Formula
threshold = capacity × loadFactor
Example: 16 × 0.75 = 12, so a default HashMap resizes when the 13th entry is inserted.
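A tiny sketch of the threshold arithmetic as capacity doubles, assuming the default 0.75 load factor:

```java
public class ThresholdDemo {
    public static void main(String[] args) {
        float loadFactor = 0.75f; // HashMap's default
        for (int capacity = 16; capacity <= 128; capacity *= 2) {
            int threshold = (int) (capacity * loadFactor);
            System.out.println("capacity " + capacity + " -> resize after " + threshold + " entries");
        }
        // capacity 16 -> 12, 32 -> 24, 64 -> 48, 128 -> 96
    }
}
```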
Bucket Index Calculation
Java internally uses:
index = (n - 1) & hash
Where: n = array capacity
hash = processed hashCode
This is fixed. You cannot override it in a normal HashMap.
The bucket calculation algorithm in Java HashMap is fixed internally and cannot be overridden. Java uses (capacity - 1) & hash for fast bucket computation. However, developers indirectly influence bucket placement through the quality of their hashCode() implementation.
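For the curious, the calculation can be reproduced outside HashMap. The spread() helper below mirrors the hash-spreading step used in OpenJDK's HashMap (h ^ (h >>> 16)); the concrete numbers follow from String.hashCode(), whose formula is specified by the Javadoc:

```java
public class BucketIndexDemo {
    // Mirrors the spreading step in OpenJDK's HashMap.hash()
    static int spread(int h) { return h ^ (h >>> 16); }

    public static void main(String[] args) {
        for (String key : new String[] {"John", "David"}) {
            int hash = spread(key.hashCode());
            for (int n : new int[] {16, 32}) {  // two table capacities
                int index = (n - 1) & hash;     // the fixed bucket formula
                System.out.println(key + " with capacity " + n + " -> bucket " + index);
            }
        }
    }
}
```

Incidentally, with the real hash values "John" and "David" happen to share bucket 8 at capacity 16 and separate once capacity doubles to 32, a small live illustration of how resizing thins collision chains.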