Frequency Array vs unordered_map
Finding duplicate elements is a classic counting problem.
The goal is simple:
Identify all elements that appear more than once, and report each duplicate only once.
**Problem Summary**
Input:
An array of integers
Output:
A list of elements that appear more than once
(Empty list if no duplicates exist)
Example
Input
[1, 2, 3, 2, 4, 1]
Output
[1, 2]
**Approach 1: Frequency Array**
This approach works when the range of values is known and small.
Code
```cpp
#include <vector>
using namespace std;

int main() {
    vector<int> n = {1, 2, 3, 2, 4, 1};
    vector<int> out;
    vector<int> tracker(100, 0);  // assumes all values fall in [0, 100)
    for (int x : n) tracker[x]++;

    // Indices with a count > 1 are the duplicates, in ascending order.
    for (int i = 0; i < (int)tracker.size(); i++) {
        if (tracker[i] > 1) out.push_back(i);
    }
}
```
Complexity
Time: O(n + k)
Space: O(k + z)
k = range size
z = number of duplicates
When to use
✅ Fast
✅ Simple
❌ Not suitable for large or unknown ranges
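One caveat worth noting: a frequency array indexes directly by value, so the snippet above breaks for negative numbers. If the range is known but does not start at zero, you can still use this approach by shifting each value into array indices. A minimal sketch, assuming values lie in a known `[minVal, maxVal]` (the function and parameter names here are illustrative, not from the post):

```cpp
#include <vector>

// Sketch: frequency-array duplicate finding for a known range that may
// include negatives, by shifting each value by -minVal into an index.
std::vector<int> findDuplicatesInRange(const std::vector<int>& a,
                                       int minVal, int maxVal) {
    std::vector<int> count(maxVal - minVal + 1, 0);
    for (int x : a) count[x - minVal]++;              // shift into [0, k)

    std::vector<int> out;
    for (int i = 0; i < (int)count.size(); i++) {
        if (count[i] > 1) out.push_back(i + minVal);  // shift back
    }
    return out;                                       // ascending order
}
```

The space cost is still O(k), where k = maxVal - minVal + 1, so the same "small known range" caveat applies.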
**Approach 2: Using unordered_map**
This approach is flexible and works for any range of values.
Code
```cpp
#include <unordered_map>
#include <vector>
using namespace std;

int main() {
    vector<int> n = {1, 2, 3, 2, 4, 1};
    vector<int> out;
    unordered_map<int, int> tracker;
    for (int x : n) tracker[x]++;

    // Iterate by const reference; unordered_map iteration order is unspecified.
    for (const auto& [value, count] : tracker) {
        if (count > 1) out.push_back(value);
    }
}
```
Complexity
Time: O(n) average
Space: O(n + z)
When to use
✅ Range unknown
✅ Cleaner logic
✅ Interview-safe solution
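As a side note, the second loop over the map can be dropped entirely: push a value the moment its count reaches exactly 2, so each duplicate is still reported once. A sketch of that single-pass variant (the function name is mine):

```cpp
#include <unordered_map>
#include <vector>

// Sketch: report each duplicate the moment its count hits exactly 2,
// making the second scan over the map unnecessary. Duplicates come out
// in the order of their second occurrence in the input.
std::vector<int> findDuplicatesOnePass(const std::vector<int>& a) {
    std::unordered_map<int, int> tracker;
    std::vector<int> out;
    for (int x : a) {
        if (++tracker[x] == 2) out.push_back(x);  // only on the 2nd sighting
    }
    return out;
}
```

Same O(n) average time and O(n) space, but the output order now follows the input rather than the map's unspecified iteration order.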
**Key Comparison**
| Approach | Time | Space | Best Use Case |
| --- | --- | --- | --- |
| Frequency Array | O(n + k) | O(k) | Small, known range |
| unordered_map | O(n) avg | O(n) | General-purpose |
**Final Takeaway**
Both approaches are correct; the best choice depends on the constraints. Always consider the range size and the space budget.
Understanding why you choose an approach matters more than the code itself.