## DEV Community

Gopi Gorantala


# Hamming Distance - C++ Solution

In this lesson, we find the number of positions at which the bits of two given integers differ.

## Introduction

The Hamming distance between two integers is the number of bit positions at which the corresponding bits differ.

## Problem Statement

Given two integers `x` and `y`, find the number of positions at which the corresponding bits are different.

Example 01:

``````
Input: x = 1, y = 8
Output: 2
Explanation:
1   (0 0 0 1)
8   (1 0 0 0)
     ↑     ↑
``````

Example 02:

``````
Input: x = 12, y = 15
Output: 2
Explanation:
12   (1 1 0 0)
15   (1 1 1 1)
          ↑ ↑
``````

## Solution

We first solve this with a bit-shifting approach, then move to a more optimal one.

## Bit Shifting

This approach takes `O(1)` time: we shift the bits to the right one position at a time and check whether the lowest bit is set.

### Algorithm

We use the right-shift operation so that each bit gets its turn at the rightmost position.

After each shift, we test the lowest bit with either the modulo operator (`i % 2`) or the `&` operator (`i & 1`).

### Code

Hint: since `x ^ 0 == x`, the loop condition `xorVal ^ 0` is just another way of writing `xorVal != 0`.

``````cpp
#include <iostream>
using namespace std;

void hammingDistance(int a, int b) {
    int xorVal = a ^ b; // bits are set where a and b differ
    int distance = 0;

    while (xorVal ^ 0) { // equivalent to xorVal != 0
        if (xorVal % 2 == 1) { // lowest bit is set
            distance += 1;
        }
        xorVal >>= 1;
    }
    cout << "Hamming Distance between two integers is " << distance << endl;
}

int main() {
    int a = 1;
    int b = 8;
    hammingDistance(a, b);
    return 0;
}
``````

### Complexity Analysis

Time complexity: `O(1)`. For a `32-bit` integer, the algorithm would take at most 32 iterations.

Space complexity: `O(1)`. Memory is constant irrespective of the input.

## Brian Kernighan’s Algorithm

In the above approach, we shifted each bit one by one. Is there a better way to find the Hamming distance? Yes.

### Algorithm

When we apply the `&` operation between `n` and `(n - 1)`, the rightmost set bit of the original number `n` is cleared.

``````
n             = 40  => 00101000
n - 1         = 39  => 00100111
----------------------------------
(n & (n - 1)) = 32  => 00100000
----------------------------------
``````

### Code

Based on this idea, we can count the distance in as many iterations as there are set bits in the XOR (two, for our example) rather than shifting through every bit position as before. Let's see the code in action.

``````cpp
#include <iostream>
using namespace std;

int hammingDistance(int a, int b) {
    int xorVal = a ^ b; // bits are set where a and b differ
    int distance = 0;

    while (xorVal != 0) {
        distance += 1;
        xorVal &= (xorVal - 1); // clears the rightmost set bit
    }

    return distance;
}

int main() {
    int a = 1;
    int b = 8;
    cout << "Hamming Distance between two integers is " << hammingDistance(a, b);
    return 0;
}
``````

### Complexity Analysis

Time complexity: `O(1)`. Since the bit width of an `integer` is fixed, the loop runs a bounded number of times.

Space complexity: `O(1)`. Memory is constant irrespective of the input.

## Extras

If you are interested in mastering bit tricks, I've got a course that is loved by more than 100k programmers.

In this course, you will learn how to solve problems using bit manipulation, a powerful technique for sharpening your algorithmic and problem-solving skills. The course offers simple explanations with sketches, detailed step-by-step drawings, and multiple ways to solve each problem using bitwise operators.

These bit tricks can help in competitive programming and coding interviews by letting many algorithms run in `O(1)` time.

This is one of the most important/critical topics when someone starts preparing for coding interviews for FAANG(Facebook, Amazon, Apple, Netflix, and Google) companies.

To kick things off, you'll start by learning about the number system and how it's represented. Then you'll move on to the six different bitwise operators: AND, OR, NOT, XOR, and the left and right shifts. Throughout, you will get tons of hands-on experience working through practice problems to help sharpen your understanding.

By the time you’ve completed this course, you will be able to solve problems faster with greater efficiency!! 🤩

Link to my course: Master Bit Manipulation for Coding Interviews.

## Top comments (18)

Pierre Gradot • Edited

There is an easy solution to count the number of 1 bits in a value, with C++20:

``````cpp
#include <bit>

constexpr auto hammingDistance(unsigned int a, unsigned int b) {
    return std::popcount(a ^ b);
}
``````

Gopi Gorantala • Edited

Hey @pgradot,

This works fine, and the time is `O(1)`, but the built-in `popcount` function would take `O(k)` time, where `k` is the number of bits in an integer.

So your code has to go through all of the bits in an integer one by one.

Brian Kernighan's algorithm, by contrast, needs only two iterations here, using the `&` operation on `n` and `(n - 1)`.

Pierre Gradot

As Tony B mentioned below, I don't believe Big-O notation is relevant here.

That consideration put aside, how do you know that std::popcount() is O(k)?

Thread

Gopi Gorantala

@pgradot, `popcount` returns the number of 1 bits in the value of `x`. How do you think it does that? How does it know whether a given position holds a `1` bit or a `0` bit?

The `popcount` function runs through all the bit positions, i.e., all `k` bits of the integer, so the time is directly proportional to the number of bits.

Also, since an integer has a fixed number of bits, the running time of the algorithm is always proportional to that bit count.

**Note:** For small computations, Big-O won't matter, but you should always write optimized algorithms in production software.

Thread

Pierre Gradot • Edited

In both of your implementations of `hammingDistance()`, you start with `a ^ b` and then count the bits, using two different techniques. Why on earth would `std::popcount()` use a less optimized technique?

Furthermore, you say:

> Time complexity: O(1). For a 32-bit integer, the algorithm would take at most 32 iterations.

Isn't that `O(32)`?

Then you say:

> Time complexity: O(1). The input size of the integer is fixed, we have a constant time complexity.

The integer is still 32 bits wide, so why do you reach a different conclusion?

Note: a smart guy said almost 50 years ago that "premature optimization is the root of all evil".

Thread

Gopi Gorantala • Edited

Hey @pgradot ,

You covered everything, except that my first approach is not optimized, and the `popcount` function, which runs through all the bit positions, isn't optimized either.

Brian's algorithm, my second approach, is optimized. Check the LeetCode link below for more explanation.

Big-O simplification lets us round to the nearest order: you're right that it's `O(32)`, and that simplifies to `O(1)`, or constant time.

Why `O(1)`?
Here is what the LeetCode editors have to say about Hamming Distance; check out their Brian's algorithm solution: leetcode.com/problems/hamming-dist... You'll see why this is `O(32)`, or in simple terms `O(1)`, time.

Gopi Gorantala • Edited

Hey Tony,

You need to understand that integers have a fixed number of bits, which never changes, and hence it is always `O(1)`.

Bubble sort traverses the elements of the array: if the array has 5 elements, it iterates over 5 of them; if it contains a million items, it iterates over a million. So the running time is directly proportional to `n`, i.e. `O(n)`, where `n` is the number of elements in the array.

Note: The number of bits in an integer is fixed and never changes, which is constant. This algorithm requires fewer iterations than the bit-shift approach.

As for the bit-shift approach, since the size (i.e., the number of bits) of an integer is fixed, we have a constant time complexity, which is `O(1)`.

Gopi Gorantala

Hey Tony,

Yes, when `N` gets bigger, the number of iterations grows, as I explained in my earlier comment.

Why `O(1)`?
Here is what the LeetCode editors have to say about Hamming Distance; check out their Brian's algorithm solution: leetcode.com/problems/hamming-dist.... You'll see why this is `O(32)`, or in simple terms `O(1)`, time.

Gopi Gorantala

> there is no such thing as O(32) in Big-O notation.

Yes, that's right, I agree with that!

I never said `O(32)` exists in Big-O notation... But you aren't understanding Big-O complexity properly.

`O(32)` means 32 bits for an integer, and since this is a fixed number, we can round it to constant `O(1)` time.

Compare this with `O(N)`, where `N` means the `N` elements present in the array/list, etc.

If this doesn't convince you, my bad :(

Gopi Gorantala • Edited

Hey @leouofa,

Having knowledge of how bit-level operations work is a game changer; you can solve problems in less time.

1. These bit tricks can help in competitive programming by running algorithms mostly in `O(1)` time with faster execution.
2. This is one of the most important/critical topics when someone starts preparing for coding interviews at FAANG companies.
3. It helps algorithms run in less time, which is what the entire world is looking for: apps that render fast and do massive calculations in just milliseconds.
4. Bit-level operations are very fast, as they are performed directly on bits.

Coding interviews are intimidating to most programmers; learning these bit tricks, DSA, and some problem solving can make you a better programmer and a better decision maker when choosing an approach or algorithm for a software problem.
