Recently, I was working on a multi-threaded implementation of a function to calculate the Poisson distribution (amath_pdist). The goal was to divide the workload across multiple threads to improve performance, especially for large arrays. However, instead of achieving the expected speedup, I noticed a significant slowdown as the size of the array increased.
After some investigation, I discovered the culprit: false sharing. In this post, I’ll explain what false sharing is, show the original code causing the problem, and share the fixes that led to a substantial performance improvement.
The Problem: False Sharing in Multi-threaded Code
False sharing happens when multiple threads work on different parts of a shared array, but their data resides in the same cache line. A cache line is the smallest unit of data transferred between main memory and the CPU cache, typically 64 bytes. When one thread writes to its part of a line, the hardware invalidates every other core's copy of that entire line, even though the other threads are working on logically independent data. The resulting cycle of invalidations and reloads causes significant performance degradation.
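Before looking at the real code, here is a minimal, self-contained sketch of the effect (my own illustration, not part of amath): two threads each repeatedly write one double. When the two doubles sit one element apart they share a 64-byte line and the writes ping-pong that line between cores; eight elements (64 bytes) apart, they don't, and the run is typically several times faster.

/* Minimal false-sharing demo (illustration only, not part of amath_pdist).
   Build with: cc -O2 -pthread false_sharing_demo.c */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

/* 16 doubles = two 64-byte cache lines; volatile keeps the compiler from
   collapsing the per-iteration stores that make the effect visible. */
static _Alignas(64) volatile double buf[16];

static void *writer(void *arg) {
    size_t idx = *(size_t *)arg;
    for (long i = 0; i < 100000000L; i++)
        buf[idx] += 1.0;
    return NULL;
}

/* Time the main thread hammering buf[0] while a second thread hammers buf[idx2]. */
static double run(size_t idx2) {
    size_t idx1 = 0;
    struct timespec t0, t1;
    pthread_t t;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    pthread_create(&t, NULL, writer, &idx2);
    writer(&idx1);
    pthread_join(t, NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void) {
    printf("same cache line (buf[0], buf[1]):      %.2f s\n", run(1));
    printf("separate cache lines (buf[0], buf[8]): %.2f s\n", run(8));
    return 0;
}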
Here’s a simplified version of my original code:
void *calculate_pdist_segment(void *data) {
    struct pdist_segment *segment = (struct pdist_segment *)data;
    size_t interval_a = segment->interval_a, interval_b = segment->interval_b;
    double lambda = segment->lambda;
    int *d = segment->data;

    /* Poisson PMF for each element in this thread's range:
       P(X = k) = lambda^k * e^(-lambda) / k!, with k! computed as tgamma(k + 1). */
    for (size_t i = interval_a; i < interval_b; i++) {
        segment->pdist[i] = pow(lambda, d[i]) * exp(-lambda) / tgamma(d[i] + 1);
    }
    return NULL;
}
double *amath_pdist(int *data, double lambda, size_t n_elements, size_t n_threads) {
    double *pdist = malloc(sizeof(double) * n_elements);
    pthread_t threads[n_threads];
    struct pdist_segment segments[n_threads];
    size_t step = n_elements / n_threads;

    /* Split the array into contiguous ranges, one per thread; the last thread
       picks up any remainder. */
    for (size_t i = 0; i < n_threads; i++) {
        segments[i].data = data;
        segments[i].lambda = lambda;
        segments[i].pdist = pdist;
        segments[i].interval_a = step * i;
        segments[i].interval_b = (i == n_threads - 1) ? n_elements : (step * (i + 1));
        pthread_create(&threads[i], NULL, calculate_pdist_segment, &segments[i]);
    }
    for (size_t i = 0; i < n_threads; i++) {
        pthread_join(threads[i], NULL);
    }
    return pdist;
}
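The listings in this post leave out the includes and the pdist_segment struct. Reconstructed from how the fields are used above (so the exact names and layout are my guess, not the library's), they would look roughly like this:

#include <math.h>      /* pow, exp, tgamma             */
#include <pthread.h>   /* pthread_create, pthread_join */
#include <stdio.h>     /* fprintf                      */
#include <stdlib.h>    /* malloc, posix_memalign, free */
#include <string.h>    /* strerror                     */

/* One work item per thread: which slice of the shared arrays to process. */
struct pdist_segment {
    int *data;          /* shared input array (counts)         */
    double *pdist;      /* shared output array (probabilities) */
    double lambda;      /* Poisson rate parameter              */
    size_t interval_a;  /* first index, inclusive              */
    size_t interval_b;  /* last index, exclusive               */
};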
Where the Problem Occurs
In the above code:
- The array pdist is shared among all threads.
- Each thread writes to a specific range of indices (interval_a to interval_b).
- At segment boundaries, adjacent indices may reside in the same cache line. For example, if pdist[249999] and pdist[250000] share a cache line, Thread 1 (working on pdist[249999]) and Thread 2 (working on pdist[250000]) keep invalidating each other's copy of that line.
This issue scaled poorly with larger arrays. While the boundary issue might seem small, the sheer number of iterations magnified the cost of cache invalidations, leading to seconds of unnecessary overhead.
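A handy way to reason about this is to ask which 64-byte line a given element lands on. The helper below is my own sketch (it assumes a 64-byte line size, typical for current x86-64 and many ARM parts), not part of amath:

#include <stdint.h>

/* Index of the 64-byte cache line that pdist[i] occupies. */
static inline uintptr_t cache_line_of(const double *pdist, size_t i) {
    return (uintptr_t)&pdist[i] / 64;
}

/* Thread 1 (writing pdist[249999]) and Thread 2 (writing pdist[250000]) can
   false-share only if cache_line_of(pdist, 249999) == cache_line_of(pdist, 250000),
   which depends entirely on where malloc() happened to place the array. */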
The Solution: Align Memory to Cache Line Boundaries
To fix the problem, I used posix_memalign to align the pdist array to a 64-byte boundary. With the array starting on a cache-line boundary, the threads' segments no longer share cache lines at their edges, and the false sharing disappears.
Here’s the updated code:
double *amath_pdist(int *data, double lambda, size_t n_elements, size_t n_threads) {
    double *pdist;

    /* posix_memalign reports failure via its return value, not errno,
       so report the returned error code instead of relying on perror(). */
    int err = posix_memalign((void **)&pdist, 64, sizeof(double) * n_elements);
    if (err != 0) {
        fprintf(stderr, "Failed to allocate aligned memory: %s\n", strerror(err));
        return NULL;
    }

    pthread_t threads[n_threads];
    struct pdist_segment segments[n_threads];
    size_t step = n_elements / n_threads;

    for (size_t i = 0; i < n_threads; i++) {
        segments[i].data = data;
        segments[i].lambda = lambda;
        segments[i].pdist = pdist;
        segments[i].interval_a = step * i;
        segments[i].interval_b = (i == n_threads - 1) ? n_elements : (step * (i + 1));
        pthread_create(&threads[i], NULL, calculate_pdist_segment, &segments[i]);
    }
    for (size_t i = 0; i < n_threads; i++) {
        pthread_join(threads[i], NULL);
    }
    return pdist;
}
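For completeness, a call site would look something like the sketch below (the counts, lambda, and thread count are made up for illustration). Note that memory obtained from posix_memalign is released with the ordinary free():

/* Hypothetical usage; values are invented for illustration. */
int counts[4] = {1, 0, 3, 2};
double *pdist = amath_pdist(counts, 2.5, 4, 2);
if (pdist != NULL) {
    for (size_t i = 0; i < 4; i++)
        printf("P(X = %d) = %f\n", counts[i], pdist[i]);
    free(pdist);   /* posix_memalign memory is freed with free() */
}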
Why Does This Work?
- Aligned Memory:
  - Using posix_memalign, the array starts on a cache-line boundary.
  - Each thread's assigned range aligns neatly with cache lines, preventing overlap (see the sketch after this list for making this hold for any step size).
- No Cache Line Sharing:
  - Threads operate on distinct cache lines, eliminating the invalidations caused by false sharing.
- Improved Cache Efficiency:
  - Sequential memory access patterns align well with CPU prefetchers, further boosting performance.
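One refinement worth noting (my addition, not part of the original fix): posix_memalign only guarantees that the array starts on a cache-line boundary. If you also want the boundaries between segments to land on cache lines for any n_elements and n_threads, round the per-thread step up to a whole number of lines, i.e. a multiple of 8 doubles:

/* Sketch: size each thread's chunk as a whole number of 64-byte cache lines
   so segment boundaries always coincide with line boundaries. */
const size_t per_line = 64 / sizeof(double);          /* 8 doubles per line */
size_t step = n_elements / n_threads;
step = (step + per_line - 1) / per_line * per_line;   /* round up           */

for (size_t i = 0; i < n_threads; i++) {
    size_t a = step * i;
    size_t b = step * (i + 1);
    segments[i].interval_a = (a > n_elements) ? n_elements : a;
    segments[i].interval_b = (i == n_threads - 1 || b > n_elements) ? n_elements : b;
}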
Results and Takeaways
After applying the fix, the runtime of the amath_pdist function dropped significantly. For the dataset I was testing, wall-clock time fell from 10.92 seconds to 0.06 seconds.
Key Lessons:
- False sharing is a subtle yet critical issue in multi-threaded applications. Even small overlaps at segment boundaries can degrade performance.
- Memory alignment using posix_memalign is a simple and effective way to eliminate false sharing. Aligning allocations to cache-line boundaries keeps threads operating independently.
- Always analyze your code for cache-related issues when working with large arrays or parallel processing. Tools like perf or valgrind can help pinpoint bottlenecks; a couple of example perf invocations are sketched below.
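For example, perf can count cache traffic directly, and perf c2c ("cache to cache") is designed to highlight cache-line contention such as false sharing (the binary name below is a placeholder):

perf stat -e cache-references,cache-misses ./amath_benchmark
perf c2c record ./amath_benchmark
perf c2c report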
Thank you for reading!
For anyone curious about the code, you can find it here