Recently, while reviewing code, I spotted a colleague using `ToLower()` for a case-insensitive string comparison instead of `string.Equals()`.
When I asked why, the answer was:
“I thought it would be faster.”
Rather than argue, we fired up BenchmarkDotNet. The results? Surprising — and completely misleading.
This turned into a great reminder that benchmarks can lie if you don’t understand what’s really being tested.
## The Test Code

Let’s say we have a `ValidationService` like this:
```csharp
public class ValidationService
{
    private const string Expected = "Grant RiOrDan";

    public bool IsValid_StringEquals(string userInput) =>
        string.Equals(userInput, Expected, StringComparison.OrdinalIgnoreCase);

    // Allocates two new lowercase strings on every call.
    public bool IsValid_ToLower(string userInput) =>
        userInput.ToLower() == Expected.ToLower();

    // Allocates two new uppercase strings on every call.
    public bool IsValid_ToUpper(string userInput) =>
        userInput.ToUpper() == Expected.ToUpper();

    public bool IsValid_Equals(string userInput) =>
        userInput.Equals(Expected, StringComparison.OrdinalIgnoreCase);
}
```
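A quick usage sketch (illustrative only, not part of the benchmarks):

```csharp
var service = new ValidationService();

Console.WriteLine(service.IsValid_StringEquals("grant riordan")); // True: case-insensitive match
Console.WriteLine(service.IsValid_ToLower("GRANT RIORDAN"));      // True, but allocates two temporary strings
Console.WriteLine(service.IsValid_StringEquals("someone else"));  // False
```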
Our goal: benchmark each method for execution time and memory usage.
## Round 1: The Wrong Benchmark
Here’s the initial benchmark setup:
```csharp
[MemoryDiagnoser]
public class StringComparisonBenchmarks
{
    [Benchmark]
    public bool LowerCase() =>
        new ValidationService().IsValid_ToLower("Grant Riordan");

    [Benchmark]
    public bool UpperCase() =>
        new ValidationService().IsValid_ToUpper("Grant Riordan");

    [Benchmark]
    public bool Equals() =>
        new ValidationService().IsValid_Equals("Grant Riordan");

    [Benchmark]
    public bool StringEquals() =>
        new ValidationService().IsValid_StringEquals("Grant Riordan");
}
```
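For completeness, a minimal entry point to run the suite might look like this (a sketch; remember BenchmarkDotNet needs a Release build to give meaningful numbers):

```csharp
using BenchmarkDotNet.Running;

public class Program
{
    public static void Main() =>
        BenchmarkRunner.Run<StringComparisonBenchmarks>();
}
```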
Results:
| Method | Mean | Allocated |
|---|---|---|
| LowerCase | 30.49 ns | 96 B |
| UpperCase | 29.83 ns | 96 B |
| Equals | 0.00 ns | 0 B |
| StringEquals | 0.00 ns | 0 B |
Wait… `Equals()` and `string.Equals()` take 0 nanoseconds and allocate no memory? Did we just discover the fastest string comparison in the universe?
## What Really Happened
Nope. This is constant folding in action.
Both `Expected` and the benchmark input `"Grant Riordan"` are compile-time constants. The C# compiler and JIT realise the comparison will always return the same value (`true`) and replace the whole method call with that constant.
In other words, our “benchmark” was actually measuring:

```csharp
return true;
```

That’s why it took 0 ns — the method was optimised away entirely.
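Conceptually, the JIT had reduced each of those benchmark bodies to something like this (illustrative C# only; the real folding happens in the generated machine code):

```csharp
[Benchmark]
public bool StringEquals() => true; // the constant the comparison folded down to
```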
Meanwhile, `ToLower()` and `ToUpper()` couldn’t be optimised in the same way because they create new strings at runtime.
## How Can We Fix This? Round 2: The Correct Benchmark
To get a real measurement, we need to make sure our inputs aren’t compile-time constants. That forces the JIT to execute the comparison logic.
```csharp
[MemoryDiagnoser]
public class StringComparisonBenchmarks
{
    // Instance fields are not compile-time constants,
    // so the JIT cannot fold the comparisons away.
    private readonly ValidationService _service = new();
    private readonly string _userInput = "Grant Riordan";

    [Benchmark]
    public bool LowerCase() =>
        _service.IsValid_ToLower(_userInput);

    [Benchmark]
    public bool UpperCase() =>
        _service.IsValid_ToUpper(_userInput);

    [Benchmark]
    public bool Equals() =>
        _service.IsValid_Equals(_userInput);

    [Benchmark]
    public bool StringEquals() =>
        _service.IsValid_StringEquals(_userInput);
}
```
Results:
| Method | Mean | Allocated |
|---|---|---|
| LowerCase | 31.05 ns | 96 B |
| UpperCase | 30.75 ns | 96 B |
| Equals | 0.55 ns | 0 B |
| StringEquals | 0.72 ns | 0 B |
`Equals()` and `string.Equals()` are far faster and allocate no memory, while `ToLower()` / `ToUpper()` are much slower because they create new strings every time.
## The Lesson
This post isn’t about proving which method is fastest (though `OrdinalIgnoreCase` is generally the best choice for case-insensitive equality checks).
It’s about understanding what you’re actually benchmarking.
If the results look too good to be true, they probably are.
In this case, the culprit was constant folding — and without realising it, we were benchmarking a `return true;`.
Always sanity-check your benchmark setup. Even small changes in how you pass values can make the difference between measuring actual work… and measuring nothing at all.
## Bonus Tip: OrdinalIgnoreCase vs InvariantCultureIgnoreCase
In our examples, we used `StringComparison.OrdinalIgnoreCase`.
This is the fastest way to do a case-insensitive equality check in most scenarios because:

- It does a simple binary (ordinal) comparison.
- It only applies simple, culture-independent Unicode case folding.
- It avoids any cultural rules.
By contrast, `InvariantCultureIgnoreCase`:

- Performs a full culture-invariant comparison using Unicode collation rules. Culture-invariant means the string operations follow a fixed, culture-independent set of rules that never change regardless of the system’s current locale or the user’s regional settings.
- Handles linguistic equivalences that a binary comparison cannot, such as Greek sigma forms and ignorable characters like the soft hyphen (see the sketch after this list), while still excluding culture-specific quirks like the Turkish I.
- Is much slower because it does far more work under the hood.
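A quick sketch of that ignorable-character behaviour (my own illustration, not from the original benchmarks; the soft hyphen is U+00AD):

```csharp
using System;

class InvariantVsOrdinalDemo
{
    static void Main()
    {
        // "co\u00ADop" contains a soft hyphen, an "ignorable" character
        // in linguistic (culture-aware) comparisons.
        Console.WriteLine(string.Equals("co\u00ADop", "coop",
            StringComparison.InvariantCultureIgnoreCase)); // True: soft hyphen is ignored

        Console.WriteLine(string.Equals("co\u00ADop", "coop",
            StringComparison.OrdinalIgnoreCase));          // False: different code points
    }
}
```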
Example: the classic Turkish “I” shows how the choice of comparison changes the answer when comparing `"i"` and `"I"` case-insensitively:

| Comparison | `"i"` vs `"I"` (ignore case) | Why |
| ------------- | -------------- | ------------------------------------------------------ |
| **Ordinal** | Equal | Simple Unicode case folding pairs `i` with `I` |
| **Invariant** | Equal | Fixed, English-like casing rules |
| **en-US** | Equal | English also pairs `i` with `I` |
| **tr-TR** | Not equal | Turkish pairs `i` with dotted `İ` and `I` with dotless `ı` |
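You can verify this yourself with a small sketch (using `CompareInfo` directly so the result doesn’t depend on the thread’s current culture):

```csharp
using System;
using System.Globalization;

class TurkishIDemo
{
    static void Main()
    {
        // Ordinal and invariant case folding both pair 'i' with 'I'.
        Console.WriteLine("i".Equals("I", StringComparison.OrdinalIgnoreCase));          // True
        Console.WriteLine("i".Equals("I", StringComparison.InvariantCultureIgnoreCase)); // True

        // Turkish pairs 'i' with 'İ' instead, so this is not a case-insensitive match.
        CompareInfo turkish = CultureInfo.GetCultureInfo("tr-TR").CompareInfo;
        Console.WriteLine(turkish.Compare("i", "I", CompareOptions.IgnoreCase) == 0);    // False
    }
}
```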
Performance takeaway:

- Use `OrdinalIgnoreCase` when comparing identifiers, filenames, or any data where cultural rules should not affect the comparison — i.e., culture-independent comparisons focused on binary equality ignoring case.
- Use `InvariantCultureIgnoreCase` when you want a culture-agnostic but linguistically reasonable comparison, for example normalising or searching user-entered text in a way that is consistent across cultures.
- Use a specific culture (`CurrentCulture` or a particular `CultureInfo`) when the input must be interpreted or matched according to cultural and linguistic rules, such as formatting, sorting, or validating culture-sensitive user input.
As always, drop me a follow on here or Twitter/X.