Hi there, .NET enthusiasts who care about performance!
I wondered whether it's still worth writing `for` loops for iterating over random-access collections (arrays, lists) in .NET, or whether the JIT compiler has become so smart by now that we can just use `foreach` loops in such cases without a significant performance penalty.
So I did some measurements (with the help of BenchmarkDotNet) and, seeing the results, I decided my findings might be worth sharing.
I benchmarked the following three iteration methods (a benchmark sketch follows the list):
- Plain old `foreach` loop: `foreach (var item in collection) { /*...*/ }`
- `for` loop with `Length`/`Count` evaluated in the stop condition: `for (int i = 0; i < collection.Count; i++) { /*...*/ }`
- `for` loop with `Length`/`Count` cached in a variable: `for (int i = 0, n = collection.Count; i < n; i++) { /*...*/ }`
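To make the three variants concrete, here is a minimal sketch of how they could be written as BenchmarkDotNet benchmarks. The class, field, and method names are my own illustration (the post doesn't include the actual benchmark source), and the `int[]` payload stands in for the array, list, and interface-typed variants that were actually measured:

```csharp
using BenchmarkDotNet.Attributes;

[MemoryDiagnoser] // reports allocations, relevant to the boxing findings below
public class IterationBenchmarks
{
    [Params(10, 1000)] // small and bigger item counts, as in the post
    public int N;

    private int[] _array;

    [GlobalSetup]
    public void Setup() => _array = new int[N];

    [Benchmark(Baseline = true)]
    public int Foreach()
    {
        int sum = 0;
        foreach (var item in _array)
            sum += item;
        return sum;
    }

    [Benchmark]
    public int ForWithLengthInCondition()
    {
        int sum = 0;
        for (int i = 0; i < _array.Length; i++)
            sum += _array[i];
        return sum;
    }

    [Benchmark]
    public int ForWithCachedLength()
    {
        int sum = 0;
        for (int i = 0, n = _array.Length; i < n; i++)
            sum += _array[i];
        return sum;
    }
}
```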
I ran tests for both arrays and lists, for small (10) and bigger (1000) item counts, and for .NET Framework 4.8, .NET Core 3.1 and .NET 5.
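For reference, BenchmarkDotNet can run the same benchmark class against multiple runtimes from a single multi-targeted project. The post doesn't show the exact setup used, so the following is only a sketch based on the `RuntimeMoniker` job attributes:

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Jobs;
using BenchmarkDotNet.Running;

// The same benchmarks are executed once per runtime; the project file
// must multi-target net48, netcoreapp3.1 and net5.0 for this to work.
[SimpleJob(RuntimeMoniker.Net48)]
[SimpleJob(RuntimeMoniker.NetCoreApp31)]
[SimpleJob(RuntimeMoniker.Net50)]
public class MultiRuntimeBenchmarks
{
    private readonly int[] _array = new int[1000];

    [Benchmark]
    public int ForWithCachedLength()
    {
        int sum = 0;
        for (int i = 0, n = _array.Length; i < n; i++)
            sum += _array[i];
        return sum;
    }
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<MultiRuntimeBenchmarks>();
}
```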
You can view the results here.
I drew the following conclusions:
- If you aim for maximum performance, use method 3 (`for` loop with `Length`/`Count` cached in a variable). The only exception is direct access to arrays, in which case `foreach` seems a tiny bit faster. Looks like the JIT compiler optimizes the hell out of that.
- Avoid iterating over collections via interfaces if possible. The performance penalty is in the range of 4x-6x! Definitely avoid `foreach` over interfaces, because it allocates too: the enumerator is also obtained through the interface, so it gets boxed. In this case `for`, at least, is still allocation-free (see the sketch after this list).
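To illustrate the boxing point: `List<T>.GetEnumerator()` returns a struct enumerator, but obtaining it through `IEnumerable<T>` forces that struct onto the heap. A minimal sketch (the method names are mine, not from the post):

```csharp
using System.Collections.Generic;

public static class InterfaceIterationExample
{
    public static int SumConcrete(List<int> list)
    {
        // foreach over the concrete List<int> uses the struct
        // List<int>.Enumerator directly: no heap allocation.
        int sum = 0;
        foreach (var item in list)
            sum += item;
        return sum;
    }

    public static int SumViaInterface(IEnumerable<int> list)
    {
        // foreach over IEnumerable<int> calls GetEnumerator() through the
        // interface, which boxes the struct enumerator: one allocation
        // every time this loop runs.
        int sum = 0;
        foreach (var item in list)
            sum += item;
        return sum;
    }

    public static int SumViaInterfaceWithFor(IList<int> list)
    {
        // Indexing through IList<int> avoids the enumerator entirely, so
        // this stays allocation-free, though the interface dispatch still
        // costs (the 4x-6x penalty measured in the post).
        int sum = 0;
        for (int i = 0, n = list.Count; i < n; i++)
            sum += list[i];
        return sum;
    }
}
```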