This article is based on official sources, compiler code exploration, experimentation using decompiler tools, and real-world experience.
Have you ever been told that a for loop is faster than a foreach, or that yield return is slow because it hides a state machine underneath? In modern C#, these differences are often negligible. But over a decade ago, I remember a specific question from my first interview for a Junior Software Engineer position: "Is string interpolation ($"Hello, {name}") slower than string concatenation ("Hello, " + name)?"
At the time, my honest answer was a confident, "I don't know" — but that question stuck with me. It was only later that I truly understood the answer.
Lowering — The Hidden Layer of the Compiler
Eric Lippert, who was a Principal Developer on the C# compiler team at Microsoft, explains this concept perfectly in his blog post Lowering in language design:
A common technique ... is to have the compiler “lower” from high-level language features to low-level language features in the same language.
I also really like this perspective from the 1998 classic book "Building an Optimizing Compiler", by Robert Morgan:
The instructions are lowered so that each operation in the flow graph represents a single instruction in the target machine. Complex instructions, such as subscripted array references, are replaced by the equivalent sequence of elementary machine instructions. Alternatively, multiple instructions may be folded into a single instruction when constants, rather than temporaries holding the constant value, can occur in instructions.
In other words, instead of needing to understand every high-level language feature, the compiler takes these constructs — such as iterators (yield return), scoping blocks (using), or abstractions (LINQ and async/await) — and lowers them into an equivalent set of simpler, more fundamental instructions.
This allows different constructs to often perform similarly to their lower-level equivalents — although this ultimately depends on the specific scenario and how the code is transformed. In some cases, the compiler can also simplify or combine operations when values are known in advance, further reducing the amount of work needed at runtime.
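As a small sketch of that last point, both expressions below are simplified at compile time by constant folding, so no multiplication or string concatenation actually happens at runtime (the emitted-IL comments describe Roslyn's usual behavior here):

```csharp
using System;

// Compile-time simplification: the compiler folds both expressions,
// so the runtime only ever sees the final values.
const int SecondsPerDay = 60 * 60 * 24;   // emitted as the constant 86400
string greeting = "Hello, " + "World";    // emitted as the single literal "Hello, World"

Console.WriteLine(SecondsPerDay);  // 86400
Console.WriteLine(greeting);       // Hello, World
```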
So, what should have been the answer to the question from my interview?
Yes. At the time, string interpolation could be slower than string concatenation, because it was often lowered to a String.Format call, which introduced additional overhead such as parsing the format string at runtime.
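Here is a minimal sketch of that difference. The two "lowered" lines approximate what older compilers produced; modern Roslyn instead lowers interpolation to string.Concat or DefaultInterpolatedStringHandler, so treat this as an illustration of the historical behavior, not the current one:

```csharp
using System;

string name = "World";

// What the programmer writes:
string interpolated = $"Hello, {name}";
string concatenated = "Hello, " + name;

// Roughly what older compilers lowered each form to:
string loweredInterpolation = string.Format("Hello, {0}", name); // parses the format string at runtime
string loweredConcatenation = string.Concat("Hello, ", name);    // a single direct copy

Console.WriteLine(interpolated == loweredInterpolation);   // True
Console.WriteLine(concatenated == loweredConcatenation);   // True
```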
Where Lowering Fits in the Compiler Pipeline
In the classic compiler structure described in Compilers: Principles, Techniques, and Tools, known as "the Dragon Book", you won't see "lowering" as a separate phase. In fact, it is not mentioned at all.
Lexical Analysis — also called tokenization or scanning - is the first phase in compilation. The compiler reads the stream of characters (source code) and groups them into meaningful sequences called lexemes, which are then converted into tokens (such as keywords, identifiers, literals, and operators). During this phase, unnecessary whitespace and comments are typically ignored or removed.
Example of generated tokens after a lexical analysis:
int number = 5;
- int - keyword
- number - identifier
- = - operator
- 5 - literal
- ; - separator
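The token list above can be reproduced with a toy scanner. Real lexers are typically table-driven state machines rather than regex loops, so this is only a sketch of the idea: characters in, (kind, lexeme) tokens out.

```csharp
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

// A toy scanner for "int number = 5;". The regex matches identifiers/keywords,
// integer literals, and the two punctuation characters we care about.
var keywords = new HashSet<string> { "int", "string", "var" };
var tokens = new List<(string Kind, string Lexeme)>();

foreach (Match m in Regex.Matches("int number = 5;", @"[A-Za-z_]\w*|\d+|[=;]"))
{
    string lexeme = m.Value;
    string kind =
        keywords.Contains(lexeme) ? "keyword" :
        char.IsDigit(lexeme[0])   ? "literal" :
        lexeme == "="             ? "operator" :
        lexeme == ";"             ? "separator" :
                                    "identifier";
    tokens.Add((kind, lexeme));
}

foreach (var (kind, lexeme) in tokens)
    Console.WriteLine($"{lexeme} -> {kind}");
// int -> keyword, number -> identifier, = -> operator, 5 -> literal, ; -> separator
```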
Syntax Analysis — or parsing - is the compiler's second phase. It ensures that the generated tokens follow the grammatical rules of the programming language.
<declaration> → <datatype> <identifier> = <literal> ;
During this phase, the compiler builds a tree-like representation of the code - parse tree or syntax tree. If the structure is invalid, the parser generates errors such as "Unexpected token" or "Missing semicolon".
Parse Tree - shows exactly how the code matches the grammar rules of the programming language. It represents the full syntactic structure including every detail, even those that are not essential for understanding the program.
Syntax Tree - is a simplified representation of the code. It removes unnecessary syntactic details and focuses on the meaning of the program — the operations, relationships, and structure of the code.
Semantic Analysis — ensures the statements are meaningful and do not violate semantic rules. Even syntactically correct code can be semantically invalid. The following checks are performed:
- Type Checking - ensures variables and operations are used correctly. For example, assigning a string to an integer variable would result in an error.
- Variable Scope and Declaration - ensures variables are declared before use and accessed only within valid scopes.
- Function Check - ensures function calls match their definitions in terms of number and type of arguments.
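The following snippet illustrates all three checks. Each commented-out line parses fine but fails semantic analysis; the error codes shown are the ones the C# compiler reports for these situations (left as comments so the example still compiles):

```csharp
using System;

int number = 5;

// Each line below is syntactically valid but semantically invalid:

// string text = number;       // type check: CS0029, cannot implicitly convert 'int' to 'string'
// Console.WriteLine(missing); // scope check: CS0103, the name 'missing' does not exist
// Math.Max(number);           // function check: CS7036, a required argument is missing

Console.WriteLine(number); // 5
```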
Intermediate Code Generation — in the fourth phase, the source code is translated into an intermediate representation (IR): a lower-level, machine-like form that is not yet machine code but is closer to machine instructions, making it easier to optimize and to translate into different target architectures. In many compiler designs, the syntax tree itself can be considered a high-level form of intermediate representation.
Code Optimization — once the IR is generated, the compiler enters the optimization phase where the code is transformed to improve performance and reduce resource usage. This is achieved through techniques such as instruction reordering, elimination of redundant calculations, and removal of "dead code" (code that is never executed).
Code Generation — the final phase of the compilation process, where the IR, which has already been lowered and optimized from high-level language constructs, is translated into machine code for the target architecture.
Symbol Table - is a data structure used by the compiler to store information about code constructs. Each entry in the symbol table contains details about an identifier, such as its name (lexeme), type, memory location, and other relevant attributes. The data structure is designed to allow the compiler to efficiently look up identifiers and quickly store or retrieve information associated with them.
Did you notice the dashed box labeled "Lowering" in the diagram? There it is. It represents a conceptual transformation stage between Semantic Analysis and Intermediate Code Generation, where high-level language features are translated into simpler constructs. In compiler designs, lowering is not a strictly defined pipeline stage. Instead, it is a concept that can happen in multiple places: during transformations on the syntax tree, during intermediate representation (IR) generation, or across several compiler passes as part of a broader lowering and optimization pipeline.
Lowering in the Roslyn Compiler Pipeline
In the Roslyn compiler's four-phase pipeline, "lowering" is a major part of the Binding phase.
Parsing — the first phase consists of tokenization and parsing, where the source code is converted into a syntax tree.
Declaration — similar to semantic analysis in the classical compiler structure, this phase analyzes source code and referenced metadata to identify all declared symbols, such as types, methods, and variables, and builds a hierarchical symbol table.
Binding — the process of matching syntax (the code you wrote) to symbols (the identities discovered during the Declaration phase). In this phase, the identifiers are assigned to symbols, effectively exposing the result of the compiler's semantic analysis. Lowering happens at the very end of the Binding phase, after the code's type safety has been verified. The compiler rewrites complex high-level features like foreach or async into simpler logical structures that the Emit phase requires to generate IL.
IL Emission - the final phase emits an assembly with all the information produced in the previous phases. This stage is exposed through the Emit API and generates Intermediate Language (IL) byte code.
In his book, Robert Morgan explains a fundamental truth about this process:
During code lowering, where high-level operations are replaced by lower-level instructions, the compiler will generate expressions. The most common example is the lowering of subscript operations from a subscripted load/store operation to the computation of the address followed by an indirect load/store. The compiler generated the expressions, so the compiler must simplify them: The programmer cannot do it.
Understanding this idea helps explain why this simple C# code:
var i = 0;
var numbers = new [] {1, 2, 3};
var n = numbers[i];
can be lowered into a more explicit, lower-level form:
int num = 0;
int[] array = new int[3];
RuntimeHelpers.InitializeArray(array, (RuntimeFieldHandle));
int[] array2 = array;
int num2 = array2[num];
In this transformation, the compiler introduced RuntimeHelpers.InitializeArray and an extra reference, array2. The programmer did not write this; the compiler did.
One of the most surprising discoveries when exploring the Roslyn codebase is just how extensive lowering really is. Take a look at the compiler source code under /src/Compilers/CSharp/Portable/Lowering and you'll find dedicated rewriters for major language features, such as AsyncRewriter, IteratorRewriter, ClosureConversion (lambda rewriting), and StateMachineRewriter.
At first glance, this aligns with what we expect — complex, high-level constructs like async/await, iterators, and lambdas require significant transformation. But the real insight comes one level deeper. Digging into the LocalRewriter reveals something much more interesting: lowering is not reserved for "complex" features. It is applied to many everyday language constructs, including for and foreach loops, using and lock statements, string interpolation, the null-coalescing and conditional-access operators, events, and switch statements.
What does this really mean?
Lowering is not a special-case transformation — it is a core mechanism of the compiler.
Lowering in Practice: Real-World Examples
Having already discussed "lowering", let’s take a look at some more interesting examples.
Iteration Lowering: foreach and for
var numbers = new [] {1, 2, 3};
foreach (var n in numbers)
{
}
for (int i = 0; i < numbers.Length; i++)
{
}
Both constructs are lowered into simpler while loops:
int[] array = new int[3];
RuntimeHelpers.InitializeArray(array, (RuntimeFieldHandle));
int[] array2 = array;
int[] array3 = array2;
int num = 0;
while (num < array3.Length)
{
int num2 = array3[num];
num++;
}
int num3 = 0;
while (num3 < array2.Length)
{
num3++;
}
However, for non-array collections, foreach is lowered into an enumerator-based pattern using GetEnumerator() and MoveNext():
var list = new List<int> { 1, 2, 3 };
foreach (var item in list)
{
Console.WriteLine(item);
}
Is lowered to:
List<int> list = new List<int>();
list.Add(1);
list.Add(2);
list.Add(3);
List<int> list2 = list;
List<int>.Enumerator enumerator = list2.GetEnumerator();
try
{
while (enumerator.MoveNext())
{
int current = enumerator.Current;
Console.WriteLine(current);
}
}
finally
{
((IDisposable)enumerator).Dispose();
}
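Nothing about this lowered shape is magic: the enumerator pattern can be written by hand, and it behaves identically to the foreach it came from. A small sketch to convince yourself:

```csharp
using System;
using System.Collections.Generic;

var list = new List<int> { 1, 2, 3 };

// foreach, as the programmer writes it:
int sumForeach = 0;
foreach (var item in list) sumForeach += item;

// the same loop, written the way the compiler lowers it:
int sumLowered = 0;
List<int>.Enumerator enumerator = list.GetEnumerator();
try
{
    while (enumerator.MoveNext()) sumLowered += enumerator.Current;
}
finally
{
    ((IDisposable)enumerator).Dispose();
}

Console.WriteLine(sumForeach == sumLowered); // True: both sum to 6
```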
Lambda Lowering
Consider the following code:
var evens = numbers.Where(n => n % 2 == 0);
The lambda expression is lowered into a compiler-generated class:
[Serializable]
[CompilerGenerated]
private sealed class <>c
{
public static readonly <>c <>9 = new <>c();
public static Func<int, bool> <>9__0_0;
internal bool <M>b__0_0(int n)
{
return n % 2 == 0;
}
}
public void Main()
{
int[] source = array;
IEnumerable<int> enumerable = Enumerable.Where(source, <>c.<>9__0_0 ?? (<>c.<>9__0_0 = new Func<int, bool>(<>c.<>9.<M>b__0_0)));
}
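That <>9__0_0 ?? (... = new Func<...>) dance is delegate caching: a lambda that captures nothing is allocated once and reused. The effect is observable with reference equality. MakeStateless and MakeCapturing below are hypothetical helpers written for this demonstration, and the caching itself is a Roslyn implementation detail rather than a guarantee of the C# specification:

```csharp
using System;

// A lambda that captures nothing vs. one that captures a local:
Func<int, bool> MakeStateless() => n => n % 2 == 0;       // captures nothing
Func<int, bool> MakeCapturing(int m) => n => n % m == 0;  // captures m

// Roslyn hoists the stateless lambda into a cached static field on <>c,
// so every call hands back the same delegate instance:
Console.WriteLine(ReferenceEquals(MakeStateless(), MakeStateless()));   // True with Roslyn

// A capturing lambda needs a fresh closure object per call:
Console.WriteLine(ReferenceEquals(MakeCapturing(2), MakeCapturing(2))); // False
```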
The async State Machine
A more dramatic example of lowering in modern C# is the async/await pattern. While we see a simple code block, the compiler sees a much more complex structure. As Robert Morgan notes:
... the compiler translates these operations into the simpler arithmetic and memory references implied by the formula. In other words, the level of the flow graph is lowered by replacing higher-level operations by simpler instruction-level operations.
In the case of asynchronous methods, this idea of lowering is taken further. The compiler does not simply produce a sequence of instructions — it transforms the method into a compiler-generated state machine.
So this C# code:
using System.Threading.Tasks;
public class Program {
public async Task Main()
{
await DoSomething();
}
public Task DoSomething()
{
return Task.CompletedTask;
}
}
is lowered to:
using System;
using System.Diagnostics;
using System.Reflection;
using System.Runtime.CompilerServices;
using System.Security;
using System.Security.Permissions;
using System.Threading.Tasks;
[assembly: CompilationRelaxations(8)]
[assembly: RuntimeCompatibility(WrapNonExceptionThrows = true)]
[assembly: Debuggable(DebuggableAttribute.DebuggingModes.Default | DebuggableAttribute.DebuggingModes.IgnoreSymbolStoreSequencePoints | DebuggableAttribute.DebuggingModes.EnableEditAndContinue | DebuggableAttribute.DebuggingModes.DisableOptimizations)]
[assembly: SecurityPermission(SecurityAction.RequestMinimum, SkipVerification = true)]
[assembly: AssemblyVersion("0.0.0.0")]
[module: UnverifiableCode]
[module: RefSafetyRules(11)]
[NullableContext(1)]
[Nullable(0)]
public class Program
{
[CompilerGenerated]
private sealed class <Main>d__0 : IAsyncStateMachine
{
public int <>1__state;
public AsyncTaskMethodBuilder <>t__builder;
[Nullable(0)]
public Program <>4__this;
private TaskAwaiter <>u__1;
private void MoveNext()
{
int num = <>1__state;
try
{
TaskAwaiter awaiter;
if (num != 0)
{
awaiter = <>4__this.DoSomething().GetAwaiter();
if (!awaiter.IsCompleted)
{
num = (<>1__state = 0);
<>u__1 = awaiter;
<Main>d__0 stateMachine = this;
<>t__builder.AwaitUnsafeOnCompleted(ref awaiter, ref stateMachine);
return;
}
}
else
{
awaiter = <>u__1;
<>u__1 = default(TaskAwaiter);
num = (<>1__state = -1);
}
awaiter.GetResult();
}
catch (Exception exception)
{
<>1__state = -2;
<>t__builder.SetException(exception);
return;
}
<>1__state = -2;
<>t__builder.SetResult();
}
void IAsyncStateMachine.MoveNext()
{
this.MoveNext();
}
[DebuggerHidden]
private void SetStateMachine(IAsyncStateMachine stateMachine)
{
}
void IAsyncStateMachine.SetStateMachine(IAsyncStateMachine stateMachine)
{
this.SetStateMachine(stateMachine);
}
}
[AsyncStateMachine(typeof(<Main>d__0))]
[DebuggerStepThrough]
public Task Main()
{
<Main>d__0 stateMachine = new <Main>d__0();
stateMachine.<>t__builder = AsyncTaskMethodBuilder.Create();
stateMachine.<>4__this = this;
stateMachine.<>1__state = -1;
stateMachine.<>t__builder.Start(ref stateMachine);
return stateMachine.<>t__builder.Task;
}
public Task DoSomething()
{
return Task.CompletedTask;
}
}
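The generated MoveNext drives the awaiter pattern you can see above: get an awaiter, check IsCompleted, and either register a continuation or fall straight through. That same pattern can be invoked by hand for an already-completed task; a minimal sketch of the "fast path" branch:

```csharp
using System;
using System.Runtime.CompilerServices;
using System.Threading.Tasks;

// Awaiting an already-completed task, spelled out the way MoveNext does it:
Task<int> task = Task.FromResult(42);
TaskAwaiter<int> awaiter = task.GetAwaiter();

if (awaiter.IsCompleted)
{
    // The "fast path": no suspension, no AwaitUnsafeOnCompleted callback;
    // the state machine falls straight through to GetResult().
    Console.WriteLine(awaiter.GetResult()); // 42
}
```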
The Real-World "Aha!" Moment: When Lowering Saves the Day
Theory is great, but the true power of understanding the compiler's lowering becomes clear during a code review. While implementing a TransactionalExecutionWrapper that ensures all commands run within the same scope and exposes the current transaction, a valid concern was raised.
The Code in Question
using System;
using System.Data;
public class TransactionalExecutionWrapper
{
private readonly IDbConnection _connection;
public TransactionalExecutionWrapper(IDbConnection connection)
{
_connection = connection;
}
public IDbTransaction Transaction { get; private set; }
public void Execute<T>(Func<T> action)
{
using (Transaction = _connection.BeginTransaction())
{
try
{
action();
Transaction.Commit();
}
catch (Exception)
{
Transaction.Rollback();
throw;
}
finally
{
Transaction = null;
}
}
}
}
The Workplace Debate: To Null or Not to Null?
Concern: The using statement is essentially a try/finally block and ensures disposal at the end of the scope. If Transaction = null executes, by the time the using block reaches the finally, it will be trying to call null.Dispose(). The object won't be cleaned up!
At first glance, this looks reasonable. However, this is where compiler lowering changes how we reason about the code.
When the compiler sees a using statement, it does not just copy-paste your variable into a finally block. It lowers the using statement by creating a hidden, local variable. In other words, the using statement is effectively rewritten into a form similar to:
using System;
using System.Data;
using System.Diagnostics;
using System.Reflection;
using System.Runtime.CompilerServices;
using System.Security;
using System.Security.Permissions;
[assembly: CompilationRelaxations(8)]
[assembly: RuntimeCompatibility(WrapNonExceptionThrows = true)]
[assembly: Debuggable(DebuggableAttribute.DebuggingModes.Default | DebuggableAttribute.DebuggingModes.IgnoreSymbolStoreSequencePoints | DebuggableAttribute.DebuggingModes.EnableEditAndContinue | DebuggableAttribute.DebuggingModes.DisableOptimizations)]
[assembly: SecurityPermission(SecurityAction.RequestMinimum, SkipVerification = true)]
[assembly: AssemblyVersion("0.0.0.0")]
[module: UnverifiableCode]
[module: RefSafetyRules(11)]
[NullableContext(1)]
[Nullable(0)]
public class TransactionalExecutionWrapper
{
private readonly IDbConnection _connection;
[CompilerGenerated]
[DebuggerBrowsable(DebuggerBrowsableState.Never)]
private IDbTransaction <Transaction>k__BackingField;
public IDbTransaction Transaction
{
[CompilerGenerated]
get
{
return <Transaction>k__BackingField;
}
[CompilerGenerated]
private set
{
<Transaction>k__BackingField = value;
}
}
public TransactionalExecutionWrapper(IDbConnection connection)
{
_connection = connection;
}
public void Execute<[Nullable(2)] T>(Func<T> action)
{
IDbTransaction dbTransaction2 = (Transaction = _connection.BeginTransaction());
IDbTransaction dbTransaction3 = dbTransaction2;
try
{
try
{
action();
Transaction.Commit();
}
catch (Exception)
{
Transaction.Rollback();
throw;
}
finally
{
Transaction = null;
}
}
finally
{
if (dbTransaction3 != null)
{
dbTransaction3.Dispose();
}
}
}
}
This transformation demonstrates an important point:
- The compiler captures the initial transaction reference in a compiler-generated temporary local variable, dbTransaction3.
- That local variable is used for disposal in the generated finally block.
- The Transaction property is independent and can be modified without affecting disposal.
So when this line executes:
Transaction = null;
it only affects the property value, not the compiler-generated local reference that will be disposed.
This proves Robert Morgan’s point:
The compiler generated the expressions, so the compiler must simplify them.
High-level features like using are not executed directly. They are lowered into explicit control flow with clearly defined lifetime rules for resources. This is why reasoning about correctness must follow the compiler’s model, not just the surface syntax.
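The behavior is easy to verify with a standalone repro. Here MemoryStream stands in for the article's IDbTransaction (its CanRead property flips to false once disposed, so disposal is observable without a fake type), and the local variable current plays the role of the Transaction property:

```csharp
using System;
using System.IO;

var probe = new MemoryStream();
MemoryStream current;

using (current = probe)
{
    current = null; // clearing our reference, like Transaction = null in the wrapper
}

// The using statement captured the stream in a hidden local, so it was
// disposed regardless of what happened to our visible reference:
Console.WriteLine(probe.CanRead); // False: disposed streams report CanRead == false
```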
What about Garbage Collection?
It is important to separate disposal from garbage collection, as the two are often confused. Setting Transaction = null; does not trigger garbage collection, free memory, or dispose the object. It only removes one reference to the transaction object. After Dispose() executes, the object may become eligible for garbage collection, but the GC will only collect it later, when a garbage collection cycle occurs.
Conclusion: Why Lowering Matters
When we write code, we use high-level abstractions like using, await, or foreach to keep our logic clean. But underneath, the compiler (the "Architect") rewrites these abstractions into simpler operations, introducing variables, restructuring control flow, and generating helper code.
Understanding lowering changes how developers think about code: no longer abstract intentions, but concrete operations that reveal how the code is transformed and executed. This insight turns performance tuning, resource management, and debugging from guesswork into more predictable engineering.
By understanding what the compiler produces and how the runtime executes it, developers gain a deeper mental model of their systems—writing code with greater confidence, clarity, and control.
References & Further Reading
- Compilers: Principles, Techniques, and Tools, known as "the Dragon Book"
- Building an Optimizing Compiler by Robert Morgan
- Roslyn source code
- Eric Lippert's blog
- Ahead-of-time lowering and compilation in JAX
- How to lower an IR?
- Overview of the compiler, Rust Compiler Development Guide
- Optimising Compilers, University of Cambridge
- CS 4120: Introduction to Compilers (Spring 2021), Cornell University


