Basic Concepts of C# Numeric Types — From Integers to SIMD, for LLM-Ready Thinking
Most C# developers use numeric types every day: `int`, `long`, `float`, `double`, `decimal`.
But when you start asking deeper, precision- and performance-focused questions, things get interesting:
- What actually happens when an `int` overflows?
- Why is `0.1 + 0.2` not exactly `0.3` with `double`?
- When should I use `decimal` instead of `double` in real systems?
- How do literal suffixes (`f`, `m`, `L`) change IL and JIT behavior?
- How can I push numeric operations into SIMD using `Vector<T>`?
- And, crucially for this era: how do I talk about numeric types with LLMs so they reason like performance engineers, not like tutorial bots?
In this article, we’ll walk through a ShowNumericTypes() module that treats numeric types as a full-stack concept: from Roslyn and IL to JIT, CPU, and SIMD. Then we’ll connect that mental model to better prompts for LLMs so you can get production-grade answers instead of shallow ones.
If you can run a console app, you can follow this.
Table of Contents
- Mental Model: How Numeric Types Flow Through the .NET Stack
- The Demo File: `ShowNumericTypes()` Overview
- Basic Numeric Types: Int, Long, Float, Double, Decimal
- Integer Range & Overflow: `checked` vs `unchecked`
- Floating-Point Precision: IEEE-754 and Bit Patterns
- Decimal: Base-10 Arithmetic for Money
- Numeric Literals & Type Inference: How Suffixes Change IL
- Vectorization & SIMD: `Vector<T>` for Real Performance
- Using This Mental Model to Get More from LLMs
- Numeric-Type Mastery Checklist (Top-1% Developer Mindset)
1. Mental Model: How Numeric Types Flow Through the .NET Stack
Here’s the big-picture pipeline that every numeric value walks through:
- C# source: `int x = 42; double y = 3.1416;`
- Roslyn compiles this into IL with concrete stack types: `int32`, `int64`, `float32`, `float64`, `valuetype System.Decimal`, etc.
- At runtime, the JIT compiler turns IL into machine code, deciding:
  - Which CPU registers to use (general-purpose vs floating-point/SIMD).
  - Whether to insert overflow checks (`add.ovf`) or not.
- The CPU actually executes:
  - Integer ALU instructions (`add`, `mul`, `imul`, etc.).
  - Floating-point/SIMD instructions (`addss`, `addps`, `vmulps`, …).
  - Multi-word integer sequences for `decimal`.

💡 Key idea: `int`, `double`, `decimal` are contracts between your code, the CLR, the JIT, and the CPU – not just “types in C#”.
When you ask LLMs for help, frame questions in terms of this pipeline: “Roslyn → IL → JIT → CPU”.
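To make the pipeline tangible, here’s one line traced through every stage. This is a conceptual sketch: the IL is what Roslyn emits for a local variable, and the assembly is a typical unoptimized x64 form (an optimizing JIT may keep the constant in a register or fold it away entirely).

```csharp
int x = 42;                    // 1. C# source
// 2. IL (Roslyn):             ldc.i4.s 42   then   stloc.0
// 3. JIT (x64, unoptimized):  mov dword ptr [rsp+4], 42
// 4. CPU:                     a single integer store; no floating-point unit involved
```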
2. The Demo File: ShowNumericTypes() Overview
We’ll use this entry point as our guide:
partial class Program
{
// Call ShowNumericTypes() from your own Main() in another partial Program.
static void ShowNumericTypes()
{
int integerNumber = 42;
double doubleNumber = 3.1416d;
float floatingNumber = 274f;
long longNumber = 300_200_100L;
decimal monetaryNumber = 99.99m;
Console.WriteLine($"Entero: {integerNumber}");
Console.WriteLine($"Double: {doubleNumber}");
Console.WriteLine($"Float: {floatingNumber}");
Console.WriteLine($"Long: {longNumber}");
Console.WriteLine($"Decimal: {monetaryNumber}");
BasicNumericTypesIntro();
IntegerRangeAndOverflow();
FloatingPointPrecision();
DecimalForMoney();
NumericLiteralsAndTypeInference();
VectorizationAndSIMD();
}
}
Each helper method is a “lab” focused on one part of the numeric story. Together they form a teaching file you can put into a public GitHub repo and reuse when talking with LLMs.
3. Basic Numeric Types: Int, Long, Float, Double, Decimal
BasicNumericTypesIntro() clarifies the fundamentals and aligns them with IL and CPU views:
static void BasicNumericTypesIntro()
{
int integerNumber = 42; // System.Int32
double doubleNumber = 3.1416d; // System.Double
float floatingNumber = 274f; // System.Single
long longNumber = 300_200_100L; // System.Int64
decimal monetaryNumber = 99.99m; // System.Decimal
Console.WriteLine($"[Basic] Int: {integerNumber}");
Console.WriteLine($"[Basic] Double: {doubleNumber}");
Console.WriteLine($"[Basic] Float: {floatingNumber}");
Console.WriteLine($"[Basic] Long: {longNumber}");
Console.WriteLine($"[Basic] Decimal: {monetaryNumber}");
}
Conceptual IL:
.locals init (
[0] int32 integerNumber,
[1] float64 doubleNumber,
[2] float32 floatingNumber,
[3] int64 longNumber,
[4] valuetype [System.Runtime]System.Decimal monetaryNumber
)
And to the CPU:
- `int` / `long` → general-purpose integer registers (EAX/RAX/RCX/…)
- `float` / `double` → floating-point/SIMD registers (XMM/YMM)
- `decimal` → handled via multiple 32-bit integer operations in software

⚠ This is why `decimal` is more expensive than `double`: the CPU has native hardware for binary floating point, but not for `decimal`’s base-10 representation.
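You can confirm these storage sizes from safe C# code, since `sizeof` on the built-in numeric types is a compile-time constant. A minimal sketch (the method name `ShowNumericSizes` is ours, not part of the demo file):

```csharp
static void ShowNumericSizes()
{
    Console.WriteLine($"int:     {sizeof(int)} bytes");     // 4  -> fits one 32-bit register
    Console.WriteLine($"long:    {sizeof(long)} bytes");    // 8  -> fits one 64-bit register
    Console.WriteLine($"float:   {sizeof(float)} bytes");   // 4  -> one lane of an XMM register
    Console.WriteLine($"double:  {sizeof(double)} bytes");  // 8  -> one XMM register
    Console.WriteLine($"decimal: {sizeof(decimal)} bytes"); // 16 -> no register fits it; software math
}
```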
4. Integer Range & Overflow: checked vs unchecked
IntegerRangeAndOverflow() shows how overflow works in practice:
static void IntegerRangeAndOverflow()
{
int max = int.MaxValue;
int min = int.MinValue;
Console.WriteLine($"[IntRange] int.MinValue = {min}, int.MaxValue = {max}");
int overflowUnchecked = unchecked(max + 1);
Console.WriteLine($"[Overflow] unchecked(max + 1) = {overflowUnchecked}");
try
{
int overflowChecked = checked(max + 1);
Console.WriteLine($"[Overflow] checked(max + 1) = {overflowChecked}");
}
catch (OverflowException ex)
{
Console.WriteLine($"[Overflow] checked(max + 1) threw: {ex.GetType().Name}");
}
}
Two’s complement and wrapping
- `int` is 32-bit, range `[-2^31, 2^31 - 1]`.
- `int.MaxValue` = `0x7FFFFFFF` (= 2,147,483,647).
- Adding 1 in hardware wraps to `0x80000000` (= -2,147,483,648).

IL opcodes:

- `add` → unchecked, wraps on overflow.
- `add.ovf` → checked, throws `OverflowException` on overflow.

Design rule:

- Use `checked` around security-critical or financial arithmetic to catch bugs early.
- Use `unchecked` consciously in hot paths where you’ve proven overflow cannot happen or where wrap-around is desired (e.g., hashes — see the sketch below).
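Here’s what the “wrap-around is desired” case looks like in practice: a classic multiply-and-add hash combiner. This is a minimal sketch; the name `CombineHash` and the constants 17/31 are illustrative, not from the demo file.

```csharp
// Hash combining *wants* modular (wrapping) arithmetic, so unchecked states
// that overflow here is by design. It also keeps the code correct in projects
// compiled with <CheckForOverflowUnderflow>true</CheckForOverflowUnderflow>.
static int CombineHash(ReadOnlySpan<int> values)
{
    int hash = 17;
    foreach (int v in values)
    {
        hash = unchecked(hash * 31 + v); // wraps silently past int.MaxValue
    }
    return hash;
}
```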
💬 LLM prompt idea
“Given my `IntegerRangeAndOverflow()` method, explain the IL (`add` vs `add.ovf`) and show the equivalent x86-64 assembly, including how the CPU flags are used to detect overflow.”
5. Floating-Point Precision: IEEE-754 and Bit Patterns
FloatingPointPrecision() explores why 0.1 + 0.2 is not exactly 0.3:
static void FloatingPointPrecision()
{
double a = 0.1;
double b = 0.2;
double c = a + b;
Console.WriteLine($"[FP] 0.1 + 0.2 = {c:R} (R = round-trip format)");
long rawBits = BitConverter.DoubleToInt64Bits(c);
Console.WriteLine($"[FP] Bits of (0.1+0.2): 0x{rawBits:X16}");
float fx = 1f / 10f;
double dx = 1d / 10d;
Console.WriteLine($"[FP] float 1/10 = {fx:R}");
Console.WriteLine($"[FP] double 1/10 = {dx:R}");
}
IEEE-754 binary64 layout
- 1 bit sign
- 11 bits exponent (biased)
- 52 bits fraction (mantissa)
Values like 0.1 and 0.2 are repeating fractions in base-2, so the nearest representable values are stored. Operating on those approximations yields a result that is near but not exactly 0.3.
For most scientific and graphics workloads, this is fine. For financial work, it’s usually not.
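To see the layout for yourself, here’s a minimal sketch that splits a double into its three fields (the helper name `DumpDoubleBits` is ours). Calling it on 0.1 shows the stored value is actually slightly above 0.1 — the nearest binary64 value is 0.1000000000000000055511151231257827….

```csharp
// Decompose a double into its IEEE-754 binary64 fields.
static void DumpDoubleBits(double value)
{
    long bits = BitConverter.DoubleToInt64Bits(value);
    long sign     = (bits >> 63) & 0x1;              // 1 bit
    long exponent = ((bits >> 52) & 0x7FF) - 1023;   // 11 bits, bias 1023 removed
    long fraction = bits & 0xF_FFFF_FFFF_FFFF;       // 52 bits
    Console.WriteLine($"{value:R} -> sign={sign}, exponent=2^{exponent}, fraction=0x{fraction:X13}");
}

// Example: DumpDoubleBits(0.1); DumpDoubleBits(0.2); DumpDoubleBits(0.1 + 0.2);
```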
💬 LLM prompt idea
“Using `FloatingPointPrecision()` as context, show how IEEE-754 encodes 0.1 and 0.2, and explain how the rounding error accumulates when adding them. Include hex and binary representations.”
6. Decimal: Base-10 Arithmetic for Money
DecimalForMoney() compares decimal and double side by side:
static void DecimalForMoney()
{
decimal price = 19.99m;
decimal tax = 0.16m;
decimal total1 = price * (1 + tax);
Console.WriteLine($"[Decimal] price = {price}, tax = {tax}, total = {total1}");
double priceD = 19.99;
double taxD = 0.16;
double total2D = priceD * (1 + taxD);
Console.WriteLine($"[Decimal] double total ≈ {total2D:R}");
}
Decimal layout (simplified)
- 1 bit sign
- 96-bit integer significand
- A scale factor (a power of 10 in the range 0–28, stored in the flags word) indicating where the decimal point sits
This design allows exact representation of decimal fractions like 0.1, 0.01, 19.99, etc.
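You can inspect this layout directly: `decimal.GetBits()` returns the four 32-bit words backing a value. A minimal sketch:

```csharp
// The first three ints hold the 96-bit significand (low, mid, high);
// the fourth holds the sign bit (bit 31) and the scale (bits 16-23).
int[] parts = decimal.GetBits(19.99m);
int scale = (parts[3] >> 16) & 0xFF;
bool isNegative = parts[3] < 0;
Console.WriteLine($"significand (low word) = {parts[0]}, scale = {scale}, negative = {isNegative}");
// Prints: significand (low word) = 1999, scale = 2 -> 19.99 is stored as 1999 / 10^2.
```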
Tradeoff:
- ✔ Exact cents for money.
- ✖ Slower math (no direct hardware instructions, implemented in software).
Rule of thumb:
- Use `double` for physical measurements, graphics, simulations.
- Use `decimal` for prices, invoices, and anything tied to human money — see the sketch below.
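A quick way to watch this rule of thumb matter: accumulate one cent a million times. This is a minimal sketch; the exact double output varies by platform, but it will not be exactly 10000.

```csharp
double dSum = 0;
decimal mSum = 0m;
for (int i = 0; i < 1_000_000; i++)
{
    dSum += 0.01;   // adds a value binary64 cannot represent exactly
    mSum += 0.01m;  // adds an exact base-10 value
}
Console.WriteLine($"double:  {dSum:R}"); // close to, but not exactly, 10000
Console.WriteLine($"decimal: {mSum}");   // exactly 10000.00
```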
7. Numeric Literals & Type Inference: How Suffixes Change IL
NumericLiteralsAndTypeInference() shows how small syntax choices affect types and IL:
static void NumericLiteralsAndTypeInference()
{
var x = 42; // int
var y = 42L; // long
var z = 3.14; // double
var q = 3.14f; // float
var r = 3.14m; // decimal
Console.WriteLine($"[Literals] x:int={x}, y:long={y}, z:double={z}, q:float={q}, r:decimal={r}");
long big = 1_000_000_000_000L;
Console.WriteLine($"[Literals] big long = {big}");
}
Default literal rules:
- `42` → `int`
- `42L` → `long`
- `42u` → `uint`
- `42UL` → `ulong`
- `3.14` → `double`
- `3.14f` → `float`
- `3.14m` → `decimal`
Digit separators (_) are ignored by the compiler but help readability.
Conceptual IL:
ldc.i4.s 42 // int
ldc.i8 42 // long
ldc.r8 3.14 // double
ldc.r4 3.14 // float
Numeric suffixes are tiny characters with big downstream effects on precision, performance, and interop.
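One practical consequence: without the right suffix, some assignments don’t compile at all, while others silently change type. A minimal sketch:

```csharp
float f1 = 3.14f;     // OK: float literal
// float f2 = 3.14;   // CS0664: a double literal can't implicitly convert to float
decimal m1 = 19.99m;   // OK: decimal literal
// decimal m2 = 19.99; // CS0664 again: double literal can't implicitly convert to decimal
long big = 1_000_000_000_000; // OK even without L: the value doesn't fit in int,
                              // so the compiler types the literal as long
```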
8. Vectorization & SIMD: Vector<T> for Real Performance
VectorizationAndSIMD() demonstrates data-parallel numeric work using System.Numerics.Vector<T>:
static void VectorizationAndSIMD()
{
float[] dataA = { 1, 2, 3, 4, 5, 6, 7, 8 };
float[] dataB = { 10, 20, 30, 40, 50, 60, 70, 80 };
float[] result = new float[dataA.Length];
if (Vector.IsHardwareAccelerated)
{
int width = Vector<float>.Count;
int i = 0;
for (; i <= dataA.Length - width; i += width)
{
var va = new Vector<float>(dataA, i);
var vb = new Vector<float>(dataB, i);
var vr = va * vb;
vr.CopyTo(result, i);
}
for (; i < dataA.Length; i++)
{
result[i] = dataA[i] * dataB[i];
}
}
else
{
for (int i = 0; i < dataA.Length; i++)
{
result[i] = dataA[i] * dataB[i];
}
}
Console.Write("[SIMD] dataA * dataB = ");
foreach (var v in result)
{
Console.Write(v + " ");
}
Console.WriteLine();
}
What’s happening underneath?
- `Vector<float>` maps to a hardware SIMD width: e.g., 4 floats (128-bit) or 8 floats (256-bit).
- The JIT turns `va * vb` into one SIMD instruction: e.g., `mulps` or `vmulps`, multiplying N elements in parallel.
- This can easily yield 4x–8x speedups (or more) on large arrays compared to scalar loops — see the runtime check below.
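You can ask the runtime what it actually picked; a minimal sketch (assumes `using System.Numerics;`):

```csharp
Console.WriteLine($"Vector.IsHardwareAccelerated = {Vector.IsHardwareAccelerated}");
Console.WriteLine($"Vector<float>.Count  = {Vector<float>.Count}");  // typically 8 with AVX2 (256-bit), 4 with SSE (128-bit)
Console.WriteLine($"Vector<double>.Count = {Vector<double>.Count}"); // typically 4 with AVX2, 2 with SSE
```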
💬 LLM prompt idea
“Given my `VectorizationAndSIMD()` method, show the IL and a plausible x64 AVX2 assembly sequence for the vectorized part, and explain the memory alignment and cache considerations.”
Now you’re talking to the LLM like a performance engineer.
9. Using This Mental Model to Get More from LLMs
Here’s how to turn this knowledge into better LLM conversations.
9.1 Always Anchor to a Real Code Sample
Instead of:
“Explain floats vs decimals.”
Ask:
“Using my `DecimalForMoney()` method from `ShowNumericTypes()`, compare the IL and JIT behavior of the `decimal` and `double` branches, and explain when each is appropriate for financial applications.”
Context + specificity = better answers.
9.2 Ask Across Layers (Language → IL → JIT → CPU)
Examples:
- “For `IntegerRangeAndOverflow()`, show how `checked` changes the IL opcodes and what CPU flags are used to detect overflow.”
- “For `VectorizationAndSIMD()`, explain how `Vector.IsHardwareAccelerated` and `Vector<float>.Count` are determined at runtime.”
- “For `NumericLiteralsAndTypeInference()`, list the IL `ldc.*` instructions generated for each literal and how they map to stack types.”
9.3 Turn Questions into Experiments
Ask LLMs not just for answers, but for microbenchmark plans:
“Design a BenchmarkDotNet suite that compares scalar vs SIMD multiplication for large float arrays, and predict at what size SIMD should clearly win.”
Now the LLM is a collaborator in your performance work.
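To ground that prompt, here’s a minimal BenchmarkDotNet sketch of the suite it asks for. The class and method names are ours, and it assumes the BenchmarkDotNet NuGet package is installed.

```csharp
using System;
using System.Numerics;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class MultiplyBenchmarks
{
    private float[] _a = null!, _b = null!, _result = null!;

    [Params(1_000, 100_000, 1_000_000)]
    public int Size;

    [GlobalSetup]
    public void Setup()
    {
        var rng = new Random(42);
        _a = new float[Size];
        _b = new float[Size];
        _result = new float[Size];
        for (int i = 0; i < Size; i++) { _a[i] = rng.NextSingle(); _b[i] = rng.NextSingle(); }
    }

    [Benchmark(Baseline = true)]
    public void Scalar()
    {
        for (int i = 0; i < _a.Length; i++) _result[i] = _a[i] * _b[i];
    }

    [Benchmark]
    public void Simd()
    {
        int width = Vector<float>.Count;
        int i = 0;
        for (; i <= _a.Length - width; i += width)
            (new Vector<float>(_a, i) * new Vector<float>(_b, i)).CopyTo(_result, i);
        for (; i < _a.Length; i++) _result[i] = _a[i] * _b[i];
    }
}

// Entry point: BenchmarkRunner.Run<MultiplyBenchmarks>();
```

Run it in Release mode and compare the ratio column across sizes — that’s your prediction-versus-measurement loop.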
10. Numeric-Type Mastery Checklist (Top-1% Developer Mindset)
Use this as a self-check – and as a prompt template library when talking to LLMs.
- [ ] I understand how `int`, `long`, `float`, `double`, and `decimal` are represented at the IL and CPU levels.
- [ ] I can explain two’s complement, wrapping behavior, and the effect of `checked` vs `unchecked`.
- [ ] I understand IEEE-754 binary floating point and why some decimal fractions are not exact.
- [ ] I know when to use `decimal` for money and what performance tradeoffs it introduces.
- [ ] I can predict the default type of numeric literals and control it with suffixes.
- [ ] I understand how `Vector<T>` maps to SIMD and when it can accelerate numeric workloads.
- [ ] I routinely ask LLMs to analyze numeric code across layers: language, IL, JIT, CPU.
- [ ] I use microbenchmarks (e.g., BenchmarkDotNet) to validate numeric performance hypotheses.
- [ ] I use microbenchmarks (e.g., BenchmarkDotNet) to validate numeric performance hypotheses.
Once you think about numeric types this way, you stop writing `int x = 1;` blindly and start thinking:
“Which representation do I want here? How will IL, JIT, and CPU treat it? What’s the impact on precision, performance, and scaling?”
And when you bring that mindset into your LLM prompts, models stop giving you beginner-level explanations and start acting like numeric systems co-pilots.
Happy computing — and may your overflows be intentional, your decimals exact, and your SIMD lanes always full.
