Axelang is a new systems programming language designed around the following question:
Why is concurrency an afterthought, a library, or a bolt-on runtime — as opposed to a language-level concept?
Axe resolves this by making concurrency part of the core language semantics: rather than forcing developers to wrestle with bolted-on threads, locks, futures, and external libraries, writing Axe pushes you to think concurrently by default.
Why Axe Exists
Modern CPUs stopped being about single-thread speed a long time ago. Performance improvements now generally come from:
- Adding more cores
- Adding more threads
- Hiding I/O latency
- Overlapping compute with other work
But most languages still make concurrency:
- verbose
- error-prone
- tacked on after the fact
C, C++, Java, and Rust rely heavily on APIs, libraries, and patterns for concurrency. Go introduced goroutines, but concurrency there is still a runtime abstraction, not part of the language's type system. Concurrency should feel like a natural mode of thinking, not a box of add-on primitives.
Concurrency as a Language Construct
A simple Axe program might include:
main {
    parallel {
        single {
            task1();
            task2();
        }
        task3();
    }
}
This reads almost like pseudocode:
- parallel {} introduces a parallel execution region
- single {} means “one thread runs this block”
- work-sharing and scheduling are handled by the compiler + runtime
Axe’s concurrency constructs are structured, visible in the syntax tree, easy for static analysis, and safe to optimize.
Instead of bolting on OpenMP-style pragmas, heavy macro systems, or attributes, Axe builds these constructs directly into the grammar.
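For contrast, here is a minimal sketch of the same shape written with OpenMP pragmas in C++ (task1 through task3 are stubbed out here, and since the post does not spell out Axe's exact scheduling semantics, this is only an approximation):

    #include <cstdio>

    void task1() { std::puts("task1"); }
    void task2() { std::puts("task2"); }
    void task3() { std::puts("task3"); }

    int main() {
        // Open a parallel region: a team of threads executes this block.
        #pragma omp parallel
        {
            // Exactly one thread of the team runs this block; the others
            // wait at the implicit barrier at its end.
            #pragma omp single
            {
                task1();
                task2();
            }
            // Every thread in the team runs task3.
            task3();
        }
        return 0;
    }

The point is not the pragmas themselves (compile with -fopenmp) but that the parallel structure lives in annotations layered on top of the language rather than in its grammar.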
Platform-Aware Code at Compile Time
Axe also supports compile-time dispatch based on OS or platform:
platform windows {
    println "Running on Windows";
}
platform posix {
    println "Running on POSIX";
}
This allows the compiler to eliminate unused branches, generate smaller binaries, and avoid runtime platform checks.
Your code can remain portable without littering your project with #ifdef walls.
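For comparison, the preprocessor version of the same dispatch in C++ looks like this (using the conventional _WIN32 macro; nothing here is Axe-specific):

    #include <cstdio>

    int main() {
        // The unused branch is stripped by the preprocessor before the
        // compiler ever sees it, but the check lives outside the grammar,
        // so tooling only ever sees one side at a time.
    #ifdef _WIN32
        std::puts("Running on Windows");
    #else
        std::puts("Running on POSIX");
    #endif
        return 0;
    }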
A Powerful Dispatch System… Without Generics
One standout feature is Axe’s compile-time overload map:
overload println(x: generic) {
    string => println_str;
    i32 => println_i32;
}(x)
This looks simple, but it’s extremely powerful:
- Axe can resolve overloads statically
- No generics required
- No template instantiation explosion
- No runtime dispatch cost
- Works with user-defined types
This lets Axe provide generic-friendly ergonomics without introducing template systems or complex type parameterization.
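As a rough analogue, plain function overloading in C++ gives the same static-resolution property. The sketch below reuses the println_str and println_i32 names from the Axe example, but the wrappers and their signatures are assumptions made for illustration:

    #include <cstdint>
    #include <cstdio>
    #include <string>

    // Concrete implementations, one per supported type.
    void println_str(const std::string& s) { std::printf("%s\n", s.c_str()); }
    void println_i32(std::int32_t v)       { std::printf("%d\n", v); }

    // Plain overloads: the compiler picks the target statically.
    void println(const std::string& s) { println_str(s); }
    void println(std::int32_t v)       { println_i32(v); }

    int main() {
        println(std::string("hello"));  // resolves to println_str at compile time
        println(std::int32_t{42});      // resolves to println_i32 at compile time
        return 0;
    }

Each call site is bound at compile time with no runtime dispatch; the difference is that Axe expresses the type-to-implementation mapping as a single declarative table rather than a set of scattered overload definitions.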
Familiar, But Cleaner
Axe feels like a blend of:
- C-style syntax
- Go-like readability
- Rust-level respect for correctness
- Zig-style minimalism
But it doesn't copy any of them. Axe aims to be:
- easier than C++
- more predictable than Rust’s borrow checker
- more explicit than Go
- safer than straight C
If you know a systems language, you can read Axe code in minutes.
A Language Designed to Scale
Axe has:
- models (its version of simple data types)
- functions with clear signatures
- compile-time branching (for platforms, modes, and toolchains)
- parallel loops and reduction built in
Example parallel loop:
parallel for mut i = 0 to n reduce(+:sum) {
    sum = sum + i;
}
The compiler turns this into efficient work sharing, correct reduction handling, and thread-safe execution.
It does not require a single pragma, library import, or thread API.
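For reference, the nearest C++ equivalent leans on exactly such a pragma, OpenMP's reduction clause (a loose comparison, since Axe's scheduling details are not described in this post):

    #include <cstdio>

    int main() {
        const long n = 1000000;
        long sum = 0;

        // Each thread accumulates into a private copy of sum; OpenMP
        // combines the partial sums with + when the loop finishes.
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < n; ++i) {
            sum += i;
        }

        std::printf("sum = %ld\n", sum);
        return 0;
    }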
You can read more or try it at: