While we in embedded-land mostly work with either C or C++ in our professional ventures, often with proprietary tooling and whatnot, I always find some much-appreciated respite in tinkering with alternatives when wasting time on hobbies and side-projects.
In the last year or so, I started exploring other avenues for solving my embedded headaches and landed on the nim programming language.
nim compiles to c (and c++, objective-c, js)
This means that any target with an existing c compiler is automatically supported by nim.
Not only that, but calling c code (and c++, objective-c, etc.) is really simple to do and, from what I see, 0-overhead.
You can also easily check the generated C sources, which are human-readable, even if quite noisy: just specify a --nimcache directory when compiling and the generated .c files will end up there.
writing, building, shipping
Like all modern stuff, we have a sane module system (bye bye text-based #includes), an ergonomic compiler and a nice, basic package manager.
The compiler lets you use a script-like subset of the language as a format for configuration files, which may contain compiler switches and system-specific flags. It integrates seamlessly with the nimble package manager, which lets you write tasks that run either after building or as standalone targets of sorts.
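For instance, here is a rough sketch of what such a task could look like in a .nimble file. Everything in it is hypothetical (file names, programmer and serial port), it is just meant to show the shape:

# hypothetical "flash" task: build the firmware, convert it to Intel HEX,
# then hand it over to avrdude (programmer and port will differ per setup)
task flash, "Build the firmware and flash it to the board":
  exec "nim c avr_hw.nim"
  exec "avr-objcopy -O ihex avr_hw avr_hw.hex"
  exec "avrdude -c arduino -p m328p -P /dev/ttyUSB0 -U flash:w:avr_hw.hex:i"

You would then invoke it with nimble flash.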
writing a simple hello world for AVR is trivial (ish)
This is a valid program that can run on an x64 intel CPU and, without any changes, on an 8-bit atmega microcontroller:
proc main =
  while true:
    discard

main()
To compile for AVR, you just have to provide some extra configuration on how to handle critical errors without the os covering your back:
# panicoverride.nim
proc exit(code: int) {.importc, header: "<stdlib.h>", cdecl.}

{.push stack_trace: off, profiler: off.}

proc rawoutput(s: string) = discard

proc panic(s: string) =
  rawoutput(s)
  while true:
    discard
  exit(1)

{.pop.}
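If you want panic messages to actually end up somewhere, rawoutput can forward them to avr-libc's printf instead; a minimal sketch, assuming stdout has been retargeted to a UART elsewhere in the firmware:

proc printf(frmt: cstring) {.varargs, importc, header: "<stdio.h>", cdecl.}

proc rawoutput(s: string) =
  printf("%s\n", s)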
We also need a handful of compiler flags:
# config.nims
switch("os", "standalone")
switch("cpu", "avr")
switch("gc", "none")
switch("stackTrace", "off")
switch("lineTrace", "off")
switch("passC", "-mmcu=atmega328p")
switch("passL", "-mmcu=atmega328p")
switch("nimcache", ".nimcache")
switch("avr.standalone.gcc.options.linker", "-static")
switch("avr.standalone.gcc.exe", "avr-gcc")
switch("avr.standalone.gcc.linkerexe", "avr-gcc")
when defined(windows):
  switch("gcc.options.always", "-w -fmax-errors=3")
Notice that this is where you specify which c compiler to use to actually generate the final binaries.
The compiler is identified by its cpu.os.compiler name (avr.standalone.gcc):
- The compiler executable is specified through the exe property.
- The linker executable is specified through the linkerexe property.
Let's dump these code snippets into their respective files: avr_hw.nim (the main program), panicoverride.nim and config.nims.
Run the compiler...
nim c avr_hw.nim
...and inspect the resulting binary (disassembled here with avr-objdump):
avr_hw: file format elf32-avr
Disassembly of section .text:
00000000 <.text>:
0: 0c 94 38 00 jmp 0x70
4: 0c 94 4a 00 jmp 0x94
8: 0c 94 4a 00 jmp 0x94
...
70: 11 24 eor r1, r1
72: 1f be out 0x3f, r1
74: cf ef ldi r28, 0xFF
76: d0 e1 ldi r29, 0x10
78: de bf out 0x3e, r29
7a: cd bf out 0x3d, r28
7c: 21 e0 ldi r18, 0x01
7e: a0 e0 ldi r26, 0x00
80: b1 e0 ldi r27, 0x01
82: 01 c0 rjmp .+2
84: 1d 92 st X+, r1
86: ae 30 cpi r26, 0x0E
88: b2 07 cpc r27, r18
8a: e1 f7 brne .-8
8c: 0e 94 52 00 call 0xa4
90: 0c 94 53 00 jmp 0xa6
94: 0c 94 00 00 jmp 0
98: ff cf rjmp .-2
9a: 08 95 ret
9c: 08 95 ret
9e: ff cf rjmp .-2
a0: ff cf rjmp .-2
a2: ff cf rjmp .-2
a4: ff cf rjmp .-2
a6: f8 94 cli
a8: ff cf rjmp .-2
Worked like a charm!
foreign function interface
Want to use _delay_ms from util/delay.h?
proc delay_ms(ms: uint16) {.importc: "_delay_ms", header: "util/delay.h".}

proc main =
  while true:
    delay_ms(1000)

main()
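One practical detail not shown above: util/delay.h derives its busy-wait loops from the F_CPU macro, so it should be defined to match your clock speed. One way to do that (assuming a 16 MHz part) is another passC switch in config.nims:

# tell avr-libc how fast the MCU runs, so _delay_ms is calibrated correctly
switch("passC", "-DF_CPU=16000000UL")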
As shown, it's really easy to integrate existing c code in your programs; you don't have to rewrite everything in nim. Note that there are also tools to translate c code to nim (c2nim, futhark).
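If you'd rather wrap things by hand, other avr-libc helpers follow the same pattern as above. As a quick sketch (these two lines are mine, not from the project's code), the global interrupt enable/disable macros can be pulled in like this:

# avr-libc's global interrupt enable/disable helpers
proc sei*() {.importc, header: "<avr/interrupt.h>".}
proc cli*() {.importc, header: "<avr/interrupt.h>".}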
metaprogramming
Metaprogramming is nim's killer feature, in my opinion.
nim has:
- Compile-time functions, which get executed in a VM embedded within the compiler; the VM supports a subset of the language, which can thus be evaluated at compile time.
const data = staticRead("my_file") # the file gets read at compile time!
- Generics and concepts, which, from what I have observed, are completely zero-cost at runtime and only exist in nim code, not in the generated c. Using them in combination with typeclasses enables quite powerful patterns.
type MappedIoRegister*[T: uint8|uint16] = distinct uint16 ## \
  ## A register that can either contain a byte-sized or word-sized datum.

template ioPtr[T](a: MappedIoRegister[T]): ptr T =
  cast[ptr T](a)
- Templates, which are essentially hygienic c macros (with scoped symbols): a substitution mechanism that gives you "true inlining". Combining this with operator overloading is really nice.
import volatile

template `[]`*[T](p: MappedIoRegister[T]): T =
  volatile.volatileLoad(ioPtr[T](p))

template `[]=`*[T](p: MappedIoRegister[T]; v: T) =
  volatile.volatileStore(ioPtr[T](p), v)
- Macros, which are special functions that take in Abstract Syntax Tree (AST) representations of nim code and spew out ASTs that are transformations of their inputs. This allows for some pretty crazy stuff.
import std/macros

# VectorInterrupt enum definition omitted..
template vectorDecl(n: int): string =
  "$1 __vector_" & $n & """
$3 __attribute__((__signal__,__used__,__externally_visible__));
$1 __vector_""" & $n & "$3"

macro isr*(v: static[VectorInterrupt], p: untyped): untyped =
  ## Turns the passed procedure `p` into an interrupt
  ## service routine.
  var pnode = p
  if p.kind == nnkStmtList:
    pnode = p[0]
  expectKind(pnode, nnkProcDef)
  addPragma(pnode, newIdentNode("exportc"))
  addPragma(pnode,
    newNimNode(nnkExprColonExpr).add(
      newIdentNode("codegenDecl"),
      newLit(vectorDecl(ord(v)))
    )
  )
  pnode

# Now we can map functions to an interrupt
proc timer0_compa_isr() {.isr(Timer0CompAVect).} =
  # do stuff when the timer0 compare A interrupt gets triggered
  discard
All code snippets are taken from the avr_io library, a small project that I maintain.
Note that we can use exportc to generate code that will interact with c, and codegenDecl to manipulate the c-generated declaration: this is incredibly powerful, especially for writing libraries.
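codegenDecl also works on variables. As a hedged sketch (the variable and the section are made up, not part of avr_io), you can use it to control where a piece of data ends up in the final image:

# hypothetical: keep a counter out of the zero-initialized .bss section,
# so its value survives a watchdog reset
var bootCount {.codegenDecl: "__attribute__((section(\".noinit\"))) $# $#".}: uint8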
As a rule of thumb, use these features in this order and go to the next one only if needed: non-meta stuff -> generics -> templates -> macros.
memory management
Memory management is highly configurable in nim.
From v2 onward, the default memory management policy is based on reference counting and also handles cycles (-mm:orc).
You can also choose an easier-to-reason-about strategy, which uses reference counting without cycle support (-mm:arc). Notice that both are deterministic and not stop-the-world.
By experimenting, I noticed that binary size can become a bit larger with orc/arc, so if that is really a problem you can always opt out of managing your memory at all (-mm:none).
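If you prefer keeping everything in one place, the strategy can also be pinned from config.nims instead of the command line:

# pick the memory management strategy for this build
switch("mm", "none")  # or "arc" / "orc" if you want managed types available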
Why does that matter? Because nim has managed types, but I have not experimented with them enough to form an opinion on how well they behave in bare-metal situations!