Debugging Rust Cortex-M with VS Code: Take 2

Last time, I wrote about how to configure VS Code to debug Rust Cortex-M programs, but you know what's better than writing documentation for how to do a thing?

Automating it.

I'm very happy to announce that my PR adding a basic debug configuration for VS Code has been merged into the cortex-m-quickstart template. VS Code is now supported out of the box when you cargo generate a new project. When I say "out of the box", I really mean "out of the box": I've also added a QEMU configuration, so you can go experiment with debugging embedded Rust with VS Code right now, without any additional hardware.

Setup

First, ensure you've installed Rust, the ARM Cortex-M toolchain, QEMU, and OpenOCD.
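
If you're starting from scratch, the gist of it on macOS with Homebrew looks something like the following. Package names and the arm-none-eabi toolchain install vary by platform, so treat this as a sketch and use the installation chapter of The Embedded Rust Book as the real reference.

$ rustup target add thumbv7m-none-eabi thumbv7em-none-eabihf  # QEMU (Cortex-M3) and STM32F3 targets
$ brew install qemu openocd                                   # emulator and GDB server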

Then, ensure you've installed cargo-generate.

$ cargo install cargo-generate

Lastly, install the necessary VS Code plugins.

$ code --install-extension rust-lang.rust
Installing extensions...
Installing extension 'rust-lang.rust' v0.7.0...
Extension 'rust-lang.rust' v0.7.0 was successfully installed.

$ code --install-extension marus25.cortex-debug
Installing extensions...
Installing extension 'marus25.cortex-debug' v0.3.4...
Extension 'marus25.cortex-debug' v0.3.4 was successfully installed.

Now we can generate a new project.

$ cargo generate --git https://github.com/rust-embedded/cortex-m-quickstart --name cortexm-app
🔧   Creating project called `cortexm-app`...
✨   Done! New project created /Users/rubberduck/src/cortexm-app

Let's go ahead and open it in Code.

$ code cortexm-app

Debugging with QEMU

It's recommended that you complete the QEMU chapter of The Embedded Rust Book before moving forward, although it's not strictly necessary.

  1. Open src/main.rs and set a breakpoint.
  2. Go to the debug screen and run the Debug (QEMU) configuration.

The project will be built automatically, the resulting executable will be used to start QEMU emulating the LM3S6965EVB MCU, and execution will break at the entry point. From there you can step through the code as you like.
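
Under the hood, this is roughly equivalent to building and launching QEMU by hand the way The Book does it (the exact invocation Cortex-Debug uses may differ slightly):

$ cargo build
$ qemu-system-arm \
    -cpu cortex-m3 \
    -machine lm3s6965evb \
    -nographic \
    -semihosting-config enable=on,target=native \
    -gdb tcp::3333 \
    -S \
    -kernel target/thumbv7m-none-eabi/debug/cortexm-app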

If you prefer not to break on main, set runToMain to false in the .vscode/launch.json file.

"runToMain": false,

That's pretty much it. I told you it worked out of the box. Feel free to play with the examples that come with the cortex-m-quickstart template. The easiest thing to do, at the moment, is to copy an example's code into src/main.rs and run the Debug (QEMU) configuration, which ensures the code is built before the emulator starts. You could also build the example from the command line and modify the launch.json file to point to it, if you wish.

$ cargo build --example hello
{
    //...

    // "executable": "./target/thumbv7m-none-eabi/debug/testapp",
    /* Run `cargo build --example hello` and uncomment this line to run semi-hosting example */
    "executable": "./target/thumbv7m-none-eabi/debug/examples/hello",
},

Currently, the Rust Language Server extension doesn't directly support building an example, although we could create a new launch configuration and a shell build task that executes the right cargo build command (a sketch of such a task is below). I consider this an area for improvement in the future, but honestly, it's a good exercise for the reader. These default configurations aren't meant to be the be-all and end-all. They're meant to be a starting point for you to create debug configurations that meet your and your project's needs.
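
For instance, a task along these lines could be added to .vscode/tasks.json and referenced from a new launch configuration's preLaunchTask. The label here is just an illustration, not something the template ships.

{
    // Hypothetical task for building the semihosting example.
    "label": "build example hello",
    "type": "shell",
    "command": "cargo",
    "args": ["build", "--example", "hello"],
    "problemMatcher": ["$rustc"],
    "group": "build"
}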

Debugging on Hardware

Like last time, we'll be using the STM32F3DISCOVERY board as our hardware.

Before you tackle this section, it is highly recommended that you work through doing this from the command line to ensure you've installed the tools and properly set up your project. Unlike the QEMU section, debugging on hardware requires us to modify a few files. I've included a brief summary below, but I really do encourage you to work through this chapter of The Embedded Rust Book first.

In .cargo/config, ensure you've set your build target to thumbv7em-none-eabihf.

[build]
target = "thumbv7em-none-eabihf" # Cortex-M4F and Cortex-M7F (with FPU)

Then ensure you have the correct memory layout in the memory.x linker script.

/* Linker script for the STM32F303VCT6 */
MEMORY
{
  /* NOTE 1 K = 1 KiBi = 1024 bytes */
  FLASH : ORIGIN = 0x08000000, LENGTH = 256K
  RAM : ORIGIN = 0x20000000, LENGTH = 40K
}

With that out of the way, we can connect our hardware via USB to the ST-Link port and launch a debug session. This time we'll use the Debug (OpenOCD) configuration.

You'll notice that "Cortex Peripherals" aren't loading. That's because, for licensing reasons, we couldn't include the *.svd file in the quickstart template. If you go to ST's website for the STM32F3 series, you can find and download the SVD pack. Once you've unzipped the package, just copy the STM32F303.svd file into the .vscode/ directory and start a new debug session. The quickstart is configured to look in that directory for the SVD, so you should now see all of the peripherals loaded into the editor.

Of course, you could keep this file anywhere you like and just modify this line of the .vscode/launch.json file.

"svdFile": "${workspaceRoot}/.vscode/STM32F303.svd",

Personally, I like keeping them in my home directory so I don't have to keep multiple copies of the SVD on disk.

"svdFile": "${env:HOME}/.svd/en.stm32f3_svd/STM32F3_svd_V1.2/STM32F303.svd",

If you're working on a shared project though, you may want to use the default location and check the file in. That way, everyone who clones your project has a working configuration as soon as they've cloned it.
Which brings us to...

Git

By default, all files in the .vscode/ directory are ignored by Git. A number of files that shouldn't be committed can end up in .vscode/, so ignoring that directory by default is a good idea. However, the tasks.json and launch.json files define build & run configurations that can (and, in my opinion, should) be shared with anyone cloning the project. They're not user-specific files, so they're safe to add to your repository, and the template tracks them by default.
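
The usual pattern for this looks something like the following (the exact .gitignore the template ships may differ slightly):

.vscode/*
!.vscode/launch.json
!.vscode/tasks.json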

Logging with ITM

Nothing really changes here since last time, so see the logging section of the original post.
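
As a quick refresher, a minimal ITM logging program built on the quickstart's dependencies looks roughly like this; stimulus port 0 matches the decoder configured in launch.json.

#![no_std]
#![no_main]

use cortex_m::iprintln;
use cortex_m_rt::entry;
use panic_halt as _;

#[entry]
fn main() -> ! {
    // Write to ITM stimulus port 0, which the SWO decoder in launch.json listens on.
    let mut peripherals = cortex_m::Peripherals::take().unwrap();
    let stim = &mut peripherals.ITM.stim[0];
    iprintln!(stim, "Hello from ITM");

    loop {}
}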

Using other hardware

This is where we need to take a hard look at the launch.json configuration, and you'll also need a pretty good understanding of the hardware you wish to use, so if you're happily debugging your STM32F3DISCOVERY, feel free to stop here. However, if you're looking to set up VS Code for some other board, read on and I'll do my best to explain what the different parts of the Debug (OpenOCD) config do. For full documentation, see the Cortex-Debug project site and repository.

For reference, here is the OpenOCD config for the STM32F303.

{
    /* Configuration for the STM32F303 Discovery board */
    "type": "cortex-debug",
    "request": "launch",
    "name": "Debug (OpenOCD)",
    "servertype": "openocd",
    "cwd": "${workspaceRoot}",
    "preLaunchTask": "build",
    "runToMain": true,
    "executable": "./target/thumbv7em-none-eabihf/debug/cortexm-app",
    "device": "STM32F303VCT6",
    "configFiles": [
        "interface/stlink-v2-1.cfg",
        "target/stm32f3x.cfg"
    ],
    "svdFile": "${workspaceRoot}/.vscode/STM32F303.svd",
    "swoConfig": {
        "enabled": true,
        "cpuFrequency": 8000000,
        "swoFrequency": 2000000,
        "source": "probe",
        "decoders": [
            { "type": "console", "label": "ITM", "port": 0 }
        ]
    }
}

Server Type

The Cortex-Debug extension supports a number of different GDB servers. Here, we're specifying that we're using OpenOCD.

"servertype": "openocd",

If you're using a different GDB server, you'll obviously need to reference its documentation to configure it, and most of the information below will be of questionable value to you.
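
For example, if you were debugging through a SEGGER J-Link probe instead, this same field would select Cortex-Debug's J-Link support (assuming the J-Link software is installed on your machine):

"servertype": "jlink",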

Device

"device": "STM32F303VCT6",

The device field specifies which MCU you're targeting and is used in conjunction with one of the several Cortex-Debug Device Support Packs. If you've installed the right pack and specified a device, the Cortex-Debug extension will use this field to look up the right SVD file for you, and you can then omit the svdFile field.
So, technically, in the current configuration, this does nothing but document which microcontroller this config is meant for.

SVD File

"svdFile": "${workspaceRoot}/.vscode/STM32F303.svd",

Specifies where the System View Description for your device is located. As we've covered, this allows the Cortex-Debug plugin to load peripherals for your device into the editor. This can be omitted if using a Device Support Pack and device field.

Config Files

"configFiles": [
    "interface/stlink-v2-1.cfg",
    "target/stm32f3x.cfg"
],

This is an array of config files passed to openocd and equivalent to using the -f flag.

openocd \
  -f interface/stlink-v2-1.cfg \
  -f target/stm32f3x.cfg

If you look in your OpenOCD installation directory, you'll find many common interface and target configurations under the share/openocd/scripts/ directory. Specify the correct files for your device here.
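
For example, on a typical Homebrew install the scripts live somewhere like this (the prefix varies by OS and package manager):

$ ls /usr/local/share/openocd/scripts/interface/
$ ls /usr/local/share/openocd/scripts/target/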

SWO Config

"swoConfig": {
    "enabled": true,
    "cpuFrequency": 8000000,
    "swoFrequency": 2000000,
    "source": "probe",
    "decoders": [
        { "type": "console", "label": "ITM", "port": 0 }
    ]
}

This configures the ITM output for the device. The fields you'll most likely need to update here are cpuFrequency, swoFrequency, and the ITM port. Get any of these wrong and you won't see anything in the editor's SWO output.

Unfortunately, I can't help you much here. You'll need to reference your datasheet and schematic for the CPU frequency, as well as know what SWO frequency and port the software is using.

Conclusion

I couldn't be happier that these configurations were added to the quickstart template. Hopefully, this lowers the barrier just that much more for folks looking to get into embedded Rust. I have high hopes for Rust as an embedded development language over the next decade. I figure the lower the bar to entry, the more people we can get involved, and the more likely it is that we can develop the ecosystem into something that makes Rust a default choice for systems programming. I know it won't happen overnight, and the languages currently in use aren't ever going away, but I'm happy to contribute one small piece of the puzzle toward a better future for embedded software.

Until next time,
Semper Cogitet
