PMR is usually perceived as a performance tool - something to address allocation and deallocation bottlenecks or to build memory models for RTOS-like environments. In practice, its scope is much broader. It has been part of the standard since C++17, yet it is rarely brought up in interviews or public discussions outside of HFT (high frequency trading) or gamedev.
When dealing with signal handling, especially in constrained environments, there are less obvious scenarios where PMR becomes useful. Not because it is faster, but because it provides explicit control over memory usage. Historically, similar problems were solved using C-style stack arenas or manually managed buffers. PMR offers the same idea, but expressed in standard C++ and integrated directly with the standard containers, without dropping back to raw C-style memory management.
It brings controlled allocation semantics in places where heap interaction is either undesirable or outright forbidden.
Solution Framing
This technique is not a general recommendation for signal handling.
It is relevant only in environments where a signal handler with some minimal processing is architecturally unavoidable. The goal is not to make signal handling "safe", but to reduce fragility under constrained conditions.
This approach is justified only when:
- Recovery or reporting must happen in-process;
- External crash reporters or core dumps are unavailable or forbidden;
- The required work is strictly bounded and deterministic.
Outside of these constraints, standard crash handling mechanisms remain the correct and preferable solution.
The problem
During my career, I have encountered several situations where I had to implement custom SIGSEGV handlers and perform non-trivial work inside them. These were not ideal designs. In some cases, solutions were pragmatic rather than elegant, driven by constraints rather than preference.
- In one case, I had to collect a backtrace using `libunwind` and then post-process it to produce meaningful diagnostic output written directly to `stderr`. Using something like `libSegFault` was not an option because additional libraries could not be shipped to the target system and because the required backtrace transformation logic was too specific.
- In another project, `SIGSEGV` handling combined with a long jump was part of the legacy architecture. After recovering control flow, the system required additional string processing and some container manipulation to restore a consistent internal state. The architecture could not be redesigned, so the handler had to operate within those constraints.
- There was no third case, and I hope there will not be. Signal handlers are not something I appreciate having in a binary, except for simple termination logic in service-style applications.
The common theme in these cases was not preference, but constraint.
POSIX constraints
What makes the situation more complex is that signal handlers themselves operate under strict POSIX requirements.
In practice, these constraints can be summarized as follows:
- Only async-signal-safe functions may be called (list of functions):
  - In practice this mostly means no dynamic memory allocation (`new`/`delete` and `malloc`/`free` are not permitted);
- No dependency on shared mutable state:
  - No locks;
  - No reliance on global mutable objects;
- `errno` must be preserved;
- No stream flushing;
- The handler must not return into an undefined state, etc.

These rules exist for good reasons, but they significantly narrow what can be done inside a handler.
Abstraction collapse
The practical consequence is that most idiomatic C++ becomes unusable. No standard containers, no convenient string processing, no algorithms, no higher-level abstractions. Everything must be reduced to manual C-style handling.
This is not only inconvenient. It increases complexity, introduces boilerplate, and makes the code harder to read and maintain. Engineers are forced to reimplement low-level logic even when well-tested C++ abstractions already exist. Over time, this creates fragile code paths in precisely the parts of the system that are already operating under failure conditions.
Here is where PMR comes in
Some of the signal handler constraints are manageable. Streams and flushing can be replaced with a direct `write` call. `errno` can be restored before exiting. Lock-free code is not particularly difficult when the handler logic is minimal.
The real difficulty appears when higher-level processing is required, especially string manipulation or limited container usage. In a signal context, the usual abstraction layers collapse. The heap cannot be trusted, global state may be inconsistent, and most idiomatic C++ becomes unusable. As a result, everything is pushed down to manual C-style handling with fixed buffers and defensive size calculations.
The core issue is not that C strings are inconvenient. The issue is that we lose a controlled allocation domain.
This is where PMR becomes interesting.
PMR allows us to define a small, bounded memory region - for example, backed by a stack buffer - and treat it as an isolated allocation environment. Containers and strings can operate inside this region without touching the global heap. Allocation becomes deterministic and locally scoped. No global allocator interaction, no hidden heap calls, no cross-boundary side effects.
In this context, PMR is not a performance tool. It is a way to create a controlled micro-environment inside an otherwise unstable execution state. By routing all container allocations through a stack-backed memory_resource, we regain limited use of C++ abstractions while following the fundamental restriction: no dynamic heap interaction.
Using PMR to allocate all container elements from a predefined stack buffer is often all that is required.
How big of a stack do we need?
On modern OSes, a typical thread stack is on the order of megabytes (often around 8 MB by default). For the kind of work we are discussing - bounded string formatting, a few small containers, some backtrace post-processing - this is usually plenty: even generous diagnostic payloads measured in tens of kilobytes are tiny compared to megabytes. So, the stack sizes provided by modern systems are typically sufficient for the strictly limited and minimal processing expected inside such a handler.
However, in a SIGSEGV scenario we must not rely on the current thread stack. If the signal was triggered due to stack corruption or overflow, that memory cannot be considered reliable. This is why the practical "stack budget" for PMR in a handler is not the normal thread stack, but the alternative signal stack configured via sigaltstack, which we will get to later. At least the order of magnitude is clear for now.
The key point remains the same: the memory must already exist and be bounded, because the handler must not allocate dynamically. Heap allocation is not async-signal-safe under POSIX, so new/delete and malloc/free are off the table.
Basic setup
Conceptually, the setup is straightforward. We pre-allocate a fixed buffer in memory and construct a std::pmr::monotonic_buffer_resource on top of it. That resource becomes the allocation domain for all containers used inside the handler.
auto buf = std::array<std::byte, 1024>{}; // Stack-backed allocation buffer
auto res = std::pmr::monotonic_buffer_resource{
buf.data(),
buf.size(),
std::pmr::null_memory_resource() // Forbid fallback to global heap
};
Note: A monotonic buffer resource performs linear allocation within a pre-defined memory region. It does not free individual objects. Memory is reclaimed only when the resource itself is destroyed. In the context of a signal handler, this behavior is desirable: the lifetime of all temporary objects is tied to the lifetime of the handler invocation itself.
The only parameter that requires deliberate consideration is the size of the buffer. It must be large enough to accommodate the worst-case amount of memory required by all containers and strings constructed inside the handler. Since the work performed in the handler must be strictly bounded and deterministic, the required size can and should be estimated conservatively.
Containers are then constructed using the memory resource explicitly. Any std::pmr container instantiated with this resource will allocate exclusively from the provided buffer and will not fall back to the global heap (this is guaranteed by std::pmr::null_memory_resource as an upstream allocator).
auto str = std::pmr::string{&res};
str = "some text buffer";
auto collection = std::pmr::vector<decltype(str)>{&res};
collection.emplace_back(str);
The result is limited but controlled use of C++ abstractions within a strictly bounded memory region.
Following the standard
The minimal example will work on most systems, but according to the standard, a std::pmr::memory_resource::allocate call must return storage that is properly aligned for the requested type. If the underlying buffer does not satisfy the required alignment, the behavior is undefined.
The buffer used to back the monotonic_buffer_resource must therefore be aligned at least to std::max_align_t, or to any stricter alignment that might be required by the types allocated within it.
alignas(std::max_align_t) auto buf = std::array<std::byte, 1024>{};
> Allocates storage with a size of at least `bytes` bytes, aligned to the specified `alignment`. (cppreference, `std::pmr::memory_resource::allocate`)
With correct alignment in place, the PMR containers can be safely instantiated on top of the resource.
Remaining risks on the table
Using PMR inside a signal handler does not make the handler POSIX-safe. It removes only one failure source - albeit a major one: heap interaction during an unstable runtime state.
Even with a stack-backed allocation domain, significant risks remain:
- corrupted stack or registers;
- undefined C++ runtime state;
- library code that is not async-signal-safe.
However, correct allocation mechanics are only part of the picture. They do not eliminate the broader execution risks inherent to signal handling.
Strict constraints must still be respected:
- Avoid code paths that may throw exceptions. Unwinding through potentially corrupted state is undefined territory;
- Avoid synchronization primitives or any form of parallelism. No locks, no hidden synchronization inside abstractions;
- Keep the work strictly bounded and deterministic.
In the typical scenario discussed here - limited string formatting or small container manipulation - these constraints are technically manageable.
However, care must be taken to avoid accidental violations. Parallelism, for example, can be introduced indirectly through execution policies in standard algorithms.
Even without explicitly creating threads, such behavior may break the assumptions required for safe execution inside a signal handler.
Therefore, the real guarantee of this approach is limited to:
Reduced fragility under controlled assumptions - not absolute correctness.
Preparing a stack for PMR
One of the most important measures is configuring an alternative signal stack via sigaltstack. This addresses at least two of the previously mentioned risks:
- It isolates the handler from a potentially corrupted or overflowed main thread stack.
- It provides a predictable and explicitly controlled stack size for handler execution.
By running the handler on an alternative stack, we reduce dependency on the original execution context. If the signal was triggered due to stack overflow or memory corruption, continuing execution on the same stack would be unsafe. The alternative stack provides a separate, pre-allocated memory region dedicated to signal handling.
The setup itself is straightforward and based on SIGSTKSZ, the system's canonical size for a signal handler stack (MINSIGSTKSZ is the bare minimum).
// main:
const auto size = static_cast<std::size_t>(SIGSTKSZ) * 4;
auto alt_stack = stack_t{};
alt_stack.ss_sp = ::operator new(size);
alt_stack.ss_flags = 0;
alt_stack.ss_size = size;
if (::sigaltstack(&alt_stack, nullptr) != 0) {
throw std::runtime_error{
std::string{"sigaltstack failed: "} + std::strerror(errno)
};
}
The value of SIGSTKSZ is platform-dependent and defined in signal.h. In practice, it may range from relatively small values on lightweight systems to significantly larger defaults on desktop platforms. Note that on recent glibc (2.34 and later), SIGSTKSZ may expand to a sysconf call and is no longer a compile-time constant, so it cannot be checked with static_assert there. Because of this variability, it is reasonable to scale it conservatively and validate assumptions at runtime, depending on the expected workload of the handler.
The important architectural point is that the alternative stack size must be sufficient not only for the PMR buffer but also for the execution overhead of the handler itself. The PMR buffer lives inside that stack context, so both data and control flow must fit within the allocated space.
While this does not eliminate all risks associated with signal handling, it allows us to control stack integrity and stack capacity explicitly, which removes two major sources of uncertainty.
Putting it together
With all components in place, the overall design becomes straightforward. The goal is to establish two controlled boundaries before the failure: a dedicated execution stack and a bounded allocation domain. Once those are prepared during normal program initialization, the handler itself becomes a small, deterministic unit operating inside explicitly defined limits.
The flow looks like this:
- Allocate and register an alternative signal stack (`sigaltstack`) so the handler does not depend on the potentially corrupted thread stack;
- Install a `SIGSEGV` handler with `sigaction`, enabling `SA_ONSTACK` so the handler runs on that alternative stack;
- Inside the handler, create a fixed, aligned buffer and build a `std::pmr::monotonic_buffer_resource` on top of it, with `std::pmr::null_memory_resource` as the upstream. This ensures allocations stay local and never fall back to the heap;
- Use PMR-backed types (for example, `std::pmr::string`) for minimal formatting, then emit output via an async-signal-safe API such as `write`, and terminate via `_exit` to avoid returning into an unknown runtime state.
#include <array>
#include <cerrno>
#include <charconv>
#include <cstddef>
#include <cstdlib>
#include <cstring>
#include <memory_resource>
#include <new>
#include <stdexcept>
#include <string>
#include <signal.h>
#include <unistd.h>
constexpr auto signal_exit_base = 0x80; // POSIX: 128 + signal number
static void handler(int sig) {
alignas(std::max_align_t)
auto buf = std::array<std::byte, 1024>{};
auto res = std::pmr::monotonic_buffer_resource{
buf.data(), buf.size(),
std::pmr::null_memory_resource()
};
auto str = std::pmr::string{"Signal ", &res};
auto num = std::pmr::string(16, '\0', &res);
auto [ptr, _] = std::to_chars(num.data(), num.data() + num.size(), sig);
num.resize(static_cast<std::size_t>(ptr - num.data()));
str += num;
str += " received!\n";
::write(STDOUT_FILENO, str.data(), str.size());
::_exit(signal_exit_base + sig);
}
auto main() -> int {
const auto size = static_cast<std::size_t>(SIGSTKSZ) * 4;
auto alt_stack = stack_t{};
alt_stack.ss_sp = ::operator new(size);
alt_stack.ss_flags = 0;
alt_stack.ss_size = size;
if (::sigaltstack(&alt_stack, nullptr) != 0) {
throw std::runtime_error{
std::string{"sigaltstack failed: "} + std::strerror(errno)
};
}
struct sigaction sa{};
sa.sa_handler = handler;
sa.sa_flags = SA_ONSTACK;
::sigemptyset(&sa.sa_mask);
if (::sigaction(SIGSEGV, &sa, nullptr) != 0) {
throw std::runtime_error{
std::string{"sigaction failed: "} + std::strerror(errno)
};
}
*static_cast<volatile int*>(nullptr) = 42; // trigger SIGSEGV
return EXIT_SUCCESS;
}
If you compile and run this with ltrace (and optionally pipe through c++filt), you can observe the behavior when the signal is delivered:
- the alternative stack setup happens during normal execution (before the crash);
- inside the handler, PMR allocations stay within the provided buffer;
- there is no interaction with the global heap allocator from within the handler path.
$ ltrace ./a.out some here &>/dev/stdout | c++filt
__libc_start_main(0xaaaac2042290, 3, 0xffffed93df28, 0xaaaac2041d18 <unfinished ...>
__register_frame_info(0xaaaac2046200, 0xaaaac2060018, 16, 0xffffb2baeab4) = 9
operator new(unsigned long)(0xc000, 0, 0xffffed93df48, 0xffffb2bae338) = 0xffffb2adf030
sigaltstack(0xaaaac2060048, 0, 0xffffffa0, 4044) = 0
sigemptyset(<>) = 0
sigaction(SIGSEGV, { 0xaaaac20420c0, <>, 0, 0 }, nil) = 0
--- SIGSEGV (Segmentation fault) ---
memset(0xffffb2ae99a0, '\0', 1024) = 0xffffb2ae99a0
std::pmr::null_memory_resource()(0xffffb2ae99a0, 0, -32, 0xffffb2ae9d40) = 0xffffb2ac0178
strlen("Signal ") = 7
memcpy(0xffffb2ae9968, "Signal ", 7) = 0xffffb2ae9968
memset(0xffffb2ae99a0, '\0', 16) = 0xffffb2ae99a0
memcpy(0xffffb2ae996f, "11", 2) = 0xffffb2ae996f
strlen(" received!\n") = 11
memcpy(0xffffb2ae99b1, "Signal 11", 9) = 0xffffb2ae99b1
memcpy(0xffffb2ae99ba, " received!\n", 11) = 0xffffb2ae99ba
write(1, "Signal 11 received!\n", 20Signal 11 received!
) = 20
_exit(139 <no return ...>
+++ exited (status 139) +++
The exit status is preserved, and it is clear that no heap allocation takes place inside the handler. This still does not make the handler "POSIX safe" in the absolute sense. It only demonstrates that, from the source code perspective, the handler avoids the most common contract violations (most notably: dynamic allocation and unsafe I/O abstractions).
One optional hardening step is to compile the handler in a separate translation unit with stricter flags (for example, disabling exceptions). This can reduce accidental violations, but it also increases build complexity and can make integration more fragile. In most cases, keeping the handler minimal and explicitly bounded is the more practical approach.
Takeaway
PMR has a place far beyond performance tuning. In certain constrained environments, it can serve as a controlled allocation boundary rather than an optimization tool.
Used carefully, it allows limited reintroduction of C++ abstractions into contexts where they would normally be forbidden, such as inside a signal handler. It does not make signal handling safe or eliminate UB. What it does is remove one significant failure vector - uncontrolled heap interaction - and reduce boilerplate compared to manual C-style buffer management.
As with anything related to POSIX signal safety, this approach must be applied conservatively and with a clear understanding of its limits. Within those limits, it can meaningfully improve clarity and robustness in otherwise fragile code paths.