Rickard Andersson for Quanterall

ERTS & OTP - The Erlang Runtime System and core library

Since I will be writing posts about how to do things on the BEAM with different languages, I thought it might be prudent to create a short intro to the systems that power the programs we write.

If you prefer to look at things yourself, the code that will be referenced later is available here.

BEAM

The name "BEAM" comes originally from Bogdan's/Björn's Erlang Abstract Machine, named after the first implementer and the subsequent maintainer of the virtual machine. There were earlier, distinct versions of this concept, notably JAM (Joe's Abstract Machine), created by Joe Armstrong.

BEAM is a bytecode interpreter, meaning it takes a well-defined, high-level description of logic and executes it on whatever platform we are running on, as opposed to compiling to native code immediately. This is often implicit in the phrase "virtual machine". You can imagine it as an adapter machine running on top of the one actually executing things.

We won't go over the internals of the BEAM in this article, but rather focus on the capabilities offered by the Erlang Runtime System. If you're interested in how the BEAM works, this book by Erik Stenman is a great place to start.

Erlang Runtime System

The Erlang Runtime System is the software that runs your code on top of the BEAM. Since the BEAM is just a bytecode interpreter, something needs to take care of scheduling and de-scheduling the code that runs on top of it. That is exactly what ERTS is.

Lightweight processes

Processes are lightweight threads managed by ERTS, and all code that runs on the BEAM is executed in these processes.

Processes:

  • Take up very little heap space by default; about 1232 bytes
  • Have message queues (the primary way to interact with them is messages)
  • Don't really use any processing power unless they have waiting messages (see below)
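
As a quick illustration of these points, here is a minimal Elixir sketch (separate from the example project referenced above) that spawns a process and interacts with it by sending it a message:

Spawning a process (Elixir)
# Spawn a process that sits idle until a message arrives in its
# message queue, then prints a greeting and terminates.
pid =
  spawn(fn ->
    receive do
      {:greet, name} -> IO.puts("Hello, #{name}!")
    end
  end)

# The way to interact with the process is to send a message to its pid.
send(pid, {:greet, "BEAM"})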

Preemptive scheduling

ERTS has a scheduler that decides which processes are allowed to do work at any given moment. It does so via lists of ready and waiting processes. A process is put in the ready state when it has a message in its message queue, and put back in the waiting state when it waits for incoming messages again, or when it consumes too many resources while processing a message.

This last part is distinct from most models of concurrency and parallelism; we do not usually have a limit on how much we can do in a thread before it gets de-scheduled. ERTS has an internal concept of reductions. A process has a set amount of reductions it can consume/do before the scheduler determines that it has to let other processes run. You can view this concept as having a set amount of fuel to consume before you have to make a pit stop.
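
We can even observe this bookkeeping directly. The following Elixir snippet (illustrative; the exact numbers will vary between runs) asks the runtime how many reductions the current process has consumed before and after doing some work:

Inspecting reductions (Elixir)
{:reductions, before} = Process.info(self(), :reductions)

# Perform some arbitrary work.
Enum.sum(1..100_000)

{:reductions, after_work} = Process.info(self(), :reductions)
IO.puts("The work cost roughly #{after_work - before} reductions")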

The above concept means that threads are scheduled fairly and without any user intervention on the BEAM. We don't ever have to "yield" a thread; that is managed entirely by the runtime itself. The absence of preemptive scheduling can be seen in languages where starting a thread can completely shut off a core from doing anything else while it's running, because we are locking that core up entirely until we are done, or until we explicitly yield.

Because each process uses very little memory and doesn't use any processing power unless it has incoming messages, it is very common to start very large numbers of them when we need to. We can be confident that the runtime manages them as it should.

OTP

We've learned that code running in the BEAM is intrinsically tied to the concept of processes and that the way we do things with processes is by way of messages; pieces of data that a process can react and reply to. So how do we deal with processes from a code perspective?

OTP, the Open Telecom Platform, is a library for starting, managing, communicating with and stopping processes. We'll go over some of the most commonly used components of this library to give a sense of what it means to use it.

gen_server / GenServer

As the name suggests, gen_server is for generic servers. You can view a server here as something that accepts and potentially responds to messages and manages an internal state.

As an example of a server, let's look at how a very simple server for storing a list of things and getting the current list might look:

gen_server (Erlang)
-module(beam_intro_lister).

-export([start_link/0, init/1, add/1, get/0, handle_call/3,
         handle_cast/2, handle_info/2]).

-behaviour(gen_server).

start_link() ->
  % We specify how to start the process, passing down only the atom
  % `ok` to our `init` function, which is used for determining the
  % initial state of the process.
  gen_server:start_link({local, ?MODULE}, ?MODULE, ok, []).

add(Thing) ->
  gen_server:cast(?MODULE, {add, Thing}).

get() ->
  gen_server:call(?MODULE, get).

init(_Args) ->
  % The initial state of the process is a map where the key `contents`
  % is associated with an empty list.
  {ok, #{contents => []}}.

% We use pattern matching here to pull out the `contents` value so we
% can use it in our logic. When a thing is added, we prepend it to our
% internal list of things.
handle_cast({add, Thing}, #{contents := OldContents}) ->
  {noreply, #{contents => [Thing | OldContents]}}.

% When someone requests the contents of our process, we reply to their
% call with the `contents` value in our state.
handle_call(get, _From, #{contents := Contents} = State) ->
  {reply, Contents, State}.

handle_info(_Info, State) ->
  {noreply, State}.

GenServer (Elixir)
defmodule BeamIntro.Lister do
  use GenServer

  def start_link(_args) do
    # We specify how to start the process. In this case we don't use
    # the arguments coming in at all, and pass down only the atom
    # `:ok` to our `init` function, which is used for determining the
    # initial state of the process.
    GenServer.start_link(__MODULE__, :ok, name: __MODULE__)
  end

  def add(thing) do
    GenServer.cast(__MODULE__, {:add, thing})
  end

  def get() do
    GenServer.call(__MODULE__, :get)
  end

  def init(:ok) do
    # The initial state of the process is a map where the key
    # `contents` is associated with an empty list.
    {:ok, %{contents: []}}
  end

  # We use pattern matching here to pull out the `contents` value
  # so we can use it in our logic. When a thing is added, we prepend
  # it to our internal list of things.
  def handle_cast({:add, thing}, %{contents: contents} = state) do
    {:noreply, %{state | contents: [thing | contents]}}
  end

  # When someone requests the contents of our process, we reply to
  # their call with the `contents` value in our state.
  def handle_call(:get, _from, %{contents: contents} = state) do
    {:reply, contents, state}
  end
end
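
With either version compiled, trying the server out from a shell is straightforward. In iex, with the Elixir module above (return values are illustrative):

Using the lister (Elixir)
{:ok, _pid} = BeamIntro.Lister.start_link([])

BeamIntro.Lister.add("milk")
BeamIntro.Lister.add("eggs")

# Things are prepended, so the most recently added one comes first.
BeamIntro.Lister.get()
# => ["eggs", "milk"]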

gen_server follows a fairly simple request/response model, and if we can answer some key questions we can construct one:

  1. start_link & init: How is the server started and what does that mean for the initial state?
  2. Interface functions: Which messages do we want to send when other processes interact with the server?
  3. handle_call, handle_cast & handle_info: When those messages come in, how does our internal state change and what do we reply with?

With these key questions answered we can cover a surprisingly large area in the design space of a system.

supervisor / Supervisor

Supervisors are processes that manage the starting, restarting and stopping of other processes.

supervisor (Erlang)
-module(beam_intro_lister_supervisor).

-export([start_link/0, init/1]).

-behaviour(supervisor).

start_link() ->
  supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
  Flags =
    #{% We specify here that the supervisor restarts a single process when it
      % dies, not all the processes attached to the supervisor.
      strategy => one_for_one,
      % If more than 3 restarts occur within 5 seconds, the supervisor will
      % terminate. If this supervisor is a child of another supervisor, the
      % parent supervisor will react to that termination as specified.
      intensity => 3,
      period => 5},
  % This describes how our child process is started.
  ListerSpec =
    #{id => beam_intro_lister,
      start => {beam_intro_lister, start_link, []},
      restart => permanent,
      shutdown => 5000,
      type => worker},
  {ok, {Flags, [ListerSpec]}}.

Supervisor (Elixir)
defmodule BeamIntro.Lister.Supervisor do
  use Supervisor

  def start_link(_args) do
    Supervisor.start_link(__MODULE__, :ok, name: __MODULE__)
  end

  def init(:ok) do
    children = [
      {BeamIntro.Lister, []}
    ]

    Supervisor.init(
      children,
      # We specify here that the supervisor restarts a single process
      # when it dies, not all the processes attached to the supervisor.
      strategy: :one_for_one,
      # If more than 3 restarts occur within 5 seconds, the supervisor
      # will terminate. If this supervisor is a child of another
      # supervisor, the parent supervisor will react to that termination
      # as specified.
      max_restarts: 3,
      max_seconds: 5
    )
  end
end
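
Assuming both Elixir modules above are compiled, we can watch the supervisor do its job in iex (return values are illustrative):

Restarting a crashed child (Elixir)
{:ok, _pid} = BeamIntro.Lister.Supervisor.start_link([])

BeamIntro.Lister.add("apple")
BeamIntro.Lister.get()
# => ["apple"]

# Kill the lister process; the supervisor notices and restarts it.
Process.exit(Process.whereis(BeamIntro.Lister), :kill)

# The restarted process begins from its `init` state again.
BeamIntro.Lister.get()
# => []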

Supervisors allow us to talk about how a process is started, under which circumstances it should be restarted, as well as how many failures we can tolerate before the entire supervisor has to be restarted.

Supervisors come with a simple but featureful set of configuration options that we can use to design our system:

  1. Restart strategies: Should we restart only the process that has failed, or should we restart all the supervisor's children, or perhaps all the ones that come after the failed one in our child list?
  2. How many process failures in how much time do we tolerate before we restart the entire supervisor and all its children?
  3. Which processes are important to always have up even though they may have shut down normally? Are some processes fine to not restart if they shut down normally, but should be restarted if they crashed? (See the restart-option sketch below.)

Answering the above questions is key to understanding what role a supervisor and its children have in a system.
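
Question 3 maps directly to the restart option of a child spec. A small sketch of how the three values differ (BeamIntro.Worker and BeamIntro.OneOff are hypothetical modules used only for illustration):

Restart options (Elixir)
children = [
  # Always restarted, no matter how it terminated.
  Supervisor.child_spec({BeamIntro.Lister, []}, restart: :permanent),
  # Restarted only if it terminates abnormally (a crash); a normal
  # shutdown is left alone.
  Supervisor.child_spec({BeamIntro.Worker, []}, restart: :transient),
  # Never restarted, regardless of how it terminated.
  Supervisor.child_spec({BeamIntro.OneOff, []}, restart: :temporary)
]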

Supervision trees

Supervision trees are an emergent property of the above questions. Because we can have supervisors as children of other supervisors, we can segment our systems into sub-systems that are important to keep together.

If one sub-system depends on another, we might for example set our restart strategy to rest_for_one (restart the failed child and all subsequent children) and have both be part of the same supervision tree. This guarantees that the dependent process is always (re)started after the one it depends on.
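
Here's what that might look like in Elixir, assuming hypothetical BeamIntro.Database and BeamIntro.Cache modules where the cache depends on the database:

rest_for_one supervisor (Elixir)
defmodule BeamIntro.Storage.Supervisor do
  use Supervisor

  def start_link(_args) do
    Supervisor.start_link(__MODULE__, :ok, name: __MODULE__)
  end

  def init(:ok) do
    # Order matters: the cache comes after the database, so with
    # `rest_for_one` a database crash restarts the cache too, while
    # a cache crash leaves the database alone.
    children = [
      {BeamIntro.Database, []},
      {BeamIntro.Cache, []}
    ]

    Supervisor.init(children, strategy: :rest_for_one)
  end
end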

If you take the time to look, you'll find that most BEAM applications are actually started through a root supervisor, set up by some variant of an application/Application module. This means that even the most minimal system will tend towards some level of supervision tree structure.
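
In Elixir, for instance, that entry point is an Application callback module that starts the root supervisor. A minimal sketch (the module and supervisor names other than BeamIntro.Lister.Supervisor are hypothetical):

Application (Elixir)
defmodule BeamIntro.Application do
  use Application

  # Called by the runtime when the application starts. The supervisor
  # started here becomes the root of the application's supervision tree.
  def start(_type, _args) do
    children = [
      BeamIntro.Lister.Supervisor
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: BeamIntro.RootSupervisor)
  end
end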

Seeing the forest and not just the trees

What these things and more boil down to is that on the BEAM we are able to use high-level specifications and libraries to talk about the most interesting parts of a system:

  • Which parts go together?
  • How do the parts communicate change and effect?

Being allowed to take on this high-level view is very liberating and is why I personally feel that the BEAM & ERTS is a great place to be as a creator and maintainer of systems.
