
How To Make A Makefile

Ben Lovy (deciduously) ・10 min read

What's Make?

This post will explore the basics of GNU Make via two small examples. It's a surprisingly versatile build tool, if a bit archaic, and since it's so ubiquitous it's worth getting at least a baseline familiarity with how it does its thing.

Caveat: as far as I know this is mostly relevant for Mac and Linux users. I don't know much about build tooling or development on Windows outside of just booting up an IDE and letting it handle things, or using WSL as a crutch where available. I do know you can get make via GnuWin32. I have no idea how well it works or if anyone uses it.

In brief, make is a tool that reads a Makefile and turns source files into executable files. It doesn't care what compilers are used to do so, it's just concerned with build orchestration.

If you've compiled packages from source before, you may be familiar with the following set of commands:

$ ./configure
$ make
$ sudo make install

A great number of *nix packages are distributed as C or C++ source code, and will be built something like this. The first line runs a separate program to configure your Makefile for you, which is necessary in big projects which rely on system libraries. The last line generally assumes admin rights so it can copy the executable(s) it just built onto the system path. We don't need any of that to get started with make, though. Just the middle line will do us fine. Aptly named, isn't it?

In this post I'll walk through two different examples with different goals. The syntax can look opaque (at least, it did to me) if you don't know what you're looking at, but once you know the basic rules it's quite straightforward.

Example One - Download A File

We'll do the simpler one first. This Makefile only exists to download boot, a build tool for Clojure, to the user's current directory. This tool exists as a shim that downloads a jarfile to handle the rest, and the shim is very tiny, so it's sometimes convenient to have it live in a project directory itself instead of the system path.

.PHONY: deps help

SHELL        = /bin/bash
export PATH := bin:$(PATH)

deps: bin/boot

bin/boot:
    (mkdir -p bin/                                                                              && \
    curl -fsSLo bin/boot https://github.com/boot-clj/boot-bin/releases/download/latest/boot.sh  && \
    chmod 755 bin/boot)

help:
    @echo "Usage: make {deps|help}" 1>&2 && false

We'll take it from the top.

.PHONY: deps help

SHELL        = /bin/bash
export PATH := bin:$(PATH)

First, we declare the phony targets. To explain this, we need to talk about the core of what make is: rules.

Make is for making sources into targets. To do so, we give it rules describing which sources it needs and how to feed them to compilers to get the right targets. At the end, we should have produced all the targets needed - the compiled sources.

Keeping that in mind, rules are easy to grok. Each rule starts with the name of the target to be created, followed by a colon. After the colon come any targets this target depends on, and below, indented, sits a series of commands, or recipes, to build the target from its dependencies. When you invoke make with a target, it will make that target specifically, but when you invoke it on its own it just starts evaluating the first rule it sees whose name doesn't begin with a . (like .PHONY).
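Abstractly, every rule follows this same shape. Here's a minimal sketch with invented file names - note that in a real makefile each recipe line must begin with a literal tab character:

```makefile
# target: dependencies
#     recipe commands (tab-indented)
output.txt: input.txt
	cp input.txt output.txt
```

Running make against this copies input.txt to output.txt, but only when output.txt is missing or older than input.txt.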

Next, we define the shell executable location and prepend our local bin/ directory to the PATH.

The $() syntax is a Make variable. Make is neat in that it automatically exposes every variable it finds in the environment as a make variable, so we can just use $PATH from bash with $(PATH). To define your own you just assign to the name, omitting the parens, as we do in the first line - that's an assignment to the $(SHELL) variable.

Notably, we're using the := assignment syntax for the PATH. This defines a simply-expanded variable: it's evaluated exactly once, so any other variables inside it are expanded immediately at assignment time.

The = syntax instead defines a recursively-expanded variable, which re-expands anything inside it every time it's substituted. This is powerful, but can also lead to problems like infinite loops and slow execution, so it's important to be mindful of the difference.
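A quick way to see the difference is a throwaway makefile (all names here are invented); I'm writing it with printf so the required recipe tabs (\t) are explicit:

```shell
# Contrast := (simply expanded) with = (recursively expanded).
printf '%s\n' \
  'X = hello' \
  'SIMPLE := $(X) world' \
  'RECURSIVE = $(X) world' \
  'X = goodbye' \
  'show:' > /tmp/vars.mk
# Recipe lines must start with a tab, hence \t here.
printf '\t@echo "simple: $(SIMPLE)"\n\t@echo "recursive: $(RECURSIVE)"\n' >> /tmp/vars.mk
make -f /tmp/vars.mk show
# simple: hello world      (:= captured X at assignment time)
# recursive: goodbye world (= re-expands X each time it's used)
```

The simply-expanded variable keeps the value X had when the assignment ran; the recursive one picks up the later reassignment.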

It's important to note that the export is only visible to this process and any sub-process of it - it isn't permanent, and it can't alter the parent process's environment. Still useful if you're building inside make, though, and it doesn't clutter up your global env!

Then we get to our first rule. In this case, the default rule is called deps, one of our phony targets. No file called "deps" will be created.

deps: bin/boot

After the target name, you'll find a colon and then a list of dependencies. These are targets that must be completed before evaluating this rule. Before executing the block of commands for this target, Make ensures each dependency exists, evaluating their rules if it finds them. In this case, the dependency is the target bin/boot. There are no commands associated with this rule; all it does is trigger that other rule.

bin/boot:
    (mkdir -p bin/                                                                              && \
    curl -fsSLo bin/boot https://github.com/boot-clj/boot-bin/releases/download/latest/boot.sh  && \
    chmod 755 bin/boot)

This isn't a phony target, and it includes a slash, which just means a directory name. This target - the result of evaluating this rule - is going to end up in the bin/ directory we added to the PATH.

This rule doesn't have any dependencies - they'd all appear on the same line as the target name. It does have commands, though: this rule creates a directory, runs curl to download the file from GitHub, and runs chmod to make the downloaded file executable.

So, running make will locate the make deps rule, which is empty itself but has bin/boot as a dependency. Make will realize bin/boot does not yet exist and execute that rule, which will create the file accordingly.

Try running it, and then running it again:

$ make
(mkdir -p bin/                                                                              && \
curl -fsSLo bin/boot https://github.com/boot-clj/boot-bin/releases/download/latest/boot.sh  && \
chmod 755 bin/boot)

$ make
make: Nothing to be done for 'deps'.

After evaluating this rule the first time around, a file called boot already existed in a directory called ./bin. The target was found, so make did no extra work. This handy quality is known as idempotence. Repeated invocations have the same effect as one invocation: f(x); and f(x); f(x); are equivalent.

Neat! Let's look at something a little more typical.

Example Two: Build Some C++

This is the more complicated one. This makefile is what I drop into a brand-new C++ project directory before thinking about it. It's more indicative of what makefiles in the wild might look like, but still really small in scope.

It expects a src directory with a bunch of .cpp (and .h) files, and will create a directory called build with all your .o object files and your executable, named whatever you tell it. You can then run that executable.

.PHONY: all clean help

CXX=clang++ -std=c++11
FLAGS=-Wall -Wextra -Werror -pedantic -c -g

BUILDDIR=build
SOURCEDIR=src
EXEC=YOUR_EXECUTABLE_NAME_HERE
SOURCES:=$(wildcard $(SOURCEDIR)/*.cpp)
OBJ:=$(patsubst $(SOURCEDIR)/%.cpp,$(BUILDDIR)/%.o,$(SOURCES))

all: dir $(BUILDDIR)/$(EXEC)

dir:
    mkdir -p $(BUILDDIR)

$(BUILDDIR)/$(EXEC): $(OBJ)
        $(CXX) $^ -o $@

$(OBJ): $(BUILDDIR)/%.o : $(SOURCEDIR)/%.cpp
        $(CXX) $(FLAGS) $< -o $@

clean:
        rm -rf $(BUILDDIR)/*.o $(BUILDDIR)/$(EXEC)

help:
        @echo "Usage: make {all|clean|help}" 1>&2 && false

At the very top we have our phony targets again - these are the targets that aren't creating real files, they're just intended to be invoked as an argument to make.

Next we point it towards our C++ compiler by assigning the variables $(CXX) and $(FLAGS):

CXX=clang++ -std=c++11
FLAGS=-Wall -Wextra -Werror -pedantic -c -g

These aren't special names - you can call them whatever you like. We'll refer to them directly in our rules.

C++ compilation happens in two stages. First, we compile all the separate *.cpp/*.h pairs into their own .o object files, and then in a separate step we link them all up into a single executable. The flags we pass to the compiler are only relevant when building the objects from source - linking together already-compiled objects doesn't need them! Keeping them in their own variable lets us invoke the compiler with or without this set of flags inside our rule evaluation. I like to make my compiler as restrictive as possible - these flags turn all warnings into errors that prevent successful compilation, and enable the full suite of checks available. The -c flag instructs it not to go on to the linking phase, finishing with an .o file, and the -g flag generates source-level debug info.

A fancier makefile will have multiple build configurations. This, again, is a starter kit.

The next three assignments just configure the names of everything:

BUILDDIR=build
SOURCEDIR=src
EXEC=YOUR_EXECUTABLE_NAME_HERE

I think build for the output and src for the source files make sense, but you can adjust them there, and $(EXEC) will be the final compiled binary.

Below that we define where the sources are, and what the objects should be called:

SOURCES:=$(wildcard $(SOURCEDIR)/*.cpp)
OBJ:=$(patsubst $(SOURCEDIR)/%.cpp,$(BUILDDIR)/%.o,$(SOURCES))

The $(SOURCES) variable is built with the wildcard function. This variable collects anything with the .cpp extension inside src/.

Next we use patsubst. The syntax for this is pattern, replacement, text. The % in the pattern matches any file-name stem, and that same stem replaces the % in the replacement, while the surrounding text is swapped out. This substitution turns, e.g., src/game.cpp into build/game.o. For the text, we're passing in the $(SOURCES) variable we just defined - so the $(OBJ) variable will contain a corresponding build/*.o filename for each src/*.cpp filename that make finds.
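Here's a throwaway demo of wildcard and patsubst cooperating (all names invented):

```shell
# Two empty "sources" are enough to see the variable transformations.
mkdir -p /tmp/psdemo/src
touch /tmp/psdemo/src/game.cpp /tmp/psdemo/src/main.cpp
printf '%s\n' \
  'SOURCES := $(wildcard src/*.cpp)' \
  'OBJ := $(patsubst src/%.cpp,build/%.o,$(SOURCES))' \
  'show:' > /tmp/psdemo/Makefile
# The recipe line needs a leading tab.
printf '\t@echo $(OBJ)\n' >> /tmp/psdemo/Makefile
(cd /tmp/psdemo && make show)
# build/game.o build/main.o
```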

Check out the quick reference for a complete run-down of what's available.

I've used simply-expanded variable assignment for these. It's a good idea to do so when you know that will get you the result you need, especially when using functions like wildcard - recursively expanding these can (but doesn't always) result in significant slowdowns.

With all our variables configured, we can start defining rules. The first rule is our default behavior; this one is called all:

all: dir $(BUILDDIR)/$(EXEC)

This is one of our phony targets, so there's no corresponding output file called "all". Also, like deps from the first example, this rule has no commands, only dependencies. This one has two: dir and $(BUILDDIR)/$(EXEC). Make will evaluate them in the order they're listed, so let's hop over to dir first:

dir:
    mkdir -p $(BUILDDIR)

This one doesn't have dependencies, so it will immediately execute this command. This is a simple one - it just makes sure the build directory exists. Once that's complete, we can evaluate $(BUILDDIR)/$(EXEC):

$(BUILDDIR)/$(EXEC): $(OBJ)
        $(CXX) $^ -o $@

This rule is starting to look a little funkier. The target itself is not unlike bin/boot from the first example, just using make variables to build it. If you've set $(EXEC) to my_cool_program, this target is named build/my_cool_program. It depends on another make variable, $(OBJ), which we just defined as an object file corresponding to each source file. That will resolve first, so let's look at that rule before looking at the command:

$(OBJ): $(BUILDDIR)/%.o : $(SOURCEDIR)/%.cpp
        $(CXX) $(FLAGS) $< -o $@

Whoa, there's two sets of dependencies here! What the heck, Ben.

This is something called a static pattern rule. This is what we use when we have a list of targets. The overall target, $(OBJ), consists of each one of the object files we'll be creating. After the first colon, we need to define specifically how each individual object depends on a specific source. Again we see the % used for pattern matching, not unlike up in the patsubst call. Each one will have the same name as the corresponding ".cpp" file, but with the extension flipped to ".o".

The command block for this rule will execute for each source/target pair matched. We're using the make variables we defined way up at the top to invoke the compiler and pass in all our flags, which includes the -c flag signalling to stop before the link phase, just outputting object files.

Then we use some automatic variables to fill in the proper command. $< corresponds to the name of the dependency we're working with, and $@ corresponds to the name of the target. Fully expanded, this $(CXX) $(FLAGS) $< -o $@ command will look like clang++ -std=c++11 -Wall -Wextra -Werror -pedantic -c -g src/someClass.cpp -o build/someClass.o.
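A tiny runnable sketch of those automatic variables, with invented names:

```shell
# $< is the dependency, $@ is the target; the recipe line needs a tab.
mkdir -p /tmp/autovars
touch /tmp/autovars/notes.src
printf 'notes.out: notes.src\n\t@echo "building $@ from $<"\n\t@cp $< $@\n' > /tmp/autovars/Makefile
(cd /tmp/autovars && make)
# building notes.out from notes.src
```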

Marvelous! Once this rule completes, every ".cpp" file has a corresponding ".o" file in the build/ directory, exactly what we defined as $(OBJ). With that in place make will jump back up to the calling rule and finish off with the $(CXX) $^ -o $@ command to link our objects together.

This is similar, but we're omitting our flags. We also use a different automatic variable: $^ corresponds to the entire list that $(OBJ) represents. You could also use $+, which includes every list member - $^ omits any duplicates. The $@ part is the same as previously - it stands for the target. This might run a command something like clang++ -std=c++11 build/someClassOne.o build/someClassTwo.o build/someClassThree.o build/main.o -o build/my_cool_project.

Once that's done, you've got your compiled executable ready to go at build/my_cool_project. Thanks, make!

This makefile also provides clean:

clean:
        rm -rf $(BUILDDIR)/*.o $(BUILDDIR)/$(EXEC)

This is another phony target with no dependencies that just runs rm to clean out all the object files and the executable. That way, the next time you run make it has to build everything again; otherwise it only rebuilds the files whose sources have changed since your project was last built.
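You can watch that timestamp check in action with a toy rule (names invented again):

```shell
# make compares the target's mtime against its dependency's.
mkdir -p /tmp/stampdemo
printf 'hello\n' > /tmp/stampdemo/input.txt
printf 'output.txt: input.txt\n\tcp input.txt output.txt\n' > /tmp/stampdemo/Makefile
cd /tmp/stampdemo
make             # runs cp: output.txt doesn't exist yet
make             # make: 'output.txt' is up to date.
touch input.txt  # bump the dependency's timestamp
make             # runs cp again: the dependency is now newer
```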

We've only scratched the surface, but hopefully this helps demystify these files a bit should you come across one.

Challenge: write your own make install rule that copies the newly created target out of build to a cooler place!

Photo by Jason Briscoe on Unsplash


Discussion


Thanks for this!! I'll be honest I know enough about makefiles to compile a project that's using them, but haven't ever written more than the most basic one myself. And I know they are WAY more powerful than that!

Bookmarked and will read later tonight hopefully!

 

The problem is that as you get to more and more powerful recipes you get further and further away from anything readable ;)

 

I have the impression that the more 'powerful' my recipes get, the less target: dependency I'm thinking.

The real power of make is that it converts requirements into target files/dirs. "PHONY" targets are to be shunned as much as possible if you want to make use of this power, since they leave nothing to be required, so they're really at the end of the chain.

I don't know if I agree they should be shunned so much as used sparingly. I think they do help organize sets of subtasks, especially as makefiles grow with many related recipes. I'd much prefer a little extra verbosity to keep my makefile organized and readable.

You're absolutely not alone in that.

But make isn't intended to organize tasks, is it? It's intended to build stuff out of other stuff, honouring the dependencies.

GNU Make is a tool which controls the generation of executables and other non-source files of a program from the program's source files.

These PHONY targets mostly contain bash scripts as recipes. Would it be any worse if you created actual bash scripts out of them?

I've lost hours and hours trying to get my code changes into a flash image, thinking that make image would do. The makefile authors thought that new developers would be smart enough to understand that not all make targets have explicit dependencies (that would slow the build down). Surprise: I wasn't.

That's what I meant: I prefer the targets to be actual Targets, because then the rules I have to remember (and forward to new devs, and document, ...) becomes way smaller. Less room for human errors. And after all, that's what causes the biggest cost for the company.

 

Awesome article.

I didn’t know about the syntax with two sets of dependencies. How is that different from just this?

$(BUILDDIR)/%.o : $(SOURCEDIR)/%.cpp
        $(CXX) $(FLAGS) $< -o $@

I didn’t know either about the trick with the parentheses to run several commands in one shell process, which is sometimes necessary. I thought you had to use ; and \ at the end of lines, which is annoying to write and read.

Also, I think it’s worth stressing that make won’t rebuild an existing target file if it’s more recent than all its dependencies. It’s very good at it and it saves a lot of time (compared to Gulp, for example, which insists on rebuilding everything every time, which gets very tedious, very quickly.)

In fact, there is a solid case in favour of using make instead of Grunt or Gulp for JavaScript builds. It’s fast, powerful, extensively documented, and only requires that the tools you use (Sass processor, minifier) include an executable (which they all SHOULD, really), you don’t need also a Gulp plugin that might not exist or work properly.

 

How is that different from just this?

You know... I'm not sure, looking at it. Indeed, it does work perfectly fine as you've written it. I've just always handled this case like this, I'm going to have to think about what, if any, benefit the fancier syntax nets you.

I suppose there's a case to be made for documentation - there's a rule with a dependency on $(OBJ), so it makes sense we should be able to find that variable as a target. It's less clear at a glance that your rule results in the proper target.

annoying to write and read

Exactly, that's pretty much the only reason I do it like this :)

make won’t rebuild an existing target file if it’s more recent than all its dependencies

This is a really good point. You're right, I also think make is still a strong tool stacked up against more recent choices. It's very unix-y - it does what it does and nothing else, and does that one thing very well. It's definitely more flexible than those other tools.

 

I read about static pattern rules, and I think I’m seeing a difference, but I’ll have to make tests.

web.mit.edu/gnu/doc/html/make_4.ht...

Maybe you could have two lists of %.o files that you want to process with two different rules, so you prepend each rule with the corresponding list.

And also, as you said, for documentation/readability. I once made a big Makefile that was using ImageMagick to generate 12 derivatives from 3 intermediate formats for two types of source images. So that’s 18 rules that you only distinguish from the pattern. If I prepend it with the variable containing the list, it surely becomes more readable.

Ah, cool - thanks for the link!

I've still got so much to learn about make. That's a complex use case... my biggest issue with makefiles is readability, so anything to help yourself out is worthwhile to me.

 

Great article!

I use make in really crappy ways - very unsophisticated. Each makefile is more of a dumping ground for useful scripts to run on a project (build, run, test, clean ... stuff like that). This is a bit more advanced.

Could you talk me through two things?

help:
    @echo "Usage: make {deps|help}" 1>&2 && false

What's going on with the redirect and the false at the end? (I know what the @ does).

and

Why do you need to add .PHONY at the top to the tasks with no output? I don't do this, it works OK... am I quietly breaking stuff?

 

Thanks!

@echo "Usage: make {deps|help}" 1>&2 && false

Thanks so much for pointing this out - I completely forgot to cover it!

I'll mention it here in case anyone doesn't know about @ - it suppresses printing this line to stdout when make runs.

The redirect just sends the echo to stderr instead of stdout, which in practice doesn't make much difference when invoked at the command line. Returning false signals a failed recipe: if any line in a recipe returns false, the target stops building and make considers it failed. Here, it's just a way to ensure that make exits immediately after displaying this help line, and since no target was built, a non-zero exit code is appropriate.

None of it's strictly necessary. It'll work fine without it. One way it could be used is to make help your default rule. This will prevent anything from happening, and prompt the user to choose what they want to do.

Why do you need to add .PHONY at the top to the tasks with no output?

You don't need to. You're not breaking anything. It can prevent name collisions with actual targets if you run into that problem, and when evaluating these rules the implicit rule search is skipped. There's also a whole use case around recursive sub-calls of make, but I've never run into that sort of thing. Basically it skips some useless work, which is a performance bump, but chances are you're not running into much of that!

So, you can probably continue to omit them without worrying too much, but I don't think it's a terrible habit either. It also serves as documentation to some minimal degree.

 
 

Nice intro article, thankyou.

I rewrote the core Solaris build system (6000+ Makefiles, lead a team of 4 for 3 years to deliver it), so my make knowledge is skewed towards the make which comes with Solaris Studio, but there are lots of similarities with the GNU version.

First comment: you define your c++ compiler as

CXX=clang++ -std=c++11
FLAGS=-Wall -Wextra -Werror -pedantic -c -g

I'd have written this differently:

CXX=clang++
CXXFLAGS= -std=c++11 -Wall -Wextra -Werror -pedantic -c -g

For C code, you'd use CC and CFLAGS, you'd also specify LDFLAGS for the linker, etc etc.

When it comes to your bin/boot target, I'd have written that differently too - to make the subdir dependency explicit:

bin:
    @mkdir -p bin

bin/boot: bin
    @(curlcmd....)
    @chmod 0555 $@

I did a lot of gnarly things with Solaris' make pattern matching rules
(for which I sometimes think I still need therapy). Case in point: Solaris has a closed subrepo for delivering binary blobs to partner companies. In order to remove a lot of extra work, I rewrote a few specific install rules to put those blobs in the right place in one hit:

INS.rename= $(INS.file) ; $(MV) $(@D)/$(<F) $@ ; \
    $(MV) $(@D:$(ROOT)/%=$(CLOSEDROOT)/%)/$(<F) \
    $(@D:$(ROOT)/%=$(CLOSEDROOT)/%)/$(@F)

Line noise making things faster, ftw.

The last comment I'll make is that the % character only matches the same pattern in the target and dependency - it's a rather simplistic regex

$(BUILDDIR)/%.o: $(SRCDIR)/%.c

We had to work around this all the time, and that limitation alone added several months to the project while we figured out the most efficient way of doing so.

 

Ah, thanks so much for the tips! That's one issue with such a flexible tool - a bunch of ways to do something can mean it's hard to find the way to do it best. I can see at a glance why both your suggestions are superior but also see why I got where I did and stopped thinking. I'll keep this in mind in the future.

That's a terrifying recipe. I love it.

I did not know that about % - thanks for pointing out the distinction!

 

Nice article, thanks :) We use this to auto document our makefiles (see marmelab.com/blog/2016/02/29/auto-...), I thought you might find it interesting:

help:
    @grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'

clean: ## Cleanup
        rm -rf $(BUILDDIR)/*.o $(BUILDDIR)/$(EXEC)

The help command shows a list of all commands, with the associated help extracted from the comments (##).

 

Aha! Very cool, thanks for sharing. Much better than hardcoding.

 

Thank you for the great article!

I love seeing people rediscover tools which work really well.

If you want to step up some more, you can go to full autotools: it adds release-creation, library detection and clean multiplatform resource installation to Makefiles: draketo.de/light/english/free-soft...

I’m working (slowly) on building a tool to make setup of autotools as convenient as modern language environments: bitbucket.org/ArneBab/conf/

Part of that is a Makefile with nice help-output: bitbucket.org/ArneBab/conf/src/314...

 

aaaargggghghhhh please Cthulhu no!

autotools' library and feature detection is definitely not clean, it's difficult to maintain and hack around the writer's assumptions, and there are much better systems around like CMake and pkg-config. [For the record, I hate CMake, but it's easier to beat a project into shape with it than autotools.]

Where does my bias come from? I've got nearly 30 years experience at this cross-platform feature detection caper if you count automake and before that xmkmf with Imakefiles. Every time I have to hack on an aclocal+friends feature check I come across comments like this: (intltool 0.50.2)

# This macro actually does too much.  Some checks are only needed if
# your package does certain things.  But this isn't really a big deal.


# Fake the existence of programs that GNU maintainers use.  -*- Autoconf -*-


# If the user did not use the arguments to specify the items to instantiate,
# then the envvar interface is used.  Set only those that are not.
# We use the long form for the default assignment because of an extremely
# bizarre bug on SunOS 4.1.3.
if $ac_need_defaults; then
  test "${CONFIG_FILES+set}" = set || CONFIG_FILES=$config_files
fi

SunOS 4.1.3 - an OS that even in 2014 (when that version of intltool was released) had been obsoleted many years prior. Nobody, however, had bothered to remove that comment or check.

To me, autotools are the promotion of bitrot - and a maintenance nightmare.

Use something better.

/me takes curmudgeon hat off for a bit.

 

Ah, nice! Thanks for the links, Autotools is next on my list!

 

If you want something which goes even further, here’s a setup to build a complete book from emacs org-mode via LaTeX using autotools, with references from bibtex, and with index and glossary:

You can even edit with the build environment via ./edit.sh

Get the full repository via hg clone bitbucket.org/ArneBab/1w6/

 

Now this is what I have always wanted to know. I know what I'm reading tonight 😄

 

Please let me know if I just make it more confusing!

Nobody ever sat me down and showed me how this tool worked, I kinda had to figure it out piecemeal, so here's hoping this helps someone else.

 

Who is teaching this stuff anyway, that's why this article is so rare and useful. I have got to the part with the := syntax and trying to visualise what it means. But I think I'm gonna have to try it out. #inspired #unsupportedInlineTags

The docs - I didn't add a link to the top level! Here's the link to the part specifically about the two variable types.

 

That's what I used in the 90s. I like the idea of dependency resolution (if you need to have multiple steps like .cpp -> .o, .o -> .exe, you say to make: I want .exe and make figures out "ok for this I need foo.o and bar.o so I need to do "gcc foo.cpp" and "bar.cpp" and then I can call the linker etc.)

 

I agree, it's a tool that's extraordinarily well suited to its domain.

 

It is really nice to see a good "back to basics UNIX hacking tools" article here!

 

Thanks a lot for the detailed information. I am learning Golang and trying to write Makefiles there. This is helpful!