Now that I’m one of the organizers of the formal methods meetup, it has become a bit clearer why most programmers are not familiar with the general landscape of formal methods. It’s mostly a lack of awareness of what is available out there, so to help fill that awareness gap this post serves as a brief introduction to some of the vocabulary and high-level concepts of formal methods.
This post consists of three main sections. The first covers the general concepts and high-level classifications of formal methods. The second explains why we should care about formal methods in software engineering. The third covers concrete applications of formal methods and some pointers for applying them in day-to-day software engineering.
In this section I’ll make some connections between formal methods and what are, hopefully, widely known concepts to most programmers. The idea is that by connecting the unknown with known examples you will get some anchors in theory space as guides and landmarks to navigate further.
I don’t think there is an agreed upon definition of “formal methods”. The defining characteristic in my opinion is the emphasis on utilizing logic, mathematics, and other general principles of formal reasoning to help with the software development process. The promise/dream/ideal of formal methods is that by utilizing the language of logic and mathematics we can build software better, faster, and with fewer bugs.
The field of programming languages, and programming itself, is in many ways inseparable from formal methods because the very first requirement for most programming languages is parsing human-readable source code. The theory of parsing is the theory of formal grammars and language complexity classes, both mathematical theories required for making sense of parsers. Looked at from the right angle, this alone justifies utilizing more formal methods in other parts of the software development stack, because parsing is a fundamental part of the programming toolkit and a pretty big success in terms of utility. Most of us get to use it without having to re-invent the theory in order to know that parsing HTML with regular expressions is a doomed enterprise: HTML is not a regular language, and regular expressions can only recognize regular languages. We know this because the theory of formal languages guarantees it.
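To make this concrete, here is a small Python sketch of my own (the toy `<b>` tags are made up, not from any real parsing library) showing why a finite-state tool like a regex fails on arbitrarily nested markup while even the simplest counter-based check succeeds:

```python
import re

# A regex corresponds to a finite automaton with no unbounded counter, so it
# can only recognize nesting up to a depth hard-coded into the pattern.
# This pattern handles at most two levels of <b>...</b> nesting.
shallow = re.compile(r"^<b>(<b>[^<>]*</b>|[^<>]*)*</b>$")

# A single counter (the simplest push-down behaviour) handles any depth.
def balanced(s: str) -> bool:
    depth = 0
    for tag in re.findall(r"</?b>", s):
        depth += 1 if tag == "<b>" else -1
        if depth < 0:          # a close tag with no matching open tag
            return False
    return depth == 0

print(shallow.match("<b><b><b>deep</b></b></b>"))  # None: the regex gives up
print(balanced("<b><b><b>deep</b></b></b>"))       # True: the counter copes
```

Adding a third level of alternation to the regex would only push the problem one level deeper; the counter is what a grammar for nested structure fundamentally requires.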
I think most programmers are also familiar with, or have at least heard of, type systems. Similar to parsers, type systems fall under the rubric of formal methods because of the Curry-Howard-Lambek correspondence. There are many connections between logic, proofs, type systems, and programs that can be used to enrich our understanding and practice of software engineering. When writing code we are enacting a formal process that refines a set of requirements into functioning code. This refinement process must preserve the semantics implicit in the requirements; otherwise we are writing code that is “buggy”. Many times we can express approximations of these requirements in the type system so that the compiler can help us by flagging high-level inconsistencies (type errors) in our understanding.
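As a toy illustration of expressing a requirement in types, here is a hypothetical order workflow of my own invention: each state of an order gets its own type, so the requirement “an order must be paid before it ships” becomes something a type checker such as mypy can enforce.

```python
from dataclasses import dataclass

# Each workflow state is a distinct type, so an invalid refinement of the
# requirement "pay before shipping" is a type error, not a runtime surprise.
@dataclass
class Draft:
    items: list

@dataclass
class Paid:
    items: list

@dataclass
class Shipped:
    items: list

def pay(order: Draft) -> Paid:
    return Paid(order.items)

def ship(order: Paid) -> Shipped:  # only a Paid order can be shipped
    return Shipped(order.items)

done = ship(pay(Draft(["book"])))  # well-typed: pay, then ship
# ship(Draft(["book"]))            # rejected by a type checker: Draft is not Paid
```

Python only checks these annotations statically via an external tool, but the same pattern is bread and butter in languages like Haskell, Rust, or OCaml, where the compiler itself rejects the commented-out line.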
All of that is a long-winded way of saying there is no reason to fear formal methods. If you’re a programmer then you are already utilizing formal methods in your daily work in one way or another.
There are delineations/classifications in formal methods just like there are delineations/classifications in programming languages (dynamic, static, homo-iconic, meta-programmable, compiled, interpreted, garbage-collected, etc.). Understanding the high level classification comes in handy when we’re trying to use the right tool to specify the requirements as easily as possible.
The main classifications in formal methods are broad and a little fuzzy, just like programming language classifications. The categories I’m familiar with are theorem proving and verified programming (Coq, Isabelle/HOL, HOL4, etc.), formal specification/modeling/analysis with first/higher-order and modal logic languages (Alloy, TLA+, Spin, Promela, Z3, Yices, etc.), and abstract interpretation as a general technique of program approximation and analysis (Astrée, Polyspace, Verasco, K framework, etc.). This is not an authoritative classification by any means and I recommend doing research on your own as well to see what others have to say on the matter. I recommend starting with this paper and this presentation. Each of the tools and techniques can be used in isolation or in various combinations to specify and model different parts/approximations of the requirements.
Some of these points were already covered in previous paragraphs, but they’re worth repeating.
Understanding the connections to the theory and expressing software requirements more formally can help us become better programmers by elucidating the process of refining the requirements into executable implementations. By using a more precise/formal language to describe requirements we can uncover implicit assumptions and inconsistencies before investing time and effort into an implementation that will inevitably run into a dead end because of an inconsistency in our understanding. We can avoid the hassle of building a prototype and throwing it away by specifying and validating a formal model of what we’re trying to implement ahead of the actual implementation process.
Another way of saying the above is that formal methods force us to be precise about software requirements. By removing ambiguity and implicit assumptions, formal methods help uncover bugs in the design ahead of the implementation process. Subtle design mistakes are hard to uncover partly because communicating intent in informal language is prone to personal biases, assumptions, and simplifications. Writing the requirements in a formal language reduces the surface area for confusion by making the assumptions in the requirements explicit and precise. The explicit assumptions can then be analyzed automatically to point out inconsistencies in our understanding. If the requirements are inconsistent, it is much better to know ahead of time instead of uncovering the inconsistencies halfway through the implementation process.
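Here is a deliberately tiny sketch of what “analyzed automatically” can mean. The hypothetical cache design and its three requirements are made up for illustration; brute-force enumeration stands in for the solver or model finder a real tool would use:

```python
from itertools import product

# Toy "spec": three informal requirements about a hypothetical cache, written
# as boolean constraints so a machine can check them together.
#   R1: if the cache is enabled, reads must be fast
#   R2: writes are always durable
#   R3: reads cannot be fast while writes are durable (a hidden conflict)
requirements = [
    lambda s: (not s["cache"]) or s["fast_reads"],
    lambda s: s["durable_writes"],
    lambda s: not (s["fast_reads"] and s["durable_writes"]),
]

vars_ = ["cache", "fast_reads", "durable_writes"]
models = []
for bits in product([False, True], repeat=len(vars_)):
    s = dict(zip(vars_, bits))
    if all(r(s) for r in requirements):
        models.append(s)

# The only consistent design disables the cache entirely -- an implicit
# consequence the informal wording never made explicit.
print(models)
```

A single satisfying assignment with `cache: False` is exactly the kind of surprise you want to see before building the cache, not halfway through.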
Formal specifications can also serve as documentation of design decisions, and they tend to be more concise than the equivalent informal specifications. Concision can be a double-edged sword, but used properly it can aid in understanding what the original authors of some software component/system were trying to accomplish. By removing the subjective component of informal design specifications we can be more sure that we’ve made our intent as clear and precise as possible.
Assuming you are convinced formal methods are useful the next obvious question is how to go about actually using them. I’ll start with some examples I’m familiar with and then link to use cases that others have explained much better than I could.
I think the easiest places to start applying formal methods are in operational contexts. Software doesn’t exist in a vacuum. It is always situated in a context where it can be used, and these contexts are often implicit and emerge from how people actually use the software. There is usually no logical framework that scaffolds the context, which makes it hard to onboard new engineers and customers. I think formal methods can help provide that scaffolding and make the experience of using software in general much better.
From the engineering perspective the most important operational context is probably the deployment process and formally modeling it can be very useful. The model can tell us all the implicit assumptions baked into the informal process like whether restarts are graceful or not and whether rollbacks are allowed. I’ve written about how to model rolling deployments with the Alloy analyzer previously so I’ll just refer to that article. Along with an explanation of how to model a deployment process in Alloy there are further links and references to other articles with a similar flavor about package managers, state transitions in UIs, and message queues.
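To give a flavor of the kind of exhaustive exploration such a model enables, here is a minimal sketch in Python rather than Alloy (so it runs standalone); the three-server fleet and the capacity requirement are invented for the example:

```python
# Tiny explicit-state exploration of a rolling deploy, in the spirit of an
# Alloy/TLA+ model. A state is a tuple of per-server versions; a server being
# updated is temporarily "down". Invariant: at least CAPACITY servers are up.
SERVERS, CAPACITY = 3, 2

def explore(max_concurrent):
    start = ("old",) * SERVERS
    seen, frontier, violations = {start}, [start], []
    while frontier:
        state = frontier.pop()
        if sum(v != "down" for v in state) < CAPACITY:
            violations.append(state)       # invariant broken in this state
            continue
        # a down server can always come back up running the new version
        for i, v in enumerate(state):
            if v == "down":
                nxt = state[:i] + ("new",) + state[i + 1:]
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        # an old server can be taken down, up to the concurrency limit
        if sum(v == "down" for v in state) < max_concurrent:
            for i, v in enumerate(state):
                if v == "old":
                    nxt = state[:i] + ("down",) + state[i + 1:]
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append(nxt)
    return violations

print(explore(max_concurrent=1))       # []: one at a time preserves capacity
print(len(explore(max_concurrent=2)))  # > 0: two at once can drop below capacity
```

The implicit assumption "never update more than one server at a time" falls out of the model mechanically, instead of living in someone's head.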
My work often involves automating error-prone manual processes in cloud environments. Instead of clicking around in the UI for common operational processes, it is much better to automate those processes with code. Most of the time the automation requires coordinating and synchronizing different components, and it is very easy to introduce bugs in the coordination protocol that lead to deadlocks and weird race conditions. By modeling the process formally we can verify that our coordination protocol is free from the weird issues that come up once in a blue moon and are almost impossible to debug.
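Here is a sketch of what "modeling the process formally" buys you, using the classic toy protocol where two hypothetical workers grab locks A and B in opposite order; exhaustively exploring every interleaving finds the deadlock that testing might never hit:

```python
# Exhaustive interleaving check for a toy coordination protocol. pcs[w] is
# worker w's program counter: 0/1 = acquire first/second lock,
# 2 = release both, 3 = done. The two workers acquire in opposite order.
ORDERS = (("A", "B"), ("B", "A"))

def step(state):
    locks, pcs = state
    for w, order in enumerate(ORDERS):
        pc = pcs[w]
        if pc < 2 and order[pc] not in locks:   # acquire the next lock
            yield (locks | {order[pc]}, pcs[:w] + (pc + 1,) + pcs[w + 1:])
        elif pc == 2:                           # release both locks
            yield (locks - set(order), pcs[:w] + (3,) + pcs[w + 1:])

def find_deadlock():
    start = (frozenset(), (0, 0))
    seen, frontier = {start}, [start]
    while frontier:
        state = frontier.pop()
        succs = list(step(state))
        if not succs and state[1] != (3, 3):    # stuck before both finished
            return state
        for s in succs:
            if s not in seen:
                seen.add(s)
                frontier.append(s)
    return None

print(find_deadlock())  # both locks held, both workers at pc 1: the classic deadlock
```

This is a few dozen states here, but the same idea is what tools like Spin and TLC scale up to millions of states with far better algorithms.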
I’ve written about this as well both informally and then formally. Comparing the informal explanation with the formal model is a useful exercise to see how informal components map to their formal specifications because the implicit assumptions in the informal model are made explicit in the formal model.
Another useful and often overlooked context is security. The case for modeling security properties is hopefully obvious. The weakest links in any security process are the people who have misunderstood the security properties and guarantees of the process and who then end up implicated in a data breach or leak.
One component of security is managing keys for encryption purposes. Many times the process used for managing these keys is either informal or misunderstood and left to “high priests” who understand encryption. In my opinion, this often creates misaligned incentives: the people having to follow the process are inconvenienced but don’t understand exactly why, and so they start taking shortcuts to avoid the hassle. By justifying the requirements with a formal model we can hopefully reduce the incentive misalignment and create more secure software systems.
I have spent some time thinking about the problem of encryption key hierarchies and what properties are reasonable to expect from these hierarchies in order to make them secure. The explanation and resulting model can be found here. It specifies all the relationships that are expected of encryption key hierarchies and what security properties are satisfied if a hierarchy is set up in a way that satisfies the constraints of the formal model.
Some of the bigger proponents of formal methods are the folks who have worked on systems at AWS. There is a paper that lays out how formal methods (TLA+) have been used at AWS to model and find bugs in systems like S3, DynamoDB, EBS, etc. If you don’t want to dive into the paper directly, Adrian Colyer has a good overview post about it.
One reason I started taking formal methods more seriously was because of introductory posts on Hillel Wayne’s website. He has several posts about Alloy and TLA+ (among other kinds of formal methods and tools) and I highly recommend going through them. They’re similar to the examples I gave above in that he tries to address practical day to day use cases.
If we switch our focus from formal modeling to verified programming, the barrier to entry goes up a little, but it’s still worthwhile to be aware of what’s happening. There are many successful projects, like the seL4 microkernel, the CompCert verified C compiler, and Project Everest from Microsoft and collaborators, with the aim of building a verified web stack.
Hopefully that has filled in some gaps if you were unaware of the field of formal methods before. If not then get in touch and let me know what was confusing. I want more people involved in formal methods so any and all improvements are welcome.