With the exponential growth of distributed systems (such as blockchains), consensus is arguably the problem of the century.
We can define the consensus problem as follows. Given a network in which each agent has its own preference over a set of possible decisions, the goal is to design a distributed protocol (i.e., a set of rules agents must follow) that leads the network to a unique decision, without a central authority.
For example, imagine that the network has to decide on a color. Each agent has a favorite color and wants it to be the color everyone ends up choosing after the protocol runs (i.e., the winning decision). That's it.
Formally, a protocol solves consensus if it satisfies three conditions (a toy sketch follows the list):
- Termination: every agent reaches a final state in finite time
- Agreement: all agents end up supporting the same decision
- Validity: the winning decision must be initially supported by at least one agent
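To make these conditions concrete, here is a minimal sketch in Python of a toy "minimum color" protocol, with checks for the three conditions. The protocol and all names are mine, purely for illustration; a single Python process has none of the asynchrony or faults that make real consensus hard.

```python
# Toy protocol: every agent learns all proposals and adopts the
# lexicographically smallest color. Illustrative only.

def min_color_protocol(preferences):
    """Each agent adopts the minimum of all proposed colors."""
    decision = min(preferences.values())
    return {agent: decision for agent in preferences}

preferences = {"alice": "red", "bob": "blue", "carol": "green"}
decisions = min_color_protocol(preferences)

# Termination: the function returned, so every agent reached a final state.
# Agreement: all agents hold the same decision.
assert len(set(decisions.values())) == 1
# Validity: the winner was initially proposed by at least one agent.
assert set(decisions.values()) <= set(preferences.values())
```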
These three conditions allow the network to stabilize on a common decision. But they don't guarantee that each agent's preference carries the same weight.
So, we introduce the concept of fair consensus, where the validity condition is replaced by fairness: the probability that a decision wins must be proportional to the number of agents that initially support it.
In this way, each agent has the same importance as the others.
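One simple way to picture fairness is a lottery: pick one agent uniformly at random and adopt its preference, so a decision supported by k of n agents wins with probability k/n. Here is a minimal sketch under that assumption (names are mine); note that real protocols would need a shared, unbiasable source of randomness, which is itself a hard problem.

```python
import random
from collections import Counter

def lottery_decision(preferences, rng=random):
    """Pick one agent uniformly at random and adopt its preference."""
    leader = rng.choice(list(preferences))
    return preferences[leader]

preferences = {"a1": "red", "a2": "red", "a3": "blue", "a4": "green"}

# Empirically, "red" (supported by 2 of 4 agents) should win about
# half of the runs, matching the fairness condition.
wins = Counter(lottery_decision(preferences) for _ in range(100_000))
print({color: count / 100_000 for color, count in wins.items()})
```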
Now, these conditions are sufficient as long as every agent is honest and doesn't try to improve its chances of winning.
But in reality, this assumption rarely holds.
For this reason, we now turn to rational fair consensus, an extension of fair consensus in which some agents may be rational. A rational agent is one that may deviate from the protocol rules whenever doing so gives it an advantage over the others.
For example, imagine that every agent supporting the winning decision gets $100. A rational agent could claim it supported the winning decision even if it didn't, just to collect the reward. This is a toy example, but reality isn't as far from it as you might think. A related, more extreme class of misbehaving agents is Byzantine agents, which can deviate from the protocol arbitrarily.
Here comes game theory. Game theory studies how players (i.e., agents) play a game (i.e., the protocol) in order to maximize their reward. The concept of a utility function captures this: an agent's utility function maps each final state of the game (i.e., the winning decision) to the reward, called the payoff in game theory, that the agent receives.
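Using the $100 reward from above, an agent's utility function might look like the sketch below (hypothetical names, just to make the idea concrete). The payoff depends only on the final state and on what the agent claims, which is exactly why lying can be rational:

```python
# Utility function for the $100 reward example: the payoff depends on
# the winning decision and on what the agent *claims* it supported.

REWARD = 100

def utility(claimed_support, winning_decision):
    return REWARD if claimed_support == winning_decision else 0

# An honest agent that preferred "blue" gets nothing when "red" wins...
print(utility("blue", "red"))   # 0
# ...so a rational agent simply claims it supported the winner:
print(utility("red", "red"))    # 100
# Without extra checks, claiming the winner dominates honesty, so the
# protocol itself gives rational agents an incentive to lie.
```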
Over the last decades, mathematicians have studied solutions of non-cooperative games using notions such as the Nash equilibrium to understand how players will act. A really famous problem is the Prisoner's dilemma; I suggest reading about it to understand why game theory is so important.
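As a taste, here is a brute-force search for pure-strategy Nash equilibria in the Prisoner's dilemma, using one standard choice of payoffs (the exact numbers vary across textbooks):

```python
from itertools import product

# Payoffs as (row player, column player); more negative = worse.
# Strategies: "C" = cooperate (stay silent), "D" = defect (betray).
PAYOFFS = {
    ("C", "C"): (-1, -1),
    ("C", "D"): (-3,  0),
    ("D", "C"): ( 0, -3),
    ("D", "D"): (-2, -2),
}
STRATEGIES = ("C", "D")

def is_nash(profile):
    """A profile is a Nash equilibrium if no player gains by deviating alone."""
    for player in (0, 1):
        current = PAYOFFS[profile][player]
        for alt in STRATEGIES:
            deviated = list(profile)
            deviated[player] = alt
            if PAYOFFS[tuple(deviated)][player] > current:
                return False
    return True

print([p for p in product(STRATEGIES, repeat=2) if is_nash(p)])
# -> [('D', 'D')]: mutual defection is the unique Nash equilibrium,
# even though mutual cooperation would pay both players more.
```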
This is the point of intersection: we don't only have to build a consensus protocol, we also need a mathematical way to ensure that rational agents will follow its rules.
If we find one, distributed systems will become more reliable and secure for end users.