A process is called "correct" if it does not fail at any point during its execution. Unlike Terminating Reliable Broadcast, the typical Consensus problem does not label any single process as a "sender". Every process "proposes" a value; the goal of the protocol is for all correct processes to choose a single value from among those proposed. A process may perform many I/O operations during protocol execution, but must eventually "decide" a value by passing it back to the application that invoked the Consensus protocol on that process.
Valid consensus protocols must provide important guarantees to all processes involved. All correct processes must eventually decide the same value, for example, and that value must be one of those proposed. A correct process is therefore guaranteed that the value it decides was also decided by all other correct processes, and can act on that value accordingly.
More precisely, a Consensus protocol must satisfy the four formal properties below.
The possibility of faults in the system makes these properties more difficult to satisfy. A simple but invalid Consensus protocol might have every process broadcast its proposal to all others, and have a process decide on the smallest value received. Such a protocol, as described, does not satisfy Agreement if faults can occur: if a process crashes after sending its proposal to some processes, but before sending it to others, then the two sets of processes may decide different values.
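The Agreement violation can be made concrete with a small simulation. In the sketch below, the function name and the `crashed_after` fault map (sender id mapped to the set of processes its broadcast reached before it crashed) are illustrative assumptions, not part of any real protocol:

```python
def broadcast_min_consensus(proposals, crashed_after):
    """Simulate the flawed protocol: each process broadcasts its
    proposal and decides the smallest value it received.
    `crashed_after[i]` is the set of processes that process i's
    broadcast reached before i crashed (hypothetical fault model)."""
    n = len(proposals)
    # Every process has at least received its own proposal.
    received = {i: {proposals[i]} for i in range(n)}
    for sender in range(n):
        # A correct sender reaches everyone; a crashed one reaches
        # only the subset recorded in crashed_after.
        reached = crashed_after.get(sender, set(range(n)))
        for receiver in reached:
            received[receiver].add(proposals[sender])
    # Crashed processes never decide; correct ones decide the minimum.
    return {i: min(received[i]) for i in range(n) if i not in crashed_after}
```

With proposals [1, 2, 3] and process 0 crashing after its broadcast reaches only process 1, process 1 decides 1 while process 2 decides 2, violating Agreement; with no crashes, every process decides 1.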
Consensus has been shown to be impossible to solve in several models of distributed computing.
In an asynchronous system, where processes have no common clock and run at arbitrarily varying speeds, the problem is impossible to solve if one process may crash and processes communicate by sending messages to one another. The technique used to prove this result is sometimes called an FLP impossibility proof, named after its creators, Michael J. Fischer, Nancy A. Lynch, and Michael S. Paterson, who won the Dijkstra Prize for this result. The technique has been widely used to prove other impossibility results. For example, a similar proof can be used to show that consensus is also impossible in asynchronous systems where processes communicate by reading and writing shared variables if one process may crash.
The FLP result does not state that consensus can never be reached: only that, under the model's assumptions, no algorithm can be guaranteed to reach consensus in bounded time. There exist algorithms, even under the asynchronous model, that reach consensus with probability one. The FLP proof hinges on demonstrating the existence of an order of message receipts that causes the system never to reach consensus. Such a "bad" schedule, however, may be vanishingly unlikely in practice.
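The classic example of such an algorithm is Ben-Or's randomized binary consensus. The sketch below is a toy, lock-step Python simulation of its decide/adopt/coin-flip structure; it injects no crashes and lets every process see every message each round, so it illustrates the round logic only, not a faithful asynchronous execution:

```python
import random

def ben_or(initial_values, f=1, seed=0, max_rounds=100):
    """Toy lock-step simulation of a Ben-Or-style randomized binary
    consensus among n processes, tolerating up to f crashes
    (none are injected here). Values are 0 or 1."""
    rng = random.Random(seed)
    n = len(initial_values)
    values = list(initial_values)   # each process's current estimate
    decided = [None] * n
    for _ in range(max_rounds):
        # Phase 1: every process reports its estimate; a process
        # proposes w only if a strict majority reported w.
        reports = list(values)
        majority = [w for w in (0, 1) if reports.count(w) * 2 > n]
        proposals = [majority[0] if majority else None
                     for _ in range(n)]
        # Phase 2: decide on enough support, adopt a supported value,
        # or fall back to a local coin flip.
        for i in range(n):
            for w in (0, 1):
                if proposals.count(w) >= f + 1:
                    decided[i] = w          # enough support: decide w
                    values[i] = w
                    break
            else:
                supported = [w for w in proposals if w is not None]
                values[i] = (supported[0] if supported
                             else rng.randint(0, 1))  # local coin
        if all(d is not None for d in decided):
            return decided
    return decided
```

When all processes propose the same value, they decide it in the first round (Validity); with a split vote, the coin flips eventually produce a majority, so termination holds with probability one rather than in bounded time.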
In a synchronous system, where all processes run at the same speed, consensus is impossible if processes communicate by sending messages to one another and one third or more of the processes can experience Byzantine failures.
Google has implemented a distributed lock service library called Chubby. Chubby maintains lock information in small files which are stored in a replicated database to achieve high availability in the face of failures. The database is implemented on top of a fault-tolerant log layer which is based on the Paxos consensus algorithm. In this scheme, Chubby clients communicate with the Paxos master in order to access or update the replicated log, i.e., to read and write the files.
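The layering described above (clients funnel reads and writes through a master, which serializes them into a log that every replica replays) can be sketched as follows. This toy omits Paxos entirely: the plain `log.append` stands in for a full consensus instance, and all class and file names are illustrative assumptions:

```python
class Replica:
    """Applies log entries in order to a local copy of the file map."""
    def __init__(self):
        self.files = {}       # file name -> contents (Chubby-style small files)
        self.applied = 0      # index of the next log entry to apply

    def apply(self, log):
        while self.applied < len(log):
            name, content = log[self.applied]
            self.files[name] = content
            self.applied += 1

class Master:
    """Toy stand-in for the Paxos master: it serializes client writes
    by appending them to a shared log that every replica replays."""
    def __init__(self, replicas):
        self.log = []
        self.replicas = replicas

    def write(self, name, content):
        # In the real system, this append would be decided by a
        # Paxos instance rather than performed directly.
        self.log.append((name, content))
        for r in self.replicas:
            r.apply(self.log)

    def read(self, name):
        return self.replicas[0].files.get(name)
```

Because every replica applies the same log in the same order, all replicas hold identical file maps, which is what lets the service survive the failure of individual replicas.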