In computer science, concurrency is the execution of several instruction sequences at the same time. In an operating system, this happens when several threads run in parallel. These threads may communicate with each other through either shared memory or message passing.
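The two communication styles can be sketched side by side. This is a minimal illustration, not taken from the text: the names (`shared_writer`, `producer`, `consumer`) and the use of Python's `threading` and `queue` modules are this example's own choices.

```python
import threading
import queue

# Shared-memory communication: both the main thread and the worker touch
# the same list, so a lock guards every update.
shared = []
lock = threading.Lock()

def shared_writer():
    for i in range(3):
        with lock:
            shared.append(i)

# Message-passing communication: the producer never touches the consumer's
# state; values travel only through a queue (the "mailbox").
mailbox = queue.Queue()

def producer():
    for i in range(3):
        mailbox.put(i)
    mailbox.put(None)  # sentinel: no more messages

def consumer(received):
    while (msg := mailbox.get()) is not None:
        received.append(msg)

t1 = threading.Thread(target=shared_writer)
t1.start(); t1.join()

received = []
t2 = threading.Thread(target=producer)
t3 = threading.Thread(target=consumer, args=(received,))
t2.start(); t3.start()
t2.join(); t3.join()

print(shared)    # [0, 1, 2]
print(received)  # [0, 1, 2]
```

In the shared-memory half, correctness depends on every participant honoring the lock; in the message-passing half, the queue itself serializes all communication, so the threads need no other shared state.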
Distribution is a form of concurrency in which all communication between simultaneous threads is done exclusively via message passing. Distribution is useful because its resource consumption scales more gracefully: whereas shared-memory concurrency often dedicates a processor to each thread, distribution allows many threads to coexist and communicate with one another without sharing any memory at all.
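A minimal sketch of the distribution style, with worker processes that share no memory and coordinate only through message queues. The worker name and the use of Python's `multiprocessing` module (with the Unix `"fork"` start method assumed for simplicity) are this example's own, not from the text.

```python
import multiprocessing as mp

# The "fork" start method is assumed here (available on Unix); it lets
# locally defined functions be used as process targets without pickling.
ctx = mp.get_context("fork")

def square_worker(inbox, outbox):
    # Each worker receives work items as messages and replies with
    # results as messages; it shares no state with its peers.
    while (n := inbox.get()) is not None:
        outbox.put(n * n)

inbox, outbox = ctx.Queue(), ctx.Queue()
workers = [ctx.Process(target=square_worker, args=(inbox, outbox))
           for _ in range(2)]
for w in workers:
    w.start()

for n in range(4):
    inbox.put(n)
for _ in workers:
    inbox.put(None)  # one shutdown message per worker

results = sorted(outbox.get() for _ in range(4))
for w in workers:
    w.join()
print(results)  # [0, 1, 4, 9]
```

Because the workers communicate only through the queues, the same design would still work if they ran on separate machines with the queues replaced by network sockets; that independence from shared memory is what lets distribution scale.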
Concurrency is also a programming design philosophy. In concurrent programming, programmers attempt to break down a complex problem into several simultaneously executing processes that can be addressed individually. Although concurrent programming offers better program structure than sequential programming, it is not always more practical. In a concurrent system, computations being executed at the same time can interleave in different orders, giving nondeterministic answers. The system may end in a deadlock if well-defined maxima are not assigned to the resource consumption of each of the executing threads. Thus, to design for robust concurrency in an operating system, a programmer needs to both reduce a problem into individual, parallel tasks and coordinate the execution, memory allocation, and data exchange of those tasks.
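The need for coordination can be made concrete with a shared counter. In this sketch (an illustration of the general point, not taken from the text), a lock serializes each read-modify-write so the result is deterministic; without it, concurrent increments could interleave and lose updates.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # "counter += 1" is a read-modify-write. Without the lock, two
        # threads could both read the same value and one update would be
        # lost, making the final total nondeterministic.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000: deterministic because every update is serialized
```

The lock restores determinism at the cost of serializing the updates; acquiring several such locks in inconsistent orders is exactly the kind of uncoordinated resource use that can end in deadlock.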