Concurrent computing is a form of computing in which several computations are executing during overlapping time periods – concurrently – instead of sequentially (one completing before the next starts). This is a property of a system – this may be an individual program, a computer, or a network – and there is a separate execution point or "thread of control" for each computation ("process"). A concurrent system is one where a computation can make progress without waiting for all other computations to complete – where more than one computation can make progress at "the same time" (see definition, below).
As a programming paradigm, concurrent computing is a form of modular programming, namely factoring an overall computation into subcomputations that may be executed concurrently. Pioneers in the field of concurrent computing include Edsger Dijkstra, Per Brinch Hansen, and C.A.R. Hoare.
Concurrent computing is related to but distinct from parallel computing, though these concepts are frequently confused, and both can be described as "multiple processes executing at the same time". In parallel computing, execution literally occurs at the same instant, for example on separate processors of a multi-processor machine – parallel computing is impossible on a (single-core) single processor, as only one computation can occur at any instant (during any single clock cycle).[a] By contrast, concurrent computing consists of process lifetimes overlapping, but execution need not happen at the same instant.
For example, concurrent processes can be executed on a single core by interleaving the execution steps of each process via time slices: only one process runs at a time, and if it does not complete during its time slice, it is paused, another process begins or resumes, and then later the original process is resumed. In this way multiple processes are part-way through execution at a single instant, but only one process is being executed at that instant.
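As a minimal illustration (a Java sketch, not drawn from any particular system), two threads started on a single core may have their steps interleaved by the scheduler, so both are part-way through their loops while only one runs at any instant:

    // A sketch, not a prescribed implementation: two threads whose steps may interleave.
    public class Interleave {
        public static void main(String[] args) throws InterruptedException {
            Runnable task = () -> {
                for (int i = 0; i < 3; i++) {
                    // On a single core the OS time-slices the two threads,
                    // so the lines printed by T1 and T2 may appear interleaved.
                    System.out.println(Thread.currentThread().getName() + " step " + i);
                }
            };
            Thread t1 = new Thread(task, "T1");
            Thread t2 = new Thread(task, "T2");
            t1.start();
            t2.start();
            t1.join();  // wait for both threads to finish
            t2.join();
        }
    }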
Concurrent computations may be executed in parallel, for example by assigning each process to a separate processor or processor core, or distributing a computation across a network. This is known as task parallelism, and this type of parallel computing is a form of concurrent computing.
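The following is a hedged sketch of task parallelism in Java, assuming a multi-core machine and using the standard ExecutorService; the partial-sum tasks are invented purely for illustration. Each task is an independent unit of work whose thread the operating system may schedule onto a different core, so the concurrent tasks can also run in parallel:

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class TaskParallel {
        public static void main(String[] args) throws Exception {
            ExecutorService pool =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

            // Two independent tasks; their threads may run on different cores.
            Callable<Long> lower = () -> sum(0L, 50_000_000L);
            Callable<Long> upper = () -> sum(50_000_000L, 100_000_000L);

            Future<Long> a = pool.submit(lower);
            Future<Long> b = pool.submit(upper);

            System.out.println("total = " + (a.get() + b.get()));  // combine partial results
            pool.shutdown();
        }

        // Sums the integers in [from, to); a stand-in for any independent unit of work.
        static long sum(long from, long to) {
            long s = 0;
            for (long i = from; i < to; i++) s += i;
            return s;
        }
    }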
By contrast, parallel computing by data parallelism may or may not be concurrent computing – a single process may control all computations, in which case it is not concurrent, or the computations may be spread across several processes, in which case this is concurrent. For example, SIMD (single instruction, multiple data) processing is (data) parallel but not concurrent – multiple computations are happening at the same instant (in parallel), but there is only a single process. Examples of this include vector processors and graphics processing units (GPUs). By contrast, MIMD (multiple instruction, multiple data) processing is both data parallel and task parallel, and is concurrent; this is commonly implemented as SPMD (single program, multiple data), where multiple programs execute concurrently and in parallel on different data.
The exact timing of when tasks in a concurrent system are executed depends on the scheduling, and tasks need not always be executed concurrently. For example, given two tasks, T1 and T2: T1 may be executed and finished before T2 begins (or vice versa), making them serial and sequential; T1 and T2 may be executed alternately on one processor, making them serial and concurrent; or T1 and T2 may be executed simultaneously at the same instant, making them parallel and concurrent.
The word "sequential" is used as an antonym for both "concurrent" and "parallel"; when these are explicitly distinguished, concurrent/sequential and parallel/serial are used as opposing pairs.
Concurrency is pervasive in computing, occurring from low-level hardware on a single chip to world-wide networks. Examples follow.
At the hardware level, for example, multiprocessor and multi-core machines, I/O channels and peripheral controllers, and direct memory access all involve computations proceeding concurrently with the main processor.
At the programming language level, examples include threads, coroutines, futures and promises, and message-passing constructs such as channels.
At the operating system level, examples include multitasking (whether cooperative or preemptive), processes, and threads.
At the network level, networked systems are generally concurrent by their nature, as they consist of separate devices.
A widespread basic example of concurrency in software engineering is a software pipeline, which treats the individual steps in the pipeline as filters operating on a stream. For example, some or all phases of a compiler may be structured as a pipeline. Today this is most common for the compiler frontend, structured as a lexer followed by a parser: the lexer converts a stream of source code characters into a stream of tokens, which the parser then converts into a parse tree – the same structure can be applied to other computer languages such as HTML. This may be followed by other pipeline stages, notably in a one-pass compiler, though most modern compilers in most uses are multi-pass and do not operate as strict pipelines.
Concurrent computing can occur without interactions between the processes, for example with isolated virtual machines or time-sharing systems. This type of concurrency is transparent to users (other than speed), and poses no added complexity to programmers, though scheduling must be done by the operating system.
The simplest non-trivial interaction between concurrent processes is a pipeline – narrowly speaking, a linear, one-way process, where the output of one stage is the input for the next. This is of the same complexity as function composition of a fixed set of functions (without recursion, for instance), and can be analyzed similarly.
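As an illustrative sketch (in Java, with invented stage names), such a two-stage pipeline can be built from two threads connected by a blocking queue, so that the output of the first stage becomes the input of the second:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class Pipeline {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<String> queue = new ArrayBlockingQueue<>(16);

            // Stage 1: produces items and places them on the queue.
            Thread producer = new Thread(() -> {
                try {
                    for (int i = 0; i < 5; i++) {
                        queue.put("item-" + i);
                    }
                    queue.put("EOF");  // sentinel marking the end of the stream
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            // Stage 2: consumes items from the queue as they arrive.
            Thread consumer = new Thread(() -> {
                try {
                    String item;
                    while (!(item = queue.take()).equals("EOF")) {
                        System.out.println("stage 2 received " + item);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            producer.start();
            consumer.start();
            producer.join();
            consumer.join();
        }
    }

The queue is the only interaction between the two stages, mirroring the linear, one-way structure described above: the first stage waits on the second only when the queue is full.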
Many concurrent systems feature more complex interactions between the processes, analogous to mutual recursion of functions, which can result in significant complexity and even nondeterminism in results due to race conditions, since the order of individual steps may vary depending on how the processes are scheduled. These interactions often take the form of communication via message passing, which may be synchronous or asynchronous, or of access to shared resources. The main challenge in designing concurrent programs is concurrency control: ensuring the correct sequencing of the interactions or communications between different computational executions, and coordinating access to resources that are shared among executions. Potential problems include nondeterminism (from race conditions), deadlock, and resource starvation. In multi-threaded programming, interaction occurs via shared memory, and thread safety refers to code behaving correctly when run concurrently.
A number of different methods can be used to implement concurrent programs, such as implementing each computational execution as an operating system process, or implementing the computational processes as a set of threads within a single operating system process.
In some concurrent computing systems, communication between the concurrent components is hidden from the programmer (e.g., by using futures), while in others it must be handled explicitly. Explicit communication can be divided into two classes: shared-memory communication, in which concurrent components communicate by altering the contents of shared memory locations (which generally requires locking or other forms of coordination), and message-passing communication, in which components communicate by exchanging messages, whether synchronously or asynchronously.
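As a hedged illustration of communication hidden behind futures (a Java sketch; the lookup function is invented), the caller receives a placeholder for a result that another thread is still computing and blocks only when the value is actually needed:

    import java.util.concurrent.CompletableFuture;

    public class FutureExample {
        public static void main(String[] args) {
            // The library runs the lookup on another thread; passing the result
            // back to the caller is hidden behind the future.
            CompletableFuture<Integer> price =
                CompletableFuture.supplyAsync(() -> expensiveLookup(42));

            System.out.println("doing other work while the lookup runs...");
            System.out.println("result: " + price.join());  // blocks only here, if at all
        }

        // A stand-in for a slow computation or remote call.
        static int expensiveLookup(int id) {
            return id * 10;
        }
    }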
Shared memory and message passing concurrency have different performance characteristics. Typically (although not always), the per-process memory overhead and task-switching overhead are lower in a message-passing system, but the overhead of message passing itself is greater than for a procedure call. These differences are often overwhelmed by other performance factors.
One of the major issues in concurrent computing is preventing concurrent processes from interfering with each other. For example, consider the following algorithm for making withdrawals from a checking account represented by the shared resource balance:
bool withdraw(int withdrawal)
{
    if (balance >= withdrawal)
    {
        balance -= withdrawal;
        return true;
    }
    return false;
}
Suppose balance = 500 and two concurrent threads each make the call withdraw(350). If line 3 in both operations executes before line 5 in either, both operations will find that balance >= withdrawal evaluates to true, and execution in both will proceed to subtracting the withdrawal amount. However, since both processes perform their withdrawals, the total amount withdrawn (700) will end up being more than the original balance. These sorts of problems with shared resources require the use of concurrency control, or non-blocking algorithms.
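One common remedy, sketched here in Java purely for illustration (the Account class is hypothetical and not part of the example above), is to make the check and the update a single atomic critical section, for instance using a monitor via Java's synchronized keyword:

    class Account {
        private int balance = 500;

        // The synchronized keyword makes the method a critical section guarded by
        // the object's monitor: the check and the update execute atomically.
        public synchronized boolean withdraw(int withdrawal) {
            if (balance >= withdrawal) {
                balance -= withdrawal;  // no other thread can interleave here
                return true;
            }
            return false;
        }
    }

With the method synchronized, only one thread at a time can execute the check-then-subtract sequence, so the second call to withdraw(350) sees the updated balance of 150 and returns false.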
Because concurrent systems rely on the use of shared resources (including communication media), concurrent computing in general requires the use of some form of arbiter somewhere in the implementation to mediate access to these resources.
Unfortunately, while many solutions exist to the problem of a conflict over one resource, many of those "solutions" have their own concurrency problems such as deadlock when more than one resource is involved.
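For illustration, here is a hedged Java sketch of one such problem, deadlock over two resources (the lock names are invented): if two threads acquire the same two locks in opposite orders, each may end up holding one lock while waiting forever for the other.

    public class DeadlockSketch {
        private static final Object lockA = new Object();
        private static final Object lockB = new Object();

        public static void main(String[] args) {
            // Thread 1 takes lockA then lockB; thread 2 takes lockB then lockA.
            // If each acquires its first lock before the other releases, neither
            // can proceed, and the program hangs in a deadlock.
            new Thread(() -> {
                synchronized (lockA) {
                    pause();
                    synchronized (lockB) { System.out.println("thread 1 done"); }
                }
            }).start();

            new Thread(() -> {
                synchronized (lockB) {
                    pause();
                    synchronized (lockA) { System.out.println("thread 2 done"); }
                }
            }).start();
        }

        private static void pause() {
            try { Thread.sleep(100); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }

A common remedy is to impose a global ordering on lock acquisition, so that every thread takes lockA before lockB.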
Concurrent programming languages are programming languages that use language constructs for concurrency. These constructs may involve multi-threading, support for distributed computing, message passing, shared resources (including shared memory) or futures and promises. Such languages are sometimes described as Concurrency Oriented Languages or Concurrency Oriented Programming Languages (COPL).
Today, the most commonly used programming languages that have specific constructs for concurrency are Java and C#. Both of these languages fundamentally use a shared-memory concurrency model, with locking provided by monitors (although message-passing models can be, and have been, implemented on top of the underlying shared-memory model). Of the languages that use a message-passing concurrency model, Erlang is probably the most widely used in industry at present.
Many concurrent programming languages have been developed more as research languages (e.g., Pict) than as languages for production use. However, languages such as Erlang, Limbo, and occam have seen industrial use at various times over the last 20 years. Other languages in which concurrency plays an important role include Ada and Go.
Many more languages provide support for concurrency in the form of libraries, at a level roughly comparable to the language constructs above.
There are several models of concurrent computing, which can be used to understand and analyze concurrent systems. These models include Petri nets, process calculi (such as CSP, CCS, and the π-calculus), and the actor model.
Concurrent computing developed out of earlier work on railroads and telegraphy, from the 19th and early 20th century, and some terms date to this period, such as semaphores. These arose to address the question of how to handle multiple trains on the same railroad system (avoiding collisions and maximizing efficiency) and how to handle multiple transmissions over a given set of wires (improving efficiency), such as via time-division multiplexing (1870s).