For a more theoretical discussion, see Concurrency (computer science).

Concurrent computing is a form of computing in which several computations execute during overlapping time periods – concurrently – instead of sequentially (one completing before the next starts). This is a property of a system – whether an individual program, a computer, or a network – in which there is a separate execution point or "thread of control" for each computation ("process"). A concurrent system is one in which a computation can make progress without waiting for all other computations to complete – where more than one computation can make progress at "the same time".[1]

As a programming paradigm, concurrent computing is a form of modular programming, namely factoring an overall computation into subcomputations that may be executed concurrently. Pioneers in the field of concurrent computing include Edsger Dijkstra, Per Brinch Hansen, and C.A.R. Hoare.

Introduction

Concurrent computing is related to but distinct from parallel computing, though these concepts are frequently confused,[2] and both can be described as "multiple processes executing during the same period of time". In parallel computing, execution literally occurs at the same instant, for example on separate processors of a multi-processor machine, with the goal of speeding up computations – parallel computing is impossible on a (single-core) single processor, as only one computation can occur at any instant (during any single clock cycle).[a] By contrast, concurrent computing consists of process lifetimes overlapping, but execution need not happen at the same instant. The goal here is to model processes in the outside world that happen concurrently, such as multiple clients accessing a server at the same time. Structuring software systems as composed of multiple concurrent, communicating parts can be useful for tackling complexity, regardless of whether the parts can be executed in parallel.[3]:1

For example, concurrent processes can be executed on a single core by interleaving the execution steps of each process via time slices: only one process runs at a time, and if it does not complete during its time slice, it is paused, another process begins or resumes, and then later the original process is resumed. In this way multiple processes are part-way through execution at a single instant, but only one process is being executed at that instant.
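The following minimal Java sketch models this interleaving (Java is used for the illustrative sketches in this article, being one of the mainstream concurrency-supporting languages discussed below; a real operating system preempts at arbitrary points rather than at the whole-step boundaries modeled here). Each "process" is a queue of steps, and a scheduler loop alternates one step per process:

    // Single-threaded simulation of time-sliced interleaving.
    import java.util.ArrayDeque;
    import java.util.Queue;

    public class RoundRobin {
        public static void main(String[] args) {
            // Each "process" is modeled as a queue of execution steps.
            Queue<Runnable> p1 = new ArrayDeque<>();
            Queue<Runnable> p2 = new ArrayDeque<>();
            for (int i = 1; i <= 3; i++) {
                final int step = i;
                p1.add(() -> System.out.println("P1 step " + step));
                p2.add(() -> System.out.println("P2 step " + step));
            }
            // Alternate one "time slice" per process until both finish: both
            // are part-way through at once, but only one runs at any instant.
            while (!p1.isEmpty() || !p2.isEmpty()) {
                if (!p1.isEmpty()) p1.poll().run();
                if (!p2.isEmpty()) p2.poll().run();
            }
        }
    }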

Concurrent computations may be executed in parallel,[2][4] for example by assigning each process to a separate processor or processor core, or distributing a computation across a network, but in general, the languages, tools and techniques for parallel programming may not be suitable for concurrent programming, and vice versa.

The exact timing of when tasks in a concurrent system are executed depends on the scheduling, and tasks need not always be executed concurrently. For example, given two tasks, T1 and T2:

  • T1 may be executed and finished before T2
  • T2 may be executed and finished before T1
  • T1 and T2 may be executed alternately (time-slicing)
  • T1 and T2 may be executed simultaneously at the same instant of time (parallelism)
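Which of these occurs is up to the scheduler, as the following hedged Java sketch shows: across runs, the two submitted tasks may print in either order (the pool size and timeout are arbitrary choices for the example).

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class Ordering {
        public static void main(String[] args) throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(2);
            pool.submit(() -> System.out.println("T1"));
            pool.submit(() -> System.out.println("T2"));
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.SECONDS);
            // Output may be "T1" then "T2" or the reverse; on a multicore
            // machine the two prints may even execute at the same instant.
        }
    }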

The word "sequential" is used as an antonym for both "concurrent" and "parallel"; when these are explicitly distinguished, concurrent/sequential and parallel/serial are used as opposing pairs.[5]

Coordinating access to shared resources

The main challenge in designing concurrent programs is concurrency control: ensuring the correct sequencing of the interactions or communications between different computational executions, and coordinating access to resources that are shared among executions.[4] Potential problems include race conditions, deadlocks, and resource starvation. For example, consider the following algorithm for making withdrawals from a checking account represented by the shared resource balance:

  1.  bool withdraw(int withdrawal)
  2.  {
  3.      if (balance >= withdrawal)
  4.      {
  5.          balance -= withdrawal;
  6.          return true;
  7.      }
  8.      return false;
  9.  }

Suppose balance = 500, and two concurrent threads make the calls withdraw(300) and withdraw(350). If line 3 in both operations executes before line 5 in either, both operations will find that balance >= withdrawal evaluates to true, and execution will proceed to subtracting the withdrawal amount. However, since both processes perform their withdrawals, the total amount withdrawn will end up being more than the original balance. These sorts of problems with shared resources require the use of concurrency control, or non-blocking algorithms.
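One conventional fix is to make the check-then-act sequence atomic by guarding it with a lock, as in the minimal Java sketch below (the Account class and its field are illustrative, not part of the example above):

    public class Account {
        private int balance = 500;

        // synchronized makes the whole check-then-act sequence atomic:
        // at most one thread executes this method body at a time.
        public synchronized boolean withdraw(int withdrawal) {
            if (balance >= withdrawal) {
                balance -= withdrawal;
                return true;
            }
            return false;
        }
    }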

Because concurrent systems rely on the use of shared resources (including communication media), concurrent computing in general requires the use of some form of arbiter somewhere in the implementation to mediate access to these resources.

Unfortunately, while many solutions exist to the problem of a conflict over one resource, many of those "solutions" have their own concurrency problems such as deadlock when more than one resource is involved.

Advantages of concurrent computation

  • Increased application throughput – parallel execution of a concurrent program allows the number of tasks completed in a given time period to increase.
  • High responsiveness for input/output – input/output-intensive applications mostly wait for input or output operations to complete. Concurrent programming allows the time that would be spent waiting to be used for another task.
  • More appropriate program structure – some problems and problem domains are well-suited to representation as concurrent tasks or processes.

Models of concurrency

There are several models of concurrent computing, which can be used to understand and analyze concurrent systems. These include the Actor model and process calculi such as Communicating Sequential Processes (CSP) and the π-calculus.

Implementation

A number of different methods can be used to implement concurrent programs, such as implementing each computational execution as an operating system process, or implementing the computational processes as a set of threads within a single operating system process.
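A hedged Java sketch of the two strategies just named (assuming a Unix-like system where an echo command is available):

    public class Spawn {
        public static void main(String[] args) throws Exception {
            // (1) A computation as a thread within this OS process.
            Thread worker = new Thread(() -> System.out.println("in a thread"));
            worker.start();
            worker.join();

            // (2) A computation as a separate operating-system process.
            Process child = new ProcessBuilder("echo", "in a child process")
                    .inheritIO()   // let the child share this process's stdout
                    .start();
            child.waitFor();
        }
    }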

Concurrent interaction and communication

In some concurrent computing systems, communication between the concurrent components is hidden from the programmer (e.g., by using futures), while in others it must be handled explicitly. Explicit communication can be divided into two classes:

Shared memory communication 
Concurrent components communicate by altering the contents of shared memory locations (exemplified by Java and C#). This style of concurrent programming usually requires the application of some form of locking (e.g., mutexes, semaphores, or monitors) to coordinate between threads. A program that properly implements any of these is said to be thread-safe.
Message passing communication 
Concurrent components communicate by exchanging messages (exemplified by Scala, Erlang and occam). The exchange of messages may be carried out asynchronously, or may use a synchronous "rendezvous" style in which the sender blocks until the message is received. Asynchronous message passing may be reliable or unreliable (sometimes referred to as "send and pray"). Message-passing concurrency tends to be far easier to reason about than shared-memory concurrency, and is typically considered a more robust form of concurrent programming.[citation needed] A wide variety of mathematical theories for understanding and analyzing message-passing systems are available, including the Actor model, and various process calculi. Message passing can be efficiently implemented on symmetric multiprocessors, with or without shared coherent memory.
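As a hedged illustration of the style (staying with Java for consistency, and using a blocking queue as a stand-in for the channels of occam or the mailboxes of Erlang):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class Channel {
        public static void main(String[] args) {
            // The queue plays the role of an asynchronous, bounded channel.
            BlockingQueue<String> channel = new ArrayBlockingQueue<>(10);

            Thread sender = new Thread(() -> {
                try {
                    channel.put("hello");  // enqueue a message
                    channel.put("done");   // sentinel: no more messages
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread receiver = new Thread(() -> {
                try {
                    String msg;
                    do {
                        msg = channel.take();  // block until a message arrives
                        System.out.println("received: " + msg);
                    } while (!msg.equals("done"));
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            sender.start();
            receiver.start();
        }
    }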

Shared memory and message passing concurrency have different performance characteristics. Typically (although not always), the per-process memory overhead and task switching overhead is lower in a message passing system, but the overhead of message passing itself is greater than for a procedure call. These differences are often overwhelmed by other performance factors.

History

Concurrent computing developed out of earlier work on railroads and telegraphy, from the 19th and early 20th century, and some terms date to this period, such as semaphores. These arose to address the question of how to handle multiple trains on the same railroad system (avoiding collisions and maximizing efficiency) and how to handle multiple transmissions over a given set of wires (improving efficiency), such as via time-division multiplexing (1870s).

The academic study of concurrent algorithms started in the 1960s, with Dijkstra's 1965 paper credited as the first in the field; it identified and solved the mutual exclusion problem.[6]

Prevalence

Concurrency is pervasive in computing, occurring from low-level hardware on a single chip to world-wide networks. Examples follow.

At the programming language level, examples include channels, coroutines, and futures and promises.

At the operating system level, examples include computer multitasking (both cooperative and preemptive), time-sharing, processes, and threads.

At the network level, networked systems are generally concurrent by their nature, as they consist of separate devices.

Languages supporting concurrent programming

Concurrent programming languages are programming languages that use language constructs for concurrency. These constructs may involve multi-threading, support for distributed computing, message passing, shared resources (including shared memory) or futures and promises. Such languages are sometimes described as Concurrency Oriented Languages or Concurrency Oriented Programming Languages (COPL).[7]

Today, the most commonly used programming languages that have specific constructs for concurrency are Java and C#. Both of these languages fundamentally use a shared-memory concurrency model, with locking provided by monitors (although message-passing models can and have been implemented on top of the underlying shared-memory model). Of the languages that use a message-passing concurrency model, Erlang is probably the most widely used in industry at present.[citation needed]
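In Java, for instance, every object can serve as a monitor: the synchronized keyword supplies the lock, and wait/notifyAll supply the monitor's condition variable, as in this minimal sketch (the one-slot buffer is an illustrative example, not from the sources cited here):

    // A one-slot buffer implemented as a monitor.
    public class OneSlotBuffer {
        private Integer slot = null;  // shared state guarded by the monitor

        public synchronized void put(int value) throws InterruptedException {
            while (slot != null) wait();  // wait until the slot is empty
            slot = value;
            notifyAll();                  // wake any waiting consumer
        }

        public synchronized int take() throws InterruptedException {
            while (slot == null) wait();  // wait until a value is available
            int value = slot;
            slot = null;
            notifyAll();                  // wake any waiting producer
            return value;
        }
    }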

Many concurrent programming languages have been developed more as research languages (e.g. Pict) rather than as languages for production use. However, languages such as Erlang, Limbo, and occam have seen industrial use at various times in the last 20 years. Languages in which concurrency plays an important role include:

  • Ada – general purpose programming language with native support for message passing and monitor based concurrency.
  • Alef – concurrent language with threads and message passing, used for systems programming in early versions of Plan 9 from Bell Labs
  • Alice – extension to Standard ML, adds support for concurrency via futures.
  • Ateji PX – an extension to Java with parallel primitives inspired by the π-calculus
  • Axum – domain specific concurrent programming language, based on the Actor model and on the .NET Common Language Runtime using a C-like syntax.
  • C++ – std::thread
  • Cω (C Omega) – a research language extending C#, uses asynchronous communication
  • C# – supports concurrent computing using the lock and yield keywords; the async and await keywords were introduced in version 5.0
  • Clojure – a modern Lisp targeting the JVM
  • Concurrent Clean – a functional programming language, similar to Haskell
  • Concurrent Collections (CnC) – achieves implicit parallelism independent of memory model by explicitly defining flow of data and control
  • Concurrent Haskell – lazy, pure functional language operating concurrent processes on shared memory
  • Concurrent ML – a concurrent extension of Standard ML
  • Concurrent Pascal – by Per Brinch Hansen
  • Curry
  • D – multi-paradigm system programming language with explicit support for concurrent programming (Actor model)
  • E – uses promises, ensures deadlocks cannot occur
  • ECMAScript – promises available in various libraries, proposed for inclusion in the ECMAScript 6 standard
  • Eiffel – through its SCOOP mechanism based on the concepts of Design by Contract
  • Elixir – dynamic and functional meta-programming aware language running on the Erlang VM.
  • Erlang – uses asynchronous message passing with nothing shared
  • Faust – real-time functional programming language for signal processing. The Faust compiler provides automatic parallelization using either OpenMP or a specific work-stealing scheduler.
  • Fortran – coarrays and "do concurrent" are part of the Fortran 2008 standard
  • Go – systems programming language with a concurrent programming model based on CSP
  • Hume – functional concurrent language for bounded space and time environments, in which automata processes are described by synchronous channel patterns and message passing.
  • Io – actor-based concurrency
  • Janus – features distinct "askers" and "tellers" of logical variables and bag channels; purely declarative
  • JoCaml – concurrent and distributed channel-based language (an extension of OCaml) that implements the join-calculus of processes.
  • Join Java – concurrent language based on the Java programming language
  • Joule – dataflow language, communicates by message passing
  • Joyce – a concurrent teaching language built on Concurrent Pascal with features from CSP by Per Brinch Hansen
  • LabVIEW – graphical, dataflow programming language, in which functions are nodes in a graph and data is wires between those nodes. Includes object oriented language extensions.
  • Limbo – relative of Alef, used for systems programming in Inferno (operating system)
  • MultiLisp – Scheme variant extended to support parallelism
  • Modula-2 – systems programming language by N. Wirth as a successor to Pascal, with native support for coroutines.
  • Modula-3 – modern language in Algol family with extensive support for threads, mutexes, condition variables.
  • Newsqueak – research language with channels as first-class values; predecessor of Alef
  • Node.js – a server-side runtime environment for JavaScript
  • occam – influenced heavily by Communicating Sequential Processes (CSP).
  • Orc – a heavily concurrent, nondeterministic language based on Kleene algebra.
  • Oz – multiparadigm language, supports shared-state and message-passing concurrency, and futures
  • ParaSail – a pointer-free, data-race-free, object-oriented parallel programming language
  • Pict – essentially an executable implementation of Milner's π-calculus
  • Perl with AnyEvent and Coro
  • Python with Twisted, greenlet and gevent.
  • Reia – uses asynchronous message passing between shared-nothing objects
  • Red/System – a systems programming language based on Rebol.
  • Rust – a systems programming language with a focus on massive concurrency, utilizing message-passing with move semantics, shared immutable memory, and shared mutable memory that is provably free of data races.[8]
  • SALSA – actor language with token-passing, join, and first-class continuations for distributed computing over the Internet
  • Scala – a general purpose programming language designed to express common programming patterns in a concise, elegant, and type-safe way
  • SequenceL – general purpose functional programming language whose primary design objectives are ease of programming, code clarity/readability, and automatic parallelization for performance on multicore hardware, and which is provably free of race conditions
  • SR – research language
  • Stackless Python
  • StratifiedJS – a combinator-based concurrency language based on JavaScript
  • SuperPascal – a concurrent teaching language built on Concurrent Pascal and Joyce by Per Brinch Hansen
  • Unicon – research language
  • Termite Scheme – adds Erlang-like concurrency to Scheme
  • TNSDL – a language used for developing telecommunication exchanges, uses asynchronous message passing
  • VHDL – VHSIC Hardware Description Language, aka IEEE STD-1076
  • XC – a concurrency-extended subset of the C programming language developed by XMOS based on Communicating Sequential Processes. The language also offers built-in constructs for programmable I/O.

Many other languages provide support for concurrency in the form of libraries (on a level roughly comparable with the above list).

Notes

  1. ^ This is discounting parallelism internal to a processor core, such as pipelining or vectorized instructions. A single-core, single-processor machine may be capable of some parallelism, such as with a coprocessor, but the processor itself is not.

References

  1. ^ Operating System Concepts 9th edition, Abraham Silberschatz. "Chapter 4: Threads"
  2. ^ a b "Concurrency is not Parallelism", Waza conference Jan 11, 2012, Rob Pike (slides) (video)
  3. ^ Schneider, Fred B. On Concurrent Programming. Springer. ISBN 9780387949420. 
  4. ^ a b Ben-Ari, Mordechai (2006). Principles of Concurrent and Distributed Programming (2nd ed.). Addison-Wesley. ISBN 978-0-321-31283-9. 
  5. ^ Patterson & Hennessy 2013, p. 503.
  6. ^ "PODC Influential Paper Award: 2002", ACM Symposium on Principles of Distributed Computing, retrieved 2009-08-24 
  7. ^ Armstrong, Joe (2003). "Making reliable distributed systems in the presence of software errors". 
  8. ^ Blum, Ben (2012). "Typesafe Shared Mutable State". Retrieved 2012-11-14. 
  • Patterson, David A.; Hennessy, John L. (2013). Computer Organization and Design: The Hardware/Software Interface. The Morgan Kaufmann Series in Computer Architecture and Design (5 ed.). Morgan Kaufmann. ISBN 978-0-12407886-4.
