Concurrent computing

From Wikipedia, the free encyclopedia

Concurrent computing is a form of computing in which several computations are executed during overlapping time periods—concurrently—instead of sequentially (one completing before the next starts). This is a property of a system—whether an individual program, a computer, or a network—in which there is a separate execution point or "thread of control" for each computation ("process"). A concurrent system is one where a computation can advance without waiting for all other computations to complete.[1]

As a programming paradigm, concurrent computing is a form of modular programming, namely factoring an overall computation into subcomputations that may be executed concurrently. Pioneers in the field of concurrent computing include Edsger Dijkstra, Per Brinch Hansen, and C.A.R. Hoare.

Introduction

The concept of concurrent computing is frequently confused with the related but distinct concept of parallel computing,[2][3] although both can be described as "multiple processes executing during the same period of time". In parallel computing, execution occurs at the same physical instant: for example, on separate processors of a multi-processor machine, with the goal of speeding up computations—parallel computing is impossible on a (one-core) single processor, as only one computation can occur at any instant (during any single clock cycle).[a] By contrast, concurrent computing consists of process lifetimes overlapping, but execution need not happen at the same instant. The goal here is to model processes in the outside world that happen concurrently, such as multiple clients accessing a server at the same time. Structuring software systems as composed of multiple concurrent, communicating parts can be useful for tackling complexity, regardless of whether the parts can be executed in parallel.[4]:1

For example, concurrent processes can be executed on one core by interleaving the execution steps of each process via time-sharing slices: only one process runs at a time, and if it does not complete during its time slice, it is paused, another process begins or resumes, and then later the original process is resumed. In this way, multiple processes are part-way through execution at a single instant, but only one process is being executed at that instant.[citation needed]
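To make this interleaving concrete, here is a minimal sketch (hypothetical code, not from the article; C++ is assumed): a toy round-robin scheduler that advances two tasks one step at a time on a single thread, so both tasks are part-way through execution while only one step runs at any instant.

#include <functional>
#include <iostream>
#include <vector>

int main()
{
    using Step = std::function<void()>;

    // Each "process" is just a sequence of steps.
    std::vector<Step> t1 = { [] { std::cout << "T1: step 1\n"; },
                             [] { std::cout << "T1: step 2\n"; } };
    std::vector<Step> t2 = { [] { std::cout << "T2: step 1\n"; },
                             [] { std::cout << "T2: step 2\n"; } };

    // Round-robin "time slices": one step of T1, then one step of T2.
    std::size_t i = 0, j = 0;
    while (i < t1.size() || j < t2.size())
    {
        if (i < t1.size()) t1[i++]();
        if (j < t2.size()) t2[j++]();
    }
}

Real operating systems preempt processes at arbitrary points rather than at cooperative step boundaries, but the observable effect is the same: the steps of the two tasks are interleaved on one core.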

Concurrent computations may be executed in parallel,[2][5] for example, by assigning each process to a separate processor or processor core, or distributing a computation across a network. In general, however, the languages, tools, and techniques for parallel programming might not be suitable for concurrent programming, and vice versa.[citation needed]

The exact timing of when tasks in a concurrent system are executed depends on the scheduling, and tasks need not always be executed concurrently. For example, given two tasks, T1 and T2:[citation needed]

  • T1 may be executed and finished before T2 or vice versa (serial and sequential)
  • T1 and T2 may be executed alternately (serial and concurrent)
  • T1 and T2 may be executed simultaneously at the same instant of time (parallel and concurrent)

The word "sequential" is used as an antonym for both "concurrent" and "parallel"; when these are explicitly distinguished, concurrent/sequential and parallel/serial are used as opposing pairs.[6] A schedule in which tasks execute one at a time (serially, no parallelism), without interleaving (sequentially, no concurrency: no task begins until the prior task ends) is called a serial schedule. A set of tasks that can be scheduled serially is serializable, which simplifies concurrency control.[citation needed]

Coordinating access to shared resources

The main challenge in designing concurrent programs is concurrency control: ensuring the correct sequencing of the interactions or communications between different computational executions, and coordinating access to resources that are shared among executions.[5] Potential problems include race conditions, deadlocks, and resource starvation. For example, consider the following algorithm to make withdrawals from a checking account represented by the shared resource balance:[citation needed]

1 bool withdraw(int withdrawal)
2 {
3     if (balance >= withdrawal)    // read the shared balance
4     {
5         balance -= withdrawal;    // update the shared balance
6         return true;
7     }
8     return false;
9 }

Suppose balance = 500, and two concurrent threads make the calls withdraw(300) and withdraw(350). If line 3 in both operations executes before line 5, both operations will find that balance >= withdrawal evaluates to true, and execution will proceed to subtracting the withdrawal amount. However, since both processes perform their withdrawals, the total amount withdrawn will end up being more than the original balance. These sorts of problems with shared resources benefit from the use of concurrency control, or non-blocking algorithms.[citation needed]
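One standard remedy is to make the check and the update a single atomic step by protecting them with a lock. A minimal sketch, assuming C++ with std::mutex and std::thread (the names here are illustrative, not from the article):

#include <mutex>
#include <thread>

int balance = 500;
std::mutex balance_mutex;   // guards every access to balance

bool withdraw(int withdrawal)
{
    // Holding the lock across the check and the update means two
    // concurrent calls can no longer both observe the old balance.
    std::lock_guard<std::mutex> lock(balance_mutex);
    if (balance >= withdrawal)
    {
        balance -= withdrawal;
        return true;
    }
    return false;
}

int main()
{
    std::thread a([] { withdraw(300); });
    std::thread b([] { withdraw(350); });
    a.join();
    b.join();
    // Exactly one withdrawal succeeds: balance ends up at 200 or 150,
    // depending on which thread acquired the lock first.
}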

Because concurrent systems rely on the use of shared resources (including communication media), concurrent computing in general requires some form of arbiter somewhere in the implementation to mediate access to these resources.[citation needed]

Unfortunately, while many solutions exist to the problem of a conflict over one resource, many of those "solutions" have their own concurrency problems such as deadlock when more than one resource is involved.[citation needed]

Advantages

Concurrent computing has the following advantages:

  • Increased program throughput—parallel execution of a concurrent program allows the number of tasks completed in a given time to increase.[citation needed]
  • High responsiveness for input/output—input/output-intensive programs mostly wait for input or output operations to complete. Concurrent programming allows the time that would be spent waiting to be used for another task.[citation needed]
  • More appropriate program structure—some problems and problem domains are well-suited to representation as concurrent tasks or processes.[citation needed]

Models

There are several models of concurrent computing, which can be used to understand and analyze concurrent systems. These models include the actor model, process calculi such as CSP and the π-calculus, and Petri nets.

Implementation

A number of different methods can be used to implement concurrent programs, such as implementing each computational execution as an operating system process, or implementing the computational processes as a set of threads within a single operating system process.
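As a sketch of both approaches (hypothetical code; the thread part assumes C++11, and the process part assumes a POSIX system for fork):

#include <iostream>
#include <thread>
#include <sys/wait.h>
#include <unistd.h>

void computation(const char* name)
{
    std::cout << name << " running\n";
}

int main()
{
    // Approach 1: each execution as a separate operating system process.
    pid_t pid = fork();
    if (pid == 0)               // child process: its own address space
    {
        computation("child process");
        return 0;
    }
    waitpid(pid, nullptr, 0);   // parent waits for the child to finish

    // Approach 2: executions as threads within one OS process,
    // sharing that process's address space.
    std::thread t1(computation, "thread 1");
    std::thread t2(computation, "thread 2");
    t1.join();
    t2.join();
}

Processes give stronger isolation, since each has its own address space, while threads are cheaper to create and can share data directly, which is precisely what makes the coordination problems described above possible.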

Interaction and communication

In some concurrent computing systems, communication between the concurrent components is hidden from the programmer (e.g., by using futures), while in others it must be handled explicitly. Explicit communication can be divided into two classes:

Shared memory communication
Concurrent components communicate by altering the contents of shared memory locations (exemplified by Java and C#). This style of concurrent programming usually requires some form of locking (e.g., mutexes, semaphores, or monitors) to coordinate between threads. A program that properly implements any of these is said to be thread-safe.
Message passing communication
Concurrent components communicate by exchanging messages (exemplified by Scala, Erlang and occam). The exchange of messages may be carried out asynchronously, or may use a synchronous "rendezvous" style in which the sender blocks until the message is received. Asynchronous message passing may be reliable or unreliable (sometimes referred to as "send and pray"). Message-passing concurrency tends to be far easier to reason about than shared-memory concurrency, and is typically considered a more robust form of concurrent programming.[citation needed] A wide variety of mathematical theories to understand and analyze message-passing systems are available, including the actor model, and various process calculi. Message passing can be efficiently implemented via symmetric multiprocessing, with or without shared memory cache coherence.
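As a minimal sketch of the asynchronous message-passing style, here is a hypothetical unbounded channel built from a queue, a mutex, and a condition variable (assuming C++; production code would use a language or library construct such as Go channels or Erlang mailboxes). Internally it uses shared memory and a lock, echoing the point above that message passing can be implemented on shared-memory hardware, but the communicating threads themselves share nothing except messages:

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

// A tiny channel: sender and receiver share no state except messages.
template <typename T>
class Channel
{
    std::queue<T> queue_;
    std::mutex mutex_;
    std::condition_variable ready_;

public:
    void send(T msg)            // asynchronous: never blocks
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(msg));
        }
        ready_.notify_one();
    }

    T receive()                 // blocks until a message arrives
    {
        std::unique_lock<std::mutex> lock(mutex_);
        ready_.wait(lock, [this] { return !queue_.empty(); });
        T msg = std::move(queue_.front());
        queue_.pop();
        return msg;
    }
};

int main()
{
    Channel<int> ch;
    std::thread producer([&] { for (int i = 1; i <= 3; ++i) ch.send(i); });
    std::thread consumer([&] {
        for (int i = 0; i < 3; ++i)
            std::cout << "received " << ch.receive() << "\n";
    });
    producer.join();
    consumer.join();
}

A synchronous "rendezvous" variant would instead make send block until the matching receive occurs.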

Shared memory and message passing concurrency have different performance characteristics. Typically (although not always), the per-process memory overhead and task switching overhead is lower in a message passing system, but the overhead of message passing is greater than for a procedure call. These differences are often overwhelmed by other performance factors.

History

Concurrent computing developed out of earlier work on railroads and telegraphy, from the 19th and early 20th century, and some terms date to this period, such as semaphores. These arose to address the question of how to handle multiple trains on the same railroad system (avoiding collisions and maximizing efficiency) and how to handle multiple transmissions over a given set of wires (improving efficiency), such as via time-division multiplexing (1870s).

The academic study of concurrent algorithms started in the 1960s, with Dijkstra (1965) credited as the first paper in this field, identifying and solving the mutual exclusion problem.[7]

Prevalence

Concurrency is pervasive in computing, occurring from low-level hardware on a single chip to worldwide networks. Examples follow.

At the programming language level: channels, coroutines, and futures and promises.

At the operating system level: computer multitasking (both cooperative and preemptive), time-sharing, processes, and threads.

At the network level, networked systems are generally concurrent by their nature, as they consist of separate devices.

Languages supporting concurrent programming

Concurrent programming languages are programming languages that use language constructs for concurrency. These constructs may involve multi-threading, support for distributed computing, message passing, shared resources (including shared memory) or futures and promises. Such languages are sometimes described as Concurrency Oriented Languages or Concurrency Oriented Programming Languages (COPL).[8]

Today, the most commonly used programming languages that have specific constructs for concurrency are Java and C#. Both of these languages fundamentally use a shared-memory concurrency model, with locking provided by monitors (although message-passing models can and have been implemented on top of the underlying shared-memory model). Of the languages that use a message-passing concurrency model, Erlang is probably the most widely used in industry at present.[citation needed]

Many concurrent programming languages have been developed more as research languages (e.g. Pict) rather than as languages for production use. However, languages such as Erlang, Limbo, and occam have seen industrial use at various times in the last 20 years. Languages in which concurrency plays an important role include:

  • Ada—general purpose, with native support for message passing and monitor based concurrency
  • Alef—concurrent, with threads and message passing, for system programming in early versions of Plan 9 from Bell Labs
  • Alice—extension to Standard ML, adds support for concurrency via futures
  • Ateji PX—extension to Java with parallel primitives inspired by the π-calculus
  • Axum—domain specific, concurrent, based on actor model and .NET Common Language Runtime using a C-like syntax
  • C++—std::thread
  • Cω (C omega)—for research, extends C#, uses asynchronous communication
  • C#—supports concurrent computing with the lock and yield keywords, and, since version 5.0, the async and await keywords
  • Clojure—modern Lisp for the JVM
  • Concurrent Clean—functional programming, similar to Haskell
  • Concurrent Collections (CnC)—achieves implicit parallelism independent of memory model by explicitly defining flow of data and control
  • Concurrent Haskell—lazy, pure functional language operating concurrent processes on shared memory
  • Concurrent ML—concurrent extension of Standard ML
  • Concurrent Pascal—by Per Brinch Hansen
  • Curry
  • D—multi-paradigm system programming language with explicit support for concurrent programming (actor model)
  • E—uses promises to preclude deadlocks
  • ECMAScript—promises available in various libraries and standardized in ECMAScript 2015
  • Eiffel—through its SCOOP mechanism based on the concepts of Design by Contract
  • Elixir—dynamic and functional meta-programming aware language running on the Erlang VM.
  • Erlang—uses asynchronous message passing with nothing shared
  • FAUST—real-time functional, for signal processing, compiler provides automatic parallelization via OpenMP or a specific work-stealing scheduler
  • Fortran—coarrays and do concurrent are part of the Fortran 2008 standard
  • Go—for system programming, with a concurrent programming model based on CSP
  • Hume—functional, concurrent, for bounded space and time environments where automata processes are described by synchronous channel patterns and message passing
  • Io—actor-based concurrency
  • Janus—features distinct askers and tellers to logical variables, bag channels; is purely declarative
  • Java—Thread class or Runnable interface.
  • Julia—"concurrent programming primitives: Tasks, async-wait, Channels."[9]
  • JavaScript—via web workers in a browser environment, promises, and callbacks.
  • JoCaml—concurrent and distributed channel based, extension of OCaml, implements the Join-calculus of processes
  • Join Java—concurrent, based on Java language
  • Joule—dataflow-based, communicates by message passing
  • Joyce—concurrent, teaching, built on Concurrent Pascal with features from CSP by Per Brinch Hansen
  • LabVIEW—graphical, dataflow, functions are nodes in a graph, data is wires between the nodes; includes object-oriented language
  • Limbo—relative of Alef, for system programming in Inferno (operating system)
  • MultiLisp—Scheme variant extended to support parallelism
  • Modula-2—for system programming, by N. Wirth as a successor to Pascal with native support for coroutines
  • Modula-3—modern member of Algol family with extensive support for threads, mutexes, condition variables
  • Newsqueak—for research, with channels as first-class values; predecessor of Alef
  • Node.js—a server-side runtime environment for JavaScript
  • occam—influenced heavily by communicating sequential processes (CSP)
  • Orc—heavily concurrent, nondeterministic, based on Kleene algebra
  • Oz-Mozart—multiparadigm, supports shared-state and message-passing concurrency, and futures
  • ParaSail—object-oriented, parallel, free of pointers and race conditions
  • Pict—essentially an executable implementation of Milner's π-calculus
  • Perl with AnyEvent and Coro
  • Python with Twisted, greenlet and gevent, or using Stackless Python
  • Reia—uses asynchronous message passing between shared-nothing objects
  • Red/System—for system programming, based on Rebol
  • Ruby with Concurrent Ruby and Celluloid
  • Rust—for system programming, using message-passing with move semantics, shared immutable memory, and shared mutable memory.[10]
  • SALSA—actor-based with token-passing, join, and first-class continuations for distributed computing over the Internet
  • Scala—general purpose, designed to express common programming patterns in a concise, elegant, and type-safe way
  • SequenceL—general purpose functional, main design objectives are ease of programming, code clarity-readability, and automatic parallelization for performance on multicore hardware, and provably free of race conditions
  • SR—for research
  • StratifiedJS—combinator-based concurrency, based on JavaScript
  • SuperPascal—concurrent, for teaching, built on Concurrent Pascal and Joyce by Per Brinch Hansen
  • Unicon—for research
  • Termite Scheme—adds Erlang-like concurrency to Scheme
  • TNSDL—for developing telecommunication exchanges, uses asynchronous message passing
  • VHSIC Hardware Description Language (VHDL)—IEEE STD-1076
  • XC—concurrency-extended subset of C language developed by XMOS, based on communicating sequential processes, built-in constructs for programmable I/O

Many other languages provide support for concurrency in the form of libraries, at levels roughly comparable with the above list.

Notes

  1. ^ This is discounting parallelism internal to a processor core, such as pipelining or vectorized instructions. A one-core, one-processor machine may be capable of some parallelism, such as with a coprocessor, but the processor alone is not.

References

  1. ^ Operating System Concepts 9th edition, Abraham Silberschatz. "Chapter 4: Threads"
  2. ^ a b Pike, Rob (2012-01-11). "Concurrency is not Parallelism". Waza conference, 11 January 2012. Retrieved from http://talks.golang.org/2012/waza.slide (slides) and http://vimeo.com/49718712 (video).
  3. ^ "Parallelism vs. Concurrency". Haskell Wiki. 
  4. ^ Schneider, Fred B. On Concurrent Programming. Springer. ISBN 9780387949420. 
  5. ^ a b Ben-Ari, Mordechai (2006). Principles of Concurrent and Distributed Programming (2nd ed.). Addison-Wesley. ISBN 978-0-321-31283-9. 
  6. ^ Patterson & Hennessy 2013, p. 503.
  7. ^ "PODC Influential Paper Award: 2002", ACM Symposium on Principles of Distributed Computing, retrieved 2009-08-24 
  8. ^ Armstrong, Joe (2003). "Making reliable distributed systems in the presence of software errors". 
  9. ^ "Concurrent and Parallel programming in Julia". JuliaCon 2015. https://juliacon.talkfunnel.com/2015/21-concurrent-and-parallel-programming-in-julia
  10. ^ Blum, Ben (2012). "Typesafe Shared Mutable State". Retrieved 2012-11-14. 

Sources

  • Patterson, David A.; Hennessy, John L. (2013). Computer Organization and Design: The Hardware/Software Interface. The Morgan Kaufmann Series in Computer Architecture and Design (5 ed.). Morgan Kaufmann. ISBN 978-0-12407886-4. 
