Introduction to Concurrency in Programming Languages (Chapman & Hall/CRC Computational Science)
In a concurrent system, more than one program can appear to make progress over some coarse-grained unit of time. For example, before multicore processors dominated the world, it was very common to run multitasking operating systems in which multiple programs could execute at the same time on a single processor core.
Many decades ago, with the advent of programming languages and compilers, programmers accepted that higher-level abstractions were preferable for utilizing hardware in a portable, efficient, and productive manner.
The majority of computer users today take advantage of distributed, network-based services, such as the world wide web or e-mail.
In sequential programming, the core concept underlying all constructs was logical correctness.
The most common terms for execution units are thread and process, and they are usually introduced in the context of the design and construction of operating systems (for example, see Silberschatz).
Processes require allocation of resources both to support the process itself and within the operating system for tracking resources and making scheduling decisions that support multiprocessing. As such, creating a process is a relatively heavyweight operation: for sufficiently simple tasks, the cost of creating and destroying a process may far exceed the actual work it was meant to perform. For this reason, lighter-weight entities known as threads were invented.
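As a minimal sketch in Python (the `square` worker and its inputs are illustrative, not from the text), the snippet below hands work to threads rather than processes. Because threads share the parent's address space, no inter-process machinery is needed; a shared queue suffices for collecting results.

```python
import threading
import queue

# Threads share the parent's memory, so a thread-safe queue in that
# shared address space is all the coordination this example needs.
results = queue.Queue()

def square(n):
    results.put(n * n)  # writes directly into shared memory

# Spawning a thread is cheap relative to spawning a whole process.
threads = [threading.Thread(target=square, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results.queue))  # [0, 1, 4, 9]
```

The same work handed to `multiprocessing.Process` objects would require pickling arguments and an inter-process queue, which is part of why process creation is the heavier operation.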
It is easy to mistake the terms for equivalents, as "parallel computing" is often used to mean the same thing as "concurrent computing," and vice versa. This is incorrect, although for subtle reasons. The simplest way to distinguish them is that concurrency focuses on an abstract view of how multiple streams of instructions execute over time, while parallelism focuses on how they actually execute relative to each other in time.
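One way to make the distinction concrete is a sketch of concurrency without parallelism: a single thread interleaving two instruction streams under a trivial round-robin scheduler. The `task` and `round_robin` names here are illustrative assumptions, not from the text.

```python
# Concurrency without parallelism: two instruction streams interleaved
# by a round-robin scheduler on a single thread. Each task "appears to
# make progress" even though only one instruction stream runs at a time.
def task(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"       # yield control back to the scheduler

def round_robin(tasks):
    trace = []
    while tasks:
        t = tasks.pop(0)
        try:
            trace.append(next(t))  # run one step of this task
            tasks.append(t)        # re-queue it; it is not finished
        except StopIteration:
            pass                   # task complete; drop it
    return trace

print(round_robin([task("A", 2), task("B", 2)]))
# ['A:0', 'B:0', 'A:1', 'B:1']
```

Parallelism, by contrast, would require the two streams to execute at the same instant on distinct hardware, something this single-threaded scheduler can never provide.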
A dependency is a condition, either a state of data or a control state, that must be satisfied before a part of a program can execute.
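As an illustrative sketch in Python (the `producer`, `consumer`, and `data_ready` names are assumptions, not from the text), a `threading.Event` can express such a dependency: the consumer's dependent step cannot run until the producer has put the data in the required state and signaled that fact.

```python
import threading

data_ready = threading.Event()
shared = {}

def producer():
    shared["value"] = 42   # establish the required data state...
    data_ready.set()       # ...then signal that the dependency is met

def consumer(out):
    data_ready.wait()      # block until the dependency is satisfied
    out.append(shared["value"])

out = []
c = threading.Thread(target=consumer, args=(out,))
p = threading.Thread(target=producer)
c.start()
p.start()
c.join()
p.join()
print(out)  # [42]
```

Even though the consumer may start first, the `wait()` call enforces the ordering the dependency demands.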
Deadlock is a common source of concurrent program failure but, unlike a race condition, is often easier to identify. This is because a program that deadlocks does not produce incorrect results; it simply gets stuck and ceases to make progress in its execution.
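The classic circular-wait pattern behind deadlock can be sketched in Python; the lock names and timeout below are illustrative. A barrier forces each thread to hold its first lock before requesting the other thread's lock, so both requests stall; a timeout is used only so the demonstration can observe the stall rather than hang forever.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
barrier = threading.Barrier(2)
deadlocked = []

def worker(first, second, name):
    with first:
        barrier.wait()       # both threads now hold their first lock
        # In a real deadlock this acquire blocks forever; the timeout
        # lets the demo detect the circular wait instead of hanging.
        if not second.acquire(timeout=0.5):
            deadlocked.append(name)
        else:
            second.release()
        barrier.wait()       # hold the first lock until both have timed out

# Each thread acquires the two locks in the opposite order: a circular wait.
t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "t1"))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "t2"))
t1.start()
t2.start()
t1.join()
t2.join()
print(sorted(deadlocked))  # ['t1', 't2']: neither second acquire can succeed
```

The standard cure is to break the cycle, for example by requiring every thread to acquire the locks in the same global order.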
Livelock is, as the name implies, related to deadlock in that the net effect is that the program can no longer make progress in its control flow. Instead of pausing forever on locks that will never be released, a program experiencing livelock does have a changing program counter, but a pathological interaction between threads of execution produces a control flow loop that repeats forever.
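A bounded sketch of livelock in Python, with illustrative names: two "diners" each pick up one fork, find the other fork busy, politely put their own back down, and retry in lockstep, so both threads stay busy forever without either making progress. A real livelock would loop indefinitely; the demo caps the number of rounds.

```python
import threading

forks = [threading.Lock(), threading.Lock()]
round_barrier = threading.Barrier(2)
progress = [False, False]
MAX_ROUNDS = 5  # bound the demo; a real livelock never terminates

def diner(i):
    mine, other = forks[i], forks[1 - i]
    for _ in range(MAX_ROUNDS):
        mine.acquire()
        round_barrier.wait()       # both diners now hold one fork each
        got = other.acquire(blocking=False)
        if got:
            progress[i] = True     # never reached: the other fork is held
            other.release()
        mine.release()             # "politely" back off and retry...
        round_barrier.wait()       # ...at the same moment as the peer
        if got:
            return

threads = [threading.Thread(target=diner, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(progress)  # [False, False]: endless mutual backing off
```

Note that, unlike deadlock, both program counters keep changing throughout; randomized backoff delays are one common way to break this kind of symmetric retry loop.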