Concurrent Programming

Created by Pradeep Gowda. Updated: Sep 22, 2024. Tagged: concurrent-programming.

Lock-free programming often results in slower programs at low CPU counts. Lock-free algorithms offer better scalability (with respect to the number of cores), but you pay for that scalability with single-thread performance, i.e., performance on machines with only a dozen CPUs or so (depending on the problem and the amount of contention).

Lock-free programming can be very, very difficult.

Lectures

Videos

Udemy

Concepts

A set of operations can be considered atomic when two conditions are met (see the sketch after this list):

  1. Until the entire set of operations completes, no other process can know about the changes being made (invisibility); and
  2. If any of the operations fail then the entire set of operations fails, and the state of the system is restored to the state it was in before any of the operations began.
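A minimal C++ sketch of both properties, using a compare-and-swap loop (the `Node` type and `push` function are illustrative, not from any text above). The new node is prepared privately, so no other thread can observe intermediate state; if the CAS fails, nothing was published, so the prior state is untouched.

```cpp
#include <atomic>

// Illustrative intrusive stack: other threads either see the old head or the
// fully linked new node -- never a half-made update (invisibility). A failed
// CAS publishes nothing, so the pre-operation state is trivially preserved.
struct Node {
    int value;
    Node* next;
};

std::atomic<Node*> head{nullptr};

void push(int value) {
    Node* node = new Node{value, nullptr};
    Node* old_head = head.load();
    do {
        node->next = old_head;  // prepare the change privately
    } while (!head.compare_exchange_weak(old_head, node));  // publish atomically
}
```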

C++ and Beyond 2012: Herb Sutter, “atomic<> Weapons” 1 of 2 (YouTube), on the C++ memory model and modern hardware.

Memory Consistency Models

A memory consistency model specifies how memory operations behave in a parallel computer system. It is a contract between programmer and system: the system guarantees that if the programmer follows the rules, memory will be consistent and the results of memory operations will be predictable.

Causal consistency is weaker than sequential consistency, which requires that all nodes see all writes in the same order.

Sequential consistency was defined by Leslie Lamport (1979) for concurrent programming, as follows: the result of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program.

The sequence of instructions defined by each processor’s program is interleaved with the corresponding sequences defined by the other processors’ programs to produce a global linear order of all instructions.
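The classic “store buffering” litmus test makes this concrete. This sketch assumes C++ `std::atomic` with its default `memory_order_seq_cst` ordering: under sequential consistency all four memory operations fall into one global order, so at least one load must observe the other thread’s store.

```cpp
#include <atomic>
#include <cassert>
#include <thread>

// Store-buffering litmus test. With the default seq_cst ordering the outcome
// r1 == 0 && r2 == 0 is impossible; with relaxed atomics (or plain variables
// on real hardware) both loads can return 0.
std::atomic<int> x{0}, y{0};
int r1, r2;

int main() {
    std::thread t1([] { x.store(1); r1 = y.load(); });
    std::thread t2([] { y.store(1); r2 = x.load(); });
    t1.join();
    t2.join();
    assert(r1 == 1 || r2 == 1);  // guaranteed only under sequential consistency
}
```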

Mutual Exclusion principle

Mutual exclusion (often abbreviated to mutex) algorithms are used in concurrent programming to avoid the simultaneous use of a common resource, such as a global variable, by pieces of code called critical sections.

A critical section is a piece of code where a process or thread accesses a common resource.

The synchronization of access to those resources is an acute problem because a thread can be stopped or started at any time.
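A minimal sketch of a critical section guarded by a mutex, using C++ `std::mutex` (the counter and thread counts here are illustrative):

```cpp
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

// The increment of `counter` is the critical section; std::lock_guard makes
// the lock/unlock pair exception-safe and impossible to forget.
std::mutex m;
long counter = 0;

void worker() {
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard<std::mutex> lock(m);  // enter critical section
        ++counter;                            // shared-resource access
    }                                         // leave critical section
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i) threads.emplace_back(worker);
    for (auto& t : threads) t.join();
    std::cout << counter << '\n';  // always 400000; without the mutex, a data race
}
```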

Mutexes are enough to handle more than 90% of all cases. You should resort to lock-free algorithms only if you know that you need them to improve scalability (or because you need lock-freedom for some other reason, e.g., to keep worst-case time bounded in real-time programs). It also depends on your system and application: if you’re targeting a quad-socket NUMA system, looking at lock-free algorithms makes much more sense than if you’re targeting a laptop. via

Dekker’s algorithm guarantees mutual exclusion, freedom from deadlock, and freedom from starvation.

Peterson’s algorithm is another mutual exclusion mechanism that allows two processes to share a single-use resource without conflict, using only shared memory for communication.

While Peterson’s original formulation worked with only two processes, the algorithm can be generalized for more than two, which makes it more powerful than Dekker’s algorithm.
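A sketch of Peterson’s algorithm for the two-thread case in C++. One caveat worth making explicit: on modern hardware the `flag` and `turn` variables must be sequentially consistent atomics; with plain variables, compiler and CPU reordering can break mutual exclusion.

```cpp
#include <atomic>
#include <thread>

// Peterson's algorithm for two threads (ids 0 and 1).
std::atomic<bool> flag[2] = {false, false};
std::atomic<int> turn{0};
int shared_data = 0;  // the single-use resource

void critical_work(int id) {
    int other = 1 - id;
    flag[id].store(true);   // I want to enter
    turn.store(other);      // but you go first
    while (flag[other].load() && turn.load() == other) {
        // busy-wait until the other thread is done or defers to us
    }
    ++shared_data;          // critical section
    flag[id].store(false);  // exit protocol
}

int main() {
    std::thread a(critical_work, 0), b(critical_work, 1);
    a.join();
    b.join();
}
```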

atomic read-modify-write instruction

test-and-set

The test-and-set instruction is an instruction used to write to a memory location and return its old value as a single atomic (i.e. non-interruptible) operation.

If multiple processes may access the same memory, and if a process is currently performing a test-and-set, no other process may begin another test-and-set until the first process is done.
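A spinlock is the canonical use of test-and-set. Here is a minimal sketch built on C++ `std::atomic_flag`, whose `test_and_set()` writes `true` and returns the previous value as one indivisible step:

```cpp
#include <atomic>

// A spinlock built from the test-and-set primitive. std::atomic_flag is the
// standard's guaranteed-lock-free test-and-set type.
class SpinLock {
    std::atomic_flag locked = ATOMIC_FLAG_INIT;
public:
    void lock() {
        while (locked.test_and_set(std::memory_order_acquire)) {
            // old value was true: someone else holds the lock; keep spinning
        }
    }
    void unlock() {
        locked.clear(std::memory_order_release);  // store false
    }
};
```

The thread that observes `false` as the old value is, by the atomicity of test-and-set, the unique owner of the lock.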

Compare and Swap (CAS)

CAS is an atomic instruction used in multithreading to achieve synchronization. It compares the contents of a memory location with a given value and, only if they are the same, modifies the contents of that memory location to a new given value, all as a single atomic operation. The atomicity guarantees that the new value is calculated from up-to-date information; if the value had been updated by another thread in the meantime, the write would fail. The result of the operation must indicate whether it performed the substitution; this can be done either with a simple boolean response (this variant is often called compare-and-set) or by returning the value read from the memory location (not the value written to it).

See also: Compare and Swap - YouTube, by Rob Edwards at SDSU.
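A small illustrative use of CAS (the `update_max` function is a made-up example, not from the sources above): maintaining a running maximum without locks. `compare_exchange_weak` succeeds only if the location still holds the value we read; on failure it reloads the current value and we retry.

```cpp
#include <atomic>

std::atomic<int> maximum{0};

// Lock-free "raise to at least `observed`". The CAS only succeeds if
// `current` is still the up-to-date value; if another thread raced ahead,
// compare_exchange_weak refreshes `current` and the loop re-checks.
void update_max(int observed) {
    int current = maximum.load();
    while (observed > current &&
           !maximum.compare_exchange_weak(current, observed)) {
        // `current` now holds the freshly read value; retry
    }
}
```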


Wait-free algorithms are a subset of lock-free algorithms. Lock-free algorithms by themselves do not give real-time guarantees; wait-free algorithms additionally guarantee that every individual operation completes in a bounded number of steps.
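To make the distinction concrete, a hedged C++ sketch: on hardware with a native atomic add, `fetch_add` is wait-free, while the equivalent CAS loop is only lock-free.

```cpp
#include <atomic>

std::atomic<long> counter{0};

// Wait-free: completes in a bounded number of steps for every thread,
// regardless of what other threads do (given a native atomic-add instruction).
void increment_wait_free() {
    counter.fetch_add(1);
}

// Lock-free but not wait-free: the system as a whole makes progress (some
// CAS always succeeds), yet an unlucky thread can lose the race and retry
// indefinitely, so there is no per-thread worst-case bound.
void increment_lock_free() {
    long current = counter.load();
    while (!counter.compare_exchange_weak(current, current + 1)) {
        // another thread won; retry with the reloaded value
    }
}
```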

Concurrent data structures

Concurrent data structures are designed to handle simultaneous access and modification by multiple threads or processes in a concurrent computing environment. In such environments, where multiple tasks or threads execute in parallel, traditional data structures can suffer from race conditions, deadlocks, and data corruption if not appropriately managed. Concurrent data structures address these challenges, allowing multiple threads to work with the data structure concurrently while preserving correctness.

Mark Moir and Nir Shavit, “Concurrent Data Structures” (32-page PDF), is good textbook material.

See the ACM review article: Shavit, Nir. “Data Structures in the Multicore Age.” Communications of the ACM 54, no. 3 (2011). https://doi.org/10.1145/1897852.1897873

Michael Scott Queues

Michael, Maged M., and Michael L. Scott. “Simple, Fast, and Practical Non-Blocking and Blocking Concurrent Queue Algorithms,” 1996. Paper (PDF)

Video lecture: Michael Scott, “Nonblocking data structures,” Part 1 (YouTube); Part 2.
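For reference, a compressed C++ sketch of the Michael & Scott non-blocking queue. To stay short it deliberately omits safe memory reclamation (hazard pointers or epochs) and therefore leaks dequeued nodes; treat it as teaching code for the two-CAS enqueue and the helping pattern, not as a production implementation.

```cpp
#include <atomic>

template <typename T>
class MSQueue {
    struct Node {
        T value{};
        std::atomic<Node*> next{nullptr};
    };
    std::atomic<Node*> head, tail;  // head always points at a dummy node

public:
    MSQueue() {
        Node* dummy = new Node;
        head.store(dummy);
        tail.store(dummy);
    }

    void enqueue(T v) {
        Node* node = new Node;
        node->value = std::move(v);
        while (true) {
            Node* last = tail.load();
            Node* next = last->next.load();
            if (next != nullptr) {
                // Tail is lagging behind the real last node: help it along.
                tail.compare_exchange_weak(last, next);
            } else if (last->next.compare_exchange_weak(next, node)) {
                // Linked the node; swinging the tail may fail, which is fine --
                // some other thread will help via the branch above.
                tail.compare_exchange_weak(last, node);
                return;
            }
        }
    }

    bool dequeue(T& out) {
        while (true) {
            Node* first = head.load();
            Node* last = tail.load();
            Node* next = first->next.load();
            if (first == last) {
                if (next == nullptr) return false;       // queue is empty
                tail.compare_exchange_weak(last, next);  // help lagging tail
            } else if (next != nullptr) {
                out = next->value;  // read before CAS; nodes are never freed here
                if (head.compare_exchange_weak(first, next))
                    return true;    // old dummy node is leaked in this sketch
            }
        }
    }
};
```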