PODC Archives

ACM PODC Participants List


From: "Michael L. Scott" <[log in to unmask]>
Date: Thu, 27 Jun 2013 13:57:58 -0400
Dear friends:

I am pleased to announce the availability of a new 200-page monograph on
shared-memory synchronization.  As an issue of the Morgan & Claypool
Synthesis Series on Computer Architecture, it might not naturally come
to the attention of members of the PODC/DISC community -- hence my note
to this list.

I've attempted, in the monograph, to span the full spectrum from
theoretical foundations to operating systems and hardware.  I was
fortunate enough to receive feedback and advice from almost a dozen
distinguished colleagues.

I've appended the abstract and table of contents below.  Chapters 4 and
5 subsume material from the 1991 Mellor-Crummey & Scott TOCS paper, and
may be of particular interest to instructors who still use that paper
in their courses.  Chapter 9 surveys the current state of the art in
transactional memory, including the hardware mechanisms of recent Intel
and IBM processors.

In what I hope will be a welcome break with common practice, algorithms
throughout the book specify the atomic loads, stores, and fences
required for correct behavior on non-sequentially-consistent processors.

Electronic or paper copies can be obtained from the publisher, or from
the many institutional libraries that subscribe to the Synthesis
series.  Comments or errata would of course be very welcome.

Michael L. Scott            (585) 275-7745, 5478
Computer Science Dept.        FAX 273-4556
University of Rochester     [log in to unmask]
Rochester, NY  14627-0226   http://www.cs.rochester.edu/u/scott


Since the advent of time sharing in the 1960s, designers of concurrent
and parallel systems have needed to synchronize the activities of
threads of control that share data structures in memory. In recent
years, the study of synchronization has gained new urgency with the
proliferation of multicore processors, on which even relatively simple
user-level programs must frequently run in parallel.

This lecture offers a comprehensive survey of shared-memory
synchronization, with an emphasis on "systems-level" issues. It includes
sufficient coverage of architectural details to understand correctness
and performance on modern multicore machines, and sufficient coverage of
higher-level issues to understand how synchronization is embedded in
modern programming languages.

The primary intended audience is "systems programmers" -- the authors
of operating systems, library packages, language run-time systems,
concurrent data structures, and server and utility programs. Much of the
discussion should also be of interest to application programmers who
want to make good use of the synchronization mechanisms available to
them, and to computer architects who want to understand the
ramifications of their design decisions on systems-level code.



1  Introduction  
    1.1  Atomicity  
    1.2  Condition Synchronization  
    1.3  Spinning vs. Blocking  
    1.4  Safety and Liveness  

2  Architectural Background  
    2.1  Cores and Caches: Basic Shared-Memory Architecture  
    2.2  Memory Consistency  
    2.3  Atomic Primitives  

3  Essential Theory
    3.1  Safety  
    3.2  Liveness  
    3.3  The Consensus Hierarchy  
    3.4  Memory Models  

4  Practical Spin Locks  
    4.1  Classical load-store-only Algorithms  
    4.2  Centralized Algorithms  
    4.3  Queued Spin Locks  
    4.4  Interface Extensions  
    4.5  Special-Case Optimizations  

5  Busy-wait Synchronization with Conditions  
    5.1  Flags  
    5.2  Barrier Algorithms  
    5.3  Barrier Extensions  
    5.4  Combining as a General Technique  

6  Read-mostly Atomicity  
    6.1  Reader-Writer Locks  
    6.2  Sequence Locks  
    6.3  Read-Copy Update  

7  Synchronization and Scheduling  
    7.1  Scheduling  
    7.2  Semaphores  
    7.3  Monitors  
    7.4  Other Language Mechanisms  
    7.5  Kernel/User Interactions  

8  Nonblocking Algorithms  
    8.1  Single-location Structures  
    8.2  The Michael and Scott (M&S) Queue  
    8.3  Harris and Michael (H&M) Lists  
    8.4  Hash Tables  
    8.5  Skip Lists  
    8.6  Double-ended Queues  
    8.7  Dual Data Structures  
    8.8  Nonblocking Elimination  
    8.9  Universal Constructions  

9  Transactional Memory  
    9.1  Software TM  
    9.2  Hardware TM  
    9.3  Challenges  

Author's Biography  

