Readers–writer lock
from Wikipedia

In computer science, a readers–writer lock (RW lock), also known as a shared-exclusive lock, a multiple-readers/single-writer lock,[1] a multi-reader lock,[2] a push lock,[3] or an MRSW lock, is a synchronization primitive that solves one of the readers–writers problems. An RW lock allows concurrent access for read-only operations, whereas write operations require exclusive access. This means that multiple threads can read the data in parallel, but an exclusive lock is needed for writing or modifying data. When a writer is writing the data, all other writers and readers will be blocked until the writer is finished writing. A common use is to control access to a data structure in memory that cannot be updated atomically and is invalid (and should not be read by another thread) until the update is complete.

Readers–writer locks are usually constructed on top of mutexes and condition variables, or on top of semaphores.

Upgradable RW lock


Some RW locks allow the lock to be atomically upgraded from being locked in read-mode to write-mode, as well as being downgraded from write-mode to read-mode.[4] Upgrading a lock from read-mode to write-mode is prone to deadlocks, since whenever two threads holding reader locks both attempt to upgrade to writer locks, a deadlock is created that can only be broken by one of the threads releasing its reader lock. The deadlock can be avoided by allowing only one thread to acquire the lock in "read-mode with intent to upgrade to write" while there are no threads in write mode and possibly non-zero threads in read-mode.

Priority policies


RW locks can be designed with different priority policies for reader vs. writer access. The lock can either be designed to always give priority to readers (read-preferring), to always give priority to writers (write-preferring), or be unspecified with regard to priority. These policies lead to different trade-offs with regard to concurrency and starvation.

  • Read-preferring RW locks allow for maximum concurrency, but can lead to write-starvation if contention is high. This is because writer threads will not be able to acquire the lock as long as at least one reading thread holds it. Since multiple reader threads may hold the lock at once, this means that a writer thread may continue waiting for the lock while new reader threads are able to acquire the lock, even to the point where the writer may still be waiting after all of the readers which were holding the lock when it first attempted to acquire it have released the lock. Priority to readers may be weak, as just described, or strong, meaning that whenever a writer releases the lock, any blocking readers always acquire it next.[5]: 76 
  • Write-preferring RW locks avoid the problem of writer starvation by preventing any new readers from acquiring the lock if there is a writer queued and waiting for the lock; the writer will acquire the lock as soon as all readers which were already holding the lock have completed.[6] The downside is that write-preferring locks allow for less concurrency in the presence of writer threads, compared to read-preferring RW locks. The lock is also less performant because each operation, taking or releasing the lock for either read or write, is more complex, internally requiring taking and releasing two mutexes instead of one.[citation needed] This variation is sometimes also known as a "write-biased" readers–writer lock.[7]
  • Unspecified-priority RW locks provide no guarantees with regard to read vs. write access. Unspecified priority can in some situations be preferable if it allows for a more efficient implementation.[citation needed]

Implementation


Several implementation strategies for readers–writer locks exist, reducing them to synchronization primitives that are assumed to pre-exist.

Using two mutexes


Raynal demonstrates how to implement an R/W lock using two mutexes and a single integer counter. The counter, b, tracks the number of blocking readers. One mutex, r, protects b and is only used by readers; the other, g (for "global") ensures mutual exclusion of writers. This requires that a mutex acquired by one thread can be released by another. The following is pseudocode for the operations:

Initialize

Set b to 0.
r is unlocked.
g is unlocked.

Begin read

Lock r.
Increment b.
If b = 1, lock g.
Unlock r.

End read

Lock r.
Decrement b.
If b = 0, unlock g.
Unlock r.

Begin write

Lock g.

End write

Unlock g.

This implementation is read-preferring.[5]: 76 

Using a condition variable and a mutex


Alternatively an RW lock can be implemented in terms of a condition variable, cond, an ordinary (mutex) lock, g, and various counters and flags describing the threads that are currently active or waiting.[8][9][10] For a write-preferring RW lock one can use two integer counters and one Boolean flag:

num_readers_active: the number of readers that have acquired the lock (integer)
num_writers_waiting: the number of writers waiting for access (integer)
writer_active: whether a writer has acquired the lock (Boolean).

Initially num_readers_active and num_writers_waiting are zero and writer_active is false.

The lock and release operations can be implemented as

Begin read

Lock g
While num_writers_waiting > 0 or writer_active:
  wait cond, g
Increment num_readers_active
Unlock g.

End read

Lock g
Decrement num_readers_active
If num_readers_active = 0:
  Notify cond (broadcast)
Unlock g.

Begin write

Lock g
Increment num_writers_waiting
While num_readers_active > 0 or writer_active is true:
  wait cond, g
Decrement num_writers_waiting
Set writer_active to true
Unlock g.

End write

Lock g
Set writer_active to false
Notify cond (broadcast)
Unlock g.

Programming language support


Example in Rust

use std::sync::RwLock;

let lock = RwLock::new(5);

// many reader locks can be held at once
{
    let r1 = lock.read().unwrap();
    let r2 = lock.read().unwrap();
    assert_eq!(*r1, 5);
    assert_eq!(*r2, 5);
} // read locks are dropped at this point

// only one write lock may be held, however
{
    let mut w = lock.write().unwrap();
    *w += 1;
    assert_eq!(*w, 6);
} // write lock is dropped here

Alternatives

[edit]

The read-copy-update (RCU) algorithm is one solution to the readers–writers problem. RCU is wait-free for readers. The Linux kernel implements a special solution for the case of few writers, called a seqlock.

from Grokipedia
A readers–writer lock, also known as a multiple-read/single-write lock, is a primitive in concurrent programming that allows multiple threads to simultaneously access a shared resource for reading while ensuring that only one thread can write to it at a time, thereby preventing data races in multithreaded environments. This mechanism addresses the classic readers–writers problem, first formulated by Courtois, Heymans, and Parnas in 1971, which models scenarios like database access where reads are frequent and non-mutating, but writes must be atomic and exclusive. In operation, a readers–writer lock distinguishes between read locks and write locks: when no writer is active, multiple readers can acquire read locks concurrently without blocking each other, maximizing throughput for read-heavy workloads. However, any write lock acquisition blocks all readers and other writers until the writing thread releases the lock, ensuring consistency. Unlike a standard mutex, which enforces total mutual exclusion, this design reduces contention and improves performance in applications where reads outnumber writes, such as caching systems or file systems. Readers–writer locks are implemented across various operating systems and programming environments, including POSIX threads (via pthread_rwlock_* functions), .NET's ReaderWriterLockSlim class, and kernel-level primitives in systems like Solaris and Linux. While effective, they can introduce fairness issues, such as writer starvation if readers continually arrive, and scalability challenges in highly contended scenarios, prompting research into optimized variants like passive reader-writer locks.

Fundamentals

Definition and Principles

A readers–writer lock (RW lock) is a synchronization primitive designed to manage concurrent access to a shared resource, permitting multiple threads or processes acting as readers to access the resource simultaneously for read-only operations, while ensuring that only one thread or process acting as a writer can access it exclusively for modifications, thereby excluding all readers and other writers during write operations. The core principles governing RW locks emphasize differentiated access modes: readers obtain shared (non-exclusive) access, allowing concurrency among them as long as no writer is active; writers, in contrast, obtain exclusive access, blocking all other readers and writers until the write completes. Each acquiring thread must release the lock after use, typically via a corresponding unlock operation, to maintain system progress. Improper policy choices can introduce risks such as starvation: in reader-preference schemes, for instance, continuous reader arrivals may indefinitely delay writers. The readers–writer problem and its lock mechanism originated in the early 1970s as part of efforts to optimize concurrency control in multiprogramming systems, particularly for scenarios like database and file access where read operations vastly outnumber writes. P. J. Courtois, F. Heymans, and D. L. Parnas formalized the problem in their 1971 paper, proposing semaphore-based solutions to enforce mutual exclusion while minimizing delays for the more frequent reader operations. A basic illustration of RW lock operations can be expressed using semaphores for a reader-preference policy, where readers are prioritized to reduce their wait times. Variables:
  • integer readcount (initialized to 0)
  • semaphore mutex (initialized to 1, for protecting readcount)
  • semaphore w (initialized to 1, for writer exclusion)
Reader acquire and release:

P(mutex); readcount := readcount + 1;
if readcount = 1 then P(w);
V(mutex);
// Perform reading
P(mutex); readcount := readcount - 1;
if readcount = 0 then V(w);
V(mutex);

Writer acquire and release:

P(w);
// Perform writing
V(w);

This pseudocode ensures that multiple readers can proceed concurrently after the first reader secures the semaphore, while writers block until no readers remain.

Motivation and Use Cases

Readers–writer locks, also known as read-write locks, are synchronization primitives designed to optimize concurrent access in scenarios where multiple threads frequently read shared data but writes are infrequent. Unlike standard mutexes that enforce exclusive access for both reads and writes, readers–writer locks permit concurrent read operations by multiple threads while ensuring writers obtain exclusive access, thereby minimizing contention and enhancing overall throughput in read-heavy environments. This design is particularly beneficial for workloads exhibiting high read-to-write ratios, where traditional exclusive locks would serialize all operations and underutilize available parallelism. For instance, in read-dominated benchmarks such as those involving in-memory databases, scalable readers–writer lock implementations have demonstrated throughput improvements of up to 7x compared to conventional reader–writer locks, highlighting the potential for substantial gains over simpler mutex-based approaches in similar settings. Common use cases include database systems, where index lookups and queries vastly outnumber updates, allowing multiple concurrent reads of catalog data. In web caches, they facilitate simultaneous serving of static content to numerous clients with occasional invalidations or refreshes. File systems often employ them for metadata operations, enabling parallel reads of directory entries while serializing modifications. Multi-threaded servers, such as those handling request routing or configuration data, also leverage these locks to support high-concurrency read access with rare updates. Despite these advantages, readers–writer locks introduce greater implementation complexity than mutexes and risk writer starvation, where continuous reader arrivals indefinitely delay writers unless fairness mechanisms are incorporated.

Variants

Upgradable Locks

An upgradable readers–writer lock extends the standard model by allowing a thread that holds a shared read lock to request an upgrade to an exclusive write lock without first releasing the read access. This variant introduces an intermediate "upgradeable" or "intent" state, where the holder can read concurrently with other readers but blocks new writers and other upgradeable holders until the upgrade completes or is abandoned. The upgrade mechanism typically involves the thread invoking a dedicated method, such as UpgradeToWriterLock in .NET's ReaderWriterLockSlim or tryConvertToWriteLock in Java's StampedLock, which waits for any active readers or writers to release their holds before transitioning the lock state to exclusive. During this wait, the original read access remains valid, preventing data races, and the process is often designed to be atomic to avoid intermediate inconsistent states. Downgrades from write back to read (or upgradeable) are symmetrically supported via methods like DowngradeFromWriterLock or tryConvertToReadLock, allowing the thread to revert exclusivity while retaining access. Upgradable locks are particularly useful in scenarios involving read-then-write patterns, such as database transactions that begin with reads to check conditions before committing writes, as seen in some database systems where read locks are automatically upgraded to write locks during processing. They also benefit caching or configuration systems where initial reads assess data before potential modifications, enabling efficient read-mostly workloads with occasional updates. A key challenge with upgradable locks is the risk of deadlock, which arises if multiple threads hold upgradeable reads and simultaneously attempt upgrades, as each blocks the others from proceeding; careful design, such as using non-blocking try-upgrade methods and avoiding nested locking, is essential to mitigate this. Additionally, the upgrade process can introduce fairness issues if not prioritized properly, potentially starving writers under high read contention.

Priority Policies

In readers–writer locks, a key challenge arises from the potential for writer starvation under certain access patterns. When multiple readers frequently acquire the lock in quick succession, a waiting writer may be indefinitely delayed, as the lock allows concurrent reads but exclusive writes. This issue, first formalized in the classic readers–writers problem, occurs in implementations with reader preference, where new readers can join ongoing read sessions without checking for pending writers. To address starvation and balance access, various priority policies have been developed. Writer-priority policies favor writers by allowing them to "jump the queue" once they request the lock, often blocking new readers from acquiring it until the writer proceeds. This prevents writer starvation but can increase reader wait times, particularly in read-heavy workloads. Conversely, reader-priority policies permit multiple readers to acquire the lock even if writers are queued, optimizing for high read throughput at the risk of delaying writers. Fair policies aim to treat all requesters equitably, using mechanisms like ticket-based ordering to enforce first-come, first-served access for both readers and writers, thereby bounding wait times without strong bias toward either side. Implementing these policies typically involves maintaining counters for active and waiting readers alongside queues for pending requests. For writer priority, a flag or counter tracks waiting writers, and incoming readers check it before proceeding; if a writer is pending, readers wait or defer. Fair implementations may use a single FIFO queue for all threads, assigning tickets upon request and advancing based on type (e.g., allowing reader tickets to pass others only if no writers follow).
These approaches trade off complexity and throughput: writer priority reduces writer latency in mixed workloads but can degrade overall read throughput in read-heavy workloads with infrequent writes, while fair policies ensure bounded fairness at the cost of higher overhead from queuing. Practical examples illustrate these policies in production systems. The Linux kernel's rw_semaphore (rwsem) employs a fairness-aware implementation in non-real-time kernels, using optimistic spinning and wait queues to prevent writer starvation while allowing multiple readers. In contrast, Java's ReentrantReadWriteLock supports configurable fairness via its constructor; the default non-fair mode permits barging for higher throughput, but enabling fairness enforces FIFO ordering to avoid indefinite delays for either readers or writers.

Implementations

Two Mutex Approach

The two mutex approach implements a readers–writer lock using a counter to track the number of active readers, protected by a count mutex, and a separate resource mutex to enforce exclusive access by writers or the first reader. This design allows multiple readers to proceed concurrently after the first reader acquires the resource mutex, while writers must wait until no readers are active. The approach relies solely on mutexes for synchronization, avoiding more complex primitives like condition variables, which makes it suitable for simple scenarios or systems with limited synchronization support. To acquire a read lock, a reader thread locks the count mutex, increments the counter, and, if it is the first reader (the counter was zero), locks the resource mutex before releasing the count mutex to perform the read operation. Upon releasing the read lock, the reader locks the count mutex again, decrements the counter, and, if it is the last reader (the counter reaches zero), releases the resource mutex. For a write lock, the writer locks the resource mutex, which blocks until no readers are active, before performing the write operation. The write lock is released by unlocking the resource mutex. This mechanism ensures correctness without additional signaling but can lead to writers blocking indefinitely under heavy read traffic. The following pseudocode illustrates the operations (assuming atomic increments/decrements under the count mutex and standard mutex lock/unlock semantics): Read acquire:

lock(count_mutex)
counter += 1
if counter == 1:
    lock(resource_mutex)
unlock(count_mutex)
// Perform read

Read release:

lock(count_mutex)
counter -= 1
if counter == 0:
    unlock(resource_mutex)
unlock(count_mutex)

Write acquire:

lock(resource_mutex)
// Perform write

Write release:

unlock(resource_mutex)

This implementation favors readers, as once readers are active, arriving writers block until all readers exit, potentially leading to writer starvation under heavy read traffic. Additionally, the shared count mutex creates a bottleneck under high contention, limiting scalability in multiprocessor environments where many threads compete for access. The readers–writer problem and its solutions, including reader-preference solutions like this one, were first formalized in the early 1970s. This approach bears similarity to mechanisms in early Unix systems, such as the flock advisory locking introduced in 4.2BSD (1983), which provided shared (read) and exclusive (write) file locks to coordinate access among processes.

Condition Variable Approach

The condition variable approach to implementing a readers–writer lock employs a single mutex to protect shared state variables, typically a reader count and a flag indicating writer activity, along with a condition variable to efficiently block and notify threads awaiting access. This design allows multiple readers to proceed concurrently when no writer is active, while ensuring exclusive access for writers, by having contending threads atomically check the state under the mutex and wait on the condition variable if the lock is unavailable. Upon release, the appropriate signaling (signal or broadcast) wakes waiting threads, minimizing CPU overhead compared to spin-based alternatives. The following pseudocode illustrates a basic reader-preferring implementation, where readers increment a counter to enter and decrement it to exit, signaling only when the last reader leaves; writers wait until the reader count is zero before proceeding, and upon writer exit, a broadcast notifies waiting threads, allowing multiple readers to acquire read locks concurrently.

class ConditionRWLock {
private:
    mutex m;
    condition_variable cv;
    int readers = 0;
    bool writer_active = false;

public:
    void read_lock() {
        unique_lock<mutex> lock(m);
        while (writer_active) {
            cv.wait(lock);
        }
        readers++;
    }

    void read_unlock() {
        unique_lock<mutex> lock(m);
        readers--;
        if (readers == 0) {
            cv.notify_one();  // Signal a waiting writer if present
        }
    }

    void write_lock() {
        unique_lock<mutex> lock(m);
        while (readers > 0 || writer_active) {
            cv.wait(lock);
        }
        writer_active = true;
    }

    void write_unlock() {
        unique_lock<mutex> lock(m);  // state must be changed under the mutex
        writer_active = false;
        cv.notify_all();  // Broadcast to wake waiting readers and writers
    }
};

In this structure, readers wait if a writer is active, while writers wait if any readers are active or another writer holds the lock; the broadcast in write_unlock allows concurrent resumption of readers after writer completion. This approach reduces busy-waiting by leveraging the operating system's scheduler for blocked threads, leading to lower CPU utilization in scenarios with infrequent contention or high reader throughput. It also facilitates priority policies through selective signaling, such as notifying writers before readers in the read_unlock to favor writers and mitigate reader-induced starvation. However, managing the state (reader count and writer flag) introduces complexity, requiring careful ordering of locks and waits to prevent deadlocks or missed signals; additionally, condition variable operations incur overhead from kernel-mode transitions, which can impact performance in low-contention environments. An enhancement involves using multiple condition variables—one dedicated to readers and another to writers—for finer-grained notifications, allowing targeted signals (e.g., writer-specific wakes without disturbing reader queues) and reducing unnecessary wake-ups in mixed workloads. This variant improves scalability in systems with imbalanced reader-writer ratios by minimizing thundering herd effects.

Language Support

Built-in Implementations

Java provides built-in support for readers–writer locks through the ReentrantReadWriteLock class in the java.util.concurrent.locks package, introduced with JDK 5 in September 2004. This implementation allows multiple threads to acquire read locks concurrently while ensuring exclusive access for write locks, with support for reentrancy similar to ReentrantLock. In C++, the std::shared_mutex class, added in the C++17 standard released in December 2017, offers native readers–writer lock functionality as part of the <shared_mutex> header. It enables multiple threads to hold shared (read) locks simultaneously or a single thread to hold an exclusive (write) lock, fulfilling the SharedMutex named requirements. Rust provides built-in support for readers–writer locks through the std::sync::RwLock type in the standard library, available since Rust 1.0, released in May 2015. This allows multiple readers or a single writer to access shared data, with RAII-based guards for automatic lock release and poisoning detection for error handling. Python's standard library does not include a built-in readers–writer lock; instead, developers rely on third-party libraries such as readerwriterlock from PyPI, which provides implementations supporting timeouts and the three variants of the readers–writer problem. Discussions within the Python community have proposed adding native support to the standard library, but as of Python 3.14, it remains unavailable. The POSIX threads (pthreads) API includes native support for readers–writer locks via functions like pthread_rwlock_init, pthread_rwlock_rdlock, and pthread_rwlock_wrlock, standardized in POSIX.1-2001. These allow multiple readers or a single writer to acquire the lock, with optional attributes for the lock kind (e.g., preferring readers or writers) on some implementations.
In .NET, the ReaderWriterLockSlim class in the System.Threading namespace, introduced in .NET Framework 3.5 in November 2007, provides a lightweight readers–writer lock optimized for performance over its predecessor ReaderWriterLock. It supports multiple concurrent readers, exclusive writers, and features like upgradeable read locks, using interlocked operations to minimize overhead. At the operating system level, Linux implements readers–writer semaphores (rw_semaphore) in the kernel, which support multiple readers or a single writer. This primitive has been available since kernel version 2.5.x in the early 2000s and ensures fairness to prevent writer starvation on non-real-time kernels. Windows offers slim reader/writer (SRW) locks via the SRWLOCK structure and associated APIs like AcquireSRWLockShared and AcquireSRWLockExclusive, introduced in Windows Vista in 2007. These user-mode locks are designed for single-process synchronization, providing low-overhead concurrent reads or exclusive writes without kernel involvement in uncontended cases. Languages without native support, such as C, require manual implementations of readers–writer locks, often using pthreads primitives or atomic operations to manage counters for active readers and pending writers.

Code Examples

The readers–writer lock, also known as a multiple readers/single writer lock, is a synchronization primitive that allows multiple concurrent readers or a single writer to access a shared resource, improving concurrency over a standard mutex in read-heavy scenarios.

Rust Example

In Rust, the std::sync::RwLock provides a readers–writer lock suitable for concurrent access to shared data across threads. The following example demonstrates multi-threaded read and write operations on a shared Vec<i32>, where multiple reader threads read the length of the vector while a writer thread appends an element to the vector exclusively. Rust's ownership model ensures safety through RAII guards (RwLockReadGuard and RwLockWriteGuard), which automatically release the lock upon dropping, and error handling via Result for poisoned locks.

rust

use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    let data = Arc::new(RwLock::new(vec![1, 2, 3]));

    // Writer thread: exclusive write access
    let data_clone = Arc::clone(&data);
    let writer = thread::spawn(move || {
        let mut guard = data_clone.write().expect("Failed to acquire write lock");
        guard.push(4);
        println!("Writer: {:?}", *guard); // Output: [1, 2, 3, 4]
    });

    // Multiple reader threads: shared read access
    let handles: Vec<_> = (0..2)
        .map(|i| {
            let data_clone = Arc::clone(&data);
            thread::spawn(move || {
                let guard = data_clone.read().expect("Failed to acquire read lock");
                println!("Reader {}: len = {}", i, guard.len());
            })
        })
        .collect();

    writer.join().unwrap();
    for handle in handles {
        handle.join().unwrap();
    }
}

This code uses Arc for thread-safe shared ownership of the RwLock, with read() allowing concurrent access and write() ensuring exclusivity; panics are avoided via expect() for demonstration, though production code might propagate errors.

Java Example

Java's java.util.concurrent.locks.ReentrantReadWriteLock supports reentrant read and write locks for fine-grained control in multi-threaded environments. The example below simulates a simple cache using a HashMap<String, String>, where reader threads query keys and a writer thread updates entries. Locks are acquired via readLock().lock() and writeLock().lock(), with manual unlock() in finally blocks for release, and the reentrant nature allows nested locking by the same thread.

java

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CacheExample {
    private final Map<String, String> cache = new HashMap<>();
    private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
    private final ReentrantReadWriteLock.ReadLock readLock = rwl.readLock();
    private final ReentrantReadWriteLock.WriteLock writeLock = rwl.writeLock();

    public String get(String key) {
        readLock.lock();
        try {
            return cache.get(key);
        } finally {
            readLock.unlock();
        }
    }

    public void put(String key, String value) {
        writeLock.lock();
        try {
            cache.put(key, value);
        } finally {
            writeLock.unlock();
        }
    }

    public static void main(String[] args) {
        CacheExample cache = new CacheExample();
        cache.put("key1", "value1"); // Writer operation

        // Simulate reader threads (in practice, use ExecutorService)
        Runnable reader = () -> {
            String value = cache.get("key1");
            System.out.println("Reader: " + value); // Output: value1
        };
        Thread reader1 = new Thread(reader);
        Thread reader2 = new Thread(reader);
        reader1.start();
        reader2.start();
        try {
            reader1.join();
            reader2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

The try/finally blocks guarantee that each lock is released even if an exception is thrown, and InterruptedException is handled by restoring the thread's interrupt status. The lock is non-fair by default; constructing it with new ReentrantReadWriteLock(true) selects a fair ordering policy, which reduces the risk of indefinite blocking (writer starvation) under heavy read traffic.
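Because the default non-fair policy can let a continuous stream of readers starve writers, java.util.concurrent also offers a fair mode and timed acquisition. A minimal sketch combining both (the class name FairLockExample is illustrative, not part of the library):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class FairLockExample {
    // Passing true requests the fair policy: threads acquire the lock in
    // roughly FIFO order, trading some throughput for starvation resistance.
    private static final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock(true);

    public static void main(String[] args) throws InterruptedException {
        // tryLock with a timeout bounds how long a writer may block.
        if (rwl.writeLock().tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                System.out.println("write lock acquired");
            } finally {
                rwl.writeLock().unlock();
            }
        } else {
            System.out.println("timed out waiting for write lock");
        }
    }
}
```

Since the lock is uncontended here, the timed acquisition succeeds immediately; under contention the writer would wait at most 100 ms before taking the else branch.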

C++ Example

C++17 introduces std::shared_mutex in <shared_mutex>, enabling shared (reader) and exclusive (writer) locking for concurrent data access. This example spawns reader threads that query values in a shared std::map<int, int> after a writer updates the map. std::shared_lock handles read access and std::unique_lock manages writes; both are RAII types, so exception safety is ensured by automatic unlocking on scope exit.


#include <shared_mutex>
#include <map>
#include <thread>
#include <vector>
#include <iostream>

class ThreadSafeMap {
private:
    std::map<int, int> data;
    mutable std::shared_mutex mutex;

public:
    int get(int key) const {
        std::shared_lock<std::shared_mutex> lock(mutex);
        auto it = data.find(key);
        return (it != data.end()) ? it->second : -1;
    }

    void put(int key, int value) {
        std::unique_lock<std::shared_mutex> lock(mutex);
        data[key] = value;
    }
};

int main() {
    ThreadSafeMap map;
    map.put(1, 42); // Writer operation

    // Spawn reader threads
    std::vector<std::thread> threads;
    for (int i = 0; i < 2; ++i) {
        threads.emplace_back([&map, i]() {
            int value = map.get(1);
            std::cout << "Reader " << i << ": " << value << std::endl; // Output: 42
        });
    }
    for (auto& t : threads) {
        t.join();
    }
    return 0;
}


The mutable keyword allows const member functions to lock the mutex for reads, preserving C++ const-correctness. No explicit error handling is needed beyond the exceptions that mutex operations may throw; because the lock guards are RAII objects, such exceptions propagate naturally without leaving the mutex held.

Alternatives

Comparison to Mutex

A mutex enforces mutual exclusion, permitting only a single thread, whether reading or writing, to access the protected resource at any given time. In contrast, a readers–writer lock distinguishes between read and write operations, allowing multiple threads to hold the lock concurrently in read-only mode while ensuring exclusive access for writers.

This design leads to distinct performance characteristics. Mutexes are simpler to implement and exhibit lower latency in scenarios with balanced read-write ratios or frequent writes, benefiting from techniques like optimistic spinning that can boost throughput by up to 3.5 times in Linux kernel microbenchmarks. However, under read-heavy workloads, mutexes serialize all accesses, causing throughput to drop by more than 50% with just two or more concurrent readers, as each must wait in turn. Readers–writer locks mitigate this by enabling parallel reads, delivering superior read scalability; for instance, optimized variants achieve up to 7.37 times higher throughput than traditional implementations in database benchmarks on 64-core systems.

Selection between the two depends on access patterns and priorities. Mutexes suit applications requiring straightforward exclusive access, such as short critical sections with mixed operations or write-dominant tasks, where their simplicity avoids unnecessary complexity. Readers–writer locks are preferable for scalability in read-intensive environments, like large shared collections accessed predominantly by readers, provided the read-to-write ratio justifies the added mechanism; the extra cost is typically worthwhile only when reader threads significantly outnumber writers. Readers–writer locks also carry greater acquisition overhead due to the need for state tracking, such as reader counts and mode distinctions, resulting in uncontended read latencies of approximately 107 cycles in traditional implementations, roughly 2 to 5 times a typical mutex's 20–50 cycles in microbenchmarks.
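The behavioral difference is easy to demonstrate: two threads can hold a read lock at the same time, which a mutex never permits. In the following sketch (class name is illustrative), each reader waits inside the critical section until the other reader has also entered, so the program can only complete because the read lock is shared; with a plain mutex the same pattern would deadlock:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ConcurrentReadDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
        CountDownLatch bothInside = new CountDownLatch(2);

        Runnable reader = () -> {
            rwl.readLock().lock();
            try {
                bothInside.countDown();
                // Wait until the other reader is also inside the critical
                // section; this requires both to hold the lock concurrently.
                bothInside.await();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                rwl.readLock().unlock();
            }
        };

        Thread t1 = new Thread(reader);
        Thread t2 = new Thread(reader);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("both readers held the lock at the same time");
    }
}
```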
This complexity also elevates writer overhead, as writers must coordinate with active readers, though it pays off in concurrent read scenarios.

Reader-writer semaphores, also known as rw_semaphores, extend the classic semaphore primitive to support concurrent read access while ensuring exclusive write access, typically using two semaphores: one to track the number of active readers and another for writer exclusion. This approach, originally proposed as a solution to the readers–writers problem using Dijkstra's P() and V() operations on counting semaphores, allows multiple readers to proceed simultaneously by incrementing a reader count before accessing the shared resource and decrementing it afterward, while a writer must acquire exclusive control by blocking all readers. However, implementing reader-writer semaphores requires careful ordering of operations to avoid deadlocks or starvation, making them more error-prone than higher-level readers–writer locks, which abstract away these details.

Sequence locks, or seqlocks, provide an optimistic mechanism primarily for read-heavy workloads, where readers perform lockless access by checking a sequence counter before and after reading the shared data; if the counter matches (indicating no concurrent write), the read succeeds, but a mismatch prompts a retry. Writers set the sequence counter to an odd value at the start of their critical section and to an even value at the end, allowing them to proceed without blocking readers, though this risks readers observing inconsistent data during writes. Commonly used in the Linux kernel for infrequently updated data structures such as the system time, seqlocks come in variants such as the raw seqcount_t (requiring external serialization of writers) or seqlock_t (with an integrated spinlock), but they cannot safely protect pointer-based data due to potential invalidation during concurrent writes.
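Java's StampedLock exposes the same validate-and-retry pattern as a seqlock through its optimistic read mode: the reader captures a stamp, reads, and then validates that no writer intervened. A minimal sketch (the class name is illustrative; the StampedLock API is real):

```java
import java.util.concurrent.locks.StampedLock;

public class OptimisticReadExample {
    private final StampedLock sl = new StampedLock();
    private int x, y;

    public void move(int dx, int dy) {
        long stamp = sl.writeLock();      // exclusive; advances the internal stamp
        try {
            x += dx;
            y += dy;
        } finally {
            sl.unlockWrite(stamp);
        }
    }

    public int sum() {
        long stamp = sl.tryOptimisticRead();  // lockless read attempt
        int curX = x, curY = y;
        if (!sl.validate(stamp)) {            // a write intervened: retry under a real read lock
            stamp = sl.readLock();
            try {
                curX = x;
                curY = y;
            } finally {
                sl.unlockRead(stamp);
            }
        }
        return curX + curY;
    }

    public static void main(String[] args) {
        OptimisticReadExample p = new OptimisticReadExample();
        p.move(3, 4);
        System.out.println(p.sum()); // 7
    }
}
```

Unlike a raw seqlock, the fallback path here blocks on a real read lock rather than spinning, so frequent writes degrade to readers–writer lock behavior instead of livelock.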
Read-copy-update (RCU) is a lock-free synchronization primitive designed for read-mostly environments, where readers traverse data structures without acquiring locks by entering an RCU read-side critical section via rcu_read_lock() and exiting with rcu_read_unlock(), relying on the kernel's grace-period mechanism to ensure updates do not interfere. Writers perform updates by copying the affected data structure, modifying the copy, and then atomically replacing pointers to it using rcu_assign_pointer(), followed by a grace period (via synchronize_rcu()) that waits for all pre-existing readers to complete before old data is reclaimed. Primarily employed in the Linux kernel and user-space libraries for scalable access to dynamic structures like linked lists, RCU avoids blocking readers entirely but demands careful memory management and is not suited for general-purpose synchronization due to the complexity of managing update lifetimes.

In summary: reader-writer semaphores enable concurrency similar to readers–writer locks but at the cost of manual deadlock avoidance; seqlocks prioritize read performance through retries without writer-reader blocking, potentially leading to reader livelock if writes are frequent; and RCU excels in scalability for low-update scenarios by deferring reclamation, but introduces overhead from grace periods and copy operations. In read-dominant applications, seqlocks offer faster reads than traditional readers–writer locks by eliminating blocking, though with inconsistency risks requiring retries, whereas RCU provides non-blocking reads at the expense of higher write complexity and limited applicability beyond kernel contexts.
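The copy-then-publish side of RCU can be approximated in a garbage-collected language with an atomic reference, where the garbage collector stands in for the grace period by reclaiming old snapshots only once no reader can still reach them. A rough sketch (names are illustrative; this is not the kernel RCU API):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

public class CopyOnUpdateList {
    // Readers dereference the current immutable snapshot without any lock.
    private final AtomicReference<List<Integer>> snapshot =
            new AtomicReference<>(Collections.emptyList());

    public List<Integer> read() {
        return snapshot.get();            // lock-free read-side access
    }

    public void add(int value) {
        while (true) {
            List<Integer> current = snapshot.get();
            List<Integer> copy = new ArrayList<>(current);   // copy...
            copy.add(value);                                 // ...update...
            List<Integer> published = Collections.unmodifiableList(copy);
            if (snapshot.compareAndSet(current, published)) {
                return;                  // ...and atomically publish
            }
            // CAS failed: another writer published first, so retry.
        }
    }

    public static void main(String[] args) {
        CopyOnUpdateList list = new CopyOnUpdateList();
        list.add(1);
        list.add(2);
        System.out.println(list.read()); // [1, 2]
    }
}
```

As with RCU proper, readers never block, but every update pays the full cost of copying the structure, which is only worthwhile when reads vastly outnumber writes.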

References
