Readers–writer lock
In computer science, a readers–writer lock (RW lock), also known as a multiple readers/single-writer lock,[1] a multi-reader lock,[2] a push lock,[3] or an MRSW lock, is a synchronization primitive that solves one of the readers–writers problems. An RW lock allows concurrent access for read-only operations, whereas write operations require exclusive access. This means that multiple threads can read the data in parallel, but an exclusive lock is needed for writing or modifying data. While a writer is writing the data, all other writers and readers are blocked until the writer is finished. A common use is to control access to a data structure in memory that cannot be updated atomically and is invalid (and should not be read by another thread) until the update is complete.
Readers–writer locks are usually constructed on top of mutexes and condition variables, or on top of semaphores.
Upgradable RW lock
Some RW locks allow the lock to be atomically upgraded from being locked in read-mode to write-mode, as well as being downgraded from write-mode to read-mode.[4] Upgrading a lock from read-mode to write-mode is prone to deadlocks, since whenever two threads holding reader locks both attempt to upgrade to writer locks, a deadlock is created that can only be broken by one of the threads releasing its reader lock. The deadlock can be avoided by allowing only one thread to acquire the lock in "read-mode with intent to upgrade to write" while there are no threads in write mode and possibly non-zero threads in read-mode.
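The avoidance scheme just described, a single "read-mode with intent to upgrade" slot, can be sketched in Python (class and method names here are illustrative, not a standard API). Because at most one thread holds the upgradeable slot, an upgrading thread only ever waits for plain readers, which never wait on it, so the mutual wait that causes the deadlock cannot arise.

```python
import threading

class UpgradableRWLock:
    """Illustrative sketch: one upgradeable slot avoids upgrade deadlock."""
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0          # includes the upgradeable holder while it reads
        self._writer = False
        self._upgrader = False     # the single intent-to-upgrade slot

    def acquire_read(self):
        with self._cond:
            while self._writer:
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            self._cond.notify_all()

    def acquire_upgradable(self):
        with self._cond:
            while self._writer or self._upgrader:
                self._cond.wait()   # only one thread may hold this mode
            self._upgrader = True
            self._readers += 1      # reads alongside ordinary readers

    def upgrade(self):
        # caller must hold the upgradeable slot
        with self._cond:
            self._readers -= 1      # stop counting ourselves as a reader
            while self._readers > 0:
                self._cond.wait()   # wait only for plain readers to drain
            self._writer = True

    def release_write(self):
        with self._cond:
            self._writer = False
            self._upgrader = False
            self._cond.notify_all()
```

Since ordinary readers never block on the upgrader (they only check the writer flag), the upgrader's wait is guaranteed to end once the current readers finish.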
Priority policies
RW locks can be designed with different priority policies for reader vs. writer access. The lock can either be designed to always give priority to readers (read-preferring), to always give priority to writers (write-preferring), or to be unspecified with regard to priority. These policies lead to different tradeoffs with regard to concurrency and starvation.
- Read-preferring RW locks allow for maximum concurrency, but can lead to write-starvation if contention is high. This is because writer threads will not be able to acquire the lock as long as at least one reading thread holds it. Since multiple reader threads may hold the lock at once, this means that a writer thread may continue waiting for the lock while new reader threads are able to acquire the lock, even to the point where the writer may still be waiting after all of the readers which were holding the lock when it first attempted to acquire it have released the lock. Priority to readers may be weak, as just described, or strong, meaning that whenever a writer releases the lock, any blocking readers always acquire it next.[5]: 76
- Write-preferring RW locks avoid the problem of writer starvation by preventing any new readers from acquiring the lock if there is a writer queued and waiting for the lock; the writer will acquire the lock as soon as all readers which were already holding the lock have completed.[6] The downside is that write-preferring locks allow for less concurrency in the presence of writer threads, compared to read-preferring RW locks. The lock is also less performant because each operation, taking or releasing the lock for either read or write, is more complex, internally requiring taking and releasing two mutexes instead of one.[citation needed] This variation is sometimes also known as a "write-biased" readers–writer lock.[7]
- Unspecified-priority RW locks provide no guarantees with regard to read vs. write access. Unspecified priority can in some situations be preferable if it allows for a more efficient implementation.[citation needed]
Implementation
Several implementation strategies for readers–writer locks exist, reducing them to synchronization primitives that are assumed to pre-exist.
Using two mutexes
Raynal demonstrates how to implement an R/W lock using two mutexes and a single integer counter. The counter, b, tracks the number of blocking readers. One mutex, r, protects b and is only used by readers; the other, g (for "global") ensures mutual exclusion of writers. This requires that a mutex acquired by one thread can be released by another. The following is pseudocode for the operations:
Initialize
Set b to 0. r is unlocked. g is unlocked.
Begin read
Lock r. Increment b. If b = 1, lock g. Unlock r.
End read
Lock r. Decrement b. If b = 0, unlock g. Unlock r.
Begin write
Lock g.
End write
Unlock g.
This implementation is read-preferring.[5]: 76
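The pseudocode above translates almost directly into Python, whose threading.Lock, like the mutexes Raynal assumes, may be released by a different thread than the one that acquired it. This is an illustrative sketch (class and method names are invented for this example):

```python
import threading

class TwoMutexRWLock:
    """Raynal's read-preferring lock: counter b guarded by r, writers excluded via g."""
    def __init__(self):
        self.b = 0                    # number of readers currently holding the lock
        self.r = threading.Lock()     # protects b; used only by readers
        self.g = threading.Lock()     # global lock ensuring mutual exclusion of writers

    def begin_read(self):
        with self.r:
            self.b += 1
            if self.b == 1:
                self.g.acquire()      # first reader locks out writers

    def end_read(self):
        with self.r:
            self.b -= 1
            if self.b == 0:
                self.g.release()      # last reader readmits writers; note this may
                                      # be a different thread than the one that
                                      # acquired g, which threading.Lock permits

    def begin_write(self):
        self.g.acquire()

    def end_write(self):
        self.g.release()
```

As in the pseudocode, a writer simply contends for g, so it makes no progress while any reader holds the lock, which is exactly the read-preferring behavior described above.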
Using a condition variable and a mutex
Alternatively, an RW lock can be implemented in terms of a condition variable, cond, an ordinary (mutex) lock, g, and various counters and flags describing the threads that are currently active or waiting.[8][9][10] For a write-preferring RW lock one can use two integer counters and one Boolean flag:
- num_readers_active: the number of readers that have acquired the lock (integer)
- num_writers_waiting: the number of writers waiting for access (integer)
- writer_active: whether a writer has acquired the lock (Boolean).
Initially num_readers_active and num_writers_waiting are zero and writer_active is false.
The lock and release operations can be implemented as
Begin read
Lock g
While num_writers_waiting > 0 or writer_active: wait cond, g[a]
Increment num_readers_active
Unlock g
End read
Lock g
Decrement num_readers_active
If num_readers_active = 0: Notify cond (broadcast)
Unlock g
Begin write
Lock g
Increment num_writers_waiting
While num_readers_active > 0 or writer_active is true: wait cond, g
Decrement num_writers_waiting
Set writer_active to true
Unlock g
End write
Lock g
Set writer_active to false
Notify cond (broadcast)
Unlock g
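The four operations above can be sketched in Python, where a threading.Condition bundles the mutex g and the condition variable cond. This is an illustrative sketch, not a library API:

```python
import threading

class WritePreferringRWLock:
    """Write-preferring RW lock from the pseudocode: one condvar, one mutex,
    two integer counters, and one Boolean flag."""
    def __init__(self):
        self.cond = threading.Condition()   # combines mutex g and condition cond
        self.num_readers_active = 0
        self.num_writers_waiting = 0
        self.writer_active = False

    def begin_read(self):
        with self.cond:
            # new readers defer to any queued writer: this is what makes
            # the lock write-preferring
            while self.num_writers_waiting > 0 or self.writer_active:
                self.cond.wait()
            self.num_readers_active += 1

    def end_read(self):
        with self.cond:
            self.num_readers_active -= 1
            if self.num_readers_active == 0:
                self.cond.notify_all()      # broadcast: a waiting writer may go

    def begin_write(self):
        with self.cond:
            self.num_writers_waiting += 1
            while self.num_readers_active > 0 or self.writer_active:
                self.cond.wait()
            self.num_writers_waiting -= 1
            self.writer_active = True

    def end_write(self):
        with self.cond:
            self.writer_active = False
            self.cond.notify_all()          # broadcast: wake readers and writers
```

Registering in num_writers_waiting before waiting is the key step: it blocks new readers from entering, so the writer acquires the lock as soon as the readers that already hold it finish.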
Programming language support
- pthread_rwlock_t and associated operations in the POSIX standard[11]
- ReadWriteLock[12] interface and the ReentrantReadWriteLock[7] locks in Java version 5 or above
- Microsoft System.Threading.ReaderWriterLockSlim lock for C# and other .NET languages[13]
- std::shared_mutex read/write lock in C++17[14]
- boost::shared_mutex and boost::upgrade_mutex locks in Boost C++ Libraries[15]
- SRWLock, added to the Windows operating system API as of Windows Vista[16]
- sync.RWMutex in Go[17]
- Phase-fair reader–writer lock, which alternates between readers and writers[18]
- std::sync::RwLock read/write lock in Rust[19]
- Poco::RWLock in POCO C++ Libraries
- mse::recursive_shared_timed_mutex in the SaferCPlusPlus library, a version of std::shared_timed_mutex that supports the recursive ownership semantics of std::recursive_mutex
- txrwlock.ReadersWriterDeferredLock, a readers/writer lock for Twisted[20]
- rw_semaphore in the Linux kernel[21]
Example in Rust
use std::sync::RwLock;
let lock = RwLock::new(5);
// many reader locks can be held at once
{
let r1 = lock.read().unwrap();
let r2 = lock.read().unwrap();
assert_eq!(*r1, 5);
assert_eq!(*r2, 5);
} // read locks are dropped at this point
// only one write lock may be held, however
{
let mut w = lock.write().unwrap();
*w += 1;
assert_eq!(*w, 6);
} // write lock is dropped here
Alternatives
The read-copy-update (RCU) algorithm is one solution to the readers–writers problem. RCU is wait-free for readers. The Linux kernel implements a special solution for the case of few writers called a seqlock.
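The seqlock idea can be sketched in Python: the writer bumps a sequence counter to an odd value while modifying the data and back to even when done, and a reader retries whenever the counter was odd or changed during its read. This is only an illustrative single-process sketch (real seqlocks rely on memory barriers; here CPython's global interpreter lock stands in for them, and the class name is invented):

```python
import threading

class Seqlock:
    """Minimal seqlock sketch: readers never block, they retry instead."""
    def __init__(self, value):
        self.seq = 0                           # even: stable; odd: write in progress
        self.value = value
        self._write_mutex = threading.Lock()   # serializes the (few) writers

    def write(self, value):
        with self._write_mutex:
            self.seq += 1          # now odd: warn readers an update is in progress
            self.value = value
            self.seq += 1          # even again: update complete

    def read(self):
        while True:
            start = self.seq
            if start % 2:          # writer mid-update; retry
                continue
            value = self.value
            if self.seq == start:  # no writer intervened during the read
                return value
```

Readers pay nothing when there is no concurrent write, which is why the kernel favors this primitive when writers are rare.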
Notes
- ^ This is the standard "wait" operation on condition variables, which, among other actions, releases the mutex g.
References
- ^ Hamilton, Doug (21 April 1995). "Suggestions for multiple-reader/single-writer lock?". Newsgroup: comp.os.ms-windows.nt.misc. Usenet: hamilton.798430053@BIX.com. Retrieved 8 October 2010.
- ^ "Practical lock-freedom" by Keir Fraser 2004
- ^ "Push Locks – What are they?". Ntdebugging Blog. MSDN Blogs. 2 September 2009. Retrieved 11 May 2017.
- ^ "Synchronization § UpgradeLockable Concept – EXTENSION". Boost C++ Libraries.
- ^ a b Raynal, Michel (2012). Concurrent Programming: Algorithms, Principles, and Foundations. Springer.
- ^ Stevens, W. Richard; Rago, Stephen A. (2013). Advanced Programming in the UNIX Environment. Addison-Wesley. p. 409.
- ^ a b java.util.concurrent.locks.ReentrantReadWriteLock: the Java readers–writer lock implementation offers a "fair" mode
- ^ Herlihy, Maurice; Shavit, Nir (2012). The Art of Multiprocessor Programming. Elsevier. pp. 184–185.
- ^ Nichols, Bradford; Buttlar, Dick; Farrell, Jacqueline (1996). PThreads Programming: A POSIX Standard for Better Multiprocessing. O'Reilly. pp. 84–89. ISBN 9781565921153.
- ^ Butenhof, David R. (1997). Programming with POSIX Threads. Addison-Wesley. pp. 253–266.
- ^ "The Open Group Base Specifications Issue 6, IEEE Std 1003.1, 2004 Edition: pthread_rwlock_destroy". The IEEE and The Open Group. Retrieved 14 May 2011.
- ^ java.util.concurrent.locks.ReadWriteLock
- ^ "ReaderWriterLockSlim Class (System.Threading)". Microsoft Corporation. Retrieved 14 May 2011.
- ^ "New adopted paper: N3659, Shared Locking in C++—Howard Hinnant, Detlef Vollmann, Hans Boehm". Standard C++ Foundation.
- ^ Anthony Williams. "Synchronization – Boost 1.52.0". Retrieved 31 January 2012.
- ^ Alessandrini, Victor (2015). Shared Memory Application Programming: Concepts and Strategies in Multicore Application Programming. Morgan Kaufmann.
- ^ "The Go Programming language – Package sync". Retrieved 30 May 2015.
- ^ "Reader–Writer Synchronization for Shared-Memory Multiprocessor Real-Time Systems" (PDF).
- ^ "std::sync::RwLock – Rust". Retrieved 26 October 2019.
- ^ "Readers/Writer Lock for Twisted". GitHub. Retrieved 28 September 2016.
- ^ "Synchronization primitives in the Linux kernel: Reader/Writer semaphores". Linux Insides. Retrieved 8 June 2023.
Widely used implementations include the POSIX pthread_rwlock_* functions, .NET's ReaderWriterLockSlim class, and kernel-level primitives in systems like Solaris and Linux.[5][3][2][4] While effective, they can introduce fairness issues—such as writer starvation if readers continually arrive—and scalability challenges in highly contended scenarios, prompting research into optimized variants like passive reader-writer locks.[6]
Fundamentals
Definition and Principles
A readers–writer lock (RW lock) is a synchronization primitive designed to manage concurrent access to a shared resource, permitting multiple threads or processes acting as readers to access the resource simultaneously for read-only operations, while ensuring that only one thread or process acting as a writer can access it exclusively for modifications, thereby excluding all readers and other writers during write operations.[7][8] The core principles governing RW locks emphasize differentiated access modes: readers obtain shared (non-exclusive) access, allowing concurrency among them as long as no writer is active; writers, in contrast, obtain exclusive access, blocking all other readers and writers until the write completes.[7] Each acquiring thread must release the lock after use, typically via a corresponding unlock operation, to maintain system progress.[9] Improper policy choices can introduce risks such as starvation—for instance, in reader-preference schemes, continuous reader arrivals may indefinitely delay writers.[10]
The readers–writer problem and its lock mechanism originated in the early 1970s as part of efforts to optimize concurrent control in multiprogramming systems, particularly for scenarios like database and file access where read operations vastly outnumber writes.[1] P. J. Courtois, F. Heymans, and D. L. Parnas formalized the problem in their 1971 paper, proposing semaphore-based solutions to enforce mutual exclusion while minimizing delays for the more frequent reader operations.[1]
A basic illustration of RW lock operations can be expressed using semaphores for a reader-preference policy, where readers are prioritized to reduce their wait times. Variables:
- integer readcount (initialized to 0)
- semaphore mutex (initialized to 1, for protecting readcount)
- semaphore w (initialized to 1, for writer exclusion)
Reader entry:
P(mutex);
readcount := readcount + 1;
if readcount = 1 then P(w);
V(mutex);
// Perform reading
Reader exit:
P(mutex);
readcount := readcount - 1;
if readcount = 0 then V(w);
V(mutex);
Writer:
P(w);
// Perform writing
V(w);
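The semaphore pseudocode above can be transcribed directly into Python, with P and V corresponding to acquire and release (an illustrative sketch; the class and method names are invented for this example):

```python
import threading

class ReaderPreferenceRWLock:
    """Semaphore-based reader-preference lock matching the P/V pseudocode."""
    def __init__(self):
        self.readcount = 0
        self.mutex = threading.Semaphore(1)   # protects readcount
        self.w = threading.Semaphore(1)       # writer exclusion

    def start_read(self):
        self.mutex.acquire()      # P(mutex)
        self.readcount += 1
        if self.readcount == 1:
            self.w.acquire()      # first reader locks out writers: P(w)
        self.mutex.release()      # V(mutex)

    def end_read(self):
        self.mutex.acquire()      # P(mutex)
        self.readcount -= 1
        if self.readcount == 0:
            self.w.release()      # last reader readmits writers: V(w)
        self.mutex.release()      # V(mutex)

    def start_write(self):
        self.w.acquire()          # P(w)

    def end_write(self):
        self.w.release()          # V(w)
```

Note the reader-preference property: while any reader is inside, w stays taken, so an arriving writer blocks but arriving readers sail through mutex.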
Motivation and Use Cases
Readers–writer locks, also known as read-write locks, are synchronization primitives designed to optimize performance in scenarios where multiple threads frequently read shared data but writes are infrequent. Unlike standard mutexes that enforce exclusive access for both reads and writes, readers–writer locks permit concurrent read operations by multiple threads while ensuring writers obtain exclusive access, thereby minimizing contention and enhancing overall system throughput in read-heavy environments.[11] This design is particularly beneficial for workloads exhibiting high read-to-write ratios, where traditional exclusive locks would serialize all operations and underutilize available parallelism. For instance, in read-dominated benchmarks such as those involving in-memory databases, scalable readers–writer lock implementations have demonstrated throughput improvements of up to 7x compared to conventional reader–writer locks, highlighting the potential for substantial gains over simpler mutex-based approaches in similar settings.[6]
Common use cases include database systems, where index lookups and queries vastly outnumber updates, allowing multiple concurrent reads of catalog data.[12] In web caches, they facilitate simultaneous serving of static content to numerous clients with occasional invalidations or refreshes.[13] File systems often employ them for metadata operations, enabling parallel reads of directory entries while serializing modifications.[14] Multi-threaded servers, such as those handling request routing or configuration data, also leverage these locks to support high-concurrency read access with rare updates.[11]
Despite these advantages, readers–writer locks introduce greater implementation complexity than mutexes and risk writer starvation, where continuous reader arrivals indefinitely delay writers unless fairness mechanisms are incorporated.[11]
Variants
Upgradable Locks
An upgradable readers–writer lock extends the standard model by allowing a thread that holds a shared read lock to request an upgrade to an exclusive write lock without first releasing the read access. This variant introduces an intermediate "upgradeable" or "intent" state, where the holder can read concurrently with other readers but blocks new writers and other upgradeable holders until the upgrade completes or is abandoned.[15][16][17] The upgrade mechanism typically involves the thread invoking a dedicated method, such as UpgradeToWriterLock in .NET's ReaderWriterLock or tryConvertToWriteLock in Java's StampedLock, which waits for any active readers or writers to release their holds before transitioning the lock state to exclusive. During this wait, the original read access remains valid, preventing data races, and the process is often designed to be atomic to avoid intermediate inconsistent states. Downgrades from write back to read (or upgradeable) are symmetrically supported via methods like DowngradeFromWriterLock or tryConvertToReadLock, allowing the thread to revert exclusivity while retaining access.[15][16][17]
Upgradable locks are particularly useful in scenarios involving optimistic concurrency control, such as database transactions that begin with reads to check conditions before committing writes, as seen in systems like Teradata where read locks are automatically upgraded to write locks during processing. They also benefit caching or configuration systems where initial reads assess data before potential modifications, enabling efficient read-mostly workloads with occasional updates.[18][15]
A key challenge with upgradable locks is the risk of deadlock, which arises if multiple threads hold upgradeable reads and simultaneously attempt upgrades, as each blocks the others from proceeding; careful API design, such as using non-blocking try-upgrade methods and avoiding nested locking, is essential to mitigate this. Additionally, the upgrade process can introduce fairness issues if not prioritized properly, potentially starving writers under high read contention.[15][16]
Priority Policies
In readers–writer locks, a key challenge arises from the potential for writer starvation under certain access patterns. When multiple readers frequently acquire the lock in quick succession, a waiting writer may be indefinitely delayed, as the lock allows concurrent reads but exclusive writes. This issue, first formalized in the classic readers–writers problem, occurs in implementations with reader preference, where new readers can join ongoing read sessions without checking for pending writers.[1] To address starvation and balance access, various priority policies have been developed. Writer-priority policies favor writers by allowing them to "jump the queue" once they request the lock, often blocking new readers from acquiring it until the writer proceeds. This prevents starvation but can increase reader wait times, particularly in read-heavy workloads. Conversely, reader-priority policies permit multiple readers to acquire the lock even if writers are queued, optimizing for high read throughput at the risk of delaying writers. Fair policies aim to treat all requesters equitably, using mechanisms like ticket-based ordering to enforce first-come, first-served access for both readers and writers, thereby bounding wait times without strong bias toward either side.[19] Implementing these policies typically involves maintaining counters for active and waiting readers alongside queues for pending requests. For writer priority, a flag or counter tracks waiting writers, and incoming readers check it before proceeding; if a writer is pending, readers wait or defer. Fair implementations may use a single FIFO queue for all threads, assigning tickets upon request and advancing based on type (e.g., allowing reader tickets to pass others only if no writers follow). 
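A ticket-based fair policy of this kind can be sketched in Python (an illustrative sketch; names are invented). Every request draws a ticket; a reader admitted in its turn advances the head immediately so consecutive readers still overlap, while a writer keeps its turn until it has exclusive access, so readers cannot pass a queued writer:

```python
import threading

class FairRWLock:
    """FIFO-fair RW lock: tickets bound each requester's wait."""
    def __init__(self):
        self._cond = threading.Condition()
        self._head = 0          # ticket currently being served
        self._tail = 0          # next ticket to hand out
        self._readers = 0
        self._writer = False

    def acquire_read(self):
        with self._cond:
            ticket = self._tail
            self._tail += 1
            # wait for our turn in FIFO order, and for any writer to finish
            while ticket != self._head or self._writer:
                self._cond.wait()
            self._head += 1         # admit self; back-to-back readers overlap
            self._readers += 1
            self._cond.notify_all()

    def release_read(self):
        with self._cond:
            self._readers -= 1
            self._cond.notify_all()

    def acquire_write(self):
        with self._cond:
            ticket = self._tail
            self._tail += 1
            while ticket != self._head or self._readers > 0 or self._writer:
                self._cond.wait()
            self._head += 1
            self._writer = True

    def release_write(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()
```

Because a waiting writer's ticket stalls the head, readers that arrive after it cannot be admitted ahead of it, which bounds writer wait time without starving readers.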
These approaches trade off complexity and performance: writer priority reduces writer latency in mixed workloads but can degrade overall read throughput in read-heavy workloads with infrequent writes, while fair policies ensure bounded fairness at the cost of higher overhead from queuing.[19] Practical examples illustrate these policies in production systems. The Linux kernel's rw_semaphore (rwsem) employs a fair implementation in non-real-time kernels, using optimistic spinning and wait queues to prevent writer starvation while allowing multiple readers. In contrast, Java's ReentrantReadWriteLock supports configurable fairness via its constructor; the default non-fair mode permits barging for higher throughput, but enabling fairness enforces FIFO ordering to avoid indefinite delays for either readers or writers.[5][20]
Implementations
Two Mutex Approach
The two mutex approach implements a readers–writer lock using a counter to track the number of active readers, protected by a count mutex, and a separate resource mutex to enforce exclusive access by writers or the first reader. This design allows multiple readers to proceed concurrently after the first reader acquires the resource mutex, while writers must wait until no readers are active. The approach relies solely on mutexes for synchronization, avoiding more complex primitives like condition variables, which makes it suitable for simple scenarios or systems with limited synchronization support. To acquire a read lock, a reader thread locks the count mutex, increments the counter, and—if it is the first reader (counter was zero)—locks the resource mutex before releasing the count mutex to perform the read operation. Upon releasing the read lock, the reader locks the count mutex again, decrements the counter, and—if it is the last reader (counter reaches zero)—releases the resource mutex. For a write lock, the writer locks the resource mutex, which blocks until no readers are active, before performing the write operation. The write lock is released by unlocking the resource mutex. This mechanism ensures correctness without additional signaling but can lead to writers blocking indefinitely under heavy read traffic. The following pseudocode illustrates the operations (assuming atomic increments/decrements under the count mutex and standard mutex lock/unlock semantics):
Read acquire:
lock(count_mutex)
counter += 1
if counter == 1:
    lock(resource_mutex)
unlock(count_mutex)
// Perform read
Read release:
lock(count_mutex)
counter -= 1
if counter == 0:
    unlock(resource_mutex)
unlock(count_mutex)
Write acquire:
lock(resource_mutex)
// Perform write
Write release:
unlock(resource_mutex)
Condition Variable Approach
The condition variable approach to implementing a readers–writer lock employs a single mutex to protect shared state variables—typically a reader count and a flag indicating writer activity—along with a condition variable to efficiently block and notify threads awaiting access. This design allows multiple readers to proceed concurrently when no writer is active, while ensuring exclusive access for writers, by having contending threads atomically check the state under the mutex and wait on the condition variable if the lock is unavailable. Upon release, the appropriate signaling (signal or broadcast) wakes waiting threads, minimizing CPU overhead compared to spin-based alternatives. The following pseudocode illustrates a basic reader-preferring implementation, where readers increment a counter to enter and decrement it to exit, signaling only when the last reader leaves; writers wait until the reader count is zero before proceeding, and upon writer exit, a broadcast notifies waiting threads, allowing multiple readers to acquire read locks concurrently.
class ConditionRWLock {
private:
    mutex m;
    condition_variable cv;
    int readers = 0;
    bool writer_active = false;
public:
    void read_lock() {
        unique_lock<mutex> lock(m);
        while (writer_active) {
            cv.wait(lock);
        }
        readers++;
    }
    void read_unlock() {
        unique_lock<mutex> lock(m);
        readers--;
        if (readers == 0) {
            cv.notify_one(); // Signal a waiting writer if present
        }
    }
    void write_lock() {
        unique_lock<mutex> lock(m);
        while (readers > 0 || writer_active) {
            cv.wait(lock);
        }
        writer_active = true;
    }
    void write_unlock() {
        unique_lock<mutex> lock(m); // the flag must be cleared under the mutex
        writer_active = false;
        cv.notify_all(); // Broadcast to wake waiting readers and writers
    }
};
The broadcast in write_unlock allows concurrent resumption of readers after writer completion.[22][23]
This approach reduces busy-waiting by leveraging the operating system's scheduler for blocked threads, leading to lower CPU utilization in scenarios with infrequent contention or high reader throughput. It also facilitates priority policies through selective signaling, such as notifying writers before readers in the read_unlock to favor writers and mitigate reader-induced starvation.[22][24]
However, managing the state (reader count and writer flag) introduces complexity, requiring careful ordering of locks and waits to prevent deadlocks or missed signals; additionally, condition variable operations incur overhead from kernel-mode transitions, which can impact performance in low-contention environments.[22][23]
An enhancement involves using multiple condition variables—one dedicated to readers and another to writers—for finer-grained notifications, allowing targeted signals (e.g., writer-specific wakes without disturbing reader queues) and reducing unnecessary wake-ups in mixed workloads. This variant improves scalability in systems with imbalanced reader-writer ratios by minimizing thundering herd effects.[22][24]
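This two-condition-variable variant can be sketched in Python, with both conditions sharing one mutex (an illustrative sketch; names are invented): the last reader wakes exactly one writer, and a finishing writer wakes either one queued writer or all waiting readers.

```python
import threading

class TwoCondRWLock:
    """Write-preferring RW lock with separate reader and writer conditions,
    so releases wake only the side that can actually make progress."""
    def __init__(self):
        self._mutex = threading.Lock()
        self._readers_ok = threading.Condition(self._mutex)
        self._writers_ok = threading.Condition(self._mutex)
        self._readers = 0
        self._writer = False
        self._writers_waiting = 0

    def read_lock(self):
        with self._mutex:
            while self._writer or self._writers_waiting > 0:
                self._readers_ok.wait()
            self._readers += 1

    def read_unlock(self):
        with self._mutex:
            self._readers -= 1
            if self._readers == 0:
                self._writers_ok.notify()       # wake exactly one writer

    def write_lock(self):
        with self._mutex:
            self._writers_waiting += 1
            while self._readers > 0 or self._writer:
                self._writers_ok.wait()
            self._writers_waiting -= 1
            self._writer = True

    def write_unlock(self):
        with self._mutex:
            self._writer = False
            if self._writers_waiting > 0:
                self._writers_ok.notify()       # prefer a queued writer
            else:
                self._readers_ok.notify_all()   # otherwise admit all readers
```

Targeted notify calls avoid waking a crowd of readers only to have them go back to sleep behind a queued writer, the thundering-herd effect mentioned above.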
Language Support
Built-in Implementations
Java provides built-in support for readers–writer locks through the ReentrantReadWriteLock class in the java.util.concurrent.locks package, introduced in JDK 5 in September 2004. This implementation allows multiple threads to acquire read locks concurrently while ensuring exclusive access for write locks, with support for reentrancy similar to ReentrantLock.[20]
In C++, the std::shared_mutex class, added in the C++17 standard released in December 2017, offers native readers–writer lock functionality as part of the <shared_mutex> header. It enables multiple threads to hold shared (read) locks simultaneously or a single thread to hold an exclusive (write) lock, fulfilling the SharedMutex requirements for synchronization.
Rust provides built-in support for readers–writer locks through the std::sync::RwLock type in the standard library, available since Rust 1.0 released in May 2015. This allows multiple readers or a single writer to access shared data, with RAII-based guards for automatic lock release and poisoning detection for error handling.[25]
Python's standard library does not include a built-in readers–writer lock; instead, developers rely on third-party libraries such as readerwriterlock from PyPI, which provides a compliant implementation supporting timeouts and the three variants of the readers–writer problem. Discussions within the Python community have proposed adding native support to the standard library, but as of Python 3.14, it remains unavailable.[26][27][28]
The POSIX threads (pthreads) API includes native support for readers–writer locks via functions like pthread_rwlock_init, pthread_rwlock_rdlock, and pthread_rwlock_wrlock, standardized in POSIX.1-2001. These allow multiple readers or a single writer to acquire the lock, with optional attributes for kind (e.g., preferring readers or writers) on some implementations.[7]
In .NET, the ReaderWriterLockSlim class in the System.Threading namespace, introduced in .NET Framework 3.5 in November 2007, provides a lightweight readers–writer lock optimized for performance over its predecessor ReaderWriterLock. It supports multiple concurrent readers, exclusive writers, and features like upgradeable read locks, using interlocked operations to minimize overhead.[29]
At the operating system level, Linux implements readers–writer semaphores (rw_semaphore) in the kernel, which support multiple readers or a single writer and use optimistic spinning and per-semaphore wait queues to avoid unnecessary sleeping. This primitive has been available since kernel version 2.5.x in the early 2000s and ensures fairness to prevent writer starvation on non-real-time kernels.[5]
Windows offers slim reader/writer (SRW) locks via the SRWLOCK structure and associated APIs like AcquireSRWLockShared and AcquireSRWLockExclusive, introduced in Windows Vista in January 2007. These user-mode locks are designed for single-process synchronization, providing low-overhead concurrent reads or exclusive writes without kernel involvement in uncontended cases.[30]
Languages without native support, such as C, require manual implementations of readers–writer locks, often using pthreads primitives or atomic operations to manage counters for active readers and pending writers.[22]
Code Examples
The readers–writer lock, also known as a multiple readers/single writer lock, is a synchronization primitive that allows multiple concurrent readers or a single writer to access a shared resource, improving concurrency over a standard mutex in read-heavy scenarios.[25][20][31]
Rust Example
In Rust, the std::sync::RwLock provides a readers–writer lock suitable for concurrent access to shared data across threads. The following example demonstrates multi-threaded read and write operations on a shared Vec<i32>, where multiple reader threads read the length of the vector while a writer thread appends an element exclusively. Rust's ownership model ensures safety through RAII guards (RwLockReadGuard and RwLockWriteGuard), which automatically release the lock upon dropping, and error handling via Result for poisoned locks.[25]
use std::sync::{Arc, RwLock};
use std::thread;
fn main() {
let data = Arc::new(RwLock::new(vec![1, 2, 3]));
// Writer thread: exclusive write access
let data_clone = Arc::clone(&data);
let writer = thread::spawn(move || {
let mut guard = data_clone.write().expect("Failed to acquire write lock");
guard.push(4);
println!("Writer: {:?}", *guard); // Output: [1, 2, 3, 4]
});
// Multiple reader threads: shared read access
let handles: Vec<_> = (0..2).map(|i| {
let data_clone = Arc::clone(&data);
thread::spawn(move || {
let guard = data_clone.read().expect("Failed to acquire read lock");
println!("Reader {}: len = {}", i, guard.len());
})
}).collect();
writer.join().unwrap();
for handle in handles {
handle.join().unwrap();
}
}
The example uses Arc for thread-safe shared ownership of the RwLock, with read() allowing concurrent access and write() ensuring exclusivity; panics are avoided via expect() for demonstration, though production code might propagate errors.[25]
Java Example
Java's java.util.concurrent.locks.ReentrantReadWriteLock supports reentrant read and write locks for fine-grained control in multi-threaded environments. The example below simulates a simple cache using a HashMap<String, String>, where reader threads query keys and a writer thread updates entries. Locks are acquired via readLock().lock() and writeLock().lock(), with manual unlock() in finally blocks for release, and the reentrant nature allows nested locking by the same thread.[20]
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;
public class CacheExample {
private final Map<String, String> cache = new HashMap<>();
private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
private final ReentrantReadWriteLock.ReadLock readLock = rwl.readLock();
private final ReentrantReadWriteLock.WriteLock writeLock = rwl.writeLock();
public String get(String key) {
readLock.lock();
try {
return cache.get(key);
} finally {
readLock.unlock();
}
}
public void put(String key, String value) {
writeLock.lock();
try {
cache.put(key, value);
} finally {
writeLock.unlock();
}
}
public static void main(String[] args) {
CacheExample cache = new CacheExample();
cache.put("key1", "value1"); // Writer operation
// Simulate reader threads (in practice, use ExecutorService)
Runnable reader = () -> {
String value = cache.get("key1");
System.out.println("Reader: " + value); // Output: value1
};
Thread reader1 = new Thread(reader);
Thread reader2 = new Thread(reader);
reader1.start();
reader2.start();
try {
reader1.join();
reader2.join();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
}
Locks are released in finally blocks to guarantee cleanup on all code paths, and InterruptedException is managed for thread interruption.[20]
C++ Example
C++17 introduces std::shared_mutex in <shared_mutex>, enabling shared (reader) and exclusive (writer) locking for concurrent data access. This example spawns multiple threads to access a shared std::map<int, int>, with reader threads querying values and a writer updating the map. std::shared_lock handles read access (RAII-based release), while std::unique_lock manages writes; exception safety is ensured by automatic unlocking on scope exit.[31]
#include <shared_mutex>
#include <map>
#include <thread>
#include <vector>
#include <iostream>
class ThreadSafeMap {
private:
std::map<int, int> data;
mutable std::shared_mutex mutex;
public:
int get(int key) const {
std::shared_lock<std::shared_mutex> lock(mutex);
auto it = data.find(key);
return (it != data.end()) ? it->second : -1;
}
void put(int key, int value) {
std::unique_lock<std::shared_mutex> lock(mutex);
data[key] = value;
}
};
int main() {
ThreadSafeMap map;
map.put(1, 42); // Writer operation
// Simulate thread pool with reader threads
std::vector<std::thread> threads;
for (int i = 0; i < 2; ++i) {
threads.emplace_back([&map, i]() {
int value = map.get(1);
std::cout << "Reader " << i << ": " << value << std::endl; // Output: 42
});
}
for (auto& t : threads) {
t.join();
}
return 0;
}
The mutable keyword allows const methods to lock for reads, aligning with C++'s const-correctness; no explicit error handling is needed beyond potential exceptions from mutex operations, which propagate naturally.[31]
