Green thread
from Wikipedia

In computer programming, a green thread is a thread that is scheduled by a runtime library or virtual machine (VM) instead of natively by the underlying operating system (OS). Green threads emulate multithreaded environments without relying on any native OS abilities, and they are managed in user space instead of kernel space, enabling them to work in environments that do not have native thread support.[1]

Etymology


Green threads refers to the name of the original thread library for the Java programming language. The library was released in version 1.1 and abandoned in favor of native threads in version 1.3. It was designed by the Green Team at Sun Microsystems.[2]

History


Green threads were briefly available in Java between 1997 and 2000.

Green threads share a single operating system thread through co-operative concurrency and therefore cannot achieve the parallelism performance gains of operating system threads. The main benefit of coroutines and green threads is ease of implementation.

Performance


On a multi-core processor, native thread implementations can automatically assign work to multiple processors, whereas green thread implementations normally cannot.[1][3] Green threads can be started much faster on some VMs. On uniprocessor computers, however, the most efficient model has not yet been clearly determined.

Benchmarks on computers running the Linux kernel version 2.2 (released in 1999) have shown that:[4]

  • green threads significantly outperform Linux native threads on thread activation and synchronization;
  • Linux native threads have slightly better performance on I/O and context-switching operations.

When a green thread executes a blocking system call, not only is that thread blocked, but all of the threads within the process are blocked.[5] To avoid that problem, green threads must use non-blocking I/O or asynchronous I/O operations, although the increased complexity on the user side can be reduced if the virtual machine implementing the green threads spawns specific I/O processes (hidden to the user) for each I/O operation.[citation needed]
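The sketch below illustrates this workaround in Java under simplifying assumptions: a single scheduler thread runs cooperative tasks from a queue, and a blocking call is handed to a hidden helper executor while the issuing task keeps yielding until the result is ready. The Step interface, the task bodies, and the sleep that stands in for a blocking system call are all illustrative, not part of any actual green-thread runtime.

import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class NonBlockingIoSketch {
    // Each cooperative task runs one short step and reports whether it wants to run again.
    interface Step { boolean runOnce(); }

    public static void main(String[] args) {
        // Helper threads hidden from the tasks; they absorb blocking calls.
        ExecutorService hiddenIoWorkers = Executors.newCachedThreadPool();
        Queue<Step> ready = new ArrayDeque<>();

        // "I/O" task: offload the blocking call, then poll the future on each step.
        Future<String> io = hiddenIoWorkers.submit(() -> {
            Thread.sleep(200);             // stands in for a blocking system call
            return "data from disk";
        });
        ready.add(() -> {
            if (!io.isDone()) return true; // not ready yet: yield and retry later
            try {
                System.out.println("I/O task got: " + io.get());
            } catch (Exception e) {
                e.printStackTrace();
            }
            return false;                  // finished
        });

        // CPU task: keeps making progress while the I/O task is waiting.
        int[] counter = {0};
        ready.add(() -> {
            System.out.println("CPU task step " + (++counter[0]));
            return counter[0] < 5;
        });

        // Single "carrier" loop: round-robin over the ready tasks; it never blocks.
        while (!ready.isEmpty()) {
            Step task = ready.poll();
            if (task.runOnce()) ready.add(task);
            Thread.onSpinWait();           // hint to avoid burning CPU while polling
        }
        hiddenIoWorkers.shutdown();
    }
}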

There are also mechanisms which allow use of native threads and reduce the overhead of thread activation and synchronization:

  • Thread pools reduce the cost of spawning a new thread by reusing a limited number of threads,[6] as sketched in the example after this list.
  • Languages which use virtual machines and native threads can use escape analysis to avoid synchronizing blocks of code when unneeded.[7]
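For the first point above, a minimal Java sketch of the thread-pool idea (the pool size and task count are arbitrary): a small, fixed set of worker threads is reused for many short tasks instead of spawning one thread per task.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ThreadPoolSketch {
    public static void main(String[] args) throws InterruptedException {
        // Four reusable worker threads service one hundred tasks,
        // avoiding the cost of one hundred thread creations.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 100; i++) {
            final int taskId = i;
            pool.submit(() -> System.out.println(
                    Thread.currentThread().getName() + " ran task " + taskId));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}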

Green threads in the Java Virtual Machine


In Java 1.1, green threads were the only threading model used by the Java virtual machine (JVM),[8] at least on Solaris. As green threads have some limitations compared to native threads, subsequent Java versions dropped them in favor of native threads.[9][10]

An exception to this is the Squawk virtual machine, which is a mixture between an operating system for low-power devices and a Java virtual machine. It uses green threads to minimize the use of native code, and to support migrating its isolates.

Kilim[11][12] and Quasar[13][14] are open-source projects which implement green threads on later versions of the JVM by modifying the Java bytecode produced by the Java compiler (Quasar also supports Kotlin and Clojure).

Green threads in other languages


There are some other programming languages that implement equivalents of green threads instead of native threads. Examples:

The Erlang virtual machine has what might be called green processes – they are like operating system processes (they do not share state like threads do) but are implemented within the Erlang Run Time System (erts). These are sometimes termed green threads, but have significant differences[clarification needed] from standard green threads.[citation needed]

In the case of GHC Haskell, a context switch occurs at the first allocation after a configurable timeout. GHC threads are also potentially run on one or more OS threads during their lifetime (there is a many-to-many relationship between GHC threads and OS threads), allowing for parallelism on symmetric multiprocessing machines, while not creating more costly OS threads than needed to run on the available number of cores.[citation needed]

Most Smalltalk virtual machines do not count evaluation steps; however, the VM can still preempt the executing thread on external signals (such as expiring timers, or I/O becoming available). Usually round-robin scheduling is used so that a high-priority process that wakes up regularly will effectively implement time-sharing preemption:

 [
    [(Delay forMilliseconds: 50) wait] repeat
 ] forkAt: Processor highIOPriority

Other implementations, e.g., QKS Smalltalk, are always time-sharing. Unlike most green thread implementations, QKS also supports preventing priority inversion.

Differences to virtual threads in the Java Virtual Machine


Virtual threads were introduced as a preview feature in Java 19[28] and stabilized in Java 21.[29] Important differences between virtual threads and green threads are:

  • Virtual threads coexist with existing (non-virtual) platform threads and thread pools.
  • Virtual threads protect their abstraction:
    • Unlike with green threads, sleeping on a virtual thread does not block the underlying carrier thread.
    • Working with thread-local variables is deemphasized, and scoped values are suggested as a more lightweight replacement.[30]
  • Virtual threads can be cheaply suspended and resumed, making use of JVM support for the special jdk.internal.vm.Continuation class.
  • Virtual threads handle blocking calls by transparently unmounting from the carrier thread where possible, otherwise compensating by increasing the number of platform threads.
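For illustration, a virtual thread can be created through the familiar Thread API in Java 21. This is a minimal sketch; the sleep merely simulates a blocking call, during which the JVM unmounts the virtual thread from its carrier.

public class VirtualThreadSketch {
    public static void main(String[] args) throws InterruptedException {
        // Starts a virtual thread; the JVM mounts it on a carrier thread
        // and unmounts it while it sleeps, freeing the carrier for other work.
        Thread vt = Thread.startVirtualThread(() -> {
            try {
                Thread.sleep(100);   // blocking call: the carrier is not blocked
                System.out.println("ran on " + Thread.currentThread());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        vt.join();
    }
}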

from Grokipedia
Green threads are lightweight, user-level threads managed and scheduled entirely by an application runtime or virtual machine (VM) rather than by the operating system kernel, typically following a many-to-one threading model in which multiple green threads are multiplexed onto a single kernel thread. The approach originated in early versions of the Java Virtual Machine (JVM), such as JDK 1.1 and JVMs on Solaris prior to Solaris 2.6, where the runtime employed a user-level threads library known as green threads to handle concurrency without direct kernel involvement. The model allows for rapid thread creation and efficient context switching at the user level, avoiding the overhead of system calls, which makes green threads particularly advantageous for scenarios demanding high concurrency with numerous tasks, such as early Java applications or user-level threading libraries like GNU Portable Threads. Despite these benefits, green threads have notable drawbacks, including limited scalability on multi-core processors, since all threads run sequentially on one kernel thread and true parallelism is prevented, and a blocking issue in which a single thread's blocking call can halt the entire process, necessitating dual-level scheduling between user and kernel spaces. Over time, many systems, including later Java implementations, shifted toward native or hybrid threading models (such as one-to-one or many-to-many) to address these limitations and better leverage modern hardware. The concept of green threads continues to influence designs in languages and runtimes seeking efficient concurrency, though often evolved into more advanced forms such as virtual threads.

Fundamentals

Definition

Green threads are lightweight threads implemented at the user level, where scheduling and management are handled entirely by a programming language's runtime or virtual machine (VM) rather than by the operating system's kernel scheduler. Unlike kernel-level threads, which are directly visible to and scheduled by the OS, green threads operate within the application's address space, allowing the runtime to create, manage, and terminate them without invoking kernel services for each operation. A key distinction from kernel-level threading models is that green threads typically follow a many-to-one mapping, in which multiple green threads are multiplexed onto one or a few underlying OS threads, facilitating concurrency within a single process. This approach enables the runtime to handle concurrency without the overhead of creating numerous heavyweight OS threads, though it means that a blocking operation in one green thread can stall the entire process if not carefully managed.

In their basic operational model, green threads rely on user-space mechanisms for context switching, which involves saving and restoring thread state, such as registers and program counters, without kernel intervention, resulting in lower latency than OS-level switches. Stack allocation for each green thread is managed by the runtime, often using fixed-size or segmented stacks to support efficient creation and switching. Thread states, including ready, running, and blocked, are maintained in runtime data structures to enable scheduling, and threads explicitly yield control to allow others to run.

Green threads are particularly useful for simplifying concurrent programming in environments constrained to single OS threads, such as enabling task parallelism for I/O-bound operations or event-driven applications without the complexity of OS thread management. For instance, they support high-level abstractions for handling multiple concurrent tasks efficiently in resource-limited settings.
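As a rough illustration of this operational model (a sketch of the general idea, not any particular runtime's implementation), the following Java example multiplexes several cooperative "green" tasks onto the single OS thread running main(), tracking each task's state and dispatching from a ready queue entirely in user space. The GreenTask class, its states, and the three-step task bodies are all hypothetical.

import java.util.ArrayDeque;
import java.util.Queue;

public class GreenSchedulerSketch {
    enum State { READY, RUNNING, TERMINATED }

    static class GreenTask {
        final String name;
        int step = 0;               // stands in for saved registers / program counter
        State state = State.READY;
        GreenTask(String name) { this.name = name; }

        // Run until the next yield point: here, one step per dispatch.
        void runUntilYield() {
            state = State.RUNNING;
            System.out.println(name + " running step " + step);
            step++;
            state = (step < 3) ? State.READY : State.TERMINATED;
        }
    }

    public static void main(String[] args) {
        Queue<GreenTask> readyQueue = new ArrayDeque<>();
        readyQueue.add(new GreenTask("task-A"));
        readyQueue.add(new GreenTask("task-B"));

        // User-space round-robin scheduler: no additional OS thread is ever created.
        while (!readyQueue.isEmpty()) {
            GreenTask task = readyQueue.poll();
            task.runUntilYield();
            if (task.state == State.READY) readyQueue.add(task);
        }
    }
}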

Characteristics

Green threads use cooperative scheduling, in which threads voluntarily relinquish control to the runtime scheduler rather than being preempted by the operating system. This mechanism relies on the runtime maintaining ready queues to manage thread execution, often incorporating priority levels to determine dispatch order, though threads of equal priority may not undergo time-slicing without explicit yields. The scheduler typically operates in user space, avoiding kernel intervention for switches, which enhances responsiveness in cooperative environments but requires developers to insert yield calls at appropriate points to prevent individual threads from monopolizing the processor.

A key attribute of green threads is their platform independence, achieved by abstracting away operating-system-specific threading primitives. The runtime handles all thread management internally, allowing application code written for green threads to execute unchanged across diverse operating systems without modification for native APIs. This portability was particularly valuable in early environments where OS thread support varied widely, enabling consistent behavior in virtual machines such as early JVM implementations.

Green threads exhibit high efficiency due to their user-space implementation, incurring minimal overhead for creation and switching compared to kernel-managed threads. Thread operations occur without system calls, reducing latency, while each thread maintains a compact execution context, often limited to a small stack (on the order of kilobytes to megabytes) plus essential registers and program counters. This design supports spawning thousands of threads affordably, which is ideal for scenarios with high concurrency demands but limited resources.

Inherent limitations of green threads stem from their user-level nature, including an inability to natively utilize multiple CPU cores, as all threads are multiplexed onto a single or limited set of underlying OS threads without built-in parallelism support. Additionally, blocking operations, such as I/O calls within a green thread, can suspend the entire runtime scheduler, halting progress for all threads until the block resolves.

The thread lifecycle in green threads is fully orchestrated by the runtime, beginning with creation through allocation of a thread control block, stack initialization, and insertion into the ready queue. Suspension occurs via explicit yield invocations, transferring control back to the scheduler for potential resumption of another thread. Resumption involves the scheduler selecting the thread from the queue and restoring its context to continue execution. Termination entails removing the thread from active structures, deallocating its resources, and optionally notifying dependent components, all handled without OS involvement.

Historical Development

Etymology

The term "green threads" was coined in the early 1990s by the Green Team, a group of engineers at Sun Microsystems responsible for developing the original Java programming language (initially codenamed Oak) and its associated threading facilities. The name directly derives from this team's internal designation, part of Sun's practice of assigning colors to project groups, such as the Green Team formed in 1991 under James Gosling to explore platform-independent software for consumer electronics. The first documented usage of "green threads" appears in ' technical documentation for the (JDK) 1.1, released in February 1997, where it described the default threading implementation in the (JVM). In this context, green threads represented a user-mode threading library managed entirely by the JVM, providing portability across operating systems without relying on native kernel support. Early references tied the term explicitly to Java's design goals for lightweight, in a environment. The "" moniker evokes the simplicity and efficiency of user-space thread management, positioning these threads as a lighter alternative to kernel-managed native threads, much like how "" connotes in broader contexts. This linguistic choice aligned with Sun's emphasis on accessible, low-overhead concurrency for developers. Over the subsequent decades, the term evolved from a Java-specific label to a widely adopted standard in literature for describing user-level or runtime-managed threads in general. It became commonplace in discussions of concurrency models, as seen in university curricula and technical analyses that reference green threads as a pioneering approach to scalable, virtualized execution. For instance, lecture materials from note: "Green Threads were so named because they were created by the Green team at ," highlighting its transition to a generic descriptor for non-native threading paradigms.

Evolution

The concept of green threads, or user-level threads scheduled by application runtimes rather than the operating system kernel, traces its roots to 1980s research on efficient concurrency mechanisms in multiprocessor systems. The Mach microkernel project, initiated at Carnegie Mellon University in 1985, pioneered the separation of processes and threads as distinct abstractions, providing foundational support for user-space implementations of lightweight threading through its message-passing model and kernel facilities for threads. Early implementations during this era, such as those on Lisp machines, employed cooperative multitasking and user-level scheduling for concurrent execution, emphasizing efficiency in garbage-collected environments without relying on kernel intervention. These developments addressed the overhead of kernel-level processes and laid the groundwork for runtime-managed concurrency.

In the 1990s, green threads gained prominence through their adoption in production languages, particularly Java 1.0, released by Sun Microsystems in 1996. Sun chose user-space thread management to ensure platform portability, especially for applets and network-intensive applications running on diverse operating systems, such as early Windows versions, that lacked robust native threading support. This rationale allowed the Java Virtual Machine (JVM) to abstract threading details, enabling consistent behavior across environments without deep OS dependencies. A seminal contribution came from the 1991 SOSP paper "First-Class User-Level Threads" by Brian D. Marsh, Michael L. Scott, Thomas J. LeBlanc, and Evangelos P. Markatos, which proposed kernel mechanisms such as software interrupts and scheduler interfaces to integrate user-level threads seamlessly, demonstrating performance gains of 35-39% over kernel processes in benchmarks on the BBN Butterfly multiprocessor.

By the early 2000s, limitations of the model led to a decline in green threads' dominance, exemplified by changes in Java. The model struggled with blocking system calls that could halt the entire JVM on multiprocessor systems, as green threads typically mapped many user threads to few or single kernel threads, preventing effective parallelization. In response, Sun made native OS threads the default in JDK 1.2 (1998) and fully deprecated green threads in JDK 1.3 (2000), favoring kernel-managed threads for better multicore utilization.

Post-2010 revivals adapted green thread principles to modern multicore architectures. The Go programming language, announced in 2009, introduced goroutines as lightweight, runtime-scheduled units with M:N multiplexing onto OS threads, drawing from user-level threading to enable millions of concurrent tasks efficiently. Similarly, early versions of Rust (pre-1.0, circa 2010-2014) defaulted to a green threading runtime for the standard library, but the project deprecated it via RFC 230 in 2014 to eliminate runtime overhead and prioritize zero-cost abstractions, shifting toward native threads and async models. Key figures such as James Gosling, who led Sun's Green Project that birthed Java, influenced these evolutions by embedding user-level concurrency into language design for scalable, portable programming.

Implementations

In the Java Virtual Machine

Green threads were initially implemented as the default threading model in the Java Development Kit (JDK) versions 1.0 (released in 1996) and 1.1 (released in 1997), where the Java Virtual Machine (JVM) managed multiple user-level threads multiplexed onto a single operating system (OS) thread responsible for scheduling. Green threads were the default and only threading model in JDK 1.0 and 1.1 across platforms, but support varied after JDK 1.2, with removal on Linux in JDK 1.3 while lingering optionally on Solaris until later versions. This many-to-one mapping allowed the JVM to simulate concurrency without relying on OS-level threading support, which was immature on some platforms at the time.

The architecture of green threads in the JVM centered on a runtime thread scheduler integrated into the java.lang.Thread class and supporting libraries, which handled context switching, priority-based scheduling, and state management entirely in user space. Monitor locks and synchronization primitives, such as those in java.lang.Object, were adapted to this model by implementing wait/notify mechanisms and mutual exclusion within the JVM, avoiding kernel-level synchronization calls to maintain portability and reduce overhead. However, this user-space approach meant that blocking operations, such as I/O, could halt all green threads, since they shared the underlying OS thread.

In JDK 1.2 (released in 1998), green threads were replaced as the default by native threads, which mapped directly to OS threads via platform-specific libraries such as POSIX threads on Unix or Win32 threads on Windows, primarily to improve performance, enable true parallelism on multi-CPU systems, and handle blocking operations without affecting other threads. The transition addressed key limitations of green threads, including poor scalability on multi-processor hardware and complications with native code integration. Support for green threads lingered as an optional mode on certain platforms until it was deprecated and removed in subsequent releases, such as JDK 1.3 for Linux implementations.

Historically, green threads could be explicitly enabled at runtime using command-line flags, such as java -green MyClass on supported JVMs, overriding the default native mode where available. Thread creation and management followed the standard java.lang.Thread API, unchanged from native threads; for example:

 Thread greenThread = new Thread(() -> {
     System.out.println("Running in green thread");
 });
 greenThread.start();
 greenThread.join();

This API simplicity masked the underlying user-space implementation. By Java 21 (released in 2023), the original green threads implementation is fully obsolete and unsupported in modern JVMs, having been supplanted by native threads since the late 1990s, though they retain archival value for analyzing legacy systems and informing the design of contemporary features like virtual threads in Project Loom.

In Other Languages

In the Go programming language, goroutines serve as lightweight, M:N green threads managed by the Go runtime scheduler, enabling efficient multiplexing of thousands of concurrent tasks onto a smaller number of OS threads. Introduced with the language's public release in November 2009, goroutines are launched with the go keyword and facilitate concurrency through channels for safe, synchronous communication between them, avoiding the shared-memory issues common in traditional threading models.

Erlang's lightweight processes, a form of green threads, have been integral to the language since its development at Ericsson in the mid-1980s, predating modern concurrency paradigms. These processes are scheduled and managed entirely by the BEAM virtual machine, which implements the actor model: each process operates in isolation and communicates solely via asynchronous message passing, enhancing fault tolerance and distribution. This design inherently supports fault tolerance through mechanisms such as supervision trees in the OTP framework, allowing processes to fail and restart without impacting the system as a whole.

Ruby introduced fibers in version 1.9 as delimited continuations that function as one-shot coroutines for implementing cooperative multitasking. Unlike full threads, fibers are cooperatively scheduled in user space and resume only at explicit yield or transfer points, making them suitable for non-blocking I/O operations such as those in web servers or event loops.

Other languages have adopted similar green thread constructs for user-space concurrency. Lua's coroutines, added in version 5.0 (released in 2003), provide asymmetric coroutines for cooperative multitasking without OS involvement, allowing scripts to pause and resume execution for tasks such as iterators or event handling. In Python, the greenlet library, first released in 2006 as a spin-off from the Stackless Python project, enables lightweight, stackful coroutines that achieve concurrency within a single OS thread by switching contexts manually, underpinning libraries such as gevent for asynchronous networking.

Performance and Comparisons

Performance Characteristics

Green threads exhibit significantly lower overhead in creation and context switching than native threads, primarily because they are managed entirely in user space without kernel involvement. Thread creation and context switching for green threads occur with notably lower latency, on the order of microseconds, avoiding the kernel traps associated with native threads. These characteristics were evaluated on 1990s architectures such as the MIPS R4400, highlighting the lightweight nature of an approach that avoids costly kernel traps.

The scalability of green threads shines in single-core environments, where they can efficiently manage thousands of concurrent threads without proportional degradation, as demonstrated by stable execution times up to 200 threads in early Java benchmarks on Solaris kernels. On multicore processors, however, performance degrades because of the many-to-one mapping to operating system threads, limiting true parallelism and confining execution to a single core unless hybrid scheduling is employed. This bottleneck was evident in 1990s tests, in which green threads handled high thread counts effectively but failed to leverage multiple cores natively.

In I/O-bound workloads, green threads excel through integration with non-blocking I/O multiplexing and event loops, allowing rapid yielding during waits and resumption upon completion, which minimizes blocking overhead. Recent evaluations of lightweight thread models akin to green threads, conducted post-2020 on modern JVMs, report up to 60% higher throughput (e.g., 8898 requests per second versus 5410 for native equivalents) and 28.8% lower latency (319 ms versus 448 ms) in high-concurrency scenarios such as web servers. These advantages hold particularly in environments with ARM-based hardware, where lightweight threading provides significant overhead reductions under I/O-intensive loads, supporting scalable handling of millions of connections.

Differences from Native Threads

Green threads and native threads differ fundamentally in their management level. Green threads are implemented and managed entirely in user space by the runtime environment, such as the Java Virtual Machine (JVM), without direct involvement from the operating system kernel. In contrast, native threads, also known as kernel threads, are managed by the operating system kernel, where each thread is explicitly created and tracked through system calls, such as pthread_create on POSIX-compliant systems. This kernel-level oversight allows native threads to operate as independent entities visible to the OS, enabling simultaneous kernel access by multiple threads, whereas green threads share a single kernel thread, limiting concurrent OS interactions.

Context switching in green threads occurs entirely within user space, making it faster and less resource-intensive because it avoids kernel traps and their overhead. Native threads, however, require kernel involvement for context switches, which introduces higher latency due to the expense of transitioning between user and kernel modes via system calls. The difference is particularly pronounced in scenarios with frequent thread switches, where green threads' cooperative scheduling, often relying on mechanisms such as yield() or sleep(), enables efficient hand-off without OS intervention.

In terms of portability and abstraction, green threads provide a uniform threading model across platforms because their implementation is abstracted by the runtime, independent of underlying OS differences. Native threads, by relying on OS-specific APIs, exhibit dependencies that can vary significantly; for instance, POSIX threads on Unix-like systems differ from Windows threads in creation, scheduling, and signaling behaviors. This makes green threads more suitable for applications requiring consistent behavior without platform-specific adaptations.

Regarding parallelism, native threads support true concurrent execution on multicore processors, as each thread can be scheduled independently by the kernel across multiple CPU cores. Green threads, confined to a single OS thread, cannot achieve this level of parallelism and are inherently limited to sequential execution from the kernel's perspective, even if the runtime multiplexes multiple green threads.

Synchronization mechanisms also diverge between the two models. Green threads typically employ runtime-provided monitors, such as those behind Java's synchronized blocks, which are handled in user space for simplicity and efficiency. Native threads, on the other hand, utilize kernel-enforced primitives such as mutexes and condition variables (e.g., pthread_mutex_lock and pthread_cond_wait), offering robust but more overhead-intensive coordination that integrates with OS-wide resources. The user-space approach in green threads can simplify development but may introduce limitations in scenarios requiring OS-level coordination.
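As a small illustration of the runtime-provided monitor mentioned above (a generic Java example, not specific to any green-thread implementation), the synchronized blocks below acquire the monitor associated with the lock object; the runtime arbitrates ownership, and no explicit OS mutex handle such as a pthread_mutex_t appears in the code.

public class MonitorSketch {
    private final Object lock = new Object();
    private int counter = 0;

    // The monitor on 'lock' is provided by the runtime; under a user-level
    // threading model it can be managed entirely in user space.
    public void increment() {
        synchronized (lock) {
            counter++;
        }
    }

    public int value() {
        synchronized (lock) {
            return counter;
        }
    }
}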

Differences from Virtual Threads

Green threads, as implemented in early versions of the Java Virtual Machine (JVM), employed a cooperative scheduling model managed entirely within the runtime, where thread switching occurred only at explicit yield points or during blocking operations, limiting execution to a single operating system (OS) thread and preventing true parallelism across multiple cores. In contrast, virtual threads, introduced via Project Loom and finalized in Java 21 in September 2023, use a carrier-thread model that maps multiple virtual threads onto a pool of platform threads (OS threads), enabling the JVM to integrate with OS scheduling for parallelism while maintaining user-mode efficiency during I/O-bound tasks. This M:N mapping allows virtual threads to unpark and resume on different carrier threads, addressing the single-threaded bottleneck inherent in green threads.

A key scalability limitation of green threads was their vulnerability to blocking operations, such as synchronous I/O calls, which could halt the entire runtime since all green threads shared a single OS thread, often leading to poor scalability in concurrent applications with even modest thread counts. Virtual threads overcome this through continuation-based mechanisms: when a virtual thread blocks, it is normally unmounted from its carrier so the carrier can execute other virtual threads, supporting the creation and management of millions of threads with minimal memory overhead, typically under 1 KB per thread compared to the larger stacks of green threads. This design facilitates high-throughput scenarios, such as web servers handling thousands of concurrent requests, without the global blocking issues that plagued green threads.

In terms of implementation, pre-Loom green threads formed a deprecated, monolithic component of the JVM, tightly coupled to the runtime's thread management and removed by JDK 1.3 owing to scalability shortcomings and the maturation of OS threading support. Virtual threads, however, represent a modern evolution, leveraging internal JVM support for continuations (the jdk.internal.vm.Continuation class) for efficient suspension and resumption, scoped values (JEP 429) for thread-local-style data isolation, and structured concurrency (JEPs 428 and 437) to manage thread hierarchies and prevent leaks, all introduced alongside the virtual threads finalized in Java 21 (scoped values and structured concurrency initially as preview features). These enhancements make virtual threads a lightweight, composable alternative that revives the user-space threading philosophy of green threads with robust support for contemporary multicore systems.

Virtual threads preserve compatibility with the existing Thread API, allowing developers to use familiar constructs like Thread.start() while automatically benefiting from the new model, unlike green threads, which required specific JVM flags and lacked seamless integration with modern libraries. For migration, applications previously limited by green threads' constraints, such as those using blocking I/O, can transition by adopting virtual threads through the standard Thread API or per-task executors (the scheduler's parallelism can be tuned with the jdk.virtualThreadScheduler.parallelism system property), often yielding immediate scalability gains without code rewrites, as demonstrated in several server frameworks. Post-2023 adoption of virtual threads has accelerated, with Java 21 reaching approximately 1.4% usage among production JVMs as of early 2024 according to industry telemetry; by mid-2025, adoption estimates had risen to around 45%, reflecting faster uptake driven by these concurrency improvements. Benchmarks in server applications show virtual threads delivering higher throughput and reduced latency in many I/O-intensive workloads compared to traditional platform threads, though results vary by workload and can include challenges such as thread pinning in certain scenarios.
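A sketch of the kind of migration described above, under the assumption that the application previously used a bounded pool of platform threads: the pool is swapped for a virtual-thread-per-task executor (a Java 21 API), while the task body, here a sleep standing in for blocking I/O, is purely illustrative.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MigrationSketch {
    public static void main(String[] args) {
        // Before: a bounded pool of platform (OS) threads.
        // ExecutorService pool = Executors.newFixedThreadPool(200);

        // After: one cheap virtual thread per task; blocking I/O in a task
        // unmounts it from its carrier instead of tying up an OS thread.
        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                final int requestId = i;
                pool.submit(() -> handleRequest(requestId));
            }
        } // close() waits for submitted tasks to finish

        System.out.println("all requests handled");
    }

    static void handleRequest(int id) {
        try {
            Thread.sleep(50);   // stands in for blocking I/O
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}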
